A fast content pipeline is crucial for today’s engines, allowing the user to quickly iterate on features, assets, etc. Molecule’s content pipeline enables hot-reloading of every asset the engine understands (textures, models, shaders, …), while retaining blazingly fast loading times.
Having worked with a rather tedious workflow in the past (quit the engine, compile the asset, start the engine again), and having suffered long loading times during development because intermediate data formats had to be parsed, I wanted Molecule to support two major features:
- No loading of non-binary data. The engine should only load and understand binary data compiled for the platform it’s currently running on.
- Hot-reloading of every asset the engine understands – be it textures, models, shaders, scripts, or even configuration files.
The benefits of such an approach are manifold:
- If the engine only ever reads binary data, no intermediate data formats (.jpg/.tga/.png, .fbx/.obj) have to be parsed, which greatly reduces loading times for the whole team. I’ve seen many hours of precious development time spent on converting assets locally, or on loading intermediate formats like .tga and .xml, which additionally cause bad memory behaviour (e.g. temporary allocations). Even if it’s just 5 minutes per day per team member, it adds up over a 6-month development cycle.
- Loading only binary data leads to much cleaner, shorter, and faster code paths.
- Checking for bad assets is done at asset compile time, not at run-time, again leading to cleaner code.
- Hot-reloading obviously enables people to work and iterate much faster. Once you’re used to that feature, going back to the old quit/compile/restart workflow certainly feels like living in the stone age.
The way the content pipeline in Molecule works is quite straightforward:
- Every asset to be added to a project gets assigned an option preset which tells the content pipeline how that asset should be processed. An option preset is nothing more than a simple list of key-value pairs parsed by the corresponding stage of the content pipeline (e.g. an option preset for textures has vastly different values compared to a preset for converting meshes). You can create as many presets as you want.
- Each time a file is added/removed/changed, the content pipeline verifies whether that file belongs to the project and has an option preset assigned to it. If so, it processes the file via the various stages of the pipeline, and writes a compiled binary file only if the conversion succeeds. The latter is crucial: faulty assets can no longer break the build, because no binary file is ever written for them.
- Of course, the above works for option presets, too. If a preset is changed, all assets affected by the change are recompiled automatically. For example, if any option of the preset for converting diffuse texture maps is changed, all affected textures will automatically be recompiled and reloaded by the engine. You can literally watch the new assets getting streamed in.
- Whenever a new file is available for the engine to load, the content pipeline sends a notification via TCP/IP to the running engine. This allows assets to be hot-reloaded on consoles, too. It also works for files coming from external sources, e.g. Perforce/CVS/SVN – you can sync from Perforce, and the engine will automatically pick up and reload new files (if you want it to); a minimal sketch of such a notification is shown below.
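To make the last point concrete, here is a minimal sketch of what such a file-change notification could look like, using plain POSIX sockets. The wire format (a length-prefixed asset path), the port, and the function name NotifyEngine are all assumptions made for illustration – this is not Molecule’s actual code.

// sends "4-byte big-endian length + asset path" to the running engine
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdint>
#include <string>

static bool NotifyEngine(const char* engineIp, unsigned short port, const std::string& assetPath)
{
    const int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return false;

    sockaddr_in address = {};
    address.sin_family = AF_INET;
    address.sin_port = htons(port);
    inet_pton(AF_INET, engineIp, &address.sin_addr);

    bool success = false;
    if (connect(sock, reinterpret_cast<const sockaddr*>(&address), sizeof(address)) == 0)
    {
        // length prefix first, so the engine knows how many bytes to expect
        const uint32_t length = htonl(static_cast<uint32_t>(assetPath.size()));
        success = (send(sock, &length, sizeof(length), 0) == static_cast<ssize_t>(sizeof(length))) &&
                  (send(sock, assetPath.c_str(), assetPath.size(), 0) == static_cast<ssize_t>(assetPath.size()));
    }

    close(sock);
    return success;
}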
The following would be an example of two different option presets for compiling textures (diffuse maps and light maps):
TexturePresets:
{
    DiffuseMap:
    {
        ScaleX = 0.5
        ScaleY = 0.5
        Degamma = 2.2
        Gamma = 2.0
        Format = DXT
    }
    LightMap:
    {
        ScaleX = 1.0
        ScaleY = 1.0
        Degamma = 2.2
        Gamma = 2.2
        Format = RGBA
    }
}
Most of the files stored by the content pipeline/editor are just simple text files somewhat similar to JSON/other human-readable formats – no fancy XML or anything. As stated above, if you were to change the gamma setting of the diffuse map option preset, all affected textures would automatically get recompiled.
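Because an option preset is nothing more than a list of key-value pairs, parsing such a file is almost trivial. Here is a minimal sketch assuming a format like the one above – the names OptionPreset and ParsePresetBody are made up, and this is not Molecule’s actual parser (braces and preset names are assumed to be handled by an outer parser):

#include <sstream>
#include <string>
#include <utility>
#include <vector>

// an option preset is just key-value pairs stored in an array
using OptionPreset = std::vector<std::pair<std::string, std::string>>;

// parses lines of the form "Key = Value" into an option preset
OptionPreset ParsePresetBody(const std::string& body)
{
    OptionPreset preset;
    std::istringstream stream(body);
    std::string key, equals, value;
    while (stream >> key >> equals >> value)
    {
        if (equals == "=")
            preset.emplace_back(key, value);
    }
    return preset;
}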
In addition, Molecule’s content pipeline offers some handy features:
- Direct loading of .psd Photoshop files. You can save your work in Photoshop, and the content pipeline/engine will automatically reload the asset. No need for intermediate formats like .tga or similar if you don’t need them (a sketch of reading the .psd header follows after this list).
- Direct loading of .mb Maya files. On machines with a Maya license installed (useful for artists on a team), the engine can directly compile assets from Maya binary files. Additionally, assets can be exported/compiled from within Maya/Max/XSI with a simple button-click.
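Regarding the .psd support above: reading the format directly is mostly a matter of following Adobe’s documented layout. As a small, self-contained example, the following reads and validates the fixed 26-byte .psd header (all values are big-endian). The struct and function names are made up for illustration; Molecule’s actual loader is not shown here.

#include <cstdint>
#include <cstdio>

struct PsdHeader
{
    uint16_t channels;
    uint32_t height;
    uint32_t width;
    uint16_t bitsPerChannel;
    uint16_t colorMode;     // e.g. 3 = RGB
};

static uint16_t ReadBigEndian16(FILE* file)
{
    unsigned char bytes[2] = {};
    fread(bytes, 1, 2, file);
    return static_cast<uint16_t>((bytes[0] << 8) | bytes[1]);
}

static uint32_t ReadBigEndian32(FILE* file)
{
    unsigned char bytes[4] = {};
    fread(bytes, 1, 4, file);
    return (static_cast<uint32_t>(bytes[0]) << 24) | (bytes[1] << 16) | (bytes[2] << 8) | bytes[3];
}

static bool ReadPsdHeader(FILE* file, PsdHeader& header)
{
    char signature[4] = {};
    if (fread(signature, 1, 4, file) != 4 ||
        signature[0] != '8' || signature[1] != 'B' || signature[2] != 'P' || signature[3] != 'S')
        return false;                   // not a .psd file

    if (ReadBigEndian16(file) != 1)     // version must be 1
        return false;

    fseek(file, 6, SEEK_CUR);           // skip 6 reserved bytes

    header.channels = ReadBigEndian16(file);
    header.height = ReadBigEndian32(file);
    header.width = ReadBigEndian32(file);
    header.bitsPerChannel = ReadBigEndian16(file);
    header.colorMode = ReadBigEndian16(file);
    return true;
}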
The following video shows some of the above features – the scene itself is the well-known Sponza Atrium (made publicly available by Crytek), textured with a simple texture for testing purposes. The video shows hot-reloading of textures, shaders, and meshes (make sure to enable HD quality):
http://www.youtube.com/watch?v=BQPpZkRk6y4
Having worked with such a workflow during the past few months, I can honestly say that I never want to go back to anything different.
Is it a separate tool that waits for changes to files in a special folder, handles rebuilding those files and their dependencies, and notifies the running engine? And how do you handle dependencies?
Exactly, the editor is a separate tool which handles file changes in the project folder, and does all the work involved. Which dependencies specifically are you asking about? Can you give an example?
After some thought about dependencies, I came to the conclusion that I meant more of a build system for assets – one that recognizes which assets are needed by the game, and which dependent assets are needed as well (textures are needed by models, and so on).
Option presets are fine, but what about per-file specific settings? Is it possible to have them? Say, if I want to use some special scaling or a special file format for an image file. Yes, it’s OK to create an additional option preset just for this particular file, but what if there are *many* files with specific settings, what would you recommend?
Good point.
I was thinking about letting individual files override settings from the option preset, but that feels a lot like inheritance and base class members in C++, which sometimes have their fair share of problems if you’re not careful.
Allowing specific settings for single files (without any option preset) would be straightforward to add code-wise, but it surely needs nice and easy-to-use options in the frontend/editor for quickly applying the same settings to several files, like you mentioned.
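Just to make the idea concrete, a per-file override could simply be a second key-value table layered over the preset’s defaults. This is a hypothetical sketch of that approach, not anything Molecule actually does – the names Settings and MergeSettings are invented:

#include <map>
#include <string>

using Settings = std::map<std::string, std::string>;

// returns the preset's settings with any per-file overrides applied on top
Settings MergeSettings(const Settings& preset, const Settings& perFileOverrides)
{
    Settings merged = preset;
    for (const auto& pair : perFileOverrides)
        merged[pair.first] = pair.second;   // the override wins over the preset default
    return merged;
}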
The problem is that for some projects it is really useful to have a few settings specified per-file, for example a scaling factor. Then again, it’s project-specific, and it’s better to avoid introducing such settings.
Where do you store which option preset is used by an image file? Is it a global or per-folder “database” file? Is it part of the image filename itself? Is it a separate single-line text file in the same folder as the image file?
At the moment it is a global per-project file, which stores the mapping of option presets to assets in the project. However, I might change that to separate files next to the assets themselves to make it behave more nicely with source control on a large team (global file and merging mess vs. single file and locked check-outs).
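As a purely hypothetical illustration of the second variant (this is not Molecule’s actual layout), a sidecar file next to each asset could be as simple as the preset files shown earlier:

sponza_column.psd.settings:
{
    Preset = DiffuseMap
}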
I would be interested to hear what your experiences are.
In a small personal project I use a global per-project file together with separate text files next to the assets. It’s not ideal, especially because I don’t have any front-end tool – I use a text editor. Actually, I don’t have presets; the global file specifies groups of files and the settings that should be used to process them.
Non-personal stuff: for image files, we store the option preset name and per-file settings inside the asset file itself. Not every image file format allows that; we use .TIF. Unfortunately, in this case you definitely need a front-end tool for adding/changing settings. Keeping the settings intact while re-exporting/modifying an image is also not really easy.
Is there any chance to see your implementation? Source code, public headers/interfaces, class diagrams, maybe…
Because the content pipeline is part of the Molecule Engine, I can’t really show you all the source involved, but I would be glad to help where I can. Maybe there are specific things you want to know about in detail? .psd loading, .mb loading, TCP/IP, directory watching?
I’m also trying to come up with an architecture for content build system. Just wanted to take a look at your design, for comparison. Can you please give a short description of your base classes, if there are any, e.g.:
class SourceFile
{
public:
    // virtual destructor so concrete files can be deleted through the base pointer
    virtual ~SourceFile() {}

    virtual const wchar_t* GetPath() const = 0;
    virtual bool Build() = 0;
    virtual void Clean() = 0;
};
@Iurii: Well, there aren’t lots of base classes or a grand design or anything.
An option preset is just a simple list of key-value pairs, stored in an array. Watching modified files is done by a simple DirectoryWatcher class, which uses Delegates/Events to notify interested parties of modified/added/deleted files.
An asset package again is just an array of files, with an additional option preset the corresponding file is to be built with.
In general, everything in Molecule is very flat, with lots of namespaces and free functions, and very few occasions of inheritance.
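To give a rough idea of the watcher mentioned above, here is a minimal sketch of what such a DirectoryWatcher interface could look like. std::function stands in for Molecule’s own Delegate/Event system, and all names are illustrative rather than the engine’s actual interface:

#include <functional>
#include <string>
#include <utility>
#include <vector>

class DirectoryWatcher
{
public:
    enum class Action { Added, Modified, Deleted };
    using Callback = std::function<void(const std::string& path, Action action)>;

    // interested parties (e.g. the asset compiler) register themselves here
    void RegisterListener(Callback callback)
    {
        m_listeners.push_back(std::move(callback));
    }

    // called by the platform-specific watching code (e.g. ReadDirectoryChangesW
    // on Windows) whenever the OS reports a change
    void OnFileChanged(const std::string& path, Action action)
    {
        for (const Callback& listener : m_listeners)
            listener(path, action);
    }

private:
    std::vector<Callback> m_listeners;
};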