At last, I can show what I’ve been working on for the past three months: a real-time radiosity system which dynamically updates lightmaps with bounced, diffuse indirect lighting. Without further ado, here are some screenshots and comparisons with direct-lighting-only approaches to show what a difference indirect lighting makes (click the thumbnails for higher-resolution images, and make sure your browser displays them correctly; Firefox, for example, is unable to cleanly show the darker .pngs, whereas they work perfectly in Chrome):
Diffuse global illumination in Crytek’s Sponza scene. Notice the indirect lighting in the halls, and the bounced skylight in other parts of the scene.
The same scene lit with direct + ambient lighting only, without global illumination.
The same scene from a different viewpoint.
For comparison, direct + ambient lighting only.
The system is able to handle an infinite number of light bounces, and the solution eventually converges towards the correct result. Multi-bounce lighting adds detail to otherwise unlit surfaces, and can make quite a difference:
Multi-bounce lighting, adding more light to otherwise dark areas.
Bounced skylight at night.
The same scene lit with only an ambient term. Notice the lack of shadows and shading.
From a technical point of view, the radiosity system is fed with the direct lighting in the scene and calculates all the bounced lighting into a lightmap. Essentially every kind of light source can be used; environmental lights, sky lights, etc. are best handled by the radiosity system itself.
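To illustrate the underlying idea, here is a minimal sketch of a classic radiosity gathering iteration: each patch’s radiosity is its fed-in direct/emitted light plus its albedo times the form-factor-weighted light gathered from all other patches, and repeating the pass adds one further bounce each time. The names (`Patch`, `Bounce`) are hypothetical; this is not the actual implementation, just the textbook technique the post builds on.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of a radiosity gathering pass. Each patch stores its
// fed-in emission (direct lighting), diffuse albedo, and precomputed form
// factors towards every other patch.
struct Patch
{
    float emission;                  // direct/emitted light fed into the solver
    float albedo;                    // diffuse reflectance in [0, 1)
    std::vector<float> formFactors;  // formFactors[j]: fraction of light received from patch j
};

// One Jacobi-style bounce: gather light from all patches into each patch.
// Repeated passes converge towards the full multi-bounce solution, because
// each additional bounce carries less energy (scaled by the albedo).
std::vector<float> Bounce(const std::vector<Patch>& patches, const std::vector<float>& radiosity)
{
    std::vector<float> next(patches.size());
    for (std::size_t i = 0; i < patches.size(); ++i)
    {
        float gathered = 0.0f;
        for (std::size_t j = 0; j < patches.size(); ++j)
            gathered += patches[i].formFactors[j] * radiosity[j];

        next[i] = patches[i].emission + patches[i].albedo * gathered;
    }
    return next;
}
```

Starting from the emission values, each call to `Bounce` yields the 1-bounce, 2-bounce, … solution; since the albedo is below 1, the series converges geometrically, which is why the system can afford an “infinite” number of bounces.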
Lighting at noon using a directional light and a blue-ish skylight. Note the subtle color bleeding on the pillars above the carpets caused by the radiosity solution.
Direct lighting fed into the radiosity system. Notice the absence of any skylight or ambient term.
Indirect lighting with correct visibility. Notice the indirect shadows behind the carpets.
For comparison, the same shot lit with an ambient term only. Notice the lack of shading and depth.
Sponza lit at sunset.
Diffuse global illumination caused by a point light.
Corresponding direct lighting handed to the radiosity solution.
The radiosity system works on static geometry. Dynamic objects are indirectly lit, but do not contribute indirect shadows or color bleeding to the solution, similar to how Geomerics’ Enlighten works. In order to update all the lighting in real-time, the system needs a precomputation step which only has to be re-done whenever the geometry changes. All the lighting, be it local light sources, environmental lights, area lights, etc., can be changed in real-time, as can surface albedos and emissivity constants – the implementation is not based on PRT.
The radiosity system takes this input and dynamically updates a lightmap which can then simply be added to the direct lighting done via standard shaders. Because indirect lighting is low-frequency in nature, the lightmaps can be very small (128×128 for default quality, 256×256 for high-quality suitable for high-end PCs). Working with lightmaps has the additional benefit that all the lighting information is cacheable, which means that the radiosity update is not bound to the frame rate, and can be done asynchronously with other tasks on a separate thread/CPU.
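Decoupling the radiosity update from rendering can be done with a double-buffered lightmap: a worker thread writes a full update into the back buffer while the renderer keeps sampling the last completed front buffer. The following is a minimal sketch under that assumption; the class and method names are hypothetical and the post does not describe its actual threading code.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical double-buffered lightmap: the radiosity worker writes into the
// back buffer while rendering samples the front buffer, so the update is
// decoupled from the output frame rate.
class LightmapBuffers
{
public:
    explicit LightmapBuffers(std::size_t texelCount)
        : m_front(0u)
    {
        m_buffers[0].resize(texelCount);
        m_buffers[1].resize(texelCount);
    }

    // Called by the radiosity worker: the buffer it may freely write to.
    std::vector<std::uint32_t>& BackBuffer(void)
    {
        return m_buffers[m_front.load(std::memory_order_acquire) ^ 1u];
    }

    // Called by the radiosity worker once a full update has finished:
    // atomically swaps front and back.
    void Publish(void)
    {
        m_front.fetch_xor(1u, std::memory_order_release);
    }

    // Called by the renderer each frame: always a complete, consistent lightmap.
    const std::vector<std::uint32_t>& FrontBuffer(void) const
    {
        return m_buffers[m_front.load(std::memory_order_acquire)];
    }

private:
    std::vector<std::uint32_t> m_buffers[2];
    std::atomic<unsigned int> m_front;
};
```

The renderer never observes a half-written lightmap, which is exactly the property that makes the update safe to run asynchronously on a separate thread/CPU.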
Furthermore, depending on the target framerate, quality level, platform capabilities, etc. the radiosity updates can be staggered over several frames without being bound to the output frame rate. This kind of temporal coherence also means that as long as lighting doesn’t change, there’s no need to calculate anything at all.
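Staggering can be as simple as processing a fixed slice of texels per tick and doing no work at all while the lighting is unchanged. A minimal sketch of that idea, with hypothetical names (the post doesn’t specify how its updates are actually scheduled):

```cpp
#include <cstddef>

// Hypothetical sketch of staggering a lightmap update over several ticks:
// each Tick() processes at most 'texelsPerTick' texels, and nothing is done
// at all while the lighting is unchanged (temporal coherence).
class StaggeredUpdate
{
public:
    StaggeredUpdate(std::size_t texelCount, std::size_t texelsPerTick)
        : m_texelCount(texelCount), m_texelsPerTick(texelsPerTick), m_cursor(0), m_dirty(false)
    {
    }

    void MarkLightingChanged(void) { m_dirty = true; }

    // Processes the next slice of the lightmap; returns the number of texels updated.
    template <typename UpdateTexel>
    std::size_t Tick(UpdateTexel&& updateTexel)
    {
        if (!m_dirty)
            return 0;   // lighting unchanged: no work needed

        const std::size_t remaining = m_texelCount - m_cursor;
        const std::size_t count = (remaining < m_texelsPerTick) ? remaining : m_texelsPerTick;
        for (std::size_t i = m_cursor; i < m_cursor + count; ++i)
            updateTexel(i);

        m_cursor += count;
        if (m_cursor == m_texelCount)
        {
            // full lightmap refreshed: rest until lighting changes again
            m_cursor = 0;
            m_dirty = false;
        }
        return count;
    }

private:
    std::size_t m_texelCount;
    std::size_t m_texelsPerTick;
    std::size_t m_cursor;
    bool m_dirty;
};
```

Tuning `texelsPerTick` against the frame budget is what lets the same system scale from low-end platforms to high-end PCs.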
As stated above, the pipeline consists of both a precomputation part and a run-time part. Precomputation times are fairly low compared to how long artists usually need to wait until their baked lighting is done. On an i7-2600K 4-core SMT machine, precomputing all the information needed in the Sponza scene takes about 5 seconds (!) with single visibility checks and 45 seconds with 16x supersampling (final shipping quality) for a 128×128 target resolution. With high-quality 256×256 lightmaps, precomputation times are 40 seconds without and 5 minutes with supersampling, respectively.
The precomputation part of the pipeline is heavily optimized, vectorized, multi-threaded code. It uses SSE/SSE2/SSE4 instructions and Molecule’s own task scheduler in order to balance the workload across several threads. I do not yet use the GPU for this kind of calculation, but doing so would decrease precomputation times considerably, because they are mostly governed by visibility checks/raycasting.
The memory needed for storing all the precomputed data ranges from about 3 MB to 10 MB, depending on the output resolution and desired quality level. At 128×128 resolution and still-good quality, even directional irradiance (provided by the system as spherical harmonics coefficients) needs only about 4-5 MB. A high-quality 256×256 solution will obviously need more data (~10 MB), but that is intended for high-end PCs/next-gen consoles anyway. The above figures don’t take into account that most of the data can probably be quantized and stored hierarchically, leading to further savings – I haven’t explored that part as much as I would like to yet.
A complete update of a 128×128 lightmap takes between 8 and 13 ms including directional irradiance on a single thread (the main thread), depending on the quality settings. This updates the complete lightmap every frame, and doesn’t make use of any kind of temporal coherence, culling, or similar – remember that lighting only needs to be updated whenever something changes, and updates can be done asynchronously over several frames, so there’s a lot of room for improvement in this area.
Note that this part of the implementation also isn’t optimized yet, and doesn’t use any vectorized code or block-based updates, which would greatly benefit cache performance – at the moment it just pulls in a lot of data which doesn’t fit into the L1/L2 caches entirely. I will be working on that over the next few weeks, posting updates along the way.
I have been working on this for about 3 months now, and I would very much like to turn this into its own product – if this sparked your interest, drop me a line at: office at molecular-matters dot com!