Real-time radiosity

At last, I can show what I’ve been working on for the past three months: a real-time radiosity system which dynamically updates lightmaps with bounced, diffuse indirect lighting. Without further ado, here are some screenshots and comparisons with direct-lighting-only approaches to show what a difference indirect lighting makes (click the thumbnails for higher-resolution images, and make sure your browser displays them correctly; e.g. Firefox is unable to cleanly show the darker .pngs, while they work perfectly in Chrome):

Diffuse global illumination in Crytek’s Sponza scene. Notice the indirect lighting in the halls, and the bounced skylight in other parts of the scene.

 

Ambient lighting only

The same scene lit with direct + ambient lighting only, without global illumination.

 

Diffuse global illumination

The same scene from a different viewpoint.

 

Direct + ambient lighting only

For comparison, direct + ambient lighting only.

 

The system is able to handle an infinite number of light bounces, which eventually converge towards the correct solution. Multi-bounce lighting adds detail to otherwise unlit surfaces, and can make quite a difference:

Single-bounce lighting

Single-bounce lighting.

 

Multi-bounce lighting

Multi-bounce lighting, adding more light to otherwise dark areas.

 

The following screenshots show the same scene lit only by a skylight, which is also handled by the radiosity system, increasing the realism of outdoor scenes compared to just an ambient term.

Night skylight

Bounced skylight at night.

 

Night ambient term

The same scene lit with only an ambient term. Notice the lack of shadows and shading.

 

From a technical point of view, the radiosity system is fed with the direct lighting in the scene, and calculates all the bounced lighting into a lightmap. Basically every kind of light source can be used, with environmental lights, sky lights, etc. being handled best by the radiosity system itself.

Lighting at noon

Lighting at noon using a directional light and a blue-ish skylight. Note the subtle color bleeding on the pillars above the carpets caused by the radiosity solution.

 

Direct lighting

Direct lighting fed into the radiosity system. Notice the absence of any skylight or ambient term.

 

The system correctly handles visibility for multiple bounces, as can be seen in the following shots showing mainly parts of the scene that are indirectly lit:

Indirectly lit areas

Indirect lighting with correct visibility. Notice the indirect shadows behind the carpets.

 

Ambient term

For comparison, the same shot lit with an ambient term only. Notice the lack of shading and depth.

 

Sunset lighting

Sponza lit at sunset.

 

Lastly, the following shots show the scene lit by a point light, with the corresponding input handed to the radiosity system:

Point light

Diffuse global illumination caused by a point light.

 

Point light direct lighting

Corresponding direct lighting handed to the radiosity solution.

 

Pipeline

The radiosity system works for static geometry. Dynamic objects are indirectly lit, but do not cause indirect shadows or color bleeding in the solution, which is similar to how Geomerics’ Enlighten works. In order to update all the lighting in real-time, the system performs a precomputation step which only has to be redone whenever the static geometry changes. All lighting (local light sources, environmental lights, area lights, etc.) can be changed in real-time, as can surface albedos and emissivity constants; the implementation is not based on PRT.
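To make this split concrete, here is a minimal sketch of what such an interface could look like. All names here are hypothetical, not the actual Molecule API:

```cpp
// Hypothetical interface sketch; not the actual Molecule API.
struct StaticGeometry;   // static meshes with lightmap UVs
struct RadiosityData;    // opaque, precomputed visibility/transfer data

// Expensive step: run once, and re-run only when static geometry changes.
RadiosityData* Precompute(const StaticGeometry& geometry);

// Cheap run-time changes: none of these require re-running Precompute().
void SetSurfaceAlbedo(RadiosityData* data, unsigned surface, const float rgb[3]);
void SetSurfaceEmissive(RadiosityData* data, unsigned surface, const float rgb[3]);
```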

The radiosity system takes this input and dynamically updates a lightmap which can then simply be added to the direct lighting done via standard shaders. Because indirect lighting is low-frequency in nature, the lightmaps can be very small (128×128 for default quality, 256×256 for high-quality suitable for high-end PCs). Working with lightmaps has the additional benefit that all the lighting information is cacheable, which means that the radiosity update is not bound to the frame rate, and can be done asynchronously with other tasks on a separate thread/CPU.
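One way to realize this decoupling is to double-buffer the lightmap and let a worker thread publish finished solutions. The following is a minimal sketch under that assumption, not the engine’s actual code:

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

// Two lightmap buffers: the renderer always samples the "read" buffer
// while the radiosity worker writes into the other one.
struct Lightmap
{
    std::vector<std::uint32_t> texels; // e.g. 128x128 RGBA8
};

static Lightmap g_lightmaps[2];
static std::atomic<int> g_readIndex{0};

// Called from the radiosity worker thread whenever lighting has changed.
void UpdateRadiosityAsync()
{
    const int write = 1 - g_readIndex.load(std::memory_order_acquire);
    // ... solve the bounced lighting into g_lightmaps[write] ...
    g_readIndex.store(write, std::memory_order_release); // publish result
}

// Called from the render thread every frame.
const Lightmap& CurrentLightmap()
{
    return g_lightmaps[g_readIndex.load(std::memory_order_acquire)];
}
```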

Furthermore, depending on the target framerate, quality level, platform capabilities, etc. the radiosity updates can be staggered over several frames without being bound to the output frame rate. This kind of temporal coherence also means that as long as lighting doesn’t change, there’s no need to calculate anything at all.
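Sketched in code (an illustrative outline with hypothetical names, not the actual implementation), the amortization could look like this:

```cpp
#include <algorithm>

// Skip all work while nothing changed, and spread a full lightmap update
// over several frames by processing only a few rows per call.
static bool g_lightingDirty = false; // set when a light or albedo changes
static int  g_nextRow       = 0;     // resume point within the lightmap

void UpdateRadiosityStaggered(int rowsPerFrame, int lightmapHeight)
{
    if (!g_lightingDirty)
        return; // temporal coherence: unchanged lighting costs nothing

    const int end = std::min(g_nextRow + rowsPerFrame, lightmapHeight);
    for (int row = g_nextRow; row < end; ++row)
    {
        // ... update the texels of this lightmap row ...
    }
    g_nextRow = end;

    if (g_nextRow == lightmapHeight) // full update finished
    {
        g_nextRow       = 0;
        g_lightingDirty = false;
    }
}
```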

Precomputation times

As stated above, the pipeline consists of both a precomputation part and a run-time part. Precomputation times are fairly low compared to how long artists usually have to wait for baked lighting. On an i7-2600K (4 cores with SMT), precomputing all the information needed for the Sponza scene takes about 5 seconds (!) with single visibility checks, and 45 seconds with 16x supersampling (final shipping quality) for a 128×128 target resolution. With high-quality 256×256 lightmaps, precomputation takes 40 seconds without supersampling and 5 minutes with it.

The precomputation part of the pipeline is heavily optimized, vectorized, multi-threaded code. It uses SSE/SSE2/SSE4 instructions and Molecule’s own task scheduler to balance the workload across several threads. I don’t use the GPU for these calculations yet, but doing so would further decrease the precomputation time a lot, because it is mostly governed by visibility checks/raycasting.
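As a rough illustration of how such a workload parallelizes, here is a minimal sketch that uses std::async as a stand-in for Molecule’s task scheduler; TraceVisibility is a hypothetical stub:

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

struct Ray { float origin[3]; float direction[3]; };

// Hypothetical stand-in for the actual raycast against the static scene.
static bool TraceVisibility(const Ray& /*ray*/)
{
    return true; // a real implementation would intersect the scene here
}

// Visibility checks are independent of each other, so they can be split
// into chunks and distributed across worker threads.
std::vector<char> TraceAll(const std::vector<Ray>& rays, unsigned numTasks)
{
    std::vector<char> visible(rays.size());
    std::vector<std::future<void>> tasks;
    const std::size_t chunk = (rays.size() + numTasks - 1) / numTasks;

    for (unsigned t = 0; t < numTasks; ++t)
    {
        const std::size_t begin = t * chunk;
        const std::size_t end   = std::min(begin + chunk, rays.size());
        if (begin >= end)
            break;
        tasks.push_back(std::async(std::launch::async, [&, begin, end]
        {
            for (std::size_t i = begin; i < end; ++i)
                visible[i] = TraceVisibility(rays[i]);
        }));
    }
    for (auto& task : tasks)
        task.wait();
    return visible;
}
```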

Memory requirements

The memory needed for storing all the precomputed data ranges from about 3 MB to 10 MB, depending on the output resolution and desired quality level. At 128×128 lightmap resolution and still good quality, even directional irradiance (which the system provides as spherical harmonics coefficients) needs only about 4-5 MB. A high-quality 256×256 solution will obviously need more data (~10 MB), but that’s intended for high-end PCs/next-gen consoles anyway. The above figures don’t take into account that most of the data can probably be quantized and stored hierarchically, leading to further savings; I haven’t explored that part as much as I would like to yet.
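As a sanity check of these figures, assume (this is my assumption, the exact layout isn’t stated) four SH coefficients per color channel stored as 16-bit values:

```cpp
#include <cstdint>

// Assumed layout: linear (L1) SH, 4 coefficients per color channel,
// stored as 16-bit values. This is an assumption for illustration only.
struct ShTexel
{
    std::uint16_t coeffs[3][4]; // RGB x 4 coefficients = 24 bytes
};

static_assert(sizeof(ShTexel) == 24, "expected tightly packed SH texel");

// 128 * 128 texels * 24 bytes = 384 KB for the output lightmap itself,
// so the quoted 4-5 MB is dominated by the precomputed data rather than
// by the SH lightmap.
```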

Performance

A complete update of a 128×128 lightmap takes between 8 and 13 ms including directional irradiance on a single thread (the main thread), depending on the quality settings. This updates the complete lightmap every frame, and doesn’t make use of any kind of temporal coherence, culling, or similar – remember that lighting only needs to be updated whenever something changes, and updates can be done asynchronously over several frames, so there’s a lot of room for improvement in this area.
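For reference, the kind of work such an update performs resembles a classic radiosity gather over precomputed links; the actual data layout isn’t disclosed, so the following is only a generic sketch:

```cpp
#include <cstddef>
#include <vector>

struct Rgb { float r, g, b; };

// Precomputed link: which texel of the previous bounce contributes to
// this texel, and with what (visibility-weighted) form factor.
struct GatherLink
{
    std::size_t sourceTexel;
    float       formFactor;
};

// One gather iteration: computes the next bounce from the previous one.
void GatherBounce(const std::vector<Rgb>& previous,
                  const std::vector<std::vector<GatherLink>>& links,
                  std::vector<Rgb>& next)
{
    for (std::size_t i = 0; i < next.size(); ++i)
    {
        Rgb sum = {0.0f, 0.0f, 0.0f};
        for (const GatherLink& link : links[i])
        {
            const Rgb& src = previous[link.sourceTexel];
            sum.r += src.r * link.formFactor;
            sum.g += src.g * link.formFactor;
            sum.b += src.b * link.formFactor;
        }
        next[i] = sum; // albedo modulation and direct input omitted
    }
}
```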

Note that this part of the implementation isn’t optimized yet either, and doesn’t use any vectorized code or block-based updates, which would greatly benefit cache performance; at the moment it just pulls in a lot of data which doesn’t fit into the L1/L2 caches entirely. I will be working on that over the next few weeks, posting updates in between.

Outlook

I have been working on this for about 3 months now, and I would very much like to turn this into its own product – if this sparked your interest, drop me a line at: office at molecular-matters dot com!

10 thoughts on “Real-time radiosity”

    • It depends. For low-frequency lighting like indirect diffuse lighting, it’s perfectly valid to just sample the direct lighting at texture resolution without any quality loss. The only thing you have to make sure of is to use some kind of supersampling (at least for visibility), so that input texels don’t suddenly become visible/invisible from one frame to the next. How you do this is independent of the radiosity system, but when e.g. using shadow maps you normally do some kind of filtering anyway.

      I’ve also implemented gathering the direct lighting on a point cloud with evenly distributed samples on the surfaces of the mesh (which is what Enlighten is doing), and that works as well. I haven’t yet decided which method to use, maybe I’ll leave that as an option to the user – there’s no clear winner.

  1. Oh, one more question: does the resulting lightmap contain any directionality information? Do you output just a color, or some low-order SH/basis3/color+direction information?

    • Yes, it does if you want it to – I only stated that in passing in the post, my apologies.
      The system can either provide indirect lighting as an RGB texture, or as spherical harmonics coefficients which can then be used to evaluate the indirect lighting in conjunction with e.g. normal maps.
      If needed, other bases like the H-basis and HL2 basis could be supported as well.
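For illustration, evaluating linear (4-coefficient) SH irradiance with a per-pixel normal could look like this; a generic sketch using the standard SH basis constants, not code from the system:

```cpp
// Standard L0/L1 SH basis constants; evaluates one color channel of
// irradiance for a given (normalized) per-pixel normal.
struct Vec3 { float x, y, z; };

float EvaluateShIrradiance(const float sh[4], const Vec3& n)
{
    const float c0 = 0.282095f; // Y_0^0
    const float c1 = 0.488603f; // Y_1^{-1}, Y_1^0, Y_1^1
    return sh[0] * c0
         + sh[1] * c1 * n.y
         + sh[2] * c1 * n.z
         + sh[3] * c1 * n.x;
}
```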

  2. Bit confused: is all the direct lighting done dynamically in screen space (deferred lighting)? I have implemented plenty of radiosity solutions, including one that even uses billboards to simulate area lights in screen space, so I am assuming that your method is a bit like that.

    Or is all the direct lighting done with shadowmaps?

      • I’m sorry, but I cannot really go into detail about how the precomputed data is stored and used at run-time. Let me say, though, that all the bounced lighting calculations make use of precomputed visibility in one way or another, so this indeed simulates a correct radiosity bounce.

    • No, nothing is done in screen-space. This is a true radiosity simulation, not a screen-space approximation.

      Regarding direct lighting, all you need to do is calculate the direct lighting for certain points in space. Given these points, it’s completely up to you how you calculate direct lighting for them. You can either use functions provided by the radiosity system (e.g. for point lights, sky lights, etc.), or you can do your own calculations.
      Furthermore, you decide whether you want to use the CPU or the GPU for that. As an example, you could calculate direct lighting for a point light on the CPU, or use the GPU in conjunction with shadow mapping to do the same, using rendertargets, memexport, or any other means of getting the results written to main memory.

      In the screenshots above, all direct lighting is done on the CPU.
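As a small example of the CPU path, direct lighting from an unshadowed point light at one sample point could be computed like this (a generic sketch, with hypothetical names):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Rgb  { float r, g, b; };

// Unshadowed point light evaluated at a single sample point p with
// surface normal n (assumed normalized). Purely illustrative.
Rgb DirectFromPointLight(const Vec3& p, const Vec3& n,
                         const Vec3& lightPos, const Rgb& intensity)
{
    const Vec3  d     = { lightPos.x - p.x, lightPos.y - p.y, lightPos.z - p.z };
    const float r2    = d.x * d.x + d.y * d.y + d.z * d.z;
    const float rLen  = std::sqrt(r2);
    const float nDotL = (n.x * d.x + n.y * d.y + n.z * d.z) / rLen;
    const float atten = (nDotL > 0.0f ? nDotL : 0.0f) / r2; // inverse-square falloff
    return { intensity.r * atten, intensity.g * atten, intensity.b * atten };
}
```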

      • Interesting post and very pretty results! Do you make use of proxy meshes which need manual UV unwrapping for the lightmap (like Enlighten does), or have you found a different solution?

      • Thanks!

        I tried both a 3D volume data structure and lightmaps, and the results with the latter are just so much better. So yes, this means you have to add a separate UV layout for projecting the lighting onto the geometry, probably similar to what Enlighten uses.
