Baking signals into textures

Whenever you’re baking signals into texture maps, there are quite a few issues to watch out for, regardless of the signal being baked. This post explains the issues I ran into while implementing a baking framework.

There’s a multitude of signals that can be baked into texture maps, with the most widely known one being static lighting, resulting in so-called lightmaps. Other signals that can be baked are radiosity normal maps, coefficients for precomputed radiance transfer (e.g. spherical harmonics or wavelets), radiosity form factors, ambient occlusion, etc.

In theory, baking signals into texture maps is easy, and basically boils down to the following steps:

  1. Generate a separate UV-set for all surfaces of a mesh, making sure that no UVs overlap each other.
  2. Tightly pack the surfaces into a certain number of texture maps, resulting in an atlas to be used during rendering.
  3. Evaluate the signal to be baked (e.g. static lighting) for each texel in UV-space at the corresponding world space position.

Steps 1 and 2 can get quite involved and won’t be handled in this post – see Ignacio Castaño’s excellent post on Lightmap Parameterization instead.

Let us focus on Step 3, assuming that an appropriate UV-set has been generated for all surfaces in the scene already, using a suitable tool such as Maya’s built-in automatic unwrapping, or an external tool such as Unwrella.

While Step 3 sounds simple in theory, there are several practical problems when trying to implement it.
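In code, the naive version of Step 3 might look like the following sketch. Note that Mesh, Image, Texel, RasterizeUV() and EvaluateSignal() are hypothetical placeholders standing in for whatever your framework provides, not an actual API:

```cpp
// Naive baking loop: for each triangle, rasterize it in UV-space and
// evaluate the signal at every covered texel's world space position.
// Mesh, Image, Texel, RasterizeUV() and EvaluateSignal() are hypothetical
// placeholders, not a real API.
void BakeMesh(const Mesh& mesh, Image& atlas)
{
    for (const Triangle& tri : mesh.triangles)
    {
        // yields all texels covered by the triangle, with attributes
        // (world space position/normal) interpolated per texel
        for (const Texel& texel : RasterizeUV(tri, atlas.width, atlas.height))
        {
            atlas.Store(texel.x, texel.y,
                        EvaluateSignal(texel.worldPosition, texel.worldNormal));
        }
    }
}
```

The remainder of this post is essentially about why this naive version is broken.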

Rasterization

In order to be able to evaluate our signal in world space, we need to map texels in UV-space to their corresponding position in world space. What we need to do is rasterize each triangle in UV-space, interpolating its attributes (e.g. the world space position) along the edges, like we used to do in the days of good ol’ software rasterization.

Interpolation of any attribute can easily be achieved using barycentric coordinates, but the question remains: which texels are actually touched by our rasterizer? Each graphics API has a clearly defined set of rules for rasterizing triangles (mostly a top-left fill rule) which make sure that no pixel is touched twice, and that there are no visible seams at triangle edges. This article has a nice software implementation of a half-space triangle rasterizer following the rules defined by both Direct3D and OpenGL.
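As a concrete example, here is a small sketch of the edge functions a half-space rasterizer is built on, used to compute barycentric weights at a given UV position. float2 is a minimal stand-in for whatever vector type you use:

```cpp
// minimal stand-in for a 2D vector type
struct float2 { float x, y; };

// signed edge function: twice the signed area of triangle (a, b, p);
// a half-space rasterizer evaluates these expressions incrementally
static float EdgeFunction(const float2& a, const float2& b, const float2& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Computes barycentric weights (w0, w1, w2) of point p inside UV-space
// triangle (a, b, c); assumes counter-clockwise winding in UV-space.
// Any attribute such as the world space position is then interpolated
// as w0*A0 + w1*A1 + w2*A2.
bool Barycentric(const float2& a, const float2& b, const float2& c,
                 const float2& p, float& w0, float& w1, float& w2)
{
    const float area = EdgeFunction(a, b, c);
    if (area == 0.0f)
        return false;   // degenerate triangle

    w0 = EdgeFunction(b, c, p) / area;
    w1 = EdgeFunction(c, a, p) / area;
    w2 = EdgeFunction(a, b, p) / area;
    return (w0 >= 0.0f) && (w1 >= 0.0f) && (w2 >= 0.0f);
}
```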

But even if you use a rasterization scheme like the one mentioned above when rasterizing triangles in UV-space, there will still be visible seams at the edges of some triangles. The problem is that UV-space and world space do not coincide, because triangles in UV-space can be arbitrarily rotated in order to provide a better fit in the UV atlas. Texture fetching rules then cause the GPU to fetch texels which were never filled by our rasterizer, resulting in black spots along some edges – note that this has nothing to do with bilinear filtering, it occurs even with point filtering!

The solution is to account for each texel’s area (not treating texels as infinitesimally small points), and use a process known as conservative rasterization. With point filtering, the GPU will then no longer fetch unbaked texels, getting rid of the black spots along edges.
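A simple conservative coverage test can be built from the edge functions above: a texel can only be rejected if all four corners of its square lie outside the same triangle edge. This is a sketch, not a full-blown conservative rasterizer, and it errs on the side of including a few extra texels near triangle corners, which is exactly what we want here:

```cpp
// Conservative coverage test: keeps a texel if its full square overlaps
// the triangle, not just its center. Reuses EdgeFunction() and float2 from
// the sketch above, and assumes the same counter-clockwise winding.
// texelMin/texelMax span the texel's area in UV-space.
bool TexelOverlapsTriangle(const float2& a, const float2& b, const float2& c,
                           const float2& texelMin, const float2& texelMax)
{
    const float2 corners[4] =
    {
        { texelMin.x, texelMin.y }, { texelMax.x, texelMin.y },
        { texelMin.x, texelMax.y }, { texelMax.x, texelMax.y }
    };

    const float2 verts[3] = { a, b, c };
    for (int e = 0; e < 3; ++e)
    {
        int outside = 0;
        for (int i = 0; i < 4; ++i)
        {
            if (EdgeFunction(verts[e], verts[(e + 1) % 3], corners[i]) < 0.0f)
                ++outside;
        }
        if (outside == 4)
            return false;   // all corners outside this edge -> no overlap
    }
    // conservative: may report a few false positives near triangle corners,
    // but never misses a texel whose area actually overlaps the triangle
    return true;
}
```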

UV / texture size mismatch

One problem to be aware of is that you cannot bake a signal into a texture map whose resolution doesn’t match the resolution the UV-set was created for. If the UV-set was generated for a 1024×1024 texture map, you cannot bake the signal into a 512×512 texture map – this will lead to leaks and other problems, as can be seen in this presentation from Gamefest 2010. However, baking into a texture map with the correct resolution and downsampling the result (using an appropriate filter, not a simple bilinear downsample) will work.

Bilinear filtering

In order to make bilinear filtering work, a so-called gutter (a border of padding texels around each chart) needs to be added along triangle edges, which prevents wrong colors from bleeding into the baked signal. This can be achieved relatively simply by adding skirts when rasterizing.
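A common alternative to rasterized skirts is a post-process dilation pass, sketched below under the assumption of a float3 math type (with operator+= and operator/) and a per-texel coverage mask produced by the rasterizer:

```cpp
#include <cstdint>
#include <vector>

// One dilation iteration: every unbaked texel takes the average of its
// baked neighbors, growing the signal outwards by one texel. Run it a few
// times so the bilinear footprint never touches unbaked texels.
// float3 (with operator+= and operator/) is an assumed math type.
void DilatePass(std::vector<float3>& texels, std::vector<uint8_t>& baked,
                int width, int height)
{
    std::vector<uint8_t> grown = baked;
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            if (baked[y * width + x])
                continue;                       // already holds valid data

            float3 sum = { 0.0f, 0.0f, 0.0f };
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy)
            {
                for (int dx = -1; dx <= 1; ++dx)
                {
                    const int nx = x + dx;
                    const int ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;
                    if (baked[ny * width + nx])
                    {
                        sum += texels[ny * width + nx];
                        ++count;
                    }
                }
            }
            if (count > 0)
            {
                texels[y * width + x] = sum / (float)count;
                grown[y * width + x] = 1;       // valid for the next pass
            }
        }
    }
    baked.swap(grown);
}
```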

Mip-maps

Mip-mapping baked signals such as lightmaps poses yet another problem – naive downscaling of an image (even with proper padding/gutter) will result in black texels being fetched at lower mip-map resolutions. A simple solution is to weight texels when downscaling/averaging, based on whether they actually belong to the signal (weight=1) or not (weight=0).
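A sketch of such a weighted 2×2 downsample follows; float3 is again an assumed math type, and the coverage weights are the ones produced during baking (1 for baked texels, 0 otherwise):

```cpp
#include <vector>

// Weighted 2x2 downsample of one mip level: each source texel carries a
// coverage weight (1 = baked, 0 = unbaked), so unbaked texels never pull
// the average towards black. The averaged weight is carried over so the
// next mip level can be built the same way. float3 is an assumed math type.
void DownsampleMip(const std::vector<float3>& src, const std::vector<float>& srcWeight,
                   int srcWidth,
                   std::vector<float3>& dst, std::vector<float>& dstWeight,
                   int dstWidth, int dstHeight)
{
    dst.assign(dstWidth * dstHeight, float3{ 0.0f, 0.0f, 0.0f });
    dstWeight.assign(dstWidth * dstHeight, 0.0f);

    for (int y = 0; y < dstHeight; ++y)
    {
        for (int x = 0; x < dstWidth; ++x)
        {
            float3 sum = { 0.0f, 0.0f, 0.0f };
            float weight = 0.0f;
            for (int dy = 0; dy < 2; ++dy)
            {
                for (int dx = 0; dx < 2; ++dx)
                {
                    const int srcIndex = (y * 2 + dy) * srcWidth + (x * 2 + dx);
                    sum += src[srcIndex] * srcWeight[srcIndex];
                    weight += srcWeight[srcIndex];
                }
            }
            if (weight > 0.0f)
                dst[y * dstWidth + x] = sum / weight;
            dstWeight[y * dstWidth + x] = weight * 0.25f;
        }
    }
}
```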

Hemispherical sampling

Most signals, such as static lighting, ambient occlusion, and PRT, need to be integrated over the hemisphere using e.g. Monte Carlo integration, which means that many samples (~10000) have to be taken at each world space position in order to properly evaluate the integral at the respective texel. Again, the fact that UV-space and world space do not coincide causes problems when sampling signals over the hemisphere.
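For ambient occlusion, such a Monte Carlo estimate might look like the following sketch. CosineSampleHemisphere() uses standard cosine-weighted sampling, so the cosine term and the pdf cancel and the estimator reduces to the average visibility; TangentToWorld(), TraceRay() and Rand01() are assumed helpers (an orthonormal basis around the normal, a boolean visibility query against the scene, and a uniform random number in [0, 1)):

```cpp
#include <cmath>

// Cosine-weighted hemisphere sample in tangent space (Malley's method):
// uniformly sample a disk, then project up onto the hemisphere. With this
// distribution, the pdf cos(theta)/pi cancels the cosine term of the AO
// integral, so the estimator is just the average visibility.
float3 CosineSampleHemisphere(float u1, float u2)
{
    const float r = sqrtf(u1);
    const float phi = 2.0f * 3.14159265f * u2;
    return { r * cosf(phi), r * sinf(phi), sqrtf(1.0f - u1) };
}

// Monte Carlo ambient occlusion at one world space position.
// TangentToWorld(), TraceRay() and Rand01() are assumed helpers.
float BakeAO(const float3& position, const float3& normal, int sampleCount)
{
    int visible = 0;
    for (int i = 0; i < sampleCount; ++i)
    {
        const float3 dir =
            TangentToWorld(CosineSampleHemisphere(Rand01(), Rand01()), normal);

        // offset the origin slightly along the normal to avoid self-intersection
        if (!TraceRay(position + normal * 0.001f, dir))
            ++visible;
    }
    return (float)visible / (float)sampleCount;   // 1 = fully open, 0 = occluded
}
```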

Assume we want to bake ambient occlusion information, and imagine a simple box where the floor is slightly tilted to the left or right, so that the lightmap texels no longer align with the box’s edges. This means that some of the floor’s texels will be partially covered by either the left or right wall. When sampling the hemisphere at each texel, which point do we use as the origin of the sampling/ray-tracing rays?

For an infinitesimally small texel, the origin would always lie at the texel’s center, which yields wrong results if the center is occluded but, say, 40% of the texel’s area is not. The solution is again to consider the area of each texel, and randomize each ray’s origin so it can start anywhere on the texel’s area.
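A sketch of that idea, reusing Barycentric() and float2 from the rasterization section; the Triangle layout and Rand01() are again assumptions:

```cpp
// Pick a random ray origin on the texel's footprint: jitter a UV position
// inside the texel's square, then re-interpolate the world space position
// from the triangle's barycentric weights. For partially covered texels,
// rejection-sample until the jittered point lies inside the triangle; the
// conservative rasterizer guarantees the texel overlaps the triangle, so
// this terminates. Triangle layout and Rand01() are assumed.
float3 JitterRayOrigin(const Triangle& tri, int x, int y, int width, int height)
{
    for (;;)
    {
        const float2 uv = { ((float)x + Rand01()) / (float)width,
                            ((float)y + Rand01()) / (float)height };

        float w0, w1, w2;
        if (Barycentric(tri.uv[0], tri.uv[1], tri.uv[2], uv, w0, w1, w2))
            return tri.pos[0] * w0 + tri.pos[1] * w1 + tri.pos[2] * w2;
    }
}
```

Each hemisphere sample from the previous sketch then starts at such a jittered origin instead of the texel’s center.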

Conclusion

While baking signals in UV-space sounds simple, coming up with a robust implementation can be tricky if you’ve never done it before. I hope this post provided some insights into the process, and hopefully there will be screenshots to show as soon as I get assets approved for public viewing.
