At A Glance: Slope-Space Integrals for Specular Next Event Estimation

Figure 1.

Anyone who has tried to render caustic reflections off of water or metallic objects will know that it can take a very long time for the render to converge to a clean image. This is because Monte Carlo light transport algorithms struggle to find light paths involving specular materials. For example, when we render a diffuse surface such as the wooden table in Figure 1, our first ray shot from the camera will hit the table, and then a secondary ray will bounce off in a random direction. Because the light reflecting off of the specular plant pots leaves in a very small cone of directions, the likelihood of our secondary ray pointing in exactly the right direction to pick up a significant contribution from the pots is very small. However, the specular light reflecting off of the pots onto the table is very bright, so the few lucky hits we do get show up as noisy caustic reflections. What we need is a way to shoot rays in these directions with a higher probability.

In a recent paper, Loubet et al. come up with a way to predict the radiance from a specular triangle, and show how this can be used to efficiently sample connections between meshes made of millions of triangles. First, they choose a triangle on a specular object based on its total contribution. Then, they sample a position within the triangle. Sampling the most relevant area in the triangle is important, as the region of the triangle making a significant contribution will usually be much smaller than the triangle itself.

To solve both of these steps, they do calculations in the space of “microfacet slopes”, or what they call “slope-space”. The term “microfacet” refers to specular reflection models which approximate reflections off of surfaces as distributions of millions of microscopic mirrors, or microfacets, on the surface. The statistical distribution of the directions of these microfacets determines the appearance of the reflection – some distributions you may have heard of include GGX, Beckmann, and Ward.
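
To make "statistical distribution of directions" concrete, here is a minimal sketch (Python/NumPy, my own illustration rather than code from any of the papers) of the GGX normal distribution function, which returns the relative concentration of microfacets pointing along a given direction for a surface whose geometric normal is (0, 0, 1):

```python
import numpy as np

def ggx_ndf(m, alpha):
    """GGX normal distribution function D(m) for a unit microfacet
    direction m (z-up frame) and roughness alpha."""
    cos_theta = m[2]                 # angle between m and the surface normal
    if cos_theta <= 0.0:
        return 0.0                   # facets below the surface contribute nothing
    tan2_theta = (1.0 - cos_theta**2) / cos_theta**2
    denom = np.pi * alpha**2 * cos_theta**4 * (1.0 + tan2_theta / alpha**2)**2
    return 1.0 / denom

# Evaluate a direction slightly off the normal for a rough and a smooth surface;
# try directions closer to and further from the normal to see the lobe shapes.
m = np.array([0.1, 0.0, np.sqrt(1.0 - 0.01)])
print(ggx_ndf(m, 0.5), ggx_ndf(m, 0.05))
```

Lower roughness values concentrate the distribution into a tight lobe around the normal, which is exactly why paths that hit smooth specular surfaces are so hard to sample.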

Figure 2.

Previous work such as Olano and Baker’s 2010 paper on LEAN Mapping involves blending the contributions of bump maps into the microfacets as the surface moves further away from the camera. As the camera moves away, the bump details are filtered out, so moving those details into the microfacets preserves them. To achieve this, they introduce the concept of “off-center” microfacet models, where the average direction of the microfacets differs from the surface normal. To work with these models, Olano and Baker transform the microfacets into a common space, referred to here as “slope-space”, which allows them to do things such as combining the results of multiple bump maps with the microfacets. Transforming into slope-space involves projecting the directions from the surface onto a plane tangent to it (Figure 2). Loubet et al. build on this work, using slope-space to simplify their own calculations.
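
The transform itself is tiny. A minimal sketch (Python/NumPy, my own, following the common microfacet slope convention; not code from either paper):

```python
import numpy as np

def normal_to_slope(n):
    """Project a unit normal n = (x, y, z), z > 0, to slope space: the 2D
    point where its direction meets the tangent plane, negated to follow
    the usual convention in which a heightfield h(x, y) has slopes
    (-dh/dx, -dh/dy)."""
    return np.array([-n[0] / n[2], -n[1] / n[2]])

def slope_to_normal(s):
    """Inverse transform: recover the unit normal from a slope."""
    n = np.array([-s[0], -s[1], 1.0])
    return n / np.linalg.norm(n)

n = np.array([0.3, 0.1, 0.9486832980505138])   # a unit-length normal
s = normal_to_slope(n)
print(s, slope_to_normal(s))                    # round-trips to the same normal
```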

So, the first step: predicting the radiance from a specular triangle. Using the example of our wooden table and specular pots shown in Figure 1, we need to find the amount of light that is reflected into our camera from a point on the table, after it has been reflected by a specular triangle belonging to a pot. In the paper, they assume that the specular roughness is constant over the triangle, and that the triangle has shading normals defined at its vertices. Calculating the radiance requires several key ingredients:
● The set of directions from the triangle to the point on the table
● The set of directions from the light source to the triangle
● The result of the BSDF (shader) at the point on the table
● The accumulated result (integral) of the BSDF over the triangle
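
Putting these ingredients together, the quantity we need looks schematically like this (my own loose notation, not the paper's exact formulation):

$$
L \;\approx\; f_{\mathrm{table}}\!\left(\mathbf{x},\, \omega_{T \to \mathbf{x}},\, \omega_{\mathrm{cam}}\right) \int_{T} f_{s}\!\left(\mathbf{p},\, \omega_{\mathbf{y} \to \mathbf{p}},\, \omega_{\mathbf{p} \to \mathbf{x}}\right) G(\mathbf{p})\, \mathrm{d}A(\mathbf{p})
$$

where $\mathbf{x}$ is the point on the table, $\mathbf{y}$ the point on the light, $T$ the specular triangle, $f_s$ its BSDF, and $G$ lumps together the geometry terms. The first three ingredients fix the arguments outside and inside the integral; the fourth, the integral itself, is the hard part.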

Solving this is very difficult for a couple of reasons. The first is that the directions to and from the triangle vary spatially over the triangle in a non-linear way, which makes them hard to calculate without just taking lots of samples, which is extremely slow. To work around this, Loubet et al. use a “far-field approximation”: they assume that the triangle is far enough from both the light source and the point on the table that the directions vary by negligible amounts over its surface, similar to how we might use a directional light to simulate the sun. This assumption holds if the triangle is either far away from both points or relatively small. To ensure it, they subdivide the triangle into smaller triangles until each piece is small enough for the approximation to hold.
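
As a sketch of what that subdivision might look like (Python/NumPy, my own illustration: the midpoint split and the solid-angle threshold are assumptions, and a full version would also carry the vertex shading normals down to the children):

```python
import numpy as np

def subdivide_until_far_field(tri, x, y, max_solid_angle=1e-3, depth=8):
    """Recursively split a triangle (three vertex positions) into four
    children until it subtends a small enough solid angle from both the
    shading point x and the light position y."""
    a, b, c = tri
    center = (a + b + c) / 3.0
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    # Approximate the subtended solid angle as area / squared distance.
    omega = max(area / np.dot(center - x, center - x),
                area / np.dot(center - y, center - y))
    if omega < max_solid_angle or depth == 0:
        return [tri]
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    pieces = []
    for child in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        pieces += subdivide_until_far_field(child, x, y, max_solid_angle, depth - 1)
    return pieces

tri = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
x, y = np.array([0.0, 0.0, 5.0]), np.array([5.0, 5.0, 5.0])
print(len(subdivide_until_far_field(tri, x, y)))   # number of far-field pieces
```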

This approximation makes everything straightforward to compute, other than the last element on our list: the integral of the BSDF over the triangle. The BSDF is still spatially varying, because the three shading normals defined at the vertices of the triangle are interpolated over the triangle's surface. This means that the average direction of the microfacets can change a lot across the triangle, which is something we don't want to neglect, as getting an accurate specular response is essential.

Figure 3.
Figure 4.

When we compare the shading normals, we can see that they define a spherical triangle of directions (the blue triangle in Figure 3) over which we need to integrate the BSDF. There is no known closed-form solution for this, so we need a clever workaround. In the paper, they give the specular triangle an off-center microfacet BSDF, which allows them to define the shading normals in slope-space. To transform into slope-space, the 3D normals are projected onto a plane, becoming 2D points (Figure 3), so our spherical triangle becomes a 2D shape in slope-space. They show that the integral can be calculated directly in slope-space as long as the shading normals vary linearly within the projection. Although they do not actually vary linearly in slope-space, they show that as long as the triangle is sufficiently small or distant, linear interpolation in slope-space (the red triangle in Figure 3) is a very close approximation. In Figure 4, we see that linear interpolation matches a distant projected spherical triangle well (left), but not a close one (right). This works in our case, because we are already assuming small, distant triangles.
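
A few lines of code show why this is reasonable. The slope of a barycentrically interpolated normal is a ratio of linear functions of the barycentric coordinates, so it is not exactly linear, but it approaches linearity as the three vertex normals cluster together, which is what happens for small or distant triangles. A small sketch (Python/NumPy, my own):

```python
import numpy as np

def normal_to_slope(n):
    return np.array([-n[0] / n[2], -n[1] / n[2]])

def slope_linearity_error(n0, n1, n2, u, v):
    """Gap between the exact slope of the interpolated shading normal and
    linear interpolation of the vertex slopes, at barycentric (u, v).
    Normalizing the interpolated normal cancels in the slope ratio, so we
    can skip it."""
    n = (1.0 - u - v) * n0 + u * n1 + v * n2
    exact = normal_to_slope(n)
    linear = ((1.0 - u - v) * normal_to_slope(n0)
              + u * normal_to_slope(n1)
              + v * normal_to_slope(n2))
    return np.linalg.norm(exact - linear)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Tightly clustered normals (a small or distant triangle): nearly linear.
n0 = unit([0, 0, 1])
print(slope_linearity_error(n0, unit([0.1, 0, 1]), unit([0, 0.1, 1]), 1/3, 1/3))  # ~8e-5
# Widely spread normals (a large, close triangle): noticeably non-linear.
print(slope_linearity_error(n0, unit([1, 0, 1]), unit([0, 1, 1]), 1/3, 1/3))      # ~0.06
```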

Ta-dah! We now have a way to efficiently approximate the total amount of radiance reflected by a triangle from one point to another, as well as an approximate distribution of that radiance over the triangle. As this is, after all, just an approximation, instead of using the result directly we use it to place more samples in the areas of the triangle with the biggest approximated impact (this is known as “importance sampling”). Loubet et al. use this to build a new “Next Event Estimation” (NEE) strategy, which they call “Specular Next Event Estimation” (SNEE). In standard NEE, at every surface hit along a path we draw an explicit connection ray to a light source, rather than waiting for a random bounce to find the light by chance. This means we get more bang for our buck from each ray we shoot, speeding up convergence significantly. SNEE extends the idea: instead of connecting a hit directly to a light source, it connects via a specular triangle, finding a two-segment sub-path from the shading point to the triangle to the light.
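
As a toy picture of that importance-sampling step (Python, my own sketch: the paper evaluates its analytic slope-space approximation, whereas here I just tabulate an arbitrary brightness function over the triangle), here is how sampling proportionally to an approximation works while keeping the estimate unbiased:

```python
import math, random

def importance_sample_triangle(approx, n=16):
    """Sample barycentric coordinates (u, v) proportionally to an
    approximate brightness approx(u, v), by tabulating it on a grid of
    cells over the barycentric domain u + v <= 1 and picking a cell by
    weight. Returns the sample and its probability density, which the
    renderer divides out to keep the estimate unbiased."""
    cells = [((i + 0.5) / n, (j + 0.5) / n)
             for i in range(n) for j in range(n - i)]
    weights = [approx(u, v) for u, v in cells]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for (u, v), w in zip(cells, weights):
        acc += w
        if acc >= r:
            cell_area = 1.0 / n**2            # area of one grid cell
            return (u, v), (w / total) / cell_area

# A hypothetical brightness lobe concentrated in one part of the triangle:
# samples land mostly inside the lobe, each weighted by 1 / pdf.
lobe = lambda u, v: math.exp(-50.0 * ((u - 0.6)**2 + (v - 0.2)**2))
print(importance_sample_triangle(lobe))
```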

If we were to do this naively, we would shoot a ray from our camera to a point to be shaded, randomly select a point on a light source, estimate the contribution of every specular triangle in the scene reflecting light from that light source onto the point being shaded, and finally importance sample one of the triangles making a significant contribution. In a scene with millions of triangles, this would be very slow. To address this, they build a hierarchical data structure in a pre-rendering step, tracing rays from the light sources to random points on each specular triangle. For each triangle, they follow the reflection and refraction bounces and store all intersections with non-specular geometry in two bounding boxes, along with the associated amount of energy: one box for reflections, and another for refractions. All of these bounding boxes are then stored in a weighted bounding volume hierarchy (BVH), which can be used at render time to efficiently search for specular paths.
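
The real data structure is more involved, but the "weighted" part can be sketched in a few lines (Python, my own simplification: point records instead of bounding boxes, and a median split instead of a proper BVH build). Sampling descends the tree proportionally to the energy stored beneath each child, so bright specular paths are found far more often than dim ones:

```python
import random

class Node:
    def __init__(self, leaf=None, left=None, right=None):
        self.leaf, self.left, self.right = leaf, left, right
        # Interior nodes cache the total energy of their subtree.
        self.energy = leaf[1] if leaf else left.energy + right.energy

def build_energy_tree(leaves):
    """Build a binary tree over (position, energy) records, splitting at
    the median x-coordinate. A real BVH would also store bounding boxes
    and prune by direction and visibility; this keeps only the weighting."""
    if len(leaves) == 1:
        return Node(leaf=leaves[0])
    leaves = sorted(leaves, key=lambda l: l[0][0])
    mid = len(leaves) // 2
    return Node(left=build_energy_tree(leaves[:mid]),
                right=build_energy_tree(leaves[mid:]))

def sample_energy_tree(node):
    """Descend the tree, choosing each child proportionally to its total
    energy; returns a leaf record and its selection probability."""
    pdf = 1.0
    while node.leaf is None:
        w = node.left.energy / node.energy
        if random.random() < w:
            node, pdf = node.left, pdf * w
        else:
            node, pdf = node.right, pdf * (1.0 - w)
    return node.leaf, pdf

leaves = [((0.0, 0, 0), 5.0), ((1.0, 0, 0), 1.0), ((2.0, 0, 0), 4.0)]
root = build_energy_tree(leaves)
print(sample_energy_tree(root))   # high-energy records are chosen more often
```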

Figure 5.
Figure 6.

The results are really impressive; they rendered the plant pot scene in 30 minutes, and the turtle scene (Figure 5) in just 5 minutes! They also use similar techniques to efficiently render high frequency normal maps on specular surfaces (Figure 6). The algorithm is unbiased, and works with standard unidirectional path-tracing. The biggest limitation with SNEE is that it only works on specular paths with a single bounce, so adding more bounces will be the next big step for improving this method.

That about wraps it up for this one, I really hope you found it interesting. Perhaps we will see these techniques coming to a path-tracer near you within the next few years!
