Ariane Robineau
Head of 3D

An overview of Differentiable Rendering

October 18, 2021 - 3D

Introduction

Over the last 50 years, 3D rendering technologies have improved dramatically and have become increasingly present in our daily lives: today’s path tracing algorithms are everywhere in movies, while most video games rely on rasterization for their graphics.

 

CG in video games and movies: Control by Remedy and Luca by Pixar

 

 

As these techniques came closer and closer to photorealism, another question arose: what if, instead of going from a 3D scene to a 2D image (rendering), we went from a 2D image to a 3D scene? As you may imagine, reconstructing 3D scenes from 2D information is quite complex, but there have been many advances in the last few years. This area of study is called inverse graphics.

This article describes differentiable rendering (DR), one of the methods used to solve inverse graphics problems.

 

What is Differentiable Rendering?

3D rendering can be defined as a function that takes a 3D scene as input and outputs a 2D image. The goal of differentiable rendering is to provide a differentiable rendering function, that is, one whose derivatives with regard to the different scene parameters can be computed.
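In symbols (notation introduced here only for illustration, not taken from the article): if R denotes the renderer, π1, …, πn the scene parameters (geometry, materials, lights, camera) and I the output image, then

\[
I = R(\pi_1, \ldots, \pi_n),
\]

and differentiable rendering aims at providing the derivatives \(\partial I / \partial \pi_k\) in addition to the image itself.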

 

The process of rendering illustrated by the Classroom scene from Blender

 

You may wonder why we would need a differentiable rendering function: many optimization techniques rely on derivatives. For instance, gradient descent algorithms use derivatives to adjust their parameters, and neural networks are trained by adjusting their weights through gradient backpropagation.

 

An example of a gradient descent loop using differentiable rendering

 

Once a renderer is differentiable, it can be integrated into optimization or neural network pipelines. These pipelines can then be used to solve inverse graphics problems such as 3D reconstruction from 2D images or light transport optimization tasks.
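As an illustration, here is a minimal gradient descent loop in the spirit of the figure above, written with PyTorch and a toy stand-in for the renderer. A real pipeline would plug in an actual differentiable renderer such as Mitsuba 2; the "renderer" below, the image size and the parameter being optimized are all assumptions made for this sketch.

```python
import torch

def render(light_pos: torch.Tensor) -> torch.Tensor:
    """Toy differentiable 'renderer': a 16x16 image lit by a point at light_pos.

    Stand-in for a real differentiable renderer; the only requirement is that
    gradients can flow from the output image back to the scene parameters.
    """
    ys, xs = torch.meshgrid(torch.arange(16.0), torch.arange(16.0), indexing="ij")
    d2 = (xs - light_pos[0]) ** 2 + (ys - light_pos[1]) ** 2
    return torch.exp(-d2 / 50.0)  # brightness falls off with distance to the light

# Reference image rendered with the (unknown) ground-truth parameters.
target = render(torch.tensor([11.0, 4.0]))

# Initial guess for the scene parameter we want to recover.
params = torch.tensor([3.0, 12.0], requires_grad=True)
optimizer = torch.optim.Adam([params], lr=0.2)

for step in range(300):
    optimizer.zero_grad()
    image = render(params)                    # forward rendering
    loss = torch.mean((image - target) ** 2)  # image-space error
    loss.backward()                           # gradients w.r.t. scene parameters
    optimizer.step()                          # update the scene parameters

print(params.detach())  # should end up near [11., 4.]
```

The structure is the same whatever the renderer: render, compare with the target image, backpropagate the image-space loss to the scene parameters, and update them.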

Many forward rendering algorithms (as opposed to inverse rendering algorithms) were not designed with differentiation in mind: phenomena such as occlusion introduce many discontinuities, and in rasterization algorithms almost every step is non-differentiable. While derivatives might be easy to obtain with regard to parameters such as color or glossiness, differentiation with regard to geometric parameters such as vertex positions or object orientation often requires changing the way the image is computed. Designing powerful and efficient differentiable rendering methods is an active area of computer graphics research.

Over the last 10 years, differentiable renderers using rasterization or ray tracing have been released by researchers, along with many experimental applications.

 

Differentiable Rendering applied to Path Tracing

Path tracing is a rendering technique widely used in the CGI industry that relies on ray tracing to render images. Well-known renderers such as Arnold, Redshift or Blender's Cycles use this algorithm. Path tracing is an excellent technique for generating photorealistic images: simulating light transport through space makes it possible to take into account effects such as global illumination or reflections.

Our goal here is to understand how path tracing works and how to build a differentiable path tracer.

How does Path Tracing work?

Path tracing is a variant of ray tracing: the propagation of light through space is modelled by rays coming from the camera. The idea behind path tracing techniques was proposed by James Kajiya in his 1986 paper, The Rendering Equation. The rendering equation can be written:

\[
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, \cos\theta \,\mathrm{d}\omega_i
\]

with:
- Lo : the light intensity (radiance) leaving the point x in direction ωo
- Le : the radiance emitted at this point
- Li : the radiance incoming at this point from direction ωi
- f : the Bidirectional Reflectance Distribution Function (BRDF), which quantifies how light is reflected
- θ : the angle between the incoming direction ωi and the surface normal
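
As a concrete illustration (this example is added here and is not part of the original article), the simplest BRDF is the Lambertian (perfectly diffuse) one, f = ρ/π with ρ the albedo, for which the rendering equation reduces to

\[
L_o(x, \omega_o) = L_e(x, \omega_o) + \frac{\rho}{\pi} \int_{\Omega} L_i(x, \omega_i)\,\cos\theta\,\mathrm{d}\omega_i .
\]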

 

The camera model used is the pinhole camera model. The camera is defined by its origin and the image plane. The image plane is subdivided into a grid of pixels corresponding to the render resolution, and rays are launched from the origin of the camera through the pixels of the image plane. Figure 1.1 illustrates how the camera works.

Now that the image plane has been transformed into a pixel grid, how is the color of each pixel collected? Propagating the boundary of a pixel through the scene highlights a large region of the scene containing multiple objects, all of which will contribute to the final color of a pixel. To compute that color, all the light passing through the pixel will have to be taken into account.
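To make this concrete, here is a minimal sketch of primary ray generation for such a pinhole camera. The field of view, resolution and coordinate conventions are assumptions of this sketch, not those of any particular renderer.

```python
import numpy as np

def primary_ray(px: float, py: float, width: int, height: int, fov_deg: float = 45.0):
    """Ray from the camera origin through the continuous pixel position (px, py).

    Illustrative conventions: the camera sits at the origin, looks down the -z
    axis, and the image plane is at z = -1.
    """
    aspect = width / height
    scale = np.tan(np.radians(fov_deg) / 2.0)
    x = (2.0 * px / width - 1.0) * aspect * scale   # map to [-1, 1] horizontally
    y = (1.0 - 2.0 * py / height) * scale           # map to [1, -1] vertically
    direction = np.array([x, y, -1.0])
    return np.zeros(3), direction / np.linalg.norm(direction)

# Ray through the centre of pixel (i, j) = (64, 32) in a 256x192 image:
origin, direction = primary_ray(64 + 0.5, 32 + 0.5, 256, 192)
```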

 

Figure 1: the camera and primary rays

 

Mathematically, this corresponds to integrating the intensity of light (radiance) over the area of the pixel. This integral is complex and expensive to compute, as the functions that define the scene are quite complicated. A Monte Carlo estimator will be used to compute the value of this integral: N rays will be launched randomly through the pixel into the scene and their light intensity will be evaluated. This operation is shown in Figure 1.2. The pixel radiance value at position (i,j) in the image grid can be written:

\[
I_{i,j} \approx \frac{1}{N} \sum_{k=1}^{N} \frac{L(u_k)}{p(u_k)}
\]

with:
- p : the probability distribution function used to sample the pixel space with rays
- uk : the random rays.
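
A sketch of this estimator, assuming a hypothetical trace(origin, direction) function that returns the radiance carried by a ray, and reusing the primary_ray helper from the camera sketch above. With uniform sampling over the pixel, the pdf is constant and the estimator reduces to a simple average.

```python
import numpy as np

def pixel_radiance(i, j, width, height, trace, n_samples=64):
    """Monte Carlo estimate of the radiance arriving at pixel (i, j) (illustrative sketch)."""
    rng = np.random.default_rng()
    total = 0.0
    for _ in range(n_samples):
        # u_k: a random position inside the pixel (uniform sampling).
        u, v = rng.random(2)
        origin, direction = primary_ray(i + u, j + v, width, height)
        total += trace(origin, direction)   # radiance carried by this primary ray
    return total / n_samples                # constant pdf folds into the average
```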

Those rays going from the camera into the scene are called primary rays. Once these rays have intersected the scene geometry, they are propagated through space. At each intersection, the light intensity at the intersection point is estimated by solving the rendering equation.
A single ray (a single sample) is launched from this intersection point into the scene. While launching a single ray per pixel won't give a precise result on its own, launching multiple rays per pixel gives a good approximation of the incoming light.

 

Figure 2: a ray bounce

 

Uniform sampling can be used to pick the bounce direction, but production path tracers use importance sampling methods based on the material characteristics of the object, which drastically reduces noise in the image and improves convergence. The rays keep bouncing until they encounter a light source or until they reach the ray depth limit (the maximum number of bounces). Figure 3 illustrates this process.

 

Figure 3: multiple rays bouncing through the scene
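
Putting the previous paragraphs together, the core of a path tracer can be sketched as a recursive, one-sample-per-bounce function. The scene and material interfaces below (scene.intersect, hit.emitted, hit.sample_bounce) are hypothetical placeholders for this sketch, not the API of any existing renderer.

```python
def trace(origin, direction, scene, depth=0, max_depth=5):
    """One-sample path tracing estimate of the radiance along a ray (illustrative sketch)."""
    hit = scene.intersect(origin, direction)       # hypothetical geometry query
    if hit is None or depth >= max_depth:
        return 0.0                                 # missed the scene or reached the bounce limit
    radiance = hit.emitted                         # L_e: non-zero only on light sources
    # Single-sample estimate of the rendering-equation integral: draw one bounce
    # direction, ideally importance-sampled according to the BRDF.
    new_direction, weight = hit.sample_bounce()    # weight = f * cos(theta) / pdf
    radiance += weight * trace(hit.position, new_direction, scene, depth + 1, max_depth)
    return radiance
```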

Path Tracing and differentiation

Now that we know how images are rendered with path tracing, let’s tackle the differentiation of the rendering function.

To sum up the previous section, the rendering function of path tracing consists of nested integrals estimated with Monte Carlo methods. The main hurdles when differentiating this function with regard to scene parameters will come from the differentiation of these integrals.

First, let's recall when and how integrals can be differentiated. In order to differentiate the integral of a function f over a domain D with regard to one of its parameters p, the function f must satisfy the following requirements:
- f must be continuous with respect to p over D
- df/dp must be continuous with respect to p over D.
These conditions can also be summed up as: f belongs to the C1(D) class. To be precise, the functions we will deal with in this study are piecewise-continuous; for the above properties to hold, the function must satisfy the continuity conditions on each of its subdomains, and the boundaries of these subdomains must not depend on the parameter p.
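
Written out (this equation is added here for clarity; it is the standard rule for differentiating under the integral sign), the property we rely on is

\[
\frac{\partial}{\partial p} \int_{D} f(x, p)\,\mathrm{d}x = \int_{D} \frac{\partial f}{\partial p}(x, p)\,\mathrm{d}x ,
\]

valid when the continuity conditions above hold and the boundaries of the (sub)domains do not depend on p.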

Let's differentiate the rendering function for one pixel with respect to a scene parameter p (p can be the color of an object, its position, etc.).
The first step in differentiating the light intensity value of the pixel is to differentiate the pixel integral:

\[
\frac{\partial I_{i,j}}{\partial p} = \frac{\partial}{\partial p} \int_{A} L(u, p)\,\mathrm{d}u = \int_{A} \frac{\partial L}{\partial p}(u, p)\,\mathrm{d}u
\]

In order to proceed with this step and replace the differentiated integral with the integral of the differentiated integrand, the incoming light function L and its derivative dL/dp must be continuous over the integration domain A (the pixel area) with regard to the parameter p, i.e. L belongs to C1(A). This point is the key to a differentiable path tracer.

Let's review some types of scene parameters and the continuity of the incoming light function L. To explain the issues arising with the differentiation of these integrals, we'll draw from the explanations in Loubet et al. We'll have to go back to the "pixel view" area to illustrate the influence of each kind of parameter on the integral. Changes in color parameters (and by extension textures) do not affect the discontinuities in the "pixel view" area, so the integrand can be differentiated. However, the location of discontinuities in the "pixel view" area changes when geometric parameters such as object or light positions vary; these parameters therefore don't meet the continuity conditions. Figure 4 illustrates this: (4.2) for color parameters and (4.3) for geometric ones.

 

Figure 4: evolution of the integration domain with regard to different parameters

 

If this condition (continuity) is met, differentiation is quite straightforward: the integral of dL/dp can be estimated with a Monte Carlo method in a similar way to forward path tracing. The following can be written:

\[
\frac{\partial I_{i,j}}{\partial p} \approx \frac{1}{N} \sum_{k=1}^{N} \frac{1}{\mathrm{pdf}(u_k)}\,\frac{\partial L}{\partial p}(u_k)
\]

with:
- pdf : the probability distribution function used to sample the pixel space with rays
- uk : the random samples.
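
Only as an illustration, the same sampling loop used for the forward estimate can be reused, assuming a hypothetical dL_dp(origin, direction) callable that evaluates the differentiated integrand (for instance via automatic differentiation), and reusing the primary_ray helper sketched earlier.

```python
import numpy as np

def pixel_gradient(i, j, width, height, dL_dp, n_samples=64):
    """Monte Carlo estimate of d(pixel radiance)/dp, mirroring the forward estimator (sketch)."""
    rng = np.random.default_rng()
    total = 0.0
    for _ in range(n_samples):
        u, v = rng.random(2)                              # u_k: random position inside the pixel
        origin, direction = primary_ray(i + u, j + v, width, height)
        total += dL_dp(origin, direction)                 # derivative of the radiance along this ray
    return total / n_samples                              # constant pdf folds into the average
```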

 

However, if these continuity conditions are not met, the derivatives can't be computed that easily. Differentiating light integrals with regard to geometric parameters is an active research topic, and several ways to solve this issue have been proposed; two of them are presented in the following paragraphs.
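
A simple one-dimensional example (added here as an illustration, not taken from the original article) shows why a moving discontinuity breaks naive differentiation under the integral sign:

\[
I(p) = \int_0^1 \mathbf{1}[x > p]\,\mathrm{d}x = 1 - p \quad (0 < p < 1), \qquad \frac{\mathrm{d}I}{\mathrm{d}p} = -1 ,
\]

yet the integrand's derivative with respect to p is zero almost everywhere, so differentiating inside the integral would wrongly give 0: the whole contribution comes from the discontinuity at x = p, which moves with the parameter. The same phenomenon occurs when an object silhouette moves across a pixel.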

The first solution to differentiate incoming light with regard to geometric parameters was proposed by Loubet et al. in their paper, Reparameterizing discontinuous integrands for differentiable rendering. They handle the issues around geometric discontinuities with changes of variables in light integrals. Once the discontinuity has been eliminated from the integral, it can be estimated with Monte Carlo methods. This method has been integrated into the Mitsuba 2 renderer and will be the one we use later in our examples.
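
Schematically (this is the general change-of-variables idea, written here as an illustration rather than the paper's exact construction), the integral is rewritten as

\[
\int_{\Omega} L(\omega, p)\,\mathrm{d}\omega = \int_{\Omega} L\big(T(\omega', p), p\big)\,\left|\det \frac{\partial T}{\partial \omega'}\right| \mathrm{d}\omega' ,
\]

where the reparameterization T is chosen so that the discontinuities of the new integrand no longer move with the parameter p, making differentiation under the integral sign valid again.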

A second solution, proposed by Li et al. in their paper Differentiable Monte Carlo Ray Tracing through Edge Sampling, uses the Reynolds transport theorem to split the derivative of the integral into a continuous integral and a boundary integral. The boundary integral is needed to take the geometric discontinuities into account when computing the derivative. The first of these two integrals can be estimated with the same techniques as when the continuity conditions are met; the boundary integral is evaluated by sampling the silhouette edges of scene objects.
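
In schematic form (stated here for illustration, following the general Reynolds transport theorem rather than the paper's exact notation):

\[
\frac{\partial}{\partial p} \int_{D(p)} f(x, p)\,\mathrm{d}x = \int_{D(p)} \frac{\partial f}{\partial p}(x, p)\,\mathrm{d}x + \int_{\partial D(p)} f(x, p)\,(v \cdot n)\,\mathrm{d}s ,
\]

where v is the velocity of the boundary with respect to p and n its normal. The first term is the interior integral; the second is the boundary integral evaluated by sampling silhouette edges.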

 

Conclusion

In this article we studied the path tracing rendering algorithm and explained how to build a differentiable path tracer.
A differentiable path tracer will allow the integration of advanced light transport simulation into optimization pipelines and neural networks. An implementation of the Mitsuba 2 differentiable path tracer on Qarnot and two experiments will be presented in a second article, soon to come!

If you have any questions about this article and differentiable rendering, feel free to contact us at qlab@qarnot.com.

written by Victor Tuizat
