Rendering Solution for Complex Geometry

Using sphere tracing



Author:
Dave De Breuck

Website: www.davedebreuck.com
Github: github/Dyronix

Hello, my name is Dave and I'm a Software Engineer @ Twikit NV. We create products for all types of industries, ranging from automotive and medical to sportswear and even games. Our specialty is designing configurators that let customers design their own 3D geometry, ready to be printed on any 3D printer currently available on the market. However, my role does not have much to do with the configurator itself. Due to my background in games, having graduated from Digital Arts and Entertainment as a Game Developer, I am responsible for the graphics: anything that is visible on screen, I create. At the time of writing I am obtaining my master's degree in Game Technology at Breda University of Applied Sciences. In this article I will elaborate on how one can achieve the results of my master's thesis.


Introduction

Modern society is full of media devices. Having a responsive software product on any of these devices would benefit many industries: individualising three-dimensional (3D) printed products, medical visualisations representing a diagnosis in a 3D environment, and even games looking to support a more extensive player base.

We will explore an example from the Additive Manufacturing (AM) industry. One of the complex structures AM has recently taken an interest in is the lattice structure. Lattices are built from space-filling unit cells that can be tessellated along any axis with no gaps between cells, and they can fill all types of geometry without losing structural integrity. However, visualising lattice structures with computer graphics can become hard on sub-optimal hardware. Depending on the size of the lattice, the polycount can reach millions of triangles, which is not feasible to visualise in real-time on consumer hardware. Additionally, representing such geometry in a modelling tool such as Maya or Blender demands a high level of knowledge, patience, foresight and meticulous preparation to ensure that models have adequate control vertices where detail is desired.


Figure 1: Example of a lattice structure in the shape of a shoe – Carbon 3D


In this article, we propose a solution for developers to create a 3D renderer that uses a volumetric approach to visualise these structures. Our target platform will be the web, focusing on Chromium-based browsers. This also means that state-of-the-art technology such as mesh shaders or ray tracing is not available. However, it also makes our solution compatible with all kinds of platforms, as a more generic approach is used to achieve the desired outcome. Volumes have shown promising results in visualising high-fidelity geometry without the cost of uploading the required surface-based data to the GPU. An added benefit of volumes is that they can perform Boolean operations more efficiently.

Preliminaries

Before we can talk about the process of creating a volumetric rendering pipeline, some key mathematical and programming ideas used throughout this article have to be explained.

Distance Fields

A distance field is a scalar field that specifies the minimum distance to the surface of a shape. If we examine this in more detail, a distance field can be represented by a function F such that any given point P returns the distance d to the object represented by the function. We store the distances returned by such a function in a 3D matrix or, as it is more commonly known within graphics programming, a 3D texture. Each texture cell holds the closest distance from that grid element to the nearest surface. Therefore, a grid element containing a value of 0 represents the surface of a shape.
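In practice, sampling such a field from a shader is a single texture fetch. Below is a minimal GLSL sketch; the uniform names u_volume, u_bounds_min and u_bounds_max are illustrative and assume the texture covers an axis-aligned bounding box around the shape:


    uniform sampler3D u_volume;    // 3D texture holding the distance values
    uniform vec3 u_bounds_min;     // world-space minimum corner of the volume
    uniform vec3 u_bounds_max;     // world-space maximum corner of the volume

    // Map a world-space position into the [0, 1] range of the 3D texture
    // and return the stored distance to the nearest surface.
    float sample_distance(vec3 world_position)
    {
        vec3 uvw = (world_position - u_bounds_min) / (u_bounds_max - u_bounds_min);
        return texture(u_volume, uvw).r;
    }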


Figure 2: A circle is represented by a 2D distance field – Inigo Quilez

Sphere Tracing

Visualising a distance field can be achieved by using an algorithm called sphere tracing. Sphere tracing is a technique for rendering implicit surfaces using a geometric distance, which is exactly what we stored within our 3D texture. To find the distance towards a shape, we need to define a distance function for it or have a generated volume available to trace against. For example, a sphere with center (x_0, y_0, z_0) situated at the world origin and radius r can be represented, for any point P, as follows:


|P|^2 - r^2 = 0


This equation is what is called an implicit function, and a sphere represented in this form is also called an implicit shape. An implicit equation only tells us whether a particular point is inside the shape (negative values), outside the shape (positive values), or precisely on the surface (a value of 0). The collection of points where the implicit function equals x is called an iso-surface of value x (or iso-contour in two dimensions). Sphere tracing is a method of drawing a surface based solely on this data. For more information about sphere tracing, see the dedicated Sphere Tracing section at the end of this article.
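In code, an implicit shape like this maps directly to a distance function. A minimal GLSL sketch for the sphere above (the function name is illustrative):


    // Signed distance from point p to a sphere of radius r centred at the origin:
    // negative inside, zero on the surface, positive outside.
    float sd_sphere(vec3 p, float r)
    {
        return length(p) - r;
    }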

Boolean Operations

Volumetric data can easily represent shapes defined via Boolean operations. These operations are often used in CAD software as part of a technique called Constructive Solid Geometry (CSG), which applies the same operations to surface data rather than volumetric data; that makes the algorithm a lot more CPU intensive, as new geometry has to be constructed on the fly. Modelling complex shapes by assembling simple shapes such as spheres, cubes and planes would be hard to achieve if we modelled our geometry by hand. Being able to blend implicit shapes is a quality that parametric surfaces lack, and thus one of the main motivations for using them. For more information about Boolean operations, see the dedicated Boolean Operations section at the end of this article.
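On distance values, these Boolean operations reduce to simple min/max expressions. A minimal GLSL sketch (the function names are illustrative; op_intersect is reused later when molding the lattice):


    // Union: keep whichever surface is closest.
    float op_union(float d1, float d2)     { return min(d1, d2); }

    // Subtraction: carve the first shape out of the second.
    float op_subtract(float d1, float d2)  { return max(-d1, d2); }

    // Intersection: keep only the overlapping region.
    float op_intersect(float d1, float d2) { return max(d1, d2); }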

Deferred Shading

Deferred rendering or deferred shading is based on the idea that we defer most heavy calculations (such as lighting) to a later stage. We can achieve deferred shading with one geometry pass and one light pass. The geometry pass renders the scene once and stores distinct data about the displayed geometry in different textures, commonly known as the G-buffer. Position vectors, color vectors, normal vectors, and/or specular values make up the majority of this data. In the second pass, we render a full-screen quad and calculate the final image using the data in the G-buffer. Because the G-buffer only contains data from the topmost fragment, we need to do the light calculations just once per pixel. If deferred rendering is a little fuzzy, I highly recommend reading the following article: Learn OpenGL: Deferred Shading by Joey de Vries.
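As a rough GLSL sketch of such a light pass, assuming the G-buffer attachments are bound as textures and a single directional light (all uniform names are illustrative):


    in vec2 v_TexCoord;
    out vec4 frag_color;

    uniform sampler2D g_Normal;        // world-space normals from the geometry pass
    uniform sampler2D g_Albedo_Spec;   // RGB: albedo, A: specular intensity
    uniform vec3 u_light_direction;
    uniform vec3 u_light_color;

    void main(void)
    {
        vec3 normal = normalize(texture(g_Normal, v_TexCoord).rgb);
        vec3 albedo = texture(g_Albedo_Spec, v_TexCoord).rgb;

        // Simple diffuse term; the heavy lifting happens only once per pixel.
        float diffuse = max(dot(normal, -u_light_direction), 0.0);
        frag_color = vec4(albedo * u_light_color * diffuse, 1.0);
    }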

Strategy

Now that we've caught up, we can start working on a volume renderer. Any watertight (closed, non-self-intersecting, manifold) mesh can be used to construct a volume. There are already a lot of tools that can produce a volume for us, such as Unity's built-in SDF Bake Tool. However, we opted for a more programmatic approach and used a library called [IGL](https://libigl.github.io). This library is developed in C++ and may be used to produce a volume as part of our pipeline. The steps for creating a volume with the IGL library are as follows. First, we import a mesh (which is also possible using an IGL function, igl::readOBJ). Next, we feed the imported data into IGL's signed distance function:


    // Choose the type of signing to use
    igl::SignedDistanceType sign_type = igl::SIGNED_DISTANCE_TYPE_PSEUDONORMAL;

    // P: query points, V/F: mesh vertices and faces
    // S: signed distances, I: closest faces, C: closest points, N: normals
    igl::signed_distance(P, V, F, sign_type, S, I, C, N);

When this function executes successfully, the output S contains the signed distance for every query point, giving us our volume.


Figure 3: Generated signed distance field of the Stanford Bunny - IGL.


As previously indicated, we employed a deferred rendering approach to incorporate our volumetric renderer into a conventional rendering pipeline. This means that our volumetric framebuffer will produce a G-Buffer. This G-Buffer is built by leveraging our sphere tracer within the fragment shader of our render pass. This render pass might be created using the following pseudocode:


    /*
        color attachment 00: stores position of the surface that was hit
        color attachment 01: stores normal of the surface that was hit
        color attachment 02: stores albedo (RGB) and specular (A) of the surface that was hit
    */

    color_attachments.add(create_color_attachment(Format::RGB_32_FLOAT));
    color_attachments.add(create_color_attachment(Format::RGB_32_FLOAT));
    color_attachments.add(create_color_attachment(Format::RGBA_32_FLOAT));

    framebuffer = ResourceFactory::create_frame_buffer(color_attachments);

    /*
        What remains is adding this framebuffer to the pipeline.

        Our sphere tracer is a full screen image effect so we only require position and texcoord.
        Depth testing and face culling can be disabled.
    */

    pipeline_desc.layout = { {DataType::VEC3, "a_Position"}, {DataType::VEC2, "a_TexCoord"} };
    pipeline_desc.framebuffer = framebuffer;
    pipeline_desc.depth_test_state = { DepthTestEnabled::NO };
    pipeline_desc.facecull_state = { FaceCullingEnabled::NO };

    create_pipeline(pipeline_desc);

Accompanying this render pass is a shader that traces against our generated volume.


    out vec3 g_Position;
    out vec3 g_Normal;
    out vec4 g_Albedo_Spec;

    void main(void)
    {
        vec3 ray_origin = get_ray_origin(v_camera_world);
        vec3 ray_direction = get_ray_direction(v_camera_world);

        raymarch(ray_origin, ray_direction);

        if(HIT_NOTHING || HIT_MAX_MARCH_DISTANCE)
            discard;

        if(HIT_SURFACE)
        {
            vec3 position = ray_origin + (ray_direction * distance_travelled);
            vec3 normal = calculate_normal(position);

            g_Position = position;
            g_Normal = normal;
            g_Albedo_Spec = get_material(surface_id);
        }
    }

We now have all of the data we need to develop a high-quality renderer. The data in the G-Buffer is given to the lighting pass, which calculates all relevant lighting information needed to illuminate our scene. Furthermore, the produced frame might be enhanced using other rendering techniques such as ambient occlusion, reflection, or subsurface scattering. Other material attributes, such as roughness and metalness, might be added to the lookup table in addition to albedo and specular values. This would allow us to make a PBR material that we could use on our traced volume (we opted for simple diffuse shading, since light propagation and varied visual effects are not the focus of this post). Finally, to create a depth buffer, the travelled distances might be translated back into the camera's depth range. The depth buffer could be used to create a hybrid approach that combines surface-based geometry with volumetric data in the same scene.
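As a rough sketch of that depth export, assuming the world-space hit position is still available in the fragment shader and a combined view-projection matrix is provided (the uniform name is illustrative):


    uniform mat4 u_view_projection;

    // Convert the world-space hit position back into a depth value so the
    // traced volume can be depth-tested against rasterised geometry.
    void write_depth(vec3 hit_position)
    {
        vec4 clip_position = u_view_projection * vec4(hit_position, 1.0);
        float ndc_depth = clip_position.z / clip_position.w;  // [-1, 1] in OpenGL
        gl_FragDepth = ndc_depth * 0.5 + 0.5;                 // remap to [0, 1]
    }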

Lattified Geometry

There is only one step left: producing a lattified version of the specified volume. I mentioned earlier that volumetric data offers the advantage of being able to use Boolean operations more efficiently. Three sets of operations are used to create a lattice structure:

  1. A union of line segments to create the unit cell we want to visualise; as previously said, "a lattice is a tessellated unit cell...", and at this point we create that single unit cell.
  2. A repetition operation to tessellate our unit cells along any axis, which can be found in the following article: Inigo Quilez: Infinite Repetition
  3. An intersection operation to mold our lattice into the correct shape, which can be found in the following article: Inigo Quilez: Primitive Combinations

The implementation of these steps within our fragment shader could look as follows (the helper functions sd_line_segment and op_tesselate are sketched after the listing):

    // signed distance function of a cross unit cell
    float sd_lattice_cross(vec3 position, float beamWidth)
    {
        vec3 center_unit = vec3(0.5, 0.5, 0.5);
        vec3 coord_translated = position - center_unit;
        vec3 coord_translated_q1 = abs(coord_translated);

        vec3 start_point = vec3(0.0);
        vec3 end_point = vec3(0.5);

        float dist = sd_line_segment(coord_translated_q1, start_point, end_point);

        return dist - beamWidth;
    }

    // our raymarch function evaluates this for x amount of iterations
    float map_distance_field(vec3 position)
    {
        // Check the bounds of the scene
        if(sd_cube(position, scene_size, scene_center) >= u_max_march_distance)
        {
            return u_max_march_distance;
        }

        // First, tessellate our given position
        // Second, map the lattice structure by tracing against a single or multiple unit cells
        vec3 tessellated_position = op_tesselate(position, repetition);
        float lattice_distance = map_lattice(tessellated_position);

        // Calculate the distance value towards the outer shape
        float shape_distance = map_outershape(position);

        // Intersect the infinite lattice with the outer shape
        return op_intersect(lattice_distance, shape_distance);
    }
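The helper functions used above are not part of the listing; one possible GLSL sketch of sd_line_segment (a standard capsule-style segment distance) and op_tesselate (domain repetition, following Quilez's article) could look like this, with the exact signatures being assumptions:


    // Distance from point p to the line segment between a and b.
    float sd_line_segment(vec3 p, vec3 a, vec3 b)
    {
        vec3 pa = p - a;
        vec3 ba = b - a;
        float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
        return length(pa - ba * h);
    }

    // Repeat space with a period of 'repetition' units along every axis.
    // The repeated coordinate stays in [0, repetition), which matches the
    // unit cell above being defined over [0, 1] and centred at (0.5, 0.5, 0.5).
    vec3 op_tesselate(vec3 position, vec3 repetition)
    {
        return mod(position, repetition);
    }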

Summary

All relevant data of our volumetric renderer can be stored within a G-Buffer; this allows us to utilise the output of our framebuffer in a deferred rendering pipeline.


Figure 4: Example of the G-Buffer.


We can use this G-Buffer to calculate any light information in our scene, as well as any additional image effects that might be required, such as ambient occlusion.


Figure 5: Example of a lit scene, generated from a given G-Buffer.


To achieve a hybrid renderer, we can export a depth buffer by converting the "true" distance to the scene back into camera distance. Additionally, we could create a lattified version of the given volume if desired.


Figure 6: Light fixtures (cubes) added on top of a volume rendered frame.
Figure 7: Lattified version of the given volume.

Conclusion

Distance field rendering in its current state is neither robust nor fast enough for large-scale commercial video game productions. Nonetheless, in comparison to today's industry norms, the simplicity of these techniques makes them desirable for rendering and other use cases such as modelling. Algorithms and rendering technology will advance over time, allowing for efficient hybrid or full-on volume rendering within game development.

Sphere Tracing


Visualising a distance field can be achieved by using an algorithm called sphere tracing. Sphere tracing [Figure 1] is a technique for rendering implicit surfaces using a geometric distance.

Figure 1: Sphere tracing visualised

John Hart published an important paper on this topic. To find the distance towards a shape, we need to define a distance function for it. For example, a sphere with center (x_0,y_0,z_0) and radius r can be represented in the following way:

(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2

If we consider that x, y, and z are the coordinates of point P and position our sphere at the world origin (0,0,0), we can simplify the above equation as follows:

|P|^2 - r^2 = 0

This equation is what is called an implicit function. A sphere represented in this form is also called an implicit shape. Unlike an explicit equation that can compute surface points based on parameters, an implicit equation only tells us if a particular point is inside a shape (negative values), outside a shape (positive values), or precisely on the surface (value of 0). The collection of points where the implicit function equals x is called an iso-surface of value x (or iso-contour in 2 dimensions). Sphere tracing is a method of drawing a surface solely based on this data.

Intersections between our defined implicit shape (in this example, a sphere) and a ray may be computed. The following equation can be used to represent a ray:

P = O + tD

Where O is the ray’s origin, D is the ray’s direction, and we use t to define any point along the ray. When t > 0, the point we describe is ‘ahead’ of the ray’s origin. When t < 0, the point we describe is ‘behind’ the ray’s origin, and when t = 0, the defined point and the origin are the same. If we combine both the implicit function of the sphere and the equation of a ray, our equation will give the following result:

|O + tD|^2 - r^2 = 0

Hart’s technique is based on the idea that we can identify the closest point B on the surface of an implicit shape to any random point A in space. Once B is found, one can trace a sphere (a sphere in 3D, a circle in 2D) of radius ‖AB‖ around point A. From this, we can make a helpful observation: if you start from A and move in any direction by a distance of at most ‖AB‖, you will not intersect the shape. [Figure 2]

Figure 2: Finding the nearest point on a surface. A circle is defined with a radius equal to the distance from the origin to the nearest point on the surface

Hart took advantage of this observation: starting at the ray origin (the point A), compute the nearest point on the object's surface, then move along the ray direction by a distance that does not exceed the circle or sphere radius (‖AB‖). By repeating this process, the distance between the ray position and the closest surface becomes smaller and smaller. Eventually, the distance reaches such a small value that it can be neglected and the point can be considered on the surface. This minimum value must be specified and might require some adjustments to reach optimal results. [Figure 3] [Figure 4]


Figure 3: We move along the ray direction by the radius of the circle.
Figure 4: We repeat the process from Figures 2 and 3 until we reach a minimum threshold
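Put together, the marching loop is only a few lines. A minimal GLSL sketch of Hart's procedure, reusing the map_distance_field function from earlier (the constants are illustrative and need tuning per scene):


    const int   MAX_STEPS          = 128;
    const float SURFACE_THRESHOLD  = 0.001;
    const float MAX_MARCH_DISTANCE = 100.0;

    // March along the ray, each time stepping by the distance to the nearest
    // surface, until we are close enough to call it a hit or we give up.
    float sphere_trace(vec3 ray_origin, vec3 ray_direction)
    {
        float distance_travelled = 0.0;
        for(int i = 0; i < MAX_STEPS; ++i)
        {
            vec3 position = ray_origin + ray_direction * distance_travelled;
            float d = map_distance_field(position);  // distance to the nearest surface

            if(d < SURFACE_THRESHOLD)
                return distance_travelled;           // close enough: we hit the surface

            distance_travelled += d;                 // safe step: can never overshoot
            if(distance_travelled > MAX_MARCH_DISTANCE)
                break;                               // left the scene
        }
        return -1.0;                                 // miss
    }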

According to Hart, as long as we can supply a distance function for each shape, we can build up a scene of implicit shapes. Due to the nature of the sphere tracing algorithm, we know we will never step through any shape present in this scene. In terms of our lattice structures, which are made up of a tessellation of unit cells, we may create a scene of unit cells and provide a distance function for each one. Afterward, we may utilise sphere tracing to determine the surface of those unit cells and portray our lattice structure correctly.

Images are taken from scratchapixel.com

Boolean Operations


Our primary motivation for using sphere tracing is that it can easily represent shapes defined via Boolean operations. These operations are often used in a technique called Constructive Solid Geometry (CSG), which consists of modelling complex shapes by assembling simple shapes such as spheres, cubes, planes, cones, etc. The outcome of these operations might be hard to achieve if we modelled our geometry by hand, and being able to blend implicit shapes is a quality that parametric surfaces lack; this is one of the main motivations for using them. One of the most fundamental actions is to combine two objects, which is best expressed mathematically as the union operator. As illustrated in [Figure 1], you may achieve this effect by simply returning the smallest of the two distances to the shapes you want to merge.

Figure 1: Union

You may also subtract the volume of one form from another, which is best expressed mathematically as a set difference operator (also known as a subtraction). To do so, you first need to invert the sign of the distance estimator of the first shape. When examining [Figure 2], it’s clear that the sphere's interior now provides a positive signed distance to the surface, whereas any point outside the sphere now gives a negative signed distance. Then take the maximum of the two distances to create a hole in the second object that’s equal to the shape of the first object.

Figure 2: Subtraction

One last example could be calculating the surface resulting from the intersection of two surfaces. We can compute this (as illustrated in [Figure 3]) by taking the largest distance using the max operator. What these operations have in common is that they use Boolean operators, hence the name Boolean operations.

Figure 3: Intersection

Note that these operations only have a valid distance representation outside the shape (positive values). Even though these operations work for visualisation, evaluations used by these operations will only return a lower bound to the actual distance of the resulting surface. However, this is not an issue for our use case, which is rendering implicit geometry, as we will never “march” within the object.

Images are taken from scratchapixel.com