I added a repository on GitHub containing models and images of the Platonic Solids in 3D.

Some recent GitHub repositories that might be of interest:

- PPM image IO – A single-header C++ implementation of the PPM image format, a convenient format that does not bother with compression. Files can be opened in Photoshop; what more can you ask?
- Poisson Disk Sampling – A single-header C++ implementation of the paper “Fast Poisson Disk Sampling in Arbitrary Dimensions” published by Robert Bridson in 2007.

The goal is to add a note to this page whenever I publish something interesting on GitHub.


Wireframe rendering is normally done in two passes: the first renders the filled triangles and the second renders the lines, using the depth buffer from the first pass to remove hidden lines. Not only does this involve passing the geometry to the graphics card twice, but slight differences in rasterization between lines and triangles also cause depth-testing issues for the lines. These differences result in rendering artefacts, and there is no good way to resolve them.

In 2006 a new technique was proposed in a SIGGRAPH sketch entitled Single-pass Wireframe Rendering. The technique uses a pair of shaders to render triangles and lines in a single pass. Not only does this overcome the rasterization issues, it is also faster and produces smooth results. The main idea is to compute the distances from fragments to triangle edges. If a fragment is within a threshold distance (half the line width) of a triangle edge, it is rendered with the line color; otherwise it is rendered with the triangle color. A smoothing function is applied at the boundary between line and triangle to remedy aliasing artefacts. Most of the work is done in a vertex shader, where each vertex's distance to the opposite triangle edge is computed in viewport space. It is these (interpolated) distances that are the input to the fragment shader.

A more robust implementation, using geometry shaders, has been proposed by NVIDIA. Their implementation deals with some tricky cases related to primitives having one or more vertices outside the viewing frustum, and further reduces the amount of data sent to the graphics card.
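
The per-fragment decision can be sketched in a few lines. This is a CPU-side illustration (all names are my own, not from the SIGGRAPH sketch or the NVIDIA sample), assuming the three interpolated viewport-space edge distances arrive from the vertex stage:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

struct Color { float r, g, b; };

// Blend factor: 1 on the line, 0 in the triangle interior, with a
// one-pixel smoothstep transition past halfWidth to hide aliasing.
inline float lineCoverage(const std::array<float, 3>& dist, float halfWidth)
{
    const float d = std::min({dist[0], dist[1], dist[2]});
    const float t = std::clamp(d - halfWidth, 0.0f, 1.0f); // 1-px feather
    return 1.0f - t * t * (3.0f - 2.0f * t);               // smoothstep
}

// Mix line and fill colors by the coverage factor, as the fragment
// shader would.
inline Color shadeFragment(const std::array<float, 3>& dist, float halfWidth,
                           Color line, Color fill)
{
    const float a = lineCoverage(dist, halfWidth);
    return { line.r * a + fill.r * (1.0f - a),
             line.g * a + fill.g * (1.0f - a),
             line.b * a + fill.b * (1.0f - a) };
}
```

In the real shader the same blend is a `mix()` driven by `smoothstep()`; the point is that no second geometry pass and no line rasterization are involved.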

Volumetric effects, such as smoke, are difficult to capture with standard rasterization techniques because light interacts with volumes rather than surfaces. This real-time renderer was written for Naiad Studio and uses a volume rendering approach based on camera-aligned proxy geometry and shaders. This method is well explained in GPU Gems. The lighting equations are integrated by rendering the proxy geometry in multiple passes, accumulating light in a view buffer while simultaneously accumulating visibility from the light source.
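
A minimal CPU sketch of this slice-by-slice accumulation, for a single eye/light ray pair, might look as follows. Names and parameters are illustrative only, not taken from the Naiad Studio renderer:

```cpp
#include <cmath>
#include <vector>

struct Accum { float light; float eyeColor; float eyeAlpha; };

// "density" holds per-slice smoke density sampled from the proxy geometry,
// ordered front to back from the camera.
Accum integrateSlices(const std::vector<float>& density,
                      float extinction, float smokeAlbedo)
{
    float lightVis = 1.0f;   // visibility from the light, per slice
    float eyeColor = 0.0f;   // radiance accumulated toward the camera
    float eyeAlpha = 0.0f;   // opacity accumulated toward the camera
    for (float rho : density) {
        const float a = 1.0f - std::exp(-extinction * rho); // slice opacity
        // Light scattered toward the eye is attenuated by what already lies
        // between this slice and the camera (front-to-back compositing)...
        eyeColor += (1.0f - eyeAlpha) * a * smokeAlbedo * lightVis;
        eyeAlpha += (1.0f - eyeAlpha) * a;
        // ...while the light buffer simultaneously accumulates visibility.
        lightVis *= (1.0f - a);
    }
    return { lightVis, eyeColor, eyeAlpha };
}
```

On the GPU the two accumulations live in the view buffer and a light buffer, updated in alternating render passes over the proxy slices.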

Iso-surfaces are ubiquitous in computer graphics, especially in applications where geometry undergoes significant topological changes over time. Level sets are a special type of iso-surface where the volumetric data is a Euclidean distance field; fluid simulations often use level sets to track fluid surfaces over time, with the zero-crossing of the distance field representing the interface. This real-time renderer was written for Naiad Studio and uses a volume rendering approach based on camera-aligned proxy geometry and shaders. This method is well explained in GPU Gems. Volume data is stored as a 3D texture on the GPU, and the fragments generated from the proxy geometry are used as sampling locations into this texture. This approach has several advantages over methods that extract explicit geometry from volumetric data (e.g. *Marching Cubes*): the distance field is shown “as is”, avoiding artefacts caused by superimposed structures; per-pixel shading is inherently provided; and it becomes trivial to render any iso-value without additional setup.
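
As a rough illustration of the sampling step, here is a CPU sketch in which a function stands in for the 3D texture lookup (all names are hypothetical). Rendering a different iso-value really is just a change of one number, and the shading normal falls out of the field's gradient:

```cpp
#include <array>
#include <cmath>
#include <functional>

// Stand-in for a 3D texture fetch of the distance field.
using Field = std::function<float(float, float, float)>;

// A fragment lies on the iso-surface if the sampled value crosses "iso"
// within half the sampling step along the ray.
bool onIsoSurface(const Field& field, float x, float y, float z,
                  float iso, float halfStep)
{
    return std::fabs(field(x, y, z) - iso) <= halfStep;
}

// Shading normal from the central-difference gradient of the field; for a
// Euclidean distance field the gradient is already close to unit length.
std::array<float, 3> gradientNormal(const Field& field,
                                    float x, float y, float z, float h)
{
    std::array<float, 3> g = {
        (field(x + h, y, z) - field(x - h, y, z)) / (2.0f * h),
        (field(x, y + h, z) - field(x, y - h, z)) / (2.0f * h),
        (field(x, y, z + h) - field(x, y, z - h)) / (2.0f * h)
    };
    const float len = std::sqrt(g[0]*g[0] + g[1]*g[1] + g[2]*g[2]);
    for (float& c : g) c /= len;
    return g;
}
```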

Manipulators are a set of interactive tools for controlling transformations of objects. Letting users manipulate objects interactively on the screen is far more intuitive than trial-and-error tweaking of numbers in pursuit of suitable values. The most common transformations are *translation*, *rotation*, and *scale*. Manipulators are presented to users as widgets attached to objects, and by interacting with these widgets the transformation parameters are changed. Many modeling packages, e.g. Maya, offer some form of interactive manipulation of objects. The manipulators shown here were implemented from scratch, using C++ and OpenGL, for Naiad Studio. Rotation was implemented using the famous ArcBall quaternion method.
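
The ArcBall idea fits in a few lines: map screen positions onto a unit sphere, then build a quaternion from two mapped points. The sketch below is a minimal illustration under that scheme, not the Naiad Studio implementation:

```cpp
#include <array>
#include <cmath>

struct Quat { float w, x, y, z; };

// Map a normalized screen position in [-1,1]^2 onto the unit sphere.
std::array<float, 3> mapToSphere(float px, float py)
{
    const float d2 = px * px + py * py;
    if (d2 <= 1.0f)                      // inside the ball: lift onto sphere
        return { px, py, std::sqrt(1.0f - d2) };
    const float d = std::sqrt(d2);       // outside: clamp to the silhouette
    return { px / d, py / d, 0.0f };
}

// Quaternion with w = a.b and vector part a x b: this rotates by twice the
// angle between the mapped points, which is the classic ArcBall feel.
Quat arcballRotation(const std::array<float, 3>& a,
                     const std::array<float, 3>& b)
{
    return { a[0]*b[0] + a[1]*b[1] + a[2]*b[2],
             a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}
```

Dragging the mouse updates the second mapped point each frame; composing the resulting quaternion with the object's orientation gives smooth, gimbal-lock-free rotation.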

Large point cloud visualization is challenging because of the huge sizes of modern data sets. Additionally, using points as rendering primitives has significant drawbacks that are further amplified when large numbers of points are involved. Put simply, mathematical points have no volume and therefore cast no shadows. Further, points have no surface area and no natural normal vector, which is commonly used for shading. Although both volume and surface area can be approximated, it is difficult to get right. An alternative is to use cubes as rendering primitives. Cubes have clearly defined volumes and six distinct surfaces, which means they can be rendered with traditional techniques.

Cubes are created and stored in a hierarchical data structure known as an octree. Using a divide-and-conquer strategy, cubes are created to encapsulate the input point set, so that each cube contains one or more data points. The cubes are in fact the bounding boxes of the leaf nodes in the octree, which is recursively sub-divided to a user-defined depth. Data points are streamed, which means that there is no limit to the number of points used in octree creation. Further, the memory footprint of the octree is orders of magnitude smaller than that of the raw points. The cubes are output to an OBJ file, which can be rendered in real-time or offline. In the case of offline rendering (especially ray-tracing), neighboring cube faces can be merged, which drastically reduces the amount of geometry to process (by roughly an order of magnitude). This greatly speeds up rendering times without compromising visual quality. The images on the left were rendered with Maxwell.
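
The core of the streaming step can be sketched as follows (a simplified illustration with hypothetical names, not the original code): for each incoming point, find the leaf cube at the user-defined depth that encloses it, and record only the occupied leaves. Memory then scales with the number of cubes rather than the number of points:

```cpp
#include <array>

struct Cube { std::array<float, 3> min; float size; };

// Descend from the root cube to the leaf containing point p at the given
// depth; the returned cube is that leaf's bounding box.
Cube leafCube(const std::array<float, 3>& p,
              const std::array<float, 3>& rootMin, float rootSize, int depth)
{
    Cube c{ rootMin, rootSize };
    for (int level = 0; level < depth; ++level) {
        c.size *= 0.5f;                       // each split halves the cube
        for (int axis = 0; axis < 3; ++axis)  // pick one of the 8 children
            if (p[axis] >= c.min[axis] + c.size)
                c.min[axis] += c.size;
    }
    return c;
}
```

Because each point is reduced to a leaf index immediately, points can be read from disk one at a time and discarded, which is what removes the limit on input size.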

The image below was shortlisted for the UCD Image of Research Competition 2008. The input point set for this image was part of a high-grade aerial laser scan of the city of Dublin.

The famous rock band Radiohead have generously released the data captured during the making of the video for their single House of Cards. The data consists of several thousand frames of real-time laser scan data of singer Thom Yorke’s face. A few frames are shown on the left. Two levels of voxelization, rendered in different colors, are overlaid and rendered with depth-of-field and motion blur. Finally, the images were post-processed to remove some saturation from the green and blue channels.

The Preetham sky model simulates sky color for a given time (year, day of year and time of day) and location (latitude, longitude). The main idea is to compute the color at zenith and use a clever distribution technique to approximate the rest of the hemisphere. This leads to a cheap, closed-form solution that is reasonably close to meteorological measurements and relatively easy to implement. However, this model is not perfect, and a rigorous criticism was made recently. In spite of this, the Preetham sky model provides nice backgrounds to CG scenes, especially when compared to many of the simpler alternatives, such as single colors or gradients. The main alternative is to go out and acquire panoramic photographs of real skies, a task that requires expensive hardware and is time-consuming. Also, photographs cannot be animated over time, providing only static snapshots of lighting conditions. Finally, this implementation could be improved by adding physically based glare and some form of tone mapping.
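
At the heart of the Preetham model is the Perez distribution, which scales the zenith value across the hemisphere. A sketch of that closed form is below; `theta` is the view angle from zenith, `gamma` the angle between view and sun directions, and the five coefficients depend on turbidity (any concrete values used here are illustrative, not the model's published fits):

```cpp
#include <cmath>

struct PerezCoeffs { double A, B, C, D, E; };

// Perez distribution F(theta, gamma) = (1 + A e^{B / cos theta})
//                                    * (1 + C e^{D gamma} + E cos^2 gamma).
double perez(double theta, double gamma, const PerezCoeffs& c)
{
    return (1.0 + c.A * std::exp(c.B / std::cos(theta))) *
           (1.0 + c.C * std::exp(c.D * gamma) +
            c.E * std::cos(gamma) * std::cos(gamma));
}

// Sky value at (theta, gamma) given the zenith value, normalized so that
// looking straight up (theta = 0, gamma = sun angle) reproduces the zenith
// value exactly.
double skyValue(double zenith, double theta, double gamma,
                double thetaSun, const PerezCoeffs& c)
{
    return zenith * perez(theta, gamma, c) / perez(0.0, thetaSun, c);
}
```

In the full model this is evaluated separately for luminance Y and the chromaticities x and y, each with its own turbidity-dependent coefficient set and zenith value.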