Published an open-source, single-file, header-only C++ library for reading and writing OBJ mesh files; its only dependency is the C++ standard library. Code, documentation, and examples can be found at github.com/thinks/obj-io.
I added a GitHub repository containing 3D models and images of the Platonic solids.
I will be posting more on GitHub than on this page in the future. Since I intend to share more actual code than I have in the past, GitHub is a much better fit than a blog; it is ideal for sharing code along with a short write-up. I will be focusing on small projects that tackle well-defined problems. The code I share will minimize dependencies and will for the most part be in C++.
Some recent GitHub repositories that might be of interest:
The goal is to add a note to this page whenever I publish something interesting on GitHub.
Wireframe rendering is normally done in two passes: the first renders the filled triangles and the second renders the lines, using the depth buffer from the first pass to remove hidden lines. Not only does this pass the geometry to the graphics card twice, it also suffers from depth-testing issues caused by slight differences in how lines and triangles are rasterized. These differences produce rendering artefacts, and there is no good way to resolve them.

In 2006 a new technique was proposed in a SIGGRAPH sketch entitled Single-pass Wireframe Rendering. The technique uses a pair of shaders to render triangles and lines in a single pass. Not only does this avoid the rasterization issues, it is also faster and produces smooth results. The main idea is to compute the distance from each fragment to the nearest triangle edge. If a fragment is within a threshold distance (half the line width) of an edge, it is rendered with the line color; otherwise it is rendered with the triangle color. A smoothing function is applied at the boundary between line and triangle to remedy aliasing artefacts. Most of the work is done in a vertex shader, where the distance from each vertex to the opposite triangle edge is computed in viewport space. These (interpolated) distances are the input to the fragment shader.

A more robust implementation, using geometry shaders, has been proposed by NVIDIA. It handles some tricky cases where primitives have one or more vertices outside the viewing frustum and further reduces the amount of data sent to the graphics card.
Volumetric effects, such as smoke, are difficult to capture with standard rasterization techniques because light interacts with volumes rather than surfaces. This real-time renderer was written for Naiad Studio and uses a volume rendering approach based on camera-aligned proxy geometry and shaders. This method is well explained in GPU Gems. Lighting equations are integrated by rendering proxy geometry in multiple passes, storing accumulated light in a view buffer, while simultaneously accumulating visibility from a light source.
Iso-surfaces are ubiquitous in computer graphics, especially in applications where geometry undergoes significant topological changes over time. Level sets are a special type of iso-surface where the volumetric data is a Euclidean distance field. Fluid simulations often use level sets to track the fluid surface over time, using the distance zero-crossing to represent interfaces. This real-time renderer was written for Naiad Studio and uses a volume rendering approach based on camera-aligned proxy geometry and shaders. This method is well explained in GPU Gems. Volume data is stored as a 3D texture on the GPU, and the fragments generated from the proxy geometry are used as sampling locations into this texture. This approach has several advantages over methods that extract explicit geometry from volumetric data (e.g. Marching Cubes). The distance field is shown "as is", avoiding artefacts caused by superimposed structures. Per-pixel shading is inherently provided, and it becomes trivial to render any iso-value without additional setup.
Manipulators are a set of interactive tools for controlling transformations of objects. Letting users manipulate objects interactively on the screen is far more intuitive than trial-and-error tweaking of numbers in pursuit of suitable values. The most common transformations are translation, rotation, and scale. Manipulators are presented to users as widgets attached to objects, and by interacting with these widgets the transformation parameters are changed. Many modeling packages, e.g. Maya, offer some form of interactive manipulation of objects. The manipulators shown here were implemented from scratch, using C++ and OpenGL, for Naiad Studio. Rotation was implemented using the well-known ArcBall quaternion method.