Monday, September 30, 2013

A complete, cross-platform Oculus Rift sample application in almost one file

There are few things as dispiriting as finding a blog on a topic you're really keen on and then seeing that it withered and died several years before you found it.

Dear future reader, I'm totally sorry if this happened to you on my account.  However, you needn't worry about it yet; I have lots more to write.  In fact, today I will vouchsafe to my readers (both of them) a large swath of code, representing a minimal application for the Rift, in a single C++ source file.  Aside from the main source file, there are six shader files representing the vertex and fragment shaders for three programs.

The program uses four libraries, all of which are embedded as git submodules in the project, so there's no need to download and install anything else.  All you need is your preferred development environment, and CMake to generate your project files.

I've attempted to make minimal use of preprocessor directives, but some small platform differences make it unavoidable.

GL3W - OpenGL 3.x

Unless you're using some custom-built ray-tracing engine, using the Rift pretty much requires per-pixel processing in order to apply the distortion shader.  This means using the OpenGL or DirectX programmable pipeline functionality.  Microsoft does its best to encourage the use of DirectX, and one of the ways it does so is by providing no access to any OpenGL API beyond 1.x in its development tools.  That means if you want to target Win32 and any other platform you basically have to make a choice: either abstract your rendering code so that you can write both DirectX and OpenGL implementations, or jump through a few hoops to make use of the modern OpenGL APIs.  I've chosen the latter, since providing both a DirectX and OpenGL implementation in a single file would produce a (more) unreadable mess.

Currently on Windows I'm using a library called GL3W which on initialization goes out and dynamically finds the addresses of all the GL 3.x APIs, making them available for use by the application.  This is similar to what GLEW does, and I may switch to GLEW as it appears to be more actively maintained.  Both GLEW & GL3W require an intermediary step to actually construct the source code that does the dynamic loading, and GL3W is what I got working first, so that's why it's currently in use.  Rather than attempting to clone the source repository and integrate the meta-build mechanism into my CMake build, I've simply done the meta-build and embedded the resulting gl3w.c directly into my repository.

The result is that my inclusion of OpenGL headers looks like this:

#if defined(WIN32)

    #include <GL/gl3w.h>

#else

    #include <sstream>
    #include <fstream>

    #ifdef __APPLE__
        #include "CoreFoundation/CFBundle.h"
        #include <OpenGL/gl.h>
        #include <OpenGL/glext.h>
    #else
        #include <GL/gl.h>
        #include <GL/glext.h>
    #endif // __APPLE__

#endif // WIN32

GLFW - Windowing

In order to abstract away the creation of the UI window and OpenGL context, as well as input handling, I'm using GLFW.  Fortunately it does a very good job of abstracting away the per-OS differences, so its include block is much simpler.

#include <GLFW/glfw3.h>

The remainder of the includes provide some basic C++ standard library headers, GLM (an excellent math library), and the Oculus SDK header.  I chose GLM because it closely mirrors the types and APIs of the GLSL language itself.  That makes moving math operations into and out of a shader easier, and requires less mental translation when switching between GLSL and C++ code.  It's also handy in that it's a header-only library; it requires no build step of its own.

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <glm/gtc/noise.hpp>

#include <iostream>
#include <string>
#include <map>
#include <stdint.h>

#include "OVR.h"
// The Oculus SDK redefines 'new' in debug builds; undo that so the rest of
// the file compiles cleanly
#undef new

using namespace std;
using namespace OVR;


For some operations, I use the elapsed time since the program start as part of a calculation.  This means I need a cross-platform way of getting that.  I'm used to Java, which has System.currentTimeMillis(), but unless I want to use Boost, there's nothing like this in C++.  So I'm forced again to create dual implementations.

#ifdef WIN32

    long millis() {
        static long start = GetTickCount();
        return GetTickCount() - start;
    }

#else

    #include <sys/time.h>

    long millis() {
        timeval time;
        gettimeofday(&time, NULL);
        long millis = (time.tv_sec * 1000) + (time.tv_usec / 1000);
        static long start = millis;
        return millis - start;
    }

#endif


Not wanting to include a mesh loading library in my application means I'm limited in the geometry I can present.  However, a simple cube does an adequate job as the geometric version of Hello, World.  Along with the octahedron, it's the only platonic solid whose vertex positions don't require trigonometry to calculate.  On reflection, an octahedron might have been simpler to deal with since its faces are triangles rather than quads, but then again, I would have had to come up with two whole additional colors for the additional faces.  So I dodged a bullet there.

#define CUBE_SIZE 0.4f
#define CUBE_P (CUBE_SIZE / 2.0f)
#define CUBE_N (-1.0f * CUBE_P)

// Vertices for a unit cube centered at the origin
const GLfloat CUBE_VERTEX_DATA[] = {
    CUBE_N, CUBE_N, CUBE_N, // Vertex 0 position
    CUBE_P, CUBE_N, CUBE_N, // Vertex 1 position
    CUBE_P, CUBE_P, CUBE_N, // Vertex 2 position
    CUBE_N, CUBE_P, CUBE_N, // Vertex 3 position
    CUBE_N, CUBE_N, CUBE_P, // Vertex 4 position
    CUBE_P, CUBE_N, CUBE_P, // Vertex 5 position
    CUBE_P, CUBE_P, CUBE_P, // Vertex 6 position
    CUBE_N, CUBE_P, CUBE_P, // Vertex 7 position
};

const GLfloat CUBE_FACE_COLORS[] = {
    RED, 1,
    GREEN, 1,
    BLUE, 1,
    YELLOW, 1,
    CYAN, 1,
    MAGENTA, 1,
};

// 6 sides * 2 triangles * 3 vertices
const GLuint CUBE_INDICES[] = {
   0, 4, 5, 0, 5, 1, // Face 0
   1, 5, 6, 1, 6, 2, // Face 1
   2, 6, 7, 2, 7, 3, // Face 2
   3, 7, 4, 3, 4, 0, // Face 3
   4, 7, 6, 4, 6, 5, // Face 4
   3, 0, 1, 3, 1, 2  // Face 5
};

// 12 edges * 2 vertices per edge
const GLuint CUBE_WIRE_INDICES[] = {
   0, 1, 1, 2, 2, 3, 3, 0, // square
   4, 5, 5, 6, 6, 7, 7, 4, // facing square
   0, 4, 1, 5, 2, 6, 3, 7, // transverse lines
};

I'm surprised that the GLM vector type doesn't contain static members for the unit vectors, but then again, perhaps it's a bulwark against issues with static class members in dynamic link libraries.  Or perhaps it's simply because they're easy enough to define in an application.  I also use const vectors to define the default camera position.

const glm::vec3 X_AXIS = glm::vec3(1.0f, 0.0f, 0.0f);
const glm::vec3 Y_AXIS = glm::vec3(0.0f, 1.0f, 0.0f);
const glm::vec3 Z_AXIS = glm::vec3(0.0f, 0.0f, 1.0f);
const glm::vec3 CAMERA = glm::vec3(0.0f, 0.0f, 0.8f);
const glm::vec3 ORIGIN = glm::vec3(0.0f, 0.0f, 0.0f);
const glm::vec3 UP = Y_AXIS;

Loading Shader Resources

Probably the single biggest difference between platforms is in the area of resource loading.  I need to load the shader source code from the files containing it in order to pass it into OpenGL.

One approach that avoids this issue is to embed the shader source code as string literals in the C++ source itself.   Indeed this is what the Oculus SDK samples do.  However, doing that makes iterating and tweaking the shaders much more difficult, and if I'm anything, I'm an iterator and a tweaker.

No, not that kind of tweaker

In other instances I've written my shader wrapper objects to automatically detect when a shader file has changed and reload it (or report an error and continue using the last working version if it fails to compile or link).  I don't do that here, but it's a direction in which I'm likely to go, so it's another reason to avoid embedded shaders.

This means I have to approach the different mechanisms of loading resources head on.  Fortunately, both Windows and OSX have relatively straightforward ways of embedding content into the executable or application bundle during the build process, and then extracting it at runtime.  CMake provides reasonably easy access to these mechanisms.  For the remaining platform, Linux, CMake copies the files to a resource directory from which I load them (after computing the location of the executable during the main() function, much further down in the source).

#ifdef WIN32

    static string loadResource(const string& in) {
        static HMODULE module = GetModuleHandle(NULL);
        HRSRC res = FindResourceA(module, in.c_str(), "TEXTFILE");
        HGLOBAL mem = LoadResource(module, res);
        DWORD size = SizeofResource(module, res);
        LPVOID data = LockResource(mem);
        string result((const char*)data, size);
        return result;
    }

#else

    static string slurp(ifstream& in) {
        stringstream sstr;
        sstr << in.rdbuf();
        string result = sstr.str();
        return result;
    }

    static string slurpFile(const string & in) {
        ifstream ins(in.c_str());
        return slurp(ins);
    }

    #ifdef __APPLE__
        static string loadResource(const string& in) {
            static CFBundleRef mainBundle = CFBundleGetMainBundle();

            CFStringRef stringRef = CFStringCreateWithCString(NULL, in.c_str(), kCFStringEncodingASCII);
            CFURLRef resourceURL = CFBundleCopyResourceURL(mainBundle, stringRef, NULL, NULL);
            char *fileurl = new char[PATH_MAX];
            auto result = CFURLGetFileSystemRepresentation(resourceURL, true, (UInt8*)fileurl, PATH_MAX);
            return slurpFile(string(fileurl));
        }
    #else
        string executableDirectory(".");

        static string loadResource(const string& in) {
            return slurpFile(executableDirectory + "/" + in);
        }
    #endif // __APPLE__

#endif // WIN32

Building Shaders

The code that encapsulates OpenGL shader creation, loading, compiling, & linking is self-contained and complex enough to warrant its own class.  Most of the class is composed of static methods that enclose a small bit of GL boilerplate.

class GLprogram {
public:
    static string getProgramLog(GLuint program) {
        string log;
        GLint infoLen = 0;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLen);

        if (infoLen > 1) {
            char* infoLog = new char[infoLen];
            glGetProgramInfoLog(program, infoLen, NULL, infoLog);
            log = string(infoLog);
            delete[] infoLog;
        }
        return log;
    }

    static string getShaderLog(GLuint shader) {
        string log;
        GLint infoLen = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);

        if (infoLen > 1) {
            char* infoLog = new char[infoLen];
            glGetShaderInfoLog(shader, infoLen, NULL, infoLog);
            log = string(infoLog);
            delete[] infoLog;
        }
        return log;
    }

    static GLuint compileShader(GLuint type, const string shaderSrc) {
        // Create the shader object
        GLuint shader = glCreateShader(type);
        assert(shader != 0);
        const char * srcPtr = shaderSrc.c_str();
        glShaderSource(shader, 1, &srcPtr, NULL);
        glCompileShader(shader);
        GLint compiled;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (compiled == 0) {
            string errorLog = getShaderLog(shader);
            cerr << errorLog << endl;
        }
        assert(compiled != 0);
        return shader;
    }

    static GLuint linkProgram(GLuint vertexShader, GLuint fragmentShader) {
        GLuint program = glCreateProgram();
        assert(program != 0);
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);
        // Link the program
        glLinkProgram(program);
        // Check the link status
        GLint linked;
        glGetProgramiv(program, GL_LINK_STATUS, &linked);
        if (linked == 0) {
            cerr << getProgramLog(program) << endl;
        }
        assert(linked != 0);
        return program;
    }

    GLuint vertexShader;
    GLuint fragmentShader;
    GLuint program;
    typedef map<string, GLint> Map;
    Map attributes;
    Map uniforms;

    GLprogram() : vertexShader(0), fragmentShader(0), program(0) { }

    void use() {
        glUseProgram(program);
    }

    void close() {
        if (0 != program) {
            glDeleteProgram(program);
            program = 0;
        }
        if (0 != vertexShader) {
            glDeleteShader(vertexShader);
            vertexShader = 0;
        }
        if (0 != fragmentShader) {
            glDeleteShader(fragmentShader);
            fragmentShader = 0;
        }
    }

    void open(const string & name) {
        open(name + ".vs", name + ".fs");
    }

    void open(const string & vertexShaderFile, const string & fragmentShaderFile) {
        string source = loadResource(vertexShaderFile);
        vertexShader = compileShader(GL_VERTEX_SHADER, source);
        source = loadResource(fragmentShaderFile);
        fragmentShader = compileShader(GL_FRAGMENT_SHADER, source);
        program = linkProgram(vertexShader, fragmentShader);
        static GLchar GL_OUTPUT_BUFFER[8192];
        int numVars;
        glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &numVars);
        for (int i = 0; i < numVars; ++i) {
            GLsizei bufSize = 8192;
            GLsizei size; GLenum type;
            glGetActiveAttrib(program, i, bufSize, &bufSize, &size, &type, GL_OUTPUT_BUFFER);
            string name = string(GL_OUTPUT_BUFFER, bufSize);
            GLint location = glGetAttribLocation(program, name.c_str());
            attributes[name] = location;
            cout << "Found attribute " << name << " at location " << location << endl;
        }

        glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &numVars);
        for (int i = 0; i < numVars; ++i) {
            GLsizei bufSize = 8192;
            GLsizei size;
            GLenum type;
            glGetActiveUniform(program, i, bufSize, &bufSize, &size, &type, GL_OUTPUT_BUFFER);
            string name = string(GL_OUTPUT_BUFFER, bufSize);
            GLint location = glGetUniformLocation(program, name.c_str());
            uniforms[name] = location;
            cout << "Found uniform " << name << " at location " << location << endl;
        }
    }

    GLint getUniformLocation(const string & uniform) const {
        auto itr = uniforms.find(uniform);
        if (uniforms.end() != itr) {
            return itr->second;
        }
        return -1;
    }

    GLint getAttributeLocation(const string & attribute) const {
        Map::const_iterator itr = attributes.find(attribute);
        if (attributes.end() != itr) {
            return itr->second;
        }
        return -1;
    }

    void uniformMat4(const string & uniform, const glm::mat4 & mat) const {
        glUniformMatrix4fv(getUniformLocation(uniform), 1, GL_FALSE, glm::value_ptr(mat));
    }

    void uniform4f(const string & uniform, float a, float b, float c, float d) const {
        glUniform4f(getUniformLocation(uniform), a, b, c, d);
    }

    void uniform4f(const string & uniform, const float * fv) const {
        uniform4f(uniform, fv[0], fv[1], fv[2], fv[3]);
    }

    void uniform2f(const string & uniform, float a, float b) const {
        glUniform2f(getUniformLocation(uniform), a, b);
    }

    void uniform2f(const string & uniform, const glm::vec2 & vec) const {
        uniform2f(uniform, vec.x, vec.y);
    }
};

This being an example program, we want to check liberally for OpenGL errors.  The assert ensures that my program will break into the debugger as soon as the first error is encountered.

void checkGlError() {
    GLenum error = glGetError();
    if (error != 0) {
        cerr << error << endl;
    }
    assert(error == 0);
}

The Application Framework

GLFW appears to be a spiritual successor to GLUT.  It's not likely to be used to build actual production quality software, but it's an excellent tool to let you stop fighting with the OS UI routines for a while so you can get to the real work of fighting with the OpenGL and GLSL routines.  No one wants to spend a day trying to figure out how to write a simple application to launch an OpenGL window when they're trying to figure out why their super awesome lighting shader produces incorrect shadows on Tuesdays and Thursdays.

However, like GLUT, GLFW is strictly a C API, so typically in a C++ application it's worth writing a small class to encapsulate its use.

class glfwApp {
    GLFWwindow * window;

public:
    glfwApp() : window(nullptr) {
        // Route GLFW errors through the callback defined below
        glfwSetErrorCallback(glfwErrorCallback);
        // Initialize the GLFW system for creating and positioning windows
        if( !glfwInit() ) {
            cerr << "Failed to initialize GLFW" << endl;
            exit( EXIT_FAILURE );
        }
    }

    virtual void createWindow(int w, int h, int x = 0, int y = 0) {
        glfwWindowHint(GLFW_DEPTH_BITS, 16);
        window = glfwCreateWindow(w, h, "glfw", NULL, NULL);
        assert(window != 0);
        glfwSetWindowUserPointer(window, this);
        glfwSetWindowPos(window, x, y);
        glfwSetKeyCallback(window, glfwKeyCallback);
        glfwMakeContextCurrent(window);

        // Initialize the OpenGL 3.x bindings
#ifdef WIN32
        if (0 != gl3wInit()) {
            cerr << "Failed to initialize GL3W" << endl;
            exit( EXIT_FAILURE );
        }
#endif
    }

    virtual ~glfwApp() {
        glfwDestroyWindow(window);
        glfwTerminate();
    }

    virtual int run() {
        while (!glfwWindowShouldClose(window)) {
            glfwPollEvents();
            update();
            draw();
            glfwSwapBuffers(window);
        }
        return 0;
    }

    virtual void onKey(int key, int scancode, int action, int mods) = 0;
    virtual void draw() = 0;
    virtual void update() = 0;

    static void glfwKeyCallback(GLFWwindow* window, int key, int scancode, int action, int mods) {
        glfwApp * instance = (glfwApp *)glfwGetWindowUserPointer(window);
        instance->onKey(key, scancode, action, mods);
    }

    static void glfwErrorCallback(int error, const char* description) {
        cerr << description << endl;
        exit( EXIT_FAILURE );
    }
};

You may notice the gl3wInit() call embedded in there.  Initially I had this at the start of the program, but I discovered that if you call that method before creating an OpenGL context, it will only find the methods in the Windows software implementation of OpenGL, i.e. only 1.x.  This means your code will crash on a null function pointer as soon as you try to call one of the OpenGL 2.x methods, like glCreateShader().

The Real Program

Now that we have our framework, we can build the actual application code on top of it.  

class Example00 : public glfwApp {
    enum Mode {
        MONO, STEREO, STEREO_DISTORT
    };
    glm::mat4 projection;
    glm::mat4 modelview;
    Ptr<SensorDevice> ovrSensor;
    SensorFusion sensorFusion;
    StereoConfig stereoConfig;

    // Provides the resolution and location of the Rift
    HMDInfo hmdInfo;
    // Calculated width and height of the per-eye rendering area used
    int eyeWidth, eyeHeight;
    // Calculated width and height of the frame buffer object used to contain
    // intermediate results for the multipass render
    int fboWidth, fboHeight;

    Mode renderMode;
    bool useTracker;
    long elapsed;

    GLuint cubeVertexBuffer;
    GLuint cubeIndexBuffer;
    GLuint cubeWireIndexBuffer;

    GLuint quadVertexBuffer;
    GLuint quadIndexBuffer;

    GLprogram renderProgram;
    GLprogram textureProgram;
    GLprogram distortProgram;

    GLuint frameBuffer;
    GLuint frameBufferTexture;
    GLuint depthBuffer;

We allocate space for a modelview and projection matrix, a number of Oculus SDK types, some intermediate size calculations, a few user variables, and the requisite ton of opaque OpenGL handles.

    Example00() : renderMode(MONO), useTracker(false), elapsed(0),
        cubeVertexBuffer(0), cubeIndexBuffer(0), cubeWireIndexBuffer(0), quadVertexBuffer(0), quadIndexBuffer(0),
        frameBuffer(0), frameBufferTexture(0), depthBuffer(0)
    {
        // do the master initialization for the Oculus VR SDK
        System::Init();

We use the Rift hardware information in the Rift distortion shader. If the Rift isn't present we still want to behave in a sane manner, so we initialize the hmdInfo structure with values it would likely get from a Development Kit. This way if there's no Rift detected, we can still render properly.

        hmdInfo.HResolution = 1280;
        hmdInfo.VResolution = 800;
        hmdInfo.HScreenSize = 0.149759993f;
        hmdInfo.VScreenSize = 0.0935999975f;
        hmdInfo.VScreenCenter = 0.0467999987f;
        hmdInfo.EyeToScreenDistance    = 0.0410000011f;
        hmdInfo.LensSeparationDistance = 0.0635000020f;
        hmdInfo.InterpupillaryDistance = 0.0640000030f;
        hmdInfo.DistortionK[0] = 1.00000000f;
        hmdInfo.DistortionK[1] = 0.219999999f;
        hmdInfo.DistortionK[2] = 0.239999995f;
        hmdInfo.DistortionK[3] = 0.000000000f;
        hmdInfo.ChromaAbCorrection[0] = 0.995999992f;
        hmdInfo.ChromaAbCorrection[1] = -0.00400000019f;
        hmdInfo.ChromaAbCorrection[2] = 1.01400006f;
        hmdInfo.ChromaAbCorrection[3] = 0.000000000f;
        hmdInfo.DesktopX = 0;
        hmdInfo.DesktopY = 0;

Pretty standard code for initializing the Rift. It's simpler than it might otherwise be because we're not accounting for edge cases like a Rift device being connected or disconnected after program launch, or the desire to change what's handling the tracker messages once the program is running.

        Ptr<DeviceManager> ovrManager = *DeviceManager::Create();
        if (ovrManager) {
            ovrSensor = *ovrManager->EnumerateDevices<SensorDevice>().CreateDevice();
            if (ovrSensor) {
                useTracker = true;
                sensorFusion.AttachToSensor(ovrSensor);
            }
            Ptr<HMDDevice> ovrHmd = *ovrManager->EnumerateDevices<HMDDevice>().CreateDevice();
            if (ovrHmd) {
                ovrHmd->GetDeviceInfo(&hmdInfo);
            }
            // The HMDInfo structure contains everything we need for now, so no
            // need to keep the device handle around
        }
        // The device manager is reference counted and will be released automatically
        // when our sensorObject is destroyed.
        stereoConfig.SetHMDInfo(hmdInfo);

        // Create OpenGL window context
        createWindow(hmdInfo.HResolution, hmdInfo.VResolution, hmdInfo.DesktopX, hmdInfo.DesktopY);

        // Init OpenGL

        // Enable the zbuffer test
        glEnable(GL_DEPTH_TEST);
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);

        glGenBuffers(1, &cubeVertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, cubeVertexBuffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(CUBE_VERTEX_DATA), CUBE_VERTEX_DATA, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        glGenBuffers(1, &cubeIndexBuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cubeIndexBuffer);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(CUBE_INDICES),
                CUBE_INDICES, GL_STATIC_DRAW);

        glGenBuffers(1, &cubeWireIndexBuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cubeWireIndexBuffer);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                sizeof(GLuint) * EDGE_COUNT * VERTICES_PER_EDGE,
                CUBE_WIRE_INDICES, GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

Unlike all the other buffers, we don't pre-load the information in quadVertexBuffer below. That's because what goes into it depends on the current rendering mode, i.e. whether we are using distortion or not.
        glGenBuffers(1, &quadVertexBuffer);
        glGenBuffers(1, &quadIndexBuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIndexBuffer);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * 6, QUAD_INDICES, GL_STATIC_DRAW);

        eyeWidth = hmdInfo.HResolution / 2;
        eyeHeight = hmdInfo.VResolution;
        fboWidth = eyeWidth * FRAMEBUFFER_OBJECT_SCALE;
        fboHeight = eyeHeight * FRAMEBUFFER_OBJECT_SCALE;

        glGenFramebuffers(1, &frameBuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

        glGenTextures(1, &frameBufferTexture);
        glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
        // Allocate space for the texture
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, fboWidth, fboHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, frameBufferTexture, 0);
        glGenRenderbuffers(1, &depthBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, fboWidth, fboHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        // Create the rendering shaders
        renderProgram.open("Simple");
        textureProgram.open("Texture");
        distortProgram.open("Distort");

        modelview = glm::lookAt(CAMERA, ORIGIN, UP);
        projection = glm::perspective(60.0f, (float)hmdInfo.HResolution / (float)hmdInfo.VResolution, 0.1f, 100.f);
    }

    virtual ~Example00() {
    }

There are three ways in which we want to respond to user input (other than the head tracker). The first is to allow the user to reset the orientation stored in sensor fusion object using the R key. The second is to toggle between using the head tracker at all using the T key. Finally, we want the user to be able to cycle between non-stereo, side-by-side stereo, and Rift stereo, using the P key.
    virtual void onKey(int key, int scancode, int action, int mods) {
        if (GLFW_PRESS != action) {
            return;
        }
        switch (key) {
        case GLFW_KEY_R:
            sensorFusion.Reset();
            break;

        case GLFW_KEY_T:
            useTracker = !useTracker;
            break;

        case GLFW_KEY_P:
            renderMode = static_cast<Mode>((renderMode + 1) % 3);
            if (renderMode == MONO) {
                projection = glm::perspective(60.0f,
                        (float)hmdInfo.HResolution / (float)hmdInfo.VResolution, 0.1f, 100.f);
            } else if (renderMode == STEREO) {
                projection = glm::perspective(60.0f,
                        (float)hmdInfo.HResolution / 2.0f / (float)hmdInfo.VResolution, 0.1f, 100.f);
            } else if (renderMode == STEREO_DISTORT) {
                projection = glm::perspective(stereoConfig.GetYFOVDegrees(),
                        (float)hmdInfo.HResolution / 2.0f / (float)hmdInfo.VResolution, 0.1f, 100.f);
            }
            break;
        }
    }

Our update method either gets the latest orientation from the sensor tracker and positions the camera accordingly, or it applies a continuous rotation to the cube. The latter is done to ensure that even if no Rift is connected, or the Rift sensor isn't detected for some reason, the resulting rendered display isn't simply a single static view of one face of the cube.
    virtual void update() {
        long now = millis();
        if (useTracker) {
            // For some reason building the quaternion directly from the OVR
            // x,y,z,w values does not work.  So instead we convert it into
            // euler angles and construct our glm::quaternion from those

            // Fetch the pitch roll and yaw out of the sensorFusion device
            glm::vec3 eulerAngles;
            sensorFusion.GetOrientation().GetEulerAngles<Axis_X, Axis_Y, Axis_Z, Rotate_CW, Handed_R>(
                &eulerAngles.x, &eulerAngles.y, &eulerAngles.z);

            // Now convert it into a GLM quaternion.
            glm::quat orientation = glm::quat(eulerAngles);

            // Most applications want to take a basic camera position and apply the
            // orientation transform to it in this way:
            // modelview = glm::mat4_cast(orientation) * glm::lookAt(CAMERA, ORIGIN, UP);

            // However for this demonstration we want the cube to remain
            // centered in the viewport, and orbit our view around it.  This
            // serves two purposes.
            // First, it's not possible to see a blank screen in the event
            // the HMD is oriented to point away from the origin of the scene.
            // Second, a scene that has no points of reference other than a
            // single small object can be disorienting, leaving the user
            // feeling lost in a void.  Having a fixed object in the center
            // of the screen that you appear to be moving around should
            // provide less immersion, which in this instance is better
            modelview = glm::lookAt(CAMERA, ORIGIN, UP) * glm::mat4_cast(orientation);
        } else {
            // In the absence of head tracker information, we want to slowly
            // rotate the cube so that the animation of the scene is apparent
            static const float Y_ROTATION_RATE = 0.01f;
            static const float Z_ROTATION_RATE = 0.05f;
            modelview = glm::lookAt(CAMERA, ORIGIN, UP);
            modelview = glm::rotate(modelview, elapsed * Y_ROTATION_RATE, Y_AXIS);
            modelview = glm::rotate(modelview, elapsed * Z_ROTATION_RATE, Z_AXIS);
        }
        elapsed = now;
    }

The draw function does all the meta-work of setting viewports and managing the frame buffer object (FBO) and non-scene shaders.
    virtual void draw() {
        if (renderMode == MONO) {
            // If we're not working stereo, we're just going to render the
            // scene once, from a single position, directly to the back buffer
            glViewport(0, 0, hmdInfo.HResolution, hmdInfo.VResolution);
            renderScene(projection, modelview);
        } else {
            // If we get here, we're rendering in stereo, so we have to render our output twice

            // We have to explicitly clear the screen here, since the clear
            // command doesn't respect the viewport, and the clear inside
            // renderScene will only target the active framebuffer object.
            glClearColor(0, 1, 0, 1);
            glClear(GL_COLOR_BUFFER_BIT);
            for (int i = 0; i < 2; ++i) {
                StereoEye eye = EYES[i];
                glBindTexture(GL_TEXTURE_2D, 0);
                glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
                glViewport(0, 0, fboWidth, fboHeight);

                // Compute the modelview and projection matrices for the rendered scene based on the eye and
                // whether or not we're doing side by side or rift rendering
                glm::vec3 eyeProjectionOffset(-stereoConfig.GetProjectionCenterOffset() / 2.0f, 0, 0);
                glm::vec3 eyeModelviewOffset = glm::vec3(-stereoConfig.GetIPD() / 2.0f, 0, 0);
                if (eye == StereoEye_Left) {
                    eyeModelviewOffset *= -1;
                    eyeProjectionOffset *= -1;
                }

                glm::mat4 eyeModelview = glm::translate(glm::mat4(), eyeModelviewOffset) * modelview;
                glm::mat4 eyeProjection = projection;
                if (renderMode == STEREO_DISTORT) {
                    eyeProjection = glm::translate(eyeProjection, eyeProjectionOffset);
                }
                renderScene(eyeProjection, eyeModelview);
                glBindFramebuffer(GL_FRAMEBUFFER, 0);

Note that as we set the viewport here, we actually shrink the covered space by a single pixel. This allows us to see the bright green that we've used above in glClearColor(). The presence or absence of this green border can alert a developer to problems with their rendering code in some cases.
                // Setup the viewport for the eye to which we're rendering
                glViewport(1 + (eye == StereoEye_Left ? 0 : eyeWidth), 1, eyeWidth - 2, eyeHeight - 2);

                GLprogram & program = (renderMode == STEREO_DISTORT) ? distortProgram : textureProgram;
                GLint positionLocation = program.getAttributeLocation("Position");
                assert(positionLocation > -1);
                GLint texCoordLocation = program.getAttributeLocation("TexCoord");
                assert(texCoordLocation > -1);

                float texL = 0, texR = 1, texT = 1, texB = 0;
                if (renderMode == STEREO_DISTORT) {
                    // Physical width of the viewport
                    static float eyeScreenWidth = hmdInfo.HScreenSize / 2.0f;
                    // The viewport goes from -1,1.  We want to get the offset
                    // of the lens from the center of the viewport, so we only
                    // want to look at the distance from 0, 1, so we divide in
                    // half again
                    static float halfEyeScreenWidth = eyeScreenWidth / 2.0f;

                    // The distance from the center of the display panel (NOT
                    // the center of the viewport) to the lens axis
                    static float lensDistanceFromScreenCenter = hmdInfo.LensSeparationDistance / 2.0f;

                    // Now we want to turn the measurement from
                    // meters into the range 0, 1
                    static float lensDistanceFromViewportEdge = lensDistanceFromScreenCenter / halfEyeScreenWidth;

                    // Finally, we want the distance from the center, not the
                    // distance from the edge, so subtract the value from 1
                    static float lensOffset = 1.0f - lensDistanceFromViewportEdge;
                    static glm::vec2 aspect(1.0, (float)eyeWidth / (float)eyeHeight);

                    glm::vec2 lensCenter(lensOffset, 0);

The version of the Distortion shader I use reflects earlier blog posts about doing less calculation in the fragment shader and more in the calling OpenGL context. In this instance we're pre-calculating the texture coordinates to place them into lens-space with uniform X and Y scaling. Every texture coordinate handled by the fragment shader will already be a positive or negative coordinate relative to the position on the screen directly under the center of the lens for that eye.
                    // Texture coordinates need to be in lens-space for the
                    // distort shader
                    texL = -1 - lensOffset;
                    texR = 1 - lensOffset;
                    texT = 1 / aspect.y;
                    texB = -1 / aspect.y;
                    // Flip the values for the right eye
                    if (eye != StereoEye_Left) {
                        swap(texL, texR);
                        texL *= -1;
                        texR *= -1;
                        lensCenter *= -1;
                    }
                    static glm::vec2 distortionScale(1.0f / stereoConfig.GetDistortionScale(),
                        1.0f / stereoConfig.GetDistortionScale());
                    program.uniform2f("LensCenter", lensCenter);
                    program.uniform2f("Aspect", aspect);
                    program.uniform2f("DistortionScale", distortionScale);
                    program.uniform4f("K", hmdInfo.DistortionK);
                }

                const GLfloat quadVertices[] = {
                    -1, -1, texL, texB,
                     1, -1, texR, texB,
                     1,  1, texR, texT,
                    -1,  1, texL, texT,
                };

                glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
                glBindBuffer(GL_ARRAY_BUFFER, quadVertexBuffer);
                glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 2 * 2 * 4, quadVertices, GL_DYNAMIC_DRAW);

                int stride = sizeof(GLfloat) * 2 * 2;
                glVertexAttribPointer(positionLocation, 2, GL_FLOAT, GL_FALSE, stride, 0);
                glVertexAttribPointer(texCoordLocation, 2, GL_FLOAT, GL_FALSE, stride, (GLvoid*)(sizeof(GLfloat) * 2));

                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIndexBuffer);
                glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (GLvoid*)0);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

                glBindBuffer(GL_ARRAY_BUFFER, 0);
            } // for
        } // if

Here we've encapsulated scene rendering. As long as we take care to apply the projection and modelview matrices as appropriate and avoid messing with the viewport or the FBO, we can do pretty much whatever we want and the result will be a properly rendered stereo scene. For this sample all we do is render the 6 faces of the cube and then render a wire frame outline.
    virtual void renderScene(const glm::mat4 & projection, const glm::mat4 & modelview) {
        // Clear the buffer
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Configure the GL pipeline for rendering our geometry
        renderProgram.use();
        // Load the projection and modelview matrices into the program
        renderProgram.uniformMat4("Projection", projection);
        renderProgram.uniformMat4("ModelView", modelview);

        // Load up our cube geometry (vertices and indices)
        glBindBuffer(GL_ARRAY_BUFFER, cubeVertexBuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cubeIndexBuffer);

        // Bind the vertex data to the program
        GLint positionLocation = renderProgram.getAttributeLocation("Position");
        GLint colorLocation = renderProgram.getUniformLocation("Color");

        glVertexAttribPointer(positionLocation, 3, GL_FLOAT, GL_FALSE, 12, (GLvoid*)0);

        // Draw the cube faces, two calls for each face in order to set the color and then draw the geometry
        for (uintptr_t i = 0; i < FACE_COUNT; ++i) {
            renderProgram.uniform4f("Color", CUBE_FACE_COLORS + (i * 4));
            glDrawElements(GL_TRIANGLES, TRIANGLES_PER_FACE * VERTICES_PER_TRIANGLE, GL_UNSIGNED_INT, (void*)(i * 6 * 4));
        }

        // Now scale the modelview matrix slightly, so we can draw the cube outline
        glm::mat4 scaledCamera = glm::scale(modelview, glm::vec3(1.01f));
        renderProgram.uniformMat4("ModelView", scaledCamera);

        // Drawing a white wireframe around the cube
        glUniform4f(colorLocation, 1, 1, 1, 1);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cubeWireIndexBuffer);
        glDrawElements(GL_LINES, EDGE_COUNT * VERTICES_PER_EDGE, GL_UNSIGNED_INT, (void*)0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }


Here we have our last bit of preprocessor business. The main function prototype on Windows is drastically different from that on other platforms. In addition, since we don't have a resource loading mechanism on Linux, the previously mentioned calculation of the executable's location comes in at this point. Actually, the name executableDirectory is a slight misnomer, since CMake puts the files in a Resources subdirectory, perhaps in an attempt to emulate OSX-bundle-like behavior.
#ifdef WIN32
    int WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) {
#else
    int main(int argc, char ** argv) {

    // Windows and Apple both support embedded application resources.  For
    // Linux we're going to try to locate the shaders relative to the
    // executable
    #ifndef __APPLE__
        string executable(argv[0]);
        string::size_type sep = executable.rfind('/');
        if (sep != string::npos) {
            executableDirectory = executable.substr(0, sep) + "/Resources";
        }
    #endif
#endif

    return Example00().run();
    }

I have no doubt there's more I can say about this example application, but it's super late and I have a meeting tomorrow. The complete application is available on Github at


  1. Great article!

    Just one question:

    "In other instances I've written my shader wrapper objects to automatically detect when a shader file has changed and reload it (or report and error and continue using the last working version if it fails to compile or link)."

    Do you happen to have a link for that? Would be interesting to see...

    Also, I take it that the "almost in one file" no longer applies to the code that is on Github -- or is the above one-file approach one of the examples?

  2. My C++ wrapper code for shaders is located here:

    It's pretty primitive though as it only supports vertex and fragment shaders, with no way of attaching geometry and tessellation shaders.

    The code that's responsible for reloading the shaders when they've changed on disk is here: and relies on my toolset's concept of a resource (which might be baked into an Apple resource fork, or in a Win32 resource, or in a file on disk, etc). The biggest problem with it is that I haven't written code to preserve the uniforms that have been set on a program, meaning that if you use the functionality, you have to make sure you set all the required uniforms before using the program. You can't simply set some at the beginning of the application and rely on them staying set.

    The current code on Github is heavily broken down into components, but the original 'all in one' code is still visible here:

    I should probably update this post, or produce a new one that covers the changes I've made over time.

  3. Oh that's already extremely helpful as it is, thanks a lot!