Per some comments on
this thread in the Oculus developer forums, I've made some small changes to the OpenGL example I
previously discussed.
First, I've moved from GL3W to GLEW. Both libraries serve the same purpose: allowing full access to the OpenGL API on a given machine by programmatically loading all the method calls by name out of the OpenGL library. However, GL3W exposes only the core profile functionality of GL 3.x and higher, leaving out deprecated functionality such as the glBegin()/glEnd() mechanism of drawing and the OpenGL 1.x matrix stack manipulation methods. Since I want my example to be useful to people who are still working primarily with the OpenGL 1.x stack, or who are working with legacy code, moving to GLEW makes sense. I've also modified the code so that it uses GLEW on all the target platforms, rather than only on Win32. This should improve stability and make it easier to support platforms I haven't yet targeted, such as MinGW.
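If you haven't used GLEW before, initialization is a single call made after the OpenGL context has been created (by GLFW, SDL, or whatever windowing layer you use) and before any GL calls. A minimal sketch, where the initGlew() helper name is mine rather than part of the example code:

#include <stdexcept>
#include <GL/glew.h>

// Call once, after context creation and before any other GL calls.
void initGlew() {
    glewExperimental = GL_TRUE;  // required to load entry points on core-profile contexts
    GLenum err = glewInit();
    if (GLEW_OK != err) {
        // glewGetErrorString() describes what went wrong
        throw std::runtime_error(
            reinterpret_cast<const char*>(glewGetErrorString(err)));
    }
}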
Additionally, I've modified the way the example calls the renderScene() method. Previously I passed two entire matrices into the method, one for projection and one for modelview. However, this doesn't make sense in terms of what I'm trying to demonstrate, because it presumes that the code calling renderScene() is authoritative about the camera. If this code were developed as a wrapper around some existing code, that wouldn't be a reasonable assumption. So instead of the matrices, the new version passes the translation vectors that should be applied to them. This allows the renderScene() method to apply them directly, either as I have done here, applying them to its existing matrices, or alternatively via a set of glMatrixMode()/glTranslatef() calls if the OpenGL 1.x style fixed-function pipeline is in use.
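The two approaches look roughly like the sketch below. The helper names are purely illustrative, and the exact multiplication order depends on your own matrix conventions:

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Modern path: fold the offsets into whatever matrices the scene already owns.
void applyOffsets(glm::mat4 & projection, glm::mat4 & modelview,
        const glm::vec3 & projectionOffset, const glm::vec3 & modelviewOffset) {
    projection = glm::translate(glm::mat4(1.0f), projectionOffset) * projection;
    modelview  = glm::translate(glm::mat4(1.0f), modelviewOffset)  * modelview;
}

// Legacy path: the same offsets via the OpenGL 1.x fixed-function matrix stack.
// Calling glTranslatef() before the usual gluPerspective()/camera setup applies
// the translation in front of the scene's own matrices, matching the glm version.
void applyOffsetsFixedFunction(
        const glm::vec3 & projectionOffset, const glm::vec3 & modelviewOffset) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glTranslatef(projectionOffset.x, projectionOffset.y, projectionOffset.z);
    // ... gluPerspective() or glFrustum() here ...
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(modelviewOffset.x, modelviewOffset.y, modelviewOffset.z);
    // ... camera transform and glBegin()/glEnd() drawing as before ...
}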
Note that this example still isn't ideal. While the renderScene() code is free to manipulate its own camera, the projection matrix isn't afforded the same freedom. To get proper rendering, the function must set the projection matrix to the mandated values, because that's what the calling code expects when it renders the resulting distorted image. There are a number of approaches that could correct this. For instance, the renderScene() method could be declared like this:
virtual void renderScene(
    const float recommendedFov,
    const float recommendedAspect,
    const glm::vec3 & projectionOffset,
    const glm::vec3 & modelviewOffset,
    float & actualFov,
    float & actualAspect
) ...
In this way the calling code can tell the rendering code what field of view and aspect ratio should be used, while providing a mechanism for the rendering code to report back the values it actually used. This would require changes in the caller to adjust the vertices of the texture appropriately. I'll revisit this in a future example showing the relationship between the FoV and aspect ratio and the resulting image.
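A hypothetical implementation of that signature might look something like this. The 90 degree cap and the clipping planes are invented purely for illustration, and this vintage of GLM takes the FoV in degrees:

virtual void renderScene(
    const float recommendedFov,
    const float recommendedAspect,
    const glm::vec3 & projectionOffset,
    const glm::vec3 & modelviewOffset,
    float & actualFov,
    float & actualAspect
) {
    // Suppose this scene's camera refuses to go wider than 90 degrees...
    actualFov = (recommendedFov > 90.0f) ? 90.0f : recommendedFov;
    // ...but is happy to use whatever aspect ratio the caller recommends.
    actualAspect = recommendedAspect;
    glm::mat4 projection =
        glm::translate(glm::mat4(1.0f), projectionOffset) *
        glm::perspective(actualFov, actualAspect, 0.01f, 1000.0f);
    // ... apply modelviewOffset to the camera and draw as usual ...
}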
The last change to this program is a modification to the destructor, releasing the sensor object and detaching the sensor fusion object from it. This allows the reference count to fall to zero, and ensures that the application actually exits when you tell it to, rather than freezing.
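For reference, the cleanup looks something like the sketch below. I'm assuming the OVR::Ptr<OVR::SensorDevice> and OVR::SensorFusion members from the earlier examples, and the class name here is invented:

virtual ~RiftExample() {
    // Detach the fusion object first, so it drops its reference to the sensor...
    sensorFusion.AttachToSensor(NULL);
    // ...then release our own reference, letting the count fall to zero so the
    // device is actually destroyed.
    pSensor.Clear();
}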