Wednesday, October 16, 2013

VR Usability - Where to Look for Guidance

Brad's post on cut scenes and borrowing from cinematography for VR got me thinking about VR usability. When designing a VR experience, many of the standard usability checklist questions still apply:
  • Are users able to access the information they need?
  • Are users able to make progress?
  • Do the navigation mechanics feel natural?
  • Does accessing information or making progress take the user out of the experience? That is, are users able to concentrate on their goal and not on how to use the software?
  • Does the experience create an appropriate emotional response? 
Along with the more standard questions, for VR, you also need to add the literal question:
  • Does the experience make the user want to vomit?
The techniques and conventions for achieving the desired usability results are uncharted territory. So, without a map or other guidelines, how do you start to address these issues? One way to start is by borrowing from other mediums. Film and cinematography are one source. However, as Brad explained in another post, mapping film conventions 1:1 to VR does not work, as some conventions, such as cut scenes and zoom, are simply too disorienting in an immersive environment. So while borrowing conventions is a good start, what we really need to do to build a map is experiment, iterate, and share what we know.

With that in mind, here is a roundup of some current resources:

Monday, October 14, 2013

Understanding Matrix Transformations for Rendering to the Oculus Rift

If you look at the examples in the Oculus SDK or in my example, you may notice that in the normal course of rendering, there are modifications to both the model-view matrix and the projection matrix.  Each matrix is translated on the X axis by a small amount, but it's not immediately obvious why.  This article will attempt to make the issue a little clearer by going into the purpose and effect of each translation.
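To make that concrete, here is a minimal sketch (using GLM; the function names and the eyeSign/ipd parameters are mine, not from the SDK) of how the two per-eye translations might be applied: half the interpupillary distance on the modelview matrix to separate the two virtual cameras, and a horizontal lens-center offset on the projection matrix:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// eyeSign is -1 for one eye and +1 for the other; ipd and projectionOffset
// would normally come from the SDK's stereo configuration.
glm::mat4 applyEyeModelview(const glm::mat4 & modelview, float eyeSign, float ipd) {
    // Offset each virtual camera by half the interpupillary distance
    return glm::translate(glm::mat4(1.0f), glm::vec3(eyeSign * ipd / 2.0f, 0.0f, 0.0f)) * modelview;
}

glm::mat4 applyEyeProjection(const glm::mat4 & projection, float eyeSign, float projectionOffset) {
    // Shift the projection center horizontally toward the lens axis for this eye
    return glm::translate(glm::mat4(1.0f), glm::vec3(eyeSign * projectionOffset, 0.0f, 0.0f)) * projection;
}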

Thursday, October 10, 2013

OpenGL Example Updates

Per some comments on this thread in the Oculus developer forums, I've made some small changes to the OpenGL example I previously discussed.

First, I've moved from GL3W to GLEW.  Both of these libraries serve the same function: allowing full access to the OpenGL API on the machine by programmatically loading all the method calls by name out of the OpenGL library.  However, GL3W only exposes the core profile functionality of GL 3.x and higher, leaving out all the deprecated functionality such as the glBegin()/glEnd() mechanism of drawing and the OpenGL 1.x matrix stack manipulation methods.  Since I want my example to be useful to people who are still working primarily with the OpenGL 1.x stack, or are working with legacy code, moving to GLEW makes sense.  I've also modified the code so that it uses GLEW on all the target platforms, rather than only on Win32.  This should hopefully improve stability and make it easier to target platforms I haven't yet, such as MinGW.
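For reference, the GLEW setup itself is small.  Here is a minimal sketch (the function name is mine, not from the example) of the initialization that has to happen once an OpenGL context has been created and made current:

#include <GL/glew.h>
#include <stdexcept>

void initGlLoader() {
    glewExperimental = GL_TRUE;   // required to pick up core profile entry points on some drivers
    if (GLEW_OK != glewInit()) {
        throw std::runtime_error("glewInit() failed");
    }
    glGetError();                 // glewInit() can leave a spurious GL error behind; clear it
}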

Additionally, I've modified the way the example calls the 'renderScene()' method.  Previously I was passing two entire matrices into the method, for projection and modelview.  However, this doesn't make sense in terms of what I'm trying to demonstrate, as it presumes that the code calling renderScene() is authoritative about the camera.  If this code were to be developed as a wrapper around some existing code, that's not a reasonable assumption to make.  So instead the new version passes not the matrices, but rather the translation vectors that should be applied to the matrices.  This allows the renderScene() method to apply them directly, either as I have done here, applying them to the existing matrices, or alternatively, they could be applied via a set of glMatrixMode()/glTranslate() calls if the OpenGL 1.x style fixed-function pipeline is being used.
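As a rough illustration (the function signatures and matrix bookkeeping here are mine, not lifted from the example), the two styles of consuming those offset vectors inside renderScene() might look like this:

#include <GL/glew.h>
#include <GL/glu.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Modern style: fold the offsets into matrices the scene code already maintains.
void renderSceneModern(const glm::vec3 & projectionOffset, const glm::vec3 & modelviewOffset,
                       const glm::mat4 & sceneProjection, const glm::mat4 & sceneModelview) {
    glm::mat4 projection = glm::translate(glm::mat4(1.0f), projectionOffset) * sceneProjection;
    glm::mat4 modelview = glm::translate(glm::mat4(1.0f), modelviewOffset) * sceneModelview;
    // ... upload projection / modelview as shader uniforms and draw ...
}

// Legacy style: apply the same offsets through the fixed-function matrix stack.
void renderSceneFixedFunction(const glm::vec3 & projectionOffset, const glm::vec3 & modelviewOffset,
                              float fovY, float aspect) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glTranslatef(projectionOffset.x, projectionOffset.y, projectionOffset.z);
    gluPerspective(fovY, aspect, 0.01, 1000.0);   // the perspective matrix is multiplied in after the offset

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(modelviewOffset.x, modelviewOffset.y, modelviewOffset.z);
    // ... camera transform and draw calls follow ...
}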

Note that this example still isn't ideal.  While the renderScene() call is allowed to manipulate its own camera, the projection matrix isn't really subject to that much freedom.  In order to get proper rendering, the function must set the projection matrix to the mandated values, because that's what the calling code expects when it renders the resulting distorted image.  There are a number of approaches that could correct this.  For instance, the renderScene() method could be declared like this:

virtual void renderScene(
    const float recommendedFov,          // field of view the caller would like the scene to use
    const float recommendedAspect,       // aspect ratio the caller would like the scene to use
    const glm::vec3 & projectionOffset,
    const glm::vec3 & modelviewOffset,
    float & actualFov,                   // out: field of view the scene actually rendered with
    float & actualAspect                 // out: aspect ratio the scene actually rendered with
) ...
In this way the calling code could tell the rendering code what field of view and aspect ratio should be used, while at the same time providing a mechanism for the rendering code to respond with the values it actually used.  This would require changes in the caller to adjust the vertices of the texture appropriately.  This will be revisited in a future example showing the relationship between the FoV and aspect ratio and the resulting image.

The last change to this program is a modification to the destructor, releasing the sensor object and detaching the sensor fusion object from it.  This allows the reference count to fall to zero and ensures that the application actually exits when you tell it to, rather than freezing.
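For reference, the teardown amounts to something like the following minimal sketch, assuming the 2013-era LibOVR C++ types (OVR::SensorFusion and OVR::Ptr<OVR::SensorDevice>); the class and member names are illustrative, not the ones from the example:

#include <OVR.h>

class RiftExample {
    OVR::Ptr<OVR::SensorDevice> ovrSensor;
    OVR::SensorFusion sensorFusion;
public:
    ~RiftExample() {
        // Detach the fusion object so it drops its reference to the sensor...
        sensorFusion.AttachToSensor(NULL);
        // ...then release our own reference, letting the count fall to zero so
        // the SDK shuts the device down and the process can actually exit.
        ovrSensor.Clear();
    }
};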



Wednesday, October 9, 2013

The Challenges of Adapting Cinematography to VR

There was a question today on the Oculus VR Developer Forums about cut scenes in video games and how to approach them in VR.  Essentially, the original poster was asking how to address the conflict between cut scenes, which typically follow the language of cinema, offering frequent cuts from one point of view to another, and VR, where 'cuts' can be disorienting to the viewer.

Here's my response, only lightly edited:

This is an interesting topic and is part of understanding the intersection of VR and cinematography as a whole. Part of cinematography is the manipulation of the frame, the field of view, the depth of field, the lighting, etc., in order to convey an emotion or idea.

However, the language of cinema doesn't necessarily translate 1:1 to VR. Sudden transitions of the viewpoint in film, or 'cuts', are so common that it's considered notable when you go for long periods without them. But as you say, they can be extremely disconcerting when done in VR, where you've increased the level of immersion specifically to make the viewer feel present in the scene rather than simply a spectator. This means that common devices like the 'Angle / Reverse Angle' sequence used to alternate between closeups of two people as they converse don't really work in VR.

The problem is, how do you convey certain non-verbal ideas without stating them outright? Cinema itself had this very problem, because it was a medium that grew out of the effect of disruptive technology on a prior storytelling medium, the stage, which in turn had to deal with the same problems.