Saturday, November 23, 2013

Setting up the Rift on a MacBook Pro

Wrote this up for someone else, so thought I would post it here too. This was my experience setting up the Rift on a MacBook Pro running OS X 10.9.

For connecting the Rift, I simply followed the instructions that came with it: I connected the Rift power box to my computer using both the USB cable and the HDMI cable included with the Rift, and then plugged in the Rift power cable. The only issue I had here was that I needed a converter (HDMI to Mini DisplayPort) that was not included in the Rift development kit. It is easy to forget to connect one of the cables (I've done so more than once), so if you have any issues running the Rift, check all of the connections first.

For the display setup, Oculus recommends using the Rift as an extended monitor in most cases, but I found that did not work for me. The primary display defaulted to the Rift, which meant my menus, windows, and mouse were all on the Rift portion of the display. Because looking through the Rift shows two overlapping images of the desktop, actually selecting anything located there was nigh impossible. I was never able to get any demos to run properly on the Rift display.

The other option is mirroring, which is what I chose to do. There are two downsides to mirroring:
  1. Performance can suffer; specifically, screen tearing can occur. Tearing on the Rift can be very distracting, and it is the reason Oculus recommends extended displays. So far this has not been a serious issue for me, but I expect I may need to revisit it in the future.
  2. Optimizing the resolution for the Rift display (1280 x 800) doesn't leave a lot of screen real estate for doing work. To simplify switching between the resolutions for the built-in display and the Rift display, I made sure to check “Show mirroring options in the menu bar when available” in the Displays panel of System Preferences. That way, I can easily switch back and forth using the displays menu now located at the top right of the menu bar.
Using a single monitor isn't ideal for Rift development, as it doesn't let you see your development project and the results at the same time. But my current system doesn't have triple-monitor support, so to add another monitor I would need to purchase an HDMI splitter. HDMI splitters have their own issues, though, in addition to the expense of buying a good one. For the time being, I will stick with mirroring.

Wednesday, October 16, 2013

VR Usability - Where to Look for Guidance

Brad's post on cut scenes and borrowing from cinematography for VR got me thinking about VR usability. When designing a VR experience, many of the standard usability checklist questions still apply:
  • Are users able to access the information they need?
  • Are users able to make progress?
  • Do the navigation mechanics feel natural?
  • Does accessing information or making progress take the user out of the experience? That is, are users able to concentrate on their goal and not on how to use the software?
  • Does the experience create an appropriate emotional response? 
Along with the more standard questions, for VR, you also need to add the literal question:
  • Does the experience make the user want to vomit?
The techniques and conventions used to achieve the desired usability results are largely uncharted territory. So, without a map or other guidelines, how do you start to address these issues? One way to start is by borrowing from other mediums. Film and cinematography are one source. However, as Brad explained in another post, mapping film conventions 1:1 to VR does not work, as some conventions, such as cut scenes and zoom, are simply too disorienting in an immersive environment. So while borrowing conventions is a good start, what we really need to do to build a map is experiment, iterate, and share what we know.

With that in mind, here is a roundup of some current resources:

Monday, October 14, 2013

Understanding Matrix Transformations for Rendering to the Oculus Rift

If you look at the examples in the Oculus SDK or in my example, you may notice that in the normal course of rendering, there are modifications to both the modelview matrix and the projection matrix. Each matrix is translated on the X axis by a small amount, but it's not immediately obvious why. This article will attempt to make the issue a little clearer by going into the purpose and effect of each translation.
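
As a quick preview, here is a minimal sketch of those two translations using GLM. The function name and parameters are illustrative rather than taken from the SDK, and the exact offset values and sign conventions depend on your coordinate setup; the rest of the article covers where the numbers come from.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: derive the per-eye matrices from a shared base projection/modelview.
// All names and values here are placeholders for illustration.
void applyEyeOffsets(bool leftEye,
                     const glm::mat4 & baseProjection,
                     const glm::mat4 & baseModelview,
                     float projectionOffsetX,  // from the lens center vs. half-screen center
                     float ipd,                // interpupillary distance, in meters
                     glm::mat4 & outProjection,
                     glm::mat4 & outModelview) {
    float sign = leftEye ? 1.0f : -1.0f;
    // The projection matrix is shifted on X so the center of projection lines up
    // with the lens axis rather than the center of the per-eye half of the screen.
    outProjection = glm::translate(glm::mat4(1.0f),
        glm::vec3(sign * projectionOffsetX, 0.0f, 0.0f)) * baseProjection;
    // The modelview matrix is shifted on X by half the IPD so that each eye
    // views the scene from its own position.
    outModelview = glm::translate(glm::mat4(1.0f),
        glm::vec3(sign * ipd / 2.0f, 0.0f, 0.0f)) * baseModelview;
}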

Thursday, October 10, 2013

OpenGL Example Updates

Per some comments on this thread in the Oculus developer forums, I've made some small changes to the OpenGL example I previously discussed.

First, I've moved from GL3W to GLEW.  Both of these libraries serve the same function: allowing full access to the OpenGL API on the machine by programmatically loading all of the method calls by name out of the OpenGL library.  However, GL3W only exposes the core profile functionality of GL 3.x and higher, leaving out all of the deprecated functionality such as the glBegin()/glEnd() mechanism of drawing and the OpenGL 1.x matrix stack manipulation methods.  Since I want my example to be useful to people who are still working primarily with the OpenGL 1.x stack, or are working with legacy code, moving to GLEW makes sense.  I've also modified the code so that it uses GLEW on all the target platforms, rather than only on Win32.  This should improve stability and make it easier to target platforms I haven't yet, such as MinGW.
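
For reference, GLEW initialization is a single call made once an OpenGL context has been created and made current; a minimal version looks roughly like this (the error handling is just a sketch):

#include <GL/glew.h>
#include <stdexcept>

// Call once, after the OpenGL context exists and is current on this thread.
void initGlew() {
    glewExperimental = GL_TRUE;  // required on some drivers when using a core profile
    GLenum err = glewInit();
    if (GLEW_OK != err) {
        throw std::runtime_error(
            reinterpret_cast<const char *>(glewGetErrorString(err)));
    }
}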

Additionally, I've modified the way the example calls the renderScene() method.  Previously I was passing two entire matrices into the method, one for projection and one for modelview.  However, this doesn't make sense in terms of what I'm trying to demonstrate, as it presumes that the code calling renderScene() is authoritative about the camera.  If this code were to be developed as a wrapper around some existing code, that's not a reasonable assumption to make.  So instead, the new version passes not the matrices but the translation vectors that should be applied to them.  The renderScene() method can then apply them directly, either as I have done here, applying them to its existing matrices, or via a set of glMatrixMode()/glTranslate() calls if the OpenGL 1.x style fixed-function pipeline is being used.
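
To illustrate the idea, here is a rough sketch of how an implementation of renderScene() might consume those offset vectors, shown both for a GLM-based shader pipeline and for the fixed-function pipeline. The function and variable names are placeholders, not the actual code from the example.

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Matrices the wrapped rendering code already owns (placeholders).
static glm::mat4 projection(1.0f);
static glm::mat4 modelview(1.0f);

// Shader-pipeline style: fold the offsets into the existing GLM matrices.
void renderSceneModern(const glm::vec3 & projectionOffset,
                       const glm::vec3 & modelviewOffset) {
    projection = glm::translate(glm::mat4(1.0f), projectionOffset) * projection;
    modelview  = glm::translate(glm::mat4(1.0f), modelviewOffset) * modelview;
    // ... upload the matrices as uniforms and draw ...
}

// OpenGL 1.x fixed-function style: apply the same offsets to the matrix stacks.
// The multiplication order must match however the surrounding code builds its matrices.
void renderSceneLegacy(const glm::vec3 & projectionOffset,
                       const glm::vec3 & modelviewOffset) {
    glMatrixMode(GL_PROJECTION);
    glTranslatef(projectionOffset.x, projectionOffset.y, projectionOffset.z);
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(modelviewOffset.x, modelviewOffset.y, modelviewOffset.z);
    // ... draw using the fixed-function pipeline ...
}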

Note that this example still isn't ideal.  While it allows the renderScene() call to manipulate its own camera, the projection matrix isn't really given that much freedom.  In order to get proper rendering, the function must set the projection matrix to the mandated values, because that's what the calling code expects when it renders the resulting distorted image.  There are a number of approaches that could correct this.  For instance, the renderScene() method could be declared like this:

virtual void renderScene(
    const float recommendedFov,
    const float recommendedAspect,
    const glm::vec3 & projectionOffset,
    const glm::vec3 & modelviewOffset,
    float & actualFov,
    float & actualAspect
) ...
In this way the calling code could tell the rendering code what field of view and aspect ratio should be used, while also providing a mechanism for the rendering code to report back the values it actually used.  This would require changes in the caller to adjust the vertices of the texture appropriately.  It will be revisited in a future example showing the relationship between the FoV and aspect ratio and the resulting image.

The last change to this program is a modification to the destructor, releasing the sensor object and detaching the sensor fusion object from it.  This allows the reference count to fall to zero and ensures that the application actually exits when you tell it to, rather than freezing.
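
In terms of the 0.2.x-era SDK, that cleanup amounts to roughly the following; this is a sketch with placeholder member names (sensorFusion, ovrSensor, ovrManager), not the example's actual identifiers.

#include <OVR.h>

class RiftApp {
    OVR::Ptr<OVR::DeviceManager> ovrManager;
    OVR::Ptr<OVR::SensorDevice>  ovrSensor;
    OVR::SensorFusion            sensorFusion;
public:
    ~RiftApp() {
        sensorFusion.AttachToSensor(NULL);  // detach so the fusion object drops its reference
        ovrSensor.Clear();                  // release our reference to the sensor...
        ovrManager.Clear();                 // ...and to the device manager
        // With the reference counts at zero, the SDK can shut down its worker
        // thread and the application exits instead of hanging.
    }
};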



Wednesday, October 9, 2013

The Challenges of Adapting Cinematography to VR

There was a question today on the Oculus VR Developer Forums about cut scenes in video games and how to approach them in VR.  Essentially, the original poster was asking how to address the conflict between cut scenes, which typically follow the language of cinema, offering frequent cuts from one point of view to another, and VR where 'cuts' can be disorienting to the viewer.

Here's my response, only lightly edited:

This is an interesting topic and is part of understanding the intersection of VR and cinematography as a whole. Part of cinematography is the manipulation of the frame, the field of view, the depth of field, the lighting, and so on, in order to convey an emotion or idea.

However, the language of cinema doesn't necessarily translate 1:1 to VR. Sudden transitions of the viewpoint in film, or 'cuts', are so common that it's considered notable when a film goes for long periods without them. But as you say, they can be extremely disconcerting in VR, where you've increased the level of immersion specifically to make viewers feel present in the scene rather than simply watching it. This means that common devices like the 'angle / reverse angle' sequences used to alternate between closeups of two people as they converse don't really work in VR.

The problem is: how do you convey certain non-verbal ideas without stating them outright? Cinema itself faced this very problem, because it was a medium that grew out of the effect of disruptive technology on a prior storytelling medium, the stage, which in turn had to deal with the same problems.

Monday, September 30, 2013

A complete, cross-platform Oculus Rift sample application in almost one file

There are few things as dispiriting as finding a blog on a topic you're really keen on and then seeing that it withered and died several years before you found it.

Dear future reader, I'm totally sorry if this happened to you on my account.  However, you needn't worry about it yet; I have lots more to write.  In fact, today I will vouchsafe to my readers (both of them) a large swath of code, representing a minimal application for the Rift, in a single C++ source file.  Aside from the main source file, there are six shader files representing the vertex and fragment shaders for three programs.

The program uses four libraries, all of which are embedded as git submodules in the project, so there's no need to download and install anything else.  All you need is your preferred development environment, and CMake to generate your project files.

I've attempted to make minimal use of preprocessor directives, but some small platform differences make it unavoidable.

Wednesday, September 11, 2013

Lessons learned from the PAX 2013 costume

Wearing an Oculus Rift as part of my costume for PAX 2013 had its ups and downs.  That is to say, it was a fantastic experience, but there were steps I could have taken to improve it, and in some ways I was limited by my hardware.

Googly eyes functioned flawlessly

Monday, September 2, 2013

Wearing the Oculus Rift at PAX Prime 2013

This entry is very much about the Oculus Rift, but it will take a slight detour to get there.

When I make 'cool plans', I try not to talk too much about them, because based on past experience I feel that the more I talk about a cool idea, the less likely I am to actually follow through on doing it.  To that end I haven't written much up to now on my costume plans for PAX 2013.

Sunday, August 18, 2013

Troubleshooting the SDK: Accessing the Head Tracker Data

In my experience, the people on the Oculus Developer forums who report issues and request help fall into two main categories: people with display issues and people with tracker issues.  Display-issue people are having problems with display splitters, monitor cloning, getting the output to the correct window, or even getting the Rift display to show any output at all.  Tracker-issue people are having trouble getting the tracker to properly represent the orientation of the Rift and convey that information to the application.  Today we're going to focus on tracker issues.
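
For context, the minimal path to the tracker data through the 0.2.x-era SDK looks roughly like this; it's a sketch that just polls the orientation once, not the troubleshooting code this post walks through.

#include <OVR.h>
#include <iostream>

int main() {
    OVR::System::Init();
    {
        OVR::Ptr<OVR::DeviceManager> manager = *OVR::DeviceManager::Create();
        OVR::Ptr<OVR::SensorDevice> sensor =
            *manager->EnumerateDevices<OVR::SensorDevice>().CreateDevice();
        if (!sensor) {
            std::cerr << "No sensor device found" << std::endl;
        } else {
            OVR::SensorFusion fusion;
            fusion.AttachToSensor(sensor);
            // Read the current orientation as a quaternion and convert to Euler angles.
            OVR::Quatf orientation = fusion.GetOrientation();
            float yaw, pitch, roll;
            orientation.GetEulerAngles<OVR::Axis_Y, OVR::Axis_X, OVR::Axis_Z>(
                &yaw, &pitch, &roll);
            std::cout << "yaw " << yaw << " pitch " << pitch
                      << " roll " << roll << std::endl;
        }
    }  // Ptr references released before shutting down the SDK.
    OVR::System::Destroy();
    return 0;
}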

Thursday, August 15, 2013

Improving on the Distortion Shader

In our last article we examined the distortion required by the Oculus Rift and created a shader similar to the one used in the example code in the Oculus VR SDK.  Similar, but not identical.  One big difference was that I broke out the code that did coordinate transformation into separate functions for clarity.  In addition, I used viewport coordinates instead of screen coordinates, again for reasons of clarity.

However, there is more that could be done to reduce the complexity of the shader and improve its performance.

Friday, August 9, 2013

Understanding the Oculus Rift Distortion Shader

As much as any other single thing, the distortion shader is the heart of what makes the Oculus Rift possible.  It's enabled by incredible advances in computer rendering technology that have been driven by both the computer gaming industry and the movie and television entertainment industry.  Only in the past few years has the rendering power to apply fairly complex calculations to every single output pixel of a rendered frame become ubiquitous enough that a product like the Rift could be possible.

Part of finding new applications for the Rift will come from a detailed understanding of how the whole system works.  For many, it's enough to be told that it's done with lenses and software that corrects for the lens distortion.


A more detailed understanding can be useful, though, especially when you're trying to make something render properly and are having trouble with the distortion shader and its settings.
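
To give a flavor of what those settings control: the core of the correction is a radial scaling function applied to each point's distance from the lens center. In C++ it amounts to roughly the following sketch, using the commonly quoted DK1 default coefficients purely for illustration; real code reads the coefficients from the SDK's HMDInfo.

// Radial distortion function: given the squared distance of a point from the
// lens center, return the scale factor to apply to that point.
float distortionScale(float rSq) {
    const float K0 = 1.00f, K1 = 0.22f, K2 = 0.24f, K3 = 0.0f;  // illustrative DK1 defaults
    return K0 + rSq * (K1 + rSq * (K2 + rSq * K3));
}

// Apply the scale to a point in lens-centered coordinates.  The fragment shader
// does the equivalent per pixel, and the pincushion distortion of the Rift's
// lenses then cancels the effect for the viewer.
void distort(float x, float y, float & outX, float & outY) {
    float rSq = x * x + y * y;
    float scale = distortionScale(rSq);
    outX = x * scale;
    outY = y * scale;
}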

Monday, August 5, 2013

Digging Into the Oculus SDK - Part 2: Device Enumeration

In the previous post we examined the SDK in order to determine how best to trace into the code and learn what's going on when we issue commands using the public API.  Today we're going to go into more detail on how actual hardware is detected inside the SDK.

Friday, August 2, 2013

Digging Into the Oculus SDK - Part 1: The Worker Thread and the Command Queue

When working with new technology, it is not always immediately obvious where to start when debugging an issue, attempting an experiment, or simply following the code to understand what's going on.  This is the first in a series of guides meant to share some hard-won understanding of the internals of the Oculus VR SDK; with luck, you'll find advice here on where to focus your efforts to find a specific bit of functionality.

Welcome

Welcome to Rifty Business, a blog about all things related to the Oculus Rift virtual reality headset, primarily focused on software development.

We intend to write articles on the inner workings of the SDK, best practices for developing software for the Rift, and projects we're working on to provide more insight into the software development process and working with a new piece of hardware like the Rift.

First up is a series of posts about debugging with the SDK...