Wednesday, November 12, 2014

Unity 4: Knowing which user profile is in use

Previous versions of the Unity Integration package did not include a call for getting the user profile name. As of 0.4.3, it is now possible to get the user profile name. To know which profile is being used, you can use GetString(), found in the OVRManager.cs script.

public string GetString(string propertyName, string defaultVal = null)

Below is a simple example script (report.cs) that uses this method to print the name of the current user profile to the console. To use this script, attach it to an empty game object in a scene that uses the OVRCameraRig or OVRPlayerController prefab. With the Rift connected and powered on, run the scene in the Unity Editor. If the default value is returned, no user profile has been found.


using UnityEngine;
using System.Collections;
using Ovr;

public class report : MonoBehaviour {
    void Start () {
        // Print the current profile name; an empty string means no profile was found.
        Debug.Log (OVRManager.capiHmd.GetString(Hmd.OVR_KEY_USER, ""));
    }
}


The GetString() method found in the OVRManager.cs script is used to get profile values for the current HMD. The OVRManager.cs script gets a reference to the current HMD, capiHmd. The Hmd class, defined in OvrCapi.cs, provides a number of constants that you can use to get user profile information for the current HMD. In this example, I used OVR_KEY_USER to get the profile name. You could also get the user's height (OVR_KEY_PLAYER_HEIGHT), IPD (OVR_KEY_IPD), or gender (OVR_KEY_GENDER), for example.
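For example, here is a minimal sketch of a script that reads a few additional profile values. It assumes the Hmd wrapper also exposes a GetFloat() method for the numeric keys (height and IPD), and the default values passed in are just placeholders:

using UnityEngine;
using Ovr;

public class ProfileReport : MonoBehaviour {
    void Start () {
        Hmd hmd = OVRManager.capiHmd;

        // Profile name; an empty string means no profile was found.
        Debug.Log("User: " + hmd.GetString(Hmd.OVR_KEY_USER, ""));

        // Gender is stored as a string.
        Debug.Log("Gender: " + hmd.GetString(Hmd.OVR_KEY_GENDER, "Unknown"));

        // Height and IPD are floats, in meters (assumes GetFloat() is available).
        Debug.Log("Height: " + hmd.GetFloat(Hmd.OVR_KEY_PLAYER_HEIGHT, 1.778f));
        Debug.Log("IPD: " + hmd.GetFloat(Hmd.OVR_KEY_IPD, 0.064f));
    }
}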

Thursday, November 6, 2014

Thoughts on an alternative approach to distortion correction in the OpenGL pipeline

Despite some of the bad press it's gotten lately, I quite like OpenGL.  However, it has some serious limitations when dealing with the kind of distortion required for VR.

The problem

VR distortion is required because of the lenses in Oculus Rift-style VR headsets.  Put (very) simply, the lenses provide a wide field of view even though the screen isn't actually that large, and make it possible to focus on the screen even though it's very close to your eyes.

However, the lenses introduce curvature into the images seen through them.  If you render a cube in OpenGL that takes up 40° of your field of view, and look at it through the lenses of the Rift, you'll see curvature in the sides, even though they should be straight.

In order to correct for this, the current approach to correction is to render images to textures, and then apply distortion to the textures.  Think of it as painting a scene on a canvas of latex and then stretching the latex onto a curved surface.  The curvature of the surface is the exact inverse of the curvature introduced by the lenses, so when you look at the result through the lens, it no longer appears distorted.

However, this approach is extremely wasteful.  The required distortion magnifies the center of the image, while shrinking the outer edges.  In order to avoid loss of detail at the center, the source texture you're distorting has to have enough pixels so that at the area of maximum magnification, there is a 1:1 ratio of texture pixels to screen pixels.  But towards the edges, you're shrinking the image, so all your extra rendered pixels are essentially going to waste.  A visual representation of this effect can be seen in my video on dynamic framebuffer scaling below, at about 1:12.




A possible solution...

So how do we render a scene with distortion but without the cost of all those extra pixels that never make it to the screen?  What if we could modify the OpenGL pipeline so that it renders only the pixels actually required?

The modern OpenGL pipeline is extremely configurable, allowing clients to write software for performing most parts of it.  However, one critical piece of the pipeline remains fixed: the rasterizer.  During rendering, the rasterizer is responsible for taking groups of normalized device coordinates (where the screen is represented as a square with X and Y axes going from -1 to 1) representing a triangle and converting them to lists of pixels which need to be rendered by the fragment shaders.  This is still a fixed function because it's the equivalent of picking 3 points on a piece of graph paper and deciding which boxes are inside the triangle.  It's super easy to implement in hardware, and prior to now there hasn't been a compelling reason to mess with it.

But just as the advent of more complex lighting and surface coloring models turned the fixed function vertex and fragment stages of the old pipeline into the programmable stages of the current model, the needs of VR give us a reason to add programmability to the rasterizer.

What we need is a way to take the rasterizer's traditional output (a set of pixel coordinates) and displace it based on the required distortion.

What would such a shader look like?  Well, first let's assume that the rasterizer operates in two separate steps.  The first takes the normalized device coordinates (which are all in the range [-1,1] on both axes) and outputs a set of N values that are still in normalized device coordinates.  The second step displaces the output of the first step based on the distortion function.

In GLSL terms, the first step takes three vec3 values (representing a triangle) and outputs N vec3 coordinates.  How large N is depends on how much of the screen the triangle covers and also on the specific resolution of the rasterization operation.  This would not be the same resolution as the screen, for the same reason that we render to a larger-than-screen-resolution texture in the current distortion method.  This component would remain in the fixed function pipeline.  It's basically the same as the graph paper example, but with a specific coordinate system.

The second step would be programmable.  It would consist of a shader with a single vec2 input and a single vec2 output, and would be run for every output of the first step (the vec3's become vec2's because at this point in the pipeline we aren't interacting with depth, so we only need the xy values of the previous step).

in vec2 sourceCoordinate;
out vec2 distortedCoordinate;

void main() {
  // Use the distortion function (or a pre-baked structure) to 
  // compute the output coordinate based on 
  // the input coordinate
}

Essentially this is just a shader that says "If you were going to put this pixel on the screen here, you should instead put it here".  This gives the client a way to displace the pixels that make up the triangle in exactly the same way they would be displaced using the texture distortion method currently used, but without the cost of running so many extra pixels through the pipeline.

Once OpenGL has all the output coordinates, it can map them to actual screen coordinates.  Where more than one result maps to a single screen coordinate, OpenGL can blend the source pixels together based on each one's level of contribution, and send the result as a single set of attributes to the fragment shader.

The application of such a rasterization shader would be orthogonal to the vertex/fragment/geometry/tessellation shaders, similar to the way compute shaders are independent.  Binding and unbinding a raster shader would have no impact on the currently bound vertex/fragment/geometry/tessellation shader, and vice versa.

Chroma correction

Physical displacement of the pixels is only one part of distortion correction.  The other portion is correction for chromatic aberration, which this approach doesn't cover.

One approach would be to have the raster shader output three different coordinates, one for each color channel.  The likely outcome, though, is that the pipeline would then have to run the fragment shader multiple times, grabbing only one color channel from each run.  Since avoiding unnecessary fragment shader executions is the whole point of this exercise, this is unappealing.

Another approach is to add an additional shader to the program that specifically provides the chroma offset for each pixel.  In the same way you must have both a vertex and a fragment shader to create a rendering program in OpenGL, a distortion correction program might require both a raster and a chroma shader.  This isn't ideal, because only the green channel would be perfectly computed for the output pixel it covers, while the red and blue pixels would cover either slightly more or slightly less of the screen than they actually should.  Still, it's likely that this imperfection would be well below the level of human perception, so maybe it's a reasonable compromise.

Issues

Cracks
You want to avoid situations where two pixels are adjacent in the raster shader inputs but their outputs have a gap between them when mapped to the screen pixels.  Similar to the way we use a higher resolution than the screen for textures now, we would use a higher resolution than the screen for the rasterization step, thus ensuring that at the area of greatest magnification due to distortion, no two adjacent input pixels cease to be adjacent when mapped to the actual physical screen resolution.

Merging
An unavoidable consequence of distortion, even without the above resolution increase, is that some pixels that are adjacent in the raster shader inputs will end up with their outputs mapping to the same screen pixel.

Cost 
Depending on the kind of distortion required for a given lens, the calculations called for in the raster shader might be quite complex, and certainly not the kind of thing you'd want to be doing for every pixel of every triangle.  However, that's a fairly easy problem to solve.  When binding a distortion program, the OpenGL driver could precompute the distortion for every pixel, as well as precompute the weight for each rasterizer output pixel relative to the physical screen pixel it eventually gets mapped to.  This computation would only need to be done once for any given raster shader / raster resolution / viewport resolution combination.  If OpenGL can be told about symmetry, even more optimization is possible.

You end up doing a lot more linear interpolation among vertex attributes during the rasterization stage, but all this computation is still essentially the same kind of work the existing rasterization stage already does, and far less costly than a complex lighting shader executed for a pixel that never gets displayed.

Next steps

  • Writing up something less off the cuff
  • Creating a draft specification for what the actual OpenGL interface would look like
  • Investigating a software OpenGL implementation like Mesa and seeing how hard it would be to prototype an implementation
  • Pestering nVidia for a debug driver I can experiment with
  • Learning how to write a shader compiler
  • Maybe figuring out some way to make someone else do all this


Wednesday, October 22, 2014

Video: Rendering OpenCV captured images in the Rift

In this video, Brad gives a walkthrough of an application that pulls images from a live Rift-mounted webcam and renders them to the display.


Links for this video:

Tuesday, October 14, 2014

Using the DK 2 on a MacBook Pro

I updated this information elsewhere, so I'm updating it here, too. Here is what I did to get the DK2 running on the MacBook Pro.

I first downloaded the 0.4.1 SDK and Runtime for the Mac. I then plugged in all cables as recommended in the guide that comes with the DK 2. After getting the cables set up, I installed the Runtime and SDK. The README contains this note:

 “Before using your new DK2, it is critical to update the firmware on the headset. This is important to ensure reliable functioning of your DK2. Use the Config Util to install the firmware file supplied in this release (v2.11). This is only relevant to DK2 owners.”

As I had tested the DK2 out on Windows previously, I had already updated my DK2 firmware to 2.11. Just to be sure, I ran OculusConfigUtil and confirmed that my firmware was up-to-date. While I had it open, I went ahead and created a user profile for myself. Creating a profile can help prevent discomfort when using the Rift.
OculusConfigUtil profile screen

On Windows, there is the new Direct HMD Access display mode which can be set by selecting Tools > Rift Display Mode in the OculusConfigUtil menu. At this time, Direct HMD Access mode is not supported on the Mac.

OculusConfigUtil Display modes selection panel
So for the Mac, the next step is to configure the displays. As with earlier releases, you have the choice of using Extended mode or Mirrored mode. Previously, I had not been able to get Extended mode to work and was forced to use mirroring. Oculus recommends against mirroring, so I gave Extended mode another try.

Extended Mode

In the display preferences, I set the displays to extended mode. My laptop screen was set as the main display and the Rift was the extended display. The Unity Integration guide, in the monitor setup section, says “For DK2, the resolution should be Scaled to 1080p, the rotation should be 90° and the refresh rate should be 75 Hertz,” so those were the settings I used.

In the OculusConfigUtil I then selected Show Demo Scene and the demo scene appeared correctly on the Rift. Yeah! 

The desk scene demo accessed by selecting the "Show Demo Scene" button in OculusConfigUtil 

I then tried to run the “Oculus World Demo” and it appeared on my main monitor and not the Rift. The mouse cursor also disappeared, so there was no way to move the demo window to the extended portion of the desktop. The Unity Integration guide monitor setup section says “Some Unity applications will only run on the main display. In the Arrangement screen, drag the white bar onto the Rift's blue box to make it the main display.” This was the case with the “Oculus World Demo,” and to view it I needed to set the Rift as the main display and then run the demo. But doing so wasn't as simple as it sounds.

Working with the desktop is not really possible when looking through the Rift, so I needed to first make sure the “Display Preferences” window and the Finder window with the application I wanted to launch were positioned so that they were at least partially on the extended portion of the display before I switched to having the Rift be the main display.

Desktop window positioning

With these windows in place, in the “Display Preferences Window” I grabbed the white bar that indicates which display is the main display and dragged it so that the Rift was now the main display. 

You need to grab the white bar that indicates which display is the main display and drag it so that the Rift is the main display.

Then, with my main screen as the extended display, I double-clicked the “Oculus World Demo” to run it.
OculusWorldDemo

And the demo ran successfully on the Rift.

That process was very cumbersome, so I decided to also take a look at using mirrored mode.

Mirrored Mode

In the display preferences, I set the displays to mirrored. Again, I needed to rotate the display 90 degrees for it to be in the correct orientation.

I then ran both the “Oculus World Demo” and the demo in the Config Utility. In both cases I saw a lot of judder as I moved my head around (very headache-inducing). The release notes have this to say on the topic:

“Scene Judder - The whole view jitters as you look around, producing a strobing back-and-forth effect. This effect is the result of skipping frames (or Vsync) on a low-persistence display, it will usually be noticeable on DK2 when frame rate falls below 75 FPS. This is often the result of insufficient GPU performance or attempting to render too complex of a scene. Optimizing the engine or scene content should help.
We expect the situation to improve in this area as we introduce asynchronous timewarp and other optimizations over the next few months. If you experience this on DK2 with multiple monitors attached, please try disabling one monitor to see if the problem goes away.”

On a suggestion from Brad, I tried setting the display refresh rate to 60 hertz. This significantly reduced the judder; however, there was noticeable screen blur when I moved my head. The good news on the blur was that unlike the judder, it wasn’t an immediate headache trigger for me.

Which mode will I use?

Which mode I will use really depends on what I am trying to do. If I am just using the Rift, I would choose extended mode, as it does offer better performance: in extended mode I was seeing 75 FPS, while in mirrored mode I was seeing 46 FPS with the refresh rate set to 75 hertz and 60 FPS with the refresh rate set to 60 hertz.

But until Direct HMD Access mode works on the Mac, unless I am testing for performance, I will probably mostly use mirrored mode when developing. Mirrored mode allows me to see what the person using the Rift is doing and provides a faster workflow for quick iterations.

Wednesday, October 1, 2014

Video: Dynamic Framebuffer Scaling in the Oculus Rift

In this video Brad discusses dynamic framebuffer scaling in the Oculus Rift:

 
 Links from the video:

Friday, September 26, 2014

Unity: Playing a video on a TV screen at the start of a Rift application

Let’s say you wanted to have a TV screen that plays a short welcome video on start up in your scene, such as in this demo I'm working on:



Displaying video on a screen in a scene in Unity Pro is typically done using a Movie Texture. Movie Textures do not play automatically - you need to use a script to tell the video when to play. The Rift, however, presents some challenges you wouldn't face when working with a more traditional monitor, and these make knowing when to start the video a bit tricky:
  1. You can't assume that the user has the headset on when the application starts. This means you can't assume that the user can see anything that you are displaying.
  2. On start-up, all Rift applications display a Health and Safety Warning (HSW). The HSW is a big rectangle pinned to the user's perspective that largely obscures the user's view of everything else in the scene.
  3. You aren't in control of where the user looks (or rather, you shouldn't be - moving the camera for the user can be a major motion sickness trigger), so you can't be sure the user is even looking at the part of the scene where the video will be displayed.
In my demo, I addressed the first two issues by making sure the HSW had been dismissed before I started the video. If the user has dismissed the HSW, it will no longer be in the way of their view, and it is a good bet that they have the headset on and are ready to start the demo. I addressed the third issue by making sure the video is in the user's field of view before it starts playing.

Making sure the Health and Safety Warning (HSW) has been dismissed

The HSW says “Press any key to dismiss.” My first thought was to use the key press as the trigger for starting the video. Unfortunately, this doesn't quite work. The HSW must be displayed for a minimum amount of time before it can actually be dismissed - 15 seconds the first time it is displayed for a given profile and 6 seconds for subsequent times. The result was that the key was often pressed and the welcome video would start, but the HSW had not yet gone away. I also wanted the video to replay if the user reloaded the scene. When the scene is reloaded, the HSW is not displayed and the user does not need to press a key, so the video would not start.

Fortunately, the Oculus Unity Integration package provides a way to know whether the HSW is still being displayed:
OVRDevice.HMD.GetHSWDisplayState().Displayed
The above will return true if the HSW is still on screen.
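For example, here is a minimal sketch (the class name is hypothetical) that polls this value each frame and notes once the warning has been dismissed:

using UnityEngine;

// Minimal sketch: poll the HSW each frame and note once it has been dismissed.
public class HSWWatcher : MonoBehaviour {
    private bool hswDismissed = false;

    void Update () {
        if (!hswDismissed && !OVRDevice.HMD.GetHSWDisplayState().Displayed) {
            hswDismissed = true;
            Debug.Log("HSW dismissed - the user can now see the scene.");
        }
    }
}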

Making sure the video is in the player’s field of view

How you get the user to look at the video will depend a lot on what kind of scene you are using. You can, for example, put a TV in every corner of the room so that no matter which direction the user is looking, a screen is in view. Or, if you have only a single TV screen, you can use audio cues to get the user's attention. (I haven't decided yet how I will get the user's attention in my demo.)

No matter how you get the user to look at where the video is playing, you can check that the video is within the user's field of view by checking the screen object's render state before playing the video, using:
renderer.isVisible
The above will return true if the object (in this case, the TV screen) is currently being rendered in the scene.
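Putting this to use, here is a minimal sketch that plays the screen's Movie Texture once the screen is visible; in a full script you would also gate playback on the HSW check shown above. The class name is hypothetical, and the script assumes it is attached to the TV screen object whose material uses a Movie Texture as its main texture:

using UnityEngine;

// Minimal sketch: play the screen's Movie Texture once the screen is visible.
// Attach to the TV screen object; also gate on the HSW check in a full script.
public class TVScreenPlayer : MonoBehaviour {
    private bool videoStarted = false;

    void Update () {
        // Only play once, and only when the TV screen is in the user's view.
        if (videoStarted || !renderer.isVisible)
            return;

        // The screen's material is assumed to use a Movie Texture as its main texture.
        MovieTexture movie = renderer.material.mainTexture as MovieTexture;
        if (movie != null) {
            movie.Play();
            videoStarted = true;
        }
    }
}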

Thursday, September 25, 2014

Video: Asynchronous timewarp with the Oculus Rift

In this video Brad discusses an example of using asynchronous timewarp in order to maintain a smooth experience in the Rift even if your rendering engine can't maintain the full required framerate at all times.

 

Links from the video: