Tuesday, October 14, 2014

Using the DK2 on a MacBook Pro

I recently updated this information elsewhere, so I'm updating it here, too. Here is what I did to get the DK2 running on my MacBook Pro.

I first downloaded the 0.4.1 SDK and Runtime for the Mac. I then plugged in all of the cables as recommended in the guide that comes with the DK2. With the cables set up, I installed the Runtime and SDK. The README contains this note:

 “Before using your new DK2, it is critical to update the firmware on the headset. This is important to ensure reliable functioning of your DK2. Use the Config Util to install the firmware file supplied in this release (v2.11). This is only relevant to DK2 owners.”

As I had tested the DK2 out on Windows previously, I had already updated my DK2 firmware to 2.11. Just to be sure, I ran OculusConfigUtil and confirmed that my firmware was up-to-date. While I had it open, I went ahead and created a user profile for myself. Creating a profile can help prevent discomfort when using the Rift.
OculusConfigUtil profile screen

On Windows, there is the new Direct HMD Access display mode which can be set by selecting Tools > Rift Display Mode in the OculusConfigUtil menu. At this time, Direct HMD Access mode is not supported on the Mac.

OculusConfigUtil display mode selection panel

So for the Mac, the next step is to configure the displays. As with earlier releases, you have the choice of Extended mode or Mirrored mode. Previously, I had not been able to get Extended mode to work and was forced to use mirroring. Oculus recommends against mirroring, so I gave Extended mode another try.

Extended Mode

In the display preferences, I set the displays to extended mode, with my laptop screen as the main display and the Rift as the extended display. The Unity Integration guide, in its monitor setup section, says “For DK2, the resolution should be Scaled to 1080p, the rotation should be 90° and the refresh rate should be 75 Hertz,” so those are the settings I used.

In OculusConfigUtil, I then selected Show Demo Scene and the demo scene appeared correctly on the Rift. Yeah!

The desk scene demo accessed by selecting the "Show Demo Scene" button in OculusConfigUtil 

I then tried to run the “Oculus World Demo” and it appeared on my main monitor, not the Rift. The mouse cursor also disappeared, so there was no way to move the demo window to the extended portion of the desktop. The Unity Integration guide's monitor setup section says “Some Unity applications will only run on the main display. In the Arrangement screen, drag the white bar onto the Rift's blue box to make it the main display.” This was the case with the “Oculus World Demo”: to view it, I needed to set the Rift as the main display and then run the demo. But doing so wasn't as simple as it sounds.

Working with the desktop is not really possible while looking through the Rift, so before switching the main display to the Rift, I needed to make sure the “Display Preferences Window” and the Finder window containing the application I wanted to launch were positioned at least partially on the extended portion of the desktop.

Desktop window positioning

With these windows in place, I went to the “Display Preferences Window” and grabbed the white bar that indicates which display is the main display, dragging it so that the Rift became the main display.

You need to grab the white bar that indicates which display is the main display and drag it so that the Rift is the main display.

Then, with my laptop screen now acting as the extended display, I double-clicked the “Oculus World Demo” to run it.
OculusWorldDemo

And the demo ran successfully on the Rift.

That process was very cumbersome, so I decided to also take a look at using mirrored mode.

Mirrored Mode

In the display preferences, I set the displays to mirrored. Again, I needed to rotate the display 90 degrees to get the correct orientation.

I then ran both the “Oculus World Demo” and the demo in OculusConfigUtil. In both cases I saw a lot of judder as I moved my head around (very headache inducing). The release notes have this to say on the topic:

“Scene Judder - The whole view jitters as you look around, producing a strobing back-and-forth effect. This effect is the result of skipping frames (or Vsync) on a low-persistence display, it will usually be noticeable on DK2 when frame rate falls below 75 FPS. This is often the result of insufficient GPU performance or attempting to render too complex of a scene. Optimizing the engine or scene content should help.

We expect the situation to improve in this area as we introduce asynchronous timewarp and other optimizations over the next few months. If you experience this on DK2 with multiple monitors attached, please try disabling one monitor to see if the problem goes away.”

On a suggestion from Brad, I tried setting the display refresh rate to 60 hertz. This significantly reduced the judder; however, there was noticeable screen blur when I moved my head. The good news on the blur was that, unlike the judder, it wasn't an immediate headache trigger for me.

Which mode will I use?

Which mode I use will really depend on what I am trying to do. If I am just using the Rift, I would choose Extended mode, as it offers better performance: in Extended mode I was seeing 75 FPS, while in Mirrored mode I saw 46 FPS with the refresh rate set to 75 hertz and 60 FPS with it set to 60 hertz.

But until Direct HMD Access mode works on the Mac, unless I am testing for performance, I will probably use Mirrored mode when developing. Mirrored mode allows me to see what the person using the Rift is doing and provides a faster workflow for quick iterations.

Wednesday, October 1, 2014

Video: Dynamic Framebuffer Scaling in the Oculus Rift

In this video Brad discusses dynamic framebuffer scaling in the Oculus Rift:

 

Friday, September 26, 2014

Unity: Playing a video on a TV screen at the start of a Rift application

Let’s say you want a TV screen in your scene that plays a short welcome video on start-up, such as in this demo I'm working on:



Displaying video on a screen in a scene in Unity Pro is typically done using a Movie Texture. Movie Textures do not play automatically; you need to use a script to tell the video when to play. The Rift, however, presents some challenges that you wouldn’t face when working with a more traditional monitor, and these make knowing when to start the video a bit tricky:
  1. You can’t assume that the user has the headset on when the application starts. This means you can’t assume that the user can see anything you are displaying.
  2. On start-up, all Rift applications display a Health and Safety Warning (HSW). The HSW is a big rectangle pinned to the user’s perspective that largely obscures the user’s view of everything else in the scene.
  3. You aren’t in control of where the user looks (or rather, you shouldn’t be: moving the camera for the user can be a major motion sickness trigger), so you can’t be sure the user is even looking at the part of the scene where the video will be displayed.
In my demo, I addressed the first two issues by making sure the HSW had been dismissed before starting the video. Once the user has dismissed the HSW, it is no longer blocking their view, and it is a good bet that they have the headset on and are ready to start the demo. I addressed the third issue by making sure the video is in the user’s field of view before it starts playing.

Making sure the Health and Safety Warning (HSW) has been dismissed

The HSW says “Press any key to dismiss.” My first thought was to use that key press as the trigger for starting the video. Unfortunately, this doesn’t quite work. The HSW must be displayed for a minimum amount of time before it can actually be dismissed - 15 seconds the first time it is displayed for a given profile and 6 seconds on subsequent viewings. The result was that the key would often be pressed, and the welcome video would start, before the HSW had gone away. I also wanted the video to replay if the user reloaded the scene. When the scene is reloaded, the HSW is not displayed, so the user never presses a key, and the video would therefore never start.

Fortunately, the Oculus Unity Integration package provides a way to know whether the HSW is still being displayed:
OVRDevice.HMD.GetHSWDisplayState().Displayed
The above will return true if the HSW is still on screen.
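For example, here is a minimal sketch of waiting on that flag in a coroutine, assuming Unity 4.x and the 0.4-era Oculus Unity Integration:

    using System.Collections;
    using UnityEngine;

    // Minimal sketch: hold off on start-up logic until the HSW is dismissed.
    public class WaitForHSW : MonoBehaviour
    {
        IEnumerator Start()
        {
            // Poll once per frame; Displayed becomes false once the user
            // dismisses the warning (after its minimum display time).
            while (OVRDevice.HMD.GetHSWDisplayState().Displayed)
                yield return null;

            // The HSW is gone; it is now safe to start time-sensitive content.
        }
    }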

Making sure the video is in the player’s field of view

How you get the user to look at the video will depend a lot on what kind of scene you are using. You can, for example, put a TV in every corner of the room so that no matter which direction the user is looking, a screen is in view. Or, if you have only a single TV screen, you can use audio cues to get the user’s attention. (I haven't decided yet how I will get the user's attention in my demo.)

No matter how you get the user to look at where the video is playing, you can check that the video is within the user’s field of view before playing it by looking at the screen’s render state:
renderer.isVisible
The above will return true if the object (in this case, the TV screen) is currently being rendered in the scene.
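Putting the two checks together, here is a sketch of a script that could be attached to the TV screen object. The welcomeMovie field and the assumption that an AudioSource on the same object already holds the movie's audio clip are mine, not details from the demo:

    using UnityEngine;

    // Sketch: start the welcome video only once the HSW has been dismissed
    // and the TV screen is actually in the user's view. Assumes Unity 4.x Pro
    // (for Movie Textures) and the 0.4 Oculus Unity Integration.
    public class PlayWelcomeVideo : MonoBehaviour
    {
        public MovieTexture welcomeMovie;   // assign in the Inspector
        private bool started = false;

        void Update()
        {
            if (started || welcomeMovie == null)
                return;

            bool hswGone = !OVRDevice.HMD.GetHSWDisplayState().Displayed;
            if (hswGone && renderer.isVisible)
            {
                welcomeMovie.Play();
                if (audio != null)
                    audio.Play();           // AudioSource preloaded with the movie's clip
                started = true;
            }
        }
    }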

Thursday, September 25, 2014

Video: Asynchronous timewarp with the Oculus Rift

In this video Brad discusses an example of using asynchronous timewarp in order to maintain a smooth experience in the Rift even if your rendering engine can't maintain the full required framerate at all times.

 


Friday, August 22, 2014

Using basic statistical analysis to discover whether or not the Oculus Rift headset is being worn

As we were getting ready for our talk next week at PAX Dev 2014, entitled "Pitfalls & Perils of VR Development: How to Avoid Them", an interesting question came up: how can you tell whether the Rift is actually on the user's head, instead of on their desk? It's a pretty common (and annoying) scenario right now--you double-click to launch a cool new game, and immediately you can hear intro music and cutscene dialog, but the Rift's still on the table. I hate feeling like, ack!, I have to scramble to get the Rift onto my face in time to see the intro.

Valve's SteamVR will help with this a lot, I expect; if I launch a game when I'm already wearing the Rift, there'll be no jarring switch.  But I'm leery--half the Rift demos I download today start by popping up a Unity dialog on my desktop before they switch to fullscreen VR, and that's going to be an even worse experience if I'm using Steam.

So I was mulling over how to figure out programmatically whether or not the Rift is on the user's head.  I figure that we can't just look at the position data from the tracker camera, because "camera can see Rift" isn't a firm indicator of "Rift is being used".  (Maybe the Rift is sitting on my desk, in view of the camera.)  Instead, we need to look at the noise of its position.

I recorded the eye pose at each frame, taking an average of all eye poses recorded every tenth of a second. At 60 FPS, that's about six positions per decisecond. The Rift's positional sensors are pretty freaking sensitive; when the Rift is sitting on my desk, the difference in position from one decisecond to the next, from ambient vibration alone, is on the order of a hundredth of a millimeter. Pick it up, though, and those differences spike.

I plotted the standard deviation of the Rift's position, in a rolling window of ten samples for the past second, versus time:


This is a graph of log(standard deviation(average change in position per decisecond)) over time. The units on the left are on a log scale. I found that when the Rift was inert on my desk, casual vibration kept log(σ) < -10.5; as I picked it up, log(σ) spiked, and while worn it would generally hover between -10.5 and -4.5. When the Rift was being put on or taken off, log(σ) climbed as high as -2, but only very briefly.

I found that distinguishing a Rift being put on or taken off from a Rift being worn normally was pretty hard with this method, but the distinction between "not in human hands at all" and "in use" was clear. So this demonstrates a method for programmatically determining whether the Rift is in active use. I hope it's useful.

Sample code was written in Java, and is available on the book's github repo.  (File "HeadMotionStatsDemo.java".)
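
For flavor, here is a condensed C# sketch of the same idea (the real sample above is Java); the window size and the -10.5 threshold come from the measurements described earlier:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Condensed sketch of the "is the Rift being worn?" classifier.
    // Feed it one averaged head position per decisecond; it reports
    // whether the motion noise exceeds ambient desk vibration.
    public class RiftInUseDetector
    {
        private const int WindowSize = 10;          // ten deciseconds = one second
        private const double DeskThreshold = -10.5; // log(sigma) at or below: inert

        private readonly Queue<double> deltas = new Queue<double>();
        private double[] lastPos;

        public bool AddSample(double x, double y, double z)
        {
            double[] pos = { x, y, z };
            if (lastPos != null)
            {
                double dx = pos[0] - lastPos[0];
                double dy = pos[1] - lastPos[1];
                double dz = pos[2] - lastPos[2];
                deltas.Enqueue(Math.Sqrt(dx * dx + dy * dy + dz * dz));
                if (deltas.Count > WindowSize)
                    deltas.Dequeue();
            }
            lastPos = pos;

            if (deltas.Count < WindowSize)
                return false;                        // not enough data yet

            double mean = deltas.Average();
            double variance = deltas.Average(d => (d - mean) * (d - mean));
            return Math.Log(Math.Sqrt(variance)) > DeskThreshold;
        }
    }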

Advanced uses of Timewarp II - When you're running late

[This is post three of three on Timewarp, a new technology available on the Oculus Rift. This is a draft of work in progress of Chapter 5.7 from our upcoming book, "Oculus Rift in Action", Manning Press. By posting this draft on the blog, we're looking for feedback and comments: is this useful, and is it intelligible?]


5.7.2 When you're running late

Of course, when the flak really starts to fly, odds are that you won’t be rendering frames ahead of the clock—it’s a lot more likely that you’ll be scrambling to catch up. Sometimes rendering a single frame takes longer than the number of milliseconds your target framerate allows. But timewarp can be useful here too.

Say your engine realizes that it’s going to be running late. Instead of continuing to render the current frame, you can send the previous frame to the Rift and let the Rift apply timewarp to the images generated a dozen milliseconds ago (figure 5.12). Sure, they won’t be quite right—but if it buys you enough time to get back on top of your rendering load, it’ll be worth it, and no human eye will catch it when you occasionally drop one frame out of 75. Far more importantly, the image sent to the Rift will continue to respond to the user’s head motions with absolute fidelity; low latency means responsive software, even with the occasional lost frame.

Remember, timewarp can distort any frame, so long as it’s clear when that frame was originally generated so that the Rift knows how much distortion to apply.

Figure 5.12: If you’re squeezed for rendering time, you can occasionally save a few cycles by dropping a frame and re-rendering the previous frame through timewarp.

The assumption here is that your code is sufficiently instrumented, and sufficiently capable of self-analysis, that you do more than just render a frame and hope it was fast enough. Carefully instrumented timing code isn’t hard to add, especially with display-bound timing methods such as ovrHmd_GetFrameTiming, but it does mean more complexity in the rendering loop. If you’re using a commercial graphics engine, it may already have this support baked in. This is the sort of monitoring that any 3D engine that handles large, complicated, variable-density scenes should be capable of performing.
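
To make the idea concrete, here is a rough sketch of such a loop. This is only pseudocode in C# dress: every method here stands in for an engine-specific hook, and none of these names are Oculus SDK calls.

    using System.Diagnostics;

    // Hypothetical frame-dropping render loop. RenderSceneToEyeTextures and
    // the two Present methods are stand-ins for your engine's own code.
    public class FrameDropLoop
    {
        private const double FrameBudget = 1.0 / 75.0;   // DK2 scans out at 75 Hz

        private readonly Stopwatch clock = Stopwatch.StartNew();
        private double recentCost;                       // smoothed render cost (seconds)

        public void Tick()
        {
            if (recentCost > FrameBudget)
            {
                // Running late: skip rendering, re-present the previous frame,
                // and let timewarp re-project it to the current head pose.
                PresentPreviousFrameWithTimewarp();
                recentCost *= 0.9;                       // assume we catch up a little
                return;
            }

            double start = clock.Elapsed.TotalSeconds;
            RenderSceneToEyeTextures();
            PresentCurrentFrameWithTimewarp(start);      // timewarp needs the render time

            // Exponentially smoothed estimate of per-frame render cost.
            double cost = clock.Elapsed.TotalSeconds - start;
            recentCost = 0.8 * recentCost + 0.2 * cost;
        }

        // Engine-specific stubs, elided here.
        private void RenderSceneToEyeTextures() { }
        private void PresentCurrentFrameWithTimewarp(double renderTime) { }
        private void PresentPreviousFrameWithTimewarp() { }
    }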

Dropping frames with timewarp is an advanced technique, and probably not worth investing engineering resources into early in a project.  This is something that you should only build when your scene has grown so complicated that you anticipate having spikes of rendering time.  But if that’s you, then timewarp will help.