[This is post two of three on Timewarp, a new technology available on the Oculus Rift. This is a draft of work in progress of Chapter 5.7 from our upcoming book, "Oculus Rift in Action", Manning Press. By posting this draft on the blog, we're looking for feedback and comments: is this useful, and is it intelligible?]
5.7.1 When you're running early
One obvious use of timewarp is to fit in extra processing when you know that you can afford it. The Rift SDK provides access to its timing data through several API functions:
- ovrHmd_BeginFrame // Typically used in the render loop
- ovrHmd_GetFrameTiming // Typically used for custom timing and optimization
- ovrHmd_BeginFrameTiming // Typically used when doing client-side distortion
ovrFrameTiming includes:
- float DeltaSeconds
The amount of time that has passed since the previous frame returned its BeginFrameSeconds value; usable for movement scaling. This will be clamped to no more than 0.1 seconds to prevent excessive movement after pauses for loading or initialization.
- double ThisFrameSeconds
Absolute time value of when rendering of this frame began or is expected to begin; generally equal to NextFrameSeconds of the previous frame. Can be used for animation timing.
- double TimewarpPointSeconds
Absolute point when the IMU expects to be sampled for this frame's timewarp.
- double NextFrameSeconds
Absolute time when frame Present + GPU Flush will finish, and the next frame starts.
- double ScanoutMidpointSeconds
Time when half of the screen will be scanned out. Can be passed as a prediction value to ovrHmd_GetSensorState() to get general orientation.
- double EyeScanoutSeconds[2]
Timing points when each eye will be scanned out to display. Used for rendering each eye.
Generally speaking, it is expected that the following should hold:
ThisFrameSeconds
< TimewarpPointSeconds
< NextFrameSeconds
< EyeScanoutSeconds[EyeOrder[0]]
<= ScanoutMidpointSeconds
<= EyeScanoutSeconds[EyeOrder[1]]
…although actual results may vary during execution.
Knowing when the Rift is going to reach TimewarpPointSeconds and ScanoutMidpointSeconds gives us a lot of flexibility if we happen to be rendering faster than necessary. There are some interesting possibilities here: if we know that our code will finish generating the current frame before the clock hits TimewarpPointSeconds, then we effectively have ‘empty time’ to play with in the frame. You could use that time to do almost anything (provided it’s quick)—send data to the GPU to prepare for the next frame, compute another million particle positions, prove the Riemann Hypothesis—whatever, really (Figure 5.11).
Figure 5.11: Timewarp means you’ve got a chance to do extra processing for ‘free’ if you know when you’re idle.
Keep this in mind when using timewarp. It effectively gives your app free license to scale its scene density, graphics level, and just plain awesomeness up or down dynamically as a function of current performance, measured and decided right down to the individual frame.
But it’s not a free pass! Remember that there’s a nasty consequence to overrunning your available frame time: a dropped frame. And if you don’t adjust your own timing, you risk the SDK busy-waiting for almost all of the following frame, reusing past data for the next image, which can consume valuable CPU time. So you’ve got a powerful weapon here, but you must be careful not to shoot yourself in the foot with it.
[Next post: Chapter 5.7, "Advanced uses of timewarp", part 2]