Sunday, January 26, 2014

Oculus Rift rendering from Java & mesh based distortion


I've just updated our public examples repository, located here, to add an example of rendering content for the Oculus Rift using Java (via LWJGL).

The Java example does its distortion using a new (to me anyway) approach: using a mesh, rather than a shader, to do the heavy lifting. More details after the break.

Java example

There are three individual projects, each defined using a Maven pom.xml file.

The first project is in resources. This holds all my non-code resources, like the shader definitions, images and meshes. It's a little oddly defined because it's intended to share these resources with my C++ projects. You can import the project into Eclipse, but you may need to manually set up the classpath so that the resources are found properly (Eclipse doesn't seem to do it correctly, even though the Maven jar file is properly built).

The second project is in java/Glamour. This is a set of OpenGL wrappers to make GL object and shader management suck less, similar to the set of template classes I use in C++. Shaders, textures, framebuffers, vertex buffers, and vertex arrays are all encapsulated.

The last project is in java/Rifty. This is the Rift-specific stuff, unsurprisingly. There is a small rendering test program here that draws a simple spinning cube.

The example is very raw. There's no logic for detecting the actual coordinates of the Rift display, so you have to manually set the location in the demo code. The projection isn't quite right yet, because I haven't reproduced all the math from Util_Render_Stereo.cpp. Instead I'm relying on a lot of hard-coded constants, but it should convey the idea of the rendering mechanism you can use in LWJGL to render the scene and then do the distortion. Also, it doesn't interact with the head tracker at all yet. I expect the code to evolve quickly though, as I'm consulting with an actual mathematician on how to make it less stupid.

Mesh based distortion

As for the distortion, I'm using a new (to me anyway) approach: a mesh, rather than a shader, does the heavy lifting. The Oculus SDK examples have you take the scene, which has been rendered to a framebuffer, and render it to a quad that takes up the whole display (or half of the display, depending on whether you're distorting each eye individually), using a hefty shader to do the distortion.

Mesh based distortion instead pre-computes the distorted locations of a set of points on a rectangular mesh. Here is the mesh drawn as a wireframe with the texture coordinates represented as red and green:

[Image: wireframe view of the distortion mesh, with texture coordinates shown as red and green]

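To make the idea concrete, here's a rough sketch (not the actual code from the repository) of how such a mesh might be generated in Java: the grid is regular in undistorted space, so the texture coordinates are just the grid positions, while each vertex position is pushed to its warped location by an inverse-distortion function (the computation of which is covered further down). The RadialWarp interface and the packed float layout are my own assumptions for illustration.

import java.util.ArrayList;
import java.util.List;

public class DistortionMeshBuilder {
  // Hypothetical stand-in for the undistorted-radius -> distorted-radius
  // function discussed later in this post.
  public interface RadialWarp {
    double undistortedToDistorted(double ru);
  }

  // Builds the vertices as packed [x, y, z, u, v] records. The grid points
  // are regular in undistorted space (relative to the lens center), which
  // is why the texture coordinates form a smooth gradient in the wireframe
  // image above, while the positions end up warped.
  public static List<float[]> buildVertices(int rows, int cols, RadialWarp warp) {
    List<float[]> vertices = new ArrayList<float[]>();
    for (int iy = 0; iy <= rows; ++iy) {
      for (int ix = 0; ix <= cols; ++ix) {
        double u = ix / (double) cols;
        double v = iy / (double) rows;
        // Undistorted grid point in [-1, 1] x [-1, 1]
        double x = u * 2.0 - 1.0;
        double y = v * 2.0 - 1.0;
        double ru = Math.sqrt(x * x + y * y);
        // Move the vertex position to the distorted radius; the texture
        // coordinate keeps the undistorted grid location.
        double scale = (ru > 0.0) ? warp.undistortedToDistorted(ru) / ru : 1.0;
        vertices.add(new float[] {
            (float) (x * scale), (float) (y * scale), 0f, (float) u, (float) v });
      }
    }
    return vertices;
  }
}

Index generation over the grid is omitted, but it's just the usual two triangles per cell.
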
Given this mesh, rather than a full screen quad, you can then take the texture containing the scene and render it on this mesh with a trivial texturing shader. Here is the vertex shader I'm using:

#version 330
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);

layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord0;

out vec2 vTexCoord;

void main() {
  gl_Position = Projection * ModelView * vec4(Position, 1);
  vTexCoord = TexCoord0;
}

And here is the fragment shader:

#version 330
uniform sampler2D sampler;

in vec2 vTexCoord;
out vec4 FragColor;

void main() {
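  // Drop fragments whose distorted texture coordinates fall outside the scene texture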
  if (!all(equal(clamp(vTexCoord, 0.0, 1.0), vTexCoord))) {
    discard;
  }
  FragColor = texture(sampler, vTexCoord);
}

There are some drawbacks to this approach. For one, the distortion isn't as accurate from pixel to pixel, since it's only exact at the mesh vertices and linearly interpolated between them. This drawback would also apply to any rendering approach that used a look-up texture for distortion, if the look-up texture was of lower resolution than the frame buffer being distorted.

Another drawback is that it currently has no mechanism for chromatic aberration correction. This should be an easy fix though.  I'm currently storing only the vertex position and texture coordinate in the mesh, but I could easily include another vertex attribute, where I'd store the texture XY offsets for red and blue in the XY and ZW components respectively.
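
As a sketch of what that might look like on the Java side (the attribute index and interleaved layout here are just assumptions, not something the current code does), the extra attribute would simply be another slot in the vertex layout:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL20;

public class ChromaMeshLayout {
  private static final int FLOAT_BYTES = 4;
  // Interleaved layout: position (3 floats), texcoord (2 floats),
  // chroma offsets (4 floats: xy = red UV offset, zw = blue UV offset)
  private static final int STRIDE = (3 + 2 + 4) * FLOAT_BYTES;

  public static void setupVertexAttributes() {
    GL20.glEnableVertexAttribArray(0); // Position
    GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, STRIDE, 0L);
    GL20.glEnableVertexAttribArray(1); // TexCoord0
    GL20.glVertexAttribPointer(1, 2, GL11.GL_FLOAT, false, STRIDE, 3L * FLOAT_BYTES);
    GL20.glEnableVertexAttribArray(2); // Red/blue texture coordinate offsets
    GL20.glVertexAttribPointer(2, 4, GL11.GL_FLOAT, false, STRIDE, 5L * FLOAT_BYTES);
  }
}

The fragment shader would then sample the scene texture three times, offsetting the red and blue lookups by those per-vertex values.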

For the time being I'm ignoring the chroma distortion because I'm pathologically lazy and because my glasses already introduce so much chroma distortion into my world that it's basically a non-issue for me. However, I'm having surgery tomorrow to correct my eyes and hopefully reduce or eliminate the need for glasses with such heavy prism. After that I might find more interest in working on chroma correcting solutions.

Finally, the computation of the mesh coordinates isn't something that's supported out of the box by the SDK. The problem is that the code in the SDK is intended to run in the shader, so for any given distorted pixel you're about to draw on the screen, you need to compute the undistorted UV coordinate from which it came, so you can then fetch that texel out of the scene texture. The computation of the mesh works in reverse. You're starting out with the undistorted XY coordinate and you need to find the distorted (warped) version of it. The Oculus SDK contains a DistortionFn() method that implements the first case, and a DistortionFnInverse() which I assume is intended to handle the second. Indeed, in my testing, DistortionFnInverse(DistortionFn(X)) does in fact equal X. However, in addition to using the distortion coefficients, the Oculus distortion shader also applies a post-distortion scale, because the Ks they've chosen result in an overall shrinking effect. The post-distortion scale enlarges the resulting image to reach a 'fit point', typically the left edge of the screen.
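
For reference, the forward function is just a polynomial in the squared radius. Here's a minimal Java sketch of it as I understand it from the SDK; the coefficient values would come from the HMD info, and the class and field names are my own:

public class Distortion {
  // K0..K3, as reported by the SDK for the headset
  private final double[] k;

  public Distortion(double[] k) {
    this.k = k;
  }

  // r is the distance of the (distorted, on-screen) point from the lens
  // center; the result is the corresponding undistorted radius in the
  // scene texture. The post-distortion scale is applied separately.
  public double distortionFn(double r) {
    double rSq = r * r;
    return r * (k[0] + rSq * (k[1] + rSq * (k[2] + rSq * k[3])));
  }
}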

For whatever reason, I wasn't able to get DistortionFnInverse to work to my satisfaction. So for the time being I'm using my own implementation of a binary search to zero in on a given Rd (distance of the distorted point from the lens center) given an input Ru (distance of the undistorted point from the lens center), and it seems to be working OK, if not perfectly. Examining the Oculus DistortionFnInverse() method, it appears to execute a similar binary search, but with a fixed number of iterations, while I've specified an epsilon value representing the maximum distance between the input coordinate and the result of reapplying the distortion function to the test candidate. However, I may need to reduce this epsilon (and perhaps move to doubles for this calculation), since looking at my mesh in point view it's clear that the curves aren't completely smooth.
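
Here's a stripped-down sketch of that search, assuming (as is true for the Rift's Ks) that the forward function is monotonically increasing and never shrinks its input, so the answer is bracketed by [0, Ru]. An iteration cap guards against an epsilon too small for the available precision:

public class InverseDistortion {
  // The same polynomial as the forward DistortionFn
  static double distortionFn(double[] k, double r) {
    double rSq = r * r;
    return r * (k[0] + rSq * (k[1] + rSq * (k[2] + rSq * k[3])));
  }

  // Given Ru (undistorted radius), find the Rd whose forward distortion
  // lands within epsilon of Ru.
  static double findDistortedRadius(double[] k, double ru, double epsilon) {
    double lo = 0.0, hi = ru;
    double rd = ru * 0.5;
    for (int i = 0; i < 64; ++i) {
      double result = distortionFn(k, rd);
      if (Math.abs(result - ru) <= epsilon) {
        break;
      }
      if (result > ru) {
        hi = rd; // overshot: the distorted radius must be smaller
      } else {
        lo = rd; // undershot: the distorted radius must be larger
      }
      rd = (lo + hi) * 0.5;
    }
    return rd;
  }
}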

Alex suggests I might be able to improve the results by removing the post-distortion scale and distributing it among the K values instead, which I'm going to try shortly.
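
If the post-distortion scale really is just a multiplier on the warped radius, that should work out to plain algebra, since s * r * (K0 + K1*r^2 + ...) is the same as r * (s*K0 + s*K1*r^2 + ...). A trivial sketch, with no claim that this matches what the SDK does internally:

public class DistortionScaling {
  // Fold a post-distortion scale factor into the K coefficients, so the
  // polynomial alone produces the already-scaled radius.
  public static double[] foldScaleIntoKs(double[] k, double postDistortionScale) {
    double[] scaled = new double[k.length];
    for (int i = 0; i < k.length; ++i) {
      scaled[i] = k[i] * postDistortionScale;
    }
    return scaled;
  }
}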

Thanks to Joe Ludwig from Valve who turned me on to the mesh based approach to distortion while discussing the Steam VR API.

Thanks also to my co-blogger Alex who is assisting me with.... stuff of some sort.  I don't know.  He has yet to publish anything so maybe he'll be motivated to explain something here.  :)
