VR Rendering with RealityServer 4.4 Update 46

RealityServer 4.4 build 1527.46 has just been released, adding Iray 2016.1.1, which includes support for rendering stereo, spherical VR imagery suitable for viewing on devices such as the Oculus Rift, HTC Vive, Samsung GearVR, OSVR and Google Cardboard viewers. There are also numerous small additions and bug fixes, as well as other new features such as spectral rendering; however, VR rendering is the headline item. In this article we will show you how to do simple VR rendering with RealityServer.

What is a VR Rendering?

There is a lot of different terminology and jargon surrounding VR today, and there are many different ways to produce content suitable for VR viewing. When we say VR content, we mean content intended to be viewed on a head-mounted display (HMD) which can show a different image to each eye (stereo) and tracks at least the rotation of your head. Some devices go further and track the position of your head as well, though the content we produce here won't exploit that; down the road, full head tracking will become much more relevant.

Two main types of content exist (within which there are many sub-types): pre-generated VR content and real-time VR content. Pre-generated VR content is made in advance, either through real-world capture with special cameras and techniques or through 3D rendering (what we will be doing here). Usually pre-generated content takes the form of a spherical image, or a pair of images if stereo is used. These may cover the full 360×180 degree field or some subset of it. This type of content allows the user to control only the direction they are looking – their gaze – and not their position within the scene.


Office scene by Luminova Japan rendered with the new VR capabilities of RealityServer 4.4. The view above is an embedded VR player by Bhautik Joshi called vrEmbed. Pressing the Cardboard icon on your mobile device places the viewer in full-screen stereo mode and applies the appropriate distortion for your Cardboard viewer. If the viewer above does not work on your device, try this link.

Real-time VR content, such as games, is rendered interactively and allows both gaze and movement within the VR environment. To be effective and not induce motion sickness, this requires very high frame rates and low input latency: typically at least 90 frames per second (per eye) and ideally 20 milliseconds or less of input latency. This is now within reach for high-end gaming PCs, or for low-end content on mobile devices. The Iray settings used in this article could in fact also be used when rendering interactively with Iray, allowing both gaze and movement changes; however, given the relatively low frame rate of Iray output (compared to games-engine-style rendering), we will focus only on pre-rendered panoramic imagery that lets the user control their gaze alone.

VR in RealityServer

If you haven’t already done so, please read the article on Exploring the RealityServer JSON-RPC API, since the way we build up the commands used below assumes you are familiar with it.
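All of the command batches in this article are simply posted to the server over HTTP. As a convenience, here is a minimal Python sketch for doing that; the URL below is an assumption based on a default local RealityServer installation listening on port 8080, so adjust it to match your own deployment.

import json
import urllib.request

# Assumed endpoint for a default local RealityServer installation;
# change the host and port to match your own deployment.
RS_URL = "http://localhost:8080/"

def call_rs(commands):
    """POST a JSON-RPC command batch and return the parsed responses."""
    request = urllib.request.Request(
        RS_URL,
        data=json.dumps(commands).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))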

To render a stereo, spherical VR image in RealityServer we actually have to render two images: one for the left eye and one for the right. Each will have a spherical camera distortion applied to it, and we will use a new feature introduced in RealityServer 4.4 build 1527.46 (from Iray 2016.1.1) that makes it easy to specify the eye separation used for the stereo effect and to account for it correctly within the spherical projection.

Loading the Scene and Camera Setup

We’ll start by loading one of the default scenes that comes with RealityServer and positioning the camera somewhere sensible for rendering a VR image. You should usually point the camera flat (level with the horizon) for a VR image. Keep in mind that where you are looking will set the initial view in your VR viewer. Here are the initial commands:

[
	{"jsonrpc": "2.0", "method": "create_scope", "params": {
		"scope_name" : "vrScope"
	}, "id": 1},

	{"jsonrpc": "2.0", "method": "use_scope", "params": {
		"scope_name" : "vrScope"
	}, "id": 2},

	{"jsonrpc": "2.0", "method": "import_scene", "params": {
		"scene_name" : "vrScene",
		"filename" : "scenes/meyemII/main.mi"
	}, "id": 3},

	{"jsonrpc": "2.0", "method": "instance_set_world_to_obj", "params": {
		"instance_name" : "mainCamera",
		"transform" : {
			"xx": 1.0, "xy": 0.0, "xz": 0.002, "xw": 0.0,
			"yx": 0.0, "yy": 1.0, "yz": 0.0, "yw": 0.0,
			"zx": -0.002, "zy": 0.0, "zz": 1.0, "zw": 0.0,
			"wx": 0.0005, "wy": -0.035, "wz": -0.25, "ww": 1.0
		}
	}, "id": 4},

	{"jsonrpc": "2.0", "method": "camera_set_resolution", "params": {
		"camera_name" : "mainCameraShape",
		"resolution" : { "x" : 6000, "y" : 3000 }
	}, "id": 5}
]

First we make a new scope for everything we will be doing, then use it and immediately import the scene. Next we set the transformation of the camera instance to put the camera in a sensible position with a nice default orientation. We also set up the camera resolution here. Note that you should always use a 2:1 aspect ratio when rendering a 360×180 degree panorama. The instance_name and camera_name parameters above are based on what is already in the scene; they will obviously be different for your own scenes.

Setup the Camera Distortion

Before setting up the stereo parameters we need to instruct RealityServer to render the spherical camera projection rather than the usual one. This is done simply by setting an attribute on the camera with element_set_attribute or element_set_attributes. For this we add the following commands to our sequence:

[
	...

	{"jsonrpc": "2.0", "method": "element_set_attributes", "params": {
		"element_name" : "mainCameraShape",
		"create" : true,
		"attributes" : {
			"mip_lens_focus" : {
				"type" : "Float32",
				"value" : 0.225
			},
			"mip_lens_distortion_type" : {
				"type" : "String",
				"value" : "spherical"
			}
		}
	}, "id": 6}

	{"jsonrpc": "2.0", "method": "element_set_attributes", "params": {
		"element_name" : "miDefaultOptions",
		"create" : true,
		"attributes" : {
			"progressive_rendering_max_samples" : {
				"type" : "Sint32",
				"value" : 1000
			},
			"progressive_rendering_max_time" : {
				"type" : "Sint32",
				"value" : 300
			}
		}
	}, "id": 7}
]

We also set the focus distance here to something sensible, to be safe in case depth of field is used, but the critical attribute is mip_lens_distortion_type, which we set to spherical to give us the panoramic rendering. We also set up some basic rendering termination conditions; obviously you may want to use different ones. This completes the first request you would send (you could also include the first render, but we prefer to separate that in case there are errors during loading).

Rendering the Left Eye

Now, in a new request, we need to set up the eye offset and render our image out. Before calling those commands we need to call use_scope so that we have access to the scene we previously loaded into a scope. Here is what the request to render the left eye looks like:

[
	{"jsonrpc": "2.0", "method": "use_scope", "params": {
		"scope_name" : "vrScope"
	}, "id": 1},

	{"jsonrpc": "2.0", "method": "element_set_attribute", "params": {
		"element_name" : "mainCameraShape",
		"create" : true,
		"attribute_name" : "mip_lens_stereo_offset",
		"attribute_type" : "Float32",
		"attribute_value" : -0.005
	}, "id": 2},

	{"jsonrpc": "2.0", "method": "render", "params": {
		"scene_name" : "vrScene",
		"renderer" : "iray",
		"render_context_options" : {
			"scheduler_mode" : {
				"type" : "String",
				"value" : "batch"
			}
		}
	}, "id": 3}
]

You need to decide what you want your eye separation (interpupillary distance, or IPD) to be; this corresponds to the distance between your eyes. A typical value would be 65mm. Note that the parameter is in scene units, so the appropriate value will depend on how your scene is set up. Higher values give a stronger stereo effect but can lead to visual discomfort. The mip_lens_stereo_offset attribute specifies the offset from the center to the right, so for the left eye it should be set to -IPD/2.0 and for the right eye to IPD/2.0.
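As a quick worked example of the arithmetic, here is a short Python sketch; the scene-unit scale below is an assumption, so check how your own scene is modelled. (For reference, the 0.005 offsets used in this article correspond to an IPD of 0.01 scene units.)

# Worked example: convert a desired IPD into stereo offset values.
# The scene-unit scale here is an assumption; check your own scene.
ipd_mm = 65.0               # interpupillary distance in millimetres
units_per_mm = 0.001        # assumed: scene modelled in metres

ipd_scene = ipd_mm * units_per_mm        # 0.065 scene units
left_offset = -ipd_scene / 2.0           # -0.0325 for the left eye
right_offset = ipd_scene / 2.0           # +0.0325 for the right eye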

Rendering the Right Eye

The previous command will return a completed render corresponding to the left eye. We now need to make another request to render the right eye; it looks almost identical, with one critical change:

[
	{"jsonrpc": "2.0", "method": "use_scope", "params": {
		"scope_name" : "vrScope"
	}, "id": 1},

	{"jsonrpc": "2.0", "method": "element_set_attribute", "params": {
		"element_name" : "mainCameraShape",
		"create" : true,
		"attribute_name" : "mip_lens_stereo_offset",
		"attribute_type" : "Float32",
		"attribute_value" : 0.005
	}, "id": 2},

	{"jsonrpc": "2.0", "method": "render", "params": {
		"scene_name" : "vrScene",
		"renderer" : "iray",
		"render_context_options" : {
			"scheduler_mode" : {
				"type" : "String",
				"value" : "batch"
			}
		}
	}, "id": 3}
]

Notice the value of mip_lens_stereo_offset is now set to the positive value 0.005. This request returns another image, this time for the right eye, so you now have the two images representing the full VR content.
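Putting the two requests together, here is a minimal Python sketch that renders both eyes in a loop, using the call_rs helper from earlier. How the rendered image comes back in the response depends on your RealityServer configuration, so the result handling at the end is deliberately left as a placeholder.

# Render both eyes by flipping the sign of the stereo offset.
# Assumes the call_rs helper above and the vrScope/vrScene names
# used throughout this article.
IPD = 0.01  # eye separation in scene units; the offset is +/- IPD/2

for eye, sign in (("left", -1.0), ("right", 1.0)):
    responses = call_rs([
        {"jsonrpc": "2.0", "method": "use_scope", "params": {
            "scope_name": "vrScope"}, "id": 1},
        {"jsonrpc": "2.0", "method": "element_set_attribute", "params": {
            "element_name": "mainCameraShape",
            "create": True,
            "attribute_name": "mip_lens_stereo_offset",
            "attribute_type": "Float32",
            "attribute_value": sign * IPD / 2.0}, "id": 2},
        {"jsonrpc": "2.0", "method": "render", "params": {
            "scene_name": "vrScene",
            "renderer": "iray",
            "render_context_options": {
                "scheduler_mode": {"type": "String", "value": "batch"}
            }}, "id": 3}])
    # Extract and save the image from the responses here; the exact
    # shape of the render result depends on your server configuration.
    print(eye, "eye render complete")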

Cleaning Up

If you are finished with the scene you should clean up your scope to free memory and tidy up the database:

[
	{"jsonrpc": "2.0", "method": "delete_scope", "params": {
		"scope_name" : "vrScope"
	}, "id": 11}
]

Of course, if you will continue using the scene, you should leave the scope intact.

Results

Here is the output you should see when using the above commands on a default RealityServer installation. The left- and right-eye images are shown below; switching between them reveals the subtle difference between the frames, which is not immediately obvious when viewing the images side by side.

Left Eye

Right Eye

Viewing

So you have your VR renders done, but how do you view them? Unfortunately each type of VR hardware seems to take a different approach, and currently there is no accepted standard format for storing stereo, spherical panoramic imagery. We will cover some options here for viewing with Google Cardboard based viewers, which are the cheapest and easiest way to get started. Note that several of the common viewers today expect cubemap rather than latitude/longitude images such as those produced here.

For iOS we currently recommend the VR Gallery app by Holumino. This is one of the few applications that lets you load the left and right images separately and doesn’t require you to combine them into a single image. It is also available for Android; however, on Android you might want to try the NVIDIA VR Viewer instead. This is one of the viewers that requires you to merge your images together.

In general, the landscape of VR viewing apps is changing rapidly, so we expect that our recommendations may well change down the road as well. What you are looking for is a viewer that supports image viewing (many only support video) and supports your particular device.

Google Cardboard Viewer

If you do have to merge your images together, you can do it in a tool like Adobe Photoshop, or, if you have access to ImageMagick, a simple command line can do it. Here are a few helpful commands:

# Merge left and right images into a side-by-side format image
montage left.jpg right.jpg -tile 2x1 -geometry +0+0 side-by-side.jpg

# Merge left and right images into a top-bottom format with left on the top
montage left.jpg right.jpg -tile 1x2 -geometry +0+0 top-bottom-left-on-top.jpg

# Merge left and right images into a top-bottom format with right on the top
montage right.jpg left.jpg -tile 1x2 -geometry +0+0 top-bottom-right-on-top.jpg

If you are using the vrEmbed tool for web-based viewing, it allows you to specify how your image is divided into left and right eyes, so either format will work. The image embedded at the start of this article uses the side-by-side format.

Just Scratching the Surface

These are really the early days of VR in RealityServer and Iray, and we look forward to announcing additional features in the future, as well as updating this material to reference the latest viewers as they become available. If you have a viewer that works well for you, or want to share some VR content you have created with RealityServer, we would love to hear from you.
