What’s New in RealityServer 5.0

We are happy to announce the immediate availability of RealityServer 5.0. There are some great new features, so we've put together a quick list of the headline items. We will also be posting additional articles on the individual features and how to use them, but for now, take a look at what's new.

Iray Photoreal Rendering

Iray Photoreal Rendering by Floorplanner.com

Major Features

Iray 2017

The release of NVIDIA Iray 2017 brings many significant features. Here are some of the headline items. There are additionally numerous smaller performance improvements, bug fixes and other updates related to Iray.

No More Light Limits with Iray Interactive

Iray Interactive is a faster, more approximate rendering mode than Iray Photoreal; however, it has always had one serious limitation: it could not support more than 16 light sources. While it was very useful for exterior scenes primarily illuminated by HDRI environments or sun and sky systems, for interiors or night-time exteriors it just wasn't possible to get good results with only 16 light sources available. This limit has now been lifted, opening the door to much more widespread use of Iray Interactive.

Los Angeles County Lighting (128,000 lights)

In the image above you can see a scene with over 100,000 (yes, one hundred thousand) light sources rendered with Iray Interactive. Performance is impressive to say the least. Where traditionally more approximate rendering modes suffer extreme performance loss when lots of light sources are used, Iray Interactive is now easily able to handle huge numbers of lights.

3D Backplates

The virtual backplate feature is very useful when you want to swap in a different background from the one you are using to illuminate your scene. This works well when you have a high-resolution still image to use as the backplate; however, in scenes where you can navigate the view it becomes more difficult, as you need a separate backplate for each camera position. A frequent request from customers has been the ability to supply a different environment for viewing through windows or around objects while keeping the existing lighting.

Vertical Cross Map on Backplate Mesh

Iray 2017 introduces the Backplate Mesh feature which allows you to specify any mesh object in your scene to act as a backplate. You just need to ensure the mesh has suitable UV coordinates but otherwise it can be any shape you like. The backplate image will be projected onto the mesh using the UV coordinates and replace the directly visible background in your images.
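In RealityServer terms, assigning a backplate mesh comes down to setting an attribute via the JSON-RPC API. The sketch below uses the standard element_set_attribute command; note that the attribute name 'backplate_mesh' and its placement on the options element are assumptions for illustration only, so check the documentation for your build before relying on them.

```javascript
// Sketch: point the scene options at a mesh instance to use as the
// backplate. element_set_attribute is a standard RealityServer command;
// the attribute name 'backplate_mesh' is an assumed placeholder.
function backplateMeshCommand(optionsName, meshInstanceName) {
  return {
    method: 'element_set_attribute',
    params: {
      element_name: optionsName,
      attribute_name: 'backplate_mesh',   // assumed attribute name
      attribute_type: 'String',
      attribute_value: meshInstanceName,
      create: true
    }
  };
}

const cmd = backplateMeshCommand('opt', 'backplate_instance');
```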

Sub-surface Scattering in Iray Interactive

You can now see the effect of materials with a sub-surface scattering component (those that define a scattering_coefficient in their MDL material_volume) when using Iray Interactive mode. Previously this component was ignored and only the absorption_coefficient was used. Iray Interactive uses a fast approximation and so results may differ from the physically correct Iray Photoreal mode, however for materials with moderate sub-surface scattering the results will look great.

(Left) Iray 2016.3 (Right) Iray 2017

MDL Projector Functions

Often you get geometry that doesn't have UV texture coordinates already embedded in it, for example geometry tessellated from CAD systems with free-form surface models, or from less than ideal 3D file formats such as STL. Iray 2017 adds a great new feature called Projector Functions, which allows you to use procedural texture projectors (or ones you write yourself in MDL) to generate and apply UV texture coordinates to your geometry.

You might wonder why you couldn't just do this in your MDL material itself. Of course this is possible; however, doing so ties the MDL material closely to the geometry it is applied to, making it much less portable. Using projector functions you can set up the material once, in a way that expects the UV texture coordinates to come from the object, and then use the projector functions to generate them on the object.
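To see what a projector does conceptually, here is a minimal planar projection written in plain JavaScript. The real feature runs as an MDL function evaluated by Iray; this sketch only illustrates the idea of deriving UVs from geometry positions.

```javascript
// Conceptual sketch of a planar UV projector: world-space X/Z become
// U/V, scaled by a tiling factor. The actual projector runs in MDL.
function planarProjectUV(position, scale = 1.0) {
  return { u: position.x * scale, v: position.z * scale };
}

// A vertex at (2, 5, 3) with a 0.5 tiling factor gets UV (1.0, 1.5).
const uv = planarProjectUV({ x: 2.0, y: 5.0, z: 3.0 }, 0.5);
```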

Complex Geometry with Triplanar UV Projection

Section Plane Scene Elements

Iray 2017 adds a new element type for section planes. Previously, section planes were added by setting attributes on the scene options. The new section plane scene elements allow you to add them to the scene, then instance and transform them like regular objects. This makes them simpler to manipulate and makes the way they are transformed consistent with other scene elements.

RealityServer

WebSockets Based Streaming

One of the most requested features from RealityServer users has been to provide an improved, lower latency method for streaming rendering results from the server to the client. RealityServer 5.0 introduces WebSocket streaming for persistent, bi-directional communication between the client and server. Using WebSockets significantly reduces latency and allows the server to push imagery to the client instead of requiring the client to constantly poll for new images.

To get users started we have updated our standard Render Loop Demo application with WebSockets support and have added a WebSockets streaming client module to our JavaScript client libraries. If you are already using the JavaScript client library then this makes getting started with WebSockets extremely simple. If you want to roll your own client you can use our example to understand the protocol.

Currently we support streaming image data from the server's render loop to the client over WebSockets, as well as updating camera data (including arbitrary camera attributes) on the server from the client. This communication all happens over the same persistent connection, avoiding the overhead of setting up and tearing down an HTTP connection for every request.
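As a rough illustration, a streaming client ends up sending small camera-update messages down the same socket the imagery arrives on. The message shape below is an assumption for illustration; the actual wire protocol is documented by the Render Loop Demo and the JavaScript client library module.

```javascript
// Illustrative only: the kind of compact message a WebSocket streaming
// client might send to update the camera. Field names are assumptions;
// the real protocol is defined by the shipped client library.
function cameraUpdateMessage(cameraName, transform) {
  return JSON.stringify({
    type: 'camera_update',   // assumed message type
    camera: cameraName,
    transform: transform     // 4x4 matrix as 16 row-major floats
  });
}

const msg = cameraUpdateMessage('main_camera',
  [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, -5, 1]);
```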

Since camera movement and the image stream are the most latency-sensitive parts of any application, we have chosen to implement those over WebSockets first. In the future we plan to enable additional functionality over WebSockets, potentially including video streaming if we can do so in a way that has suitably broad browser support. For now, you can still use your normal way of sending commands for everything that isn't explicitly supported over WebSockets, and the results will get picked up by the stream.

V8 Server-side JavaScript Engine

This one is big! RealityServer has had a server-side JavaScript command API since version 3.0, however this was based on a relatively ancient version of the Mozilla SpiderMonkey JavaScript runtime. Starting with RealityServer 5.0 we are phasing out the old JavaScript command API in favour of a new one based on the V8 JavaScript runtime developed by the Chromium Project.

If you haven't used server-side JavaScript commands yet, now is a great time to start. They allow you to build your own commands that are automatically exposed over the RealityServer JSON-RPC API and called just like any other command. Within these commands you can freely call any built-in command (or other JavaScript commands) without a network round trip for each call. This is great when you have chains of commands whose results depend on previous commands, or if you want to modularise commonly used functionality.

With the inclusion of V8, you now have full access to all modern JavaScript language features within your server-side JavaScript commands. As of writing we are including V8 version 5.6.326.42 and plan to keep this up to date as future versions are announced. All of our previous JavaScript commands have been ported to V8 and included in RealityServer. You can still use the old SpiderMonkey based engine for now if you wish, however it will be deprecated at some point in the future.

We have also started to wrap up many of the common scene elements and data types in pre-defined JavaScript classes, so you can easily manipulate your scene data without resorting to large numbers of command calls. For example, you can now create an Instance as both a scene element and a JavaScript object, then directly set attributes on it using familiar object-oriented dot notation. So far we have wrapped the most commonly used scene elements, but more are coming.

There are also a number of JavaScript classes for the most common data types you might want to use, such as RS.math.Matrix4x4, RS.math.Color, RS.math.Vector3 and many more. These can be initialised in several commonly used ways, eliminating large amounts of boilerplate checking code from your JavaScript commands.

We have also included an implementation of require, which allows you to heavily modularise your code and re-use it between your commands. We are already exploiting this for the wrapper classes mentioned above and have deployed it to great effect on several internal projects. In the past, with SpiderMonkey, it was necessary to repeat all of the shared code in every command. Not anymore.
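The kind of helper this enables looks like the sketch below: a small vector class you would normally keep in its own module and pull in with require() from every command that needs it. The class and module names here are our own for illustration, not the RS.* wrappers that ship with RealityServer.

```javascript
// A shared helper of the sort you can now keep in one module and
// require() from any command, instead of repeating it everywhere.
// (Shown inline here so the example is self-contained.)
class Vector3 {
  constructor(x = 0, y = 0, z = 0) { this.x = x; this.y = y; this.z = z; }
  add(v) { return new Vector3(this.x + v.x, this.y + v.y, this.z + v.z); }
  scale(s) { return new Vector3(this.x * s, this.y * s, this.z * s); }
  length() { return Math.sqrt(this.x ** 2 + this.y ** 2 + this.z ** 2); }
}

// In a command you would write something like:
//   const { Vector3 } = require('./vector_utils');
const offset = new Vector3(1, 0, 4).add(new Vector3(2, 0, 0));
const len = offset.length();
```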

If you like Node.js (but wish you could develop everything synchronously) then we think you'll love this new way to work in RealityServer. We are just scratching the surface of where this will go, but there are already some very cool things that can be done with V8 integrated. We'll be preparing some future articles to demo these features very soon.

Single Pass Stereo Rendering

We have had stereo, including stereo VR rendering, since RealityServer 4.4 build 1527.46. However, previously it required you to render two images separately, changing a parameter between renders. In RealityServer 5.0 you can perform a stereo render, whether it's for VR or just a normal image, in a single pass. You can render side-by-side or top-and-bottom style images; the image will automatically be doubled in width or height as needed and composited for you on the server into a single image.

All you need is to set the standard Iray mip_lens_stereo_offset attribute to specify the eye separation and then the new mip_lens_combined_stereo string attribute on your camera to the desired layout. Use vertical_lr for a top/bottom image with the left eye on the top and the right eye on the bottom. Also available are vertical_rl, horizontal_lr and horizontal_rl.
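The two attributes above can be set with the standard element_set_attribute command. The attribute names come straight from the description above; the exact parameter types are worth double-checking against the command reference for your build.

```javascript
// Configure single pass stereo on a camera using the two attributes
// described above, as a batch of element_set_attribute commands.
function stereoCameraCommands(cameraName, eyeSeparation, layout) {
  const layouts = ['vertical_lr', 'vertical_rl', 'horizontal_lr', 'horizontal_rl'];
  if (!layouts.includes(layout)) {
    throw new Error('Unknown stereo layout: ' + layout);
  }
  return [
    { method: 'element_set_attribute',
      params: { element_name: cameraName,
                attribute_name: 'mip_lens_stereo_offset',
                attribute_type: 'Float32',
                attribute_value: eyeSeparation,
                create: true } },
    { method: 'element_set_attribute',
      params: { element_name: cameraName,
                attribute_name: 'mip_lens_combined_stereo',
                attribute_type: 'String',
                attribute_value: layout,
                create: true } }
  ];
}

const cmds = stereoCameraCommands('main_camera', 0.065, 'vertical_lr');
```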

Single Pass Stereo Rendering

Iray Bridge Server

Some of our users have asked if they could use their RealityServer installations as Iray Bridge servers for their Iray-based desktop applications to connect to. This can be useful, for example, if you are using your own Iray SDK based application, or any Iray application that supports ad-hoc bridge connections. When enabled, RealityServer offers remote streaming with Iray Bridge to those client applications, similar to Iray Server but without the queuing and management functionality.

A side benefit is that when you perform such a rendering the scene data is loaded into the same shared database that RealityServer uses, so you can potentially utilise RealityServer functionality to then export or capture this scene data for use outside the Iray based application. You can enable this functionality from your realityserver.conf file (there is a commented out section showing how to do it).

Render Loop Outline Rendering and Picking

Most people now use the render loop functionality when doing interactive rendering with RealityServer rather than polling, and with the introduction of WebSocket streaming we think even more users will take that approach. Unfortunately, picking (casting a ray at a click point and seeing what it hits and where) is complicated quite a bit by using the render loop.

To help people understand how it works, we have now added picking to our main render loop example. It shows how you can use the new default render loop handler to perform a pick operation, and then use another new feature to dynamically highlight the picked object by drawing an outline around it. This is extremely useful for applications where you want to select objects and indicate to the user what has been selected.
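A common prelude to any pick request, regardless of handler, is converting a browser click into the coordinates of the rendered image, since the canvas is often displayed at a different size than the render resolution. Whether your pick call then takes pixel or normalised coordinates depends on the command, so this sketch computes both.

```javascript
// Convert a browser click into render-image coordinates, accounting
// for the canvas being displayed at a different size than the render.
function clickToPickCoords(event, canvasRect, renderWidth, renderHeight) {
  const px = (event.clientX - canvasRect.left) * (renderWidth / canvasRect.width);
  const py = (event.clientY - canvasRect.top) * (renderHeight / canvasRect.height);
  return {
    pixel: { x: Math.round(px), y: Math.round(py) },
    normalised: { x: px / renderWidth, y: py / renderHeight }
  };
}

// An 800x400 render shown in a 400x200 canvas: a click at (200, 100)
// maps to render pixel (400, 200).
const coords = clickToPickCoords(
  { clientX: 200, clientY: 100 },
  { left: 0, top: 0, width: 400, height: 200 },
  800, 400);
```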

Dynamically Rendered Object Outline

Texture Upload Command

A frequent use case we are asked about is uploading a texture to RealityServer directly, for example in an online configurator where the user might be allowed to upload their own fabric pattern or image to be printed on a product. Previously you would need to find a way to get this texture onto the filesystem of the server running RealityServer. This often meant using another application server for this purpose.

In RealityServer 5.0 we have added the image_reset_from_base64 command. This allows you to change the image associated with a texture used in your scene by base64-encoding the image data and including it in the command. The data will then be loaded by RealityServer and used immediately.
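From Node.js, preparing the payload is a one-liner with Buffer. The parameter names in this sketch are assumptions for illustration; consult the command reference for the exact signature of image_reset_from_base64 in your release.

```javascript
// Sketch: base64-encode image bytes for image_reset_from_base64.
// Parameter names (image_name, data, mime_type) are assumed here.
function makeImageResetCommand(imageName, imageBytes, mimeType) {
  return {
    method: 'image_reset_from_base64',
    params: {
      image_name: imageName,
      data: Buffer.from(imageBytes).toString('base64'),
      mime_type: mimeType
    }
  };
}

// A few bytes (the start of the PNG signature) stand in for a real
// uploaded image in this sketch.
const uploadCmd = makeImageResetCommand(
  'fabric_image', [0x89, 0x50, 0x4e, 0x47], 'image/png');
```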

mig Image Format

RealityServer supports a lot of different pixel formats for the various things you might need to render. In addition to standard image data, you may also want to render depth maps, UV texture coordinates, normals, irradiance data and many others. Stuffing this data into colour channels often isn't the best approach when you need to be absolutely sure the data is written and read in exactly the format you want.

To address this we are introducing the mig image file format in RealityServer 5.0. We are providing both read and write capability for this format and it supports all of the pixel types which our render command can output natively. That means:

  • Rgb
  • Rgba
  • Rgb_fp
  • Color
  • Rgbe
  • Rgbea
  • Rgba_16
  • Rgb_16
  • Sint8
  • Sint32
  • Float32
  • Float32<2>
  • Float32<3>
  • Float32<4>

mig for Complex Data

All of these are now supported and can be written and then read back without fear of being re-interpreted, as would happen when storing some of these data types in a more conventional format such as TIFF or PNG. This is great for data like texture coordinates and normals, which don't always take kindly to being coerced into colours. When loading mig images the pixel format is always preserved.

The file format is a simple uncompressed binary format and the specifications are available upon request should you wish to develop your own tools to read and write images in this format.
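The mig layout itself is available from migenius on request, so the sketch below does not attempt to reproduce it. It only illustrates the point of a raw float format: Float32&lt;3&gt; pixels survive a write/read round trip bit-for-bit, with no quantisation into 8-bit colour channels.

```javascript
// Why a raw float format matters: pack Float32<3> pixels into a binary
// buffer and read them back unchanged. (Illustrative packing only; the
// real mig layout is available from migenius on request.)
function packFloat32x3(pixels) {           // pixels: array of [x, y, z]
  const data = new Float32Array(pixels.length * 3);
  pixels.forEach((p, i) => data.set(p, i * 3));
  return Buffer.from(data.buffer);         // what you would write to disk
}

function unpackFloat32x3(buffer) {
  const data = new Float32Array(
    buffer.buffer, buffer.byteOffset, buffer.byteLength / 4);
  const pixels = [];
  for (let i = 0; i < data.length; i += 3) {
    pixels.push([data[i], data[i + 1], data[i + 2]]);
  }
  return pixels;
}

const roundTrip = unpackFloat32x3(packFloat32x3([[0.25, -1.5, 2.0]]));
```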

Import Elements from String

We have always had the import_elements_from_string command in RealityServer; however, previously it only supported providing MDL content. This feature has now been extended to support any format for which you have a valid importer, including the standard .mi and .obj formats.

In some cases it can be useful to upload transient content to RealityServer directly, rather than relying on getting it onto the server's filesystem to load as you normally would. For example, you may want to allow a user to upload a model in Wavefront OBJ format.

As long as your file format is string based, you can embed the content within this command's parameters and have it loaded by RealityServer.
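For example, a tiny OBJ model can be embedded directly in the command. The parameter names in this sketch are assumptions for illustration; check the command reference for the exact signature of import_elements_from_string.

```javascript
// Sketch: embedding a one-triangle Wavefront OBJ model in an
// import_elements_from_string call. Parameter names are assumed.
const objSource = [
  'v 0 0 0',
  'v 1 0 0',
  'v 0 1 0',
  'f 1 2 3'
].join('\n');

function importFromStringCommand(source, format) {
  return {
    method: 'import_elements_from_string',
    params: {
      str: source,      // assumed parameter name
      format: format    // e.g. 'obj'; assumed parameter name
    }
  };
}

const importCmd = importFromStringCommand(objSource, 'obj');
```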

Smaller Features

During development and creation of internal projects we found a few gaps in our commands that needed filling. We managed to squeeze a few extras into our work program for RealityServer 5.0.

Displacement Related Commands

Added geometry_get_max_displacement and geometry_set_max_displacement to control the maximum displacement settings on meshes.

Texture Compression Commands

Added texture_get_compression and texture_set_compression to allow enabling in-memory compression of texture data to save on GPU resources.

Light Bounding Box

The element_get_bounding_box command now supports retrieving bounding box information for Light elements. This is useful for area light sources.

Get MDL Definition Name

The mdl_get_definition command has been added to retrieve the name of the MDL definition used to create a material or function. This can be very useful when you want to copy a material by creating another instance from the same definition.

Licensing Model

RealityServer 5.0 switches from a per-GPU licensing model to a per-server licensing model. To access the new model you must be using RealityServer 5.0; once you are, additional GPUs in your server will no longer require extra licenses.

Available Now!

If you haven’t received your update yet please let us know. If you have never tried RealityServer this is a great time to get started. Contact us for more information.

Paul Arden

Paul Arden has worked in the computer graphics industry for over 20 years, co-founding the architectural visualisation practice Luminova out of university before moving to mental images and NVIDIA to manage the cloud-based rendering solution RealityServer, now managed by migenius, where Paul serves as CEO.
