We are happy to announce the immediate availability of RealityServer 5.0. There are some great new features so we’ve put together a quick list of the headline items. We will also be posting additional articles on the individual features and how to use them but for now take a look at what’s new.
With the release of NVIDIA Iray 2017, many significant features have been added. Here are some of the headline items; beyond these there are also numerous smaller performance improvements, bug fixes and other updates related to Iray.
Iray Interactive is a faster, more approximate rendering mode than Iray Photoreal; however, it has always had one serious limitation: it could not support more than 16 light sources. While it was very useful for exterior scenes primarily illuminated by HDRI environments or sun-and-sky systems, for interiors or night-time exteriors it just wasn’t possible to get good results with only 16 light sources available. This limit has now been lifted, opening the door for much more widespread use of Iray Interactive.
In the image above you can see a scene with over 100,000 (yes, one hundred thousand) light sources rendered with Iray Interactive. Performance is impressive to say the least. Where traditionally more approximate rendering modes suffer extreme performance loss when lots of light sources are used, Iray Interactive is now easily able to handle huge numbers of lights.
The virtual backplate feature is very useful when you want to swap in a different background to the one you are using to illuminate your scene. This works great for fixed views where you have a high resolution still image to use as the backplate; however, in scenes where you can navigate the view it becomes more difficult, as you need a separate backplate for each viewpoint. A frequent request from customers has been for the ability to supply a different environment for viewing through windows or around objects while keeping the existing lighting.
Iray 2017 introduces the Backplate Mesh feature which allows you to specify any mesh object in your scene to act as a backplate. You just need to ensure the mesh has suitable UV coordinates but otherwise it can be any shape you like. The backplate image will be projected onto the mesh using the UV coordinates and replace the directly visible background in your images.
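As a sketch of how a client might wire this up, the following builds a JSON-RPC command that points a camera at a backplate mesh instance. The attribute name `backplate_mesh`, the element it is set on and the parameter names are assumptions for illustration only; check the Iray and RealityServer documentation for your release.

```javascript
// Sketch: designate a mesh instance as the backplate via a generic
// attribute-setting command. 'backplate_mesh' is an assumed attribute
// name, and the element/parameter names are illustrative, not confirmed.
function makeBackplateMeshCommand(cameraName, meshInstanceName, id) {
  return {
    jsonrpc: '2.0',
    method: 'element_set_attribute',
    params: {
      element_name: cameraName,
      attribute_name: 'backplate_mesh',  // assumed attribute name
      attribute_type: 'Ref',             // reference to the mesh instance
      attribute_value: meshInstanceName,
      create: true
    },
    id: id
  };
}

const backplateCmd = makeBackplateMeshCommand(
  'ex_camera', 'ex_backplate_instance', 1);
```

The mesh named here would carry the UV coordinates used to project the backplate image, as described above.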
You can now see the effect of materials with a sub-surface scattering component (those that define a scattering_coefficient in their MDL material_volume) when using Iray Interactive mode. Previously this component was ignored and only the absorption_coefficient was used. Iray Interactive uses a fast approximation and so results may differ from the physically correct Iray Photoreal mode, however for materials with moderate sub-surface scattering the results will look great.
Often you get geometry that doesn’t have UV texture coordinates already embedded in it, for example geometry tessellated from CAD systems with free-form surface models, or less-than-ideal 3D file formats such as STL. Iray 2017 adds a great new feature called Projector Functions. This allows you to use procedural texture projectors (or ones you write yourself in MDL) to generate and apply UV texture coordinates to your geometry.
You might wonder why you couldn’t just do this in your MDL material itself, and of course this is possible; however, doing so closely ties the MDL material to the geometry it is applied to, making it much less portable. Using projector functions you can set up the material once in a way that expects the UV texture coordinates to come from the object, and then use the projector functions to create them on the object.
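Purely for illustration, attaching a projector to an object might look something like the command below. The attribute name `projector`, the attribute type and the parameter names are all hypothetical placeholders; consult the Iray 2017 projector function documentation for the real mechanism.

```javascript
// Illustrative sketch only: attach a (hypothetical) UV projector function
// to a geometry object. The attribute name 'projector' and the element
// names are invented for this example, not confirmed API.
function makeProjectorCommand(objectName, projectorFunctionName, id) {
  return {
    jsonrpc: '2.0',
    method: 'element_set_attribute',
    params: {
      element_name: objectName,
      attribute_name: 'projector',        // assumed attribute name
      attribute_type: 'Ref',              // reference to the MDL function
      attribute_value: projectorFunctionName,
      create: true
    },
    id: id
  };
}

const projCmd = makeProjectorCommand(
  'ex_cad_object', 'ex_cubic_projector_function', 1);
```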
Iray 2017 adds a new element type for section planes. Previously section planes were added by setting attributes on the scene options. The new section plane scene elements allow you to add them to the scene, and to instance and transform them like regular objects. This makes them simpler to manipulate and makes the way they are transformed consistent with other scene elements.
One of the most requested features from RealityServer users has been to provide an improved, lower latency method for streaming rendering results from the server to the client. RealityServer 5.0 introduces WebSocket streaming for persistent, bi-directional communication between the client and server. Using WebSockets significantly reduces latency and allows the server to push imagery to the client instead of requiring the client to constantly poll for new images.
Currently we support streaming image data from the render loop on the server to the client over WebSockets, as well as updating camera data (including arbitrary camera attributes) on the server from the client. This communication all happens over the same persistent connection, avoiding the overhead of setting up and tearing down an HTTP connection for every request.
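A minimal sketch of the client side is shown below. The endpoint URL and the message shape (the `type` field and its contents) are assumptions for illustration; a real application should follow the RealityServer WebSocket protocol or use the provided client library.

```javascript
// Sketch of a WebSocket streaming client. The message format here is
// hypothetical; only the general pattern (persistent connection, server
// pushes frames, client pushes camera updates) reflects the feature.
function makeCameraUpdateMessage(cameraInstanceName, transform) {
  return JSON.stringify({
    type: 'camera_update',              // hypothetical message type
    camera_instance: cameraInstanceName,
    transform: transform                // 4x4 transform matrix rows
  });
}

// Usage (in a browser, URL is illustrative):
//   const ws = new WebSocket('ws://host:8080/render_loop_stream/');
//   ws.onmessage = (e) => displayImage(e.data); // server pushes images
//   ws.send(makeCameraUpdateMessage('ex_camera_instance', identity));

const identity =
  [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]];
const cameraMsg =
  JSON.parse(makeCameraUpdateMessage('ex_camera_instance', identity));
```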
Since camera movement and the image stream are the most latency-sensitive parts of any application, we have chosen to implement those over WebSockets first. In the future we plan to enable additional functionality over WebSockets, potentially including video streaming if we can do so with suitably broad browser support. For now, you can still use your normal way of sending commands for everything that isn’t explicitly supported over WebSockets, and the results will get picked up by the stream.
We have also included an implementation of require, which allows you to heavily modularise your code and re-use code between your commands. We are already exploiting this for the wrapper classes mentioned above and have deployed it to great effect on several internal projects. In the past, with SpiderMonkey, it was necessary to repeat all of the shared code in every command. Not anymore.
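As a sketch of the pattern (file names and helpers are invented for this example), a shared module keeps a helper in one place instead of pasting it into every command:

```javascript
// math_utils.js — a hypothetical shared module. With require support,
// a command file would pull this in with:
//   const { degToRad } = require('./math_utils');
// instead of repeating the function body in every command.
function degToRad(degrees) {
  return degrees * Math.PI / 180;
}
// In the module file the helper would be exported, e.g.:
//   module.exports = { degToRad };

// Any command can then reuse the helper:
const halfTurn = degToRad(180);  // equals Math.PI
```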
If you like Node.js (but wish you could develop everything synchronously) then we think you’ll love this new way to work in RealityServer. We are just scratching the surface of where this will go, but there are already some very cool things that can now be done with V8 integrated. We’ll be preparing some future articles to demonstrate these features very soon.
We have had stereo, including stereo VR rendering since RealityServer 4.4 build 1527.46. However previously it required you to render two images separately, changing a parameter in between renders. In RealityServer 5.0 you can perform a stereo render, whether it’s for VR or just a normal image, in a single pass. You can render side-by-side or top-and-bottom style images and the image will automatically be doubled in width or height as needed and composited for you on the server into a single image.
All you need to do is set the standard Iray mip_lens_stereo_offset attribute to specify the eye separation, and then set the new mip_lens_combined_stereo string attribute on your camera to the desired layout. Use vertical_lr for a top/bottom image with the left eye on the top and the right eye on the bottom. Also available are vertical_rl, horizontal_lr and horizontal_rl.
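The two attribute updates can be sketched as follows. The attribute names come from the text above; the command and parameter names follow a generic attribute-setting pattern and should be checked against your RealityServer documentation.

```javascript
// Sketch: the two camera attribute updates for single-pass stereo.
// mip_lens_stereo_offset and mip_lens_combined_stereo are the real
// attribute names; command/parameter names are assumptions.
function makeStereoCommands(cameraName, eyeSeparation, layout, firstId) {
  return [
    {
      jsonrpc: '2.0',
      method: 'element_set_attribute',
      params: {
        element_name: cameraName,
        attribute_name: 'mip_lens_stereo_offset',
        attribute_type: 'Float32',
        attribute_value: eyeSeparation,  // eye separation in scene units
        create: true
      },
      id: firstId
    },
    {
      jsonrpc: '2.0',
      method: 'element_set_attribute',
      params: {
        element_name: cameraName,
        attribute_name: 'mip_lens_combined_stereo',
        attribute_type: 'String',
        attribute_value: layout,         // e.g. 'horizontal_lr'
        create: true
      },
      id: firstId + 1
    }
  ];
}

const stereoCmds = makeStereoCommands('ex_camera', 0.065, 'horizontal_lr', 1);
```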
Some of our users have asked if they could use their RealityServer installations as Iray Bridge servers for their Iray-based desktop applications to connect to. This can be useful, for example, if you are using your own Iray SDK based application or any Iray application that supports ad-hoc bridge connections. When in use, RealityServer offers remote streaming over Iray Bridge to those client applications, similar to Iray Server but without the queuing and management functionality.
A side benefit is that when you perform such a rendering the scene data is loaded into the same shared database that RealityServer uses, so you can potentially utilise RealityServer functionality to then export or capture this scene data for use outside the Iray based application. You can enable this functionality from your realityserver.conf file (there is a commented out section showing how to do it).
Most people now use the render loop functionality when doing interactive rendering with RealityServer rather than polling, and with the introduction of WebSocket streaming we think even more users will take that approach. Unfortunately picking (casting a ray at a click point and seeing what it hits and where) is complicated quite a bit by using the render loop.
To help people understand how it works, we have added picking to our main render loop example, showing how you can use the new default render loop handler to perform a pick operation and then use another new feature we added to dynamically highlight the picked object by drawing an outline around it. This is extremely useful for applications where you want to select objects and indicate to the user what has been selected.
A frequent use case we are asked about is uploading a texture to RealityServer directly, for example in an online configurator where the user might be allowed to upload their own fabric pattern or image to be printed on a product. Previously you would need to find a way to get this texture onto the filesystem of the server running RealityServer. This often meant using another application server for this purpose.
In RealityServer 5.0 we have added the image_reset_from_base64 command. This allows you to change the image associated with a texture used in your scene by base64 encoding the image data and including it on the command. The data will then be loaded by RealityServer and used immediately.
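A sketch of building such a command is shown below. Only the command name comes from the release; the parameter names (`image_name`, `data`) are assumptions to verify against the command documentation.

```javascript
// Sketch: base64 encode raw image file bytes and build the
// image_reset_from_base64 command. Parameter names are assumptions.
function makeImageResetCommand(imageElementName, imageBytes, id) {
  return {
    jsonrpc: '2.0',
    method: 'image_reset_from_base64',
    params: {
      image_name: imageElementName,  // assumed parameter name
      data: Buffer.from(imageBytes).toString('base64')
    },
    id: id
  };
}

// Using the first four bytes of a PNG file header as stand-in data:
const resetCmd = makeImageResetCommand(
  'ex_fabric_image', [0x89, 0x50, 0x4e, 0x47], 1);
```

In a configurator, the bytes would come from the user's uploaded file rather than a literal array.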
RealityServer supports a lot of different pixel formats for the various things you might need to render. In addition to your standard image data you may also want to render depth maps, UV texture coordinates, normals, irradiance data and many others. Stuffing this data into colours often isn’t ideal when you want to be absolutely sure the data is written and read in exactly the format you want.
To address this we are introducing the mig image file format in RealityServer 5.0. We are providing both read and write capability for this format, and it supports all of the pixel types which our render command can output natively. These pixel types can all be written and then read back without fear of being re-interpreted, as would be the case when trying to store some of these data types in a more conventional format such as TIFF or PNG. This is great for things like texture coordinates and normals, which don’t always like being coerced into colours. When loading mig images the pixel format will always be preserved.
The file format is a simple uncompressed binary format and the specifications are available upon request should you wish to develop your own tools to read and write images in this format.
We have always had the import_elements_from_string command in RealityServer; however, previously it only supported MDL content. This feature has now been extended to support any format for which you have a valid importer, including the standard .mi and .obj files.
In some cases it might be useful to upload transient content to RealityServer directly rather than rely on getting it onto the server's filesystem to load as you would normally do, for example if you want to allow a user to upload a model in Wavefront OBJ format.
As long as your file format is string based, you can embed it within this command’s parameters and get the content loaded with RealityServer.
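For instance, a small OBJ model could be embedded like this. The command name comes from the text above; the parameter names (`str`, `format`) are assumptions to check against the command documentation.

```javascript
// Sketch: import a tiny Wavefront OBJ model from a string rather than a
// file on disk. Parameter names here are assumptions, not confirmed API.
const objSource = [
  'v 0 0 0',
  'v 1 0 0',
  'v 0 1 0',
  'f 1 2 3'   // a single triangle
].join('\n');

const importCmd = {
  jsonrpc: '2.0',
  method: 'import_elements_from_string',
  params: {
    str: objSource,   // assumed parameter name for the embedded content
    format: 'obj'     // assumed hint so the correct importer is chosen
  },
  id: 1
};
```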
During development and creation of internal projects we found a few gaps in our commands that needed filling. We managed to squeeze a few extras into our work program for RealityServer 5.0.
Added geometry_get_max_displacement and geometry_set_max_displacement to control displacement settings on meshes.
Added texture_get_compression and texture_set_compression to allow enabling in memory compression of texture data to save on GPU resources.
The element_get_bounding_box command now supports retrieving bounding box information for Light elements. This is useful for area light sources.
The mdl_get_definition command has been added to retrieve the name of the MDL definition used to create a material or function. This can be very useful when you want to copy a material by creating another instance from the same definition.
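The material-copying workflow above could be sketched as a two-step command sequence. The parameter name and the follow-up command `create_material_instance` are hypothetical placeholders used only to show the shape of the flow.

```javascript
// Sketch: copy a material by first asking for its MDL definition, then
// instantiating that definition again. 'create_material_instance' and
// all parameter names are hypothetical, for illustration only.
function makeCopyMaterialCommands(materialName, newInstanceName) {
  return [
    {
      jsonrpc: '2.0',
      method: 'mdl_get_definition',
      params: { element_name: materialName },  // assumed parameter name
      id: 1
    },
    {
      jsonrpc: '2.0',
      method: 'create_material_instance',      // hypothetical command
      params: {
        definition_name: '<from first response>',  // filled in at runtime
        instance_name: newInstanceName
      },
      id: 2
    }
  ];
}

const copyCmds = makeCopyMaterialCommands('ex_material', 'ex_material_copy');
```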
RealityServer 5.0 switches from a per-GPU licensing model to a per-server licensing model. To access this new licensing model you must be using RealityServer 5.0; when doing so, additional GPUs in your server will no longer require extra licenses.
If you haven’t received your update yet please let us know. If you have never tried RealityServer this is a great time to get started. Contact us for more information.
Paul Arden has worked in the Computer Graphics industry for over 20 years, co-founding the architectural visualisation practice Luminova out of university before moving to mental images and NVIDIA to manage the Cloud-based rendering solution, RealityServer, now managed by migenius where Paul serves as CEO.