{"id":1818,"date":"2017-08-15T08:10:29","date_gmt":"2017-08-15T08:10:29","guid":{"rendered":"http:\/\/www.migenius.com\/?p=1818"},"modified":"2017-08-15T08:10:29","modified_gmt":"2017-08-15T08:10:29","slug":"whats-new-in-realityserver-5-0","status":"publish","type":"post","link":"https:\/\/www.migenius.com\/articles\/whats-new-in-realityserver-5-0","title":{"rendered":"What’s New in RealityServer 5.0"},"content":{"rendered":"

We are happy to announce the immediate availability of RealityServer 5.0. There are some great new features, so we’ve put together a quick list of the headline items. We will also be posting additional articles on the individual features and how to use them, but for now take a look at what’s new.<\/p>\n

<\/p>\n

\"Iray

Iray Photoreal Rendering by Floorplanner.com<\/p><\/div>\n

Major Features<\/h3>\n

Iray 2017<\/h4>\n

The release of NVIDIA Iray 2017 brings many very significant features. Here are some of the headline items. There are additionally numerous smaller performance improvements, bug fixes and other Iray-related updates.<\/p>\n

No More Light Limits with Iray Interactive<\/h5>\n

Iray Interactive is a faster, more approximate rendering mode than Iray Photoreal; however, it has always had a serious limitation: it could not support more than 16 light sources. While it was very useful for exterior scenes primarily illuminated by HDRI environments or sun and sky systems, for interiors or night-time exteriors it just wasn’t possible to get good results with only 16 light sources available. This limit has now been lifted, opening the door to much more widespread use of Iray Interactive.<\/p>\n<\/div><\/div>

\"Los<\/a><\/p>\n

Los Angeles County Lighting (128,000 lights)<\/p>\n<\/div>\n<\/div><\/div><\/div>\n

In the image above you can see a scene with over 100,000 (yes, one hundred thousand) light sources rendered with Iray Interactive. Performance is impressive to say the least. Where traditionally more approximate rendering modes suffer extreme performance loss when lots of light sources are used, Iray Interactive is now easily able to handle huge numbers of lights.<\/p>\n

3D Backplates<\/h5>\n

The virtual backplate feature is very useful when you want to swap in a different background from the one you are using to illuminate your scene. This works great for fixed views where you have a high-resolution still image to use as the backplate; however, in scenes where you can navigate the view it becomes more difficult, as you need a separate backplate for each image. A frequent request from customers has been the ability to supply a different environment for viewing through windows or around objects while keeping the existing lighting.<\/p>\n<\/div><\/div>

\"Vertical<\/p>\n

Vertical Cross Map on Backplate Mesh<\/p>\n<\/div>\n<\/div><\/div><\/div>\n

Iray 2017 introduces the Backplate Mesh<\/em> feature which allows you to specify any mesh object in your scene to act as a backplate. You just need to ensure the mesh has suitable UV coordinates but otherwise it can be any shape you like. The backplate image will be projected onto the mesh using the UV coordinates and replace the directly visible background in your images.<\/p>\n

Sub-surface Scattering in Iray Interactive<\/h5>\n

You can now see the effect of materials with a sub-surface scattering component (those that define a scattering_coefficient<\/em> in their MDL material_volume<\/em>) when using Iray Interactive mode. Previously this component was ignored and only the absorption_coefficient<\/em> was used. Iray Interactive uses a fast approximation and so results may differ from the physically correct Iray Photoreal mode, however for materials with moderate sub-surface scattering the results will look great.<\/p>\n

\"(Left)<\/a>

(Left) Iray 2016.3 (Right) Iray 2017<\/p><\/div>\n

 <\/p>\n

MDL Projector Functions<\/h5>\n

Often you get geometry that doesn’t have UV texture coordinates already embedded in it, for example geometry tessellated from CAD systems with free-form surface models, or less than ideal 3D file formats such as STL. Iray 2017 adds a great new feature called Projector Functions<\/em>. This allows you to use procedural texture projectors (or ones you write yourself in MDL) to generate and apply UV texture coordinates to your geometry.<\/p>\n

You might wonder why you couldn’t just do this in your MDL material itself, and of course this is possible; however, doing so closely links the MDL material to the geometry it is being applied to, making it much less portable. Using projector functions you can set up the material once in a way that expects the UV texture coordinates to come from the object, and then use the projector functions to create them on the object.<\/p>\n<\/div><\/div>

\"Complex<\/a><\/p>\n

Complex Geometry with Triplanar UV Projection<\/p>\n<\/div>\n

 <\/p>\n<\/div><\/div><\/div>\n

Section Plane Scene Elements<\/h5>\n

Iray 2017 adds a new element type for section planes. Previously section planes were added by setting attributes on the scene options. The new section plane scene elements allow you to add them to the scene, then instance and transform them like regular objects. This makes them simpler to manipulate and makes the way they are transformed consistent with other scene elements.<\/p>\n<\/div><\/div>

\"\"<\/a><\/p>\n<\/div><\/div><\/div>\n

RealityServer<\/h4>\n
WebSockets Based Streaming<\/h5>\n

One of the most requested features from RealityServer users has been to provide an improved, lower latency method for streaming rendering results from the server to the client. RealityServer 5.0 introduces WebSocket streaming for persistent, bi-directional communication between the client and server. Using WebSockets significantly reduces latency and allows the server to push<\/em> imagery to the client instead of requiring the client to constantly poll for new images.<\/p>\n

To get users started we have updated our standard Render Loop Demo application with WebSockets support and have added a WebSockets streaming client module to our JavaScript client libraries. If you are already using the JavaScript client library then this makes getting started with WebSockets extremely simple. If you want to roll your own client you can use our example to understand the protocol.<\/p>\n<\/div><\/div>

\"\"<\/p>\n<\/div><\/div><\/div>\n

Currently we support streaming image data from the render loop on the server to the client over WebSockets, as well as updating camera data (including arbitrary camera attributes) on the server from the client. This communication all happens over the same persistent connection, avoiding the overhead of setting up and tearing down an HTTP connection for every request.<\/p>\n

Since camera movement and the image stream are the most latency-sensitive parts of any application, we have chosen to implement those over WebSockets first. In the future we plan to enable additional functionality over WebSockets, potentially including video streaming if we can do so in a way that has suitably broad browser support. For now, you can still use your normal way of sending commands for everything that isn’t explicitly supported over WebSockets, and the results will be picked up by the stream.<\/p>\n
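To make the push model concrete, here is a minimal sketch of a streaming client. The endpoint path and the JSON message shapes used here are illustrative assumptions, not the actual RealityServer 5.0 protocol; consult the updated Render Loop Demo and the JavaScript client library for the real framing.

```javascript
// Build a hypothetical camera-update message to send to the server.
// The 'camera_update' type and field names are assumptions for illustration.
function buildCameraUpdate(cameraName, transform) {
  return JSON.stringify({
    type: 'camera_update',
    camera: cameraName,
    transform: transform // 4x4 matrix as a flat array
  });
}

// Browser-side sketch: open a persistent connection and let the server
// *push* rendered frames, instead of polling with repeated HTTP requests.
function startStream(host, onFrame) {
  const ws = new WebSocket('ws://' + host + '/render_loop_stream/'); // assumed path
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (event) => {
    if (typeof event.data === 'string') {
      console.log('control message', JSON.parse(event.data));
    } else {
      onFrame(event.data); // binary image frame pushed by the server
    }
  };
  return ws;
}
```

Camera updates travel back over the same socket (for example `ws.send(buildCameraUpdate(...))`), which is what removes the per-request connection overhead.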

V8 Server-side JavaScript Engine<\/h5>\n

This one is big! RealityServer has had a server-side JavaScript command API since version 3.0, however this was based on a relatively ancient version of the Mozilla SpiderMonkey<\/a> JavaScript runtime. Starting with RealityServer 5.0 we are phasing out the old JavaScript command API in favour of a new one based on the V8<\/a> JavaScript runtime developed by the Chromium Project.<\/p>\n

If you haven’t used server-side JavaScript commands yet, now is a great time to start. They allow you to build your own commands that are automatically exposed over the RealityServer JSON-RPC API and called just like any other command. Within these commands you can freely call any built-in command (or other JavaScript commands) without making a round trip between client and server. This is great when you have chains of commands whose results depend on previous commands, or if you want to modularise commonly used functionality.<\/p>\n<\/div><\/div>

\"\"<\/p>\n<\/div><\/div><\/div>\n

With the inclusion of V8, you now have full access to all modern JavaScript language features within your server-side JavaScript commands. As of writing we are including V8 version 5.6.326.42 and plan to keep this up to date as future versions are announced. All of our previous JavaScript commands have been ported to V8 and included in RealityServer. You can still use the old SpiderMonkey based engine for now if you wish, however it will be deprecated at some point in the future.<\/p>\n

We have also started to wrap many of the common scene elements and data types up in pre-defined JavaScript classes so you can very easily manipulate your scene data without resorting to large numbers of command calls. For example you can now create an Instance<\/em> as both a scene element and JavaScript object and then directly set the attributes on it using the more familiar object oriented dot notation. So far we have wrapped the most commonly used scene elements but more are coming.<\/p>\n

There are also a number of JavaScript classes for the most common data types you might want to use, such as RS.math.Matrix4x4<\/em>, RS.math.Color<\/em>, RS.math.Vector3<\/em> and many more. These can be easily initialised in several different commonly used ways, eliminating large amounts of boilerplate checking code from your JavaScript commands.<\/p>\n
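To illustrate the kind of flexible initialisation meant here, the sketch below mimics the pattern with a tiny vector class. This is not the RealityServer RS.math implementation, just an example of accepting components, an array, or an object in one constructor.

```javascript
// Illustration only: a minimal vector class accepting several input forms,
// similar in spirit to how RS.math types can be initialised.
class Vector3 {
  constructor(x, y, z) {
    if (Array.isArray(x)) {
      // From an array: new Vector3([1, 2, 3])
      [this.x, this.y, this.z] = x;
    } else if (x !== null && typeof x === 'object') {
      // From an object with x/y/z properties: new Vector3({x: 1, y: 2, z: 3})
      ({ x: this.x, y: this.y, z: this.z } = x);
    } else {
      // From individual components: new Vector3(1, 2, 3)
      this.x = x || 0; this.y = y || 0; this.z = z || 0;
    }
  }
}
```

Centralising this checking in the class is exactly what removes the boilerplate from each individual command.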

We have also included an implementation of require<\/em>, which allows you to heavily modularise your code and re-use it between commands. We are already exploiting this for the wrapper classes mentioned above and have deployed it to great effect on several internal projects. In the past, with SpiderMonkey, it was necessary to repeat all of the code in every command. Not anymore.<\/p>\n

If you like Node.js<\/a> (but wish you could develop everything synchronously) then we think you’ll love this new way to work in RealityServer. We are only scratching the surface of where this will go, but there are already some very cool things that can now be done with V8 integrated. We’ll be preparing some future articles demonstrating these features very soon.<\/p>\n

Single Pass Stereo Rendering<\/h5>\n

We have had stereo, including stereo VR rendering<\/a>, since RealityServer 4.4 build 1527.46. Previously, however, it required you to render two images separately, changing a parameter between renders. In RealityServer 5.0 you can perform a stereo render, whether it’s for VR or just a normal image, in a single pass. You can render side-by-side or top-and-bottom style images; the image will automatically be doubled in width or height as needed and composited for you on the server into a single image.<\/p>\n

All you need to do is set the standard Iray mip_lens_stereo_offset<\/em> attribute to specify the eye separation, and then set the new mip_lens_combined_stereo<\/em> string attribute on your camera to the desired layout. Use vertical_lr<\/em> for a top\/bottom image with the left eye on top and the right eye on the bottom. Also available are vertical_rl<\/em>, horizontal_lr<\/em> and horizontal_rl<\/em>.<\/p>\n<\/div><\/div>
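As a sketch, the two attribute changes could be issued as JSON-RPC calls like those built below. The attribute names are the ones given above; the `element_set_attribute` argument list follows the standard RealityServer command but should be verified against your command reference, and the camera name is just an example.

```javascript
// Build the JSON-RPC command bodies that configure single pass stereo
// on a camera: eye separation plus the combined-stereo layout string.
function stereoCommands(cameraName, eyeSeparation, layout) {
  return [
    {
      method: 'element_set_attribute',
      params: {
        element_name: cameraName,
        attribute_name: 'mip_lens_stereo_offset',
        attribute_type: 'Float32',
        attribute_value: eyeSeparation,
        create: true
      }
    },
    {
      method: 'element_set_attribute',
      params: {
        element_name: cameraName,
        attribute_name: 'mip_lens_combined_stereo',
        attribute_type: 'String',
        // 'vertical_lr', 'vertical_rl', 'horizontal_lr' or 'horizontal_rl'
        attribute_value: layout,
        create: true
      }
    }
  ];
}
```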

\"Single<\/a><\/p>\n

Single Pass Stereo Rendering<\/p>\n<\/div>\n<\/div><\/div><\/div>\n

Iray Bridge Server<\/h5>\n

Some of our users have asked if they could use their RealityServer installations as Iray Bridge servers to which their Iray-based desktop applications can connect. This can be useful, for example, if you are using your own Iray SDK based application or any Iray application that supports ad-hoc bridge connections. When in use, RealityServer offers remote streaming with Iray Bridge to those client applications, similar to Iray Server but without the queuing and management functionality.<\/p>\n

A side benefit is that when you perform such a rendering the scene data is loaded into the same shared database that RealityServer uses, so you can potentially utilise RealityServer functionality to then export or capture this scene data for use outside the Iray based application. You can enable this functionality from your realityserver.conf<\/em> file (there is a commented out section showing how to do it).<\/p>\n<\/div><\/div>

\"\"<\/p>\n<\/div><\/div><\/div>\n

Render Loop Outline Rendering and Picking<\/h5>\n

Most people now use the render loop functionality when doing interactive rendering with RealityServer rather than polling, and with the introduction of WebSocket streaming we think even more users will take that approach. Unfortunately, picking (casting a ray at a click point to see what it hits and where) is complicated quite a bit by using the render loop.<\/p>\n

To help people understand how it works, we have now added picking to our main render loop example. It shows how to use the new default render loop handler to perform a pick operation, and then uses another new feature to dynamically highlight the picked object by drawing an outline around it. This is extremely useful for applications where you want to select objects and indicate to the user what has been selected.<\/p>\n<\/div><\/div>

\"Dynamically<\/a><\/p>\n

Dynamically Rendered Object Outline<\/p>\n<\/div>\n<\/div><\/div><\/div>\n

Texture Upload Command<\/h5>\n

A frequent use case we are asked about is uploading a texture to RealityServer directly, for example in an online configurator where the user might be allowed to upload their own fabric pattern or image to be printed on a product. Previously you would need to find a way to get this texture onto the filesystem of the server running RealityServer. This often meant using another application server for this purpose.<\/p>\n

In RealityServer 5.0 we have added the image_reset_from_base64<\/em> command. This allows you to change the image associated with a texture used in your scene by base64 encoding the image data and including it in the command. The data will then be loaded by RealityServer and used immediately.<\/p>\n

mig Image Format<\/h5>\n

RealityServer supports many different pixel formats for the various things you might need to render. In addition to standard image data you may also want to render depth maps, UV texture coordinates, normals, irradiance data and many others. Stuffing this data into colour values often isn’t the best approach when you need to be absolutely sure the data is written and read in exactly the format you want.<\/p>\n

To address this we are introducing the mig<\/em> image file format in RealityServer 5.0. We are providing both read and write capability for this format and it supports all of the pixel types which our render command can output natively. That means:<\/p>\n