{"id":3691,"date":"2020-08-12T02:18:21","date_gmt":"2020-08-12T02:18:21","guid":{"rendered":"https:\/\/www.migenius.com\/?p=3691"},"modified":"2020-08-12T09:42:38","modified_gmt":"2020-08-12T09:42:38","slug":"realityserver6","status":"publish","type":"post","link":"https:\/\/www.migenius.com\/articles\/realityserver6","title":{"rendered":"What’s New In RealityServer 6.0"},"content":{"rendered":"\n

This is a big one! We’ve been in beta for a while, so some of our advanced users have already had a chance to check out the great new features in RealityServer 6.0. Some headline items are a new fibers primitive, matte fog, toon post-processing, a sheen BSDF, a V8 engine version bump, V8 command debugging and much more. Check out the full article for details.

<h3>Iray 2020.0.2</h3>

RealityServer 6.0 includes Iray 2020.0.2 build 327300.6313, which contains a lot of cool new functionality. Let’s take a look at a few of the biggest additions.

<h4>Fibers</h4>

In many products this might be called hair rendering; however, fibers as implemented by Iray can be used for almost any fiber-like object, for example hair, grass, carpets, fabric fringes and more.


Fibers are a new type of element you can create. They consist of a set of points defining a uniform cubic B-spline, together with radii at those points, forming a kind of smooth extruded cylinder with varying thickness. This lightweight primitive is intersected directly during ray-tracing, without creating any explicit polygons or mesh geometry, which allows very large numbers of primitives to be handled.

Think millions. Many places you’d like to use fibers will need lots of them, so this efficiency is essential. In testing we have also seen that RTX-based GPUs with RT Core hardware ray-tracing support show even bigger speedups on scenes with a lot of fiber geometry compared to regular scenes. Creating fibers can be tricky since there isn’t much software that authors this type of geometry; the XGen feature of Autodesk Maya is one example. However, many of our customers just want to generate a somewhat random distribution of fibers over existing mesh geometry.

[Figure: Topiary Generated with RealityServer]

To make that simple we now have a new generate_fibers_on_mesh command, which takes a mesh and some simple parameters and generates fibers for you. The topiary example above was generated this way, by passing the geometry of the big “6” into the command. If you want to directly control every aspect of your fibers we also have the generate_fibers command, which allows you to provide a JSON-based or binary description of the fiber geometry you want to create. We’ve also included V8 wrapper classes for fibers, similar to those used for Polygon_mesh and Triangle_mesh; this is the best way to make use of the binary data input.
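To give a flavour of how this looks from V8, here is a minimal sketch of a command that scatters fibers over an existing mesh. While generate_fibers_on_mesh itself is named above, the parameter names and values used below (mesh_name, fibers_name, density, length, radius) are illustrative assumptions only; check the RealityServer command documentation for the actual signature.

<pre>
// Minimal sketch only: the parameter names below are assumptions, not the
// documented signature of generate_fibers_on_mesh.
const Scene = require('Scene');

module.exports.command = {
    name: 'make_grass',
    description: 'Scatter fibers over an existing mesh (illustrative sketch).',
    groups: ['javascript', 'examples'],
    execute: function() {
        let scene = Scene.import('field', 'scenes/field.mi'); // hypothetical scene

        // Hypothetical parameters: a source mesh, a name for the new fibers
        // element and some simple distribution/size controls.
        RS.generate_fibers_on_mesh({
            mesh_name: 'ground_mesh',     // assumed: existing mesh element
            fibers_name: 'grass_fibers',  // assumed: name for the generated fibers
            density: 5000,                // assumed: fibers per unit area
            length: 0.05,                 // assumed: fiber length
            radius: 0.001                 // assumed: fiber radius
        });

        return 'grass_fibers';
    }
};
</pre>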

Fibers can also be read from .mi files using the hair object supported there from the mental ray days. As various Iray plugins such as Iray for Maya start to support fibers, they will be able to export this data into a .mi file that can be read by RealityServer. Of course you can also create fibers from C++ plugins, so if you want to use your own custom fibers format you can implement that as well. To finish up on fibers, here is another great image, this time with 30M fibers in the scene and the fiber colour varied using textures.

\"\"<\/a>
Plush Rug Created with Fibers (Click to Enlarge)<\/figcaption><\/figure>\n\n\n\n

<h4>Matte Fog</h4>

When rendering larger scenes, the effects of aerial perspective can be critically important to getting a realistic result. You probably know this best from the shift towards blue in distant features as you look out over a landscape towards the horizon. In theory you could already simulate this in previous releases by enclosing the entire scene in a huge box and applying a material with appropriate volume properties to the box; however, this would significantly increase rendering time. Now, with the new matte fog feature, there is a much faster way.

\"\"\/\"\"\/<\/div><\/div>\n\n\n\n

In the image above you can see the original scene without matte fog applied on the left, and the same scene with matte fog enabled on the right. The matte fog image gives a much more realistic impression of this type of scene; until you see it with the fog, it can be hard to put your finger on what is actually wrong with the original image. Enabling matte fog is easy, you just need to enable a few attributes on your scene options; see the Iray documentation for details.
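As a rough illustration of the pattern (rather than a recipe), here is how you might enable those options from a V8 command. The attribute names and types used below are placeholders we have made up for illustration; the real attribute names and their types are listed in the Iray documentation.

<pre>
// Sketch only: the matte fog attribute names and types below are assumptions
// used to illustrate the scene-options pattern; consult the Iray
// documentation for the actual attributes.
const Scene = require('Scene');

module.exports.command = {
    name: 'enable_matte_fog',
    description: 'Enable matte fog on a scene (illustrative sketch).',
    groups: ['javascript', 'examples'],
    execute: function() {
        let scene = Scene.import('test', 'scenes/meyemII.mi');

        // Same attribute-setting pattern used elsewhere in this article.
        scene.options.attributes.set('iray_matte_fog', true, 'Boolean');           // assumed name
        scene.options.attributes.set('iray_matte_fog_distance', 100.0, 'Float32'); // assumed name

        return scene.name;
    }
};
</pre>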

The matte fog technique is applied as a post effect, similar to the bloom feature that was introduced some time ago. It uses depth information combined with the rendering result to apply a distance-based fog. Because it runs as a GPU post process it adds very little time to rendering. Unlike a true volumetric simulation (which, as mentioned earlier, is still possible), matte fog will not produce effects from specific light sources, or so-called god rays. For those effects you will still need to perform a full volume simulation; however, for a simple aerial perspective effect this feature is perfect.

<h4>Toon Post-processing</h4>

Another new post-processing effect introduced in this version is toon post-processing. This allows you to produce non-photorealistic rendering results with RealityServer, such as might be used for cartoons, illustrations or diagrams. The toon post-processing effect can be applied to the normal result canvas, the ambient occlusion canvas or the BSDF weight canvas (also introduced in this version). It affects the shading and also adds outlines to the objects. Here is a quick example of what you can achieve.

\"Toon<\/figure>\n\n\n\n

This image was made by applying the toon effect to the BSDF weight canvas, which encodes the albedo of the materials. It then uses a faux lighting effect to give shading, which is quantized to produce the banded appearance typical of cartoons. You can control the colour of the outlines and the level of quantization, or choose to show the fully faux-shaded appearance or no shading at all. Object IDs are used to determine where the edges of the objects are located, so in cases where objects share the same material but need to show edges between them, you should ensure they have unique object IDs assigned with the label attribute. You can also set the label attribute to 0 on an object if you want to selectively disable the outlining. Also note that the toon feature works best when progressive_aux_canvas is enabled on your scene options.
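For example, assigning labels might look something like the following minimal sketch. The instance names are hypothetical, we are assuming the element_set_attribute command with these parameters, and the Uint32 type for the label attribute is also our assumption.

<pre>
// Sketch: assign object IDs for toon outlining via the label attribute.
// The instance names are hypothetical, and the element_set_attribute
// parameters and Uint32 type are assumptions; check the RealityServer
// command documentation for the exact interface.
module.exports.command = {
    name: 'assign_toon_labels',
    description: 'Assign label attributes for toon outlining (illustrative sketch).',
    groups: ['javascript', 'examples'],
    execute: function() {
        // A unique, non-zero label so edges are drawn around this object.
        RS.element_set_attribute({
            element_name: 'instance_a',        // hypothetical instance
            attribute_name: 'label',
            attribute_value: 1,
            attribute_type: 'Uint32',          // assumed type for label
            create: true
        });

        // A label of 0 disables outlining for this object.
        RS.element_set_attribute({
            element_name: 'backdrop_instance', // hypothetical instance
            attribute_name: 'label',
            attribute_value: 0,
            attribute_type: 'Uint32',
            create: true
        });
    }
};
</pre>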

To support all of these features, the old canvas_content parameters of our RealityServer render commands have been changed to accept an object in addition to a string. This object is needed since a canvas content type can now carry various parameters. To use the toon effects you need to use either a V8 command or named canvases, so that you can use the render_to_canvases command for this purpose. It’s definitely simplest in V8; here is a quick example command which renders a toon image, similar to the one above, of one of our default scenes.

<pre>
const Scene = require('Scene');
const Camera = require('Camera');

module.exports.command = {
    name: 'render_toon',
    description: 'Render a scene with the toon effect.',
    groups: ['javascript', 'examples'],
    execute: function() {
        let scene = Scene.import('test', 'scenes/meyemII.mi');
        scene.options.attributes.set(
            'progressive_rendering_max_samples', 10, 'Sint32');
        let camera = scene.camera_instance.item.as('Camera');

        let canvases = RS.render_to_canvases({
            canvases: [
                // First canvas: the BSDF weight (albedo) the toon effect works from.
                {
                    name: 'weight',
                    pixel_type: 'Rgba',
                    content: {
                        type: 'bsdf_weight',
                        params: {}
                    }
                },
                // Second canvas: the toon post effect, derived from canvas index 0.
                {
                    name: 'toon',
                    pixel_type: 'Rgba',
                    content: {
                        type: 'post_toon',
                        params: {
                            index: 0,
                            scale: 4,
                            edge_color: {
                                r: 1.0, g: 1.0, b: 0.0
                            }
                        }
                    }
                }
            ],
            canvas_resolution_x: camera.resolution_x,
            canvas_resolution_y: camera.resolution_y,
            renderer: 'iray',
            render_context_options: {
                scheduler_mode: {
                    type: 'String',
                    value: 'batch'
                }
            },
            scene_name: scene.name
        });

        // Encode and return the toon canvas as a JPEG.
        return new Binary(
            canvases[1].encode('jpg', 'Rgb', '90'), 'image/jpeg');
    }
};
</pre>

The main difference from how you would have done this before is that, in the canvas definitions passed to the render_to_canvases command, the content property is now specified as an object. You can see that the second canvas being rendered has three extra parameters. This example renders two canvases: the first is the BSDF weight, and the second is the post toon effect, which uses the first canvas to do its work. We then encode and return the second canvas. Changing the scale parameter of the toon effect gives some interesting results; in the example below you can see the difference between setting it to 0 and 4.

\"\"\/\"\"\/<\/div><\/div>\n\n\n\n

Please refer to the Iray documentation for more details on how to use the toon effect feature and what the different parameters do. While RealityServer rendering commands still accept the older canvas_content string definitions, if you are rendering anything other than the default canvas_content (result) we definitely recommend updating your applications to the new method of specifying canvas contents to ensure future compatibility.
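To illustrate the migration, here is a minimal sketch showing the same render call with the older string form and the newer object form of canvas_content. The surrounding command is kept deliberately minimal and its other parameters are assumptions rather than a complete, documented invocation.

<pre>
// Sketch: the same render call written with the older string form and the
// newer object form of canvas_content. Handling of the returned image and
// the remaining render parameters are omitted here.
module.exports.command = {
    name: 'render_result_canvas',
    description: 'Render the default result canvas (illustrative sketch).',
    groups: ['javascript', 'examples'],
    execute: function() {
        // Older style, still accepted: a plain string naming the canvas content.
        RS.render({
            scene_name: 'test',     // assumes a scene named 'test' is already loaded
            renderer: 'iray',
            canvas_content: 'result'
        });

        // Newer, recommended style: an object with a type and optional params.
        RS.render({
            scene_name: 'test',
            renderer: 'iray',
            canvas_content: {
                type: 'result',
                params: {}
            }
        });
    }
};
</pre>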

<h4>Sheen BSDF</h4>

Iray 2020 comes with support for MDL 1.6, which adds several new features. One of the major ones is a new BSDF specifically for dealing with sheen. This phenomenon is particularly important for realistic fabrics, and in the past other rendering engines would often use non-physical effects such as falloff to fake it. Now, with the new sheen_bsdf, you have a physically based option for proper sheen. Here is an example of how big a difference sheen can make.

\"\"\/\"\"\/<\/div><\/div>\n\n\n\n

In this scene the image on the left uses pure diffuse fabrics, while on the right we have added sheen. The effect is subtle and difficult to quantify, but the sheen is often what results in a much more fabric-like appearance. To help you try out this functionality we’ve included a new add_sheen MDL material in a new migenius core_definitions MDL module. You’ll find this at mdl::migenius::core_definitions::add_sheen, and it allows you to add a sheen layer on top of any existing material.
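As a very rough sketch of how you might wire this up from V8, the snippet below creates a material instance from the add_sheen definition and points it at an existing material. The command name, its parameters and the base argument name are all assumptions for illustration; please check the RealityServer command documentation and the core_definitions module for the actual interface.

<pre>
// Sketch only: the command name, its parameters and the add_sheen argument
// name used here are assumptions; only the module path
// mdl::migenius::core_definitions::add_sheen comes from the release notes.
module.exports.command = {
    name: 'add_sheen_to_material',
    description: 'Layer sheen on top of an existing material (illustrative sketch).',
    groups: ['javascript', 'examples'],
    execute: function() {
        RS.create_material_instance_from_definition({            // assumed command
            material_definition: 'mdl::migenius::core_definitions::add_sheen',
            material_name: 'fabric_with_sheen',                   // name for the new material
            arguments: {
                base: { type: 'Ref', value: 'fabric_material' }   // assumed argument name and existing material
            }
        });

        return 'fabric_with_sheen';
    }
};
</pre>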

<h4>Deep-Learning SSIM Render Predictor</h4>

The Structural Similarity Index, or SSIM for short, helps determine the similarity between an image and a given reference image. How can this help us when rendering, since we don’t have a reference image? In fact, the reference is exactly what we are trying to create by rendering in the first place. Iray 2020 uses deep-learning techniques to predict SSIM values against an imaginary converged image, using training data from a large number of pairs of partially and fully converged images generated with Iray. So how does knowing the SSIM value actually help us here?

\"\"
SSIM Convergence Heatmap<\/figcaption><\/figure>\n\n\n\n

The image above is a heatmap generated by the SSIM predictor in Iray, and it shows which parts of the image have converged and which still require further work. The brighter areas are closer to the imagined reference image, while the darker areas are further from it in terms of similarity. Using this information it is possible to predict at which iteration Iray will reach a particular target similarity and how long it will take to get there. If you’ve dealt with building heuristics for your application to set the iteration count, or with nebulous quality parameters in Iray, especially for widely varying scenes, then you can probably see where this is going and why you would want it.

In the end, the goal will be for you to be able to set a quality value, which is your SSIM target, and have this control when rendering terminates, as well as provide feedback to users on how long rendering is expected to take. For now, however, this feature should be considered more of a preview of the type of automated termination conditions that are coming in future releases. There are a few important restrictions at present which mean you may not be able to use it in your application.