{"id":2131,"date":"2018-10-23T01:29:23","date_gmt":"2018-10-23T01:29:23","guid":{"rendered":"https:\/\/www.migenius.com\/?p=2131"},"modified":"2020-11-02T21:29:37","modified_gmt":"2020-11-02T21:29:37","slug":"configurators-at-scale-with-compositing","status":"publish","type":"post","link":"https:\/\/www.migenius.com\/blog\/configurators-at-scale-with-compositing","title":{"rendered":"Configurators at Scale With Compositing"},"content":{"rendered":"

RealityServer 5.1 introduced a new way to generate images of configurations of your scenes without the need to re-render them from scratch. We call this Compositing, even though it's actually very different to traditional compositing techniques. In this article we will dive into the details of how to use the new system to render without rendering and speed up your configurator.


Rendering Without Rendering?

RealityServer is a rendering web service, right? So why do we want to avoid rendering? In short, scalability. For example, when you are building a sizeable configurator application and expect a large number of users, server-side rendering basically means you are buying GPU hardware for all of the visitors. This might work well in some use cases (for example B2B); however, for large-scale consumer configurators, devoting the full resources of a GPU server to a single user is often not practical.

Using compositing we can render once, store a lot of extra data, then use that data to reconstruct new images that would normally require re-rendering, for example changing the colour of objects in the scene. This can be done much faster than rendering a high-quality image and with fewer resources. It can even be run on CPU-based resources if needed (although it will be accelerated by GPU hardware where available).
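To make the idea concrete, here is a minimal sketch of additive recombination in TypeScript. It assumes the scene was rendered once into one floating-point contribution layer per configurable part, with each part's base colour factored out; the layer format, names and recombination function are purely illustrative and are not the RealityServer API.

```typescript
// Minimal sketch: recombine pre-rendered per-part contribution layers with new
// colours. Assumes each layer stores the linear RGB light that one part
// reflects into every pixel, with the part's base colour factored out.
// Illustrative only; not the RealityServer API.

type RGB = [number, number, number];

interface Layer {
  // Linear RGB contributions stored as [r, g, b, r, g, b, ...]
  pixels: Float32Array;
}

// Reconstruct a new image without re-rendering: a per-pixel multiply-and-add
// over the stored layer data, one layer per configurable part.
function recombine(layers: Layer[], colours: RGB[], pixelCount: number): Float32Array {
  const out = new Float32Array(pixelCount * 3);
  layers.forEach((layer, i) => {
    const [r, g, b] = colours[i];
    for (let p = 0; p < pixelCount; p++) {
      out[p * 3]     += layer.pixels[p * 3]     * r;
      out[p * 3 + 1] += layer.pixels[p * 3 + 1] * g;
      out[p * 3 + 2] += layer.pixels[p * 3 + 2] * b;
    }
  });
  return out; // linear HDR result, ready for tone mapping and display
}
```

Because the reconstruction is just a multiply-and-add over stored pixel data, it is cheap enough to run on demand for each user, even on CPU-only hardware.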

Demo

How does this actually help us? Here is a small demonstration of what can be done with the compositing system. Basically, we have a piece of footwear in which we have split out many of the components, and we apply a random set of colours and textures to each part every time we hit the shuffle button. All of the output images use the compositing system and a single pre-rendered piece of content.

There are a few things to get your head around in this little demo. Firstly, there are no masks or alpha channels; all of the compositing is purely additive, which means no edge artifacts. Look closely at small details like the stitches, where barely a pixel is involved. Such fine detail poses a significant issue for traditional compositors, where you typically need to composite at a higher resolution and downsample to mitigate it (and even then the issue is not fully fixed).
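To see why purely additive layers avoid the edge problem, consider a single anti-aliased pixel on the edge of a stitch, using the same hypothetical layer representation as the sketch above.

```typescript
// Worked example for one anti-aliased pixel where a stitch covers roughly 30%
// and the surrounding panel 70%. Each hypothetical layer already stores only
// its own share of the light, so no mask is needed to split the pixel.
type RGB = [number, number, number];

const stitchLayer: RGB = [0.3, 0.3, 0.3]; // stitch contribution (base colour factored out)
const panelLayer: RGB  = [0.7, 0.7, 0.7]; // panel contribution (base colour factored out)

const stitchColour: RGB = [1.0, 0.2, 0.2]; // new red stitches
const panelColour: RGB  = [0.2, 0.2, 1.0]; // new blue panel

// Tint each layer and add: partial coverage is preserved exactly, with no mask
// forcing the pixel to belong to a single part and therefore no fringing.
const pixel = stitchLayer.map((v, i) => v * stitchColour[i] + panelLayer[i] * panelColour[i]);
console.log(pixel); // approximately [0.44, 0.2, 0.76]
```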
