With the Sayduck Platform users can create, manage and publish engaging visuals for their products in both 3D and Augmented Reality. Leading design brands such as Alessi, Kristalia and Vallila Interior trust the platform to create accurate representations of their products every day. We spoke with Niklas Slotte from Sayduck about adding true photorealistic rendering to an already great platform and why they chose RealityServer to get the job done.
Last month at NVIDIA GTC Digital 2020 migenius had the pleasure of presenting alongside our long-term customer Amazon on the concept of Visuals as a Service. This presentation is now available online and if you would like to see what several industry-leading companies are already doing with RealityServer, be sure to check it out.
We’ve had quite a few customers express interest in downloading content from AWS S3 rather than persistently storing it on their RealityServer instances. While this introduces latency, in many use cases it can still make a lot of sense. Our recently released HTTP Request functionality for V8 makes it easy to download content from public URLs. What do you do if your S3 buckets require authentication though? Let’s dive in.
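One common way to handle the authentication side, sketched below, is to have your application generate a short-lived pre-signed URL for the private object, which can then be fetched with a plain HTTP GET (for example via the V8 HTTP Request feature). The bucket, key and region are placeholders, and this is just one possible approach rather than necessarily the one covered in the article.

```javascript
// Minimal sketch (Node.js, AWS SDK v3): turn a private S3 object into a
// short-lived pre-signed URL that a plain HTTP GET can download. Bucket,
// key and region are hypothetical placeholders.
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

async function presignSceneAsset() {
    const s3 = new S3Client({ region: "us-east-1" });
    const command = new GetObjectCommand({
        Bucket: "my-scene-assets",        // hypothetical bucket
        Key: "scenes/chair/chair.mi",     // hypothetical object key
    });
    // URL is valid for 5 minutes; anyone holding it can GET the object.
    return getSignedUrl(s3, command, { expiresIn: 300 });
}

presignSceneAsset().then(url => console.log(url)).catch(console.error);
```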
Customers of RealityServer are usually seeking out the highest quality visuals possible. However, in many contexts it is not economical to provide this quality at all times. As a result, hybrid solutions using both WebGL and RealityServer have become popular. Here, WebGL is used during most of the interaction and RealityServer just at the end of the process for a final high quality image. The problem this creates is that you have to produce a WebGL version of the content somehow, ideally without authoring your content twice. In this article we’ll see how glTF 2.0, MDL materials and distillation let you repurpose your content automatically.
Even though RealityServer is great at streaming fully interactive, server-side rendering directly to your browser, not every use case requires this level of interactivity. RealityServer has recently introduced a new feature called the Queue Manager which integrates with popular message queue services to manage rendering and other RealityServer tasks. In this article we will dive into the details of how to get up and running with this great new feature using the Amazon SQS and S3 services.
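To give a rough feel for the flow, the sketch below submits a render job message to an SQS queue using the AWS SDK; the queue URL and the job body structure are assumptions for illustration, not the documented Queue Manager message format.

```javascript
// Hypothetical sketch (Node.js, AWS SDK v3): push a render job onto an SQS
// queue that a RealityServer Queue Manager instance could be polling. The
// queue URL, scene path and output location are placeholders.
const { SQSClient, SendMessageCommand } = require("@aws-sdk/client-sqs");

async function submitRenderJob() {
    const sqs = new SQSClient({ region: "us-east-1" });
    const job = {
        scene: "scenes/meyemII/main.mi",              // hypothetical scene path
        width: 1920,
        height: 1080,
        output: "s3://my-render-output/meyemII.png",  // hypothetical S3 target
    };
    const result = await sqs.send(new SendMessageCommand({
        QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/render-jobs",
        MessageBody: JSON.stringify(job),
    }));
    console.log("Queued job", result.MessageId);
}

submitRenderJob().catch(console.error);
```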
RealityServer is a platform for integrating photorealistic 3D rendering into your application. It is based on a web services methodology and can be used both for rendering automation and fully interactive rendering. In this introduction we will cover the core concepts of RealityServer development and what you need to understand in order to get started.
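As a minimal illustration of that web services methodology, the sketch below posts a single JSON-RPC command to a running RealityServer instance; the endpoint, port and command name should be checked against the Command API documentation of your release and are treated here as assumptions.

```javascript
// Minimal sketch (Node.js 18+): drive RealityServer by posting JSON-RPC 2.0
// commands over HTTP. The host, port, path and command name are assumptions
// for illustration; consult your release's Command API documentation.
async function callCommand(method, params) {
    const response = await fetch("http://localhost:8080/", {  // assumed endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ jsonrpc: "2.0", method, params, id: 1 }),
    });
    return response.json();
}

callCommand("get_version", {})   // assumed command name
    .then(result => console.log(result))
    .catch(console.error);
```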
So you’ve got your RealityServer release email and downloaded everything, but how do you install it? This article will take you through the steps for setting up RealityServer on Windows or Linux and also provide an overview of the RealityServer directory structure. We’ll also cover how to set up your licensing.
Continuing our series of articles on using the server-side V8 API, we will now add some objects, loaded from a model file, to the scene that we set up and lit in our previous articles. This one is going to be very simple, but for bonus points we’ll add multiple copies of the object in different locations.
migenius has now joined the Khronos Group as an Associate Member. The Khronos Group has helped shape several key standards that are actively used in our products today, in particular glTF 2.0, which RealityServer has fully supported for some time. With the advent of the 3D Commerce Working Group we have joined to help represent the needs of our customers. We are looking forward to some great collaboration.
RealityServer with support for NVIDIA RTX technology is here! This release includes a major Iray version bump which adds support for accelerating Iray rendering with the new RT Core hardware inside RTX cards and the Tesla T4. There is also a great performance improvement for Iray Interactive (on all cards) and support for MDL 1.5. Let’s take a look.
NVIDIA RTX technology was announced late last year and has attracted a lot of coverage in the press. Many software vendors have been scrambling to implement support for it since then, and there has been a lot of speculation about what is possible with RTX. Now that Iray RTX is finally about to be part of RealityServer, we can talk about what RTX means for our customers and where it will be most beneficial for you.
Our next RealityServer update is here. This is an incremental release with quite a few requested fixes and enhancements, but it also contains a few great new features for heavy users of MDL materials. This will be the last RealityServer 5.2 release as we will shortly be releasing RealityServer 5.3 with NVIDIA RTX support, so watch out for that one. In the meantime, let’s check out what’s new in this version.
The Timex Group has joined the growing list of household names who use RealityServer to create the imagery that pushes customers to choose their products over a competitor’s. The retail sector demands the highest quality imagery in order to replace traditional photography with photorealistic 3D rendering. Not just quality, but speed and scale across real-time, interactive and offline rendering.
Our first update for RealityServer 5.2 is here. It includes an Iray version bump and some nice convenience features. The most significant feature however is support for queuing renders with Iray Server. This will be of interest to those building internal rendering automation tools with RealityServer.
RealityServer 5.2 is here and adds some great functionality. Hugely expanded glTF 2.0 importer support, wireframe rendering, lightmap rendering, section plane capping, MDL 1.4 support, UDIM support and many more features have been added along with many fixes and smaller enhancements based on extensive customer feedback. In this post we will run through some of the most interesting functionality and how it can help you build your applications.
We’ve covered server-side V8 commands before, but in this post we will go into a little more detail and use some of the helper classes that are provided with RealityServer to make common tasks easier. Quite often you want to kick off an application by creating a valid, empty scene ready for adding your content. Actually, it’s something we need to do in a lot of our posts here, so to avoid repeating it each time, let’s make a V8 command to do it for us.
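Purely as a hypothetical skeleton of what such a command might look like (the exact module shape, argument declarations and available helper classes are defined by the RealityServer V8 API documentation, so treat every name below as an assumption):

```javascript
// Hypothetical skeleton of a server-side V8 command, not verbatim API.
// A command module generally declares a name, description and arguments,
// and performs its work in an execute function.
module.exports.command = {
    name: "create_empty_scene",
    description: "Creates a valid, empty scene ready for adding content.",
    groups: ["javascript", "examples"],
    arguments: {
        scene_name: { type: "String", description: "Name for the new scene." }
    },
    execute: function({ scene_name }) {
        // Inside execute, other RealityServer commands and the bundled helper
        // classes can be used to create the camera, options and root group
        // that every renderable scene needs.
        // ... scene setup goes here ...
        return scene_name;
    }
};
```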
A core concept in RealityServer which many new users have some difficulty understanding is Scopes. The use of scopes is critical in making effective use of RealityServer in a production environment where multiple users or multiple independent operations are happening at once. In this article we will go into more depth on what scopes are and how to use them.
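To make the idea concrete, the sketch below creates a scope for one user, switches into it and imports a scene so the imported elements stay isolated from everyone else; the command names follow the RealityServer Command API but should be verified against your release, and the endpoint and scene path are placeholders.

```javascript
// Sketch (Node.js 18+): a typical scope pattern expressed as a JSON-RPC batch.
// Commands executed after use_scope in the same batch only see that scope's
// database elements. Endpoint and scene path are assumptions.
async function executeBatch(commands) {
    const response = await fetch("http://localhost:8080/", {  // assumed endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(commands.map((c, i) => ({ jsonrpc: "2.0", id: i, ...c }))),
    });
    return response.json();
}

executeBatch([
    { method: "create_scope", params: { scope_name: "user_42" } },
    { method: "use_scope",    params: { scope_name: "user_42" } },
    // Everything imported or edited from here on is isolated to user_42's scope.
    { method: "import_scene", params: { scene_name: "my_scene",
                                        block: true,
                                        filename: "scenes/meyemII/main.mi" } },
]).then(results => console.log(results)).catch(console.error);
```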
In this article we’ll take a quick look at how to use the UAC system in RealityServer to effectively manage user sessions and clean up server memory when users go away. There won’t be a lot of pretty pictures (well there is one if you make it to the end) but for those of you getting your hands dirty with RealityServer in production, you’ll get some valuable pointers to help keep your server from filling up with unused data.
In our last post we explored using the RealityServer compositing system to produce imagery for product configurators at scale. Check out that article first if you haven’t already as it contains a great introduction to how the system works. In this follow up post we will explore the possibilities of using the same system to modify the lighting in a scene without having to re-render, allowing us to build a lighting configurator.
RealityServer 5.1 introduced a new way to generate images of configurations of your scenes without the need to re-render them from scratch. We call this Compositing even though it’s actually very different to traditional compositing techniques. In this article we will dive into the detail of how to use the new system to render without rendering and speed up your configurator.
19 September, 2018, London — Project 424 welcomes its first brand technology partners to the team as the development of the world’s first all-electric and autonomous Le Mans Prototype race car enters an exciting new phase of development.
The trio of new partners, Onshape, SimScale and migenius, will form an integral part of Project 424’s overall development through the provision of high-performance, cloud-based tools for the design, simulation and 3D prototype renderings of this unique Le Mans Prototype race car.
TapGlance is a powerful and intuitive interior design app. Within minutes and without any prior experience, you can create photo-realistic images of just about any interior design project you have in mind.
Drag and drop furniture, fixtures and appliances into your plan – more than 2000 items are included with the app for free. Test material combinations using over a thousand included materials or import your own seamless textures or camera photos.