neuray API Programmer's Manual

Overview: Configuration


neuray does not use configuration files, command-line options, or environment variables. Instead, all configuration is done by calling configuration functions on the neuray object. Most configuration options need to be set before starting up the system, but some can be adapted dynamically; see below for details. The following is a short overview of the available configuration options. See the corresponding reference pages for the complete set.

Threading Configuration

The neuray system internally uses a thread pool to distribute the load among the processor cores. The size of the thread pool is configurable. Additionally, the number of threads which can be active at any time can be configured, up to the licensed number of cores.

The number of active threads can be changed dynamically at runtime. This can be used to balance the processor load against the needs of the embedding application or of other applications on the same host. If you decrease the number of threads, it may take a while until the new limit is enforced, because running operations will not be aborted.

The application can use any number of threads to call into the neuray API. Those threads exist in addition to the threads in the thread pool. The neuray scheduling system will, however, include those threads in the count of active threads and limit the usage of threads from the thread pool accordingly.
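The behavior of the dynamic active-thread limit can be pictured with a small, self-contained sketch. The class and method names below are our own illustration, not neuray's internal implementation; the sketch only mirrors the behavior described above, where lowering the limit takes effect gradually as running work finishes.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Illustrative sketch: a counting limiter similar in spirit to how a
// scheduler can cap the number of simultaneously active worker threads.
// Lowering the limit does not abort running tasks; the new limit only
// takes effect as tasks finish and release their slots.
class ActiveThreadLimiter {
public:
    explicit ActiveThreadLimiter(unsigned limit) : m_limit(limit), m_active(0) {}

    // Block until a slot is free, then occupy it.
    void acquire() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_active < m_limit; });
        ++m_active;
    }

    // Release the slot when the task finishes.
    void release() {
        std::lock_guard<std::mutex> lock(m_mutex);
        --m_active;
        m_cv.notify_one();
    }

    // Dynamically change the limit; enforcement is gradual by design.
    void set_limit(unsigned limit) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_limit = limit;
        m_cv.notify_all();
    }

    unsigned active() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_active;
    }

private:
    mutable std::mutex m_mutex;
    std::condition_variable m_cv;
    unsigned m_limit;
    unsigned m_active;
};
```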

Memory Configuration

You can configure a memory limit. The neuray system will try to keep its memory usage below that limit. To achieve this, it flushes data to disk. Objects can be flushed if the system guarantees that other hosts in the network still have them, or if they are the result of a job execution which can be repeated.

neuray can be configured to use a swap directory to which it can flush the contents of loaded assemblies. Those parts can then be dropped from memory if the memory limit is exceeded. They will automatically be reloaded on demand when they are accessed again.

The memory limits can be adapted dynamically. If you decrease the memory limit, neuray will make a best effort to reduce the memory usage to the given amount. Actually enforcing this limit may take a while and is not guaranteed to succeed.

You can configure neuray to use a custom allocator object provided by your application. The allocator has to implement an abstract C++ interface class which exposes the functionality for allocating and releasing memory blocks. It also allows one to inquire about the amount of memory currently in use. Calls to the allocator object may be made from several threads at the same time, including application threads which are currently inside calls to neuray. The allocator object must be implemented to handle this concurrency properly.

If the allocator cannot provide the memory requested by neuray, it needs to signal this failure by returning a 0 pointer. In that case, neuray will try to flush memory to accommodate the request and then retry the allocation. If this is not successful, neuray will terminate and release the memory it uses, although it cannot be guaranteed that all memory will be released. Additionally, in that case it is not possible to restart neuray without restarting the complete process.
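The allocator contract described above can be sketched with a minimal, self-contained example. The interface and class names below are assumptions for illustration; the actual abstract interface is defined in the neuray headers.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Illustrative sketch of the allocator contract described above; the class
// and method names are assumptions, not the actual neuray interface.
class IAllocator {
public:
    virtual ~IAllocator() {}
    // Must return a null pointer on failure instead of throwing.
    virtual void* malloc(std::size_t size) = 0;
    virtual void free(void* memory) = 0;
    // Report the amount of memory currently in use.
    virtual std::size_t get_used_memory() const = 0;
};

// A thread-safe implementation that tracks usage with an atomic counter and
// signals exhaustion by returning a null pointer once a soft cap is reached.
class CappedAllocator : public IAllocator {
public:
    explicit CappedAllocator(std::size_t cap) : m_cap(cap), m_used(0) {}

    void* malloc(std::size_t size) override {
        // Reserve the bytes first; roll back and fail if the cap is exceeded.
        std::size_t used = m_used.fetch_add(size) + size;
        if (used > m_cap) {
            m_used.fetch_sub(size);
            return nullptr; // the caller is expected to flush and retry
        }
        // Store the block size in a header so free() can account for it.
        std::size_t* block =
            static_cast<std::size_t*>(std::malloc(size + sizeof(std::size_t)));
        if (!block) { m_used.fetch_sub(size); return nullptr; }
        *block = size;
        return block + 1;
    }

    void free(void* memory) override {
        if (!memory) return;
        std::size_t* block = static_cast<std::size_t*>(memory) - 1;
        m_used.fetch_sub(*block);
        std::free(block);
    }

    std::size_t get_used_memory() const override { return m_used.load(); }

private:
    std::size_t m_cap;
    std::atomic<std::size_t> m_used;
};
```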

Networking Configuration

Networking can be done in different ways: UDP multicast and TCP unicast, with or without automatic host discovery, are supported.

You can decide how to use the networking capabilities in your application. You can realize a conventional master/slave configuration, similar to mental ray with satellites, where all scene editing and rendering is always initiated from the same host. In that scenario you would typically write a small stub application which performs the configuration, starts the system, and then waits until shutdown is requested. In the background, the neuray system would handle the reception of data and accept and process requests to do work.

But you are not restricted to this scenario with neuray. The neuray library allows all hosts to edit the database and initiate rendering. You can use this to implement peer-to-peer applications. It is up to your application to avoid conflicting changes in this scenario. To help with this, the neuray API provides means for synchronizing changes to database objects, for example by locking objects.

UDP Multicast Networking: Using UDP multicast gives the best performance because data can be sent once by a host and received by many hosts in parallel. Additionally, there is no need to configure a host list, so it is very easy to dynamically add and remove hosts. On the downside, using UDP multicast for high-bandwidth transmissions is not supported by all network switches and might require changes to the network infrastructure. For the UDP multicast case, a multicast address, a network interface, and a port can be configured. A host list is optional and acts as a filter which restricts the hosts that can join to the given list.

Hosts can be started dynamically and will automatically join without the need for configuration. A callback object can be given to the neuray system which will be called when hosts have been added to the network or when hosts have left the network.

TCP/IP Networking: Because multicasting with high bandwidth is not supported on all networks, it is also possible to use a more conventional scheme based on TCP/IP networking, which is supported on virtually all networks. In that case, an address and port to listen on can be configured. A host list is mandatory if the discovery mechanism is not used. Hosts can still be added to and removed from the host list at any time using the neuray API, provided that the necessary redundancy level can be maintained (see below).

TCP/IP networking can be coupled with a host discovery mechanism, in which case an additional address needs to be given. In the case of a multicast address, multicast will only be used to discover other hosts dynamically. In the case of a unicast address, the host with the given address acts as master during the discovery phase. In both cases, the actual data transmission will be done using TCP/IP. Because this mode requires only low-bandwidth multicasting, it is supported by most networks and can be used to simplify the configuration even if high-bandwidth multicast is not supported. Again, a callback object can be given by the application to allow it to keep track of hosts joining and leaving.
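The constraints of the three networking modes can be summarized in a small, self-contained sketch. The enum and field names below are our own illustration, not the actual neuray configuration API; the validity rules mirror the text above (host list mandatory for plain TCP, optional filter for UDP multicast, extra discovery address for TCP with discovery).

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative sketch only: names are assumptions chosen to mirror the
// modes described above, not the actual neuray configuration interfaces.
enum class NetworkingMode {
    UdpMulticast,     // multicast for data; host list optional (acts as a filter)
    TcpUnicast,       // TCP for data; explicit host list is mandatory
    TcpWithDiscovery  // TCP for data; extra discovery address replaces the host list
};

struct NetworkingConfig {
    NetworkingMode mode = NetworkingMode::TcpUnicast;
    std::string address;                 // listen or multicast address
    unsigned short port = 0;
    std::string discovery_address;       // only used by TcpWithDiscovery
    std::vector<std::string> host_list;  // meaning depends on the mode

    // Check the constraints spelled out in the text above.
    bool is_valid() const {
        if (address.empty() || port == 0)
            return false;
        switch (mode) {
        case NetworkingMode::UdpMulticast:
            return true;                // host list is optional
        case NetworkingMode::TcpUnicast:
            return !host_list.empty();  // host list is mandatory
        case NetworkingMode::TcpWithDiscovery:
            return !discovery_address.empty();
        }
        return false;
    }
};
```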

Failure Recovery: A redundancy level can be configured up to a certain maximum. The redundancy level specifies how many copies of a given database object will be kept on the network at a minimum. The neuray database guarantees that the system will continue working without data loss even when hosts fail, provided the following preconditions are met: the number of hosts failing at the same time must be less than the configured redundancy level, and at least one host must survive. For example, with a redundancy level of 3, up to two hosts may fail simultaneously without data loss. After host failures or the administrative removal of hosts, the database will also reestablish the redundancy level if the number of surviving hosts is high enough.
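The failure-tolerance guarantee can be stated as a small predicate. The function below is purely illustrative and not part of the neuray API; it simply encodes the two preconditions from the text above.

```cpp
#include <cassert>

// Illustrative helper for the guarantee above: with redundancy level r, the
// database survives f simultaneous host failures if f < r and at least one
// host remains. The function name is ours, not part of the neuray API.
bool survives_failures(unsigned total_hosts, unsigned redundancy_level,
                       unsigned simultaneous_failures) {
    return simultaneous_failures < redundancy_level
        && simultaneous_failures < total_hosts; // at least one host survives
}
```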

Dynamic Scheduling Configuration Changes: A neuray instance in a multi-hosted system can dynamically be configured to stop delegating rendering work to other hosts, to stop accepting rendering work delegated from other hosts, or to exclude the local host from rendering completely. This can be used to adapt the load on systems to the current usage scenario.

Administrative HTTP Server: The neuray system has a built-in administrative HTTP server which can be started. This server is not identical to the HTTP server framework which can be used to serve requests from customers. The administrative HTTP server does not allow the execution of C++ code or the rendering of images. It is meant to be used to monitor the system at runtime. You can configure whether this server is started (by default it is not) and on which port and interface it listens. The administrative HTTP server allows one to inspect aspects of the neuray database and is thus useful for debugging integrations. Usually it would not be enabled in customer builds.

Rendering Configuration

You can enable GPU usage at startup. For each render request, you can choose which renderer is actually used for rendering the image. Hosts with and without GPU rendering enabled can freely be mixed in multi-hosted rendering.

You can configure the location of the directories where scene files, textures, and MetaSL shaders reside.

You can configure rendering options such as trace depth at any time and per rendered image by editing the options object in the database.

Logging Configuration

Logging in neuray is done using an abstract C++ interface class with one member function, which is called by neuray whenever some part of the system needs to emit a log message. You can provide a log object, which is an implementation of this abstract interface, and register it with the neuray API. Only one log object can be registered with a neuray process at any time.

In a multi-hosted system, all log messages are sent to an elected logging host. This host passes all hosts' log messages to its registered log object. You can influence the log election process to favor a certain host.

Each log message comes with an associated log level and log category. You can decide which log messages to report to the user. The method of the log object that you have registered with the neuray API can be called at any time by any thread in the neuray system, including application threads which initiated operations in neuray. The log object must be implemented to handle this concurrency properly.

You can configure a log verbosity level, and neuray will pre-filter log messages to avoid the processor load associated with generating messages which are never presented to the user.
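The logging contract described in this section can be sketched with a minimal, self-contained example. The interface, enum, and method names below are assumptions for illustration; the actual abstract interface is defined in the neuray headers. The sketch combines two requirements from the text above: pre-filtering by verbosity level and safe handling of concurrent callers.

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <vector>

// Illustrative sketch of the logging contract; the names are assumptions,
// not the actual neuray interface.
enum class LogLevel { Fatal = 0, Error, Warning, Info, Verbose };

class ILogger {
public:
    virtual ~ILogger() {}
    // Called by the system from any thread whenever a message is emitted.
    virtual void message(LogLevel level, const std::string& category,
                         const std::string& text) = 0;
};

// A thread-safe implementation that also pre-filters by verbosity level,
// dropping messages above the configured level before doing any work.
class FilteringLogger : public ILogger {
public:
    explicit FilteringLogger(LogLevel max_level) : m_max_level(max_level) {}

    void message(LogLevel level, const std::string& category,
                 const std::string& text) override {
        if (level > m_max_level)
            return; // filtered out; no formatting or storage cost
        std::lock_guard<std::mutex> lock(m_mutex); // serialize concurrent callers
        m_lines.push_back(category + ": " + text);
    }

    std::vector<std::string> lines() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_lines;
    }

private:
    LogLevel m_max_level;
    mutable std::mutex m_mutex;
    std::vector<std::string> m_lines;
};
```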
