Basic Canvas Operations in V8

RealityServer 5.1 introduced new functionality for working with canvases in V8. In this post I’m going to show you how to do some basic things like resizing and accessing individual canvas pixels. We’ll build a fun little command to render a scene and process the result into a piece of interactive ASCII art. Of course, this doesn’t have much practical utility but it’s a great way to learn about this new feature!

Rendering ASCII Art

ASCII art is a way of representing raster images using only standard ASCII characters rather than pixels. This technique was used heavily before raster-based displays were common, but you can still find many examples today.

We are going to write a new RealityServer command in JavaScript using the Server-Side V8 API. If you haven’t used the V8 API before, this will also serve as a good starting point.

Command Boilerplate

First up we want to bring in some modules we will use in our code. The V8 API uses a require-based module system similar to the one in Node.js. Let’s bring in the Scene and Utils modules for working with our scene data and generating UUIDs respectively.

const Utils = require('Utils');
const Scene = require('Scene');

Next we’ll construct our command object. This specifies the name of our command, its input parameters, return type and so on.

module.exports.command = {
    name: 'example_render_to_ascii',
    description: 'Renders ASCII Art!',
    groups: [ 'render', 'javascript' ],
    arguments: {
        renderer: {
            description: 'Render mode to use.',
            type: 'String',
            default: null
        }
    },
    returns: {
        type: 'String',
        description: 'Our ASCII art.'
    },
    execute: function(args) {
        return 'Implement me!';
    }
};

We just have one parameter for now to show the format. You can see the basic layout of a command: a name (which is used to call the command over JSON-RPC), arguments to define the inputs, a return type and finally an execute function, which takes a single argument containing a map of the arguments defined in the command.
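Since the command is called by name over JSON-RPC, it may help to see what a request for it could look like. The sketch below builds a standard JSON-RPC 2.0 payload; the endpoint URL and exact request conventions depend on your RealityServer configuration, so treat this as illustrative only.

```javascript
// Build a JSON-RPC 2.0 request for the command above. The exact
// endpoint and batching conventions depend on your RealityServer
// configuration, so this is an illustrative sketch only.
const request = {
    jsonrpc: '2.0',
    method: 'example_render_to_ascii',
    params: { renderer: 'iray' },
    id: 1
};

// This is the string you would POST to the server's JSON-RPC endpoint
const body = JSON.stringify(request);
```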

You’ll also notice most items can have a description field. This allows you to document your commands and will be shown to users reviewing the RealityServer command documentation. Even with the minimal information above you start to get a nicely documented command. V8 commands are treated identically to built-in commands; they are just much easier to write. Save your initial command into the v8/commands directory, start RealityServer, and when browsing the documentation you should be able to find your command.

Auto-generated Documentation

OK, so we have the basic structure and our command loads. We need to add more parameters to make it useful, such as the size of the output, a mapping of brightness to characters and, obviously, the name of the scene we want to render. Here is the full boilerplate without any real code yet.

V8 Boilerplate Code

const Utils = require('Utils');
const Scene = require('Scene');

module.exports.command = {
    name: 'example_render_to_ascii_done',
    description: 'Renders ASCII Art!',
    groups: [ 'render', 'javascript' ],
    arguments: {
        renderer: {
            description: 'Render mode to use.',
            type: 'String',
            default: null
        },
        scene_name: {
            description: 'The name of the scene to render.',
            type: 'String'
        },
        text_width: {
            description: 'Width in characters of the output.',
            type: 'Sint32',
            default: 80
        },
        text_height: {
            description: 'Height in characters of the output.',
            type: 'Sint32',
            default: 24
        },
        character_map: {
            description: 'Characters to map with.',
            type: 'String',
            default: '@#$=*!;:~-,.  '
        }
    },
    returns: {
        type: 'String',
        description: 'String of characters representing the image data.'
    },
    execute: function(args) {
        return 'Implement me!';
    }
};

If we run the command now we’ll just get back a rude string asking to be implemented, so let’s add some meat to the main execute function.

Rendering the Canvas

The following code is what we need to add to the execute function to actually get our command to do something. Before rendering an image we want to find out what the resolution is set to on the camera in our scene. We can use the following code for this.

// Get the camera resolution and options from the scene
const scene = new Scene(args.scene_name);
const camera = scene.camera_instance.item.as('Camera');
const canvas_resolution_x = camera.resolution_x;
const canvas_resolution_y = camera.resolution_y;

The first line uses the scene name from our arguments to create a Scene object, pre-populated with the information from the given scene. We need this since the camera instance and the camera it references are stored in the scene. In the next line we fetch the camera from the scene’s camera instance. Then we can retrieve the resolution from the camera. Next we need some unique names for our canvas and the render context used to render the scene.

// Defaults for items requiring unique names
const canvas_name = Utils.uuid();
const render_context_name = Utils.uuid();

This uses the uuid function in our Utils library to make a unique identifier for us. Now we have everything we need to do an actual render, which we can do with the following code.

// Render to a canvas so we can manipulate it
const render_canvas = RS.render_to_canvas({
    canvas_pixel_type: 'Rgb',
    canvas_content: 'result',
    canvas_name: canvas_name,
    canvas_resolution_x: canvas_resolution_x,
    canvas_resolution_y: canvas_resolution_y,
    render_context_name: render_context_name,
    render_context_timeout: 10,
    render_context_options: {},
    renderer: args.renderer,
    scene_name: args.scene_name
});

You can refer to the documentation for render_to_canvas for more details on the settings. Using the RS object we can call any standard RealityServer command, which we do here for rendering. The render_to_canvas command returns a Canvas object which contains the image data we are going to use. Before going further, let’s check that we actually rendered successfully.

// Fetch the numeric result of the last render
const result = RS.get_last_render_result({
    render_context_name: render_context_name,
    scene_name: args.scene_name
});

// Make sure our render actually worked before trying to go further
if (result < 0) {
    throw new Error("Render call returned " + result);
}

The get_last_render_result command gets a numeric result representing the outcome of the previous render. If this number is negative then something went wrong so we shouldn’t continue. If all went well we have a valid image to work with.

Image Manipulation

Now that we have an image in our canvas we can do things with it. Let’s start by resizing the image to make it much smaller, since we want to output far fewer characters in our ASCII art than there are pixels in the original image. We can do this easily now in RealityServer 5.1 like this.

// Resize the image with a bicubic filter so we can generate one character per pixel
const canvas = render_canvas.resize('cubic', args.text_width, args.text_height);

You can find more detail in the documentation for the resize method on the Canvas class, including the various filter types available. This method returns a new canvas and does not change the original one. With a nice small canvas in hand, let’s loop over all of the pixels and do something to them.
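To build some intuition for what a resize does, here is a minimal stand-alone sketch of a box-filter downsample over a greyscale array. The function name and representation are illustrative only; RealityServer’s resize uses higher-quality filters such as the bicubic one above.

```javascript
// Downsample a greyscale image (array of rows) by averaging the source
// pixels that fall into each destination cell -- a simple box filter.
// This only sketches the idea; RealityServer's Canvas.resize supports
// better filters such as the bicubic one used in the article.
function boxResize(src, dstWidth, dstHeight) {
    const srcHeight = src.length;
    const srcWidth = src[0].length;
    const dst = [];
    for (let y = 0; y < dstHeight; y++) {
        const row = [];
        const y0 = Math.floor(y * srcHeight / dstHeight);
        const y1 = Math.max(y0 + 1, Math.floor((y + 1) * srcHeight / dstHeight));
        for (let x = 0; x < dstWidth; x++) {
            const x0 = Math.floor(x * srcWidth / dstWidth);
            const x1 = Math.max(x0 + 1, Math.floor((x + 1) * srcWidth / dstWidth));
            let sum = 0, count = 0;
            for (let sy = y0; sy < y1; sy++) {
                for (let sx = x0; sx < x1; sx++) {
                    sum += src[sy][sx];
                    count++;
                }
            }
            row.push(sum / count);
        }
        dst.push(row);
    }
    return dst;
}

// A 4x4 image averaged down to 2x2: each output pixel is the
// mean of the corresponding 2x2 block of source pixels
const small = boxResize([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0]
], 2, 2);
```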

// RealityServer canvases have 0,0 at the bottom left so we need to iterate
// the lines of the image in reverse to get the right order
for (var y = canvas.height - 1; y >= 0; y--) {
    for (var x = 0; x < canvas.width; x++) {
        // Get the color of the pixel from our canvas
        var color = canvas.get_pixel(x, y);
        // Compute a grey scale color from the NTSC intensity
        var intensity = color.ntsc_intensity();
        var grey = new RS.Math.Color(intensity, intensity, intensity);
        // Set the color of all three channels to the intensity
        canvas.set_pixel(x, y, grey);
    }
}

This isn’t quite our ASCII art yet, but it shows how to iterate over the pixels in our canvas and do something simple. In this case we take the colour, make a grey scale version of it and set it back into the canvas. If we were to return this canvas from the command we’d have a very small grey scale image. Now that we have all of the pieces, we can change the loop above to actually perform the ASCII art remapping.
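The ntsc_intensity() method used above returns a luminance value for the colour. Assuming it uses the standard NTSC (Rec. 601) luma weights, as the name suggests, you could compute the same quantity yourself like this; check the RealityServer documentation for the exact definition.

```javascript
// Luminance using the NTSC / Rec. 601 weights. That this matches
// Color.ntsc_intensity() is an assumption based on its name.
function ntscIntensity(r, g, b) {
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Pure green reads much brighter to the human eye than pure blue
const green = ntscIntensity(0, 1, 0);  // 0.587
const blue = ntscIntensity(0, 0, 1);   // 0.114
```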

Making Art

So, how do we go about converting a pixel in our image into a character to display on the screen? There are actually many methods, but we will use a very simple one. We’ll take the intensity of our pixel and use it to index into a selection of characters arranged from darkest-looking to brightest-looking.

You can see above that we can already obtain the intensity of our colour using the ntsc_intensity() method. Assuming our image is already tone-mapped, this will be in the [0,1] range, so to look up our character we can just multiply the intensity by the largest valid index into the character map (its length minus one) and truncate the result to an integer.

So, how do we do this in code? It’s actually pretty simple. Let’s complete the command by changing our manipulation loop and returning the result. First, though, let’s look at the line that does the magic.

// Compute pixel brightness and index into the character map
image += args.character_map[
    Math.floor(color.ntsc_intensity() * character_map_length)
];

Here image is a string which will hold our result, and we simply look up the character map supplied in the arguments using the intensity as an index. When constructing the character map, you want to put characters that look darker on the screen earlier and those that look brighter later. Now we can put it all together in our loop.

// String to represent our "image"
var image = "";

// Loop over the canvas and generate the characters representing the colors
// RealityServer canvases have 0,0 at the bottom left so we need to iterate
// the lines of the image in reverse to get the right order of characters
const character_map_length = args.character_map.length - 1;
const linebreak_index = canvas.width - 1;
for (var y = canvas.height - 1; y >= 0; y--) {
    for (var x = 0; x < canvas.width; x++) {
        // Get the color of the pixel from our canvas
        var color = canvas.get_pixel(x, y);
        // Compute pixel brightness and index into the character map
        image += args.character_map[
            Math.floor(color.ntsc_intensity() * character_map_length)
        ];
        // Add newline for the end of every row of the image
        if (x === linebreak_index) image += '\n';
    }
}

// Send back the string representing the image
return image;

We have had to add a little extra code to insert line breaks in the string so it doesn’t all come out on a single line. This is fast enough to run interactively, and you can see it in action in the ascii_demo application that ships with RealityServer 5.1. So then, here is the complete command in all its glory.
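Incidentally, the pixel-to-character part of the pipeline can be tried entirely outside RealityServer. Here is a small self-contained sketch that maps a row-major array of intensities straight to characters, using the same floor-based lookup and line-break logic as the command (the function name and data are illustrative only):

```javascript
// Convert a greyscale image (values in [0,1], row-major, top-down)
// to ASCII art using the same intensity-to-character lookup as the
// command, inserting a newline at the end of each row.
function toAscii(pixels, width, characterMap) {
    const maxIndex = characterMap.length - 1;
    let image = '';
    for (let i = 0; i < pixels.length; i++) {
        image += characterMap[Math.floor(pixels[i] * maxIndex)];
        if ((i + 1) % width === 0) image += '\n';
    }
    return image;
}

// Two rows of a simple dark-to-light horizontal gradient
const art = toAscii([0, 0.4, 0.7, 1, 0, 0.4, 0.7, 1], 4, '@#. ');
```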

Full Command Source Code

const Utils = require('Utils');
const Scene = require('Scene');

module.exports.command = {
    name: 'example_render_to_ascii',
    description: 'Renders ASCII Art!',
    groups: [ 'render', 'javascript' ],
    arguments: {
        renderer: {
            description: 'Render mode to use.',
            type: 'String',
            default: null
        },
        scene_name: {
            description: 'The name of the scene to render.',
            type: 'String'
        },
        text_width: {
            description: 'Width in characters of the output.',
            type: 'Sint32',
            default: 80
        },
        text_height: {
            description: 'Height in characters of the output.',
            type: 'Sint32',
            default: 24
        },
        character_map: {
            description: 'Characters to map with.',
            type: 'String',
            default: '@#$=*!;:~-,.  '
        }
    },
    returns: {
        type: 'String',
        description: 'String of characters representing the image data.'
    },
    execute: function(args) {
        // Get the camera resolution and options from the scene
        const scene = new Scene(args.scene_name);
        const camera = scene.camera_instance.item.as('Camera');
        const canvas_resolution_x = camera.resolution_x;
        const canvas_resolution_y = camera.resolution_y;
        
        // Defaults for items requiring unique names
        const canvas_name = Utils.uuid();
        const render_context_name = Utils.uuid();

        // Render to a canvas so we can manipulate it
        const render_canvas = RS.render_to_canvas({
            canvas_pixel_type: 'Rgb',
            canvas_content: 'result',
            canvas_name: canvas_name,
            canvas_resolution_x: canvas_resolution_x,
            canvas_resolution_y: canvas_resolution_y,
            render_context_name: render_context_name,
            render_context_timeout: 10,
            render_context_options: {},
            renderer: args.renderer,
            scene_name: args.scene_name
        });

        // Fetch the numeric result of the last render
        const result = RS.get_last_render_result({
            render_context_name: render_context_name,
            scene_name: args.scene_name
        });

        // Make sure our render actually worked before trying to go further
        if (result < 0) {
            throw new Error("Render call returned " + result);
        }

        // Resize the image with a bicubic filter so we can generate one character per pixel
        const canvas = render_canvas.resize('cubic', args.text_width, args.text_height);

        // String to represent our "image"
        var image = "";

        // Loop over the canvas and generate the characters representing the colors
        // RealityServer canvases have 0,0 at the bottom left so we need to iterate
        // the lines of the image in reverse to get the right order of characters
        const character_map_length = args.character_map.length - 1;
        const linebreak_index = canvas.width - 1;
        for (var y = canvas.height - 1; y >= 0; y--) {
            for (var x = 0; x < canvas.width; x++) {
                // Get the color of the pixel from our canvas
                var color = canvas.get_pixel(x, y);
                // Compute pixel brightness and index into the character map
                image += args.character_map[
                    Math.floor(color.ntsc_intensity() * character_map_length)
                ];
                // Add newline for the end of every row of the image
                if (x === linebreak_index) image += '\n';
            }
        }

        // Send back the string representing the image
        return image;
    }
};

Experimenting

Now that everything works we can have a bit of fun. We have exposed the character map as a parameter to the command, so we can influence the look and feel of the results simply by changing the string of characters. Since we index into the string, it can be as long or short as we like. Here are a couple of examples using different character maps.

@#$%&*!^()=-

$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft/\|()1{}[]?-_+~<>i!lI;:,"^`'.

The first is similar to the default used in our example command, while the second is taken from Paul Bourke’s page on character representation of grey scale images. You can get very different results just by changing this one parameter.
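To see the effect of a character map without rendering anything, you can feed the same ramp of intensities through two different maps. The snippet below performs the same lookup the command does; the maps used here are a shortened version of our default and the Bourke ramp.

```javascript
// Look up one character per intensity value for a given map,
// using the same floor-based indexing as the command
function ramp(intensities, map) {
    return intensities
        .map(i => map[Math.floor(i * (map.length - 1))])
        .join('');
}

const steps = [0, 0.25, 0.5, 0.75, 1];
const coarse = ramp(steps, '@#$=*!;:~-,. ');
const fine = ramp(steps, '$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft');
```

A longer map gives a smoother tonal gradient for the same input, which is why the Bourke ramp produces more nuanced images.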

Summing Up

RealityServer 5.1 includes a demo application and command based on the ideas discussed in this article so you can check those out without doing any coding but I’d encourage you to make your own and experiment.

We’ve covered a lot of ground in this article: setting up a basic V8 server-side command, rendering to canvases, manipulating canvas data and creating ASCII art. The technique used here is very simple; there are many more sophisticated ways to generate ASCII art, and it’s well worth a search around. Many of the techniques shown here apply to much more useful situations, but you never know when you might want to navigate your 3D scene on your VT100 terminal.

Paul Arden

Paul Arden has worked in the computer graphics industry for over 20 years, co-founding the architectural visualisation practice Luminova out of university before moving to mental images and NVIDIA to manage the Cloud-based rendering solution RealityServer, now managed by migenius, where Paul serves as CEO.

