Tuesday 18 September 2012

Some more optimizations

When you care about the performance of a system you need to look everywhere. You start by optimizing your code at the critical points; in the case of a render engine that might be ray traversal, shading code... acceleration structure builds.

Lately I'm quite happy with Glimpse's rendering speed, but something that has always bothered me is how long it takes to get data from the host application, in this case Maya, to the render engine.
In the past I used to wait 10 minutes for a preview render, and in that context 15 seconds of frame translation is not a big deal. But now I can get a low quality, noisy frame rendered in a fraction of a second. In this scenario a 10 second frame translation is wildly inappropriate!

I was doing some tests the other day. Frame rendering was about 70 seconds for a very clean image at 2K resolution, and about a second for a half-res, low quality preview... but 26 seconds of data translation.
It turns out that many Maya API calls are rather slow. One family to avoid like the plague is the MItMesh* iterators.
Other particularly slow subsystems are light linking and material assignments. If you ever try to write your own translator from Maya, do yourself a favor and extract such data from plugs and connections rather than relying on the higher level API to do the work for you.
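As an illustration, here is a minimal sketch (my own, not Glimpse's actual translator code) of pulling shading assignments straight from a mesh's plugs and connections; the function name and structure are hypothetical, only the Maya API calls themselves are real.

    #include <maya/MFnDependencyNode.h>
    #include <maya/MPlug.h>
    #include <maya/MPlugArray.h>
    #include <maya/MObject.h>
    #include <maya/MObjectArray.h>
    #include <maya/MFn.h>

    // Collect the shadingEngine nodes connected to a mesh shape by walking its
    // instObjGroups plug directly, instead of going through higher level helpers.
    void collectShadingEngines(const MObject& meshShape, MObjectArray& shadingEngines)
    {
        MFnDependencyNode depFn(meshShape);
        MPlug instObjGroups = depFn.findPlug("instObjGroups", true);

        for (unsigned int i = 0; i < instObjGroups.numElements(); ++i)
        {
            MPlugArray destinations;
            instObjGroups.elementByPhysicalIndex(i).connectedTo(
                destinations, false /*asDst*/, true /*asSrc*/);

            for (unsigned int j = 0; j < destinations.length(); ++j)
            {
                MObject node = destinations[j].node();
                if (node.hasFn(MFn::kShadingEngine))
                    shadingEngines.append(node);
            }
        }
    }

This only catches whole-object assignments; per-face assignments hang off the objectGroups child plug, but the idea is the same: read the connections yourself and skip the convenience layers.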

After some changes the translator is now between 4 and 7 times faster.
Sorry, no pictures this time :)

Saturday 1 September 2012

Motion blur

Motion blur is a deal breaker for a production renderer. It is often one of those features that determines whether an engine can be considered "production ready". For many years raytracers had a bad name because they couldn't do it right: it was either too slow or too poor in quality.

In practice, the concept is fairly simple when applied to bounding volume hierarchies. Each tree node contains multiple bounds, describing the discrete motion steps. During ray traversal the motion bounds are first linearly interpolated based on the ray's time, and the result is tested for intersection. The same happens for the leaf nodes and primitives.
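In code, the per-node work looks roughly like this minimal sketch (types and names are mine, not Glimpse's or Embree's): each node stores one box per motion step, and traversal lerps them with the ray time before the usual slab test.

    struct Vec3 { float x, y, z; };
    struct AABB { Vec3 lo, hi; };

    // Two boxes per node, one for each discrete motion step (shutter open/close).
    struct MotionNode
    {
        AABB bounds[2];
        // ... children / primitive references ...
    };

    static inline float lerp(float a, float b, float t) { return a + t * (b - a); }

    // Blend the motion bounds at the ray's time; the result is what gets
    // intersected against the ray, exactly like a static node's box would be.
    AABB boundsAtTime(const MotionNode& node, float rayTime)
    {
        const AABB& b0 = node.bounds[0];
        const AABB& b1 = node.bounds[1];
        AABB box;
        box.lo = { lerp(b0.lo.x, b1.lo.x, rayTime),
                   lerp(b0.lo.y, b1.lo.y, rayTime),
                   lerp(b0.lo.z, b1.lo.z, rayTime) };
        box.hi = { lerp(b0.hi.x, b1.hi.x, rayTime),
                   lerp(b0.hi.y, b1.hi.y, rayTime),
                   lerp(b0.hi.z, b1.hi.z, rayTime) };
        return box;
    }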
What is the "ray time" then? When you generate primary samples you associate a random time with each of them, most likely in the [0,1) range, referring to the fractional moment between shutter open and shutter close. All secondary rays (indirect illumination, shadow, etc.) derived from a primary sample inherit its time.
If you are using stratified sampling, or other low discrepancy sequences, you should stratify the random time too to get a nice distribution, but make sure you scramble the sequence so there is no correlation between ray direction and time. That would cause temporal aliasing.
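A minimal sketch of what I mean, assuming N samples per pixel and a simple per-pixel shuffle as the scramble:

    #include <algorithm>
    #include <random>
    #include <vector>

    // Stratified shutter times in [0,1): one jittered sample per stratum, then
    // a shuffle so that sample index (and therefore the direction strata) does
    // not correlate with time.
    std::vector<float> stratifiedTimes(unsigned sampleCount, std::mt19937& rng)
    {
        std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
        std::vector<float> times(sampleCount);
        for (unsigned i = 0; i < sampleCount; ++i)
            times[i] = (float(i) + jitter(rng)) / float(sampleCount);
        std::shuffle(times.begin(), times.end(), rng); // decorrelate from index
        return times;
    }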

Recently I noticed that Intel Embree 1.1 got released. Some of the memory improvements Intel's engineers implemented remarkably resemble suggestions I made on their forum. That aside, v1.1 also sees a decent implementation of motion blur.

Looking at the code I noticed that their implementation is quite similar to the one I had in mind, except for the tree build, which I had only a vague idea of how to deal with (in the motion blur sense). A nice trick to simplify BVH construction is to build the tree halfway through the motion: that is probably the most optimal place to minimize bounds overlap. Then, at the end of the build process, refit the tree to the motion steps and store the multiple keys. Refitting is a lot quicker than building a new tree from scratch for each motion step. Refitting is known to underperform in animation compared to a fresh rebuild, but here we are talking about the motion within half a frame (and, building the tree at mid time, it's really just a quarter of a frame). Objects generally won't move that much in a fraction of a frame, not enough to cause a strong performance degradation.
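Here is a minimal sketch of that refit step, with hypothetical node and box types (my own illustration, not Embree's or Glimpse's code): the topology stays fixed, and the per-step boxes are recomputed bottom-up from the primitive bounds at each motion key.

    #include <algorithm>
    #include <vector>

    struct AABB
    {
        float lo[3], hi[3];
        AABB() { for (int k = 0; k < 3; ++k) { lo[k] = 1e30f; hi[k] = -1e30f; } }
        void merge(const AABB& b)
        {
            for (int k = 0; k < 3; ++k)
            {
                lo[k] = std::min(lo[k], b.lo[k]);
                hi[k] = std::max(hi[k], b.hi[k]);
            }
        }
    };

    struct Node
    {
        AABB bounds[2];      // one box per motion step after the refit
        int  left, right;    // child node indices, -1 for a leaf
        int  firstPrim, primCount;
    };

    // The topology was built once at mid-shutter; here we only recompute the
    // boxes for each motion step, bottom-up, which is much cheaper than a rebuild.
    void refit(std::vector<Node>& nodes,
               const std::vector<AABB> primBounds[2], // primitive boxes per step
               const std::vector<int>& primIndices,
               int nodeIdx)
    {
        Node& n = nodes[nodeIdx];
        if (n.left < 0) // leaf: merge the boxes of the referenced primitives
        {
            for (int s = 0; s < 2; ++s)
            {
                n.bounds[s] = AABB();
                for (int i = 0; i < n.primCount; ++i)
                    n.bounds[s].merge(primBounds[s][primIndices[n.firstPrim + i]]);
            }
            return;
        }
        refit(nodes, primBounds, primIndices, n.left);
        refit(nodes, primBounds, primIndices, n.right);
        for (int s = 0; s < 2; ++s)
        {
            n.bounds[s] = nodes[n.left].bounds[s];
            n.bounds[s].merge(nodes[n.right].bounds[s]);
        }
    }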

During the course of last week I had a crack at implementing a mix of my ideas and inspirations taken from Embree.

It seems that the performance degradation, compared to rendering without motion blur, is around 15%. Better than I expected! I know there are several optimizations I can do to make it better. For instance I am sampling motion bounds and primitives for static objects too, which is a big waste of computation and memory.

If you want to challenge yourself, this is a nice paper:

It extends the idea of Split BVH to the concept of motion.

   

Monday 20 August 2012

Why Glimpse?

When you begin a project, any project, you need a purpose. It can be passion, business, or being forced to do it (like those days in the high school lab). So why am I doing this thing? Two main reasons:

  1. Passion. I always wanted to, but for years I have been stuck following other projects.
  2. To learn. Everybody says, "do not reinvent the wheel"... well, most of the time somebody who says that is actually saying "Don't challenge me" or "Don't make me look bad"... or really "Don't waste your time, somebody better than you did it already". However, if you want to learn something you have to get your hands dirty. I'm reinventing the wheel because I want to know more about wheels.
Would that be enough to keep my determination high? Hum... probably not.
For many years I have been working in the VFX industry. If you know something about the visual effects for a movie, or a feature animation project, then you know how much of a bottleneck rendering is for most of the creative process.

Unless you are a visionary, you are not truly creative if you don't have interactive tools that let you explore and visualize what you are doing. Imagine a painter doing a portrait with his eyes closed, opening them to look at the canvas for one second every few minutes.
For my entire career I have felt like that. I often need to "play machine", thinking in terms of the tool's mechanics to figure out how best to perform what should be a purely intuitive artistic action. That is not good enough.

You can argue that there are plenty of good renderers out there that feature interactive rendering options where you can change parameters and see the result. You could also argue that out of them only a very few have the features needed to do what I do. I could use one of them and be happy... but that would not satisfy points 1 and 2. Glimpse is my rationalization: the render engine I would like to use, and that I will probably never finish.
In this idealization I would never leave the render engine. The editor (like Maya) will open with it and I will use it as my main visualization tool, pretty much the way you work with your OpenGL quad-views. Everything I do in my scene should be instantly reflected in the renderer. I'm still miles away from that, but if you saw the video in the previous post you know I'm getting closer.

Enough with the boring motivational background. If you care about rendering you want to know how it's done :)

Glimpse is a traditional path raytracer using a MIS integrator. It implements a couple of physically plausible shading models (a diffuse and a Beckmann microfacet BRDF), and a few light types, like dome (IBL capable), distant and area lights with a few shapes.
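For the MIS part, the standard way to combine light sampling and BSDF sampling is Veach's power heuristic; here is a minimal sketch of it (an illustration of the general technique, not necessarily Glimpse's exact weighting):

    // Veach's power heuristic with beta = 2: the weight for a sample drawn
    // from the strategy with density pdfA, competing against density pdfB.
    float powerHeuristic(float pdfA, float pdfB)
    {
        float a = pdfA * pdfA;
        float b = pdfB * pdfB;
        return a / (a + b);
    }

    // Typical use when shading a light sample:
    //   contribution = f * Li * powerHeuristic(lightPdf, bsdfPdf) / lightPdf;
    // and symmetrically for the BSDF-sampled direction.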

Glimpse runs on the CPU, which doesn't make much sense for a visualization tool... it should run on the GPU and be 5x faster. When I began writing this I didn't have any GPU at hand, but the real reason is that GPUs perform well on small data sets that fit in their limited graphics memory. A typical production scene is way too large to fit in there, and as soon as you are forced to stream data out of core, bye bye GPU performance. CPUs seem a better candidate for now, and they are easier to program.

Thursday 16 August 2012

Right in the middle of it

I should have done this a while ago... I hope you love it when you get thrown into the middle of a story, without a clue of what is happening, and you have to figure out the characters and the main plot from the events that follow. Well, in this case I don't have much choice other than to start documenting from the middle of the story. What story?

Several months ago I began a journey into the world of render engine development. That's not entirely true... I wrote my first render engine in 2002. It was a terrible scanline renderer with plenty of bugs and limitations: it read a Maya file and rendered gray shaders on polygonal surfaces, one line at a time... That project didn't last long.

Back to now. The project is called Glimpse. I have decided to write here about my progress and failures. For the moment I'll just point you to a small video I recorded a few months ago... you know, just to give you some background. More will come.

https://vimeo.com/40088608