The Making of the Blender 2013 Demo Reel
Two years ago, SIGGRAPH took place outside of the US for the first time, in Vancouver, Canada, where I live. One thing led to another, and I ended up putting together a Blender demo reel for the exhibit. The reel worked well, and it has since become a tradition to release a new one each year just before SIGGRAPH. (The first one was finished literally three hours before the exhibit floor opened.)
So this year, I am at it again! This post will walk through the process I use to cobble a reel together.
Before doing any real work, a few decisions have to be made. Do we want to include stills? What about feature tests? Game engine content? Once those are settled, it’s time to get crackin’.
Collecting footage was definitely the least pleasant part of the job. I started with an open “Call for Content” on BlenderNation, which led to over 120 entries. After downloading them all, I went through each one, picking out the exceptional entries based on artistic and technical merit. On top of the submitted work, I reached out to many other artists: Twitter, YouTube, Vimeo, Facebook, email, all of these channels were used to track people down. All in all, I ended up with 20GB of mostly 1080p videos.
I then went through all of them again to log the bits I wanted to use from each video. This gave me a very rough idea of how much workable material I had, and how long the final reel would be. I had a lot of fun doing this; watching all this amazing artwork just makes me happy and inspired.
Music choice was limited by the draconian licensing restrictions of most record labels and YouTube. So I spent quite a bit of time on Jamendo sampling Creative Commons songs. One artist even offered me something he had been working on, gratis. But in the end, I reached out to the almighty Jan Morgenstern, who composed the soundtracks for Sintel, Big Buck Bunny, and Elephants Dream. He provided me with a catchy beat that I feel fits well.
So with the 4-minute music track laid down, I started building the reel by assembling snippets of footage. True to the spirit of the demo reel, Blender’s Video Sequence Editor (VSE) was used for this part.
The submitted videos all came in at different framerates: 23.976, 24, 25, 29.97, and 30fps. Luckily, the Blender VSE doesn’t do any frame interpolation, so some footage simply plays back slightly sped up. This proved to be completely undetectable, and it avoids the dreaded frame blending that would otherwise be required. Any audio from the original videos was dropped in favor of the soundtrack.
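The retiming math behind this is simple enough to sketch. When the VSE shows source frames 1:1 with no interpolation, a clip's effective playback speed is the ratio of the timeline framerate to the source framerate. (The post doesn't state the project framerate; 30fps is assumed here for illustration.)

```python
def speed_factor(source_fps: float, timeline_fps: float = 30.0) -> float:
    """Effective playback speed when each source frame is shown
    exactly once at the timeline framerate (no interpolation)."""
    return timeline_fps / source_fps

# A 29.97fps clip on a 30fps timeline plays about 0.1% fast,
# which is why the speed-up goes unnoticed:
print(f"{speed_factor(29.97):.4f}x")
```

For rates very close to the timeline rate (29.97 vs. 30, 23.976 vs. 24) the change is a fraction of a percent; clips whose framerate differs more would need explicit retiming in the strip settings.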
I tried my best to group related ideas together to make the demo reel as cohesive as possible. All the architectural visualizations are collected in one place, as are most of the non-photorealistic renderings. This grouping really helped keep the reel coherent.
A lot of time was also spent tweaking the cut to fit the beat of the music. I find it unintuitive that the default playback behavior lets the video fall far behind the audio; this is resolved by setting the playback mode to ‘AV-sync’ in the Timeline.
By the way, syncing video to the beat is a lot easier when you can see the waveform of the audio track.
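Both of these settings can also be flipped from Blender’s Python console. A hedged sketch against the bpy API as of the 2.6x series (this is a settings fragment that only runs inside Blender, not a standalone script):

```python
import bpy  # available only inside Blender

# Equivalent of choosing 'AV-sync' in the Timeline's playback options:
bpy.context.scene.sync_mode = 'AUDIO_SYNC'

# Show the waveform on every sound strip in the sequencer:
for strip in bpy.context.scene.sequence_editor.sequences_all:
    if strip.type == 'SOUND':
        strip.show_waveform = True
```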
On the VSE
The VSE in Blender 2.68a got the job done. But one can always hope for more, right? There is a GSoC project on the VSE this year, so maybe we’ll see some improvements. Here are my top three requests:
- Improve playback and rendering performance. I understand all the decoding, scaling, and filtering is done on the CPU right now, which is severely limiting. GPU decoding and processing should significantly improve the VSE’s performance.
- A library/asset system for video clips would be nice.
- A more robust proxy system for video strips. Having to create proxies manually is tedious. Also, could proxy creation be made a non-blocking operation that runs in the background?
So, after watching the reel, what do you think? Is Blender going places?