Day 10 of the #DefaultCubism Challenge. Not being able to modify the default cube means I had to get pretty creative with this image:
- The spinning top is made from the default cube: first subsurfed, then cast into a sphere, then deformed with a lattice (sketched in Python after this list).
- The ground plane is actually a duplicate of the top (made with the Array modifier), scaled up 1000 times.
- Rendered in the ever-amazing Cycles renderer.
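For anyone curious how that stacks up in script form, here is a minimal sketch of the top's modifier stack in Blender Python. The names and values are illustrative, not the exact settings used for the image:

```python
import bpy

cube = bpy.data.objects["Cube"]

# Round the cube off with subdivision surfaces
subsurf = cube.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = subsurf.render_levels = 3

# Push the subdivided cube out into a sphere
cast = cube.modifiers.new(name="Cast", type='CAST')
cast.cast_type = 'SPHERE'
cast.factor = 1.0

# Deform the sphere into the top's silhouette with a lattice
# (assumes a lattice object named "Lattice" already exists)
lattice = cube.modifiers.new(name="Lattice", type='LATTICE')
lattice.object = bpy.data.objects["Lattice"]
```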
Day 8 of the #DefaultCubism Challenge, where I try to create an entire scene with just the default cube in Blender.
This one deserves a bit more explanation, as it’s not a typical Blender Internal/Cycles scene. It’s a parallax-based terrain shader that uses a relief map to create an entirely 3D-looking terrain out of a single flat surface. The effect is far cooler in motion. To see the shader, you’d have to start the Game Engine with the P key (and have an Nvidia card).
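For the curious, the heart of any parallax shader is a per-pixel offset of the texture lookup along the view direction. Here is a minimal sketch of that idea in plain Python (the real effect runs as GLSL in the Game Engine; the scale value is an assumption):

```python
def parallax_uv(uv, view_ts, height, scale=0.05):
    """Offset a texture lookup along the tangent-space view vector.

    uv      -- (u, v) texture coordinates
    view_ts -- normalized tangent-space view direction (x, y, z)
    height  -- relief-map height sampled at uv, in [0, 1]
    scale   -- effect strength (illustrative value)
    """
    u, v = uv
    x, y, z = view_ts
    offset = height * scale
    # Texels that are 'higher' shift more, faking depth on a flat surface
    return (u + x / z * offset, v + y / z * offset)
```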
As we enter day 4 of the A Cube a Day experiment, I thought I’d clarify the rules I’ve set up, both for myself and for anyone else who wants to participate.
Goal: To produce an image in Blender using only the Default Cube.
- Basically, no modelling of the cube is allowed. This includes sculpting and any editing of the base mesh. Keep those 8 vertices exactly where they are!
- Curves, Surfaces, Metaballs and Text are not allowed when used as geometry.
- Any number of Empties, Lattices, Lights, Cameras, Groups, Materials, Textures and Nodes are allowed. The modifier stack, particles and simulations are allowed and encouraged.
- Weight painting is the exception to the ‘No editing of base mesh’ rule and is allowed. Go nuts with Vertex Groups if it helps.
- Python coding is allowed.
- Curves are allowed when they are not being used directly as geometry.
- To encourage clever (ab)use of the library system, multiple Blend files are allowed, but each file needs to adhere to all the rules above (see the sketch after this list).
- Post-processing using Blender or other apps is allowed, as long as it does not introduce any significant new visual element.
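As an aside on the library rule, pulling assets in from another rule-compliant file is scriptable too. A minimal sketch with the 2.6x Python API (the file name is hypothetical):

```python
import bpy

# Link every object from another rule-compliant .blend file
with bpy.data.libraries.load("//other_cube.blend", link=True) as (data_from, data_to):
    data_to.objects = data_from.objects

# Add the linked objects to the current scene
for obj in data_to.objects:
    if obj is not None:
        bpy.context.scene.objects.link(obj)
```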
Most importantly, have fun and share your work!
Blender 2013 Demo Reel: the Making of.
Two years ago, Siggraph took place outside of the US for the first time, in Vancouver, Canada, where I live. One thing led to another, and I ended up putting together a Blender demo reel for the exhibit. The reel worked well, and it has since become a tradition to release a new reel each year just before Siggraph. (The first one was finished literally 3 hours before the exhibit floor opened.)
So this year, I am at it again! This post will walk through the process I use to cobble a reel together.
Before doing any real work, a few decisions had to be made. Do we want to include stills? What about feature tests? Game engine content? Once those are finalized, it’s time to get crackin’.
Collecting footage was definitely the least pleasant part of the job. I started with an open “Call for Content” on BlenderNation, which led to over 120 entries. After downloading them all, I went through each, picking out the exceptional ones based on artistic and technical merit. On top of the submitted work, I reached out to many other artists; Twitter, YouTube, Vimeo, Facebook and email were all used to track people down. All in all, I ended up with 20GB of mostly 1080p videos.
I went through all of them again to log the bits I wanted to use from each video. This gave me a very rough idea of how much workable material I had, and how long the final reel would be. I had a lot of fun doing this; watching all this amazing artwork just makes me happy and inspired.
Music choice was limited due to the draconian licensing restrictions of most record labels and YouTube. So I spent quite a bit of time on Jamendo sampling Creative Commons songs. One artist even offered me something he’d been working on, gratis. But in the end, I reached out to the almighty Jan Morgenstern, who composed the soundtracks for Sintel, Big Buck Bunny and Elephants Dream. He provided me with a catchy beat that I feel goes well with the footage.
So with the 4-minute track laid down, I started building the reel by assembling snippets of footage together. True to the spirit of the demo reel, Blender’s Video Sequence Editor was used for this part.
The submitted videos all came in at different framerates: 23.97, 24, 25, 29.97 and 30fps. Luckily, the Blender VSE doesn’t do any frame interpolation, so mismatched footage is simply played slightly fast or slow. This proved completely undetectable, and it avoids the dreaded frame blending that would otherwise be required. Any audio from the original videos was dropped in favor of the soundtrack.
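To get a sense of how small those speed changes are, here is a quick back-of-the-envelope check in plain Python. It assumes source frames are simply mapped one-to-one onto a 24fps timeline, which is my reading of what the VSE does when it skips interpolation:

```python
# Percentage speed change when source frames play back one-to-one at 24 fps
timeline_fps = 24.0
for source_fps in (23.97, 25.0):
    change = (timeline_fps / source_fps - 1.0) * 100.0
    print("%g fps source plays %+.2f%% off real time" % (source_fps, change))

# 23.97 fps source plays +0.13% off real time
# 25 fps source plays -4.00% off real time
```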
I tried my best to group related ideas together to make the demo reel as cohesive as possible. All the architectural visualizations are collected in one place, as are most of the non-photorealistic renderings. This grouping really helped keep the reel coherent.
A lot of time was also spent tweaking the cut to fit the beat of the music. I find it unintuitive that the default playback behavior can let the video lag far behind the audio; this is resolved by setting the playback sync mode to ‘AV-sync’ in the timeline.
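For scripters, the same setting is exposed on the scene:

```python
import bpy

# Equivalent to picking 'AV-sync' from the Timeline's playback options:
# frames are dropped as needed so the video keeps pace with the audio
bpy.context.scene.sync_mode = 'AUDIO_SYNC'
```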
By the way, syncing video to the beat is a lot easier when you can see the waveform of the audio track.
On the VSE
The VSE in Blender 2.68a got the job done. But one can always hope for more, right? There is a GSoC project on the VSE this year, so maybe we’ll see some improvements. Here are my top 3 requests:
- Improve playback and rendering performance. I understand all the decoding, scaling and filtering is done on the CPU right now, which is severely limiting. GPU decoding and processing should significantly improve the VSE’s performance.
- A library/asset system for video clips would be nice.
- A more robust proxy system for video strips. Having to manually create proxies is tedious. Also, could proxy creation be made a non-blocking operation that runs in the background? (A scripted stopgap is sketched below.)
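Until then, the manual work can at least be scripted. A possible stopgap with the 2.6x API, enabling 25% proxies on every movie strip and rebuilding them in one go (still a blocking operation, and it may need to be run from a sequencer area):

```python
import bpy

scene = bpy.context.scene
for strip in scene.sequence_editor.sequences_all:
    if strip.type == 'MOVIE':
        strip.use_proxy = True        # enable proxies for this strip
        strip.proxy.build_25 = True   # build the 25% size version

# Rebuild all enabled proxies; this still blocks the UI while it runs
bpy.ops.sequencer.rebuild_proxy()
```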
So, after watching the reel, what do you think? Is Blender going places?
An Ode to Early Adopters
How is it that an expensive luxury car with all the cutting-edge features is often less reliable than an entry-level commuter car? Surely a $120k Range Rover is better built than a $20k Toyota RAV4?
It makes sense if you consider just how many additional features there are in a luxury vehicle. Does your commuter car have in-seat massage? Doors that close themselves? A built-in WiFi hotspot? Not to mention a gazillion sensors that make the car steer itself, brake itself and park itself. I am not making this up. See for yourself. Whether the additional features make for a better value proposition is moot. The point is, more stuff equates to more stuff that can fail. Especially when the stuff is as experimental as this.
Early adopters of technology have all experienced this. We pay a huge premium for the newest, shiniest gadgets. Not only do our wallets suffer; more often than not, the first iteration of something is unrefined, unreliable, and depreciates quickly as the mass market warms up to the technology.
I bought a 4K TV last month: a 50-inch LCD with a staggering 3840x2160 resolution. 4K is far from mainstream just yet, as there is virtually no UltraHD content available apart from computer output. But I wanted one, so here we are.
As with most early adopters, my experience with the 4K display has hardly been pleasant. I had expected that. I had to flash the firmware, twice. It was near impossible to get working under Linux without some serious technical know-how. And I had to wade through 80 pages of forum posts just to find out how to adjust the backlight brightness (it’s hidden under a secret factory diagnostic menu). But never mind all that, because I have an 8-megapixel display and you don’t.
All jokes aside, being an early adopter is not about bragging rights. It’s about pushing technology forward by voting with one’s wallet. Early adopters live uncomfortable lives; they often have to deal with half-finished products while paying a huge premium for them. But this shouldn’t be a reason to avoid new technology: without early adopters’ support, nothing would ever reach the mainstream.
When DSLR video recording first came out on the 5D Mark II, indie filmmakers went nuts over it. The first short film, Reverie, with its creamy bokeh taking up the whole screen, convinced the world of what a digital SLR could do. Never mind the heavily compressed codec; never mind the lack of manual control; never mind that it didn’t even record at industry-standard frame rates. The status quo had been challenged, and the indie filmmaking scene was never the same afterwards.
When the first iPhone came out, the people who bought one on day one did not do it because it was cheap, nor because it was a proven technology. Keep in mind that the first iPhone didn’t even have an App Store; imagine living with just the dozen built-in apps today! They bought it because they were early adopters: they saw an emerging technology and made the decision to support it. Dollar voting.
Going back to the topic of cars, we are at the beginning of another adoption lifecycle: that of electric cars. But if everyone decided against buying the current generation because it’s too slow, too limited, or too expensive, how will the companies ever collect enough capital to make it better?
The world needs more early adopters.
Thoughts on the Blender Game Engine
A while ago, the Blender Foundation posted a roadmap for the future of Blender and that of the game engine. The gist seems to be in favor of a more unified codebase between Blender and the game engine. This move is meant to bring some of the best features from both sides together, and alleviate the current stagnant state of the game engine.
A lot of panicked GE faithful got uncomfortable with what some perceived as the annexation of the GE, or worse, its complete abandonment. Lots of ideas were thrown around.
Having relied on the GE for nearly half a decade to make a living, I felt compelled to chime in.
Firstly, let’s be clear: no one said anything about removing the GE. Sharing a codebase is a great thing: it means the viewport can benefit from the performance and shading capabilities of the GE, and the GE can benefit from the constant improvement of the Blender kernel. This merge will give the GE more attention, not less. Ton later clarified his intention in another email.
Secondly, we are way past the ‘make an offline scene, press P, et voila!’ game-creation pipeline. Developing any game asset takes careful planning, execution, and a very specific set of tools. Expecting a scene made for offline rendering to work in a game as-is is simply not reasonable. Yes, there are tools such as Valve’s Source Filmmaker and Unreal’s Matinee, but those are designed as realtime tools, while Blender is still an offline-centric authoring tool.
With the advance of Cycles and non-polygonal rendering (volumes, smoke, particles, hair), the feature gap between Blender and its game engine is widening. As we lose more and more of the interoperability between Blender and the GE, the styles of game development and offline work also diverge. But since that’s not how games are made today anyway, this interoperability cannot be seen as a serious feature, even if it worked.
By now, hopefully you are a little convinced that getting the game engine to support all of Blender’s features is tough. In fact, as developer Mitchell Stokes explained, from a software perspective the game engine is very much separate from the rest of Blender. This is probably why the developers aren’t as interested in the game engine: it’s another entity altogether.
Which brings us to the proposed solution: bring the GE into Blender.
This will accomplish exactly one thing: as mentioned above, a unified codebase will make development far easier and remove duplicate work.
As a user, the advantages of a more interactive Blender are limitless. Bret Victor gives insight into how a simulation-based workflow can benefit animators. A game-engine-powered Blender doesn’t just mean realtime lighting and shading; it will hopefully one day give animators the ability to blend procedurally driven animation with classical hand-tuned animation. We already have Bullet rigid-body physics integration; now imagine ragdoll physics, event-based animation, rule-based crowd simulation, and directed animation.
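As a taste of what’s already there, enabling Bullet rigid-body physics on an object is a one-liner in recent builds. A minimal sketch, assuming the default scene (2.66+ API):

```python
import bpy

# Make the default cube an active Bullet rigid body
bpy.context.scene.objects.active = bpy.data.objects["Cube"]
bpy.ops.rigidbody.object_add(type='ACTIVE')
```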
This is where things get fun: when art-making moves away from slider tweaking, towards being a performance.
Seriously, watch Bret Victor’s Stop Drawing Dead Fish talk: