How is it that an expensive luxury car with all the cutting-edge features is often less reliable than an entry-level commuter car? Surely a $120k Range Rover is better built than a $20k Toyota RAV4?
It makes sense if you consider just how many additional features there are in a luxury vehicle. Does your commuter car have in-seat massage? Doors that close themselves? A built-in Wi-Fi hotspot? Not to mention a gazillion sensors that let the car steer itself, brake itself, and park itself. I am not making this up. See for yourself. Whether the additional features make for a better value proposition is beside the point. The point is, more stuff means more stuff that can fail. Especially when the stuff is as experimental as this.
Early adopters of technology have all experienced this. We pay a huge premium for the newest, shiniest gadgets. Not only do our wallets suffer; more often than not, the first iteration of something is unrefined, unreliable, and depreciates quickly as the mass market warms up to the technology.
I bought a 4K TV last month. A 50-inch LCD with a staggering 3840x2160 resolution. 4K is far from mainstream just yet, as there is virtually no UltraHD content available apart from computer output. But I wanted one, so here we are.
As with most early adopters, my experience with the 4K display has been hardly pleasant. I had expected that. I had to flash the firmware, twice. It was near impossible to get it working under Linux without some serious technical know-how. And I had to wade through 80 pages of forum posts just to find out how to adjust the backlight brightness (it's buried in a factory diagnostic menu). But never mind all that, because I have a display with over eight million pixels and you don't.
All jokes aside, being an early adopter is not about bragging rights. It's about pushing technology forward by voting with one's wallet. Early adopters live uncomfortable lives: they often have to deal with half-finished products while paying a huge premium for them. But this shouldn't be a reason to avoid new technology; without early adopters' support, nothing would ever reach the mainstream.
When DSLR video recording first arrived on the 5D Mark II, indie filmmakers went nuts over it. The short film Reverie, with its creamy bokeh filling the whole screen, showed the world what a digital SLR could do. Never mind the heavily compressed codec; never mind the lack of manual control; never mind that it didn't even record at industry-standard frame rates. The status quo had been challenged, and the indie filmmaking scene has never been the same since.
When the first iPhone came out, the people who bought one on day one did not do it because it was cheap, nor because it was a proven technology. Keep in mind that the first iPhone didn't even have an App Store. Imagine living with just the dozen apps it shipped with today! They bought it because they were early adopters: they saw an emerging technology and made the decision to support it. Dollar voting.
Going back to the topic of cars, we are at the beginning of another adoption lifecycle: that of the electric car. But if everyone decided against buying the current generation because it's too slow, too limited, or too expensive, how would the companies ever accumulate enough capital to make it better?
A while ago, the Blender Foundation posted a roadmap for the future of Blender and that of the game engine. The gist seems to be in favor of a more unified codebase between Blender and the game engine. This move is meant to bring some of the best features from both sides together, and alleviate the current stagnant state of the game engine.
There were a lot of panicked GE faithful who grew uncomfortable with what some perceived as the annexation of the GE, or worse, its complete abandonment. Lots of ideas were thrown around.
Having relied on the GE for nearly half a decade to make a living, I felt compelled to chime in.
Firstly, let's be clear: no one said anything about removing the GE. Sharing a codebase is a great thing: it means the viewport can benefit from the performance and shading capabilities of the GE, and the GE can benefit from the constant improvement of the Blender kernel. This merge will give the GE more attention, not less. Ton later clarified his intention in another email.
Secondly, we are way past the 'make an offline scene, press P, et voilà!' game-creation pipeline. Developing any game asset takes careful planning, execution, and a very specific set of tools. Expecting a scene made for offline rendering to work in a game as-is is simply not reasonable. Yes, there are tools such as Valve's Source Filmmaker and Unreal's Matinee, but those are designed as realtime tools, while Blender is still an offline-centric authoring tool.
With the advance of Cycles and non-polygonal rendering (volumes, smoke, particles, hair), the feature gap between Blender and its game engine is widening. As we lose more and more of the interoperability between Blender and the GE, the style of game development diverges further from offline work. But since round-tripping offline scenes is not how games are made today anyway, this interoperability cannot be counted as a serious feature, even if it worked.
By now, hopefully you are a little convinced that getting the game engine to support all of Blender's features is tough. In fact, as developer Mitchell Stokes explained, from a software perspective the game engine is very much separate from the rest of Blender. This is probably why the developers aren't as interested in the game engine: it's another entity altogether.
Which brings us to the proposed solution: bring the GE into Blender.
This will accomplish exactly one thing: As mentioned above, a unified codebase will make development far easier, and remove duplicate work.
As a user, the advantages of a more interactive Blender are limitless. Bret Victor gives insight into how a simulation-based workflow can benefit animators. A game-engine-powered Blender doesn't just mean realtime lighting and shading; it will hopefully one day give animators the ability to blend procedurally driven animation with classical hand-tuned animation. We already have Bullet rigid-body physics integration; now imagine ragdoll physics, event-based animation, rule-based crowd simulation, and directed animation.
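To make the idea of blending procedural and hand-tuned animation a little more concrete, here is a minimal, Blender-agnostic sketch. All curves and names here are invented for illustration (this is not Blender or BGE API code): a per-frame weight cross-fades from an animator's keyframed value to a simulated one around an event, such as a ragdoll taking over after an impact.

```python
import math

def keyframed(t):
    # Hypothetical hand-tuned curve: the animator's authored motion.
    return 1.0 - abs(math.sin(t))

def procedural(t, impact_t=2.0):
    # Hypothetical simulation output, e.g. a ragdoll decaying after an impact.
    return math.exp(-(t - impact_t)) if t >= impact_t else 0.0

def blend(t, impact_t=2.0, fade=0.5):
    """Cross-fade from keyframes to simulation around the impact event."""
    # Weight ramps from 0 (pure keyframes) to 1 (pure simulation)
    # over `fade` seconds starting at the impact.
    w = min(1.0, max(0.0, (t - impact_t) / fade))
    return (1.0 - w) * keyframed(t) + w * procedural(t, impact_t)
```

Before the impact the animator is fully in control; afterwards the simulation takes over, with a short window where both contribute. A real implementation would blend whole pose channels rather than a single float, but the principle is the same.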
This is where things get fun: when art-making moves away from slider tweaking and towards being a performance.
Seriously, watch Bret Victor’s Stop Drawing Dead Fish talk:
Blender Game Engine improvements to keep an eye on
OpenGL Mobile Compatibility and Android Port (GSoC 2012) - Alexandr Kuznetsov: This project aims to make OpenGL slightly faster and compatible with OpenGL ES in order to port the Blender Player to Android. It includes porting the libraries, GHOST, and the build infrastructure. In the end, Blender games can be played on Android.
BGE Converter Improvements and Fixups (GSoC 2012) - Mitchell Stokes: Various improvements, such as saving converted data out to disk and allowing asynchronous level loading. This will make loading of large geometry much faster.
Multitouch Framework (GSoC 2012) - Nicholas Rishel: Extending recognition of multitouch input in SDL for the purpose of navigation, plus a framework for future additions. As envisioned, the immediate result would serve as a complement to a stylus. This would prepare Blender for the incoming slate form-factor machines (see the Samsung Series 7 Slate and Asus Eee EP121), and potentially ease ports to Android touch devices.
Adapting the Hive system for the Blender Game Engine (GSoC 2012) - Spencer Alves: This project aims to make a more accessible, efficient, and useful editor for logic systems in the Blender Game Engine by integrating the Hive system project into Blender.
Dalai Felinto has been working on teaching the old Blender Game Engine some new tricks.
In the upcoming version of Blender, the Game Engine can use the native Blender 'text' object type in-game. This means you can reuse the same text object inside the Game Engine without having to fiddle with custom bitmap textures. Furthermore, the text engine is now thoroughly Unicode-aware; with the right font, you can make Blender talk in any language you want.
Just a small update on what I have been up to lately. September turned out to be quite an exciting month! On the 23rd, I was in Cologne for Photokina, where I played around with every camera I could get my hands on. (The Fuji X100's digital viewfinder is astonishingly sharp; the NEX-5 stitches panoramas faster and better than a quad-core PC; the Canon SX30 zooms 35x optically; and the 200-500mm F2.8 lens from Sigma is just ridiculous.)
The following day, I arrived in Nijmegen, the Netherlands, to work with Dalai and Martins, prepping for a three-day event called Cosmic Sensation. We were responsible for making the 3D graphics that would be projected onto the dome by 6 high-powered projectors. It was an intense week, as the three of us Blender artists worked 16-hour days to get everything ready for the show. But in the end it all paid off; people had a blast.
During our stay in Holland, all three of us went to Utrecht for the Sintel premiere on the 27th, where a lot of the big names of the Blender community graced the city. It was quite inspiring talking to Colin, the director of the film; Jan, who single-handedly created the sound and score for the film; and William Reynish, who turned out to be a bigger photo geek than I am.
Anyway, that was it for September. I still have some flying scheduled for October:
While Dalai and I are still working on the 400-page manuscript day and night (haha, no, not really, but it's fun to pretend that we are working hard on it), Amazon already has this soon-to-be-legendary book at a 35% discount. So if you are remotely interested in Blender but short on cash, why not pre-order the book today and be one of the first readers to email us about that spelling mistake on page 274.
The book will cover every BGE topic from graphics to shaders, logic bricks to Python, so there is something for everyone.
And finding a cure for cancer is definitely lvl 80.
My daily reading usually does not include anything from Science or Nature, but this article stood out among all the other academic mumbo-jumbo. In a nutshell, the paper describes software that can potentially solve protein structures faster than any existing computer. How? By taking advantage of the human brain's immense cognitive power. Researchers found that even a casual player (a non-biochemist) can solve complex protein-folding problems much faster than a computer. (Ars has some very good background info explaining the biology side of the paper; take a look if you are unfamiliar with terms like amino acid and hydrophilicity.)
So this game, called FoldIt, is devised to take advantage of people's boredom. A series of incrementally difficult scenarios teaches players the basic mechanics of the game. Then real data is streamed in for players to solve, and the results are sent back for cumulative analysis. So far, the results look wonderfully promising. It turns out our monkey brains can easily outperform a cluster of Intel i7s, in both accuracy and speed.
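To get a feel for why folding is such a hard search problem, here is a toy sketch in the spirit of the classic 2D "HP" lattice model, a standard textbook simplification of protein folding. This is not FoldIt's actual scoring function; it is just an illustration. A protein is a string of Hydrophobic (H) and Polar (P) residues, a candidate fold is a self-avoiding walk on a grid, and the score counts H-H pairs that sit next to each other without being neighbors in the chain. Even in this stripped-down model, finding the optimal fold is NP-hard, which is why brute-force search scales so badly and human spatial intuition can pull ahead.

```python
# Lattice moves: up, down, left, right.
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def fold_positions(moves):
    """Turn a move string like 'RUL' into a list of lattice coordinates."""
    x, y = 0, 0
    positions = [(x, y)]
    for m in moves:
        dx, dy = MOVES[m]
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions

def score(sequence, moves):
    """Count non-bonded H-H contacts; return None if the fold overlaps itself."""
    pos = fold_positions(moves)
    if len(set(pos)) != len(pos):  # walk must be self-avoiding
        return None
    where = {p: i for i, p in enumerate(pos)}
    contacts = 0
    for i, (x, y) in enumerate(pos):
        if sequence[i] != "H":
            continue
        for dx, dy in MOVES.values():
            j = where.get((x + dx, y + dy))
            # j > i + 1 skips chain neighbors and avoids double counting.
            if j is not None and j > i + 1 and sequence[j] == "H":
                contacts += 1
    return contacts
```

A FoldIt player is, conceptually, exploring this space of folds by hand, nudging the chain to maximize favorable contacts while keeping it physically valid, which is exactly the kind of spatial puzzle humans happen to be good at.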
In the US, 200 billion hours are spent in front of a television each year. A total of 500 million hours were spent in Second Life in 2009, and a mere 100 million hours were spent creating the entire Wikipedia. Perhaps we could cure cancer in a week if we all just skipped our weekly quota of Grey's Anatomy?