If you just want to see the new site, go to mikepan.com
It’s been more than 5 years since my portfolio’s last redesign, which looked like this:
Okay, I still like the old design, but there are quite a few things that desperately need improvement:
The design looks diminutive on high-DPI screens, and even worse on mobile devices in portrait view.
Images are low resolution, averaging around 900x500.
High-key design takes away from visual content.
No sense of time or project. Just a collection of stuff.
Rigid horizontal layout.
With all that in mind, the new site has the following improvements:
Content first. Big projects are featured.
Not only does each project have its own page, but all the images are at least 1920x1080. This ensures everything looks great on a retina screen.
The background is now a dim black, further putting the content forward.
Should look as good on a 3.5-inch screen as on a 30-inch screen.
Using CSS3, I was able to make the site pretty responsive. Google Chrome’s device emulation feature is a godsend for testing how the site will look on different devices.
Hand-crafted HTML5, CSS3, Jekyll backend
Semantic HTML, CSS3 goodies, it’s all there.
My previous host wasn't the most reliable, nor the easiest to use, so I decided to go with something far more established, and free: GitHub. They can host any static content, and updating the site is as easy as a git push.
Things I Learned
A pixel isn't really a pixel
Remember when a pixel was just that, a pixel? Now a design has to target everything from 72 DPI computer screens to 300 DPI smartphones. Combine that with the idea of viewports and varying aspect ratios, and it makes for some pretty interesting layout challenges.
When blue isn’t really blue
This wasn't too much of a problem for me, but I had to make sure all the images I uploaded were encoded in the proper colorspace (sRGB).
Other people are smarter than you
Use CSS resets, shivs, and boilerplates, because these things are made by people far smarter than you. They will save you time and give you back precious sleep.
Web development is still a minefield of workarounds, gotchas and vendor-specific hacks.
Sadly, this hasn't really changed much from 5 years ago. Desktop browsers are getting really good, but mobile browsers have their own performance issues and limitations.
How is it that an expensive luxury car with all the cutting-edge features is often less reliable than an entry-level commuter car? Surely a $120k Range Rover is better built than a $20k Toyota Rav4?
It makes sense if you consider just how many additional features there are in a luxury vehicle. Does your commuter car have in-seat massage? Doors that close themselves? A built-in WiFi hotspot? Not to mention a gazillion sensors that make the car steer itself, brake itself and park itself. I am not making stuff up. See for yourself. Whether the additional features make a better value proposition is moot. The point is, more stuff equates to more stuff that can fail. Especially when that stuff is as experimental as this.
Early adopters of technology have all experienced this. We pay a huge premium for the newest, shiniest gadgets. Not only do our wallets suffer; more often than not, the first iteration of something is unrefined, unreliable, and depreciates quickly as the mass market warms up to the technology.
I bought a 4K TV last month. A 50-inch LCD display with a staggering 3840x2160 resolution. 4K is far from mainstream just yet, as there is virtually no UltraHD content available apart from computer output. But I wanted one, so here we are.
As with most early adopters, my experience with the 4K display has hardly been pleasant. I had expected that. I had to flash the firmware, twice. It was near impossible to get it working under Linux without some serious technical know-how. And I had to wade through 80 pages of forum posts just to find out how to adjust the backlight brightness (it's under a secret factory diagnostic menu). But never mind all that, because I have an 8-megapixel display and you don't.
All jokes aside, being an early adopter is not about bragging rights. It's about pushing the technology forward by voting with one's wallet. Early adopters live uncomfortable lives; they often have to deal with half-finished products while paying a huge premium for them. But this shouldn't be a reason to avoid new technology; without early adopters' support, nothing will ever reach the mainstream.
When DSLR video recording first came out on the 5D Mark II, indie filmmakers went nuts over it. The first short film, Reverie, with its creamy bokeh taking up the whole screen, convinced the world of what a digital SLR could do. Never mind the heavily compressed codec; never mind the lack of manual control; never mind that it didn't even record at industry-standard frame rates. The status quo had been challenged, and the indie filmmaking scene was never the same afterwards.
When the first iPhone came out, the people who bought it on day one did not do so because it was cheap, nor because it was a proven technology. Keep in mind that the first iPhone didn't even have an app store. Imagine living today with just the dozen apps it shipped with! They bought it because they were early adopters; they saw an emerging technology and made the decision to support it. Dollar voting.
Going back to the topic of cars, we are at the beginning of another adoption lifecycle: that of the electric car. But if everyone decided against buying the current generation because it's too slow, too limited, or too expensive, how will the companies ever collect enough capital to make it better?
A while ago, the Blender Foundation posted a roadmap for the future of Blender and that of the game engine. The gist seems to be in favor of a more unified codebase between Blender and the game engine. This move is meant to bring some of the best features from both sides together, and alleviate the current stagnant state of the game engine.
There were a lot of panicked GE faithful who got uncomfortable with what some perceived as the annexation of the GE, or worse, its complete abandonment. Lots of ideas were thrown around.
Having relied on the GE for nearly half a decade to make a living, I felt compelled to chime in.
Firstly, let's be clear: no one said anything about removing the GE. Sharing a codebase is a great thing; it means the viewport can benefit from the performance and shading capabilities of the GE, and the GE can benefit from the constant improvement of the Blender kernel. This merge will give the GE more attention, not less. Ton later clarified his intention in another email.
Secondly, we are way past the 'make an offline scene, press P et voila!' game creation pipeline. Developing any game asset takes careful planning, execution, and a very specific set of tools. Expecting a scene made for offline rendering to work in a game ad hoc is simply not reasonable. Yes, there are tools such as Valve's Source Filmmaker and Unreal Matinee, but these are designed to be realtime tools, while Blender is still an offline-centric authoring tool.
With the advance of Cycles and non-polygonal rendering (volume, smoke, particles, hair), the feature gap between Blender and its game engine is widening. As we lose more and more of that interoperability between Blender and the GE, the style of game development also diverges further from offline work. But since that's not how games are made today anyway, this interoperability cannot be seen as a serious feature, even if it worked.
By now, hopefully you are a little convinced that getting the game engine to support all the Blender features is tough. In fact, as developer Mitchell Stokes explained, from a software perspective the game engine is very much separate from the rest of Blender. This is probably why the developers aren't as interested in the game engine: it's another entity altogether.
Which brings us to the proposed solution: to bring the GE into Blender.
This will accomplish exactly one thing: As mentioned above, a unified codebase will make development far easier, and remove duplicate work.
As a user, the advantages of a more interactive Blender are limitless. Bret Victor gives insight into how a simulation-based workflow can benefit animators. A game-engine-powered Blender doesn't just mean realtime lighting and shading; it will hopefully one day give animators the ability to blend procedurally driven animation with classical hand-tuned animation. We already have Bullet physics rigid body integration; now imagine ragdoll physics, event-based animation, rule-based crowd simulation, and directed animation.
This is where things get fun: when art-making moves away from slider tweaking, towards being a performance.
Seriously, watch Bret Victor’s Stop Drawing Dead Fish talk:
Blender Game Engine improvements to keep an eye on
OpenGL Mobile Compatibility and Android Port (GSoC 2012) - Alexandr Kuznetsov: This project aims to make OpenGL slightly faster and compatible with OpenGL ES in order to port the Blender Player to Android. This would include porting the libs, Ghost, and the build infrastructure. In the end, Blender games can be played on Android.
BGE Converter Improvements and Fixups (GSoC 2012) - Mitchell Stokes: Various improvements such as saving converted data out to disk and allowing for asynchronous level loading. This will make loading of large geometry much faster.
Multitouch Framework (GSoC 2012) - Nicholas Rishel: Extending recognition of multitouch input in SDL for the purpose of navigation, and a framework for future additions. As envisioned, the immediate result would serve as a complement to a stylus. This would prepare Blender for the incoming slate form-factor machines (see the Samsung Series 7 Slate and Asus Eee EP121), and potentially ease ports to Android touch devices.
Adapting the Hive System for the Blender Game Engine (GSoC 2012) - Spencer Alves: This project aims to make a more accessible, efficient, and useful editor for logic systems in the Blender Game Engine by integrating the Hive system project into Blender.