How to waste money and time when you obviously have too much of both? That would be Eve-Online's "Walking in Stations" project.
Now, in order to comprehend the bullshit CCP likes to wave around and flaunt, I need to do some quoting from the original dev blogs.
Now, today we’ve set our sights on character rendering and lifelike animation. It’s important to realize that in order to create realistic-looking characters, you have to pay great attention to how they will be animated. We have examples of 3D-rendered characters in films and digital media that look amazing when you see a screenshot or a still frame, but once they start moving, they look like zombies or animatronic RealDolls™. There’s actually a term that describes how CG or cartoon humanoid characters tend to look creepy and un-lifelike the closer you get to photorealism. A Japanese roboticist named Masahiro Mori gave it the name “uncanny valley”. If we plan to create close-to-photorealistic characters, we must ensure that their behavior matches the quality of the shading and rendering in order to keep them out of this valley of darkness.
So exactly how do we create lifelike animation? Well, for one, we will use state-of-the-art motion capture. There are nuances in the biomechanics of the human body that only the most experienced and skilled animators are able to express. It takes them days, however, to create what you capture in minutes with mocap. The amount of animation needed for a project like Walking in Stations prohibits us from hiring an army of the world’s most talented animators and having them animate for years until their fingers bleed.
This brings us to an area of computer graphics called dynamic avatar human-to-human interaction. It tries to apply knowledge derived from years of research into human body language to the actions of computer-generated avatars, so that their behavior mimics human behavior without the user or NPC controller micro-managing every little twitch of the body or glance of the eyes.
This is one of the areas that we intend to research and apply to our animation system.
Two years have passed since the dev blog I quoted, and now you can see the result of all that inspired, hyped work. You can see where all their “lifelike animations”, their study of “biomechanics” and their “state-of-the-art motion capture” went.
Enjoy some of the WORST character animations I’ve ever seen. Featuring lobotomized facial expressions, stick-up-the-ass robotic walking animations, and bump-against-the-walls pathfinding worthy of a drunkard. Please don’t insult zombies and animatronics; they move with so much more grace.
But this year, they’re gearing up for even more bullshitting.