For his PhD at Delft University of Technology's Faculty of Mechanical Engineering, Thomas Geijtenbeek created robots that learned how to walk. These were virtual robots — simulations in a computer system — but with realistic muscles, joints and mass that behave in real-life ways.
When you see computer-generated figures move around in movies or computer games, the motion is either hand-generated (using a process not very different from that of animating old Looney Tunes cartoons), or the motion is captured from a human actor. The former method is very time consuming, and limited by the skill and time of the animator. The latter motion-capture method is more common these days, but doesn't lend itself well to non-human characters (like extrapolating a man's motions to that of a giant ape) or situations that are dangerous or impossible for an actor.
Thomas Geijtenbeek's approach is very different: rather than programming the motion of the robots in advance, he created a genetic algorithm that searches the space of possible sequences of virtual muscle movements for ones that achieve a target motion — say, a brisk walking speed. As you can see in the video below, early generations of the algorithm fail miserably. But over thousands of generations, a successful sequence of muscle movements evolves:
Because the simulation includes realistic power and torque for the virtual muscles (even neural delay is incorporated!), the result is a very realistic walking action. Better yet, when he sets the target speed to a different value — say, 10 km/hr for a run — a different gait naturally emerges, rather than just a sped-up walking motion. Even a hopping motion emerges naturally for a kangaroo-shaped robot! (You can see the different gaits in action at around 1:33 in this longer video.)
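To make the idea concrete, here's a minimal sketch of the evolutionary loop described above. Everything here is hypothetical and hugely simplified: `simulate_speed` is a toy stand-in for Geijtenbeek's full physics simulation (real muscles, joints, mass and neural delay), and the controller is just a flat list of numbers rather than a real muscle-activation model. The point is only to show the select-crossover-mutate cycle that drives the search toward a target speed.

```python
import random

def simulate_speed(params):
    # Toy stand-in for the physics simulator: maps a controller's
    # muscle parameters to an achieved walking speed. (Hypothetical —
    # the real simulation evaluates an actual walking gait.)
    return sum(params) / len(params)

def fitness(params, target_speed):
    # Higher is better: penalize distance from the target speed.
    return -abs(simulate_speed(params) - target_speed)

def evolve(target_speed, pop_size=50, n_params=10, generations=200,
           mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    # Generation zero: random controllers — these "fail miserably".
    population = [[rng.uniform(0, 10) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, target_speed), reverse=True)
        survivors = population[:pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):            # random point mutations
                if rng.random() < mutation_rate:
                    child[i] += rng.gauss(0, 1)
            children.append(child)
        population = survivors + children
    return max(population, key=lambda p: fitness(p, target_speed))

# Evolve a controller for a brisk 6 km/hr walk.
best = evolve(target_speed=6.0)
```

Changing `target_speed` here just shifts the numbers the search converges to; in the real system, with genuine muscle physics in the loop, that same change is what lets qualitatively different gaits emerge on their own.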
Now, if we can just combine these genetic algorithms with real-world robots with independent power ... well, let's just hope our robot overlords are friendly!
That's all for this week — we'll be back on Monday. See you then!