New animation software simplifies the process of animating movements for digital characters, such as those found in big-budget animated movies.
“We want to make it quick and easy to create animations—without compromising on quality,” says lead author Loïc Ciccone, a doctoral student at the Computer Graphics Laboratory at ETH Zurich.
Today’s professional animation methods—like keyframing, which artists in the entertainment industry use—offer a very high level of precision. However, characters’ temporal and spatial transformations have to be animated separately, which makes the process complicated and unintuitive.
Ciccone addresses this problem by integrating a specially developed tool known as “MoCurves” into his software.
Each MoCurve represents the movement of an animated item, such as the lifting and lowering of a foot, and is created when an artist sketches the movement with the mouse. The movement can then be decelerated or accelerated by extending or contracting the curve at specific points, so the motion cycle can be tested in real time and adjusted where necessary.
If the artist wants the character to lift its hand as well as its foot, they create an additional MoCurve for the second movement that they can then adjust and test independently of the first movement.
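The MoCurves implementation itself is not described in detail here, but the core retiming idea, stretching or contracting part of a motion curve's time axis to slow down or speed up a movement, can be sketched roughly as follows (all names and the keyframe representation are illustrative assumptions, not the authors' actual API):

```python
# Hypothetical sketch: a motion curve as (time, value) keyframes,
# e.g. foot height over time. Not the actual MoCurves code.

def stretch_segment(curve, t0, t1, factor):
    """Slow down (factor > 1) or speed up (factor < 1) the portion of
    the curve between t0 and t1 by scaling its time axis; keyframes
    after t1 shift by the added (or removed) duration."""
    added = (t1 - t0) * (factor - 1)  # extra duration introduced
    out = []
    for t, v in curve:
        if t <= t0:
            out.append((t, v))            # before the edit: unchanged
        elif t <= t1:
            out.append((t0 + (t - t0) * factor, v))  # stretched region
        else:
            out.append((t + added, v))    # after the edit: shifted
    return out

# A foot lifts over 0.5 s and lowers again by 1.0 s.
foot_lift = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
# Make the lift take twice as long; the lowering shifts later.
slowed = stretch_segment(foot_lift, 0.0, 0.5, 2.0)
```

Because each movement has its own curve, an edit like this leaves other MoCurves (say, a hand lift) untouched, which is what lets the artist tune each motion independently.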
The scientists have already successfully tested the software: after a 15-minute introduction, five inexperienced test subjects were able to create various motion sequences within an hour. The research team also received positive feedback from professional artists, who were particularly impressed by the increased speed with which they could create animations.
According to Ciccone, the new software will not only benefit the industry: “Our easy-to-use software gives anyone and everyone the ability to tell animated stories.”
The research appears online in the ACM Digital Library. Additional researchers contributing to this work are from ETH Zurich and Disney Research.
Source: Giulia Adagazza for ETH Zurich