Animation deals with the creation of facial and body movements on virtual characters. This movement can be derived from movement models, created by hand, or recorded using motion capture. We are interested in the creation of movement models and the use and adaptation of motion capture data.
Our current research topics include:
Some animation cannot be defined beforehand. For example: walk to a location, point at a target, gaze at a target, jump on a chair. Movement models obtained from biomechanics, behavioral science, etc. can help us execute such animation realistically.
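As a minimal sketch of such a procedural movement model, the snippet below computes the yaw and pitch needed to gaze at (or point toward) a runtime-specified target; the function name and coordinate convention (y up) are illustrative assumptions, not part of an existing system.

```python
import math

def gaze_angles(head_pos, target_pos):
    """Compute yaw and pitch (radians) to orient a head at head_pos
    toward target_pos. Positions are (x, y, z) tuples with y up."""
    dx = target_pos[0] - head_pos[0]
    dy = target_pos[1] - head_pos[1]
    dz = target_pos[2] - head_pos[2]
    yaw = math.atan2(dx, dz)                    # rotation around the vertical axis
    pitch = math.atan2(dy, math.hypot(dx, dz))  # elevation toward the target
    return yaw, pitch
```

A full gaze model would add eye/head/torso coordination and velocity profiles from the biomechanics literature; this only shows the geometric core.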
Movement parameters modify the way an animation is executed in real time. We distinguish parameters that modify the shape of the animation (for example: higher, wider) from parameters that influence the movement effort (velocity profile, timing, use of the whole arm or just the wrist, etc.). Parameters and their values could be derived from the emotional state and the style of a conversational agent.
Parameterization is especially useful for adapting motion-captured animation: motion capture yields very detailed animation, but control over such motion is currently lacking. By extracting movement parameters, we aim to gain more control over such animation, so that it can be modified in real time.
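The distinction between shape and effort parameters can be sketched on a single keyframed channel; the parameter names (`amplitude` for shape, `tempo` for effort) are hypothetical placeholders for whatever parameter set a real system would extract.

```python
def parameterize(keyframes, amplitude=1.0, tempo=1.0):
    """Apply shape and effort parameters to a keyframed animation.
    keyframes: list of (time, value) pairs for one joint channel.
    amplitude scales displacement around the rest value (a shape parameter);
    tempo rescales the timing (an effort parameter)."""
    rest = keyframes[0][1]
    return [(t / tempo, rest + amplitude * (v - rest)) for t, v in keyframes]
```

For example, `parameterize(curve, amplitude=1.5, tempo=2.0)` yields a wider gesture executed twice as fast, without touching the captured data itself.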
Several animations/functions can claim the body at the same time. For example, a virtual human can point and make a beat gesture concurrently. An animation planner should be able to resolve such animation conflicts. Possible conflict-solving strategies are:
- Skipping the gesture with the lowest priority
- Executing one of the animations in another way (for example: point using gaze, make the beat gesture with the hand)
- Combining animations (for example: make a combined point+beat gesture with the hand)
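The first strategy above (skipping the lowest-priority gesture) can be sketched as a simple planner pass; the request format is an assumption made for illustration, and redirecting or merging gestures would additionally require gesture-specific knowledge.

```python
def resolve_conflicts(requests):
    """Resolve body-part conflicts among animation requests by priority.
    requests: list of dicts with 'name', 'priority', and 'parts' (a set of
    body parts the animation claims). Higher-priority requests claim parts
    first; a request whose parts are already claimed is skipped."""
    scheduled, claimed = [], set()
    for req in sorted(requests, key=lambda r: -r['priority']):
        if claimed.isdisjoint(req['parts']):
            scheduled.append(req['name'])
            claimed |= req['parts']
    return scheduled
```

So a point and a beat gesture that both claim the right arm are serialized by priority, while a concurrent gaze on the head channel passes through untouched.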
When two gestures are to be concatenated, we want to move gracefully from one animation to the next.
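One common way to achieve such a graceful transition is to crossfade from the end pose of the first animation to the start pose of the next; this per-channel linear blend is a simplifying assumption (production systems would interpolate rotations as quaternions), but it illustrates the idea.

```python
def blend(pose_a, pose_b, alpha):
    """Linearly interpolate two poses (dicts of joint name -> angle) for a
    transition between two concatenated animations; alpha runs from 0
    (entirely pose_a) to 1 (entirely pose_b)."""
    return {joint: (1 - alpha) * pose_a[joint] + alpha * pose_b[joint]
            for joint in pose_a}
```

Stepping `alpha` over a short transition window each frame produces the in-between poses of the crossfade.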
Humans often use body movement in coordination with behavior on other channels. For example, while dancing, we align our movements to the beat of the music and while gesturing, we align our movements to our speech (or at times, our speech to our movements). To achieve this in an animation system, we need to be able to stretch or skew existing animation in a realistic way. Stretching can be achieved by slowing the animation down, holding (e.g. pre and post stroke holds in gesture) or repetition (applause is stretched by adding more clapping movements). Skewing can be achieved by speeding up, skipping movement phases (e.g. skip preparation or retraction while gesturing) or even by interrupting the movement.
Another interesting challenge is the real-time planning of multimodal behavior, given only the behavior on the different channels and the time alignment between them. In an interactive system, replanning of behavior is necessary and should be done in real time.
Precise animation of an articulated human body requires computational resources. Moreover, distance and other visibility conditions (light, fog) can make subtle gestures meaningless, as they are not seen or noticed well enough. In real life, people take these factors into account, for example by waving exaggeratedly at somebody far away.
An animation LOD (level of detail) mechanism can both save computational resources and generate the appropriate animation in relation to the location of the viewer.
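A minimal sketch of such an LOD selector, combining viewer distance with a visibility factor; the distance thresholds and the three levels are illustrative assumptions, not calibrated values.

```python
def animation_lod(distance, visibility=1.0):
    """Pick an animation level of detail from viewer distance (in meters)
    and a visibility factor in (0, 1], where fog or low light reduce
    visibility and thus push the character to a coarser level sooner."""
    effective = distance / visibility
    if effective < 5.0:
        return 'full'      # full skeleton, including fingers and face
    if effective < 20.0:
        return 'reduced'   # arms and torso only; exaggerate gestures
    return 'minimal'       # coarse whole-body motion, or none at all
```

The same mechanism can drive the choice of gesture, e.g. substituting an exaggerated wave for a subtle hand movement at the `reduced` level.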