Elckerlyc is a BML compliant behavior realizer for generating
multimodal verbal and nonverbal behavior for Virtual Humans (VHs). It
is designed specifically for continuous (as opposed to turn-based)
interaction with tight temporal coordination between the behavior of a
VH and its interaction partners. Animation in Elckerlyc is generated
using a mix of the precise temporal and spatial control offered
by procedural motion and the naturalness of physical simulation.
Keywords: Elckerlyc, BML Realizer, BMLT, continuous interaction
A Java Web Start demo of Elckerlyc can be downloaded here.
The jar file (not runnable; unzip it to obtain extra BML scripts and the
procedural animation XMLs) can be downloaded here.
This demo requires a Windows computer with a recent version of Java and a reasonably up-to-date video card.
Elckerlyc is released under the GPLv3 license; contact Herwin
van Welbergen for a copy of the source code.
Elckerlyc uses the Behavior Markup
Language (BML) for the specification and synchronization of behavior.
We widened the interpretation of BML's
synchronization mechanism to allow synchronization to time
predictions provided by an 'anticipator', and we added physical
controllers, procedural animations and transition animations as BML
behaviors. These extensions are documented in our BML Twente (BMLT)
specification.
Currently no BML realizer, including Elckerlyc, fully supports the core
(i.e., minimal) BML specification. We are working towards supporting the draft 1.0
standard. Here we list our progress.
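For illustration, a minimal BML script that couples a gesture stroke to a point in the speech. The ids are invented, and exact element and attribute spellings vary between BML drafts:

```xml
<bml id="bml1">
  <speech id="speech1">
    <text>Welcome! My name is <sync id="s1"/> Elckerlyc.</text>
  </speech>
  <!-- the gesture stroke is aligned with sync point s1 in the speech -->
  <gesture id="gesture1" type="BEAT" stroke="speech1:s1"/>
</bml>
```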
 H. van Welbergen, A. Nijholt, D. Reidsma and J. Zwiers Presenting
in Virtual Worlds: Towards an Architecture for a 3D Presenter
explaining 2D-Presented Information, IEEE Intelligent Systems,
21(5):47-99, ISSN 1541-1672, 2006
||Speech through MaryTTS, the Microsoft Speech API, or a custom
text synthesizer (e.g. text balloons). Lip sync using morph targets.
Point to static or dynamic targets using the left or right hand. Full
beat and conduit gestures. Support for several lexicalized gestures.
No support for stroke repetition or preparation/retraction skipping.
Specialized procedural pointing motion unit. Beat and conduit
gestures based on the corresponding Greta gestures.
Lexicalized gestures can be flexibly linked to procedural animations
or physical controllers using Elckerlyc's gesture binding.
Rotation is fully supported; orientation is not.
Rotation is based on SmartBody's head nods and head shakes.
||Support for FACS (AU, side, amount) and lexicalized expressions.
For dynamics, we use the onset-apex-offset trajectory. Other types are
not supported. Direction is also not supported, because it is
irrelevant for lexicalized expressions and already implied in the
action unit for FACS.
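The onset-apex-offset dynamics can be sketched as a piecewise-linear intensity envelope over canonical time; the default onset/offset fractions below are illustrative, not Elckerlyc's actual values:

```python
def onset_apex_offset(t, onset=0.2, offset=0.2, amount=1.0):
    """Intensity envelope over canonical time t in [0,1]: ramp up during
    the onset, hold the apex, ramp back down during the offset."""
    if t < onset:                      # onset: linear ramp up
        return amount * t / onset
    if t > 1.0 - offset:               # offset: linear ramp down
        return amount * (1.0 - t) / offset
    return amount                      # apex: hold the full amount
```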
MPEG-4 control points; FACS-to-MPEG-4 conversion; Plutchik's emotion
circle to MPEG-4; lexicalized expressions are locations on
Plutchik's emotion circle, a set of FACS values, or a combination
of one or more hand-designed morph targets.
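A sketch of how a location on Plutchik's emotion circle could be mapped to animation parameters by blending the two nearest anchor emotions. The anchor ordering and the per-emotion parameter vectors below are illustrative placeholders, not Elckerlyc's actual tables:

```python
# Eight basic emotions spaced every 45 degrees on the circle (the ordering
# is an assumption); each maps to a parameter vector (dummy values here).
ANCHORS = ["joy", "trust", "fear", "surprise",
           "sadness", "disgust", "anger", "anticipation"]
PARAMS = {name: [float(i)] for i, name in enumerate(ANCHORS)}

def expression(angle_deg, intensity):
    """Blend the two anchor emotions adjacent to angle_deg, scaled by intensity."""
    step = 360.0 / len(ANCHORS)
    a = angle_deg % 360.0
    i = int(a // step)
    frac = (a - i * step) / step
    lo, hi = PARAMS[ANCHORS[i]], PARAMS[ANCHORS[(i + 1) % len(ANCHORS)]]
    return [intensity * ((1 - frac) * x + frac * y) for x, y in zip(lo, hi)]
```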
type=AT, modality neck and/or eyes; offsets are supported, for both static
and dynamic targets.
Biomechanical gaze model [1,4,5].
||Physical balance controller on the lower body.
||The wait behavior is implemented; event handling is not yet supported.
||Wait is implemented as a simple no-op behavior.
||Synchronize is implemented; before and after are not.
||Support for bmlt, msapi, ssml and marytts description levels; non-recognized description levels are ignored.
Bjorn Hartmann, Maurizio Mancini, and Catherine
Pelachaud. Implementing Expressive Gesture Synthesis for Embodied
Conversational Agents, in Gesture in Human-Computer Interaction and Simulation,
volume 3881 of Lecture Notes in Computer Science, pp. 188-199.
 M. Thiebaux, A. N. Marshall, S. Marsella, and M. Kallmann.
Smartbody: Behavior realization for embodied conversational agents. In
Autonomous Agents and Multiagent Systems, pages 151-158, 2008.
 Tweed, D. (1997). Three-dimensional model of the human eye-head
saccadic system. The Journal of Neurophysiology, 77 (2), pp. 654-
 Tweed, D., & Vilis, T. (1992). Listing's Law for Gaze-Directing
Head Movements. In A. Berthoz,
W. Graf, & P. Vidal (Eds.), The Head-Neck Sensory-Motor System
(pp. 387-391). New York:
Oxford University Press.
Wayne L. Wooten and Jessica K. Hodgins. Simulating
Leaping, Tumbling, Landing, and Balancing Humans. In Proceedings of the
International Conference on Robotics and Automation, pp. 656-662. IEEE,
Zhang et al. Dynamic Facial Expression Analysis and Synthesis with
MPEG-4 Facial Animation Parameters, 2008.
 Raouzaiou, A., Tsapatsoulis, N., Karpouzis, K., Kollias, S.:
Parameterized facial expression synthesis based on MPEG-4. Eurasip
Journal on Applied Signal Processing 10(2002):1021--1038
Our procedural animation model steers the body using mathematical
expressions of canonical time t (0 is the start of the animation, 1
the end) and various user-assigned parameters, which can be
varied in real time. A procedural animation can be stored in an XML
script that describes how the procedural motion moves the wrists, root
and ankles, or how it rotates other joints.
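A sketch of such a procedural motion unit: each joint is driven by a mathematical expression of canonical time t and user-assigned parameters. The class, names and example formula are illustrative, not Elckerlyc's actual XML format:

```python
import math

class ProcAnimation:
    """Joints driven by expressions of canonical time t (0 = start, 1 = end)."""
    def __init__(self, formulas, params):
        self.formulas = formulas    # joint name -> f(t, params)
        self.params = dict(params)  # may be changed in real time

    def evaluate(self, time, duration):
        """Map playback time onto canonical time and evaluate every joint."""
        t = min(max(time / duration, 0.0), 1.0)
        return {joint: f(t, self.params) for joint, f in self.formulas.items()}

# Example: a nodding head (pitch in radians) with adjustable amplitude/repeats.
nod = ProcAnimation(
    {"head_pitch": lambda t, p:
        p["amplitude"] * math.sin(2.0 * math.pi * p["repeats"] * t)},
    {"amplitude": 0.3, "repeats": 2})
```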
Demonstration videos of the realization of different BML scripts.
Animation is realized using our mixed kinematic/physical animation
system. We simultaneously use kinematic motion and physical
simulation, applied to different regions of the human body. A torque
feedback method couples the kinematic motion to the physical
simulation.
The movement path of the wrists in this 4-beat conducting gesture
is defined by a Hermite spline that can be controlled using the parameter
a; a larger value of a leads to a wider arm path.
Five key points facilitate synchronization to the beats.
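The widening effect of the parameter a can be sketched with a cubic Hermite segment whose sideways tangents are scaled by a; the key-point positions and tangent magnitudes below are made up for illustration:

```python
def hermite(t, p0, p1, m0, m1):
    """Cubic Hermite interpolation between p0 and p1 with tangents m0, m1."""
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
            + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1)

def wrist_path(t, a=1.0):
    """One segment of a wrist path: the sideways component bulges out and
    back with tangents scaled by a, so a larger a gives a wider arc."""
    x = hermite(t, 0.0, 0.0, a, -a)     # sideways: out and back
    y = hermite(t, 1.2, 0.9, 0.0, 0.0)  # vertical: eases downward
    return x, y
```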
Mixing procedurally generated conducting arm gestures (as used
in our virtual conductor) with a physical balance controller on the
lower body. The wire frame on the right is a visualization of the
physical model of the conductor's lower body. The torque generated by
the arms is calculated using inverse dynamics and applied to the
upper body. Note how large accelerations of the arms (when they are at
their lowest point) affect the movement in the knees.
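For a single rotational degree of freedom, this coupling can be sketched as follows: the angular acceleration of the kinematically animated arm is estimated by finite differences, inverse dynamics turns it into a torque, and the reaction torque drives the physically simulated trunk. The one-DoF model and all numbers are illustrative:

```python
def angular_acceleration(q_prev, q_now, q_next, dt):
    """Second-order finite difference of a sampled kinematic joint trajectory."""
    return (q_next - 2.0 * q_now + q_prev) / (dt * dt)

def arm_drive_torque(inertia, ang_acc):
    """Inverse dynamics for one rotational DoF: tau = I * alpha."""
    return inertia * ang_acc

def apply_reaction(trunk_ang_vel, trunk_inertia, arm_torque, dt):
    """The reaction (-tau) of the arm torque acts on the simulated trunk."""
    return trunk_ang_vel - arm_torque / trunk_inertia * dt
```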
As the conducting amplitude is lowered, the physical balancing motions become correspondingly smaller.
Most conductors conduct the score with their right
hand and use
the left hand for expressive conducting gestures.
When the left hand is not used, it hangs down
loosely. Here, the
right hand is steered with our procedural conducting gesture, while
the left hand is added to the physically steered part of the
conductor. The lower body is controlled with a physical balance
controller, the left arm is controlled with a pose controller.
The pose controller is a simple PD-controller that holds
the arm in place. Note the left and right sway of the upper body,
caused by the sideways movement of the right arm.
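A minimal sketch of such a pose controller; the gains and the single-DoF Euler integration are illustrative, not Elckerlyc's actual values:

```python
def pd_torque(q, qdot, q_desired, kp=40.0, kd=8.0):
    """PD control: a spring toward the desired angle, damped by velocity."""
    return kp * (q_desired - q) - kd * qdot

# Tiny Euler simulation of a 1-DoF "arm" settling toward q_desired = 0.
q, qdot, inertia, dt = 1.0, 0.0, 1.0, 0.005
for _ in range(2000):                            # simulate 10 seconds
    qdot += pd_torque(q, qdot, 0.0) / inertia * dt
    q += qdot * dt
```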
The conductor can switch from two-handed conducting to the one-handed
conducting motion shown above. Such a switch entails changing physical
representations. The conductor is first physically steered on the
lower body and trunk (visualized on the right). After the switch,
physical simulation steers the lower body, trunk and left arm
(visualized on the left). The physical representation of the left arm
is initialized with the position, rotation, velocity and angular
velocity of the kinematic motion before the switch. Note the subtle
pendular swing of the left arm and how the simple PD-controller
loosely steers it to the desired position. To conduct with both hands
again, a transition motion unit is used to enable kinematic control,
smoothly connecting the physical start pose and velocity with the
desired kinematic pose and velocity in the conducting gesture.
Greta is an architecture for an embodied conversational agent. She is able to
communicate using a rich palette of verbal and nonverbal behaviours.
Greta can talk and simultaneously show facial expressions, gestures,
gaze, and head movements. Greta's gestures are executed solely on the
arms and head, the lower body is moved slightly with Perlin noise. In
this demo we show how our system can enhance Greta's physical realism.
On the left: our system (physical balance model on the lower body,
loosely hanging left arm steered by a PD-controller, torque generated
by the right arm is calculated using inverse dynamics, and applied to
the upper body). On the right: Greta's default movement. Note how the
upper body subtly sways as the arm moves to the right and back,
reducing the 'robotic-ness' of the motion.
SmartBody is a modular character animation system (...) that composes
multiple behaviors into a single coherent skeletal animation,
synchronized with spoken audio. Different behaviors on different body
parts are not physically coupled; a gesture is typically performed by
executing arm or head movement on top of an existing idle animation.
In this demo, we enhance the physical realism of a SmartBody nod
motion. On the left: our system (physical balance model on the lower
body, loosely hanging arms steered by PD-controllers, two spine
joints, both steered by a PD-controller; torque generated by the head
is calculated using inverse dynamics and applied to the upper body).
Note how the forces generated by the powerful nod affect motion on the
whole body. On the right: the original SmartBody nod motion.
The conductor conducts a short piece and comments on the performance
of the musicians. This example features switching between three
physical representations, several procedural animations and speech
synchronized with gesture.
This demo demonstrates gaze at static and dynamic targets, and lip sync.
Conducting performance test. Our mixed dynamic system can steer
30 conductors in real time. Each conductor has its own procedural
conducting controller, conducting amplitude and physical balance
controller. The system runs in real time on a desktop computer (2.83
GHz, quad core, 4 GB RAM, NVIDIA GeForce 8800 GTS video card).
Elckerlyc can be configured to execute BML behaviors on a
Nabaztag robot bunny. The demo movie
on the left shows this in action
on the Nabaztag simulator.
A generic Thrift
interface is used to connect Elckerlyc to a render engine. Here we
show Elckerlyc connected to Ogre.
- Implement stroke repetition
- Fully automatic rather than semi-automatic conversion of Greta gestures
- Support for wait and emit
- Demo: have physical events cause emit-s (being hit by a car)
- Release through website
- Physical motion unit(s) for spine postures (hunched, beauty
- Hold-a-can-of-coke motion unit
- Improve balance controller (for example: implement weight
- Mechanisms for the combination of conflicting gestures
- Support for EMBR gestures, by converting them to our ProcAnimation
- Integration with MURML, MURML motion units?
- Specification of compositional gestures: automatic insertion of
transition motion units, reuse of hand shapes for different gestures,
Scheduling and constraint solving
- Support for spine movement
- Support for before and after constraints
- Support for required
- Better/alternative scheduling algorithm(s)
- Runtime tests (faster-than-real-time mockup engines, and/or using
the BML feedback mechanism to build a generic BML test suite that can
be used for multiple realizers)
Tools & editors
- Allow stretching and skewing
- Different speech engines, based upon different TTS software
- Release procedural animation editor
- Release scripting based physical animation editor
- Create Visual BML editing framework (start with constraint
- Physical motion units with kinematic end effector constraints that
can be enabled and disabled at will
(Example: constrain hand position to a bar
while riding the bus,
elbow is still free to move physically,
hand held against hip constraint in posture)
Full BML support
- Physics based, based on/using Simbicon?
- Borrow from SmartBody here or implement Boulic's procedural walking model
Elckerlyc and robotics
- Finalize support for head, posture
|A. Nijholt, D. Reidsma, H. van Welbergen, H.J.A. op den Akker and Z.M. Ruttkay Mutually coordinated anticipatory multimodal interaction, in Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction, A.M. Esposito, N.G. Bourbakis, N. Avouris and I. Hatzilygeroudis (eds), Lecture Notes in Computer Science, volume 5042, Springer Verlag, Berlin, ISBN 978-3-540-70871-1, ISSN 0302-9743, pp. 70-89, 2008 [ BiBTeX ]  |
|D. Reidsma, K.P. Truong, H. van Welbergen, D. Neiberg, S. Pammi, I.A. de Kok and B. van Straalen Continuous Interaction with a Virtual Human, in Proceedings of the 6th International Summer Workshop on Multimodal Interfaces (eNTERFACE’10), A.A. Salah and T. Gevers (eds), University of Amsterdam, Amsterdam, ISBN 978-90-5776-213-0, pp. 24-39, 2010 [ BiBTeX ]  [Official URL] |
|H. van Welbergen, D. Reidsma and J. Zwiers A Demonstration of Continuous Interaction with Elckerlyc, in Proceedings of the Third Workshop on Multimodal Output Generation, MOG 2010, I. van der Sluis, K. Bergmann, C.M.J. van Hooijdonk and M. Theune (eds), CTIT Workshop Proceedings, Centre for Telematics and Information Technology University of Twente, Enschede, ISBN not assigned, ISSN 0929-0672, pp. 51-57, 2010 [ BiBTeX ]  |
|D. Reidsma, H. van Welbergen, R.C. Paul, B.L.A. van de Laar and A. Nijholt Developing Educational and Entertaining Virtual Humans using Elckerlyc, in 9th International Conference on Entertainment Computing (ICEC 2010), H.S. Yang, Rainer Malaka, J. Hoshino and J.H. Han (eds), Lecture Notes in Computer Science, volume 6243, Springer Verlag, London, ISBN 3-642-15398-4, ISSN 0302-9743, pp. 514-517, 2010 [ BiBTeX ]  |
|H. van Welbergen, Y. Xu, M. Thiebaux, W.W. Feng, J. Fu, D. Reidsma and A. Shapiro Demonstrating and Testing the BML Compliance of BML Realizers, in 10th International Conference on Intelligent Virtual Agents, IVA 2011, H.H. Vilhjálmsson, S. Kopp, S. Marsella and K.R. Thorisson (eds), Lecture Notes in Computer Science, volume 6895, Springer Verlag, Berlin, ISBN 978-3-642-23973-1, ISSN 0302-9743, pp. 269-281, 2011 [ BiBTeX ]  |
|G. Solano Méndez and D. Reidsma A BML Based Embodied Conversational Agent for a Personality Detection Program, in International Conference on Intelligent Virtual Agents (IVA 2011), H.H. Vilhjálmsson, S. Kopp, S. Marsella and Kristinn R. Thórisson (eds), Lecture Notes in Computer Science, volume 6895, Springer Verlag, Berlin, ISBN 978-3-642-23973-1, pp. 468-469, 2011 [ BiBTeX ]  |
|D. Reidsma, H. van Welbergen and J. Zwiers Multimodal Plan Representation for Adaptable BML Scheduling, in International Conference on Intelligent Virtual Agents, IVA 2011, H.H. Vilhjálmsson, S. Kopp, S. Marsella and Kristinn R. Thórisson (eds), Lecture Notes in Computer Science, volume 6895, Springer Verlag, Berlin, ISBN 978-3-642-23973-1, ISSN 0302-9743, pp. 296-308, 2011 [ BiBTeX ]  |
|D. Reidsma and H. van Welbergen Elckerlyc in practice - on the integration of a BML Realizer in real applications, in Proceedings of the 4th International ICST Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN 2011), Revised Selected Papers, Antonio Camurri and Cristina Costa (eds), Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, volume 78, Springer Verlag, Berlin, ISBN 978-3-642-30213-8, ISSN 1867-8211, pp. 83-92, 2012 [ BiBTeX ]  |
- GATE [GATE: Games for Advanced Training and Entertainment]
- HUMAINE [Human-Machine Interaction Network on Emotion]
- SEMAINE [Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression]