Towards a Standard Markup Language for Embodied Dialogue Acts

Abbreviation: EDAML

Start date: May 11,  2009

End date: May 12,  2009

Location: Budapest


Workshop held in conjunction with AAMAS 2009, 11 or 12 May 2009, Budapest, Hungary

Dirk Heylen, University of Twente
Catherine Pelachaud, CNRS, TELECOM – ParisTech
Roberta Catizone, University of Sheffield
David R. Traum, University of Southern California


Embodied Conversational Agents (ECAs) are virtual agents endowed with human-like communicative capabilities. Over the last few years there has been an increasing collaborative effort across research groups working on ECAs to define a common framework for ECA systems under the name of SAIBA. The framework specifies three main processes. The first, Intent Planning, computes the communicative intents and the emotional state of the agent. The second, Behavior Planning, determines how to convey this high-level information through verbal and nonverbal means. The third, the Behavior Realizer, instantiates the behaviors as acoustic and visual parameters that are sent, respectively, to a speech synthesizer and an animation player. These three modules exchange data: communicative intentions between the first two, and behaviors between the last two. Together with the specification of the three main processes, SAIBA proposes two mark-up languages to encode this flow of data: the Function Markup Language (FML) and the Behavior Markup Language (BML). While considerable work has been done to define BML, FML is still in its infancy. A first workshop at AAMAS 2008 gathered researchers for a broad discussion of the issues surrounding FML, the state of the art in existing systems, and possible ways forward.
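As a rough illustration of the data flow SAIBA envisions, the fragments below sketch what an intent specification and the behaviors derived from it might look like. Both fragments are hypothetical: FML is not yet specified, and the element and attribute names used here (performative, emotion, speech, sync, head, the sync-point reference) are illustrative rather than taken from any published draft.

```xml
<!-- Hypothetical FML fragment: output of the Intent Planner -->
<fml>
  <performative id="p1" type="inform"/>
  <emotion id="e1" type="joy" intensity="0.6"/>
</fml>

<!-- Hypothetical BML fragment: the Behavior Planner's realization of p1 -->
<bml>
  <speech id="s1">
    <text>Good news: the <sync id="tm1"/>meeting is confirmed.</text>
  </speech>
  <!-- a head nod aligned with the sync point inside the speech -->
  <head id="h1" type="nod" start="s1:tm1"/>
</bml>
```

The cross-reference `s1:tm1` illustrates the kind of cross-modal synchrony the workshop questions below are concerned with.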

While the first workshop aimed to define the scope of the information the language should cover, this workshop aims to elaborate on one specific aspect of FML. A major concern in almost all of the papers presented at the first workshop was that of conversational acts, also called speech acts or dialogue acts. In this workshop we will look at the relevance of the taxonomies proposed in the literature and at how they can be used, adapted, and extended for the ECA domain.

Several taxonomies of dialogue act types have been proposed for analyzing human dialogue behavior and as units of interpretation and production in dialogue systems. Examples are meta-locutionary acts (Novick, 1988), conversation acts (Traum & Allen, 1991), the HCRC coding scheme (Carletta et al., 1996), and the Verbmobil coding scheme (Alexandersson et al., 1997). These taxonomies encompass the different functions of dialogue acts, such as information seeking, turn management, and feedback, and have been widely used to annotate corpora. Within the computational linguistics community, a series of meetings of an informal working group called the Discourse Resource Initiative produced a unifying scheme known as DAMSL (Allen & Core, 1997), which has been very influential and has been adapted for many projects. See (Traum, 2000) for a comparison of taxonomies and the issues they raise. More recent efforts, including European projects such as MATE and LIRICS, have extended this work and produced new schemes such as DIT++ (Bunt et al., 2008). These dialogue act taxonomies can be used to further the development of FML. As dialogue act specification is a core component of any specification of communicative intent, one of these schemes could perhaps be adopted or extended for ECAs, or could at least inspire the development of FML. One key difference in coverage is that ECAs communicate through both verbal and nonverbal means, so many of these schemes will need to be extended for use in FML.
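To make the annotation idea concrete, here is a simplified, hypothetical DAMSL-style markup of a two-turn exchange. The tag names only loosely follow DAMSL's forward- and backward-looking functions, and the comment in the second utterance points at exactly what such schemes do not yet capture for ECAs: the nonverbal layer.

```xml
<!-- Hypothetical DAMSL-style annotation; tag and type names are illustrative -->
<dialogue>
  <utterance id="u1" speaker="A" text="Could you close the window?">
    <forward-function type="info-request"/>
    <forward-function type="action-directive"/>
  </utterance>
  <utterance id="u2" speaker="B" text="Sure.">
    <backward-function type="accept" antecedent="u1"/>
    <!-- Missing from classic taxonomies: an ECA would also need to
         specify, e.g., a nod co-occurring with this acceptance -->
  </utterance>
</dialogue>
```

A single utterance can carry several functions at once (here an information request that also directs action), which is one reason multidimensional schemes such as DAMSL and DIT++ are candidates for FML.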

With this workshop we aim to raise the following questions:

* what are the strengths and weaknesses of the dialogue act standards, and what is their potential use in FML?
* in what ways should they be extended?
* in what ways do they miss the mark?
* can they be used for multimodality?
* how can a dialogue act result in the animation of verbal and/or nonverbal behaviours?
* how can and should synchrony between modalities be tied into the dialogue act representation?
* what can we learn from the standardisation effort (the way the process went, the way the standard is being used/adopted/adapted...)?
* how, if at all, should ASR signal and emotion features be represented in dialogue acts?

We invite position papers addressing one or more of the following aspects:

* legacy: how can multimodal dialogue acts be represented?
* desires: how do researchers believe these dialogue acts should be specified in FML?
* expertise: contributions of researchers in cognitive modelling and dialogue

The purpose of this full-day workshop is to bring researchers and developers of embodied conversational characters together with dialogue act specialists to exchange ideas and experiences on the various aspects involved in dialogue act specification for ECAs.

Submissions should be 8 pages maximum, following the AAMAS style. Position papers of 2 pages are also allowed. Submissions should be sent as PDF files to the workshop contact: heylen AT

Important dates:
Submission: Feb 15, 2009
Notification: March 1, 2009
Camera-ready copy: March 15, 2009
