AMI

 

Project Name: Augmented Multi-party Interaction

Abbreviation: AMI

Start date: January 1, 2004

End date: December 31, 2006

Project Description:

General goals

"AMI is concerned with new multimodal technologies to support human interaction, in the context of smart meeting rooms and remote meeting assistants. The project aims to enhance the value of multimodal meeting recordings and to make human interaction more effective in real time. These goals will be achieved by developing new tools for computer-supported cooperative work and by designing new ways to search and browse meetings as part of an integrated multimodal group communication, captured from a wide range of devices." [See also the official web page where this text came from].

AMI: an informal introduction

The AMI project is about multimodal, multi-party interaction. IDIAP (Martigny, Switzerland), TNO and the University of Edinburgh host special meeting rooms that contain an extensive setup of microphones and cameras. In these meeting rooms the interactions between people during meetings are recorded. The recordings are annotated for many modalities; examples of these annotations are speech transcription, emotion, gestures, dialogue acts and posture. This information is used for (at least) two purposes: as training material for recognition algorithms and as evidence on the basis of which theoretical models of human multi-party interaction can be developed.
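To make the idea of multimodal annotation concrete, the sketch below shows one possible way to represent annotation segments in Python. The field names and example values are illustrative assumptions, not the actual AMI corpus schema.

from dataclasses import dataclass

# Illustrative only: the field names below are assumptions, not the real AMI annotation schema.
@dataclass
class AnnotationSegment:
    meeting_id: str    # identifier of the recorded meeting
    participant: str   # label of the meeting participant the annotation refers to
    start: float       # start time of the segment in seconds
    end: float         # end time of the segment in seconds
    modality: str      # e.g. "transcription", "dialogue_act", "gesture", "posture"
    label: str         # the annotated value, e.g. the transcribed words or the act type

# A few hypothetical segments, as they might be fed to a recognition algorithm:
segments = [
    AnnotationSegment("meeting_01", "A", 12.3, 14.1, "transcription", "shall we start"),
    AnnotationSegment("meeting_01", "A", 12.3, 14.1, "dialogue_act", "inform"),
    AnnotationSegment("meeting_01", "B", 13.0, 13.8, "gesture", "nod"),
]

Aligning segments of different modalities on a shared time axis is what makes the recordings usable both as training data for recognition algorithms and as evidence for models of multi-party interaction.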

HMI activities within AMI

The AMI project is the context for many different activities of the HMI research group. The following list gives a few examples of ongoing work.
  • The Virtual Meeting Room is a 3D virtual environment displaying the AMI meeting room with avatars as meeting participants. The most obvious use of this environment is to literally replay (fragments of) the recorded meetings. More interesting, though, are applications such as the validation of interaction models and/or recognition results ("does the predicted behaviour look sensible when displayed in the VMR?") and the ability to restructure meetings (e.g. displaying all arguments around a certain decision as one coherent discussion, smoothing over the transitions between the original fragments of the meeting).
  • The Gaze Experiment demonstrates another application of the VMR. Machine learning algorithms are used to predict the current speaker from gaze directions or head orientations. The results are compared to the performance of human judges who get exactly the same input as the machine learning algorithms. To this end, the judges are presented with screenshots of a Virtual Meeting Room setting that shows four neutral avatars with the correct head orientations. For each screenshot they have to say who they think is the current speaker. In this setup, all information that is not available to the machine learning algorithms, such as facial expressions, is withheld from the human judges as well. (A rough sketch of this kind of speaker prediction follows this list.)
  • Annotation tools: HMI has also been involved in the development of better tools for manual annotation of aspects such as dialogue acts, named entities, gaze targets or gestures.
  • Research into Addressing: In the context of dialogue act recognition, work is being done on addressing behaviour. How do people indicate whom they are addressing with a certain utterance? Can the addressee be detected automatically?
  • Pose and Gesture Detection: Within HMI a computer vision platform has been developed. One aspect of this platform involves 3D pose recognition from single-camera images. The output of this pose recognition is now used for the development of gesture recognition in meeting recordings.
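As a rough illustration of the Gaze Experiment described above, the following sketch trains a simple classifier to predict the current speaker from the head orientations of four participants. The synthetic data, the feature layout and the choice of classifier (logistic regression) are assumptions made for the example; they do not reproduce the actual AMI experiment.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: for each time frame, the horizontal head orientation
# (pan angle in radians) of four participants. In the real experiment these
# values would come from the annotated meeting recordings.
n_frames, n_participants = 2000, 4
speaker = rng.integers(0, n_participants, size=n_frames)   # index of the current speaker
seat_angle = np.linspace(-0.6, 0.6, n_participants)        # assumed seat positions

head_pan = np.empty((n_frames, n_participants))
for t in range(n_frames):
    for p in range(n_participants):
        if p == speaker[t]:
            # crude assumption: the speaker looks around freely
            head_pan[t, p] = rng.uniform(-0.6, 0.6)
        else:
            # crude assumption: listeners tend to orient towards the speaker's seat
            head_pan[t, p] = seat_angle[speaker[t]] + rng.normal(scale=0.15)

# Train a classifier that predicts the current speaker from the four pan angles.
X_train, X_test, y_train, y_test = train_test_split(head_pan, speaker, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("speaker-prediction accuracy:", clf.score(X_test, y_test))

The human judges in the experiment are shown the same head orientations, rendered on neutral avatars in the Virtual Meeting Room, so their accuracy can be compared directly with that of the classifier.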

AMI: more information

For more information about the AMI project, follow one of the links below or contact one of the HMI researchers listed below.

Project coordinator

The following HMI member is coordinator of this project:

Dirk Heylen

 

Publications

Here you can find the publications

 

 

