
Multimodal Interactions


Whenever a user can use different modalities when interacting with a computer, we speak of multimodal interaction. Keyboard input and mouse clicks, but also more advanced input modalities such as speech, gaze and gestures, or input coming from different types of sensors (data gloves, head-mounted devices, haptic devices, sensors attached to body parts, etc.), provide a wide range of input modalities that can appear sequentially or in parallel. The computer needs to understand input coming from the different modalities and needs to be able to integrate these inputs in order to get a full understanding of what is going on. In general, a user's utterances are not always complete when interacting with a computer.

Often, utterances need to be disambiguated by looking at their context or at information coming from other modalities. Research in multimodal interaction is concerned with interpreting isolated, sequential and parallel interaction utterances directed at the computer. In particular, it is useful to develop models in which knowledge obtained from different input modalities can be integrated; this also involves knowledge representation and reasoning formalisms.
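To make the integration idea concrete, the following sketch shows one very simple way of combining two input streams: a recognized spoken utterance containing deictic words ("that", "there") is bound to pointing gestures on the basis of temporal proximity. This is only an illustration under simplified assumptions; the event types, field names and time window are hypothetical and do not correspond to a particular HMI system.

from dataclasses import dataclass
from typing import Optional

# Hypothetical event records; a real system would use richer semantic frames.
@dataclass
class SpeechEvent:
    text: str     # recognized utterance, e.g. "put that there"
    start: float  # start time of the utterance in seconds
    end: float    # end time of the utterance in seconds

@dataclass
class PointingEvent:
    target: str   # object or location the gesture points at
    time: float   # moment of the pointing gesture in seconds

def resolve_deictic(speech: SpeechEvent,
                    gestures: list[PointingEvent],
                    window: float = 0.5) -> dict[str, Optional[str]]:
    """Bind each deictic word in the utterance to the pointing gesture
    closest in time, provided it falls within the given time window."""
    bindings: dict[str, Optional[str]] = {}
    words = speech.text.lower().split()
    duration = speech.end - speech.start
    for i, word in enumerate(words):
        if word in ("this", "that", "here", "there"):
            # Crude assumption: words are spread evenly over the utterance.
            word_time = speech.start + duration * (i + 0.5) / len(words)
            nearby = [g for g in gestures if abs(g.time - word_time) <= window]
            best = min(nearby, key=lambda g: abs(g.time - word_time), default=None)
            bindings[word] = best.target if best else None
    return bindings

if __name__ == "__main__":
    speech = SpeechEvent("put that there", start=10.0, end=11.2)
    gestures = [PointingEvent("red block", 10.4), PointingEvent("table corner", 11.0)]
    print(resolve_deictic(speech, gestures))
    # -> {'that': 'red block', 'there': 'table corner'}

Real multimodal fusion has to deal with recognition uncertainty, competing hypotheses and modality-specific timing, so actual systems typically work with scored hypothesis lists rather than single events; the point of the sketch is only that temporal alignment is one source of evidence for integrating modalities.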

In various projects we exploit the combination of different input modalities. In virtual reality research, for instance, a user will use a range of input modalities to communicate with a virtual world, with objects in that world and with agents inhabiting it. Users point to objects, make references to the virtual world in their language use, and show their (lack of) interest through gaze and facial expressions. A similar range of output modalities can be used by the system, especially when the output is generated by an embodied conversational agent that communicates with the user. Multimodal interaction is not restricted to virtual reality applications. As computers become more and more embedded in the environment and sensors are integrated into everyday appliances, all kinds of actions of people in the environment can be detected and interpreted by a system. From a human-computer interaction point of view it is interesting to look at the various multimodal ways people interact with the environment and with each other, and to design systems that are sensitive to what the user wants without having been given explicit commands.

Student projects in this area often combine approaches from virtual reality and ambient intelligence, embodied conversational agents research, and speech and language research.

Members

The following HMI-members are working on Multimodal Interactions:

Former HMI-members:

 

Publications

Here you can find the publications.

 

Showcases

 

Student Information

Students can find information about assignments:
[ final project ] [ traineeship ] [ Capita selecta and Research Topics ] at our MSc webpages.
