October 3, 2005
Location: Trento, Italy
Theme/Topic: Multimodal Interactions
Multimodal systems demand innovative approaches to information selection, extraction and presentation (fission), software architectures, and interaction techniques and devices. Accordingly, various rules, guidelines and strategies have been proposed for deciding how to present information in multimodal interfaces so as to make the best use of multimodal options. Yet many cognitive aspects of the user’s interaction with information are overlooked, owing to the absence of formal cognitive models that can capture and map such interactions. Many of those who build multimodal systems are unaware of the formal cognitive techniques that can deepen our understanding of how the representation of information works, how it supports reasoning, how it activates the user’s perceptual channels in an optimal way, and so on. The application of formal theories of cognition to the design, implementation and evaluation of multimodal interface technology is therefore needed more than ever, for there is a growing need to formally understand the cognitive processes that underlie human–machine multimodal communication. Such understanding will help in predicting users’ usability preferences, the optimal versus non-optimal aspects of multimodal information processing, and so on.
Moreover, attempts to use multiple media to represent information in automatically generated environments have so far led to new problems. For example, there is no unified meta-model that formalizes cross-media references. Engineering concepts from natural language processing (e.g. cohesion, anaphora, referring expressions) could allow a useful transfer of ideas into the design of multimodal system output; however, the discourse of an artificial multimodal system requires an explicit representation of the syntax and semantics of visual, auditory and textual discourse in order to conform to the optimal processing requirements of human cognitive channels of perception. In this respect, among the questions the workshop will focus on are: (i) Are the existing models of linguistic reference transferable to the context of multimodal reference (e.g. in automatically generated multimedia systems)? (ii) Are the processes of cognitive referring and reference resolution in natural languages akin to those in artificial multimodal systems? The question of how to determine cognitive relevance in the processes of information representation in multimodal systems by building on referring models leaves much to be desired. In sum, the task of bringing “intelligence” into the technology of multimodal interfaces is channeled through a relevant, useful and productive combination of “cognitive engineering” and “linguistic engineering”.
Co-chair: Noureddine Elouazizi (University of Leiden)
Co-chair: Yulia Bachvarova (University of Twente)
Co-chair: Anton Nijholt (University of Twente)
|Paper submission|July 22, 2005|
|Notification|July 29, 2005|
|Submission of final paper|August 15, 2005|
For any questions related to the conference and submissions, contact