Project Name: ARIA-VALUSPA
July 1, 2015 – June 30, 2018
The ARIA-VALUSPA project will create a new framework for the easy creation of Affective Retrieval of Information Assistants (ARIA agents) capable of holding multi-modal social interactions in challenging and unexpected situations. The system will generate search queries and return the requested information by interacting with users through virtual characters. These virtual humans will be able to sustain an interaction with a user for some time, reacting appropriately to the user's verbal and nonverbal behaviour while presenting the requested information and refining search results. Both verbal and non-verbal components of human communication will be captured from audio and video input. A sophisticated dialogue management system will decide how to respond to the user's input, be it a spoken sentence, a head nod, or a smile. The ARIA will render the chosen response using specially designed speech synthesisers that produce emotionally coloured speech and a fully expressive 3D face. Back-channelling to indicate that the ARIA has understood the user, or returning a smile, are but a few of the many ways in which it will employ emotionally coloured social signals to improve communication.
The following HMI member(s) coordinate this project:
The project's publications can be found here.