Projects & Teams

Here are this year’s projects and teams for eNTERFACE’16:

A smell communication interface for affective systems
Adrian David Cheok (Imagineering Institute, Malaysia / City University London)
Emma Yann Zhang (Imagineering Lab, City University London)

Stephen Barrass
Maarten Lamers
Mahmut Gökhan Turgut
Emre Saraçoğlu

This project aims to expand the applications of digital smell technology to affective systems. We are building a digitally controllable smell communication interface to be integrated into affective robots and communication systems. First, we are designing an emotional robot that can use scents to complement visual and audio signals to communicate its emotional states to human partners. Second, the smell interface will also be integrated into a mobile kissing device, which allows users to kiss and smell each other using their mobile phones. A set of scents and pheromones will be selected and mapped to the basic emotions. We will examine the various design considerations required to make a digitally controllable smell interface. Finally, we will evaluate the effectiveness and impact of a smell interface in human-robot interaction and technology-mediated human-human communication.
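As a concrete illustration of the emotion-to-scent mapping, here is a minimal Python sketch. The scent choices, channel numbers, and the emit_scent helper are hypothetical placeholders, not the project’s actual palette or hardware API.

```python
# Illustrative sketch: mapping basic emotions to scent-emitter channels.
# Channel numbers and scent choices are hypothetical placeholders.

BASIC_EMOTION_SCENTS = {
    "happiness": {"scent": "citrus",   "channel": 0},
    "sadness":   {"scent": "lavender", "channel": 1},
    "anger":     {"scent": "pepper",   "channel": 2},
    "fear":      {"scent": "pine",     "channel": 3},
    "surprise":  {"scent": "mint",     "channel": 4},
    "disgust":   {"scent": "vinegar",  "channel": 5},
}

def emit_scent(emotion: str, duration_s: float = 2.0) -> None:
    """Activate the emitter channel mapped to the given emotion."""
    entry = BASIC_EMOTION_SCENTS[emotion]
    # Stand-in for the actual serial/GPIO command sent to the hardware.
    print(f"channel {entry['channel']} ({entry['scent']}): on for {duration_s}s")

emit_scent("happiness")
```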
Full project description
Midway progress presentation

CARAMILLA: Combining Language Learning and Conversation in a Relational Agent
Emer Gilmartin (Trinity College Dublin)
Ketong Su (Trinity College Dublin)
Nick Campbell (Trinity College Dublin)
Benjamin R. Cowan (University College Dublin)

Yong Zhao
Ayşegül Bumin
Alpha Ousmane Diallo
Yuyun Huang
Jaebok Kim

Language learning is essential in this century of migration. CARAMILLA will be an automatic language tutor and conversation partner, and will provide easy access to language learning and practice for learners. In this project, we will build on our existing MILLA language learning system created at eNTERFACE’14. We will improve our automatic pronunciation training, based on automatic speech recognition. We will implement social conversation modules at different levels, and we will create engaging learning activities to practice reading, listening, writing and speaking. We will use gamification to motivate learners, and use multimodal input to infer the learner’s affect. We will tie everything together in an attractive GUI. We are looking for team members to help develop CARAMILLA’s spoken interaction, language learning activities and games, and interface. Participation will develop your skills, give you experience of working on a multinational project, and allow you to help language learners everywhere.
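To make the ASR-based pronunciation-training idea concrete, here is a minimal sketch that scores a learner’s utterance by the edit distance between the expected phoneme sequence and the one recognized by ASR. The phoneme lists and the scoring formula are illustrative assumptions; a real system would obtain phone sequences from a recognizer or forced aligner.

```python
# Illustrative sketch: pronunciation scoring via phoneme edit distance.

def levenshtein(a, b):
    """Edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        curr = [i]
        for j, pb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (pa != pb)))   # substitution
        prev = curr
    return prev[-1]

def pronunciation_score(expected, recognized):
    """1.0 = perfect match, 0.0 = nothing in common."""
    dist = levenshtein(expected, recognized)
    return max(0.0, 1.0 - dist / max(len(expected), 1))

# "hello": expected /h ə l oʊ/ vs. a learner's recognized /h e l o/
print(pronunciation_score(["h", "ə", "l", "oʊ"], ["h", "e", "l", "o"]))  # 0.5
```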
Full project description

Sorry, this project has been cancelled!
Collaborative serious gaming in augmented reality for motor function assessment
Marina Cidota (Delft University of Technology)
Stephan Lukosch (Delft University of Technology)
Richard Li
Joe Wong
Ceren Dikmen

Various diseases affect human motion (e.g. neurovascular diseases, neurodegenerative diseases, and musculoskeletal pain conditions). Currently, clinical methods to assess motor (dys)function are based on subjectively scored and low-resolution clinimetric tests, qualitative video analysis, or cumbersome marker-based motion capturing. The clinical community has expressed a great need for assessment tools that allow for objective, quantitative and cost-effective measures of the factors contributing to motor dysfunction. In this project, our aim is to design and implement an engaging collaborative game for arm/hand motor function assessment. For this purpose, we plan to make use of serious gaming and automatic tracking of the hand and body in augmented reality (AR). Thus, while playing the game, the patient’s movements are recorded for later analysis and objective evaluation of upper extremity motor dysfunction. The game needs to be adaptable to the patient’s physical capabilities at runtime and allow the therapist to interact and engage with the patient during the assessment.
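As an illustration of the recording step, the sketch below logs time-stamped hand-joint positions to a file for offline analysis. The read_hand_joints function is a stand-in for whatever tracking API the AR device exposes; the sampling rate and file format are assumptions.

```python
# Illustrative sketch: logging time-stamped hand-joint positions during a
# game session for later offline analysis.

import json
import random
import time

def read_hand_joints():
    """Placeholder for the AR tracker: return {joint_name: (x, y, z)}."""
    return {"wrist": (random.random(), random.random(), random.random())}

def record_session(path, duration_s=5.0, rate_hz=30.0):
    interval = 1.0 / rate_hz
    start = time.time()
    with open(path, "w") as f:
        while time.time() - start < duration_s:
            sample = {"t": time.time() - start, "joints": read_hand_joints()}
            f.write(json.dumps(sample) + "\n")  # one JSON record per frame
            time.sleep(interval)

record_session("session_001.jsonl", duration_s=1.0)
```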
Full project description

Development of low-cost portable hand exoskeleton for assistive and rehabilitation purposes
Matteo Bianchi (MDM Lab, University of Florence)
Francesco Fanelli (MDM Lab, University of Florence)
Benedetto Allotta (MDM Lab, University of Florence)
Kaspar Althoefer (Centre for Robotics Research, King’s College London)
Alessandro Ridolfi
Stefano Laszlo Capitani
Arianna Cremoni
Nicola Secciani
Matteo Venturi
Lukas Lindenroth
Ali Shafti
Agostino Stilli
Tobias Buetzer

Based on strict requirements of portability, low cost and modularity, an assistive device for hand-opening disabilities, characterized by an innovative mechanism, has been developed and tested by the Mechatronics and Dynamic Modelling Laboratory (MDM Lab) of the University of Florence. This robotic orthosis is designed to be a low-cost, portable hand exoskeleton that assists people with hand-opening disabilities in their everyday lives and during rehabilitation tasks. For these disabilities, the MDM Lab has also developed a methodology that, starting from the geometric characteristics of the patient’s hand, defines the kinematic mechanism that best fits the finger trajectories. The activity at the eNTERFACE’16 workshop will focus on the development of control strategies (force/position), on the implementation of an actuation system based on EMG signals, and on the automation of the scaling procedure to adapt the exoskeleton to different hands.
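As a sketch of what an EMG-based actuation trigger could look like, the following rectifies the raw signal, smooths it with a moving average, and issues an open/close command when the envelope crosses a threshold. The window size, threshold, and sample values are hypothetical; a real controller would be calibrated per user.

```python
# Illustrative sketch: threshold-based open/close trigger from raw EMG.

from collections import deque

class EmgTrigger:
    def __init__(self, window=50, threshold=0.3):
        self.buffer = deque(maxlen=window)  # sliding window of |EMG| samples
        self.threshold = threshold          # hypothetical calibration value

    def update(self, raw_sample: float) -> str:
        self.buffer.append(abs(raw_sample))             # full-wave rectification
        envelope = sum(self.buffer) / len(self.buffer)  # moving-average envelope
        return "OPEN" if envelope > self.threshold else "CLOSE"

trigger = EmgTrigger()
for sample in [0.05, -0.04, 0.6, -0.7, 0.65]:  # fake muscle-activation burst
    command = trigger.update(sample)
print(command)  # "OPEN" once the smoothed envelope crosses the threshold
```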
Full project description
Midway progress presentation

Embodied conversational interfaces for the elderly user
Marieke Peeters (Interactive Intelligence, Delft University of Technology)
Mark Neerincx (Interactive Intelligence, Delft University of Technology / TNO)
Sena Büşra Yengeç
Oğuz Çalık
Vivian Motti
Helena Frijns
Siddharth Mehrotra
Tugce Akkoc

Do you want to get hands-on experience with the modelling of a virtual avatar? Have you always wanted to build your own robot? Now is your chance to do so, by joining eNTERFACE 2016’s project “Embodied Conversational Interfaces for Elderly Users”. In this project, you will collaborate in a team on the design and development of a bi-bodied conversational agent for elderly users, i.e. an agent that has a robotic as well as a virtual body. The project will result in a test set-up enabling experimenters to compare the effects of a virtual body vs. a robotic body in human-agent interaction studies. Both bodies should be likeable and acceptable to the target group, so the team will consult weekly with a representative sample of elderly users. In addition to the bodies, the team will work on a control panel to be used by a “Wizard of Oz”, i.e. a tele-operator who controls the robot/avatar from a distance, thereby simulating the intelligence of the agent.
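A minimal sketch of how such a wizard’s control panel could talk to the robot/avatar is shown below: it sends JSON commands over a plain TCP socket. The host name, port, and command vocabulary are hypothetical placeholders, not the project’s actual protocol.

```python
# Illustrative sketch: the wizard's panel sending newline-delimited JSON
# commands to the robot/avatar over TCP.

import json
import socket

def send_woz_command(action: str, **params) -> None:
    """Send one command, e.g. send_woz_command('say', text='Hello!')."""
    message = json.dumps({"action": action, "params": params}).encode()
    # "robot.local" and port 9000 are placeholders for the real endpoint.
    with socket.create_connection(("robot.local", 9000), timeout=2.0) as sock:
        sock.sendall(message + b"\n")

# The wizard clicks a button; the panel emits the corresponding command.
send_woz_command("say", text="Good morning! Did you sleep well?")
send_woz_command("gesture", name="wave")
```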
Full project description
Midway progress presentation

Heterogeneous Multi-Modal Mixing – Realizing fluent, multi-party, human-robot interaction with a mix of deliberate conversational behavior and bottom-up (semi)autonomous behavior
Dennis Reidsma (Human Media Interaction, University of Twente)
Daniel Davison (Human Media Interaction, University of Twente)
Edwin Dertien (Robotics and Mechatronics, University of Twente)
Binnur Görer
Bob Schadenberg
Jeroen Linssen
Zerrin Yumak

This project aims to work on a novel, state-of-the-art setup for realizing fluent, multi-party, human-robot interaction with a mix of deliberate conversational behavior and bottom-up (semi)autonomous behavior. We approach this from two sides. On the one hand, there is the dialog manager requesting deliberative behavior and setting parameters on ongoing (semi)autonomous behavior; on the other hand, there is the robot control software that needs to translate and mix these deliberative and bottom-up behaviors into consistent and coherent motion. The two need to work well together in order to get behavior that is fluent, naturally varied, and well-integrated while at the same time conforming to the high-level requirements as to content and timing that are set by the dialog manager. We will first look at the visual attention displayed by the robot in a multi-person interaction scenario; once this works, we will extend the project towards other domains of expressive behavior as well. In order to prepare for an evaluation study that is to be carried out as a follow-up to eNTERFACE’16, we will also design an experiment aimed at evaluating the core qualities of the system in a relevant scenario and carry out a pilot run of the study during the eNTERFACE’16 summer period.
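To illustrate the deliberative/bottom-up mix for the visual-attention case, here is a minimal sketch: the dialog manager can pin the robot’s gaze on a specific interlocutor, and otherwise the robot looks at the most salient person according to bottom-up perception. The person identifiers and salience scores are hypothetical.

```python
# Illustrative sketch: mixing a deliberate gaze request from the dialog
# manager with bottom-up salience from perception.

def choose_gaze_target(salience, deliberate_target=None):
    """salience: {person_id: score} from bottom-up cues (voice activity,
    motion, ...). deliberate_target: optional dialog-manager override."""
    if deliberate_target is not None:
        return deliberate_target             # dialog manager takes priority
    return max(salience, key=salience.get)   # otherwise, most salient person

# Two people are present; person B is speaking (high salience).
print(choose_gaze_target({"A": 0.2, "B": 0.8}))                         # -> "B"
# The dialog manager addresses person A directly, overriding salience.
print(choose_gaze_target({"A": 0.2, "B": 0.8}, deliberate_target="A"))  # -> "A"
```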
Full project description
Midway progress presentation
Final presentation

MOVACP: Monitoring Computer Vision Applications in Cloud Platforms
Sidi Ahmed Mahmoudi (University of Mons)
Fabian Lecron (University of Mons)
Mohammed El Adoui
Omar Seddati
Amine Lazouni
Mohammed Hamoudi
Mohammed Amine Belarbi

Nowadays, images and videos are everywhere: they come directly from cameras and mobile devices, or from other people who share their images and videos. They are used to present and illustrate objects in a large number of situations (public areas, airports, hospitals, football games, etc.). This makes image and video processing algorithms an important tool in various computer vision domains such as video surveillance, human behavior understanding, medical imaging and image/video database indexing. The goal of this project is to develop a cloud application that integrates the aforementioned methods using image and video processing libraries (OpenCV, OpenGL, ITK, VTK, etc.). The application takes into account the variety of operating systems and languages (Java, C++, Python, etc.). Experiments will be conducted in different situations such as real-time event detection and localization in crowd videos, motion tracking, medical image segmentation, 3D image reconstruction from 2D radiographs, 3D analysis of bones, etc.
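As a minimal sketch of wrapping one such computer vision method behind a cloud-style HTTP interface, the following exposes OpenCV’s Canny edge detector as an endpoint. Flask and the route name are illustrative choices, not the project’s actual stack.

```python
# Illustrative sketch: a minimal HTTP endpoint wrapping one OpenCV
# operation (Canny edge detection).

import cv2
import numpy as np
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/edges", methods=["POST"])
def edges():
    # Decode the uploaded image bytes into an OpenCV grayscale matrix.
    data = np.frombuffer(request.get_data(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)
    if image is None:
        return Response("not an image", status=400)
    result = cv2.Canny(image, 100, 200)          # run the CV algorithm
    ok, encoded = cv2.imencode(".png", result)   # re-encode for transport
    return Response(encoded.tobytes(), mimetype="image/png")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```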
Full project description
Midway progress presentation

SCE in HMI: Social Communicative Events in Human Machine Interactions
Hüseyin Çakmak (University of Mons)
Kevin El Haddad (University of Mons)
Uğur Ayvaz
Gueorgui Pironkov
Marwan Doumit

Enhancing human-machine interaction by adding emotions to the machine’s way of expression is one of the main topics of current research. This would improve the interaction, relying on the assumption that the more human-like the machine’s behavior is, the more comfortable the interaction with it will be. This project proposes to create an environment-aware emotional avatar. The general objectives of the project are, first, to investigate the use of Deep Neural Networks (DNN) for audiovisual synthesis and recognition of Social Communicative Events (SCE), which include laughter, amusement, surprise and disgust; second, to apply the synthesis and recognition results to a real-time 3D agent with WOZ-like controls; and, if need be, to collect databases for the purpose of the previous objectives.
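To make the DNN recognition objective concrete, here is a small sketch of a feed-forward network classifying a fused audio-visual feature vector into the four SCE classes. The feature dimension (e.g. MFCCs plus facial descriptors), the architecture, and the random stand-in data are assumptions; the real project may use very different inputs and models.

```python
# Illustrative sketch: a small DNN over fused audio-visual features for
# four SCE classes.

import numpy as np
from tensorflow import keras

NUM_FEATURES = 64  # assumed fused audio-visual descriptor length
CLASSES = ["laughter", "amusement", "surprise", "disgust"]

model = keras.Sequential([
    keras.layers.Input(shape=(NUM_FEATURES,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on (feature-vector, label-index) pairs; here random stand-in data.
X = np.random.rand(32, NUM_FEATURES).astype("float32")
y = np.random.randint(len(CLASSES), size=32)
model.fit(X, y, epochs=1, verbose=0)
print(CLASSES[int(np.argmax(model.predict(X[:1], verbose=0)))])
```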
Full project description
Midway progress presentation

The Roberta IRONSIDE project: A dialog-capable humanoid personal assistant in a wheelchair for dependent persons
Hugues Sansen (SHANKAA, France)
Kristiina Jokinen (University of Helsinki, Finland)
Maria Inés Torres (Universidad del País Vasco, Spain)
Atta Badii (University of Reading, UK)
Dijana Petrovska-Delacretaz (Télécom Sud Paris, France)
Stephan Schlögl (MCI Management Center, Austria)
Nick Campbell (Trinity College Dublin, Ireland)
Gérard Chollet (Intelligent Voice, UK)
Graham Wilcock (University of Helsinki, Finland)
Javier Mikel Olaso (University of the Basque Country, Spain)
Neil Glackin (Intelligent Voice, UK)
Nazim Dugan (Intelligent Voice, UK)
Wahbi Nabi
Asier López Zorrilla
Fasih Haider
Ahmed Ratni
Soumaya Ben Souissi
Trung Ngo Trong
Ville Hautamäki
Seth Montenegro
Zoghlami Ale Eddine

This project covers speech processing, dialog management, affective computing, and human behavior analysis in the context of human-robot interaction. It deals with the integration of several software frameworks (some of them may be accessed remotely), so as to produce a system that allows for natural, engaging, and intuitive mixed-initiative human-machine interactions. Expanding upon purely spoken interaction data, the envisioned system should be able to obtain information from the environment. It should employ a vision component (i.e. a camera) to observe user behavior and use this information to infer emotions and interest levels. In doing so, the system should be able to acquire a better understanding of the user’s context and his/her background, and consequently shape dialog tasks, strategies, and information presentation accordingly. The expected result of the project is thus an open-domain conversational system in which both the user and the system are engaged in smooth and satisfying conversation.
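A minimal sketch of how a vision-based estimate could shape the dialog strategy is given below. The estimate_engagement function stands in for the camera pipeline (gaze, facial expression analysis), and the thresholds and strategy names are hypothetical.

```python
# Illustrative sketch: shaping the dialog strategy from a vision-based
# engagement estimate.

def estimate_engagement(frame) -> float:
    """Placeholder for the vision component: 0.0 (bored) .. 1.0 (engaged)."""
    return 0.35

def choose_strategy(engagement: float) -> str:
    if engagement < 0.4:
        return "re-engage"   # change topic, ask an open question
    if engagement < 0.7:
        return "elaborate"   # stay on topic, add detail
    return "deepen"          # follow the user's initiative

engagement = estimate_engagement(frame=None)
print(choose_strategy(engagement))  # -> "re-engage"
```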
Full project description

The Virtual Human Journalist
Michel Valstar (University of Nottingham, UK)
Alexandru Ghitulescu (University of Nottingham, UK)

Elisabeth André (University of Augsburg, DE)
Matthew Aylett (Cereproc, UK)
Laurent Durieu (Cantoche, FR)
Dirk Heylen (University of Twente, NL)
Catherine Pelachaud (Paris Télecom/CNRS, FR)
Björn Schuller (Imperial College London, UK)
Mariët Theune (University of Twente, NL)
Kubra Cengiz
Dominik Schiller
Christine Spencer
Kevin Bowden
Tommy Nilsson
Jelte van Waterschoot
Johannes Wagner
Amr El-Desoky Mousa
Angelo Cafaro
Eduardo De Brito Lima Ferreira Coutinho
Blaise Potard
Tobias Baur

In this project you will have the chance to help construct a Virtual Human Journalist, using the latest Virtual Human technology developed in the ARIA-VALUSPA EU project. The ARIA technology is designed for building agents that have social, emotional, and linguistic skills, and that interface with a structured knowledge base to function as an information retrieval agent. In this workshop, we challenge the participants to turn this information retrieval functionality around and make an agent that can extract information from a human expert and store it in a structured knowledge base for future use. Thus the Virtual Human Journalist is born. We are looking for enthusiastic, motivated students who are keen to learn more about virtual human technology, and who thrive when working in teams. We are particularly interested in students who are able to take a fresh and unexpected look at what may at first sight seem like straightforward problems.
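As a toy illustration of the “journalist” idea, the sketch below asks an expert a fixed list of questions and files the answers into a structured store. The slot names and the flat JSON file are hypothetical simplifications of the structured knowledge base described above; the keyboard stands in for the agent’s spoken exchange.

```python
# Illustrative sketch: a slot-filling interview loop writing answers to a
# structured store.

import json

INTERVIEW_SLOTS = [
    ("field",      "What is your field of expertise?"),
    ("key_fact",   "What is one thing everyone should know about it?"),
    ("open_issue", "What is the biggest open problem right now?"),
]

def interview(ask):
    """`ask` is any callable turning a question into an answer; in the real
    system it would be the agent's spoken exchange with the expert."""
    return {slot: ask(question) for slot, question in INTERVIEW_SLOTS}

# Stand-in for the spoken dialogue: read answers from the keyboard.
knowledge = interview(lambda q: input(q + " "))
with open("expert_knowledge.json", "w") as f:
    json.dump(knowledge, f, indent=2)
```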
Full project description