Speaker: Michel Valstar (University of Nottingham)
Date: 29th of July 2016, 11:00 – 12:00
Title: The Computational Face – Novel approaches to facial expression analysis in an age of big data
Abstract: In this talk I will present recent advances in computer vision and machine learning made by my team at the University of Nottingham. In particular, I will present work on the following topics:
1) Behaviomedics – a novel area of using affective computing and social signal processing to help diagnose, monitor, and treat medical conditions that alter expressive behaviour, including recent work on automatic depression detection.
2) Facial Point Localisation/Face Alignment – discussing our ground-breaking work on direct displacement-based point detection and incremental continuous cascaded regression, submitted to ECCV 2016.
3) Facial Expression Analysis – our latest facial expression recognition research, including FERA 2015, our ICCV 2015 work on multi-task learning, and dynamic deep learning.
Short bio: Michel Valstar (http://www.cs.nott.ac.uk/~mfv) is an associate professor at the University of Nottingham, and a member of both the Computer Vision and Mixed Reality Labs. He received his master's degree in Electrical Engineering from Delft University of Technology in 2005 and his PhD in computer science from Imperial College London in 2008, and he was a Visiting Researcher at MIT's Media Lab. He works in the fields of computer vision and pattern recognition, where his main interest is the automatic recognition of human behaviour, specialising in the analysis of facial expressions. He is the founder of the Facial Expression Recognition challenges (FERA 2011/2015/2017) and the Audio-Visual Emotion recognition Challenge series (AVEC 2011-2016). He is the coordinator of the EU Horizon 2020 project ARIA-VALUSPA, which will build the next generation of virtual humans, and a recipient of Bill & Melinda Gates Foundation funding to help premature babies survive in the developing world. In 2007 he won the BCS British Machine Intelligence Prize for part of his PhD work. His work has received popular press coverage in, among others, Science Magazine, The Guardian, New Scientist and on BBC Radio. He has published over 50 peer-reviewed papers at venues including PAMI, CVPR, ICCV, SMC-Cybernetics, and Transactions on Affective Computing (h-index 29, >3900 citations).
Speaker: Kristiina Jokinen (University of Helsinki and University of Tartu)
Date: 1st of August 2016, 11:00 – 12:00
Title: Social Engagement via Eye-Gaze in Multimodal Robot Applications
Abstract: In human-human communication, a wide range of multimodal signals is used to provide effective feedback about the partner's interest and understanding. It can be assumed that multimodality plays a similarly important role in the context of "social robotics", where the robot aims to support users in various interactive tasks and serve as an intuitive interface to access and share information.
This talk concerns multimodal interaction between humans and robot agents, and in particular, it discusses eye-gaze and engagement in conversational interactions with social robots. It starts with an overview of eye-tracking techniques and issues in engagement, and then continues to explore the assumption that users engage with a humanoid robot in a similar manner as they engage with human partners, i.e. their interaction with a robot resembles social communication rather than tool manipulation. The hypothesis is explored via eye-tracking experiments, and support is found for the hypothesis that interaction with intelligent agents resembles communication with humans. However, it is also observed that such interactions are socially less binding than those with human partners. The talk will conclude with a discussion of the consequences of the results for HRI in general.
Short bio: Kristiina Jokinen is Adjunct Professor and Project Director at the University of Helsinki, and Visiting Professor at the University of Tartu (Estonia). She received her PhD at UMIST, Manchester, and worked for several years in Japan and in Belgium. She has played a leading role in many national and international collaborative research projects and received several prestigious fellowships. Her current research focuses on human-robot interaction, dialogue systems, multimodality (gesturing, eye-gaze), and applications for less-resourced languages. Together with Graham Wilcock she developed the WikiTalk robot application. She has published widely, including three books, and edited several books and proceedings. She has served on many programme and review committees, and chaired a number of international conferences (recently IWSDS-2016 in Finnish Lapland). She is a Life Member of Clare Hall at the University of Cambridge, Chair of the Board for the JSPS Alumni Club in Finland, and Secretary/Treasurer of
Speaker: Anton Nijholt (University of Twente / Imagineering Institute)
Date: 5th of August, 11:00 – 12:00
Title: Smart Technology for Smart Humor in Playable Cities
Abstract: In smart cities we can expect to witness human behavior that is not necessarily different from human behavior in present-day cities. There will also be demonstrations, flash mobs, urban games and even organized events to provoke the smart city establishment. Smart cities have sensors and actuators that can perhaps be accessed by makers and civic hackers. Smart cities can also offer their data to civic hackers, who may create useful applications for city dwellers. Smart cities will have bugs that can be exploited for fun or appropriation. Humor is an important aspect of our daily activities and experiences. In this talk, we explore how humor can become part of smart and playable cities. We also discuss some types of humor that appear in games and how views of game humor can find analogues in the humor that may appear and can be created in smart and playable cities.
Short bio: Anton Nijholt received his PhD in computer science from the Vrije Universiteit in Amsterdam. He has held positions at various universities, inside and outside the Netherlands. In 1989 he was appointed full professor at the University of Twente in the Netherlands. Presently he is emeritus professor at the University of Twente and Global Research Fellow at the Imagineering Institute, Iskandar, Malaysia. His main research interests are human-computer interaction, with a focus on entertainment computing, humor, and brain-computer interfacing. He has edited various books, most recently on Playful User Interfaces and Brain-Computer Interaction. In 2016 a book on Playable Cities will appear. Nijholt is Chief Editor of Frontiers in Human-Media Interaction and Springer Book Series Editor of Gaming Media and Social Effects and of Computational Social Sciences.
More to come…