Title: Introduction to Deep Learning for Computer Vision
Instructor: Gwenn Englebienne (University of Twente)
Date: Friday 22nd of July 10:00 – 11:45
Location: DesignLab

In this talk, we will take a brief look at what deep learning is: its different forms, how they work, and why deep learning is interesting. We will look at how to develop and use deep neural networks for computer vision tasks with Theano, and at some practical aspects of their use. Finally, we will relate traditional concepts from neural networks and machine learning in general, such as overfitting, non-convex optimisation, regularisation, and concept learning, to deep learning. You can download the slides.
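To give a flavour of the core operation behind convolutional networks for vision, here is a toy 2D convolution (strictly, a cross-correlation, as CNN layers compute it) in plain Python. This is an illustrative sketch, not material from the talk; in practice a framework such as Theano provides an optimised, GPU-backed version of this operation.

```python
def conv2d(image, kernel):
    """'Valid' 2D cross-correlation of a single-channel image.

    image, kernel: lists of lists of numbers (rows of pixels/weights).
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Sum of element-wise products of the kernel and the
            # image patch it currently overlaps.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A small vertical-edge detector applied to an image with a step edge:
image = [[0, 0, 1, 1]] * 4
kernel = [[1, -1]] * 2  # responds where intensity changes horizontally
print(conv2d(image, kernel))  # strongest response at the 0->1 edge
```

A deep network learns the kernel weights from data rather than hand-designing them, stacking many such layers with non-linearities in between.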

Title: Informal Academic English Clinic
Instructor: Emer Gilmartin (Trinity College Dublin)
Date: first session on Friday 22nd of July 2016 15:30 – 17:00

Academic writing and presentations are very challenging for native speakers, and even more so for speakers of English as a foreign language. We are running informal language clinics at eNTERFACE as part of the CARAMILLA project. These sessions will help participants to perfect their written and spoken English. The sessions will be informal, based on participants’ needs, and will use examples of good and bad academic writing and speaking to highlight common errors and to provide strategies for clear and effective communication in English. There will be a strong focus on participants’ own work, and participants will work together on role-playing presentations and editing written work as needed.

Title: Working with the Social Signal Interpretation (SSI) framework
Instructors: Johannes Wagner, Tobias Baur, Dominik Schiller (University of Augsburg)
Date: Monday 25th of July 10:00 – 11:45
Location: DesignLab

The Social Signal Interpretation (SSI) framework offers tools to record, analyse and recognise human behaviour in real-time, such as gestures, facial expressions, head nods, and emotional speech. Following a patch-based design, pipelines are assembled from autonomous components, allowing parallel and synchronised processing of sensor data from multiple input devices. The workshop covers an introduction to the framework architecture and introduces basic design concepts, e.g. how streams from multiple modalities are kept in sync. Using an emotional speech recogniser as an example, we will learn how to set up a recognition pipeline and extract social user behaviour in real-time. In a final step we will extend the system with facial expression detection. Here is a link to the software and slides:
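To illustrate one idea behind keeping multi-modal streams in sync, the sketch below aligns sensor streams with different sample rates onto a common frame clock before fusion. This is a plain-Python toy under assumed names (`align_streams`, the dict layout), not SSI's actual API, which handles this internally with buffered, timestamped streams.

```python
def align_streams(streams, frame_rate, duration):
    """For each output frame, pick the latest sample from each stream.

    streams: dict name -> (sample_rate_hz, list_of_samples)
    Returns a list of dicts, one per output frame.
    """
    frames = []
    n_frames = int(duration * frame_rate)
    for f in range(n_frames):
        t = f / frame_rate  # frame timestamp in seconds
        frame = {}
        for name, (rate, samples) in streams.items():
            # Index of the most recent sample at time t, clamped
            # to the samples actually received so far.
            idx = min(int(t * rate), len(samples) - 1)
            frame[name] = samples[idx]
        frames.append(frame)
    return frames

# Audio at 8 samples/s, video at 2 frames/s, fused at 2 frames/s:
streams = {
    "audio": (8, list(range(8))),      # one second of audio samples
    "video": (2, ["frame0", "frame1"]),
}
print(align_streams(streams, frame_rate=2, duration=1.0))
```

The design point is that downstream recognisers (e.g. the emotional speech recogniser extended with facial expression detection) then see temporally aligned inputs regardless of each sensor's native rate.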