- Feb 2013: For eNTERFACE 2013, we are looking for motivated PhD students who would like to spend a few weeks working in a multi-disciplinary team on the topics "Body-Centric Interactive Play" or "Touching Virtual Agents". eNTERFACE will be held in Lisbon from July 15 until August 9. It's a great experience! More information can be found here or by sending me an email. Join us!
- Jan 2013: For the second time, I've received the "most cited paper award 2012" from the Image and Vision Computing journal for my survey on vision-based human action recognition.
- Dec 2012: A joint special issue for the IVA workshops will appear in JMUI. The call for papers can be found here. The submission deadline is April 7, 2013.
- Sep 2012: Jens Edlund, David Traum, Iwan de Kok and I have organized the workshop on Real-time Conversations with Virtual Agents (RCVA) at IVA 2012. Papers can be downloaded from the website.
- Aug 2012: Hayley Hung (UvA), Janienke Sturm (TU/e) and I organized a workshop on Measuring Behaviour in Open Spaces at Measuring Behavior 2012 in Utrecht on August 30. On the same day, a special session on Technical Support for Analysis of Human Error in Task Performance was also held at Measuring Behavior, organized with Tobias Heffelaar (Noldus) and Jordi Bieger (Vicar Vision).
I serve as a PC member or editor for the following conferences, workshops, and journals:
- IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2013
Website Portland, OR, June 23-28, 2013
- International Conference on Computer Vision (ICCV) 2013
Website Sydney, Australia, December 3-6, 2013
- International Conference on Intelligent Virtual Agents (IVA) 2013
Website Edinburgh, UK, August 29-31, 2013
- International Conference on Affective Computing and Intelligent Interaction (ACII) 2013
Website Geneva, Switzerland, September 2-5, 2013
- International Conference on Distributed Smart Cameras (ICDSC) 2013
Website Palm Springs, CA, October 29 - November 1, 2013
- Workshop on Vision(s) on Deception and Non-Cooperation (VDNC) at FG 2013
Website Shanghai, China, April 22-26, 2013
- Workshop on Multimodal Corpora: Beyond Audio and Video (MMC) at IVA 2013
Website Edinburgh, UK, September 1, 2013
- Guest editor Journal on Multimodal User Interfaces (JMUI) special issue "From Multimodal Analysis to Real-Time Interactions with Virtual Agents"
Website Deadline April 7, 2013
- International Journal of Computer Vision & Signal Processing (IJCVSP)
Website - An open access journal on computer vision and signal processing
Currently, I'm a postdoctoral researcher working on computer vision and human behavior analysis. In 2009, I was a visiting researcher at the Delft University of Technology, working on the PetaMedia project. In 2010, I spent two months at Stanford University. My PhD thesis, "Discriminative Vision-Based Recovery and Recognition of Human Motion", can be found online here.
Human motion analysis
My research into human motion analysis focuses mainly on the use of discriminative approaches. In example-based work, the choice of image descriptor is important, and we investigated how well different silhouette shape descriptors performed on a synthetic data set. Dataset and software are available; please contact me. Published in FG 2006.
For example-based human pose recovery, we performed extensive evaluations on the HumanEva dataset with a variant of HOG as image descriptor. We studied how the recovery accuracy was influenced by different persons, actions, and the number of views. Dataset and software are available; please contact me. Some videos: divx, xvid, mpeg. Published in CVPR-EHuM 2007.
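The example-based approach above can be sketched as a nearest-neighbor lookup: an image descriptor (here a simplified HOG-style histogram, not the exact variant used in the paper) is matched against a database of exemplars with known poses. All function names, cell/bin counts, and the pose format below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def grid_gradient_descriptor(image, cells=4, bins=8):
    """Simplified HOG-style descriptor: per-cell histograms of gradient
    orientations, weighted by gradient magnitude. Illustration only."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientations folded into [0, pi)
    h, w = image.shape
    desc = []
    for i in range(cells):
        for j in range(cells):
            sl = (slice(i * h // cells, (i + 1) * h // cells),
                  slice(j * w // cells, (j + 1) * w // cells))
            hist, _ = np.histogram(ang[sl], bins=bins, range=(0, np.pi),
                                   weights=mag[sl])
            desc.append(hist)
    desc = np.concatenate(desc)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def recover_pose(query_desc, exemplar_descs, exemplar_poses, k=3):
    """Average the poses of the k nearest exemplars in descriptor space."""
    dists = np.linalg.norm(exemplar_descs - query_desc, axis=1)
    nearest = np.argsort(dists)[:k]
    return exemplar_poses[nearest].mean(axis=0)
```

In practice the exemplar database would hold descriptors computed from silhouettes with ground-truth joint positions (e.g., from HumanEva); averaging the top-k poses is the simplest aggregation choice.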
We used a set of synthetically generated body part templates to detect humans and simultaneously estimate their poses in 2D. Since we match at the body part level, but estimate joint locations for the whole body, we can deal with occlusion. Such an approach is feasible when a limited motion domain is considered. Here, we show results on walking and jogging movements. Video: xvid. Published in AMDO 2008.
To recognize human actions, we introduced a framework where pairwise discriminative functions between two actions were used. Common spatial patterns (CSP) was used to maximize the difference in variance between the actions. Such a framework can be learned efficiently, and evaluation of the functions can be done in real-time with a small number of training sequences. We evaluated our approach on the Weizmann human action dataset and obtained competitive results. Published in FG 2008.
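The core of a pairwise CSP function can be sketched as follows: for two action classes, CSP finds spatial filters that maximize the variance of one class while minimizing that of the other, and the log-variances of the filtered signals serve as discriminative features. This is a minimal textbook-style sketch, not the paper's implementation; trial shapes and filter counts are illustrative.

```python
import numpy as np

def csp_filters(X1, X2, n_filters=1):
    """Common spatial patterns for a pair of classes.
    X1, X2: lists of (channels, samples) trials.
    Returns a (2*n_filters, channels) projection matrix whose first rows
    maximize class-2 variance and whose last rows maximize class-1 variance."""
    def avg_cov(trials):
        covs = []
        for X in trials:
            C = X @ X.T
            covs.append(C / np.trace(C))      # trace-normalized covariance
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # whiten the composite covariance: P (C1 + C2) P.T = I
    d, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(d)).T
    # eigenvectors of the whitened C1, eigenvalues in ascending order
    lam, V = np.linalg.eigh(P @ C1 @ P.T)
    W = V.T @ P
    idx = np.concatenate([np.arange(n_filters),
                          np.arange(len(lam) - n_filters, len(lam))])
    return W[idx]

def log_variance_features(W, X):
    """Log of normalized variances of the CSP-projected trial."""
    v = np.var(W @ X, axis=1)
    return np.log(v / v.sum())
```

A pairwise classifier then compares the variance captured by the class-1 filters against that of the class-2 filters; with one function per action pair, evaluation is a handful of matrix products per frame, which is what makes real-time use feasible.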
Computer Vision and Image Understanding published my survey of vision-based human motion analysis in volume 108, issue 1-2. A bibtex file of all references is available for download. In 2010, my survey on vision-based human action recognition appeared in Image and Vision Computing. All references are available in bib format.
Social signals in conversation
Backchannels are a common type of listener response. We investigated algorithms that automatically place backchannels while processing the speech of a speaker, and evaluated these both on a corpus and through human perception experiments. Stimuli, code and results are available; please contact me. Video: xvid. Published in IVA 2011, IVA 2010, Interspeech 2011 and Interspeech 2010.
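A simple rule-based placement strategy in the spirit of pitch-based backchannel predictors (cf. Ward & Tsukahara) can be sketched as: propose a backchannel after a sustained region of low pitch in the speaker's voice, with a refractory period between proposals. The thresholds and frame layout below are illustrative assumptions, not the values used in the publications above.

```python
import numpy as np

def backchannel_opportunities(pitch, frame_ms=10, low_pct=26,
                              min_low_ms=110, refractory_ms=1000):
    """Propose backchannel timings from a per-frame F0 track (Hz, 0 = unvoiced).
    An opportunity fires when the pitch has stayed in the lowest `low_pct`
    percent of the speaker's voiced range for at least `min_low_ms`, and no
    opportunity was proposed in the last `refractory_ms`. Illustrative sketch."""
    pitch = np.asarray(pitch, dtype=float)
    voiced = pitch[pitch > 0]
    if voiced.size == 0:
        return []
    threshold = np.percentile(voiced, low_pct)
    min_low = max(1, min_low_ms // frame_ms)
    refractory = refractory_ms // frame_ms
    opportunities, run, last = [], 0, -10**9
    for t, f0 in enumerate(pitch):
        run = run + 1 if 0 < f0 <= threshold else 0
        if run >= min_low and t - last >= refractory:
            opportunities.append(t)
            last, run = t, 0
    return opportunities
```

Because the rule only looks backwards in time, it can run incrementally on live speech, which is the setting the corpus and perception evaluations address.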
Research has revealed high accuracy in the perception of gaze in dyadic (sender-receiver) situations. Triadic situations differ in that an observer has to report where a sender is looking, rather than judging whether the sender is looking at the observer. We studied the accuracy of gaze perception in these situations. Published in ACM TAP.
Face naming in social networks
Social networks can contain very large numbers of photos, often portraying the users of the network. We focus on naming detected faces in photos in a scalable way by exploiting the social network of the user. Published in FG 2011 and Pattern Recognition.
Perception of affect
The body can display many forms of affect. However, much research that investigates the relation between pose and affect suffers from methodological issues. We looked into the factor of stimulus realism. Stimuli, code and results are available; please contact me. Published in ACII 2007.
Multimodal human-computer interaction
The Virtual Dancer is an interactive application that allows users to dance together with an animated dancer. A camera and dance mat are used to observe the user's moves. Showcase.
The Distributed Virtual Meeting Room supports real-time distributed meetings and can also be used for offline meeting visualization. Additional information such as the current speaker, dominance level and agenda progress can be shown. Showcase.
Within the ICIS-CHIM project, a multimodal system has been developed to facilitate communication between rescue workers at distributed locations.
Tetris-with-your-body. Play the game of Tetris using hand gestures. A camera observes the body and hands, and translates the movements to game actions.
Jump-and-run. A demo where players control a running character in a game using their body movements. By jumping, ducking and moving left and right, obstacles can be avoided and bonuses collected.
An interview with Ellen Giebels, Matthijs Noordzij, Elze Ufkes, Dirk Heylen and me about our joint work on the detection of deceit appeared in the UT-Nieuws. It can be read online on pages 26 and 27.
Image and Vision Computing most cited paper award 2012, see the publisher's note online.