Programme

08.45 Opening
09.00 Invited talk: Fei-Fei Li (Stanford)
Visual Recognition Beyond Simple Actions and Isolated Actors and Objects
10.00 Coffee break
10.30 CVPR preview talk: Talking Pictures: Temporal Grouping and Dialog-Supervised Person Recognition
Timothee Cour, Ben Sapp, Akash Nagle and Ben Taskar
10.55 Door Detection via Signage Context-based Hierarchical Compositional Model
YingLi Tian and Cheng Chen
11.20 Contextual smoothing of image segmentation
Jonathan Letham, Neil Robertson and Barry Connor
11.45 Perspective and Appearance Context for People Surveillance in Open Areas
Giovanni Gualdi, Andrea Prati and Rita Cucchiara
12.10 Lunch break
14.00 Invited talk: Tsuhan Chen (Cornell)
Beyond Face Recognition: Using Social Context to Understand Images of People
15.00 Context-driven Clustering by Multi-class Classification in an Active Learning Framework
Martin Godec, Sabine Sternig, Peter Roth and Horst Bischof
15.25 Coffee break
16.00 CVPR preview talk: Exploiting Hierarchical Context on a Large Database of Object Categories
Myung Jin Choi, Joseph Lim, Antonio Torralba and Alan S. Willsky
16.25 Generative modeling of spatio-temporal traffic sign trajectories
Karla Brkic, Ivan Sikiric, Axel Pinz, Sinisa Segvic and Zoran Kalafatic
16.50 Group discussion
18.00 Closing


Background

The Workshop on Use of Context in Video Processing (UCVP) offers a timely opportunity to exchange recent work on employing contextual information in computer vision, and in particular in video-based event and scene analysis. Several developments have created a common motivation across research disciplines to use context as a key enabler of application-oriented vision: the availability of diverse sensing modalities and recent efforts in multi-modal information fusion, the need for situation-aware and dynamic (active) vision algorithms that employ prior information as context for better inference, and the growing interest in applications that adapt to user context (activity, preferences, history). Contextual information can benefit a video processing system in many ways, including improved robustness, efficient use of sensing and computing resources, dynamic task assignment to different operating modules (such as different cameras), and adaptation to event and user behavior models. Sources of contextual data include user and event models, environmental states and parameters acquired by various sensing methods, logical relationships between objects in physical spaces and in images, consistency between observations across time and views, and previously interpreted observations.


Aims and scope

UCVP aims to address opportunities for incorporating contextual information into algorithm design for single- or multi-camera vision systems, as well as systems in which vision is complemented by other sensing modalities, such as audio, motion, proximity, occupancy, or location sensors, each acting as a source of contextual data. The objective of the workshop is to gather high-quality contributions describing leading-edge research on the use of context in video processing. The workshop further aims to stimulate interaction among the participants through a panel and group discussion.


Topics of interest to the workshop include:

  • Methodology to define relevant sources of context
    • multi-camera networks
    • multi-modal sensing systems
    • long-term observation
    • behavior models
    • spatial or temporal relationships of objects and events
    • interaction of user with objects
    • internet resources as a knowledge base for context extraction
  • User-centric context and representation
    • user behavior model
    • demographic information
    • user’s activity, location, expression, or emotional state
    • stated preferences
    • explicit and implicit interfaces
    • interaction between users
  • Integration of context with visual processing
    • context-driven event interpretation
    • active vision
    • multi-modal activation
    • service provision and switching based on context
    • response to user events and interaction with user
    • detection of abnormal behavior
    • active sensing and task assignment to different sensing modules
    • guided vision based on high-level reasoning
    • user behavior modeling based on observations
    • applications in smart environments
    • human-computer interfaces

The workshop aims to encourage collaboration between researchers in different areas of computer vision and related disciplines. In addition, by introducing topics of emerging applications in smart environments, multi-camera networks, and multi-modal sensing, which offer sources of context, the workshop aims to extend the notion of context-based video processing to include high-level and application-driven information extraction and fusion.


Paper submission

The workshop solicits original and unpublished papers that address a wide range of issues concerning the use of context in video processing. Authors should submit papers not exceeding six (6) pages in total in the format specified at the IEEE CVPR website. Papers must follow the double-blind submission guidelines and be submitted through the conference submission system.

Accepted papers will be presented at the workshop and will appear in the conference proceedings. At least one author of each paper must register for the conference and attend the workshop to present the paper.


Important dates

Paper submission: March 12, 2010
Author notification: April 6, 2010
Camera-ready due: April 13, 2010
Workshop: June 13, 2010


Registration

Please note that registration is required for an accepted paper to be included in the proceedings. Please refer to the CVPR 2010 website for more details.


Organizing team

Hamid Aghajan (Stanford University, USA)
Louis-Philippe Morency (USC, USA)
Anton Nijholt (University of Twente, The Netherlands)
Ming-Hsuan Yang (Univ. of California Merced, USA)
Ronald Poppe (University of Twente, The Netherlands)
Yuri Ivanov (MERL, USA)


Programme committee

Stan Birchfield, Clemson University, USA
Tanzeem Choudhury, Dartmouth College, USA
Bill Christmas, University of Surrey, UK
Maurice Chu, PARC, Palo Alto, USA
David Demirdjian, MIT, USA
Daniel Gatica-Perez, IDIAP, Switzerland
Abhinav Gupta, University of Maryland, USA
Richard Kleihorst, University of Ghent and Vito, Belgium
Daphne Koller, Stanford University, USA
Kevin Murphy, UBC, Canada
Paolo Remagnino, Kingston University, UK
Neil Robertson, Heriot-Watt University, UK
Michael S. Ryoo, ETRI, Korea
Stan Sclaroff, Boston University, USA
Rainer Stiefelhagen, University of Karlsruhe, Germany
Ying Li Tian, CCNY, USA
Antonio Torralba, MIT, USA
Fernando de la Torre, CMU, USA
Chris Wren, Google Inc., USA



Sponsoring

The workshop is sponsored by the European Network of Excellence SSPNet. Some of the talks will be recorded and made available on the project's Virtual Learning Centre website.


