The ACII 2011 Workshops aim to facilitate lively discussion, comparison of methods, and synthesis of results on topics of particular interest to affective computing. ACII 2011 will feature five diverse workshops, on topics ranging from affective brain-computer interfaces to social interactions. Workshop papers will be published by Springer as part of the main conference proceedings. The workshops will be held on the first day of the main conference, Sunday, October 9th, 2011.

Affective Brain-Computer Interfaces (aBCI 2011)

The second workshop on affective brain-computer interfaces will explore the advantages and limitations of using neurophysiological signals as a modality for the automatic recognition of affective and cognitive states, and the possibilities of using this information about the user's state in innovative and adaptive applications. Whereas the 2009 edition focused on affective and cognitive state estimation alike, in 2011 we hope to focus more on the induction, measurement, and use of affective states, i.e. emotions and moods.


  • Brendan Allison, TU Graz, Austria
  • Stephen Dunne, Starlab Barcelona, Spain
  • Dirk Heylen, Universiteit Twente, The Netherlands
  • Anton Nijholt, Universiteit Twente, The Netherlands

Emotion in Games

Capturing, analyzing and synthesizing player experience, in both traditional screen-based games and augmented- and mixed-reality platforms, has been a challenging area at the crossroads of cognitive science, psychology, artificial intelligence and human-computer interaction. Additional gameplay input modalities such as 3D acceleration (e.g. the Nintendo Wii and smartphones) and image and speech (e.g. Microsoft Kinect) increase both the importance of studying player experience and its complexity.

Sophisticated techniques from artificial and computational intelligence can be used to recognize the affective state of the player, based on multiple modalities of player-game interaction, and to model emotion in non-player characters. Multiple input modalities can also give game platforms a novel means of measuring player satisfaction and engagement during play, without having to resort to post-experience, off-line questionnaires. For instance, players immersed in gameplay will rarely gaze away from the screen, while disappointed or indifferent players will typically show very little response or emotion.

Game adaptation techniques can also be used to maximise the player's experience, thereby closing the affective game loop: for example, the soundtrack can switch to a vivid or dimmer tune to match the player's powerful stance or prospect of defeat, and an injured or frustrated non-player opponent can look down when facing defeat, informing the player of its status much as a human opponent would be expected to. In addition, procedural content generation techniques may be employed, based on the level of user engagement and interest, to dynamically produce new, adaptable and personalised content (e.g. a new level in a platform game that poses enough challenge to players without disappointing them).

This workshop is the first of its series at ACII and builds on related workshops and special sessions in other venues: the Simulation of Adaptive Behaviour (SAB) 2006 and Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2007 workshops; the IEEE Computational Intelligence and Games (CIG) 2008 special session on player satisfaction; the special sessions on Emotion in Games at IEEE CIG 2010 and VS-Games 2011; the Networking Session on Research and Development on Serious Games at the ICT Event 2010; and tutorials on affective computing in game design at GameOn-NA 2008 and at DigiPen (2009).

  • Georgios Yannakakis, IT University, Denmark
  • Ana Paiva, Instituto Superior Técnico/INESC-ID, Portugal
  • Kostas Karpouzis, National Technical University of Athens, Greece
  • Eva Hudlicka, Psychometrix Associates, Inc., USA

Machine Learning for Affective Computing (MLAC)

Affective computing (AC) is a unique discipline that attempts to model affect using one or more modalities, drawing on techniques from many different fields. AC often deals with problems that are known to be highly complex and multi-dimensional, involving different kinds of data (numeric, symbolic, visual, etc.). With the advancement of machine learning techniques, however, many of these problems are becoming more tractable.

The purpose of this workshop is to engage the machine learning and affective computing communities in solving the most pressing problems in understanding and modeling affect. We welcome the participation of researchers from diverse fields, including signal processing and pattern recognition, statistical machine learning, human-computer interaction, human-robot interaction, robotics, conversational agents, experimental psychology, and decision making.


  • M. Ehsan Hoque, MIT, USA
  • Dan McDuff, MIT, USA
  • Louis Philippe Morency, USC, USA
  • Rosalind Picard, MIT, USA

1st International Audio/Visual Emotion Challenge and Workshop (AVEC 2011)

The Audio/Visual Emotion Challenge and Workshop (AVEC 2011) will be the first of its kind: a competition aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, in which all participants compete under strictly the same conditions. The first motivation is to provide a common benchmark test set for multimodal information processing and to bring together the audio and video emotion recognition communities, in order to compare the relative merits of the two approaches to emotion recognition under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can handle naturalistic behavior in large volumes of un-segmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia retrieval and human-machine/human-robot communication interfaces face in the real world.

We are encouraged by the great success of, and high community response to, the audio-based emotion recognition challenges that some of us initiated and organized at INTERSPEECH 2009 and 2010, and by the purely vision-based Facial Expression Recognition and Analysis Challenge (FERA 2011) that some of us initiated and organized for IEEE FG 2011 as an equivalent for the video analysis community.


  • Björn Schuller, TUM, Germany
  • Michel Valstar, Imperial College London, UK
  • Roddy Cowie, Queen’s University Belfast, UK
  • Maja Pantic, Imperial College London, UK

General Workshop Chairs
  • Björn Schuller, TUM, Germany
  • Ginevra Castellano, Queen Mary University of London, UK


For general questions, please email Björn Schuller and Ginevra Castellano.

For questions specific to a particular workshop, please email the chairs of that workshop.