The ACII 2011 Workshops aim to facilitate lively discussion, comparison of methods, and synthesis of results on particular topics of interest to affective computing. ACII 2011 will feature five diverse workshops on topics ranging from affective brain-computer interfaces to social interactions. Workshop papers will be published by Springer as part of the main conference proceedings. The workshops will be held on the first day of the main conference, Sunday, October 9th, 2011.
In 2009 the focus was on affective and cognitive state estimation alike; in 2011 we hope to focus more on the induction, measurement, and use of affective states, i.e., emotions and moods.
This workshop is the first of its series at ACII and builds on related workshops and special sessions in other venues: the Simulation of Adaptive Behaviour (SAB) 2006 and Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2007 workshops, the IEEE Computational Intelligence in Games (CIG) 2008 special session on player satisfaction, the special sessions on Emotion in Games at IEEE CIG 2010 and VS Games 2011, the Networking Session on Research and Development on Serious Games during the ICT Event 2010, and tutorials on 'affective computing in game design' at Gameon-NA 2008 and at DigiPen (2009).
The purpose of this workshop is to engage the machine learning and affective computing communities towards solving the most pressing problems relating to understanding and modeling affect. We welcome the participation of researchers from diverse fields, including signal processing and pattern recognition, statistical machine learning, human-computer interaction, human-robot interaction, robotics, conversational agents, experimental psychology, and decision making.
The Audio/Visual Emotion Challenge and Workshop (AVEC 2011) will be the first of its kind: a competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual, and/or audiovisual emotion analysis, in which all participants compete under strictly the same conditions. The motivation is to provide a common benchmark test set for multimodal information processing and to bring together the audio and video emotion recognition communities: to compare the relative merits of the two approaches to emotion recognition under well-defined, strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can deal with naturalistic behavior in large volumes of unsegmented, non-prototypical, and non-preselected data, as this is exactly the type of data that both multimedia retrieval and human-machine/human-robot communication interfaces have to face in the real world. We are encouraged in our proposal by the great success of and strong community response to the Challenge on Audio-based Emotion Recognition, which some of us initiated and organized at INTERSPEECH 2009 and 2010, and by the purely vision-based Facial Expression Recognition Challenge (FERA 2011), which some of us initiated and organized for IEEE FG 2011 as an equivalent for the video analysis community.
General Workshop Chairs
General questions - please email Björn Schuller and Ginevra Castellano.
Questions specific to a particular workshop - please email the chairs of that workshop.