The CALLAS challenge

Multimodal processing solutions use devices and sensors to capture different signals and blend them together, extending the traditional human-machine interface.
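As an illustration, such blending is often realised as "late fusion": each modality produces its own affect estimate, and the estimates are combined into a single score. The function names, scores, and weights below are assumptions for this sketch, not part of the CALLAS platform:

```python
# Minimal late-fusion sketch: combine per-modality valence scores
# (each in [-1, 1]) into one weighted estimate. All values here are
# hypothetical, chosen only to illustrate the idea.

def fuse_modalities(scores: dict, weights: dict) -> float:
    """Weighted average of per-modality valence scores."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical per-modality estimates for one moment in time.
scores = {"speech": 0.6, "gesture": 0.2, "face": 0.8}
weights = {"speech": 0.5, "gesture": 0.2, "face": 0.3}

valence = fuse_modalities(scores, weights)  # a single fused affect score
```

A real system would of course also align the modalities in time and handle missing or noisy channels; the point here is only that fusion is a step beyond capturing each signal in isolation.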
Gestures, movements, speech, and verbal and non-verbal expressions are all ways to express emotions and feelings. But understanding emotions is a field apart: the questions it raises require moving a step beyond simple signal detection and fusion.
Let's go a bit further:
Reactive systems need to understand an audience's reaction to a plot immediately, so that the storyteller can change the narrative or the user can become part of it.
And more. Further investigations also challenge our project: how to engage artists with IT specialists without complex tools that limit their creativity, and instead provide them with acceptably simple (but not naive) solutions for interpretation in human-computer interaction.

These and similar research questions inspired the CALLAS project and our prototypes, which progressively advance the state of the art in multimodal affective computing. The project is cultivating a community of amateurs, artists, technology providers and scientists in a dedicated social network (C³: the CALLAS Community Club), through which to accelerate the adoption of CALLAS and realise its vision.
Last Updated on Wednesday, 04 August 2010 21:05