Gestures, movements, speech, and other verbal and non-verbal expressions are all ways of conveying emotions and feelings. Understanding emotions, however, is a field apart:
- What if contradictory signals are detected?
- What if a person is not behaving naturally?
- What about emotions induced by a theatrical performance?
- What about an individual who is not entirely free to express themselves because of pressure or disturbance in the space in which they are immersed?
Let's go a bit further:
- How useful is it to perform emotion processing as an offline, batch activity, other than for documentary purposes?
- To what extent can Augmented Reality enrich the experience?
- What is the optimal boundary between the real and the virtual?
- Is interactive art synonymous with irreducible complexity?
- How close are we to an acceptable blend?
- Is the detection of basic Ekman emotions such as joy, fear, or anger enough when the aesthetic properties of the media, or a specific enjoyment and entertainment experience, are an essential part of the emotion's dynamics? (A minimal illustration follows this list.)
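To make the contradictory-signals and basic-emotions questions concrete, here is a minimal Python sketch of late fusion over per-modality probability distributions on Ekman's six basic emotions. It is not CALLAS code: the function, labels, and threshold are illustrative assumptions.

```python
# Illustrative sketch only (not CALLAS code): each modality emits a
# probability distribution over Ekman's six basic emotions; a simple
# late-fusion step averages them and flags cross-modal contradiction.
EKMAN = ["joy", "fear", "anger", "sadness", "disgust", "surprise"]

def fuse(modalities: dict[str, list[float]], conflict_threshold: float = 0.5):
    """Average per-modality distributions and flag a contradiction when
    the modalities disagree and no fused emotion clearly dominates.
    The 0.5 threshold is an arbitrary, hypothetical choice."""
    n = len(modalities)
    fused = [sum(dist[i] for dist in modalities.values()) / n
             for i in range(len(EKMAN))]
    top_per_modality = {m: EKMAN[dist.index(max(dist))]
                        for m, dist in modalities.items()}
    contradictory = (len(set(top_per_modality.values())) > 1
                     and max(fused) < conflict_threshold)
    return EKMAN[fused.index(max(fused))], fused, contradictory

# Speech analysis suggests joy while facial analysis suggests anger:
label, dist, conflict = fuse({
    "speech": [0.70, 0.05, 0.10, 0.05, 0.05, 0.05],
    "face":   [0.10, 0.05, 0.60, 0.10, 0.05, 0.10],
})
print(label, conflict)  # -> joy True (fused label, contradiction flag)
```

Even this toy fusion makes the limits visible: a single fused label says nothing about theatrical performance, social pressure, or the aesthetic dimension raised above.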
These and similar research questions inspired the CALLAS project and our prototypes, which progressively contribute to advancing the state of the art in multimodal affective computing. The project is also cultivating a community of amateurs, artists, technology providers, and scientists through a dedicated social network (C³: the CALLAS Community Club), intended to accelerate the adoption of CALLAS and to realize its vision.