
Technology potential: affective computing

Why, how and when?

Dr. Kostas Karpouzis
Image, video and multimedia systems lab

National Technical University of Athens

[email protected]

Defining Affective Computing

Picard (1997) defines affective computing as ‘computing that relates to, arises from, or deliberately influences emotion or other affective phenomena’
A circular definition!

Merriam-Webster’s entry for ‘affect’: ‘a set of observable manifestations of a subjectively experienced emotion’
Keywords for CS people: ‘observable manifestations’, ‘subjective experience’, ‘emotion’

Beyond emotions

Emotion recognition and synthesis have been the focus of many FP5, FP6 and FP7 projects
Starting from ERMIS (emotion-aware agents) and Humaine (network of excellence) to Callas (emotion in arts and entertainment)

Main focus: produce emotion-aware machines

OK, we can recognize (some) emotions…
…but what can we do about it?
Emotional episodes are scarce in front of a computer
Even then, it’s difficult to reason about what caused the emotion

Affective cues

Alternatively, we can look for observable manifestations which provide cues about the user’s subjective experience
A smile may indicate the successful completion of a transaction or retrieval of what the user looked for
Instead of a cryptic “retry” button or asking the user to verify results (worse!)

People frown to indicate displeasure or difficulty reading, nod to agree, shrug their shoulders when indifferent, etc.
If only computers could sense that…

Sensing affect

…but they can!

We can recognize:
Facial features and cues
Head pose / eye gaze (to estimate user attention)
Hand gestures (usually fixed vocabulary, signs)
Directions & commands (usually fixed vocabulary)
Anger in speech (useful in call centres)

In real time and robustly
Check results from EU projects: Humaine, Callas, Feelix Growing, Agent-Dysl, etc.
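As a rough illustration of what such recognition modules might hand to an application, here is a minimal Python sketch of reporting detected cues with confidence values; the AffectiveCue class, the modality and label names, and the example numbers are illustrative assumptions, not the API or output of any of the projects above.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AffectiveCue:
    """One observable manifestation reported by a recognition module."""
    modality: str      # e.g. "face", "head_pose", "gesture", "speech"
    label: str         # e.g. "smile", "frown", "looking_away", "anger"
    confidence: float  # 0.0 .. 1.0, since recognisers are rarely certain

def strongest_cue(cues: List[AffectiveCue], modality: str) -> Optional[AffectiveCue]:
    """Return the most confident cue for one modality, if any was detected."""
    candidates = [c for c in cues if c.modality == modality]
    return max(candidates, key=lambda c: c.confidence, default=None)

# What the analysis of a single frame / utterance might report (made-up values)
frame_cues = [
    AffectiveCue("face", "frown", 0.72),
    AffectiveCue("head_pose", "leaning_in", 0.64),
    AffectiveCue("speech", "anger", 0.15),
]
print(strongest_cue(frame_cues, "face"))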

Affective interaction

So, computers can sense affective cues
Let’s put them to work!

Users cannot read text off the screen and frown/approach screen? Redraw text with larger font!

Call centre user is angry? Redirect to human operator!

Users not familiar with/cannot use mouse/keyboard? Spoken commands/hand gestures are another option!

Users not comfortable with on-screen text? Use virtual characters and speech synthesis!
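A minimal sketch of how rules like the ones above could be wired up, assuming a hypothetical ui object that exposes the corresponding adaptations; the cue labels, the confidence threshold and the method names are placeholders, not an existing toolkit API.

def adapt_interface(cue_label, confidence, ui):
    """Map a detected affective cue to one of the adaptations listed above."""
    if confidence < 0.6:                       # ignore weak, probably noisy detections
        return
    if cue_label in ("frown", "leaning_towards_screen"):
        ui.redraw_text_with_larger_font()      # user struggles to read the text
    elif cue_label == "anger_in_speech":
        ui.redirect_to_human_operator()        # call-centre scenario
    elif cue_label == "cannot_use_mouse_or_keyboard":
        ui.enable_speech_and_gesture_input()
    elif cue_label == "uncomfortable_with_text":
        ui.show_virtual_character_with_speech_synthesis()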

The case of the Agent-Dysl project

Children with dyslexia experience problems in reading off a computer screen
Common errors: skipping words, changing word or syllable sequence; they are also easily distracted/frustrated

Screen-reading software which:
Helps them read in the correct order by highlighting words and syllables
Checks and monitors their progress
Looks for signs of distraction or frustration

URL: http://www.agent-dysl.eu

The case of the Agent-Dysl project

User leans towards the screen? Font size increased

User looks away? Highlighting stops
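In the same spirit, the two adaptations above could look roughly like the following sketch; the view object, the pose labels and the size limit are hypothetical placeholders rather than Agent-Dysl’s actual implementation.

def update_reading_view(head_pose, view):
    """Adapt the reading aid to the child's posture and attention."""
    if head_pose == "leaning_towards_screen":
        view.font_size = min(view.font_size + 2, 32)   # enlarge the text, up to a limit
    if head_pose == "looking_away":
        view.pause_highlighting()    # stop word/syllable cues until attention returns
    else:
        view.resume_highlighting()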

The case of the Agent-Dysl project

Highlighting and font adjustment example

Virtual character

Affect-aware design and development

When would this be available?

Imagine a Self-Service Terminal which:
Listens to spoken commands
Is operated with gestures/touch/joystick
Checks for signs of frustration/disappointment in facial expressions and speech prosody
Reads responses aloud, instead of text
Uses on-screen virtual characters to increase user-friendliness
Uses virtual characters to render sign languages besides text

We have the right tools to build them – now!
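As a sketch only, such a terminal’s main loop might combine the listed modalities along these lines; every recogniser and output object here (speech, gestures, affect, screen, tts, avatar) is a hypothetical stand-in, not a component of an existing system.

def terminal_loop(speech, gestures, affect, screen, tts, avatar):
    """Multimodal self-service loop: commands in, spoken/signed responses out."""
    while True:
        command = speech.next_command() or gestures.next_command()
        if command:
            result = screen.execute(command)
            tts.speak(result.text)        # read the response aloud, not just display it
            avatar.present(result)        # virtual character, including sign language rendering
        if affect.frustration_detected(): # from facial expressions and speech prosody
            avatar.offer_help()           # intervene before the user gives up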