
Avatars versus point-light faces: Movement matching is better without a face

Rachel J. Bennetts¹, Darren Burke², Kevin Brooks³, Jeesun Kim¹, Simon Lucey⁴, Jason Saragih⁴ & Rachel A. Robbins¹

¹ University of Western Sydney  ² Newcastle University  ³ Macquarie University  ⁴ CSIRO
Email: [email protected]

Background

• Characteristic facial movements can be used as an alternative pathway to recognise individuals¹
• Familiar faces are generally easier to match than unfamiliar faces² - but few studies have tested this with moving faces
• To examine movement-based face recognition, it is important to reduce static facial information³
• It is unclear which is better at reducing static cues: facial point-light displays (PLDs) or shape-normalised avatars (but see ⁴)
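The poster does not spell out how the avatars were shape normalised. One common approach - sketched below, with all function and variable names hypothetical - is to subtract the actor's own mean landmark configuration from every frame and add back a grand-average face shape, so frame-to-frame motion survives while identity-specific static shape is removed:

```python
import numpy as np

def shape_normalise(frames, grand_mean):
    """Transfer a landmark sequence's motion onto an average face shape.

    frames:     (T, K, 2) array of K tracked 2-D landmarks over T frames
    grand_mean: (K, 2) average landmark configuration across many faces
    Returns a (T, K, 2) sequence with the same motion but average shape.
    """
    actor_mean = frames.mean(axis=0)           # this actor's static shape
    return grand_mean + (frames - actor_mean)  # keep motion, drop identity
```

The normalised sequence has the grand mean as its average shape, while the frame-to-frame displacements are identical to the original actor's.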

Questions

1. Can movement act as a cue to identity when static facial information is degraded? (Moving vs Static clips)
   • Is performance affected by the image manipulation used? (PLD vs avatar)
2. Does familiarity improve movement-based face matching? (Familiar vs Unfamiliar)
3. Do any of these effects change when participants have a non-degraded image to compare to? (Experiment 1 vs Experiment 2)

Methods

• 2 s clips of 6 familiar (famous) and 6 unfamiliar faces, converted to PLDs and avatars. Sequential same/different task, within subjects
• 2 (Familiar/Unfamiliar) x 2 (PLD/Avatar) x 2 (Dynamic to Dynamic (MOVING)/Static to Static (STATIC)) design (Dynamic/Static and Static/Dynamic conditions were also run, but are not shown)
• Experiment 1: Matching PLD to PLD and avatar to avatar (N = 16 undergrads)
• Experiment 2: Matching video to PLD and video to avatar (N = 33 undergrads)
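The conversion to PLDs is likewise not detailed on the poster. The standard construction renders only the tracked landmark positions as light points on a blank background, discarding all texture and contour cues. A minimal sketch (names hypothetical, assuming per-frame landmark coordinates from a face tracker):

```python
import numpy as np

def render_pld_frame(landmarks, size=(240, 240), radius=2):
    """Render one point-light frame: white dots at the tracked (x, y)
    landmark positions on a black image; all texture cues are discarded."""
    h, w = size
    frame = np.zeros((h, w), dtype=np.uint8)
    for x, y in landmarks:
        x, y = int(round(x)), int(round(y))
        # paint a small square dot, clipped to the image bounds
        frame[max(0, y - radius):min(h, y + radius + 1),
              max(0, x - radius):min(w, x + radius + 1)] = 255
    return frame
```

Rendering every frame of a 2 s clip this way yields a moving PLD; rendering a single frame yields the corresponding static stimulus.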

[Figure: Experiment 1 matching accuracy, MOVING and STATIC conditions]

Conclusions

1. It is possible to match faces based on movement alone
   • But this depends heavily on stimulus and task
   • Participants are more accurate when matching PLDs than avatars
2. Familiar faces are matched more accurately than unfamiliar faces
   • Even when participants do not know the face is familiar
3. Participants are less accurate matching a degraded image to a non-degraded video than matching two degraded videos
   • Changing the format within trials eliminates the movement advantage

References

1. O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Trends in Cognitive Sciences, 6, 261-266.
2. Hancock, P. J. B., Bruce, V., & Burton, A. M. (2000). Trends in Cognitive Sciences, 4, 330-337.
3. Knight, B., & Johnston, A. (1997). Visual Cognition, 4, 265-273.
4. Hill, H., Jinno, Y., & Johnston, A. (2003). Perception, 32, 561-566.

[Figure: Experiment 2 matching accuracy, MOVING and STATIC conditions]

Results

• Familiar > Unfamiliar
  • Expt 1: F(1,15) = 17.46, p = .001, η² = .538
  • Expt 2: F(1,32) = 6.33, p = .017, η² = .165
• Dynamic > Static… sometimes
  • Expt 1: Dynamic > Static, p = .009
  • Expt 2: Dynamic = Static, p = 1
• PLD > Avatar
  • Expt 1: F(1,15) = 8.56, p = .01, η² = .363
  • Expt 2: F(1,32) = 3.98, p = .054, η² = .111
• Some interactions
  • Expt 1: Manipulation x Movement, p = .013
  • Expt 2: Manipulation x Familiarity, p = .037
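The reported effect sizes are consistent with partial eta-squared, which for these one-degree-of-freedom effects can be recovered directly from the F ratio as η²p = F·df1 / (F·df1 + df2). A quick check against the familiarity effects above:

```python
def partial_eta_sq(F, df1, df2):
    """Partial eta-squared recovered from an F statistic and its dfs."""
    return (F * df1) / (F * df1 + df2)

# Familiarity, Expt 1: F(1,15) = 17.46
print(round(partial_eta_sq(17.46, 1, 15), 3))  # 0.538
# Familiarity, Expt 2: F(1,32) = 6.33
print(round(partial_eta_sq(6.33, 1, 32), 3))   # 0.165
```

The same formula reproduces the PLD > Avatar effect sizes (.363 and .111) from their F ratios.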

Copyright protected. F1000 Posters.
