Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
Post on 22-Jan-2018
TRANSCRIPT
Pattern Recognition and Applications Lab
Department of Electrical and Electronic Engineering
University of Cagliari, Italy
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
2017 ICCV Workshop ViPAR, Venice, Oct. 23, 2017
Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
battista.biggio@diee.unica.it
Dept. of Electrical and Electronic Engineering, University of Cagliari, Italy
@biggiobattista
http://pralab.diee.unica.it @biggiobattista
The iCub Humanoid
The iCub is a humanoid robot developed at the Italian Institute of Technology as part of the EU project RobotCub, and adopted by more than 20 laboratories worldwide.
It has 53 motors that move the head, arms and hands, waist, and legs. It can see and hear, and it has the sense of proprioception (body configuration) and movement (using accelerometers and gyroscopes).
[http://www.icub.org]
The object recognition system of iCub uses visual features extracted with CNN models trained on the ImageNet dataset [G. Pasquale et al., MLIS 2015].
The iCub Robot-Vision System
The iCubWorld28 Dataset [http://old.iit.it/projects/data-sets]
Crafting the Adversarial Examples
• Key idea: shift the attack sample towards the decision boundary
  – under a maximum input perturbation (Euclidean distance)
• Multiclass boundaries are obtained as the difference between the competing classes (e.g., one-vs-all multiclass classification)
[Figure: multiclass discriminant functions f1, f2, f3 and the pairwise difference f1 - f3]
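The maximum-perturbation constraint above can be enforced by projecting each candidate sample back onto the Euclidean ball of radius d_max centered at the original sample. A minimal sketch of this projection (the function name and values are illustrative, not from the talk):

```python
import numpy as np

def project_onto_ball(x, x0, d_max):
    """Project x onto the Euclidean ball of radius d_max centered at x0."""
    delta = x - x0
    norm = np.linalg.norm(delta)
    if norm <= d_max:
        return x  # already feasible, leave unchanged
    return x0 + delta * (d_max / norm)  # rescale onto the sphere of radius d_max

x0 = np.zeros(4)                        # original (clean) sample
x = np.array([3.0, 0.0, 0.0, 4.0])      # candidate after an attack step (norm 5)
x_proj = project_onto_ball(x, x0, d_max=1.0)
print(np.linalg.norm(x_proj - x0))      # 1.0
```

The projection keeps the perturbation direction and only rescales its length, so the attack stays as close as allowed to the point the optimizer proposed.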
Error-generic Evasion
• Error-generic evasion
  – k is the true class (blue)
  – l is the competing (closest) class in feature space (red)
• The attack minimizes the objective to have the sample misclassified as the closest class (could be any!)
[Figure: indiscriminate (error-generic) evasion in a 2D feature space]
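In the spirit of the slide, the error-generic objective can be written as the true-class score minus the highest competing score: driving it below zero means the sample is misclassified, as whichever class happens to be closest. A toy sketch with hand-picked discriminant values (all names and numbers are illustrative):

```python
import numpy as np

def error_generic_objective(scores, k):
    """f_k(x) - max_{l != k} f_l(x): negative => x is misclassified."""
    competing = np.delete(scores, k)     # scores of all classes except the true one
    return scores[k] - competing.max()

scores = np.array([0.9, 0.4, 0.7])       # discriminant values f_1..f_3 at a sample x
print(error_generic_objective(scores, k=0))  # 0.2: still correctly classified
```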
Error-specific Evasion
• Error-specific evasion
  – k is the target class (green)
  – l is the competing class (initially, the blue class)
• The attack maximizes the objective to have the sample misclassified as the target class
[Figure: targeted (error-specific) evasion in a 2D feature space]
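The error-specific variant uses the same score difference, but with k as the target class and the sign flipped: the attack increases the objective, and a positive value means the sample now lands in the chosen class. A toy sketch mirroring the previous one (values are illustrative):

```python
import numpy as np

def error_specific_objective(scores, k_target):
    """f_k(x) - max_{l != k} f_l(x), with k the *target* class.
    The attack maximizes this; positive => x is classified as the target."""
    competing = np.delete(scores, k_target)
    return scores[k_target] - competing.max()

scores = np.array([0.9, 0.4, 0.7])
print(error_specific_objective(scores, k_target=1))  # -0.5: not yet in the target class
```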
Gradient-based Evasion Attacks
• Solved with a projected gradient-based optimization algorithm
• The gradient of each discriminant function f_i with respect to the input x is obtained via the chain rule through the deep feature representation z:
  ∇f_i(x) = ∂f_i(z)/∂z · ∂z/∂x
[Figure: CNN feature extractor mapping x to z, followed by discriminant functions f_1, f_2, ..., f_i, ..., f_c]
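Putting the pieces together, a projected gradient attack repeatedly steps along the objective's gradient and projects back onto the feasible ball. A minimal numerical sketch on a toy linear two-class model (the model, step size, and finite-difference gradient are illustrative stand-ins for the talk's CNN and backpropagated gradients):

```python
import numpy as np

W = np.array([[1.0, 0.0], [-1.0, 0.5]])    # toy linear discriminants f_l(x) = W[l] @ x
x0 = np.array([1.0, 0.0])                   # clean sample, true class k = 0
k, d_max, step = 0, 1.5, 0.1

def objective(x):
    """Error-generic objective: f_k(x) - max competing score."""
    scores = W @ x
    competing = np.delete(scores, k)
    return scores[k] - competing.max()

x = x0.copy()
for _ in range(100):
    # finite-difference gradient (a real attack would backpropagate instead)
    g = np.array([(objective(x + 1e-5 * e) - objective(x - 1e-5 * e)) / 2e-5
                  for e in np.eye(2)])
    x = x - step * g                         # gradient descent on the objective
    delta = x - x0                           # project back onto the Euclidean ball
    n = np.linalg.norm(delta)
    if n > d_max:
        x = x0 + delta * (d_max / n)

print(objective(x) < 0)  # True: the perturbed sample is now misclassified
```

The projection after every step keeps the attack inside the maximum-perturbation constraint while the gradient term pushes the sample across the decision boundary.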
Adversarial Examples against the iCub
An adversarial example from class laundry-detergent, modified with our algorithm to be misclassified as cup.
The ‘Sticker’ Attack against iCub
Adversarial example generated by manipulating only a specific region, to simulate a sticker that could be applied to the real-world object.
This image is classified as cup.
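A region-constrained attack of this kind can be obtained by masking the perturbation so that only pixels inside the sticker area are ever modified. A minimal sketch (the mask shape and names are illustrative, not the talk's actual setup):

```python
import numpy as np

def apply_sticker(x, perturbation, mask):
    """Add a perturbation only inside the masked region; pixels outside stay intact."""
    return x + mask * perturbation

img = np.ones((4, 4))                 # toy 4x4 grayscale image
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                  # 2x2 'sticker' region
noise = np.full((4, 4), 0.5)          # adversarial perturbation, before masking

adv = apply_sticker(img, noise, mask)
print(adv[0, 0], adv[1, 1])           # 1.0 1.5: untouched outside, perturbed inside
```

In an actual attack the masked perturbation, rather than the whole image, becomes the variable being optimized, so the printable sticker absorbs all of the adversarial change.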
Why is ML Vulnerable to Evasion?
• Attack samples far from the training data are nevertheless assigned to ‘legitimate’ classes
• Rejecting such blind-spot evasion points should improve security!
[Figure: decision regions of SVM-RBF with no reject vs. SVM-RBF with a higher rejection rate]
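The countermeasure rejects samples that fall far from the training data, i.e., in the classifier's blind spots. A minimal distance-based sketch of this idea (the nearest-neighbor rule and threshold are illustrative simplifications; the talk uses SVMs with RBF kernels):

```python
import numpy as np

def classify_with_reject(x, train_X, train_y, threshold):
    """Nearest-neighbor decision, but reject if x is too far from all training data."""
    dists = np.linalg.norm(train_X - x, axis=1)
    i = dists.argmin()
    if dists[i] > threshold:
        return "reject"                # blind-spot sample: refuse to classify
    return train_y[i]

train_X = np.array([[0.0, 0.0], [1.0, 1.0]])
train_y = ["cup", "detergent"]

print(classify_with_reject(np.array([0.1, 0.0]), train_X, train_y, 0.5))  # cup
print(classify_with_reject(np.array([5.0, 5.0]), train_X, train_y, 0.5))  # reject
```

Raising the threshold trades some accuracy on legitimate inputs for a higher chance of catching evasion points that land far from any training sample.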
Countering Adversarial Examples
[Figure: security evaluation curves for error-specific evasion (similar results for error-generic attacks): classification accuracy vs. maximum input perturbation (Euclidean distance); small perturbations remain visually indistinguishable]
Conclusions and Future Work
• Adversarial Examples against iCub
• Countermeasure based on rejecting blind-spot evasion attacks
• Main open issue: instability of deep features
Small changes in input space (pixels) aligned with the gradient direction correspond to large changes in deep feature space!
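This instability can be illustrated with a linear feature map that strongly amplifies one direction: a tiny input change aligned with that direction moves the sample far in feature space. A toy sketch (the diagonal matrix stands in for a deep feature extractor; all values are illustrative):

```python
import numpy as np

# Toy 'feature extractor' z = A x with one strongly amplified direction
A = np.diag([100.0, 1.0])

x = np.array([1.0, 1.0])
dx = np.array([0.01, 0.0])    # small pixel-space change along the amplified direction

dz = A @ (x + dx) - A @ x     # resulting change in feature space
amplification = np.linalg.norm(dz) / np.linalg.norm(dx)
print(amplification)          # 100.0: the small input change is amplified 100x
```

Gradient-based attacks find exactly these amplified directions, which is why visually negligible pixel changes can flip the prediction.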
https://sec-ml.pluribus-one.it/