

Milos Manic, PhD, Professor, Virginia Commonwealth University

Trustworthy AI in Factory of the Future
Explainable, Trustworthy, and Adversarial Intelligence

Automation Research Day, Feb. 6, 2019 Commonwealth Center for Advanced Manufacturing (CCAM)

Outline
1. What is Artificial Intelligence?
   – AI Today
2. AI & Factory Automation
   – What is AI
   – Factories of the Future
   – AI/ML example (Biomass)
3. Trustworthy AI in FA
   – Right decisions for right reasons
   – Generalization
   – When does it fail (adversarial learning for XAI)
     • Understanding why the AI fails
4. Instead of Conclusion
   – The challenges



Virginia Commonwealth University

Overall school rankings 2015:

The Third Best Computer School in Virginia
Ranked #20 Computer School in The South
Ranked #68 Computer School in USA

http://computer-science-schools.com/virginia

• University
  – Founded in 1838
  – 13 schools, more than 31,000 students
  – 226 degree and certificate programs

• College of Engineering
  – Started in 1996
  – Enrollment: ~2,000
  – Faculty: 67% increase since 2010
  – Programs/Departments: 6
  – Operating Budget: $22.6M
  – Medical School


Intro, Milos… MHRG Group (CSC)
• Director, Cybersecurity Center
• 30+ grants completed (anomaly detection)
  – NSF, DOE, BEA, Dept. of Air Force, HP, Fujitsu USA
  – Data mining, AI/ML in cyber security, critical infrastructure protection, energy security, and resilient intelligent control

• 30+ international keynotes

– ML in critical infrastructures, security and resiliency (UK, Turkey, Portugal, Japan, US)

• Scholarly

– 180 peer-reviewed publications: 30 journal articles, 140+ conference papers

– 15 best paper awards (Italy, Japan, Poland, Singapore)

– 10 advisee awards

– 2018 R&D 100 Award (AICS Autonomic Intelligent Cyber Sensor)

• Synergistic

– Founding Chair, IEEE IES Technical Committee on Resilience and Security in Industry

– Officer, IEEE Industrial Electronics Society (IES)

– The 1st NSF National Workshop on Resilience Research for Critical Infrastructures, Oct. 2015, Arlington, VA https://cps-vo.org/group/NWRR2015.

– Chair, IEEE IECON 2018 http://www.iecon2018.org/, IEEE HSI 2019 http://hsi2019.welcometohsi.org/
– Associate Editor: Trans. on Industrial Electronics, Trans. on Industrial Informatics, and Int. Journal of Engineering Education


AI, Machine Learning – anomaly detection (Critical Infrastructure, HMI, Networking (SDN), Visualization)

http://www.people.vcu.edu/~mmanic/


What is Artificial Intelligence?



What IS AI...?

Artificial = Made by humans; created, produced rather than natural.

Defining Intelligence – much harder!
• The capacity to acquire and apply knowledge.
• The ability to learn or understand things or to deal with new or trying situations: the skilled use of reason.

• Terminology: Artificial Intelligence, Machine Learning, Computational Intelligence (Fuzzy/Neural/Genetic), Deep Learning

Is AI our attempt to build models of ourselves?

AI today… "data driven", takes many forms

Who is NOT using AI today?

Stuffed animal or $6K medical device? http://www.parorobots.com/


AI…the difficult questions…

• How do you… replicate something we do not understand?
• What is human emotion?
• Sentience… emotion, love, dreams, consciousness
• Fear, anger, violence?
• Memory – ours is subjective, fallible
• How do you teach a computer to forget… or to dream? It is something our minds need to do…


AI…difficult questions… (cont.)
• Data techniques, trust, and metrics
• Trust and trustworthy(-seeming) robots; how to quantify?
• Public and government policies
• Should a robot that takes the same responsibility as a human be equally treated and insured? Human values in AI?
• Autonomous vehicles and intelligence – ethical, moral questions
• Asimov's rules
• How to quantify ethics and morality? Intelligence – the ability to learn, to control?
• Humans disrespecting Google cars – equal partners?
• Ethics (decisions): How to trust it? Trust too much -> it fails
• Government & industry in the USA – push for trust


"Are search engines a map of what people are thinking, or actually a map of HOW people are thinking?"

"Self awareness, manipulation, sexuality – if that's not true AI, I don't know what is…" – from Ex Machina

"AI would be the biggest event in human history. Unfortunately, it might also be the last." – Elon Musk (Tesla)

"The development of full Artificial Intelligence could spell the end of the human race." – Stephen Hawking, English physicist

"Google will fulfill its mission only when its search engine is AI-Complete." – Larry Page (Google co-founder)

"If a super-intelligent machine decided to get rid of us, I think it would do so pretty efficiently." – Shane Legg (DeepMind co-founder)

"I don't understand why some people are not concerned." – Bill Gates (Microsoft)


https://www.mckinsey.com/business-functions/operations/our-insights/human-plus-machine-a-new-era-of-automation-in-manufacturing

Automation in Manufacturing

New Automation Era: AI, ML, Robotics

Where and how much to automate?
• Study of 46 countries covering 80% of the global workforce (2015)
  – 64% of manufacturing working hours (478B of 749B) automatable with current technology
  – 236M of 372M FTEs ($2.7T of $5.1T) could be eliminated/repurposed
• Machines: matching or outperforming humans (even in work requiring cognitive capabilities)
• Manufacturing: while one of the most highly automated industries, still significant automation potential!

Why NOT use AI today?


The Factory of the Future?

Improvements in plants: Structure, Digitization, Processes

Global survey on 2030 FoF vision (2016)

Structure
• Multi-dimensional layout (Audi R8, Heilbronn: no fixed conveyor; RFID and laser-scanner guidance in the floor assembly)
• Modular setup (Toyota: modular conveyor for flexibility)

Processes
• Customer Centricity (big data analytics; Daimler: last-minute modifications of cars)
• Continuous Improvement (value-adding activities; Bosch: production improvement)

https://www.bcg.com/en-us/publications/2018/artificial-intelligence-factory-future.aspx

Digitization – smart, collaborative robots (Changan Ford), Additive Manufacturing (Rolls-Royce Phantom, BMW 3D printing), AR/VR (Volkswagen, logistics), Immersive Training (Mercedes-Benz, virtual assembly lines), Decentralized Production Steering (Bosch, tool location detection), Big Data & Analytics (Mercedes-Benz, predictive analytics)


AI as part of Factories of the Future
AI -> up to 70% cost reduction

Opportunities, yet skepticism…

Global survey of 1,000 executives (2018)
• Forefront: transportation, logistics, automotive
• Process industries lag behind
• Germany: automotive most advanced; process industries have a ways to go

Trend
• Lowest adopters: Japan (11%), Singapore (10%), France (10%)
• Early adopters vs. highest ambitions
  – Early adopters: US (25%), China (23%), and India (19%)
  – Highest ambitions: Singapore (97%), India (96%), China (94%)
• China has overtaken the US in AI

https://www.bcg.com/en-us/publications/2018/artificial-intelligence-factory-future.aspx

Early industry adopters: Transportation and Logistics (21%) and Automotive (20%) highest, while engineered products (15%) and process industries (13%) lag behind


AI in the Factory of the Future

AI everywhere…

Global survey of 1,000 executives (2018)

AI application areas
• Outside the Factory
  – Engineering: R&D, simplifying production, generative product design (AI suggesting unconventional solutions like bionic structures)
  – Supply Chain Management: demand forecasting (big data analytics on customers, media, and weather; enterprise resource planning with customer insights)
• Inside the Factory
  – Production: continuous and discrete (chemicals and assembly tasks); self-optimization, material composition, image recognition for unsorted parts in undefined locations (bin, conveyor belt)
  – Maintenance: reduce equipment breakdowns, increase asset utilization (predictive maintenance, data analytics)
  – Quality: ML vision for defect identification
  – Logistics: in-plant and warehousing; autonomous movement and efficient supply of material with obstacle detection (AGVs); warehouse self-optimization (moving high-demand parts closer for faster access and low-demand parts to more remote locations)

https://www.bcg.com/en-us/publications/2018/artificial-intelligence-factory-future.aspx


Example: Artificial Intelligence Powered Bio-fuel Generation

https://www.energy.gov/eere/success-stories/articles/eere-success-story-artificial-intelligence-based-control-system

https://www.inl.gov/article/systems-engineering/


Overview
• Objective: Improving the reliability of biomass feedstock preprocessing
• Why: Preprocessing is highly sensitive to variations in the input product, which translates into various stoppages and down-time.
• How:
  – Using Artificial Intelligence to predict the behavior of the system
  – Learning from process data
  – Self-optimizing parameters based on material input
• Results: Increased reliability from 63% to 96%


Process Development Unit (PDU)
• PDU: full-scale, integrated biomass feedstock preprocessing system operated by the INL
• Performs a two-stage grinding process:
  – Two grinders
  – Several conveyor belts

Process Development Unit at Biomass Feedstock National User Facility

https://bfnuf.inl.gov/SitePages/Process%20Development%20Unit.aspx


• Variability in the input product causes:
  – Plugging of grinding screens
  – Overloading of grinders
  – Plugging of conveyors
  – Increased wear and tear
• The plant operators manually monitor the equipment workload to reduce down-time
• AI can be used to automate this monitoring and adaptation

Plugged screen

Feedstock Grinding: Obstacles


PDU-RS
• Enables the operator to:
  – Observe outcomes for different operating configurations
  – Make an informed decision about the best operating configuration for a particular bale


PDU-RS: Data-driven model

[Diagram: Inputs -> PDU-RS model -> Outputs]
Given (Inputs):
• Bale moisture
• Screen sizes
• Infeed rate
Estimates (Outputs):
• Throughput
• Currents of each component
• Down-time
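A minimal sketch of such a data-driven surrogate model (not the actual PDU-RS implementation): a multi-output regressor mapping the inputs above to the outputs above. The file name, column names, and the random-forest choice are illustrative assumptions.

```python
# Sketch of a PDU-RS-style surrogate: regress process outputs on operating
# inputs, using logged runs. All names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("pdu_process_logs.csv")          # hypothetical process log
X = df[["bale_moisture", "screen1_size", "screen2_size", "infeed_rate"]]
y = df[["throughput", "grinder1_current", "grinder2_current", "down_time"]]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestRegressor(n_estimators=200)   # supports multi-output y
model.fit(X_train, y_train)
print("R^2 on held-out runs:", model.score(X_test, y_test))
```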



Improving Reliability
• Reliability indicated by Percentage of Active-Time (PAT)
• PAT: percentage of time that the equipment is actively processing product
  – Low PAT: the result of shutdowns caused by equipment exceeding the maximum amperage
  – High PAT: desirable for improving the reliable operation of the process

• PDU-RS estimates PAT for each component

• Reliable operation by finding a system configuration that maximizes throughput, subject to constraints on PAT for each component (a minimal sketch follows below)
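A minimal sketch of that configuration search, assuming a hypothetical `estimate` wrapper around a PDU-RS-style model that returns predicted throughput and per-component PAT; the parameter ranges and the 90% bound are illustrative assumptions, not the plant's actual settings.

```python
# Grid-search candidate configurations, keep those whose estimated PAT meets
# the constraint for every component, and pick the highest estimated throughput.
import itertools
import numpy as np

def best_configuration(estimate, bale_moisture, pat_min=0.90):
    candidates = itertools.product(
        np.linspace(2.0, 6.0, 5),       # grinder 1 screen size (illustrative)
        np.linspace(0.5, 2.0, 5),       # grinder 2 screen size (illustrative)
        np.linspace(1.0, 10.0, 10),     # infeed rate (illustrative)
    )
    best_cfg, best_throughput = None, float("-inf")
    for cfg in candidates:
        throughput, pats = estimate(bale_moisture, *cfg)
        # Constraint: every component stays actively processing >= pat_min.
        if min(pats) >= pat_min and throughput > best_throughput:
            best_cfg, best_throughput = cfg, throughput
    return best_cfg, best_throughput
```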

• Experimental results showed:
  – Increase in reliability from 63% to 96%
  – Throughput variability significantly reduced
  – Zero unexpected downtime due to parameter adaptation


So… smart controls, fairly doable…

But… is performance enough?

=>Trustworthy and Explainable AI…


Black Box AI
• AI in safety-critical domains
  – Humans, operators hesitant to fully adopt
  – "Black-box" models… how to trust?
• To fully utilize the benefits of AI
  – Need "transparency" to open the black box
  – Need models to explain themselves to us

[Diagram: Black-box AI – Inputs -> Decision. Goal: Inputs -> Decision plus an explanation: "This is my knowledge of the concept. That is why I'm giving you this decision."]


Explainable AI (XAI)
• Explaining AI models at two stages
  – Before deployment (offline)
  – During deployment (online)
• Both types of explanations are important
  – Offline trust is important to make people use AI
  – Online trust is important for trusting individual AI decisions

Trust needed at every stage of AI model life-cycle


Offline explanations
• Before deployment
• Gaining a "holistic" understanding of the AI model
• Users need to trust the model to allow it to go live
• Diagnosis of faults in training (e.g., why not 100% accuracy)

[Diagram: Offline Explanations – Historic Data -> (Training) -> AI Model -> Explanation Interface -> Debugging and Diagnosis. Questions: What has the AI model learned? When and why does the AI model fail?]


Online explanations

[Diagram: Online Explanations – Live Data -> AI Model -> Explanation Interface. Questions: What are the reasons behind the decision? How confident is the model about the decision?]

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?: Explaining the Predictions of Any Classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 2016, pp. 1135-1144, https://arxiv.org/pdf/1602.04938v1.pdf

Example: Online Explanation of AI in Medical Diagnosis

Local Interpretable Model-Agnostic Explanations https://homes.cs.washington.edu/~marcotcr/blog/lime/
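For a tabular model like an intrusion detector, a LIME-based online explanation might look like the following sketch; `model`, `X_train`, `X_live`, and `feature_names` are assumed placeholders for a trained classifier and its data.

```python
# Sketch of a per-decision (online) explanation with LIME: a local linear
# surrogate is fitted around one live sample to rank feature contributions.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                        # training data used to sample perturbations
    feature_names=feature_names,
    class_names=["Normal", "Attack"],
    discretize_continuous=True,
)

exp = explainer.explain_instance(
    X_live[0],                      # the live instance to explain
    model.predict_proba,            # black-box probability function
    num_features=5,                 # top-5 most relevant features
)
print(exp.as_list())                # [(feature condition, weight), ...]
```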


Generating explanations
• Goes beyond engineering and computer science
• Multi-disciplinary effort
  – Computer science
  – Cognitive science
  – Psychology
  – User interface design
  – Social sciences
• Types of explanations
  – Heat maps
  – Hidden layer activations
  – Contrastive explanations
  – Explanation by examples

Heat maps (understand important pixels): https://lrpserver.hhi.fraunhofer.de/
Explanation by selection of teaching examples: https://www.darpa.mil/attachments/XAIProgramUpdate.pdf


Understanding what the AI model has learned

Example: anomaly detection (but the approach translates to any other domain…)

K. Amarasinghe, K. Kenney, and M. Manic, "Toward Explainable Deep Neural Network based Anomaly Detection," in Proc. 11th International Conference on Human System Interaction, IEEE HSI 2018, Gdansk, Poland, July 4-6, 2018. DOI: 10.1109/HSI.2018.8430788

K. Amarasinghe, M. Manic, "Improving User Trust on Deep Neural Networks based Intrusion Detection Systems," in Proc. 44th Annual Conference of the IEEE Industrial Electronics Society, IECON 2018, Washington DC, USA


AI decisions, what is the rationale?

We need an understanding of what the AI model has learned – this leads to understanding its decision-making process

[Diagram: Humans, (everyday) decision making – available evidence -> assessing evidence -> Options A, B, C -> check which option is most supported by the evidence -> Rationale: evidence points to Option A]

[Diagram: AI decision making – input feature values -> AI model -> Concepts A, B, C as available evidence -> check which concept is most supported by the evidence. We do not know the rationale.]

…so humans can understand “how the AI model thinks”

Causal explanations – what evidence ("cause") supports the decision


Anomaly Detection using DNN
• Anomaly detection
  – Data mining
  – Holy grail of cybersecurity
• Data-driven techniques
  – Minimal a priori knowledge
  – Commonly used: support vector machines and "shallow" neural networks
• Deep Neural Networks (DNNs) have been very successful in many domains
  – Still used as black boxes!!!
  – A major drawback for mission-critical systems
  – As a result, little human trust in the DNN
• For anomaly detection with DNNs, explaining DNN decisions is essential! Is the DNN doing the right thing? (Accuracy)

Is it doing the right thing for the right reasons? (Explanation)


Anomaly Detection

• A deep MLP is used
  – Four different configurations tested
  – Varying depth and number of neurons (width)
• NSL-KDD intrusion detection dataset used
  – https://github.com/defcom17/NSL_KDD
  – A modification of the KDD Cup 1999 dataset by DARPA
  – 41 total features in four groups
  – Four attack categories: Denial of Service (DoS), Probe, User to Root (U2R), and Remote to User (R2L)

• Focus of this study
  – DoS and Probe attacks considered
  – Both attack types grouped together to create one class
• Supervised learning used for a binary classification (a minimal sketch follows this list)
  – Attack vs. Normal
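A minimal sketch of this experimental setup (not the authors' exact code): several MLP configurations of varying depth and width trained for binary Attack-vs-Normal classification. `X` (numerically encoded 41-feature matrix) and `y` (1 = attack, 0 = normal) are assumed placeholders; layer sizes and hyperparameters are illustrative assumptions.

```python
# Train several MLP configurations on a pre-encoded NSL-KDD matrix and compare
# their test accuracies.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
scaler = StandardScaler().fit(X_train)

# Four configurations varying depth and width, as in the slide (sizes assumed).
configs = [(64,), (64, 32), (128, 64, 32), (128, 128, 64, 32)]
for hidden in configs:
    clf = MLPClassifier(hidden_layer_sizes=hidden, activation="relu", max_iter=200)
    clf.fit(scaler.transform(X_train), y_train)
    acc = clf.score(scaler.transform(X_test), y_test)
    print(f"depth={len(hidden)} width={hidden}: accuracy={acc:.3f}")
```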


Input Feature Relevance
• Relevance: how much each input feature contributes to the classification decision
• Why input feature relevance?
  – Can identify which inputs drive decisions toward certain classes
• Achieved by decomposing the composite function of the DNN
  – Layer-wise Relevance Propagation
• Layer-wise Relevance Propagation (LRP)
  – Introduced by Bach et al. for finding pixel relevance in image classification
  – Assumes that the classification algorithm can be decomposed into several layers of computation
  – Applies to DNNs

http://iphome.hhi.de/samek/pdf/BinICISA16.pdf


Input Feature Relevance
• Idea: find the relevance scores of layer (l) when layer (l+1) is available
• Decomposing layers
  – Decomposition conserves relevance scores
  – Relevance scores are back-propagated in "messages" from (l+1) to (l)

$$f(x) = \cdots = \sum_{d \in l+1} R_d^{(l+1)} = \sum_{d \in l} R_d^{(l)} = \cdots = \sum_{d} R_d^{(1)}$$

$$\sum_{i} R_{i \leftarrow j}^{(l,\,l+1)} = R_j^{(l+1)}, \qquad R_i^{(l)} = \sum_{j} R_{i \leftarrow j}^{(l,\,l+1)}$$


BUT… Input Feature Relevance: "Attack"
• Different DNN models
• Similar classification accuracies
• Input feature contributions ARE DIFFERENT


So… is accuracy sufficient?
• A study of DNNs for anomaly detection
  – Supervised, two-class learning (Attack and Normal)
  – Provide the factors that were most relevant to the prediction
• If you look at accuracy scores only…
  – Any model will do
  – However, each one uses a different set of features
  – The learning is different!
• Different feature sets drive DNNs to learn different things despite similar accuracies
• Implications…
  – Users can determine whether the algorithm is looking at the right features
  – It is important to have this information in order to deploy the right algorithm
  – Explanation is very important in understanding, and therefore choosing, the right algorithm
  – Users can interact with the DNN-based anomaly detector
  – Users can evaluate the DNN based on the reasons for its outputs, not just accuracies
  – Users have more information about DNN predictions to build trust


Generalization: A Holy Grail of Machine Learning

C. Wickramasinghe, D. Marino, K. Amarasinghe, M. Manic, "Generalization of Deep Learning For Cyber-Physical System Security: A Survey," in Proc. 44th Annual Conference of the IEEE Industrial Electronics Society, IECON 2018, Washington DC, USA, Oct. 21-23, 2018


Generalization: A Holy Grail of Machine Learning


• Generalization – the performance of a ML model on previously unseen scenarios.

• Strategies
  – Many strategies have been proposed
  – "Regularization" is the most common (see the sketch below)

[Figure: common regularization techniques]
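As an illustration, a few of these common techniques (L2 weight decay, dropout, early stopping) in a Keras sketch; layer sizes and coefficients are illustrative assumptions, not tuned values.

```python
# Regularization sketch: penalize large weights, randomly drop units during
# training, and stop when validation loss stops improving.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight decay
    layers.Dropout(0.5),                                     # dropout
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping halts training once validation loss plateaus.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100,
#           callbacks=[early_stop])
```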

• Poorly understood…
  – Still… despite the efforts over the years
  – Especially in complex models like Deep Learning
  – Still insufficient in many applications
• Adversarial learning
  – One can find regions where a learned model produces incorrect decisions, therefore "breaking" generalization


Understanding Why an AI Model Fails: An Adversarial Approach

D. Marino, C. Wickramasinghe, M. Manic, "An Adversarial Approach for Explainable AI in Intrusion Detection Systems," in Proc. 44th Annual Conference of the IEEE Industrial Electronics Society, IECON 2018, Washington DC, USA


Adversarial Machine Learning
• Extensively used in cyber-security to find vulnerabilities in data-driven models
• Adversarial samples
  – Crafted in order to deceive a data-driven model
  – Done by adding small modifications that are indistinguishable to a human

https://arxiv.org/pdf/1412.6572.pdf

[Figure: original sample (correctly classified) + imperceptible modification -> adversarial sample (incorrect classification)]
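The cited Goodfellow et al. paper generates such samples with the Fast Gradient Sign Method (FGSM). A minimal TensorFlow sketch, where `model`, `x`, and `y_true` are assumed placeholders for a trained softmax classifier and one labeled sample (the 0-1 clipping assumes image-like inputs):

```python
# FGSM: take one small step in the direction of the loss gradient's sign,
# which maximally increases the loss under an L-infinity budget epsilon.
import tensorflow as tf

def fgsm(model, x, y_true, epsilon=0.01):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y_true, model(x))
    grad = tape.gradient(loss, x)            # gradient of loss w.r.t. input
    x_adv = x + epsilon * tf.sign(grad)      # small step that maximizes loss
    return tf.clip_by_value(x_adv, 0.0, 1.0) # keep inputs in a valid range
```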


Adversarial Samples
• In recent years, deep learning models have been shown to be susceptible to adversarial attacks
• CNNs: Convolutional Neural Networks can be "fooled" by making small modifications to the samples in order to produce an incorrect output

[Figure: original samples (correctly classified) vs. adversarial samples (incorrect classification)]

Your Data is Being Manipulated: http://zinc.mondediplo.net/sites/238210 https://arxiv.org/pdf/1602.02697.pdf


Adversarial ML: Applications

• Attacker's perspective
  – Evade detection
  – Confuse the classifier
  – Degrade performance
  – Gain information about the model
• Adversarial Machine Learning
  – A powerful tool to understand the decision boundaries of machine learning models
  – Used to generate explanations
  – Find why AI fails and how to avoid it!
• Defender's perspective
  – Perform vulnerability assessment
  – Study robustness against noise
  – Improve generalization
  – Debug machine learning models


Experiments: Explainable Adversarial AI

Normal samples were misclassified as DoS because of:
• low connection duration (duration)
• high number of connections to the same host (count)
• low login success rate (num_root, logged_in)

[Figure legend – Graph: the relevant features responsible for the misclassification. Bars: the difference between the misclassified samples and the modified samples.]

Output graph for Normal samples misclassified as DOS

Short connections (lower duration than normal) + many connections + logins typically denied (logged_in) -> make the algorithm think this is malicious behavior (DoS); however, the labeled data says it is not!
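A minimal sketch of this adversarial-explanation idea (not the paper's exact method): starting from a misclassified sample, search for the closest modified sample the model classifies correctly; the per-feature difference (the bars above) then points to the features responsible for the misclassification. `model` (a softmax classifier) and the hyperparameters are assumed placeholders.

```python
# Find a nearby, sparsely modified version of a misclassified sample that the
# model assigns to the true class; the deltas explain the misclassification.
import tensorflow as tf

def explain_misclassification(model, x, y_true, steps=100, lr=0.01, lam=0.1):
    x = tf.convert_to_tensor(x)
    x_mod = tf.Variable(x)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            # Push the prediction toward the true class while staying close
            # to the original sample (L1 penalty keeps the change sparse).
            loss = loss_fn(y_true, model(x_mod)) \
                 + lam * tf.reduce_sum(tf.abs(x_mod - x))
        opt.apply_gradients([(tape.gradient(loss, x_mod), x_mod)])
    return (x_mod - x).numpy()   # per-feature difference ("bars")
```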


So how is adversarial learning (AL) helping…
• Understanding of the model
  – Improving the understanding of the model is essential to building trust in machine learning models
• New interfaces needed
  – To provide satisfactory explanations that justify the misclassifications, with descriptions that match expert knowledge
• Approaches to adversarial ML needed
  – Identify anomalies in the system
  – Understand when the model fails
  – Understand why it is failing

AI/ML should be making the right decisions for the right reasons!


Future of Explainable AI
What are the Challenges?


XAI, the difficult questions
• Generalized explaining methods
  – Can we develop generalized explaining methods?
  – The notion of explainability is application-dependent
  – Difficult to define blanket algorithms
• How to get the users involved
  – How to get the users involved in the explanation process?
  – Explanations are highly user-specific
  – Generated explanations should resonate with the user
  – What's the best feedback?
• Metrics
  – How do we measure explainability?
  – Can we quantify it?
• How much is enough?
  – What is a sufficient level of explainability?
  – Who defines this?

https://news.itu.int/welcome-to-trustfactory-ai-nine-projects-to-build-trust-in-ai/

How to help adoption of AI & ML?
How to build trust in AI?


http://hsi2019.welcometohsi.org


"Simplicity is the ultimate sophistication."

~ Leonardo da Vinci

Thank you! Prof. Milos Manic, [email protected]

Kasun Amarasinghe, [email protected]
Daniel L. Marino, [email protected]
Chathurika Wickramasinghe, [email protected]


A few of our papers…
1. K. Amarasinghe, M. Manic, "Improving User Trust on Deep Neural Networks based Intrusion Detection Systems," in Proc. 44th Annual Conference of the IEEE Industrial Electronics Society, IECON 2018, Washington DC, USA, Oct. 21-23, 2018. DOI: 10.1109/IECON.2018.8591322
2. C. Wickramasinghe, D. Marino, K. Amarasinghe, M. Manic, "Generalization of Deep Learning For Cyber-Physical System Security: A Survey," in Proc. 44th Annual Conference of the IEEE Industrial Electronics Society, IECON 2018, Washington DC, USA, Oct. 21-23, 2018. DOI: 10.1109/IECON.2018.8591773
3. D. Marino, C. Wickramasinghe, M. Manic, "An Adversarial Approach for Explainable AI in Intrusion Detection Systems," in Proc. 44th Annual Conference of the IEEE Industrial Electronics Society, IECON 2018, Washington DC, USA, Oct. 21-23, 2018. DOI: 10.1109/IECON.2018.8591457
4. K. Amarasinghe, K. Kenney, and M. Manic, "Toward Explainable Deep Neural Network based Anomaly Detection," in Proc. 11th International Conference on Human System Interaction, IEEE HSI 2018, Gdansk, Poland, July 4-6, 2018. DOI: 10.1109/HSI.2018.8430788
5. C. Wickramasinghe, K. Amarasinghe, D. Marino, and M. Manic, "Deep Self-Organizing Maps for Visual Data Mining," in Proc. 11th International Conference on Human System Interaction, IEEE HSI 2018, Gdansk, Poland, July 4-6, 2018. DOI: 10.1109/HSI.2018.8430845

More at http://www.people.vcu.edu/~mmanic/PubsList.html ....
