
Page 1: UAST and Evolving Systems of Systems in the Age of the Black Swan

[email protected], attributed copies permitted 1

UAST and Evolving Systems of Systems in the Age of the Black Swan

Part 2: On Detecting Aberrant Behavior

“There is no difficulty, in principle, in developing synthetic organisms as complex and as intelligent as we please. But we must notice two fundamental qualifications; first, their intelligence will be an adaptation to, and a specialization towards, their particular environment, with no implication of validity for any other environment such as ours; and secondly, their intelligence will be directed towards keeping their own essential variables within limits. They will be fundamentally selfish.”

Principles of the self-organizing system, W. Ross Ashby, 1962

www.parshift.com/Files/PsiDocs/Pap090901IteaJ-PathsForPeerBehaviorMonitoringAmongUAS.pdf www.parshift.com/Files/PsiDocs/Pap091201IteaJ-MethodsForPeerBehaviorMonitoringAmongUas.pdf

Based on a presentation at the UAST Tutorial Session, ITEA LVC Conference, 12 Jan 2009, El Paso, TX.

UAST: Unmanned Autonomous Systems test

also: L3 Art Brooks did Master's paper here

Page 2: UAST and Evolving Systems of Systems in the Age of the Black Swan

Systems in Context

[Diagram: Class 1 testing system(s), Class 2 systems under test, and a Class 2 (federated?) testing enterprise, all embedded in an environment (an ecology) of politics, technology, government procedures, military procedures, military reality, competitors, and enemies; UAST and the UASoS sit within this context.]

Domain Independent Principles Can Inform UAST ConOps

Page 3: UAST and Evolving Systems of Systems in the Age of the Black Swan

Problem and Observation

• Self Organizing Systems of Systems are too complex to test beyond “minimal” functionality and “apparent” rationality.

• Autonomous self organizing entities have a willful mind of their own.

• Unpredictable emergent behavior will occur in unpredictable situations.

• Emergent behavior is necessary and desirable (when appropriate).

• Inevitable: sub-system failure, command failure, enemy possession.

• UAS will work together as flocks, swarms, packs, and teams.

• Even human social systems exhibit unintended “lethal” consequences.

--------

In biological social systems, members monitor/enforce behavior bounds.

Could UAS have built-in socially attentive monitoring (SAM) on mission?

Could UAST employ SAM proxies for monitoring antisocial UAS?

Challenges:

1) “Learning” the behavior patterns to monitor.

2) Technology for monitoring complex dynamic patterns in real time.

3) Decisive counter-consequence action.
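
To make the SAM proposition above concrete, here is a minimal sketch in Python (all field names, bounds, and responses are invented assumptions, not anything from the referenced papers): a peer periodically checks another UAS's reported state against shared behavior bounds and escalates only when violations persist. Learning the bounds (challenge 1) and choosing the counter-consequence (challenge 3) are deliberately left as stubs.

# Hypothetical sketch of peer (socially attentive) behavior monitoring among UAS.
# Behavior bounds, state fields, and responses are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PeerState:
    uas_id: str
    altitude_m: float
    speed_mps: float
    distance_to_nearest_peer_m: float

# Challenge 1: these bounds would have to be learned or specified per mission.
BEHAVIOR_BOUNDS = {
    "altitude_m": (100.0, 5000.0),
    "speed_mps": (0.0, 80.0),
    "distance_to_nearest_peer_m": (50.0, float("inf")),
}

def violations(state: PeerState) -> list[str]:
    """Challenge 2: compare a peer's reported state to the shared behavior bounds."""
    out = []
    for field, (lo, hi) in BEHAVIOR_BOUNDS.items():
        value = getattr(state, field)
        if not (lo <= value <= hi):
            out.append(f"{field}={value} outside [{lo}, {hi}]")
    return out

def monitor_peer(state: PeerState, strikes: dict[str, int], limit: int = 3) -> None:
    """Challenge 3: decide on a counter-consequence once violations persist."""
    bad = violations(state)
    if not bad:
        strikes[state.uas_id] = 0
        return
    strikes[state.uas_id] = strikes.get(state.uas_id, 0) + 1
    if strikes[state.uas_id] >= limit:
        print(f"{state.uas_id}: persistent aberrant behavior, escalate/contain: {bad}")
    else:
        print(f"{state.uas_id}: warning {strikes[state.uas_id]}: {bad}")

if __name__ == "__main__":
    strikes: dict[str, int] = {}
    for tick in range(4):
        monitor_peer(PeerState("UAS-1002", altitude_m=90.0, speed_mps=30.0,
                               distance_to_nearest_peer_m=40.0), strikes)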

Page 4: UAST and Evolving Systems of Systems in the Age of the Black Swan

Survey on Lethality and Autonomous Systems

[Chart: Responsibility for Lethal Errors by Responsible Party. The soldier was found to be the most responsible party, and robots the least.]

Lilia Moshkina, Ronald C. Arkin, Lethality and Autonomous Systems: Survey Design and Results, Technical Report GIT-GVU-07-16, Mobile Robot Laboratory, College of Computing, Georgia Institute of Technology, p. 30, 2007
www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf

Page 5: UAST and Evolving Systems of Systems in the Age of the Black Swan

Applicability of ethical categories is ranked from more concrete and specific to more general and subjective.

Lilia Moshkina, Ronald C. Arkin, Lethality and Autonomous Systems: Survey Design and Results, Technical Report GIT-GVU-07-16, Mobile Robot Laboratory, College of Computing, Georgia Institute of Technology, p. 29, 2007

www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf

Page 6: UAST and Evolving Systems of Systems in the Age of the Black Swan

This cover of I, Robot illustrates the story "Runaround", the first to list all Three Laws of Robotics (Asimov 1942).

0) A robot may not harm humanity, or, by inaction, allow humanity to come to harm (added later).

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Four “Three Laws” of Robotics (Isaac Asimov)

Page 7: UAST and Evolving Systems of Systems in the Age of the Black Swan

Self Organizing Inevitability

Isaac Asimov's three laws of robotics were developed to allow UxVs to coexist with humans, under values held dear by humans (imposed on robots).

These were not weapon systems.

Asimov’s robots existed in a peaceful social environment. Ours are birthing into a community of warfighters, with enemies, cyber warfare, great destructive capabilities, human confusion, and a code of war.

Ashby notes that a self organizing system by definition behaves selfishly, and warns that its behaviors may be at odds with its creators.

So – can we afford to build truly self organizing systems?

A foolish question. We will do that regardless of the possible dangers, just as we opened the door to atomic energy, bio hazards, organism creation, nanotechnology, and financial meltdown.

Can a cruise missile on a mission be hacked and turned to the enemy’s bidding? Perhaps we can say that it hasn’t occurred yet. Can a cruise missile get sick or confused, and hit something it shouldn’t? That’s already happened.

The issue is not “has it happened”. The issue is “can it happen”.

We cannot test away bad things from happening, so we had better be vigilant for signs of imminence, and have actionable options when the time comes.

Page 8: UAST and Evolving Systems of Systems in the Age of the Black Swan

Four Selfish (Potential) Guiding Principles (for synthetics)

Protection of permission to exist (civilians, public assets)

Protection of mission

Protection of self

Protection of others of like kind

A safety mechanism based on principles, for we can never itemize all of the situational patterns and the appropriate response to each.
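
One hedged way to read the four principles as executable policy, sketched in Python with invented predicates and an invented Action record: evaluate a proposed action against the principles in priority order and veto it at the first violation, rather than trying to enumerate every situational pattern.

# Hypothetical sketch: principle-ordered veto of proposed actions.
# The principle predicates and the Action fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    endangers_civilians: bool
    aborts_mission: bool
    endangers_self: bool
    endangers_teammates: bool

# Ordered from most to least binding, per the slide's four selfish principles.
PRINCIPLES = [
    ("protect permission to exist", lambda a: not a.endangers_civilians),
    ("protect mission",             lambda a: not a.aborts_mission),
    ("protect self",                lambda a: not a.endangers_self),
    ("protect others of like kind", lambda a: not a.endangers_teammates),
]

def screen(action: Action) -> tuple[bool, str]:
    """Return (allowed, reason); the first violated principle vetoes the action."""
    for name, ok in PRINCIPLES:
        if not ok(action):
            return False, f"vetoed by principle: {name}"
    return True, "no principle violated"

if __name__ == "__main__":
    print(screen(Action(False, False, True, False)))   # vetoed: protect self
    print(screen(Action(False, False, False, False)))  # allowed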

Page 9: UAST and Evolving Systems of Systems in the Age of the Black Swan

[Image: ARTURO MEDINA]

Page 10: UAST and Evolving Systems of Systems in the Age of the Black Swan

[Image: ARTURO MEDINA]

Page 11: UAST and Evolving Systems of Systems in the Age of the Black Swan

[Image: ARTURO MEDINA]

… and here’s theCat’s Cradle

Page 12: UAST and Evolving Systems of Systems in the Age of the Black Swan

Aberrant behavior arising in a stable social system is detected and opposed

Example: Female penguin attempting to steal a replacement egg for the one she lost is prevented from doing so by others.

wip.warnerbros.com/marchofthepenguins/

Page 13: UAST and Evolving Systems of Systems in the Age of the Black Swan

Ganging Up on Aberrant Behavior

Queenless ponerine ants have no queen caste. All females are workers who can potentially mate and reproduce. A single “gamergate” emerges, by virtue of alpha rank in a near-linear dominance hierarchy of about 3–5 high-ranking workers. Usually the beta replaces the gamergate if she dies. A high-ranker can enhance her inclusive fitness by overthrowing the gamergate, rather than waiting for her to die naturally.

(a) To end coup behavior, the gamergate (left) approaches the pretender, usually from behind or from the side, briefly rubs her sting against the pretender depositing a chemical signal, then runs away, leaving subsequent discipline to others.

(b) One to six low-ranking workers bite and hold the appendages of the pretender for up to 3–4 days with workers taking turns. Immobilization can last several days, and typically results in the pretender losing her high rank. It is not clear why punishment causes loss of rank, but it is probably a combination of the stress caused by immobilization and being prevented from performing dominance behaviours. Occasionally the immobilized individual is killed outright.

T. Monnin, F.L.W. Ratnieks, G.R. Jones, R. Beard, Pretender punishment induced by chemical signaling in a queenless ant, Nature, V. 419, 5 Sep 2002

http://lasi.group.shef.ac.uk/pdf/mrjbnature2002.pdf

Page 14: UAST and Evolving Systems of Systems in the Age of the Black Swan

Promising Things to Leverage

Social pattern monitoring

Relationships (Gal Kaminka, Ph.D. dissertation)

Trajectories (Stephan Intille, Ph.D. dissertation)

Emergence (Sviatoslav Braynov, repurposed algorithm concepts)

Technology and Knowledge

Human expertise (Gary Klein, Phillip Ross, Herb Simon)

Biological feedforward hierarchies (Thomas Serre, Ph.D. dissertation)

Parallel pattern processor (Curt Harris, VLSI architecture)

Page 15: UAST and Evolving Systems of Systems in the Age of the Black Swan

Accuracy: Decentralized Beats Centralized Monitoring

“We explore socially-attentive algorithms for detecting teamwork failures under various conditions of uncertainty, resulting from the necessity of selectivity.

We analytically show that despite the presence of uncertainty about the actual state of monitored agents, a centralized active monitoring scheme can guarantee failure detection that is either sound and incomplete, or complete and unsound.

[centralized: no false positives (sound) or no false negatives (complete), not both]

However, this requires monitoring all agents in a team, and reasoning about multiple hypotheses as to their actual state.

We then show that active distributed teamwork monitoring results in sound and complete detection capabilities, despite using a much simpler algorithm. By exploring the agents’ local states, which are not available to the centralized algorithm, the distributed algorithm: (a) uses only a single, possibly incorrect hypothesis of the actual state of monitored agents, and (b) involves monitoring only key agents in a team, not necessarily all team-members (thus allowing even greater selectivity).”

From: Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, p. 6. www.isi.edu/soar/galk/Publications/diss-final.ps.gz.
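
A toy illustration of the distributed result quoted above, in Python; this is not Kaminka's algorithm, and the plan names and key-peer assignments are invented. Each agent knows its own plan state exactly and compares it only with a declared key teammate's, so a disagreement on the jointly executed team plan is flagged directly instead of being inferred from multiple hypotheses at a central monitor.

# Hypothetical sketch of distributed teamwork-failure detection.
# Each agent compares its own (locally known) team plan with a key teammate's.

agents = {
    "scout":    {"team_plan": "wait-for-scout-report", "key_peer": "attacker"},
    "attacker": {"team_plan": "fly-with-scout",        "key_peer": "scout"},
}

def detect_failures(team: dict) -> list[str]:
    """Flag any agent whose team plan disagrees with its key teammate's."""
    failures = []
    for name, info in team.items():
        peer = team[info["key_peer"]]
        if info["team_plan"] != peer["team_plan"]:
            failures.append(
                f"{name} executing '{info['team_plan']}' while "
                f"{info['key_peer']} executes '{peer['team_plan']}'"
            )
    return failures

if __name__ == "__main__":
    for f in detect_failures(agents):
        print("teamwork failure:", f)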

Page 16: UAST and Evolving Systems of Systems in the Age of the Black Swan

Execution Monitoring in Multi-Agent Environments

A key goal of monitoring other agents:
• Detect violations of the relationships that agent is involved in
• Compare expected relationships to those actually maintained
• Diagnose violations, leading to recovery

Motivation for relationship failure-detection:
• Covers a large class of failures
• Critical for robust performance of entire team

Relationship models specify how agents' states are related:
• Formation model specifies relative velocities, distances
• Teamwork model specifies that team plans are jointly executed
• Many others: coordination, mutual exclusion, etc.

Agent Modeling:
• Infer agents' state from observed actions via plan-recognition
• Monitor agents and attributes specified by relationship models

[Diagram: plan-recognition example – the scout is looking for the enemy; the attacker should be correctly waiting for the scout report, but is incorrectly flying with the scout.]

Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, www.isi.edu/soar/galk/Publications/diss-final.ps.gz.
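
A hedged sketch of what a formation relationship model might look like in code (Python; the roles, separations, and tolerances are invented): expected pairwise separations are compared against positions inferred from observation, and violations are reported for diagnosis and recovery.

# Hypothetical sketch of checking a formation relationship model.
# Positions would come from plan recognition / observation; values are invented.

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Expected pairwise separations (meters) with a tolerance, per a formation model.
FORMATION = {
    ("lead", "left_wing"):  (200.0, 50.0),
    ("lead", "right_wing"): (200.0, 50.0),
}

def formation_violations(positions: dict) -> list[str]:
    out = []
    for (a, b), (expected, tol) in FORMATION.items():
        actual = distance(positions[a], positions[b])
        if abs(actual - expected) > tol:
            out.append(f"{a}-{b}: separation {actual:.0f} m, expected {expected:.0f}±{tol:.0f} m")
    return out

if __name__ == "__main__":
    observed = {"lead": (0, 0), "left_wing": (0, 210), "right_wing": (0, 700)}
    for v in formation_violations(observed):
        print("relationship violation:", v)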

Page 17: UAST and Evolving Systems of Systems in the Age of the Black Swan

Identifying Football Play Patterns from Real Game Films

The task of recognizing American football plays was selected to investigate the general problem of multi-agent action recognition.

Stephen Sean Intille, Visual Recognition of Multi-Agent Action, Ph.D. Thesis, MIT, 1999.
http://web.media.mit.edu/~intille/papers-files/thesis.pdf.

This work indicates one method for monitoring multi-agent performance according to plan.

A p51curl play – doesn't happen like the chalk board, but is still recognizable. Chalk board patterns a receiver can run.
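
A rough sketch of the trajectory-matching idea in Python, with an invented two-play "playbook" and invented tracks: observed trajectories are scored against chalk-board templates and the best-scoring play is reported, tolerating the fact that a real play never matches the chalk board exactly.

# Hypothetical sketch of multi-agent play recognition by trajectory matching.
# Playbook templates and observed tracks are invented, 2D points per time step.

import math

PLAYBOOK = {
    "p51curl": {"receiver": [(0, 0), (0, 5), (0, 10), (2, 8)]},   # run out, curl back
    "p52go":   {"receiver": [(0, 0), (0, 5), (0, 10), (0, 15)]},  # straight go route
}

def trajectory_error(observed, template):
    """Mean point-to-point distance between an observed track and a template."""
    pairs = zip(observed, template)
    return sum(math.dist(o, t) for o, t in pairs) / min(len(observed), len(template))

def recognize_play(tracks: dict) -> str:
    scores = {}
    for play, roles in PLAYBOOK.items():
        errs = [trajectory_error(tracks[r], tmpl) for r, tmpl in roles.items() if r in tracks]
        scores[play] = sum(errs) / len(errs)
    return min(scores, key=scores.get)  # lowest average error wins

if __name__ == "__main__":
    observed = {"receiver": [(0, 0), (0.5, 4), (1, 9), (2.5, 7.5)]}
    print("recognized play:", recognize_play(observed))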

Page 18: UAST and Evolving Systems of Systems in the Age of the Black Swan

Maybe Even… Detecting Emergent Behaviors in Process

“In this paper, we studied coordinated attacks and the problem of detecting malicious networks of attackers. The paper proposed a formal method and an algorithm for detecting action interference between users. The output of the algorithm is a coordination graph which includes the maximal malicious group of attackers including not only the executers of an attack but also their assistants. The paper also proposed a formal metric on coordination graphs that help differentiate central from peripheral attackers.”

“Because the methods proposed in the paper allow for detecting interference between perfectly legal actions, they can be used for detecting attacks at their early stages of preparation. For example, coordination graphs can show all agents and activities directly or indirectly related to suspicious users.”

------------------------- conjecture begging investigation -------------------------

This work focused on identifying the members of a group of “perpetrators” among a group of “benigns”, based on their cooperative behaviors in causing an event. It is applied in both forensic analysis and in predictive trend spotting.

It may be a methodology for identifying the conditions of specific emergent behavior after the fact – for “learning” new patterns of future use.

It may also provide an early warning mechanism for detecting emergent aberrant team behavior, rather than aberrant UAS behavior.

Sviatoslav Braynov, Murtuza Jadliwala, Detecting Malicious Groups of Agents. The First IEEE Symposium on Multi-Agent Security and Survivability, 2004.
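
A hedged sketch of the coordination-graph idea in Python; this is not Braynov and Jadliwala's algorithm, and the interference pairs are invented. Agents whose actions interfere are linked, the connected group around a suspicious seed is extracted as the suspected coordinated set, and simple degree centrality separates central from peripheral members.

# Hypothetical sketch of a coordination graph built from pairwise action interference.

from collections import defaultdict

# Invented example: pairs of agents whose actions were found to interfere.
interference_pairs = [("a1", "a2"), ("a2", "a3"), ("a2", "a4"), ("a5", "a6")]

def build_graph(pairs):
    graph = defaultdict(set)
    for u, v in pairs:
        graph[u].add(v)
        graph[v].add(u)
    return graph

def connected_group(graph, seed):
    """All agents reachable from a suspicious seed agent."""
    seen, stack = set(), [seed]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node] - seen)
    return seen

if __name__ == "__main__":
    g = build_graph(interference_pairs)
    group = connected_group(g, "a1")
    # Degree centrality as a crude central-vs-peripheral metric within the group.
    centrality = {a: len(g[a]) for a in group}
    print("suspected coordinated group:", sorted(group))
    print("most central member:", max(centrality, key=centrality.get))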

Page 19: UAST and Evolving Systems of Systems in the Age of the Black Swan

The RPD (Recognition Primed Decision) model offers an account of situation awareness. It presents several aspects of situation awareness that emerge once a person recognizes a situation. These are the relevant cues that need to be monitored, the plausible goals to pursue and actions to consider, and the expectancies. Another aspect of situation awareness is the leverage points. When an expert describes a situation to someone else, he or she may highlight these leverage points as the central aspects of the dynamics of the situation.

Experts see inside events and objects. They have mental models of how tasks are supposed to be performed, teams are supposed to coordinate, equipment is supposed to function. This model lets them know what to expect and lets them notice when the expectancies are violated. These two aspects of expertise are based, in part, on the experts’ mental models.

Gary Klein (1998), Sources of Power: How People Make Decisions, 2nd MIT Press paperback edition, Cambridge, MA, p. 152.

Page 20: UAST and Evolving Systems of Systems in the Age of the Black Swan

Jennifer Kahn, Wayne Gretzky-Style 'Field Sense' May Be Teachable, Wired Magazine, May 22, 2007.

www.wired.com/science/discoveries/magazine/15-06/ff_mindgames#

Gretzky-Style 'Field Sense'

Five seconds of the 1984 hockey game between the Edmonton Oilers and the Minnesota North Stars: The star of this sequence is Wayne Gretzky, widely considered the greatest hockey player of all time. In the footage, Gretzky, barreling down the ice at full speed, draws the attention of two defenders. As they converge on what everyone assumes will be a shot on goal, Gretzky abruptly fires the puck backward, without looking, to a teammate racing up the opposite wing. The pass is timed so perfectly that the receiver doesn't even break stride. "Magic," Vint says reverently. A researcher with the US Olympic Committee, he collects moments like this. Vint is a connoisseur of what coaches call field sense or "vision," and he makes a habit of deconstructing psychic plays: analyzing the steals of Larry Bird and parsing Joe Montana's uncanny ability to calculate the movements of every person on the field.

Page 21: UAST and Evolving Systems of Systems in the Age of the Black Swan

The Stuff of Expertise

Research indicates that human expertise (extreme domain specific sense-making) is primarily a matter of meaningful pattern quantity – not better genes.

According to an interview with Nobel Prize winner Herb Simon (Ross 1998), people considered truly expert in a domain (e.g. chess masters, medical diagnosticians) are thought unable to achieve that level until they’ve accumulated some 200,000 to a million meaningful patterns, requiring some 20,000 hours of purposeful focused pattern development.

The accuracy of their sense making is a function of the breadth and depth of their pattern catalog.

In biological entities, the accumulation of large expert-level pattern quantities does not manifest as slower recognition time.

All patterns seem to be considered simultaneously for decisive action. There is no search and evaluation activity evident.

On the contrary, automated systems, regardless of how they obtain and represent learned reference patterns, execute time-consuming sequential steps to sort through pattern libraries and perform statistical feature mathematics.

This is the nature of the computing mechanisms and recognition algorithms employed in this service.

Philip Ross (1998), "Flash of Genius," an interview with Herbert Simon, Forbes, November 16, pp. 98-104, www.forbes.com//forbes/1998/1116/6211098a.html.

Also: Philip Ross, The Expert Mind, Scientific American, July 2006

Page 22: UAST and Evolving Systems of Systems in the Age of the Black Swan

Rapid visual categorization

Visual input can be classified very rapidly… around 120 msec following image onset… At this speed, it is no surprise that subjects often respond without having consciously seen the image; consciousness for the image may come later or not at all. Dual-task and dual-presentation paradigms support the idea that such discriminations can occur in the near-absence of focal, spatial attention, implying that purely feed-forward networks can support complex visual decision-making in the absence of both attention and consciousness. This has now been formally shown in the context of a purely feed-forward computational model of the primate's ventral visual system (Serre et al., 2007).

www.technologyreview.com/printer_friendly_article.aspx?id=17111

Reverse Engineering the Brain

www.scholarpedia.org/article/Attention_and_consciousness/processing_without_attention_and_consciousness

Page 23: UAST and Evolving Systems of Systems in the Age of the Black Swan

Explaining Rapid Categorization. Thomas Serre, Aude Oliva, Tomaso Poggio. http://cbcl.mit.edu/seminars-workshops/workshops/serre-slides.pdf

Page 24: UAST and Evolving Systems of Systems in the Age of the Black Swan

The Monitoring Selectivity Problem: Unacceptable Accuracy Compromise

“A key problem emerges when monitoring multiple agents: a monitoring agent must be selective in its monitoring activities (both raw observations and processing), since bandwidth and computational limitations prohibit the agent from monitoring all other agents to full extent, all the time.

However, selectivity in monitoring activities leads to uncertainty about monitored agent’s states, which can lead to degraded monitoring performance. We call this challenging problem the Monitoring Selectivity Problem: Monitoring multiple agents requires overhead that hurts performance; but at the same time, minimization of the monitoring overhead can lead to monitoring uncertainty that also hurts performance.

Key questions remain open: What are the bounds of selectivity that still facilitate effective monitoring? How can monitoring accuracy be maintained in the face of limited knowledge of other agents' states? How can monitoring be carried out efficiently for on-line deployment?

This dissertation begins to address the monitoring selectivity problem in teams by investigating requirements for effective monitoring in two monitoring tasks: Detecting failures in maintaining relationships, and determining the state of a distributed team (for both failure detection and visualization).”

From: Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, pp. 3-4. www.isi.edu/soar/galk/Publications/diss-final.ps.gz.

Page 25: UAST and Evolving Systems of Systems in the Age of the Black Swan

Processor Recognition Speed Independent of Pattern Quantity and Complexity

Comparison shows pattern processor’s flat constant speed recognition vs typical computational alternative. Example chosen for ready availability.

Snort chart source: Alok Tongaonkar, Sreenaath Vasudevan, R. Sekar, Fast Packet Classification for Snort by Native Compilation of Rules, Proceedings of the 22nd Large Installation System Administration Conference (LISA '08), USENIX, Nov 9–14, 2008.

www.usenix.org/events/lisa08/tech/full_papers/tongaonkar/tongaonkar_html/index.html

[Chart: nanoseconds per packet (0–4000) vs. number of rules employed (0–600), comparing Snort 2.6 packet-header processing as an interpreter and with the interpreter replaced by native code, using 8 million real packets on a 3.06 GHz Intel Xeon processor; the pattern processor's comparative speed is flat (and unbounded in rule count).]

Processor info source: Rick Dove, Pattern Recognition without Tradeoffs: Scalable Accuracy with No Impact on Speed, To appear in Proceedings of Cybersecurity Applications & Technology Conference For Homeland Security, IEEE, April 2009.

www.kennentech.com/Pubs/2009-PatternRecognitionWithoutTradeoffs-6Page.pdf.

Page 26: UAST and Evolving Systems of Systems in the Age of the Black Swan

Reconfigurable Pattern Processor: Reusable Cells Reconfigurable in a Scalable Architecture

[Diagram annotations:]
• Up to 256 possible features can be "satisfied" by all so-designated byte values
• Independent detection cell: content addressable by current input byte
• If active, and satisfied with the current byte, a cell can activate other designated cells, including itself (cell-satisfaction activation pointers)
• Individual detection cells are configured into feature cell machines by linking activation pointers (adjacent-cell pointers not depicted here)
• All active cells have simultaneous access to the current data-stream byte
• An unbounded number of feature cells configured as feature-cell machines can extend indefinitely across multiple processors
• Cell-satisfaction output pointers
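
The slide describes a VLSI design, so the following is only a behavioral sketch in Python with invented cell contents: every active cell sees the current data-stream byte at the same step, a cell "satisfied" by that byte activates the cells its pointers designate, and a chain of cells acts as a feature-cell machine that reports when its final cell fires.

# Toy behavioral sketch of detection cells linked into a feature-cell machine (FCM).
# Cell contents and the target feature ("UAS") are invented for illustration.

class Cell:
    def __init__(self, satisfied_by, activates, reports=None):
        self.satisfied_by = set(satisfied_by)  # byte values that satisfy this cell
        self.activates = activates             # activation pointers to other cells
        self.reports = reports                 # feature name reported when satisfied

def run_fcm(cells, start, data: bytes):
    active = {start}
    reports = []
    for byte in data:                          # all active cells see each byte together
        satisfied = {c for c in active if byte in cells[c].satisfied_by}
        next_active = {start}                  # keep the entry cell listening
        for c in satisfied:
            if cells[c].reports:
                reports.append(cells[c].reports)
            next_active.update(cells[c].activates)
        active = next_active
    return reports

if __name__ == "__main__":
    # Three chained cells recognizing the byte sequence "U", "A", "S".
    cells = {
        "c0": Cell(b"U", ["c1"]),
        "c1": Cell(b"A", ["c2"]),
        "c2": Cell(b"S", [], reports="feature:UAS"),
    }
    print(run_fcm(cells, "c0", b"...UAS..."))   # -> ['feature:UAS']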

Page 27: UAST and Evolving Systems of Systems in the Age of the Black Swan

Simple Example: Pattern Classification Method Suitable for Many Syntactic, Attributed Grammar, and Statistical Approaches

Additional transforms provide sub-pattern combination logic. Finite Cell Machines, as depicted, could represent sub-patterns or "chunked" features shared by multiple pattern classes. Padded FCM-7 and FCM-n increase feature weight with multiple down counts.

[Diagram: a layered architecture stack (partial conceptual architecture stack) – ½ million detection cells, logical intersection transforms, logical union transforms, threshold counter transforms, output transform pointers, FCM activation pointers, reinitialization transforms, and multiple threshold down counters. A very simple weighted-feature example: configured FCMs (FCM-1 through FCM-n, machines M1 through Mn) drive class down counters (Class-1 through Class-4) via output pointers; weighted features (Weight=2, Weight=3) decrement counters by multiple counts, and classification output to output registers P, R, S, and T occurs for any down counter reaching zero.]

On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
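
A small sketch of the classification layer just described, in Python with invented features, weights, and thresholds: each detected sub-pattern (FCM output) decrements the down counters of the classes it is wired to, by its weight, and a class is declared the moment its counter reaches zero.

# Toy sketch of weighted-feature classification with threshold down counters.
# Feature-to-class wiring, weights, and thresholds are invented for illustration.

# counter start value per class = total weight required to classify
CLASS_THRESHOLDS = {"class-1": 4, "class-2": 3}

# which class counters each detected feature decrements, and by how much (weight)
FEATURE_WIRING = {
    "fcm-1": {"class-1": 1},
    "fcm-2": {"class-1": 1, "class-2": 1},
    "fcm-7": {"class-1": 2, "class-2": 2},   # padded FCM: multiple down counts
}

def classify(detected_features):
    counters = dict(CLASS_THRESHOLDS)
    for feature in detected_features:
        for cls, weight in FEATURE_WIRING.get(feature, {}).items():
            counters[cls] -= weight
            if counters[cls] <= 0:
                return cls                    # output fires when a counter reaches zero
    return None

if __name__ == "__main__":
    print(classify(["fcm-2", "fcm-7"]))       # -> class-2
    print(classify(["fcm-1"]))                # -> None (no counter reached zero)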

Page 28: UAST and Evolving Systems of Systems in the Age of the Black Swan

Value-Based Feature Example

A reference pattern example for behavior-verification of a mobile object. Is it traveling within the planned space/time envelope?

Using GPS position data: Latitude, Longitude, Altitude.

Output: F = failure, S = success

On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf

[Diagram: feature ranges for minimum separation, absolute LAT/LON/ALT, and relative LAT/LON/ALT, each mapped onto a linear, log, or other scale of 256 distance values showing acceptable ranges of values; an FCM configured to classify failure/success outputs F F S for the example LAT/LON/ALT readings.]
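
A hedged sketch of the value-based feature above in Python (the envelope values and time windows are invented): latitude, longitude, and altitude are each tested against the acceptable range for the current point in the planned space/time envelope, and each field is reported as S (success) or F (failure).

# Toy sketch of behavior verification against a planned space/time envelope.
# The envelope below (acceptable LAT/LON/ALT ranges per time window) is invented.

ENVELOPE = {
    # time window (s) : acceptable (min, max) per GPS field
    (0, 60):   {"lat": (34.00, 34.10), "lon": (-106.60, -106.50), "alt": (1200, 1800)},
    (60, 120): {"lat": (34.10, 34.20), "lon": (-106.50, -106.40), "alt": (1500, 2200)},
}

def verify(t: float, lat: float, lon: float, alt: float) -> dict:
    """Return 'S' or 'F' per field for the envelope window containing time t."""
    for (t0, t1), ranges in ENVELOPE.items():
        if t0 <= t < t1:
            reading = {"lat": lat, "lon": lon, "alt": alt}
            return {f: "S" if lo <= reading[f] <= hi else "F"
                    for f, (lo, hi) in ranges.items()}
    return {"lat": "F", "lon": "F", "alt": "F"}   # outside any planned window

if __name__ == "__main__":
    print(verify(30, 34.05, -106.55, 1500))   # {'lat': 'S', 'lon': 'S', 'alt': 'S'}
    print(verify(90, 34.05, -106.55, 2500))   # all 'F': outside the 60-120 s envelope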

Page 29: UAST and Evolving Systems of Systems in the Age of the Black Swan

Example: Monitoring Complex Multi-Agent Behaviors

Packetized data can use multi-part headers to activate appropriate reference pattern sets for different times

Output: F = failure, S = success

On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf

[Diagram: packets carrying UAS ID 001.002 and Task ID 003.018 (UAS 1002 on task 3018) activate FCM-49, while packets carrying UAS ID 001.002 and Task ID 003.002 (UAS 1002 on task 3002) activate FCM-50; each FCM checks LAT/LON/ALT and outputs F F F S for the example readings.]
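
A brief sketch of the dispatch idea in Python, with IDs mirroring the diagram and stand-in checks in place of the FCMs: the multi-part packet header (UAS ID, Task ID) selects which reference pattern set applies at that point in the mission, so the same telemetry stream is judged against different envelopes for different tasks.

# Toy sketch: multi-part packet headers select the active reference pattern set.
# UAS/task IDs mirror the diagram; the envelope checks themselves are stand-ins.

def check_task_3018(lat, lon, alt):        # stand-in for FCM-49
    return "S" if 1000 <= alt <= 2000 else "F"

def check_task_3002(lat, lon, alt):        # stand-in for FCM-50
    return "S" if 2000 <= alt <= 3000 else "F"

# (UAS ID, Task ID) -> reference pattern set active for that UAS on that task
PATTERN_SETS = {
    ("001.002", "003.018"): check_task_3018,
    ("001.002", "003.002"): check_task_3002,
}

def classify_packet(packet: dict) -> str:
    checker = PATTERN_SETS.get((packet["uas_id"], packet["task_id"]))
    if checker is None:
        return "F"                          # no reference pattern for this pairing
    return checker(packet["lat"], packet["lon"], packet["alt"])

if __name__ == "__main__":
    pkt = {"uas_id": "001.002", "task_id": "003.018",
           "lat": 34.05, "lon": -106.55, "alt": 1500}
    print(classify_packet(pkt))             # 'S' while UAS 1002 flies task 3018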

Page 30: UAST and Evolving Systems of Systems in the Age of the Black Swan

Hybrid Adaptation Could Improve on Natural Systems

Nature has sufficient, but not necessarily optimal, systems – one example:

Evolutionary (applies to populations):
• Outcome: Produces new design features.
• Time for one loop to execute: Period between generations – generally slow compared to timescale of actions.
• Parallelism of processing through interaction: Highly parallel – every member of the population is a simultaneous experiment 'evaluating' the fitness of one set of variations.
• Context sensitivity: In retrospect only – through some variations turning out to be fitter in the context than others.
• Alignment of fitness and selection mechanism: 100%

Learning (applies to individuals):
• Outcome: Improves use of fixed design.
• Time for one loop to execute: Period for one action (sense-process-decide-act) loop, plus the associated learning (observe action consequences – process – make changes) loop.
• Parallelism of processing through interaction: Serial – an individual system or organism experiments with one strategy at a time.
• Context sensitivity: In anticipation – i.e. before choice of action or response, as well as in retrospect through feedback from consequences of action.
• Alignment of fitness and selection mechanism: Highly variable.

Hybrid or Augmented (applies to either):
• Outcome: May be able to do both, or do either better.
• Time for one loop to execute: Could be accelerated.
• Parallelism of processing through interaction: Could use learning mechanism to create directed evolution, and evolutionary strategies to improve learning. Could also parallelize learning through either parallel processing in a single individual, or through networking a population of learning systems.
• Context sensitivity: Could extend context sensitivity to influence design choices as well as action choices.
• Alignment of fitness and selection mechanism: Could improve alignment in learning systems by developing better proxies for fitness to drive selection.

Grisogono, A.M. "The Implications of Complex Adaptive Systems Theory for C2." Proceedings of the 2006 Command and Control Research and Technology Symposium, 2006, www.dodccrp.org/events/2006_CCRTS/html/papers/202.pdf

Page 31: UAST and Evolving Systems of Systems in the Age of the Black Swan

Related Implications and Points

T&E cannot be limited to pre-deployment – it must be an ongoing, never-ending activity built into the SoS operating methods.

LVC – Put the tester into the environment – total VR immersion – as a player with intervention capability (the ultimate driving machine). Humans will “see” experientially and recognize things in real-time that forensics and remote data analysis will not recognize.

These things we build are not children that we can watch and guide and correct. They need to have a sense of ethics and principles that inform unforeseen situational response.

The biological “expertise” pattern recognition capability needs to exist in both the testing environment and on-board. We are building intelligent willful entities that carry weapons.

Page 32: UAST and Evolving Systems of Systems in the Age of the Black Swan

Status Q1 2010

• Kaminka's Socially Attentive Monitoring examples are modeled.

• Intelle’s trajectory recognition modeling was started, another approach is wip.

• Serre’s feedforward hierarchy image recognition Level 1 is modeled.

These algorithm models reside with others in a wiki investigating collaborative parallel-algorithm development.

A processor emulator/compiler exists for algorithm modeling.

• One defense contractor already working on classified project.

• VLSI availability ETA Q1 2012.

• ~128,000 feature cells expected for first generation modules.

• Chips can be combined for unbounded scalability.

Pursuits of interesting problems to attack with this new capability…

• x Inc: Collision avoidance in cluttered airspace.

• PSI Inc: Distributed anomaly detection, and hierarchical sensemaking

• OntoLogic LLC: Secure software code verification

This work was supported in part by the U.S. Department of Homeland Security award NBCHC070016.

Page 33: UAST and Evolving Systems of Systems in the Age of the Black Swan

Aberrant behavior will not be tolerated!