[IEEE 2007 IEEE 66th Vehicular Technology Conference - Baltimore, MD, USA (2007.09.30-2007.10.3)]


A Combined Biologically and Socially Inspired Approach to Mitigating Ad Hoc Network Threats

Jimmy McGibney, Dmitri Botvich, Sasitharan Balasubramaniam Telecommunications Software & Systems Group

Waterford Institute of Technology {jmcgibney, dbotvich, sasib}@tssg.org

Abstract—This paper describes a collaborative approach to handling dynamic attack threats in mobile ad hoc networks. Our approach is biologically and socially motivated. Each network node maintains a trust score for each other node of which it is aware and distributes these to its neighbourhood. Services have associated trust thresholds – the more sensitive the service, the higher the threshold. We define a robust decentralised dynamic system involving nodes, services and trust scores that helps to quickly and reliably locate potential sources of attacks and their threat level. The paper presents results of simulations of the behaviour of the system’s dynamics and its interpretation in the context of ad hoc networks.

Keywords—biologically inspired computing; distributed trust models; intrusion detection; network security.

I. INTRODUCTION

Established approaches to security are often impractical in mobile, ad hoc, or sensor network situations. It cannot be assumed, for example, that all participants have access to centralised resources such as a public key infrastructure to validate credentials like digital certificates.

We present here a combined biologically and socially inspired approach to collaborative intrusion detection systems (IDSs). The techniques used for the distribution of attack-related information and the updating of threat levels using direct experience are biologically motivated. The distribution of reputation-type information is socially motivated. The attack threat level itself is interpreted using so-called trust scores.

Consider, for example, a mobile ad hoc network on which peer-to-peer services such as voice, video, instant messaging and file sharing are available. Having such services enabled on a node may expose that node to various types of attack, even if it has good protection mechanisms in place and is up to date with patches, etc. New exploits are common, attackers are adept at finding ways around protection mechanisms, and it is easy for users to misconfigure systems and services. Thus, it is generally wise to deploy some IDS capability on exposed systems.

In the system proposed here, such loosely connected nodes collaborate to isolate sources of attack. They do this by sharing trust information about each other. Node A, say, detects an attempted attack or service misuse by node B and notifies peer nodes C and D of this. Nodes C and D consequently reduce their respective measures of trust in node B, causing access from node B to be blocked.

For the purposes of this work, we adopt a two-layer model for communications between nodes. Nodes can either interact for service usage [1] or to exchange trust information. For modelling purposes, each service usage interaction is a discrete event. A logically separate trust management overlay handles threat notifications and other pertinent information.

How trust information is shared and how it is handled by individual nodes is a major focus of this paper. This needs to take into account the risk of some “bad” nodes attempting to corrupt the system, possibly in collusion with each other.

The remainder of this paper is organised as follows. Section II describes the essence of our trust based approach to handling dynamic threats. Section III follows this by specifying in more detail how to realise such a system. Simulation results are presented in section IV. Section V discusses related work and section VI concludes the paper.

II. TRUST-BASED APPROACH TO INTRUSION DETECTION

A. Architecture

We adopt a two-layer model for communications between nodes. Fig. 1 illustrates the relationship between underlying services and this new infrastructure. There are four types of interface. Within the service usage layer, nodes interact to use services on each other. Nodes may choose to locally monitor service usage by other nodes, for example using an IDS. Relevant events are sent from the node to its access controller (“usage events” in Fig. 1) to allow access protections to be updated, possibly in reaction to suspicious activity. The third type of interface is between the access controllers themselves when they decide to share trust information (“trust updates”)


Figure 1. Trust overlay helps to secure usage of services in ad hoc network.

1-4244-0264-6/07/$25.00 ©2007 IEEE


with each other. Finally, updated trust measures computed by the access controller (based on both direct experience reported from service usage and recommendations from other nodes) are fed back to the service usage engine in the form of updated access rules (“update protections” in Fig. 1).

The service usage interface is not of direct interest as it is service-specific and thus independent of the trust-based overlay.

B. Socially inspired representation of trust information

1) Social motivation: In the real world, when people interact with each other, well-evolved social and commercial standards and structures help to provide assurance of orderly behaviour. As well as the subtleties of real personal contact, social systems also rely on third parties to help assess trustworthiness. Various respected bodies issue credentials to help with verification of identification, creditworthiness, access to restricted locations, and so on. Fuzzier personal recommendations and reviews are also of considerable value.

2) Trust as a vector: Rather than having binary off/on trust, we more accurately mimic social trust by setting a fuzzy trust level. We model trust as a vector. In the simplest case, where there is just one service, this is a simple number in the range (0,1). This number could be associated with other attributes, such as confidence in the trust score or its recency. If node A’s trust in node B is 1, then A fully trusts B. Note that this is consistent with Gambetta’s definition of trust as “a particular level of the subjective probability with which an agent assesses that another agent … will perform a particular action” [2].
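A per-service trust vector as described above might be represented as follows. This is a minimal sketch: the class and method names are illustrative, not from the paper, and the default score of 0.25 is borrowed from the simulations in section IV.

```python
# Sketch of a per-service trust vector: one fuzzy trust score per
# service, kept strictly inside the open range (0, 1).
# Names (TrustVector, get, set) are illustrative assumptions.
class TrustVector:
    def __init__(self, services, default=0.25):
        self.scores = {s: default for s in services}

    def get(self, service):
        return self.scores[service]

    def set(self, service, value):
        # Clamp so scores stay strictly inside (0, 1).
        eps = 1e-6
        self.scores[service] = min(1.0 - eps, max(eps, value))

tv = TrustVector(["voice", "file_sharing"])
tv.set("voice", 1.2)  # clamped to just below 1.0
```

In the single-service case this degenerates to the simple number in (0, 1) described in the text; further attributes such as confidence or recency could be attached alongside each score.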

3) Default trust: In the initial case, all trust values are set to a low value. Having initial values set to zero would prevent the so-called Sybil attack [3], whereby attackers can take advantage of a default trust level by maintaining multiple identities. This is impractical, though, as we need to have some low-risk services enabled in order to gain experience of other nodes. Nodes can be expected to set default trust at a value that is sufficient to allow some limited access by new nodes but that restricts access to sensitive services.

C. Biologically inspired mechanism for updating trust

1) Biological inspiration: We use a combination of Reaction-Diffusion and Quorum Sensing to define quite rich and interesting trust dynamics.

Reaction-Diffusion [4] is a concept devised by Alan Turing as a computational model to describe self-organisation. We use a more generic interpretation. The diffusion process allows the diffusion of node states (e.g. trust score) from neighbouring nodes, and the reaction process involves updating of a node state depending on its previous states and neighbourhood node states. The reaction and diffusion mechanisms are invoked in a peer-to-peer fashion between nodes and their neighbours, where the changes can slowly propagate through the network.

Quorum Sensing is a biological phenomenon that is manifested through synchronised behaviour of cells. For our purposes, quorum sensing is interpreted as similar to voting: a node updates its own state by using some sort of averaging procedure for node states in its neighbourhood. This technique has inspired other work on mobile ad hoc networks (e.g. [5]).

2) Updating trust: We outline here how trust scores are updated in our system using these bio-inspired mechanisms:

Direct experience. On completion of a service usage event between two nodes, each node updates its trust in the other based on a measure of satisfaction with the usage event.

Reputation. Node A notifies other nodes in its neighbourhood of the trust score that it has for node B. This will change significantly following a security-related event.

How this neighbourhood is defined is significant. The neighbourhood of a node is the set of nodes with which it can communicate or with which it is willing to interact. The choice of neighbourhood nodes is up to each individual node to decide, though these collaborations will normally be two-way and based on topology (which in a wireless environment usually relates to location). In our simulations, we distribute nodes on a plane and define neighbourhoods based on distance, so nodes share information with nearby nodes.
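The distance-based neighbourhoods used in the simulations can be sketched as follows. The radius value and node coordinates here are made up for illustration; the paper does not fix them.

```python
import math

# Sketch of distance-based neighbourhoods: nodes are placed on a plane
# and a node shares trust information with the nodes within some radius.
def neighbourhood(node_id, positions, radius):
    x0, y0 = positions[node_id]
    return {
        other
        for other, (x, y) in positions.items()
        if other != node_id and math.hypot(x - x0, y - y0) <= radius
    }

# Illustrative layout: node 3 is too far away to be node 1's neighbour.
positions = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (5.0, 5.0)}
```

Since each node chooses its own neighbourhood, the radius (or indeed the whole membership rule) could differ per node, though in practice collaborations will normally be two-way as the text notes.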

D. Using this trust information (closing the loop)

The main benefit is in using these trust scores to tune security settings. Trusted nodes can be dynamically provided with more privileges than untrusted nodes. In our system, as mentioned, we model all interaction between nodes in terms of services. Each node then sets a threshold trust level for access to each service in which it participates. If the trust score of a node decreases, for example due to detected suspicious activity by that node, the services available to that node are reduced.

III. REALISING THE SYSTEM: PROTOCOL AND ALGORITHMS

A. Assumptions

We model the system in terms of those nodes of which a specific node is aware. We assume that neighbour discovery and routing services are in place, and that some kind of service registration and discovery is available to allow nodes to reach an understanding of the set of services available and their associated trust thresholds. We also assume authentication of identity. This could be done, for example, by having nodes exchange public keys on their first interaction (to be used as unique node identifiers). Further messages between the nodes could then be signed with the sender’s corresponding private key.

B. General model

Consider the set of nodes of which node i is aware. A network topology defines the set of neighbours of node i. Each node provides a set of services. Each service has an associated trust threshold.

1) Trust decision: When node j attempts to use a particular service provided by node i, service use is permitted if node i's trust in node j exceeds the trust threshold pertaining to that service. Otherwise node j is blocked from using the service.
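The trust decision reduces to a simple threshold comparison. The sketch below uses the three-service thresholds from the examples in section IV; the function name is an assumption.

```python
# Trust decision rule: node i permits node j to use a service iff
# i's trust in j exceeds that service's trust threshold.
def access_allowed(trust_in_j, service, thresholds):
    return trust_in_j > thresholds[service]

# Threshold values as in the multi-service simulations (section IV).
thresholds = {"A": 0.2, "B": 0.5, "C": 0.9}
```

With a default trust score of 0.25, a newly met node can use service A but is blocked from B and C until it earns more trust.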

2) Trust update following service usage: After a service usage event by node j on node i, if the usage is judged to be benign, i's trust in j is increased according to some algorithm. If the usage is judged to be malicious, i's trust in j is decreased



according to some algorithm.

3) Trust update following recommendation by a third party: Node i may receive a message from a third-party node, k, indicating a level of trust in node j. Following such a third-party recommendation, node i updates its trust vector for j. This trust transitivity depends on i’s trust in k, as node i can be expected to attach more weight to a recommendation from a highly trusted node than to one from a less trusted node.
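Both update rules can be sketched with exponential averaging, the algorithm used in the simulations of section IV. Weighting the recommendation step by i's trust in the recommender k is one plausible reading of the trust-transitivity described above, and the smoothing constants alpha and beta are made-up values, not from the paper.

```python
# Direct experience: blend the old trust score with the satisfaction
# measure of the latest service usage event (exponential average).
def update_direct(trust, satisfaction, alpha=0.1):
    return (1 - alpha) * trust + alpha * satisfaction

# Third-party recommendation: a recommendation from a highly trusted
# node k moves i's trust in j further than one from a barely trusted k.
def update_from_recommendation(trust_in_j, recommended, trust_in_k, beta=0.1):
    w = beta * trust_in_k
    return (1 - w) * trust_in_j + w * recommended
```

Note that exponential averaging needs no history to be stored: only the current score is kept, which is one reason the paper favours it over moving averages.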

C. Some candidate algorithms for trust update

The way trust is updated based on both experience and recommendations has a profound impact on the usefulness of this kind of overlay system. Note that nodes are autonomous in our model and each might implement a different algorithm for updating and using trust. It can also be expected that a node’s behaviour in terms of handling trust may change if it is hijacked. Candidate trust update algorithms include:

Moving average: Advanced moving averages are possible, where old data is “remembered” using data reduction and layered windowing techniques.

Exponential average: Exponential averaging is a natural way to update trust as recent experience is given greater weight than old values, and no memory is required in the system, making it more attractive than using a moving average.

No forgiveness: This is a draconian policy where a node behaving badly has its trust set to zero forever. Even more extreme is where a node that is reported by a third party as behaving badly has its trust set to zero forever. This could perhaps be used if a particularly sensitive service is misused.

Second chance (generally, nth chance): Draconian policies are generally not a good idea. IDS and other security systems are prone to false alarms. A variation on the “no forgiveness” approach is to allow some bounded number of misdemeanours.

Hard to gain trust; easy to lose it: To discourage collusion, there is a case for making trust hard to gain and easy to lose.

Use of corroboration: To prevent an attack by up to k colluding bad nodes, we could require positive recommendations from at least k+1 different nodes.
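Two of the candidate policies above can be sketched directly; the function names and the default of n = 2 misdemeanours are illustrative assumptions.

```python
# "nth chance": tolerate a bounded number of misdemeanours, then set
# trust to zero permanently (the n = 1 case is "no forgiveness").
def nth_chance(trust, misdemeanours, n=2):
    return 0.0 if misdemeanours >= n else trust

# Corroboration: to defeat up to k colluding bad nodes, only accept a
# positive view of a node when more than k distinct nodes vouch for it.
def corroborated(positive_recommenders, k):
    return len(set(positive_recommenders)) >= k + 1
```

The bounded-misdemeanour counter gives some robustness to IDS false alarms, while the corroboration rule directly targets the collusion scenario examined in section IV.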

D. A protocol for trust-based collaboration

Collaboration in our system is by the issuing of trust recommendations. Certain rules are required to make this workable in large-scale distributed environments. The protocol is designed to allow individual nodes to control the extent to which they participate in the recommendations system and to prevent overloading with information of little relevance. The following is an outline of our protocol:

• Access controllers may issue recommendations spontaneously (e.g. on occurrence of some event that might indicate an attack) or in response to a request from another node. A request may be issued by one trust manager to another, but a reply is not guaranteed. Message passing is asynchronous and the protocol requires no state information to be maintained by the corresponding entities.

• A getTrust message allows one node to ask another to report its trust in a third node. The receiving node may respond with a trust report if it wishes, or it may remain silent, so that the system is not burdened with redundant messaging.

• A trustReport allows a node to send a recommendation. A trustReport may be issued either spontaneously or in response to a getTrust or related message or, if desired, on a specific service usage experience or change in trust level.

• Another message, setTrustReportingPreferences, allows a node to specify to others whether and how it wishes to receive spontaneous trust reports. This could specify that trust reports are only sent on significant events.
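The three message types above can be sketched as plain records. The paper does not fix a wire format, so the field names here are assumptions; only the message names (getTrust, trustReport, setTrustReportingPreferences) come from the protocol outline.

```python
from dataclasses import dataclass

@dataclass
class GetTrust:
    sender: str
    subject: str          # node whose trust score is being asked about

@dataclass
class TrustReport:
    sender: str
    subject: str
    trust: float          # sender's trust in subject, in (0, 1)

@dataclass
class SetTrustReportingPreferences:
    sender: str
    spontaneous: bool     # e.g. only send reports on significant events

# A node may answer a getTrust with a trustReport, or remain silent;
# the protocol is asynchronous and no reply is guaranteed.
def handle_get_trust(msg, my_trust, willing=True):
    if not willing:
        return None  # silence is permitted
    return TrustReport(sender="me", subject=msg.subject,
                       trust=my_trust.get(msg.subject, 0.25))
```

Because replies are optional and no per-request state is kept, a node can throttle or ignore requests without breaking its peers, which matches the statelessness requirement above.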

IV. EXPERIMENTS, RESULTS AND ANALYSIS

In this section, we present and discuss the results of simulations to evaluate the stability of this system and its effectiveness in detecting attacks, including the incidence of false alarms. In particular, we examine the convergence and stability of trust scores, the impact of having multiple services with different trust requirements, the influence of topology, the effects of node mobility and the potential for attackers to beat the system through active collusion.

A. Convergence and stability of trust scores

In our first simulation, we have a simple fully connected network of twenty nodes and a single service. A default initial trust score of 0.25 is used for illustration purposes (to distinguish the bad from the unknown) and the trust updating algorithm is a simple exponential average for both direct experience and recommendations. Of the twenty nodes, two are bad, six have mixed behaviour (modelling poorly configured nodes, perhaps) and the remainder are good.

Fig. 2 illustrates the convergence to fairly stable ranges of the trust levels that one node (‘node 12’) records in various other nodes. Recall that our model is based on subjective trust and can only be viewed from the perspective of a particular node. As the network is fully connected, the view of the other eleven good nodes will be almost identical. In a multi service environment, it can be expected that the node marked as ‘good’ in Fig. 2 will be able to interact with node 12 for a wide range of services, and that marked as ‘bad’ for few or no


Figure 2. Trust of node 12 in a selection of other nodes in a 20-node network.



services.

B. Multiple services with different trust requirements

We now consider the situation where there are multiple services with different trust thresholds. For these simulations we have just three services, A, B, and C, with quite distinct trust thresholds: 0.2, 0.5, and 0.9 respectively.

We again have a simple fully connected network of twenty nodes. With a default initial trust score of 0.25, all nodes will have initial access to service A only and need to earn trust before they are allowed access to services B and C. In Fig. 3, of the twenty nodes, five are bad and the remainder are good.

We initially consider a fully connected network – i.e. every node has every other node as a neighbour. In Fig. 3, as well as in Figs. 5 and 6, we plot the percentage of “good usage allowed”, that is, the percentage of service usage attempts by good nodes that succeed. We also plot the percentage of “bad usage allowed”, that is, the percentage of service usage attempts by bad nodes that succeed. Ideally, 100% of good usage attempts and 0% of bad ones should succeed.

We also model the fact that IDSs do not always get it right. Occasionally an attack goes unnoticed (false negative) or benign activity raises an alarm (false positive). We use a simple Gaussian (normal) distribution to model the characteristics of service usage that are being monitored by the IDS. In the case of good nodes, a number is tagged onto each service usage using a normal distribution with mean 2.0 and standard deviation 2.0. In the case of bad nodes, a number is attached to the service usage using a normal distribution with mean –2.0 and standard deviation 2.0. The sign of this number is then used to “detect” whether there has been an attack attempt.
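The IDS model just described can be sketched as follows: each usage event gets a score drawn from N(+2, 2) for good nodes or N(−2, 2) for bad nodes, and the sign of the score is the "detection". The function name and seed are illustrative.

```python
import random

# Each event's monitored characteristic is Gaussian; a negative score
# is flagged as an attack. Negative draws from good nodes are false
# positives; positive draws from bad nodes are false negatives.
def detected_as_attack(is_bad_node, rng):
    mean = -2.0 if is_bad_node else 2.0
    return rng.gauss(mean, 2.0) < 0.0

rng = random.Random(42)
false_positives = sum(detected_as_attack(False, rng) for _ in range(10000))
# With mean 2 and standard deviation 2, roughly 16% of good events
# raise a false alarm (the probability that N(2, 2) falls below zero).
```

By symmetry, bad nodes evade detection on roughly 16% of their events, so the trust dynamics must tolerate a substantial error rate in individual detections.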

The system works very well after trust has converged. When trust has been properly established, even with 25% of nodes behaving maliciously, all attempts by good nodes to gain access are allowed and the bad nodes are blocked. It is evident from the plateaus in Fig. 3 how the good nodes initially can mostly access just service A, then both A and B, and finally all services.

C. Influence of network topology and neighbourhood size

In the examples shown above, the neighbourhood of a node is defined as containing every other node (i.e. a fully connected network). Ad hoc networks are of course often less well connected, and thus it is useful to study topological effects.

Fig. 4 shows how convergence of the trust levels that node 12 maintains in various other nodes changes when there is a sparser topology. In the topology used for this simulation, nodes are randomly distributed on a plane and connectedness depends on distance. Nodes 19 and 20 are “bad” nodes and the remaining nodes are “good”. Topology affects both the speed of convergence and the value to which the trust score converges. Trust in node 5 remains low even though this is a well behaved node, because all interaction with node 5 is via node 19, a bad node. Trust in node 1 converges quickly as it is in node 12’s neighbourhood whereas trust in node 17 is slower to converge, though it will eventually reach the same range.

D. Influence of mobility

Here, we observe the effects of moving a single node. The difference is most marked if moving the node brings it into or out of contact with bad nodes. In the example shown in Fig. 5, node 5 is moved slowly. Before node 5 moves, nothing can be done about its low trust score as it is only accessible via a bad node. On moving, though, it quickly gains trust through good behaviour and recommendations, though this falls off when it again comes into close contact with malicious nodes.

E. Security and attacker collusion

Attacker interaction with the recommendations system is of particular interest, especially collusion between bad nodes to attempt to artificially raise their trust scores. Fig. 6 compares the success rate of bad nodes in attempting to gain access to services when they act independently with when they act in collusion. Without collusion, each bad node, as well as behaving badly when granted service access, sends out low trust recommendations about all nodes. We model collusion as where the bad nodes are aware of each other and send out high trust recommendations about each other and low ones about the other (good) nodes. Not surprisingly, they are more effective at gaining access when actively colluding.

V. RELATED WORK

There are as yet few widely used truly distributed trust systems. Several online marketplaces, social networks and review websites use reputation and ratings systems to give a measure of trust, but these mostly depend on some centralised storage and management. One example of a well-established


Figure 3. % good usage allowed and % bad usage allowed, aggregated over all three services: 15 good nodes, 5 bad nodes, fully connected topology


Figure 4. Trust score convergence varies due to topological factors



and working distributed trust system is Pretty Good Privacy (PGP). The reader is referred to [6] for a comprehensive survey of trust and reputation systems.

Issues of trust and security in ad hoc and peer to peer networks have been under consideration for some time (e.g. [7], [8]) and a fairly substantial body of published work exists.

Much existing work on securing ad hoc networks is focused on the protection of routing. Some of this work is focused on specific protocols. For example, the secure ad hoc on-demand distance vector (SAODV) routing protocol, initially proposed in [9], extends the AODV routing protocol to provide integrity and authentication. This and several other protocols (e.g. [10], [11]) rely on cryptographic methods that depend on centralised trust or some kind of key exchange system.

More recent work has taken a collaborative trust-based approach. Yang et al, for example, in [12] build on their work in [13] to propose a collaborative solution that is applicable to a variety of routing and packet forwarding protocols. This system is reactive, dealing dynamically with malicious nodes. Jensen and O’Connell [14] have also proposed trust-based route selection in dynamic source routing (DSR). Each router is assigned a trust score based on past experience, and the trustworthiness of a candidate path is a function of that of the routers that make up that path.

Several people have also examined aspects of trust and security in access to more general services in ad hoc and peer to peer networks – e.g. [15][16][17].

VI. CONCLUSIONS AND FURTHER WORK

We have described a robust decentralised dynamic system involving nodes, services and trust scores that helps to quickly and reliably isolate sources of attack and restrict their access. Simulation results presented in section IV are encouraging and demonstrate the dynamics of this kind of system.

Significant further work is required. Our simulations to date have just used exponential averaging and a simple random technique for sharing trust information. There is scope to refine the dynamics of the system. New algorithms need to be developed and evaluated for trust updates. A protocol is required to specify how and when trust information is shared between nodes. Further experiments are needed to explore the effects of how node neighbourhoods are defined. Further possible collusion strategies of bad nodes need to be examined, and also incentives for using this kind of system.

REFERENCES

[1] J. McGibney, N. Schmidt, and A. Patel, "A service-centric model for intrusion detection in next-generation networks," Computer Standards & Interfaces, pp. 513-520, June 2005.

[2] D. Gambetta, “Can we trust trust?” D. Gambetta (Ed.), Trust: making and breaking cooperative relations, pp 213-237, Blackwell, 1988.

[3] J. Douceur, “The Sybil attack,” Proc. Int’l Workshop on Peer-to-Peer Systems, March 2002.

[4] A. M. Turing, “The chemical basis of morphogenesis,” Philos. Trans. R. Soc. London, vol. 237, pp. 37-72, 1952.

[5] M. Peysakhov, C. Dugan, P. Modi, and W. Regli, “Quorum sensing on mobile ad-hoc networks,” Proc. Conf. on Autonomous Agents and Multiagent Systems, May 2006.

[6] A. Jøsang, R. Ismail, C. Boyd, "A survey of trust and reputation systems for online service provision," Decision Support Systems, to appear.

[7] F. Stajano and R. Anderson, "The resurrecting duckling: security issues for ad-hoc wireless networks," Workshop on Security Protocols, April 1999.

[8] L. Zhou and Z. Haas, "Securing ad hoc networks," IEEE Network, Nov./Dec. 1999.

[9] M. Guerrero Zapata and N. Asokan, "Securing ad hoc routing protocols," Proc. ACM Workshop on Wireless Security (WiSe), Sept. 2002.

[10] Y. Hu, D. Johnson, and A. Perrig, “SEAD: Secure efficient distance vector routing for mobile wireless ad hoc networks,” Proc. IEEE Workshop on Mobile Computing Systems and Applications, June 2002.

[11] P. Papadimitratos and Z. Haas, “Secure routing for mobile ad hoc networks,” Proc. Communication Networks and Distributed Systems Modeling and Simulation Conference (CNDS), Jan. 2002.

[12] H. Yang, J. Shu, X. Meng, and S. Lu, “SCAN: self-organized network-layer security in mobile ad hoc networks,” IEEE JSAC, Feb. 2006.

[13] H. Yang, X. Meng, and S. Lu, “Self-organized network-layer security in mobile ad hoc networks,” Proc. ACM Workshop on Wireless Security (WiSe), Sept. 2002.

[14] C. Jensen and P. O'Connell, "Trust-based route selection in dynamic source routing," Proc. 4th International Conference on Trust Management (iTrust), LNCS 3986, May 2006.

[15] V. Cahill, et al., "Using trust for secure collaboration in uncertain environments," IEEE Pervasive Computing, July-Sept. 2003.

[16] R. Handorean and G-C. Roman, "Secure service provision in ad hoc networks,” Proc. Int'l Conf. on Service Oriented Computing, Dec. 2003.

[17] V. Gligor, “Emergent properties in ad-hoc networks: a security perspective,” Proc. ASIACCS, March 2006.


Figure 5. Node 5 is first viewed by node 12 as untrustworthy as it has only bad information (from node 19) to rely on. This changes as node 5 moves.


Figure 6. % bad usage allowed where nodes actively collude compared with the same measure where nodes behave badly but do not work together.
