

Anomaly Detection in Cybersecurity
Maria Inês Pinto Bastos Martins
Mestrado em Ciência de Dados
Departamento de Ciência de Computadores
2020

Orientador: Luís Filipe Coelho Antunes, Professor Catedrático, Faculdade de Ciências da Universidade do Porto

Coorientador: João Miguel Maia Soares de Resende, Assistente Convidado, Faculdade de Ciências da Universidade do Porto


Todas as correções determinadas pelo júri, e só essas, foram efetuadas.

O Presidente do Júri,

Porto, ______/______/_________


UNIVERSIDADE DO PORTO

MASTER'S THESIS

Anomaly Detection in Cybersecurity

Author: Maria Inês Pinto Bastos Martins
Supervisor: Luís Antunes
Co-supervisor: João Resende

A thesis submitted in fulfilment of the requirements for the degree of MSc. Data Science at the Faculdade de Ciências da Universidade do Porto, Departamento de Ciência de Computadores

January 16, 2021


Acknowledgements

Above all, I would like to thank my supervisors, Professor Luís Antunes and João Resende, for their support, guidance, and encouragement to study an important application where two of the most promising fields, cybersecurity and data science, meet. I thank them for their patience, suggestions, and advice throughout this journey, especially in the most difficult and challenging moments.

My next thanks go to Patrícia Sousa, who has been, alongside João Resende, a teacher, a friend, and an inspiring and dedicated example, always willing to help and to clear my mind when everything seemed impossible. To João and Patrícia, my sincere gratitude.

I would also like to thank my colleagues at C3P, who helped me fully understand and capture the principles, necessities, and requirements of cybersecurity. They provided crucial insights into a field I had never encountered before. A special word to André Cirne and Simão Silva.

Last but not least, I thank my friends and family for all their love and understanding. Their faithful and encouraging support reinforced my confidence and eased my journey. My words of appreciation and acknowledgment to all of you.

Inês Martins

Porto, 2020


UNIVERSIDADE DO PORTO

Abstract

Faculdade de Ciências da Universidade do Porto
Departamento de Ciência de Computadores

MSc. Data Science

Anomaly Detection in Cybersecurity

by Maria Inês Pinto Bastos Martins

The Internet of Things (IoT) envisions a smart environment powered by unremitting connectivity and heterogeneity. Its major asset is also its major risk: since it drives critical data-processing tasks spread across more and more industries, the biggest priority is to develop an effective security mechanism. In this setting, data are collected and broadcast continuously, at high speed, and in real time, turning IoT into a perfect match for the streaming-data paradigm.

The impact of a cyber attack on IoT is considerably different due to the number of contingencies it holds. Although these ideas are well documented in the data science literature, in cybersecurity the problem is approached from an external perspective, bearing in mind only network activity data instead of the actual target. Furthermore, available solutions do not focus on developing a data science approach that fits the requirements by carefully studying the taxonomy of attacks and emphasizing both the model and the structure of the environment. As a result, current frameworks still rely on signature-based solutions, which tend to be more reliable even though they fail to detect zero-day attacks.

The essence of a successful approach is built upon a careful characterization and contextualization of the task at hand. In particular, this work first reviews the cybersecurity background to understand the intentions and methods that could lead to a cyber incident, and then discusses machine learning methods in order to choose the model that best fits the stated requirements.

Driven by the necessity to design a reliable and efficient intrusion detection system enhanced with machine learning capabilities, a real-time streaming anomaly detection model is designed and evaluated with real-life vulnerabilities that can endanger our day-to-day life.


This work contributes to ongoing research demands with an efficient and reliable Intrusion Detection System (IDS). Through the analysis of the cybersecurity background, a novel data collection and processing tool, and the selection and adaptation of a model to this environment, we present a real-time streaming anomaly detection system capable of detecting zero-day attacks.

Keywords: Internet of Things, data processing, cybersecurity, real-time, zero-day, machine learning, intrusion detection systems, vulnerabilities, streaming, anomaly detection.


UNIVERSIDADE DO PORTO

Resumo

Faculdade de Ciências da Universidade do Porto
Departamento de Ciência de Computadores

Mestrado em Ciência de Dados

Deteção de Anomalias em Cibersegurança

por Maria Inês Pinto Bastos Martins

Internet of Things (IoT), Internet das Coisas, idealiza um ambiente inteligente fundamentado em conectividade e heterogeneidade a qualquer hora e em qualquer lugar. A sua principal vantagem constitui também o seu maior risco e, dada a informação e operações sensíveis que executa numa expansão cada vez mais significativa, a sua maior prioridade é desenvolver um mecanismo de segurança eficaz. Neste contexto, a informação é recolhida e transmitida em tempo real, de forma contínua e rápida, enquadrando o conceito de IoT no contexto de data streams.

O impacto dos ataques informáticos em IoT pode ser considerável, graças ao número de contingências que acarreta. Apesar de este tópico ter sido documentado anteriormente em ciência de dados, as soluções apresentadas passam por obter uma perspetiva superficial baseada apenas em dados de rede, em detrimento do alvo principal. Além disso, propostas mais atuais não oferecem soluções que preencham os requisitos, não evidenciando preocupação em estudar a taxonomia dos ataques ou dando especial atenção, não apenas ao modelo, mas também ao ambiente em que se insere. Consequentemente, as alternativas mais recentes funcionam, essencialmente, à base de regras personalizadas, uma vez que estas técnicas oferecem maior segurança, apesar de não serem capazes de detetar ataques nunca antes documentados (dia zero).

A essência de uma solução bem conseguida assenta na caracterização e contextualização do ambiente e da tarefa em mãos. Em particular, este trabalho tem como objetivo estudar os métodos que podem levar a possíveis incidentes e, graças às vantagens que a aprendizagem máquina tem para oferecer, as suas técnicas serão discutidas de maneira a adaptar os diversos métodos aos requisitos exigidos.


Impulsionado pela necessidade de desenhar um sistema de intrusão confiável e eficiente, aprimorado pelas técnicas da aprendizagem máquina, é apresentado um modelo de deteção de anomalias de streaming, em tempo real, avaliado através de vulnerabilidades reais que comprometem as nossas vidas.

Este trabalho contribui para o estado da arte, construindo sistemas de deteção de intrusão eficientes e confiáveis. Através da caracterização da taxonomia dos ataques, da criação de uma ferramenta de processamento de dados, assim como da seleção e adaptação dos métodos já existentes, apresentamos um sistema de deteção de anomalias, em tempo real, capaz de detetar ataques nunca antes vistos.

Palavras-chave: Internet das Coisas, processamento de dados, cibersegurança, tempo real, dia zero, aprendizagem máquina, sistemas de deteção de intrusão, vulnerabilidades, streaming, deteção de anomalias.


Contents

Acknowledgements
Abstract
Resumo
Contents
List of Figures
List of Tables
List of Algorithms
List of Acronyms

1 Introduction
2 State of the Art
3 Cybersecurity in IoT
  3.1 IoT applications
  3.2 Security Concepts
  3.3 Security Threats in IoT ecosystems
  3.4 Intrusion Detection Systems
4 Streaming Anomaly Detection
  4.1 Anomaly Detection
  4.2 Machine learning
    4.2.1 Evaluation Metrics
    4.2.2 Machine Learning Methods
  4.3 Streaming Anomaly Detection Systems
5 miningbeat - Data Collection
  5.1 Architecture
  5.2 Features
6 Anomaly Detection in Cybersecurity
  6.1 Categorical features
  6.2 Streaming analysis
  6.3 Concept Drift
  6.4 Complexity analysis
7 Experimental Studies
  7.1 Evaluation scheme
  7.2 Evaluation setup
  7.3 Experimental trials
8 Conclusion
  8.1 Contributions
  8.2 Future work

Bibliography


List of Figures

4.1 Swamping and masking effects
4.2 Types of drifts in data streams: (a) sudden; (b) incremental; (c) gradual; (d) recurrent
4.3 Domain contextualization
5.1 Syscall chown report on Auditd
5.2 Miningbeat architecture integrated with elasticsearch and kibana for visualization
5.3 CPU, Disk and Cache usage per UID
5.4 Output of ls -l of file /etc/passwd and ~/pe 1.R
5.5 Permission importance
5.6 Tree structure and attributes collected from /proc for the main metrics
5.7 Miningbeat integrated with elasticsearch and kibana
6.1 Proposed framework of an anomaly detection system
6.2 Example data set and plot
6.3 Isolation Tree and correspondent splits for the data set plotted in Figure 6.2b
6.4 Isolation Tree of the cForest method and correspondent splits for the data set plotted in Figure 6.3a
6.5 Isolation model framework
7.1 Anomaly reports by the isolation models
7.2 Delay time for the conducted experiments


List of Tables

2.1 State of the art's featured themes summary
3.1 IoT domains
3.2 IoT layers
4.1 Contingency table
4.2 Common machine learning techniques in anomaly detection
5.1 Monitored Linux System Calls
5.2 Metrics calculation to assert a file's criticalness value
6.1 Attributes' importance calculations
7.1 Experimental trials


List of Algorithms

1 iForest
2 iTree


Chapter 1

Introduction

The beginnings of IoT go back to the early 1980s. Before there was a modern Internet, in Carnegie Mellon University's computer science department, a graduate student came up with the idea of a soda machine that, through an Internet connection, provided information about its contents. At the time, the team never imagined their modified soda machine would be the first of billions of devices connected to the Internet. It marked the beginning of a new era. From a couple of vending machines connected to the Internet to GPS satellites, smartphones, fridges, and self-driving cars, IoT is growing fast every month, emerging as a new synonym of evolution [1]. Nowadays, this definition embodies a dynamic global network infrastructure, offering high levels of accessibility, integrity, availability, and interoperability among heterogeneous smart devices [2].

Although the visible signs of progress are undeniable, they also represent a major challenge in terms of privacy and security. The number of smart devices and the dependencies on their services grow day by day. Thus, the number of vulnerable systems and attack vectors follows the same fate, making fault tolerance an urgent matter [3].

Cybersecurity analysis relies on vast amounts of data to predict, identify, characterize, and deal with threats. As the volume and complexity of data increase, human effort alone is not enough to deal with the urgency of cyber defense. Recent developments in computer data acquisition, storage, and processing have enabled Machine Learning (ML) and Artificial Intelligence (AI) to step in and help detect complex patterns and trends more efficiently and faster than humans [4].

An IoT environment can produce a wide diversity of data defined as dynamic, massive, heterogeneous, sparse, and sensitive. From actuators to sensors and mobile devices running and communicating with each other, the huge amount of data created, the time variance and diversity associated with it, and the sensitive information it might be carrying are significant aspects to consider [5].

There are several domains in which IoT can contribute to improving the quality of life, from smart homes and healthcare solutions to smart cities. As an example of an IoT domain, smart cities employ information and communication technologies to improve the local economy, transport, traffic management, environmental issues, and security. A smart city integrates different infrastructures, technology services, social learning, and governance to manage city assets and their attendant risks and opportunities, enhancing the quality and performance of urban services and citizen engagement [6].

Accordingly, to ensure reliable services, the security architecture must follow principles such as confidentiality, integrity, and availability. In this sense, the network should not allow unauthorized access, should ensure reliable communication to send and receive authentic information, and should perform sensor operations, transmissions, and treatments safely in real time [7]. All these concerns connect the concepts of streaming analysis and cybersecurity.

These intertwined topics demand online security solutions and applications that build robust tools to defend systems against security threats. One of the most prominent methods is the Intrusion Detection System (IDS), built to monitor systems and detect anomalies or privacy violations [8]. This definition is often supported by pattern-based or knowledge-based analysis, leading to more sophisticated machine learning algorithms in search of better performance.

Machine learning has been proving its power and influence in several areas. However, important shortcomings must be considered when it comes to cybersecurity. In anomaly detection, and cybersecurity is no exception, data analysis must deal with highly imbalanced data sets and the prospect of an adversarial scenario where the attacker can impersonate users and influence the outcome of the model. These possibilities represent some of the major limitations to adopting a fully autonomous cyber defense platform [9].

Anomaly detection is an essential data analysis task that detects anomalous or abnormal data in a given set. From a cybersecurity perspective, it can flag critical actions, faults, mistakes, or unexpected behaviors that can compromise the system's viability [10]. Various fields, such as medicine and public health, industrial damage, intrusion, and fraud detection, have been addressing this problem. Its true purpose must be to design a detection approach that can rapidly detect cyber threats and zero-day attacks (vulnerabilities exploited before they have been fixed) as they arrive, as well as recognize previous attacks by their behavior. Moreover, the intrusion system must trace all potential liabilities back to the attacker [11].

In cybersecurity, current machine learning methods rely on syntactic rules based on signatures or hash comparisons, which fall short of dealing with the complexity of this domain. Thus, hybrid-augmented intelligence is necessary to introduce human non-cognitive skills driven by intuition, enhancing the adaptability and capability of the machines [12].

In essence, this work aims to devise an IoT cybersecurity anomaly detection solution capable of supporting domain experts and improving system performance. Accordingly, the anomaly detection design takes into consideration the cybersecurity context in IoT, reviewing its core characteristics, risks, and potential threats.

Finally, the urgency of overcoming the shortcomings of building an anomaly detection system that meets the basic requirements of cybersecurity in an IoT environment defines our main research question. As a result, we discuss the importance of treating the information in real time, in different formats, and under very strict memory and processing limitations to accomplish a viable and realistic solution. Thus, everything from the collected dataset to the machine learning techniques, alongside the possibility of expert validation, will be discussed in this work.

The main contributions of this work are the development of a host-based, low-level system data collection and monitoring tool following the elasticsearch family standards of a secure stack that gathers, analyzes, and visualizes data in real time [13], as well as the implementation of a host-based, behavioral anomaly detection system independent of customized signatures and capable of detecting zero-day attacks.

This thesis is organized into eight chapters describing an efficient anomaly detection proposal to restrain cybersecurity threats in IoT devices. The present chapter describes the problem disturbing the world on a growing scale, its major constraints and drawbacks, and our main contributions. In Chapter 2, the most important solutions developed over the years concerning anomaly detection systems applied to cybersecurity are evaluated based on their methodologies and data environments. Chapter 3 and Chapter 4 break down the contextualization of cybersecurity in IoT and the anomaly detection background in the IoT domain. Chapter 5 describes the data collection framework developed to assemble system metrics. Chapter 6 presents the anomaly detection model, describing the implemented algorithm alongside its major contributions and weaknesses, with an evaluation scheme and real-time examples presented in Chapter 7. Finally, Chapter 8 concludes this thesis by stating the main contributions as well as proposing some future research directions.


Chapter 2

State of the Art

Threat detection has been a highly documented topic over the past few years, as it is a shared concern among different areas. Some works focus entirely on tuning or upgrading an already existing intrusion system, while others specialize in dealing with huge amounts of data, regarding both storage and, most importantly, processing.

The first approaches to cybersecurity needs started with network intrusion detection systems, as the network is expected to be a source of different cyber threats. The main purpose was to improve the rules generated to stop potential threats. Mitchell and Chen [14] summarized the pros and cons of several systems classified by: the target system, or intended environment (network or devices); the detection technique, or basic approach (anomaly or signature); the insight collection process, contrasting behavioral and network systems; the trust model, separating IDSs that share raw data from those that share analysis results as recommendations with other peers; the analysis technique, distinguishing simple pattern matching, such as hash comparisons, from data mining approaches and implementations; and the response strategy, contrasting active and passive responses depending on whether there is an intervention to block the attack or the action only consists of logging and notifying domain experts.
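The "simple pattern matching" end of this spectrum can be illustrated with a minimal hash-comparison signature check; the digest list and payloads below are hypothetical, chosen only to show why such checks miss zero-day attacks:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-malicious payloads.
KNOWN_BAD_DIGESTS = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def is_known_malicious(payload: bytes) -> bool:
    """Signature check: flag a payload only if its digest is already listed."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_DIGESTS

# An unseen payload evades the check because its digest was never listed:
print(is_known_malicious(b"unseen exploit"))  # False -- the signature misses it
```

Behavioral and data mining approaches aim to close exactly this gap, trading the exactness of a digest match for generalization to unseen threats.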

Raza et al. [15] proposed an extension to a real-time network IDS for an IoT environment, focusing on a particular routing protocol and using geographical hints to locate malicious nodes. In recent years, we have seen the rise of real-time demands in most applications. Although Kasinathan et al. [16] offered a solution to these demands, just like Le et al. [17], these approaches remain attached to a restricted range of attacks. The first proposes a scalable and active solution for Denial of Service (DoS) attacks by extending an already used signature system to different network protocols and layers in promiscuous mode, that is, irrespective of the destination [16]. The second applies simple rules, designed from normal behavior, to detect two specific attacks related to the node policies of a routing protocol, resulting in a narrow solution.

As artificial intelligence is continually encouraged, most of the suggested theories have been adding anomaly detection aspects to the previous models. Abbes et al. [18] analyzed different network protocols using a decision tree for each specification, while others used unsupervised techniques for intrusion detection. In this solution, the decision trees used to inspect network packets are built incrementally, without requiring a complete dataset to train. Although this approach is an online solution that shows special concern for memory and time complexity, it only focuses on a particular set of network protocols, such as Remote Procedure Call (RPC) for client-server communication in distributed applications.

When machine learning appeared as a realistic alternative in cybersecurity, it was evident that the preferred solution was the one offering the best results and performance, without a clear concern for whether the model design fulfilled all the environment requirements, such as the imbalanced domain or possible changes in the distribution [19-21]. Conventional machine learning methods often report loss and accuracy as their performance metrics, which results in misleading assessments of IDS efficiency. Most of the literature assigns more importance to the ML model instead of focusing on accurate and representative datasets. Furthermore, it does not justify why supervised options were selected over unsupervised ones, or vice-versa. As stated by Rathore et al. [19], the conducted research questions focus on the ML methodologies, their strengths and weaknesses, the dataset properties required for an IDS, and the types of attacks to which different solutions show resilience. That study shows a relevant interest in big data solutions and declines all methods that use obsolete datasets that do not capture contemporary attacks.
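The point about misleading accuracy can be made concrete with a small calculation: on a heavily imbalanced stream, a detector that never raises an alarm still scores near-perfect accuracy while catching nothing. The 1% attack rate below is an assumed figure for illustration only:

```python
# 10,000 connections, 1% of which are attacks (hypothetical imbalance).
n_normal, n_attack = 9_900, 100

# A useless "detector" that labels every connection as normal:
true_negatives = n_normal   # all benign traffic correctly passed
false_negatives = n_attack  # every single attack missed

accuracy = true_negatives / (n_normal + n_attack)
recall = 0 / n_attack  # fraction of attacks actually detected

print(f"accuracy = {accuracy:.2%}, attack recall = {recall:.0%}")
# accuracy = 99.00%, attack recall = 0%
```

This is why class-sensitive metrics such as recall, precision, or the contingency table of Chapter 4 matter more than raw accuracy when evaluating an IDS.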

Cybersecurity belongs to a dynamic and adversarial domain, where attackers keep finding new ways to improve their techniques. In this setting there are two entities, the learner and the adversary: the former tries to predict the right outcome, while the latter coerces it into making incorrect predictions so that an attacker can infiltrate the system [22]. Consequently, as the system depends on the dataset chosen for validation, it is important to use an updated, reliable set, unknown to the rivals, to capture all possible insights. Although a few methods provided original and updated instances [21, 23-27], a lot of research work relied on deprecated, static, and often biased datasets [19, 20, 28, 29], leading to an endemic situation where numerous reported results are not applicable in real-world scenarios. Alongside the inability to cope with the most recent attacks and threats, these shortcomings have been disclosed by [30, 31].

Garcia-Font et al. [32] understood the difficulty of choosing the right machine learning technique, and that the right choice depends on the environment's specific goals and characteristics. Addressing cybersecurity in the smart cities domain, they pointed out the importance of the underlying constraints that develop around the problem. Although this solution handles each attack as an anomaly and studies its behavior within the environment, the network data used to validate the approach was tested in a batch scenario, failing to validate any dynamic view.

One of the major restraints on adopting machine learning methods is the amount of data required to train the models. Once big data and cloud computing became hot topics, a big research movement concentrated on the new possibilities for cybersecurity. Hence, MapReduce, Hadoop, and Apache Spark became the most popular keywords in threat detection. Bostani and Sheikhan [33] offered an unsupervised anomaly detection implementation using MapReduce for IoT network threats, and Rathore et al. [19] proposed a Hadoop-based intrusion system for high-speed networks trained offline with an outdated dataset. Despite the ability to process vast amounts of instances, most of the models fail to see security from an outlier or anomaly detection perspective in a streaming context. A few models provide real-time options or stream processing [23, 26, 27], but most have an offline status that prevents the solution from achieving robustness to adversarial threats or concept drift [19, 21, 24, 33, 34].

More recent work understands the necessity of a real-time environment to provide an effective solution to the cybersecurity community. Aksoy et al. [35] presented a solution based on the similarity degree of two graphs in a time-evolving network context. Despite its efficiency in time complexity and streaming processing, the procedure was supported by a recently released dataset common to every system regardless of their positions, making it also available for attackers to study how the models would behave independently of their states.

Noble et al., in [27], studied a correlation-based streaming methodology in cybersecurity. This approach deals with complexity principles to overcome the constraints of IoT in a streaming context. In [26], Noble et al. proposed real-time unsupervised anomaly detection in a streaming context. It provides an anomaly score for each instance, making the solution more flexible than a binary output.
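The flexibility of a continuous anomaly score over a binary flag can be sketched with a simple running z-score detector; this is a generic illustration of the scoring idea under assumed toy data, not the method of [26]:

```python
import math

class StreamingZScore:
    """Keeps a running mean/variance (Welford's update) and scores each value."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def score(self, x: float) -> float:
        # Score against the distribution seen so far, then fold x into it.
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 0.0
        s = abs(x - self.mean) / std if std > 0 else 0.0
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return s

detector = StreamingZScore()
scores = [detector.score(x) for x in [10, 11, 9, 10, 10, 50]]
print(round(scores[-1], 1))  # 63.2 -- the burst stands far above the baseline
```

Because each instance gets a graded score rather than a yes/no label, the alert threshold can be tuned after the fact, or the top-scoring instances can be handed to a domain expert for validation.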

Chaabouni et al. [21] proposed an IDS for IoT, studying edge machine learning techniques to distribute the smart process among the devices. In this scenario, problems such as a single point of failure and the network limitations of a model administrating several nodes were reduced.

This information is summarized in Table 2.1. All reviewed literature is organized in terms of the data source and analysis, as well as the ML techniques, divided into their categories and the featured themes implemented. Following the anomaly detection concerns in cybersecurity shared before, the related work is described in terms of: the dataset source used to validate the models (public or original); the data processing paradigm implemented (batch, big data, or stream); and the chosen machine learning technique to improve detection (supervised, unsupervised, or semi-supervised). The checkmarks denote the methodologies implemented and discussed in a particular work. These are the prime categories to which all papers showed special consideration. Furthermore, they allow us to recognize, implicitly, whether the characteristics of anomaly detection in IoT have been satisfied.

The overview of the current methodologies shows that all proposals fall short in terms of the approaches used to represent the observed distribution. In fact, they are not able to provide the dynamic view of the data behavior or the online processing that the real-time requirements demand. The majority of detection or intrusion systems focus on each attack individually and not on their true purpose. Moreover, most of the current work encourages either a high specificity in the model's design or the adoption of simple heuristics and rules. Therefore, these approaches tend not to be fitted for the job at hand, since they rely on basic signatures or patterns of previously known attacks to learn their features, making the model too specific and not robust to unknown threats.

To sum up, this chapter discussed the strengths and weaknesses of some solutions, as well as a few technicalities that can be tuned. From the perspective of the data chosen for training and the streaming properties, to the models and their complexities, and the possibility of increasing decision robustness with human expertise, the research challenges of anomaly detection in cybersecurity are far from over.

Page 22: Anomaly Detection in Cybersecurity

2. STATE OF THE ART 9

Data Source | Data Analysis | ML Technique
Open, Original | Batch, Big Data, Stream | Supervised, Unsupervised, Semi-supervised

Lobato et al. [23] ✓ ✓ ✓
Bostani et al. [33] ✓ ✓ ✓
Singh et al. [24] ✓ ✓ ✓
Owezarski [25] ✓ ✓ ✓
Wang et al. [28] ✓ ✓ ✓
Saied et al. [34] ✓ ✓ ✓ ✓
Abbes et al. [18] ✓ ✓ ✓
Wagner et al. [36] ✓ ✓ ✓
Khan et al. [20] ✓ ✓ ✓ ✓
El-Kathib et al. [37] ✓ ✓ ✓
Yu et al. [29] ✓ ✓ ✓
Bandre et al. [38] ✓ ✓ ✓
Noble et al. [26] ✓ ✓ ✓
Noble et al. [27] ✓ ✓ ✓
Chaabouni et al. [21] ✓ ✓ ✓
Aksoy et al. [35] ✓ ✓ ✓
Garcia-Font et al. [32] ✓ ✓ ✓ ✓

TABLE 2.1: State of the art's featured themes summary


Chapter 3

Cybersecurity in IoT

In a world growing digital at an incredible rate, the IoT plays a prominent role in our

lives.

IoT creates a world of interconnection and integration across physical space and cyberspace. As this paradigm becomes part of our lives, key factors such as innovation, advanced automation, and knowledge sharing appear as the true core and principles of a new age.

An IoT ecosystem consists of web-enabled smart devices that employ embedded processors, sensors, and communication hardware to collect, send, and make important decisions based on the data acquired from their environments, without requiring human-to-human or human-to-computer interactions [39].

This environment incorporates heterogeneous devices, producing huge amounts of highly imbalanced, imprecise data that can be ambiguous and imperfect. This model offers intelligent services in real-time, making everything connected and available anywhere, anytime, by anyone [40]. Such features complicate the design and deployment of efficient, interoperable, and scalable security mechanisms. IoT touches every context of our everyday life, from industry, finance, manufacturing, and healthcare to our cities and homes. It enables automation, cutting waste, reducing anomalies, and improving monitoring abilities.

Although these technologies have been playing an important role in society, their characteristics also represent their major challenges and make them a vulnerable target. In this sense, smart cities' major concerns include threats to national security, from targeting critical infrastructures and government assets to finance institutes. These threats can give access to restricted information and money in a matter of seconds.


In early 2019, BankInfoSecurity [41] reported a massive botnet targeting an online streaming application. This attack produced more than 292,000 requests per minute, matching the activity seen with the Mirai botnet in 2016.

Later that year, Forbes magazine [42] reported that security researchers had issued a stark warning due to an accelerating occurrence of cyberattacks on IoT devices. It was revealed that these attacks had hit the billion mark for the first time.

3.1 IoT applications

The Internet of Things symbolizes the boundless sensing technologies that cover many areas of our modern-day lives and needs. It offers the ability to measure, infer, and understand what is around us, from natural resources and healthcare assistance to urban environments.

With the rise of modern technologies and necessities, it is estimated that several application domains will be shaped by the emergence of IoT. The Internet enables sharing data among different services, bringing interoperability into the system as well as multiple opportunities. As a result of these crossovers between applications, the definition of IoT domains differs from author to author and can be classified based on different network metrics, such as coverage, scale, involvement, or impact [43].

One of the major sources of competitive advantage lies in investment in technological improvements and innovation. Central governments, which treasure safety and stable situations and decisions, have been bold enough to trust in the power and innovation that IoT can offer to a city, despite the inherent uncertainties and vulnerabilities [44].

In this work, we will illustrate and describe three different, and yet very intertwined,

applications of IoT. We will focus on smart homes, smart cities, and healthcare. We picked

these three domains based on recent developments and environments that sustain our

daily life.

A Smart Home

A smart home builds an environment consisting of a variety of home systems, which incorporate the appropriate technology to meet the residents' goals, including additional comfort, life safety, security, and ecological sustainability.


The vulnerabilities of a smart home environment include network and physical system accessibility, heterogeneity, and slow uptake of standards. The major risk in this environment is the sensitive information from internal home sensors, which have the power to control our home [45].

B Smart Cities

Smart cities employ information and communication technologies to improve the quality of life of their citizens as well as the local economy, transports, traffic management, environmental issues, and security. A smart city is an integration of different infrastructures, technology services, social learning, and governance used to manage city assets and their risks and opportunities, enhancing the quality and performance of urban services and citizen engagement [6].

This domain has the ability to control and influence social, political, and financial services. The broadcast information is strictly confidential and can compromise an entire nation. The major concerns in smart cities include threats to national security, from targeting critical infrastructures, government assets, and banking and finance institutes to cloud storage facilities. These threats can give access to restricted information and money in a matter of seconds.

C Healthcare

IoT devices can provide a collection of healthcare solutions. Each device can be seen

as a platform for a wide range of applications and solutions whose prime purpose is

to monitor health issues and minimize the response time of health services. In this

scenario, the IoT applications become user-centric. The most common applications

include monitoring heart, blood, and body temperature patterns to detect abnormal

behaviors.

These devices must guarantee full-time, fault-tolerant operation. The major risk of healthcare strategies embodies the communication of valuable personal information about each patient. Since the vital benefit of smart healthcare devices is to provide real-time health monitoring, all devices must be connected to a global information network associated with a higher institute. Hence, healthcare attacks can compromise health treatment as well as gain information leverage over other institutions [46].


Domain: Smart Home
  Network bandwidth: Small
  Users: Family members
  Applications: Comfort; Security; Resources management
  Threats: Track home internal data to determine if the house is empty; Confuse and control the system

Domain: Smart Cities
  Network bandwidth: Large
  Users: Citizens
  Applications: Pollution monitoring; Security; Transportation management
  Threats: Reprogram a device to gain full control of a higher infrastructure; Transfer a lot of money in a matter of seconds

Domain: Healthcare
  Network bandwidth: Medium
  Users: Patients
  Applications: Heart monitoring; Blood monitoring; Body temperature monitoring
  Threats: Flood the network and jeopardize the health monitoring; Use the information to gather insights about higher institutions

TABLE 3.1: IoT domains

Table 3.1 summarizes some of the most concerning threats in each domain, describing its coverage in terms of both network bandwidth and users, as well as important domain applications. In the following sections, to state and describe the security background and requirements, we define the security issues and principles of a potential cyber threat. The taxonomies and definitions are of great importance to build tools capable of detecting various threats, ranging from known to zero-day attacks [8].

To conclude the background concepts, the next sections present the IoT design and the chosen architecture, which divides IoT according to the data transmission phase in the infrastructure.

3.2 Security Concepts

An understanding of basic security concepts gives an analyst a distinct advantage in communicating intelligently with domain experts and a better idea of how a cybercrime was committed. Additionally, as they obtain more insights into how the cybersecurity field is organized, how an attacker performs an attack, and what its major target is,


analysts learn how to be more proactive and come up with a more robust solution to

defend against subsequent attacks [47].

Over the years, the concept of IoT has been described with very different attributes and very different opinions on priorities. However, the feeling that security must be a top priority is a shared opinion. Accordingly, to ensure reliable services, a security architecture must honor principles such as confidentiality, integrity, and availability. In this sense, the network should not allow unauthorized access, should ensure reliable communication to send and receive authentic information, and should perform sensor operations, transmissions, and treatments safely in real-time.

A Confidentiality

The network should not allow unauthorized access to certain information.

B Integrity

The architecture must ensure a reliable communication such that the information

sent and received are authentic.

C Availability

Sensor operation, transmission, and treatment systems perform on real-time data,

making it imperative to be prepared and secure to operate anytime.

Cyber attacks are a type of offensive action that intends to steal, alter, or destroy data or information systems, compromising computer information systems, infrastructures, computer networks, or devices. According to the attacker's intentions, it is possible to distinguish two types of attacks: active and passive. The first type represents attacks that intentionally disrupt the system, alter system resources, or affect their operations. It involves masquerading, where an entity pretends to be another, modification of messages, or overloading the system. Passive attacks attempt to learn or make use of information from the system but do not affect system resources. These attacks take the form of eavesdropping on or monitoring of transmissions, aiming to obtain information transmitted in the network [48]. In this sense, active attacks compromise integrity as well as availability, while passive attacks endanger confidentiality.

In fact, for active attacks the attention is on detection, while for passive attacks the attention is on prevention. Our focus will be to prevent threat vectors that influence and interact with the system. We will concentrate on threats and events that never occurred before


and do not match the expected pattern behavior. Therefore, the designed system can be

used with other protection systems as a second defense line [49].

Our attention will be placed on active attacks such as sensor modifications and network or application malware injection. Passive attacks like eavesdropping or monitoring, which do not break into or evade the system, will not have a particular impact on the standard expression of the system, remaining unnoticed by an intrusion detection model. However, some attacks, like SQL injection, whose ultimate goal can be to expose or alter information in a database, cannot be considered active or passive until their true purpose is known [50]. Under these circumstances, it is expected that the basic requirements, such as secure communications, are guaranteed to diminish the efficiency of passive attacks.

To better understand the source of our problems and, consequently, determine the

optimal solution, it is important to know the possible strategies and adversary location

of the current threats. It might support the inference of the modus operandi of the attacker

and the detection of anomaly patterns since it provides the necessary insights to direct

the model towards the right features. As it has been reported by [51], the taxonomy and

background studies aim to achieve higher performances by providing valid insights to

build better datasets.

The adversary location can be internal, inside the IoT border, or external, out of range but with access to the IoT devices. When it comes to the attack strategy, the question that must be faced is how the malicious code was used against the IoT devices. Among the major concerns, the physical and logical strategies are the relevant topics to consider because they account for the tactics of the attacks. While a physical strategy can change the behavior or structure of the devices, a logical one can disrupt the communication channels after the attack is launched, without harming the physical devices [52].

To control and supervise the IoT activity, a comprehensive risk assessment focusing on information assets is required [53]. This investigation can reveal potential points of failure so that the data characteristics and features can capture hidden relations between components as well as reflect the symptoms induced by the threats. Risk assessment, in addition to other features such as usability and cost, determines the appropriate level of security for an organization; a perfect level of security is unrealistic, as there is always a trade-off between security and usability [54]. In fact, it is common to see many secure platforms whose performance decreases by adding


extra layers of obstructive nature. Therefore, it is an important challenge to ensure reliable

services without compromising their efficiency [55].

3.3 Security Threats in IoT ecosystems

The pressing research questions aim to develop a system capable of correctly isolating and scoring anomalies in IoT ecosystems. A robust and adaptive system requires the perception of the background and potential risk sources. Thus, understanding the concepts and architecture imposed by the environment is our primary step.

The IoT ecosystem is considered an attractive target due to the interdependence and interactions between its devices, which make it possible to attack surrounding components in order to reach the target. For instance, smart cities have the ability to control and influence social, political, and financial services. The broadcast information is strictly confidential and can compromise an entire nation.

The diversity and constrained characteristics, accommodating different applications

and scenarios, almost unique for each task, also affect the security of the system. These

factors require very specific security designs, which become less stable as the devices

become more lightweight and small [56].

Software and hardware components cooperate mutually to provide necessary tools

to process and store information. The architectures are organized based on the scoped

abstraction level and the performed duties. Each layer has unique threats because they

have unique tasks. However, they are all connected and rely on each other [57].

The taxonomy presented focuses on the main target layers of an attack. Therefore,

the chosen IoT architecture scheme to describe the possible assets and major risks is the

reference model with three layers: perception, network, and application. In this sense,

as all cyber threats must be stated as a data-driven problem, the three-layer model is the

most convenient because it effectively defines the data flow and main IoT interactions

between multiple devices.

The network layer enables the transmission and processing of the information captured in the previous layer. The communication task transmits data reliably from the sensor to its destination. It is based on network protocols and, therefore, shows the same problems. Network-based anomaly detection has been the most popular method found in the literature, either because its attacks combine the most dangerous and active threats observed in IoT or because threats are initiated based on a flow of packets sent over the network. Among


network threats, spoofing attacks disguise a communication from an unknown source

as being from a trusted source to get access to a system, steal data, or spread malicious

records; sinkhole attacks attempt to guide the network flow towards malicious nodes, si-

lencing information and part of the infrastructure; denial of service and sleep deprivation

compromise the system operation by requesting nodes often enough to keep them active

and consuming unnecessary resources. All these threats are a few examples that have

been worrying IoT researchers [58]. Moreover, packet forging is still a massive concern

since the attacker generates packets impersonating normal network flow. As it has been

stated, current anomaly-based systems cannot cope with most of these described patterns.
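To make the request-flood pattern behind denial-of-service and sleep-deprivation attacks concrete, a minimal sliding-window rate monitor can be sketched. This is an illustrative example only, not a reviewed system: the window size, threshold, and source addresses are hypothetical.

```python
from collections import defaultdict, deque

class RateMonitor:
    """Flags a source as suspicious when it exceeds a request
    threshold within a sliding time window, mimicking the
    request-flood behavior of denial-of-service attacks."""

    def __init__(self, window_seconds=60, max_requests=100):
        self.window = window_seconds
        self.limit = max_requests
        self.events = defaultdict(deque)  # source IP -> request timestamps

    def observe(self, src_ip, timestamp):
        q = self.events[src_ip]
        q.append(timestamp)
        # Drop timestamps that fell out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.limit  # True -> possible flooding

monitor = RateMonitor(window_seconds=60, max_requests=100)
# 150 requests from one (hypothetical) source within the same second:
flags = [monitor.observe("10.0.0.5", 0.0) for _ in range(150)]
print(flags[99], flags[149])  # False True
```

A real NIDS would combine this with per-protocol features and baselines, but the sketch captures why rate statistics alone already expose this class of threat.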

The application layer is the most diverse and complicated layer. As there are so many different products, devices, and manufacturers, there is no universal standard for the construction of application products. Accordingly, security issues and control mechanisms are difficult to implement. This layer provides services and implicitly holds control of the performed actions [57]. Hence, it is known for phishing attacks, which can lead to personal information retrieval; privilege escalation, when a user attempts to obtain higher privileges; malicious viruses and scripts, which can be injected into applications to damage their capabilities; and reverse engineering, which breaks procedures down into a series of steps to exploit vulnerabilities [58]. In this layer, most of these attacks affect files, programs, or the system itself, in order to gather useful information about the working environment of the programs or the user actions and eventually keep the system under control [59].

Attackers can exploit security vulnerabilities in IoT infrastructure to execute sophisticated attacks and, in most cases, an attack is not an isolated act but a series of steps that lead to a greater threat. In this sense, it is still important not to underestimate small or passive moves, as they can go unnoticed or be really hard to control, like man-in-the-middle or buffer overflow. In fact, Man in the Middle (MiTM) can be considered a passive attack, as it attempts to learn or make use of information from the system but does not affect its resources. However, this form of attack can be used as a way to acquire knowledge in order to affect a different target, leading to malfunctions. For instance, MiTM can lead to phishing or spoofing, getting one step closer to taking over the system's resources. As for the buffer overflow vulnerability, it can occur when data can be written outside the memory allocated for a buffer, either past the end or before the beginning. It allows a malicious user to overwrite pieces of information such as return addresses or function pointers, changing a program's control flow [60]. Therefore, the applied solution to detect this form of


attack is to add extensions that verify the program flow while it runs; buffer overflows can then be detected by checking a program's code or by measuring their consequences, as such program changes might be aiming at high privileges or secure information.

As evidenced, all threats, even with different assumptions and principles, have the same target and purpose. They aim either to control the devices or to steal the information collected in the perception layer. Taking a closer look at the infrastructure and keeping in mind how a virus can affect an organism, ultimately, the attacks that have been mentioned have an influence on the system itself. Hence, regardless of the assumption or the target layer, the system is expected to reflect the imprint caused by the attack, either as a direct consequence or not.

To sum up, Table 3.2 summarizes the important aspects discussed in this section. It associates each layer with a few examples of the major components and threats that have been representing it, as well as the prominent asset that makes it a valuable target.

Layer: Application Layer (responsible for processing information and different tasks)
  Components: System integration; Cloud computing
  Asset: Provides services used to control different devices
  Threats: Phishing; Malicious scripts; Reverse engineering

Layer: Network Layer (responsible for the communication between devices)
  Components: Routers; Aggregators
  Asset: Transmits accumulated data
  Threats: Spoofing; Sinkhole; Denial of service

Layer: Perception Layer (responsible for collecting information)
  Components: Sensors; Actuators
  Asset: Inputs large amounts of data
  Threats: Jamming; Node capture; Data injection

TABLE 3.2: IoT layers

After providing a detailed contextualization with the essential topics of the cybersecurity background, the following section presents related work on software or hardware systems that automate the process of monitoring the system or network for signs of security problems.


3.4 Intrusion Detection Systems

IoT is a novel paradigm engaged in developing a ubiquitous environment of smart devices seeking to enhance everyday life. While it is able to change and improve our society, its security is an important issue. With many security practices being unsuitable due to their resource demands, it is important to include a second defense line in IoT: not a protection layer, which is difficult to implement, but a detection system to monitor information [61].

Intrusion is an undesired malicious activity that has been part of our generation. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are standard measures to secure computing resources, with special attention to network traffic. As the words detection and prevention state, prevention adds a high-level defense line that specializes in seizing threat activities by intercepting suspicious events, while detection develops tools to identify and control malicious activities as soon as possible. Therefore, an IDS is mainly known for event monitoring and analysis, while an IPS combines preventive actions, such as blocking IPs that appear on a malicious IP list, with intrusion detection technology in an automated way [62].

In addition, we introduce the concept of an intrusion detection system, a widely established security component. An IDS is a common cybersecurity mechanism whose task is to detect malicious activities in host or network environments by monitoring their behavior for signs of attacks, under the assumption that anomalous and normal flows have different patterns and distributions. With the increasing variety and complexity of IDS and IoT requirements, the development of evaluation methodologies, techniques, and tools for these systems has become a key research topic [63]. As an example, Fail2Ban [64] is an intrusion prevention and detection system that sits in the background monitoring logs in /var/log, utilizing both instantaneous and historical data to integrate rules into intrusion prevention policies especially focused on network connections [65].
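The core idea behind this style of log-based prevention can be sketched in a few lines: scan authentication log lines for failure patterns and ban sources that exceed a retry threshold. The log format, regular expression, and threshold below are simplified stand-ins for illustration, not Fail2Ban's actual parser or configuration.

```python
import re
from collections import Counter

# Hypothetical failure pattern in a simplified auth-log format.
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def ips_to_ban(log_lines, max_retries=3):
    """Count failed logins per source IP and return the set of
    addresses that reached the retry limit (candidates for banning)."""
    failures = Counter()
    for line in log_lines:
        m = FAILED_LOGIN.search(line)
        if m:
            failures[m.group(1)] += 1
    return {ip for ip, n in failures.items() if n >= max_retries}

log = [
    "sshd: Failed password for root from 203.0.113.7 port 2201",
    "sshd: Failed password for root from 203.0.113.7 port 2202",
    "sshd: Failed password for admin from 203.0.113.7 port 2203",
    "sshd: Accepted password for alice from 198.51.100.4 port 51022",
]
print(ips_to_ban(log))  # {'203.0.113.7'}
```

Fail2Ban additionally ages out old failures within a time window and applies the ban via firewall rules; the sketch keeps only the signature-matching core.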

In this sense, based on the activity an IDS is designed to examine, it is possible to distinguish between a Host-based Intrusion Detection System (HIDS) and a Network-based Intrusion Detection System (NIDS). The former monitors system activity, such as API calls, disk activity, or memory usage. Hence, it is a highly effective tool against insider threats. Open Source Security Event Correlator (OSSEC) is an example of an HIDS that performs log analysis, file integrity checking, policy monitoring, rootkit detection, and active response, using both signature and anomaly detection methods [66].


Another example, particularly used to monitor a designated set of files and directories for any changes, is Tripwire [67], still a very acceptable tool to notify system administrators of corrupted or altered files [68].

On the other hand, a NIDS focuses on monitoring network activity and communications and auditing packet information. Thus, this solution is specific to network threats, neglecting other possible attacks. Two of the most popular NIDS applied in cybersecurity are Snort [69] and Suricata [70]. The first provides real-time intrusion detection and prevention, as well as network security monitoring. Suricata is a modern alternative to Snort with multi-threading capabilities, GPU acceleration, and multiple-model statistical anomaly detection [71]. However, the ideal scenario is to incorporate both HIDS and NIDS, since they complement each other: a NIDS offers faster response time, while a HIDS can identify malicious data packets that originate from inside the enterprise network. This type of solution is often called a hybrid IDS, and its most recent example is Wazuh [72], an open-source security platform that combines OSSEC and Suricata as an anomaly-based HIDS and a signature-based NIDS, respectively.

As these mechanisms aim to spot cyber threats, an important aspect of an IDS, widely covered across the literature, is the attack detection method. Based on the most common types employed, we categorize IDS detection methods as signature-based, anomaly-based, or hybrid.

A Signature based

In the signature-based approach, network traffic or system-level actions are compared against rules written in a database of previously categorized attacks. If there is a match with one instance, such as multiple tests for open network ports, the activity will be flagged as suspicious. This approach is efficient at identifying old, analyzed attacks, but fails to detect zero-day attacks, which correspond to exploited vulnerabilities that have not been studied or fixed and for which, therefore, no rule is available. Due to its architecture, this method implies collecting and maintaining rules in a database, which can be impractical on some constrained devices in IoT infrastructures. IDS solutions that follow a signature-based methodology, like [73, 74], are the most employed techniques, mostly because they offer high processing speed.

B Anomaly based

In the anomaly-based approach, a model is built from previous network or system activity data, based on the assumption that any attack, interpreted as an anomaly, will have a different dynamic, flagging any discrepancies as suspicious. For instance, an anomaly detection model will point out reboot operations outside of their usual hours. Although this technique excels at detecting new attacks, it requires periodic updates, since the ongoing distribution is not expected to be static. Over the past few years, data analysis has been supporting most of the established methods in every domain by offering better results, higher confidence, and powerful insights. Anomaly-based detection, augmented with machine learning capabilities, can help to identify new exploits on both NIDS-monitored platforms such as [75] and HIDS schemes [76].

C Hybrid based

This approach combines the attributes of anomaly- and signature-based detection. In this setting, when a sample is flagged as suspicious, the activity can be confirmed as malicious by a human expert. By combining these concepts, the disadvantages of one method are mitigated by the strengths of the other, increasing performance but introducing a delay in the validation. This may compromise the real-time response required by an effective and efficient IDS in an IoT environment. As new opportunities and concepts, such as human-in-the-loop or complexity improvements, provide new prospects, new solutions aligning signature and anomaly methods, such as [72, 77, 78], have been published, monitoring not only a wider range of cyber attacks but also promising zero-day attack detection.
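The relationship between the three detection methods can be sketched in a few lines of code. The signature database, feature choice, baseline statistics, and threshold below are hypothetical, chosen only to illustrate how a hybrid pipeline chains a signature check with an anomaly score; no reviewed system works exactly this way.

```python
# Toy rule database of known-bad payload fragments (signature side).
KNOWN_SIGNATURES = {"' OR 1=1 --", "../../etc/passwd"}

def signature_match(payload: str) -> bool:
    """Signature-based check: match against previously categorized attacks."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

def anomaly_score(value: float, mean: float, std: float) -> float:
    """Anomaly-based check: standardized deviation from a learned baseline."""
    return abs(value - mean) / std if std > 0 else 0.0

def classify(payload: str, request_rate: float,
             baseline_mean=50.0, baseline_std=10.0, threshold=3.0) -> str:
    if signature_match(payload):
        return "malicious (signature)"   # known attack, no human needed
    if anomaly_score(request_rate, baseline_mean, baseline_std) > threshold:
        return "suspicious (anomaly)"    # deferred to a human expert
    return "benign"

print(classify("' OR 1=1 --", 50.0))   # malicious (signature)
print(classify("GET /index", 95.0))    # suspicious (anomaly): score 4.5
print(classify("GET /index", 55.0))    # benign
```

The ordering matters: the cheap, precise signature check runs first, and only unmatched traffic pays the cost (and delay) of the anomaly path, which is exactly the division of labor hybrid IDSs exploit.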

In the last few years, machine learning and artificial intelligence have been hitting research topics and news headlines with increasing importance. Such topics have been touching the cybersecurity field, giving attackers and defenders new opportunities to achieve their goals faster and with better insights than ever before. With the intention of studying and synchronizing all the recent developments in the data science field, the proposed model focuses, primarily, on designing an anomaly detection solution capable of being integrated into real-world systems without compromising their accessibility.

When considering an analysis enhanced with intelligent capabilities, one must study the environment in which the data is being collected, the task to accomplish, and the technique or methodology that best fits the environment and job constraints, without neglecting the performance evaluation used to assess the model's fitness. Each phase adds different requirements and specifications that characterize the defined purpose. In this


sense, to provide data analysis in cybersecurity applied to an IoT domain, it is important

to explore the attributes of the IoT data in an anomaly detection context in order to choose

the best technique.

In the following chapter, the priority subjects from a data science anomaly detection

perspective will be detailed, from the data environment, evaluation metrics, and the as-

signment purpose to the available methods that influence an intelligent detection system.


Chapter 4

Streaming Anomaly Detection

IoT, just like any technology these days, needs data to offer better services to users or enhance its performance. Developing smarter environments can simplify our lifestyle by saving time, energy, and costs. Therefore, collecting and analyzing data provides actionable information and improves decision making, boosting the productivity, efficiency, and effectiveness of every platform's attributes [79].

IoT data processing faces limited computational, network, and storage resources. Together with the characteristics discussed above, like interoperability and heterogeneity, these constraints pose the major challenges of IoT analytics.

Due to IoT characteristics, the data collected have a distributed and real-time nature, high volume, fast velocity, and variety. They demand cost-effective processing in order to deliver enhanced insights and decision making. Their high speed and possible low quality and difficult interpretation have proven to be the challenges posed by IoT, since they compromise the consistency and reliability of the models [80].

When fast big data meets the real world, we should be prepared for the scenario where the data distribution changes over time and the data processing runs under very strict constraints of space and time, since we must be able to analyze and predict in real-time with a limited amount of memory and time resources. Therefore, the task of monitoring systems, whether it is addressing network flow or host performance metrics, can be seen as a continuous stream of inputs, denoting data flowing in and out continuously. A stream X is an ordered sequence of N-dimensional instances, xt, that potentially grows to infinity [81].

X = {x1, x2, ..., xt, xt+1, ...}
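As a hedged sketch of these constraints, the following Python fragment processes each instance of a stream exactly once, in arrival order, with constant memory; the z-score rule and the threshold are illustrative assumptions, not a method prescribed in the text.

```python
# One-pass stream processing sketch: each instance x_t is inspected only
# once and memory stays constant regardless of stream length. The scoring
# rule (z-score against a running mean, Welford's update) is illustrative.

def process_stream(stream, threshold=3.0):
    """Return a list of (x, is_anomaly) pairs, one per instance, in order."""
    n, mean, m2 = 0, 0.0, 0.0          # running count, mean, sum of squares
    flags = []
    for x in stream:
        if n > 1:
            std = (m2 / (n - 1)) ** 0.5
            flags.append((x, std > 0 and abs(x - mean) / std > threshold))
        else:
            flags.append((x, False))    # not enough history yet
        # update the running statistics after scoring (single inspection)
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return flags

flags = process_stream([1, 2, 1, 2, 1, 2, 100])
# the final, far-off value is flagged; the alternating values are not
```
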


In contrast, in batch processing, datasets are entirely stored and trained, eventually multiple times. As a result, the latency of an output is larger and, as the number of instances increases, the method will not be able to handle the task at hand [82]. Streaming data processing also differs from batch processing in the sense that data objects are generated in a chronological order implied by their arrival time, with the non-reproducibility of each object as another characteristic, since a timestamp is associated with each observation. In addition, as it is infeasible in the streaming paradigm to process continuous and large amounts of data every time, the model is required to process one instance at a time, inspecting it only once. In this sense, the streaming resource limitations meet the IoT directives.

Streaming data processing is beneficial in most everyday scenarios where new and dynamic data is continually generated. Some representative examples include sensors sending data in transportation vehicles, industrial equipment, and farm machinery; tracking changes in the stock market, computing value-at-risk, and automatically rebalancing portfolios based on stock price movements; and monitoring equipment, from panels in the solar industry to heart rate monitoring machines [83].

Generally, applications begin with simple tasks, such as collecting system logs, and evolve to more sophisticated near-real-time processing. This enables continuous monitoring of real-time data and further enriched and up-to-date insights.

Streaming data is the best approximation to the real world, yet it is not as well researched as the traditional batch setting, where the relations reproduced by the dataset are static and generated at random according to some stationary probability distribution [82]. Most of the developed algorithms are not only static and incapable of dealing with large amounts of data, but also fail to validate the proposed application, because testing and validating on deprecated and unrepresentative datasets does not build a convincing data stream framework.

Another aspect that must be considered is concept drift. Our opinions, thoughts, or reactions evolve over time, just like the world around us. Hence, the models need to be updated in order to detect dynamic changes in a fast and accurate way [84]. In this perspective, the models take into account the recent history of the observed stream, training on fixed or adaptive batches of data and testing on subsequent observations. In order to detect possible changes in the underlying distribution that can compromise the efficiency of the current framework, close monitoring of the performance of the model is expected [85].

In recent years, there has been increasing interest in designing data processing algorithms to manage a continuous data stream, providing answers to given queries while looking at relevant data items a limited number of times and in a fixed order determined by the arrival pattern [86]. Due to streaming analysis requirements, stream processing models focus on synthesizing information or training steps to develop a fast but efficient streaming solution. Thus, it is common to see algorithms that deliver approximate solutions in order to provide fast answers with low computational costs, sacrificing accuracy within a small error range [87]. This trade-off concern will also be addressed in the following chapters.

Firstly, two techniques used to address a streaming setting are approximation and randomization. These procedures consist of mapping a very large input space to a small synopsis. They have been used in association rule and frequent pattern mining [88]. In addition, another important step in streaming analysis is the idea of summarization, which consists of compacting data structures into representative statistics for further querying. In fact, it is common to design a model maintaining relevant statistics from previously seen data. As an example, the exponential histogram idea is often used to accomplish data synopsis [89].
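The summarization idea can be illustrated with a deliberately simplified synopsis: instead of a full exponential histogram [89], the sketch below keeps only an exponentially weighted moving average, a single statistic that favours recent observations. The decay factor alpha is an illustrative choice, not a value prescribed by the text.

```python
# Simplified stream synopsis: an exponentially weighted moving average
# (EWMA) compresses an unbounded stream into one number that weights
# recent data more heavily. This is a stand-in for richer synopses such
# as the exponential histogram mentioned in the text.

def ewma(stream, alpha=0.3):
    """Return the EWMA of the stream after a single pass."""
    avg = None
    for x in stream:
        avg = x if avg is None else alpha * x + (1 - alpha) * avg
    return avg

assert ewma([10, 10, 10]) == 10        # constant stream: synopsis is exact
assert ewma([0, 10], alpha=0.5) == 5.0  # recent value pulls the average up
```
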

In streaming analysis, examples can be read either incrementally or in portions. In the first alternative, also known as online learning, algorithms process single examples appearing one by one at consecutive moments in time. Hence, a streaming model is considered an online model if, for each example xt ∈ X, the outcome is returned before xt+1 arrives. In the second approach, examples are available only in larger sets called data blocks Bt. Similarly to the previous method, in chunk-based models, the output for Bt is available before the next chunk (Bt+1) arrives [90].

Lastly, a widely used block technique is based on the assumption that the most recent information is more relevant than historical data. It embodies the use of sliding windows comprising samples of data. The primary solution adopts a fixed-size sliding window, similar to a first in, first out data structure, where an element arriving at the window at time t forces another instance, seen at time t − n, with n the window size, to be forgotten [82]. A sliding-based strategy can be implemented based on a sequence, where the limits of a particular chunk are defined by its size [91], or on a timestamp, where the structure is limited based on a duration [92].
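The count-based, fixed-size variant can be sketched with a first-in, first-out buffer; the window size and the sample values below are illustrative choices.

```python
from collections import deque

# Count-based sliding window: deque(maxlen=n) gives first-in, first-out
# behaviour, so the arrival of a new element silently evicts the element
# seen n steps earlier. Window size 3 is arbitrary for the example.

window = deque(maxlen=3)
snapshots = []
for x in [5, 7, 9, 11, 13]:
    window.append(x)                 # instance seen at time t enters...
    snapshots.append(list(window))   # ...and t-3 (if any) is forgotten

# After the stream ends, the window holds only the 3 most recent values.
```

A timestamp-based window would instead evict elements older than a fixed duration, but the buffer discipline is the same.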

In the following topics, the anomaly detection subject will be addressed alongside the major concerns and most important methodologies. Finally, machine learning techniques will be examined with respect to their advantages and disadvantages regarding anomaly detection in an IoT streaming setting. These issues are the ones that should be considered in order to find the best model that meets the assignment and data involved.

4.1 Anomaly Detection

In 1980, Hawkins [93] described an anomaly as an observation which “deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism.” The analysis of anomalies has proven to be important to track unusual traffic patterns in a network, which can be a sign of a cyber attack, to detect anomalous credit card transactions, or to inspect MRI images that could reveal the presence of malignant tumors.

An IDS, regardless of the detection architecture, is based on the assumption that the intruder behavior will be significantly different from the legitimate patterns, which facilitates and enables the detection of non-authorized activities. Hence, these systems meet the anomaly detection purpose. This domain can provide important solutions by tracking relevant hidden patterns imperative in identifying malignant cases in multiple industries, from security, medicine, or finance to energy and agriculture. An anomaly may signify a negative change, like a fluctuation of CPU usage possibly indicating DoS attacks, or it may indicate a positive abnormal behavior if the running applications are demanding more resources because people are buying more products online; in this sense, it is important to ascertain what the usual necessities are and why they are rising. However, there is no reliable solution yet, mostly because existing methods struggle to provide an efficient proposal, leaving experts reluctant to apply them.

To establish the fundamental subjects regarding a real-world anomaly detection algorithm in a streaming IoT environment, the following topics must be followed to achieve an effective solution:


A Online

When applying a real-world, streaming anomaly detection algorithm, the online concept, where xt has to be trained as soon as it appears on the stream, should be an option to consider. To design an online solution, each object must be processed only once in the course of training and the computational complexity of handling each instance must be as small as possible [94].

B Continuous

The streaming context is defined as a continuous, possibly infinite, data sequence arriving at high speed, requiring a fast learning phase without storing the entire stream and avoiding queuing. This is a crucial requirement that can be a bottleneck of many methods [95].

C Unsupervised

In cybersecurity, as well as in other domains, the process of generating labels or rules to distinguish normal from abnormal behavior is expensive, leaving the system completely dependent on domain experts and vulnerable to zero-day attacks. In this sense, a promising solution will work in an unsupervised and automated fashion, without labels or parameters to be tuned. However, it would be interesting if machine and human capabilities could become allies and guide the model, as domain experts are expected to recognize their needs and quickly assert whether the right choice was made [96].

D Dynamic

By definition, streaming data processing incorporates sequential and varied data, allowing a more realistic representation of real-world phenomena. As such, it belongs to a dynamic environment where the data distribution can change over time. A successful solution must adapt to concept drift, which will be discussed in the next section [97].

E Performance

In the previous chapter, where the environment of an IDS solution in IoT was discussed, machine learning concepts were outlined and the importance of evaluating the models was mentioned. From the stream to the batch setting, a model is chosen based on its performance; specifically, the best solution is the one that minimizes false positive and negative rates [98].


A significant aspect of anomaly detection is the nature of the anomaly. Based on its

nature, an anomaly can be described as:

A Point

A particular instance is considered a point anomaly when it deviates from the normal pattern of the rest of the input. When it comes to cyberattacks, denial of service (DoS) attacks have been observed that originate from a single network packet. Instead of inducing a system overload by sending numerous requests, such an attack exploits a software vulnerability and simply sends an out-of-order packet [99].

B Contextual

A contextual or conditional outlier occurs when the data behaves anomalously in a particular context. For instance, it is expected that routers or other devices eventually ask for the users’ credentials but, for this to be legitimate, a few conditions have to be confirmed. When the basic requirements do not match, we may be witnessing a phishing attack and a contextual anomaly [100].

C Collective

When a set of instances does not match the distribution of the rest of the data, they are considered to be collective or clustered anomalies. This set is usually more rewarding since it may carry critical information in circumstances such as disease outbreaks, a burst of attacks, and fraudulent activities. In this sense, it is easy to recognize the typical DoS attack when, over a period of time, numerous packets are sent, eventually overloading the system [101].

Since we aim to study an anomaly detection system adapted to cybersecurity needs, we must recall that we are dealing with an imbalanced scheme and data collected from the real world, where labeled and clean input is often a major issue [100]. Furthermore, it is important to be aware of the effects the insertion of different points can have on the data distribution. A very common and simple calculation susceptible to these effects is the arithmetic mean, as it is highly affected by values either higher or lower than expected. Such phenomena typically occur as swamping or masking effects that can disguise anomalies in the dataset.


In order to illustrate the essence of such consequences, Figure 4.1 portrays a univariate scenario mapped on a numerical scale and boxplot to easily spot outliers, marked as isolated points on the plot. For each effect, the insertion of two points, a and b, is monitored in terms of the consequences it implies for the other instances.

FIGURE 4.1: Swamping and masking effects

The masking effect occurs when the insertion of a data point is able to hide an existing outlier. In the first section of Figure 4.1, by the time point a enters, the original dataset does not contain any outliers. As soon as b comes into play, outlier a appears to be normal. The swamping effect, detailed in the second section of Figure 4.1, occurs when a normal observation is mistaken for an outlier in the presence of a discrepant instance. In this case, by adding b, where there were no outliers, it is possible to spot two anomalies, with a being incorrectly classified [102].
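The masking effect can also be reproduced numerically with the 1.5 × IQR boxplot rule; the data values below are invented for illustration, and the quartile convention (linear interpolation, as in common numerical libraries) is an assumption.

```python
# Masking effect under Tukey's 1.5*IQR boxplot rule, with invented data:
# point a = 10 is an outlier until b = 30 arrives and widens the fences.

def quartiles(xs):
    """First and third quartiles via linear interpolation."""
    s = sorted(xs)
    def pct(p):
        pos = (len(s) - 1) * p
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + frac * (s[hi] - s[lo])
    return pct(0.25), pct(0.75)

def iqr_outliers(xs):
    q1, q3 = quartiles(xs)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if x < lo or x > hi]

base = [1, 2, 3, 4, 5]
with_a = base + [10]       # a = 10 is flagged as an outlier here
with_ab = with_a + [30]    # b = 30 widens the fences and masks a
```
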

For an anomaly-based IDS solution applied to an IoT environment, many concerns have been detailed regarding the requirements of a context where only online processing is admissible. In this case, models can be trained either incrementally, by continuous updates, or by retraining on recent batches of data. One common concern of all machine learning approaches is the performance of the model and, in streaming contexts, where data is generated on the fly at a varying rate, the data distribution can be non-stationary, driving models to become outdated and no longer suitable [103].

The phenomenon where data characteristics change over time is called concept drift, and it can affect decision boundaries, causing the performance of the model to decrease. Therefore, concept drift methods should be integrated into every anomaly detection solution in order to detect possible changes that can incorrectly flag anomalies and increase the false alarm rate.

Most of the available solutions depend on the class label to compare against the output of the model [104, 105]. The idea behind this approach is that the presence of drift can affect the underlying properties of the classes that were used to train the current classifier, reducing its relevance with time. Hence, at some point, concept drift or warning signals should be activated if there is evidence of a significant loss of performance. As has been stated, true labels seem unfeasible in a streaming context, as it would require domain experts and extra time to label all instances. In fact, although label inference or approximation might seem acceptable given the imbalanced setting, it could lead to serious consequences, even with low false-negative rates [106].

Concept drift can occur in many forms, and these can be categorized according to their impact on the probabilistic distribution of the classification. In this sense, drift can be virtual, where it does not have any impact on decision boundaries, or real, where decision boundaries can be completely affected. This last category is the most important because it has the power to compromise the model’s performance [107].

Moreover, noise in data is a recurrent difficulty in data streams. This phenomenon can be interpreted as temporal and spatial fluctuations that can mislead drift detectors, forcing the model to be updated based on noisy data and deteriorating its effectiveness [108].

According to the drift swiftness, concept drift can be distinguished as sudden, incremental, gradual, or recurrent. In order to give an explanatory definition of these four concepts, let us start by representing a non-stationary data stream, X, as a sequence of instances, xt, characterized by a transition xt → xt+1, from time t to t + 1, where each sample follows a certain distribution, Dt. For the first concept, portrayed in Figure 4.2(a), sudden drifts are characterized by abrupt changes in the observed distribution, that is, Dt ≠ Dt+1. Incremental drift, described in Figure 4.2(b), is the one with the lowest rate of change, where Dt is not so different from Dt+1. In fact, to achieve significant statistical differences, the samples must be much more separated in time. In gradual changes, depicted in Figure 4.2(c), it is possible to identify a slower transition phase than the one observed in sudden drift. Finally, it is possible to detect recurrent changes, shown in Figure 4.2(d), where a state from n previous iterations reappears occasionally or periodically, that is, Dt = Dt+n [108].

FIGURE 4.2: Types of drifts in data streams: (a) sudden; (b) incremental; (c) gradual; (d) recurrent

The most employed strategies to tackle concept drift in data streams focus either on monitoring statistical changes of the observed characteristics and updating the model when the consequences of the drifts reach a certain level, or on applying adaptive learning algorithms with the ability to incorporate new data and forget old instances as time goes by. These methods can be incorporated into the designed framework where, in the first category, the model training is triggered by an active change detector and, in the second group, the model evolves independently from detectors [109].

From what we could gather, common approaches for adapting learning systems to concept drift include external algorithms working as concept drift detectors, sliding windows of both fixed and variable sizes, and online learners processing each example only once, as soon as it appears. Finally, ensemble methods have shown to be a prominent solution since, by using a set of base learners, they offer both flexibility and predictive power.

The first mentioned method proposes an external algorithm, combined with the main method, to monitor specific statistics such as standard deviations [110], predictive error [111], or instance distribution [112]. Any perturbation of these indicators is assumed to be caused by dynamic changes and, according to a specified threshold, these incidents are flagged as concept drift [113].
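In the spirit of the statistic-monitoring detectors just described, the following sketch compares the mean of the current window against a reference window and raises a drift signal when a threshold is exceeded; the window size, threshold, and adaptation rule are illustrative assumptions, not a published detector.

```python
# Minimal external change detector sketch: monitor a simple statistic
# (the window mean) and signal drift when the deviation from a reference
# window exceeds a threshold. On drift, the current window becomes the
# new reference (a crude form of adaptation).

def detect_drift(stream, window=50, threshold=2.0):
    """Return the stream indices at which a mean-shift signal is raised."""
    ref, cur, signals = [], [], []
    for i, x in enumerate(stream):
        if len(ref) < window:
            ref.append(x)               # fill the reference window first
            continue
        cur.append(x)
        if len(cur) == window:
            ref_mean = sum(ref) / window
            cur_mean = sum(cur) / window
            if abs(cur_mean - ref_mean) > threshold:
                signals.append(i)       # drift flagged at this instant
                ref, cur = cur, []      # adapt: current becomes reference
            else:
                cur = []                # discard and keep monitoring
    return signals
```

A sudden mean shift, e.g. a stream of zeros followed by a stream of tens, is flagged as soon as a full post-change window has been observed.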


Concerning how data is to be handled, one of the most widely used methods is sliding windows. They process data in buffers, with the most recent observations considered representative. These batches are used for training and updating the model. In this setting, changes are estimated either in time or in data windows, where old instances are discarded and replaced by recent ones [114].

Online learning is performed in consecutive rounds, growing the model instance by instance, processing each example as it arrives, and reporting predictions as soon as possible with low computational complexity. As a result, these algorithms have the ability to adjust to changes in streams, building a model that reflects the history of the examples that have appeared in the stream [115].

To finish the discussion of some of the most used approaches, ensemble learners are presented as suitable for changing environments, as they can be adapted to handle concept drift. Ensemble learners use a combination of several classifiers, weak learners with lower individual performance, in order to improve the overall performance. Thanks to their modularity, they naturally adapt to change, either by modifying their structure, retraining learners, or updating the decision weights often used to assign importance to instances or models [116].

Regardless of the strategy chosen to monitor changes in the underlying distribution, anomaly detection in an IoT streaming environment still imposes restrictions concerning memory and time resources, a low false alarm rate, and a small delay between the reporting of an instance and its prediction. However, adding a particular feature that allows the model to excel at one of these characteristics while under-performing on the others is a natural tendency that should be avoided.

In the following topics, the machine learning context will be addressed with respect to the advantages and disadvantages of the most popular techniques regarding the conditions a streaming anomaly detection system demands. These subjects are the ones that should be considered in order to find the best model that meets the ultimate purpose.

4.2 Machine Learning

In order to decide which technique or methodology should be used, it is important to define the task to accomplish, the data used to reproduce the implicit relations and, last but not least, the context and concerns to consider. Thus, the model must fit the data and handle all the requirements, from the environment where the data is collected to the defined purpose of an anomaly detection system.

Studies are often seen that use different models for different attacks and accuracy as the performance measure. In a cybersecurity context, where it is essential to detect a potential threat, the best features to represent and capture the essence of an attack should be studied, regardless of the particular characteristics that define each one. The features used to train the proposed methodology hold the key to the success of the approach.

In particular, network intrusion detection approaches are often documented. In such solutions, the network data is used to train the models. Although these instances contain malicious packets, they usually tend to be too specific to a particular attack vector. Moreover, they only represent a means to an end and do not reflect the impact on the actual target.

Since we aim to study an anomaly detection system enhanced with intelligent capabilities in cybersecurity, we must recall that we are dealing with an imbalanced scheme and data collected from the real world, where labeled and clean input will be a hard, if not impossible, scenario. In this sense, the first problem encountered has to do with the fact that the training samples will not be labeled, making it difficult to automatically assess the performance of the solution. Another important aspect to consider is that, frequently, in intrusion detection problems, the price to pay when a mistake is made is high, since it can endanger the reliability and availability of the system. Therefore, researchers have focused their attention on reducing the false alarm rate of the models.

In scenarios such as fraud detection in credit cards or telecommunications services, medical diagnosis by identifying patterns among pathological data, radar imaging to spot illegal waste dumping into the ocean, anomalies in public transport such as in the railway industry or, just like in our case, cybersecurity, the class imbalance problem is pervasive. With rare or infrequent representatives in one class, the classification rules used to identify its members tend to be rare, undiscovered, or ignored. Consequently, test samples from the minority class tend to be misclassified. In most of these examples, the minority class is the target we are looking to identify, making the classification and performance measurement extremely sensitive [117].


4.2.1 Evaluation Metrics

In a highly imbalanced scenario, the use of any potentially biased measure to average the performance of the models is discouraged. In order to build an effective solution, a high detection rate and a low false alarm rate are key aspects to consider and can be derived using the confusion matrix presented in Table 4.1. In this setting, the performance is fraught by the base-rate fallacy problem, often unveiled by the false positive paradox, where the false positives outnumber the true positives. If the false-positive rate is higher than the proportion of anomalous samples, the supervisory staff will conclude, from experience, that a positive output indicates an anomaly when, in fact, it is more likely to be a false alarm [118].

                              Real
Model         Positive                   Negative
Positive      True Positives (TP)        False Positives (FP)
Negative      False Negatives (FN)       True Negatives (TN)

TABLE 4.1: Contingency table

With imbalanced datasets, the conventional way of maximizing overall performance will often fail to learn anything useful about the minority class, due to the dominating effect of the majority class. As a result, several mechanisms have been developed to overcome these issues. One of the accepted approaches seeks to balance the number of samples, either by removing some of the majority class, if the number of anomalies is large enough, or by adding anomaly samples through replication or by artificially generating more cyber attacks. Although replication increases the amount of data, it does not provide new information or variation. In [119], synthetic examples cause the classifier to build larger decision regions that contain nearby minority class points. In fact, network traffic targeting computational resources designed to lure hackers, known as honeypots, can contribute anomalous data similar to the one being monitored [120]. Instead of generating synthetic cyber threats based on the minority’s nearest neighbors, this solution offers anomalous samples gathered in a different environment since, as protection mechanisms are never available in honeypots, the data dynamics are forced to differ from the official ones. Another example focuses on self-inflicted cyber threats to generate more samples in a real but controlled environment. A further approach, considered the safest solution as it does not feed the system with data and thus risk injecting different assumptions, tries to improve the recognition success on the minority class, assessing its performance with unbiased metrics and focusing on individual classes [121].
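The simplest of the balancing ideas above, replication of the minority class, can be sketched as naive random oversampling; the function name, the 0/1 labels, and the seed are illustrative, and, as noted, replication adds no new information (synthetic generation in the style of [119] is the usual refinement).

```python
import random

# Naive random oversampling sketch: duplicate minority-class samples
# until both classes have the same count. Replication adds no new
# information, which is exactly the limitation discussed in the text.

def oversample(samples, labels, minority=1, seed=0):
    """Return (samples, labels) with the minority class duplicated."""
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    need = len(majority_idx) - len(minority_idx)
    extra = [rng.choice(minority_idx) for _ in range(need)]
    keep = list(range(len(labels))) + extra
    return [samples[i] for i in keep], [labels[i] for i in keep]

X = [[0], [1], [2], [3], [9]]   # invented feature vectors
y = [0, 0, 0, 0, 1]             # one anomaly among four normal samples
Xb, yb = oversample(X, y)       # classes are now balanced at 4 and 4
```
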

Therefore, there are many evaluation measures, and some of the most relevant ones used in imbalanced settings are precision, recall, F-measure, the Receiver Operating Characteristic (ROC) curve, the cost curve, and the Precision-Recall (PR) curve. Although the accuracy metric, shown in Equation 4.1, is one of the most popular metrics used to evaluate models, it can be misleading, as it could reflect the learning performance of the majority class if the positive and negative outcomes are not carefully defined [121]. Considering a case with 99% normal observations, if the model validates all normal points but fails to distinguish the threat pattern, it is possible to achieve high accuracy with very poor performance.

Accuracy = (TP + TN) / (TP + FP + FN + TN)    (4.1)

The ROC curve is a well-established method in the community. It represents the trade-off between the true positive rate (benefits), also known as recall, Equation 4.2, and the false positive rate (costs), also known as fallout, Equation 4.3. From its curve, it is possible to visually distinguish regions where one model works better than another. Another important metric is the Area Under the ROC Curve (AUC), which summarizes a learner’s performance into a single value [117]. ROC is well suited for ranking problems and for tuning the best resource utilization.

TPrate = TP / (TP + FN)    (4.2)

FPrate = FP / (FP + TN)    (4.3)

The following metrics are usually discussed in Information Retrieval. Precision, defined in Equation 4.4, can be seen as the number of correctly identified positive cases out of the total of positive returned cases. When combined with recall, it is possible to draw PR curves in the same fashion as ROC curves, allowing a better understanding of the learner’s behavior under a range of circumstances. As stated by Davis and Goadrich [122], with highly skewed datasets, PR curves give a more informative picture of an algorithm’s performance.

Precision = TP / (TP + FP)    (4.4)

F1 = 2TP / (2TP + FP + FN)    (4.5)


The final popular metric we are going to discuss is the F1-score, which represents the harmonic mean of precision and recall, usually with equal weighting on both measures. F1, defined in Equation 4.5, is better suited to represent imbalanced classes. As the harmonic mean tends to be closer to the smaller of the two metrics, reasonably high values of recall and precision ensure a high F1 score and, therefore, high performance [117].
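Equations 4.1 to 4.5 can be collected into a small sketch; the counts below are invented to reproduce the 99%-normal scenario discussed above, where a detector that flags nothing still scores 99% accuracy.

```python
# Metrics derived from the contingency table of Table 4.1
# (Equations 4.1-4.5). Counts are invented for illustration.

def metrics(tp, fp, fn, tn):
    acc = (tp + tn) / (tp + fp + fn + tn)              # Eq. 4.1
    recall = tp / (tp + fn) if tp + fn else 0.0        # TP rate, Eq. 4.2
    fallout = fp / (fp + tn) if fp + tn else 0.0       # FP rate, Eq. 4.3
    precision = tp / (tp + fp) if tp + fp else 0.0     # Eq. 4.4
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0    # Eq. 4.5
    return acc, precision, recall, fallout, f1

# A "predict everything normal" model on 990 normal / 10 anomalous samples:
acc, prec, rec, fo, f1 = metrics(tp=0, fp=0, fn=10, tn=990)
# accuracy is 0.99 even though recall and F1 are both 0.
```
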

4.2.2 Machine Learning Methods

An important aspect of any machine learning technique is the manner in which the output is reported [100]. This particular choice represents the connection between the system itself and the working environment. There are two different options: the output can be continuous or discrete. If it is continuous, typically an anomaly score, the instances are ranked, offering the analyst the possibility to control and ultimately choose the abnormal occurrences in an accurate and flexible way. If it is discrete, often a binary label, all instances are classified as normal or anomalous; this is less intuitive, yet easy to interpret, and it leaves little room for the expert to have a say in the ranking of the data [10].

Machine learning techniques, not only in an anomaly detection context, include, based on the structure of the training data, supervised, unsupervised, or semi-supervised approaches. These techniques and their pros and cons, discussed next, are the essential elements to consider in order to find the best model and determine the applicability of the system.

A Supervised

Supervised techniques require labeling the training set with both normal and abnormal samples. The training process has access to ground truth insights that are used to optimize the cost function of the algorithm. Theoretically, these techniques can provide better results since they have access to more information [123].

Despite their better performance, they face some technical issues, mostly related to the fact that real-life data cannot be found cleaned and labeled for the machine to learn. Therefore, the training data must be labeled manually, raising questions regarding the volume and accuracy of the data created. These issues can lead to high false alarm rates.

The models that follow this principle often have a discrete output, comprising the classification algorithms. Most of these solutions have a strong tendency to be biased towards the most frequent class. For instance, decision trees are not known for their ability to deal with imbalanced scenarios, while Support Vector Machines (SVM) or neural networks often provide better results [124]. However, the latter require more time to train. These details can narrow down the options if we are dealing with strict limitations of time and memory, as in the IoT environment.

B Unsupervised

In unsupervised methods, there is no need for labeled data. They assume that most

of the data is normal and only a small percentage is abnormal. The objective is to

draw inferences from the input about the underlying distribution. If a particular

example does not match the structure observed, it is considered an outlier [49].

The most common approach is clustering. Frequently, unsupervised algorithms rely on well-studied properties, such as probabilistic distributions or distance and density measures, to associate each instance with a specific group. Hence, they provide theoretical support and interpretation. Moreover, a continuous output score can be derived from the degree of such measures. On the other hand, if such assumptions are not correct, they can mislead the model, ending up with high false alarm rates [125].

Most unsupervised methods fail to provide an efficient algorithm because they often calculate and compare similarity measures against all available samples, increasing complexity. In streaming research, there are a few memory- and time-efficient solutions to avoid these issues [126].

C Semi-supervised

Semi-supervised algorithms provide a combination of both supervised and unsupervised learning, training their models on a small portion of labeled data and a large portion of unlabeled data. However, in anomaly detection, the typical approach is to build a model for the majority class, in this case training on normal data [100]. If an instance is considerably different from the model assumption, the distribution behind that sample is assumed to be different from the observed one. Thus, it is considered an anomaly.


This strategy focuses on modeling normal behavior. The key to finding an outlier

is to compare the predicted distribution with the structure of each sample. It is tempting to assume that this method is able to detect any zero-day attack, but it will not distinguish an anomaly from a change of concept.

The most popular examples of this technique are one-class SVM and autoencoders

[124, 127].
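As a minimal illustration of this semi-supervised setup, a one-class SVM can be trained on normal samples only; the data and parameters below are illustrative, using scikit-learn:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# "Normal" training data: two-dimensional samples around the origin.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Fit the one-class model on normal data only; nu bounds the fraction
# of training points treated as outliers.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

# New observations: one close to the training distribution, one far away.
X_new = np.array([[0.1, -0.2], [8.0, 8.0]])
labels = model.predict(X_new)            # +1 = normal, -1 = anomaly
scores = model.decision_function(X_new)  # signed distance to the boundary

print(labels)  # the distant point is flagged as -1
```

A sample whose distribution differs from the modeled normal behavior falls outside the learned boundary and receives a negative score.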

Table 4.2 illustrates the model setup for each learning methodology applied in anomaly detection. It portrays the typical procedure for training the data, as briefly detailed in the previous topics, as well as the outcome interpretation for each technique.

Machine Learning Technique | Training Data | Output
Supervised | labeled normal and anomalous samples | discrete class label
Unsupervised | unknown (unlabeled) samples | outlier score
Semi-supervised | normal samples only | deviation from the normal model

TABLE 4.2: Common machine learning techniques in anomaly detection

After providing a detailed contextualization of the essential topics of an anomaly detection model in a streaming setting, the following section presents related work on anomaly detection methods for environmental data streams that can be used to identify data deviating from historical patterns.

4.3 Streaming Anomaly Detection Systems

Many anomaly detection methods have been proposed for batch settings, where all the data samples are stored and multiple passes over the data can be performed. However, in streaming analysis, a real data stream requires real-time processing. In addition, the output must be generated with low latency, and any incoming data must be reflected in


the newly generated output within seconds, without the possibility of storing all instances

in memory [128].

Given the scenario where an unsupervised methodology is desired and the final framework ought to be incorporated into a cyber defense platform with human-in-the-loop supervision to guide the model into increasing performance, the output of the model should represent the degree of deviation from normal behavior, that is, an outlier degree. By using outlier scores, data samples can be ranked in decreasing order and the top-k examples flagged as anomalies. In order to bound anomalies, a threshold is applied which, given the nature of this method, can interfere with false alarm rates, adding another parameter to be tuned [129].
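The ranking-plus-threshold step can be sketched as follows (function name and test scores are illustrative):

```python
import numpy as np

def flag_anomalies(scores, k=None, threshold=None):
    """Flag samples either as the top-k highest outlier scores or as
    every score above a fixed threshold (both strategies from the text)."""
    scores = np.asarray(scores)
    if k is not None:
        # Indices of the k largest scores, i.e. the most abnormal samples.
        return np.argsort(scores)[::-1][:k]
    return np.flatnonzero(scores > threshold)

scores = [0.1, 0.9, 0.2, 0.7, 0.05]
print(flag_anomalies(scores, k=2))            # -> [1 3]
print(flag_anomalies(scores, threshold=0.5))  # -> [1 3]
```

Both strategies agree here; in general, k bounds the alert volume while the threshold bounds the deviation, and each is an extra parameter to tune.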

Anomaly detection is often used interchangeably with outlier detection methods. Anomaly pattern detection, crucial in fraud or intrusion detection, is not performed at an individual level, as outlier detection often is, but rather refers to the analysis of instance behavior over time. In this sense, the following categorization distinguishes five different approaches to unsupervised and nonparametric (where no assumptions are made about the data distribution) anomaly detection in a streaming context.

A Distance-based

In distance-based methods, an instance is classified as an outlier if it has fewer than k neighbors within distance R. As such, this approach produces a binary decision boundary. In order to rank anomalies and output their scores, the distance from a point to its neighbors is reported. Therefore, a distance-based algorithm requires pairwise distance computations and the tuning of the parameters k and R [130]. In addition, for high-dimensional data sets, feature reduction is usually needed before calculating distances [131].

In streaming mode, this method usually applies its models on sliding windows, interpreted as a set of active data points. It considers the number of neighbors in a time interval and updates the number of preceding and succeeding instances as data points expire or appear in the data stream. In this sense, as time goes by, an instance sees its state updated according to all active points in the sliding window [132]. Proposed algorithms, including exact-Storm [133] or threshLEAP [134], still struggle to update the necessary neighbor information. As a matter of fact, the first solution provides better memory performance but worse time complexity.
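A minimal sketch of the (k, R) rule over a sliding window (parameters and stream are illustrative):

```python
from collections import deque
import math

def is_outlier(point, window, k, R):
    """A point is an outlier if fewer than k active neighbors lie within
    distance R (the binary distance-based rule from the text)."""
    neighbors = sum(1 for p in window if math.dist(point, p) <= R)
    return neighbors < k

window = deque(maxlen=100)  # active data points in the sliding window
stream = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0)]
flags = []
for x in stream:
    flags.append(is_outlier(x, window, k=2, R=1.0))
    window.append(x)  # old points expire automatically via maxlen

print(flags)  # -> [True, True, False, True]
```

Note the pairwise computations against every active point: this is exactly the cost the streaming variants above try to reduce, and the early points are flagged only because no history exists yet.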


B Density-based

This approach overcomes a shortcoming of the distance-based solution, whose definition of an anomaly takes only a global view of the data set. In more complex structures, there are objects that are outlying with respect to their local neighborhoods, particularly with respect to their density. In density-based methods, an outlier is detected when its local density differs from that of its neighborhood [135]. Different density estimation methods can be applied to measure densities. For instance, the Local Outlier Factor (LOF) [126] is an outlier score indicating how an object differs from its locally reachable neighborhood. This formula takes into consideration the local reachability density, calculated as the inverse of the average reachability based on the number of instances that lie within a certain distance.
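As a concrete illustration, scikit-learn's LocalOutlierFactor implements this score in the batch setting (the data here is synthetic):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, size=(100, 2)),  # dense normal cloud
               [[6.0, 6.0]]])                      # one isolated point

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)             # -1 flags outliers
scores = -lof.negative_outlier_factor_  # higher = more anomalous

print(labels[-1])  # the isolated point is flagged as -1
```

A score close to 1 means the point's density matches its neighborhood; the isolated point has a much higher LOF and receives the maximum score.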

In streaming mode, multiple suggestions have been proposed in order to provide some computational improvements. Algorithms such as iLOF [136] or MiLOF [137] apply an incremental and memory-efficient LOF on a sliding window, aiming to match the performance of static LOF while decreasing its expensive computational demands. These proposals perform insertion and deletion of instances, as they appear or become obsolete in the sliding window, in a limited number of updates, independent of the number of instances. MiLOF is similar to the time-fading window model, in which the latest data points, as they are more significant than older ones, are stored in memory and the remaining part is summarized into a limited number of statistics. Although some improvements in time and memory complexity have been achieved, it is still difficult to satisfy real-time applications on streaming data.

C Clustering-based

From the perspective that cluster analysis finds groups of strongly related objects and outlier detection finds objects that are not strongly related to other objects, these two concepts can be complementary. In general, outliers are typically ignored or tolerated in clustering models, as these are optimized to produce meaningful clusters, preventing good results in outlier detection. In clustering-based models adapted to anomaly detection, small clusters are classified as anomalies.

In clustering-based models, it is possible to find strategies that adapt distance-based clustering algorithms and regard small clusters as anomalies [138]. Another proposed method is to process clusters and remove the significant ones from the


original data, identifying outliers as the remaining instances [139]. On the other hand, some choose to use a state-of-the-art local outlier factor, measured by both the size of the cluster an instance belongs to and the distance from the closest cluster [140].

In streaming mode, a common approach is to apply the model by dividing the

stream into chunks and clustering each partition where its outlier candidates are

ranked according to an anomaly score in the next fixed-size window [141]. Alter-

natively, assuming that the majority of the data instances are normal and that the

distribution of the anomaly class deviates severely from the majority class, a model

can be built from normal instances and used for anomaly detection. If an incom-

ing data instance lies outside the defined structures, it is considered to belong to an

unknown profile [142].
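The small-clusters-as-anomalies heuristic can be sketched with k-means (the cluster count and the 5% size threshold are illustrative choices, not taken from the cited works):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two large normal clusters plus a pair of nearby stray points.
X = np.vstack([rng.normal(0, 0.3, (100, 2)),
               rng.normal(5, 0.3, (100, 2)),
               [[9.0, 9.0], [9.3, 8.8]]])

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
sizes = np.bincount(km.labels_)

# "Small clusters are classified as anomalies": flag every member of a
# cluster holding less than 5% of the data.
small = sizes < 0.05 * len(X)
flags = small[km.labels_]
print(np.flatnonzero(flags))  # -> [200 201], the two strays
```

The stray points form their own tiny cluster and are flagged, while the two dense clusters are treated as normal profiles.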

D Graph-based

Graph-based methods rely on the fact that data instances in a wide range of disciplines, such as physics, biology, social sciences, or information systems, exhibit dependencies on each other. This representation, where each node represents a data point and each edge the similarity between instances, provides powerful machinery for effectively capturing correlations among inter-dependent data objects [143]. An example of this strategy is OutRank [144]. It maps data into a weighted undirected graph, where the weights are transformed into transition probabilities and, later on, the dominant eigenvector is used to determine anomalies.
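To make the random-walk idea concrete, here is a hedged sketch in the spirit of OutRank (not the published implementation; the function name, Gaussian similarity, and parameters are ours): scores come from the dominant eigenvector of a row-stochastic similarity matrix, computed by power iteration.

```python
import numpy as np

def outrank_scores(X, sigma=1.0, iters=200):
    """Random-walk connectivity scores: build a similarity graph, turn
    weights into transition probabilities, and power-iterate to the
    dominant eigenvector; poorly connected nodes get low scores."""
    X = np.asarray(X, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))       # Gaussian edge weights
    np.fill_diagonal(W, 0.0)                 # no self-loops
    P = W / W.sum(axis=1, keepdims=True)     # row-stochastic transitions
    c = np.full(len(X), 1.0 / len(X))
    for _ in range(iters):
        c = c @ P                            # stationary distribution
    return c

X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]]
scores = outrank_scores(X)
print(scores.argmin())  # -> 4, the isolated point
```

A random walk rarely visits the weakly connected node, so its stationary probability is the lowest and it is ranked as the anomaly.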

Such methods are capable of handling clustered anomalies, which can easily be mistaken for normal instances when the number of members exceeds a certain threshold. However, processing weighted graphs can become a bottleneck in most real-life applications, especially in a fast streaming environment [145].

E Tree-based

Tree-based methods propose a different approach that detects anomalies by isolating instances, without relying on any distance or density measure. To do so, they assume that anomalies are the minority class and present attribute values that are very different from those of normal instances. In this sense, the idea behind these methods is to isolate instances using a tree-structure mechanism. Isolation trees are built by recursively selecting random attributes and splitting values until


all instances are isolated. This strategy shows that anomalies are isolated at a higher level than normal points. As such, the anomaly score is calculated based on the length of the traversal path from the root to an external node. This method is considered a basic or weak model and is often seen working as part of an ensemble algorithm (Isolation Forest) that combines the score of each model to produce a more robust output with less variance [146].

In streaming mode, a sliding-window scheme is a common approach, where a model is built and updated as instances appear in the stream. These models often use parallel computing to build the ensemble, as the trees are independent of each other. Recently, robust solutions specifically adapted to streaming settings have been proposed, using similar ideas of random cuts with a preferential choice regarding attribute selection. For a binary tree structure, these algorithms offer promising computational complexities [147].
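In the batch setting, this ensemble is available as scikit-learn's IsolationForest (the data below is synthetic):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (256, 2)), [[10.0, 10.0]]])

# An ensemble of isolation trees; anomalies need fewer random splits
# to isolate, so they sit closer to the root on average.
forest = IsolationForest(n_estimators=100, random_state=7).fit(X)
labels = forest.predict(X)         # -1 = anomaly
scores = -forest.score_samples(X)  # higher = easier to isolate

print(labels[-1])  # -> -1, the far-away point
```

Because each tree is independent, the ensemble parallelizes naturally, which is what the streaming variants above exploit.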

All the requirements detailed and the context where IoT data is found make the choice of the appropriate machine learning strategy a difficult task. In order to implement a reliable and effective intrusion detection system, the keywords represented in Figure 4.3 are depicted as the main underlying constraints to fulfill a real and effective cybersecurity system in IoT. Therefore, in the next sections, all the details of the proposed solution will be discussed bearing these concepts in mind.

[Figure 4.3 is a diagram in which three overlapping domains, IoT (heterogeneous, volume, limited resources), Streaming (continuous, concept drift, real-time), and Anomaly Detection (imbalanced, adversarial scenario), intersect at Anomaly Detection in Cybersecurity.]

FIGURE 4.3: Domain contextualization


Chapter 5

miningbeat - Data Collection

The word security derives from the Latin word securus - se (without) + curus (anxiety) - which expresses freedom from anxiety. The course of history has been teaching us that freedom, especially in the sense of security, is something that everyone, in every context or domain, cares deeply about. Nowadays, it is unthinkable to leave the house without double-checking if the door is locked or, as we walk across the street, if our personal belongings are close to us.

Smart cities are becoming more popular, giving a taste of the city of tomorrow and improving the quality and safety of the lives of their citizens. Different cities may have different needs, but they all understand the power of IoT as well as its challenges. Antwerpen, Santander, and Porto are a few examples of cities that are growing smart. While the first is focused on street safety and efficient use of lights, Porto and Santander face mobility and environmental issues, investing in traffic monitoring and in efficient energy and emission consumption [148].

As the era of the Internet continuously grows, it is undeniable that with great power comes great responsibility, making cybersecurity a necessity. In this sense, our work focuses on analyzing patterns in order to spot potential threats. While most of the proposed solutions concentrate on monitoring network flow, as it is the most common source of cyber attacks, we choose to monitor the main asset, or target, each cyber threat is looking to compromise. Therefore, we have been studying the system performance, as it should be a mirror of what the attacks intend to compromise. By choosing to monitor the system itself, we can design an approach independent of the source of the attack, opening a wider range of threats we are able to analyze.



In this chapter, we present our framework, capable of integrating with the Elasticsearch ecosystem, which offers solutions for observability and security built on a single, flexible technology stack that can be deployed anywhere [13]. This framework, named miningbeat, is responsible for monitoring the system, processing system calls, and computing important metrics. We came up with the name after realizing that the Elasticsearch platform already had many options, called Elastic Beats, to monitor system performance, files, or network data, providing a general overview by reporting mean values or the volume of traffic going in or out. Thus, this architecture seeks the right granularity to inspect the data distribution that triggered an event.

5.1 Architecture

In order to build a data-science-oriented tool, we have created a new beat for the Beats family that uses the audit daemon to monitor system calls (syscalls) and to process the information based on customized metrics created to record critical operations. As the Linux manual page states, syscalls are the fundamental interface between an application and the Linux kernel, allowing actions like opening files, creating network connections, and reading from and writing to files [149]. As a consequence, observing system calls can offer great insight into the tasks a process is performing, as well as the capabilities and accesses it is acquiring. These perceptions can be invaluable for troubleshooting, monitoring, and bottleneck identification [150].

The Linux Audit system, via the audit daemon, was used to track and filter security-relevant information on the system. Based on pre-configured rules, Audit generates log entries to record as much information about the events happening on the system as possible. This information is crucial in mission-critical environments to determine the violator of the security policy and the actions they performed [151]. This tool should not be seen as a security measure, as it is not sufficient to prevent cyber attacks, but rather as an essential tool to discover and log specific features later used in our dataset to reproduce the system's dynamics.

One of the most important steps to achieve stability and security is to provide a strict separation between the operating system core and application programs or user processes. Therefore, in order to communicate with the kernel, the top-layer processes have to rely on system calls to gain access to memory or CPU resources [152]. In this sense, as the main method of communication between applications and the system, we chose to monitor,


through the audit system, all syscalls that involve not only file descriptors - since, in Unix systems, every communication, from files and pipes to network data, is accomplished through these handlers - but also network entry-point connections, changes in permissions or ownership, signals sent by processes, and execution invocations used to run scripts. This choice was made based on the idea of tracking the critical movements a process can perform in its lifetime, whose reflection in memory, CPU, disk, and file permission and capability metrics can be used to assert the criticalness of a task. These metrics will be discussed later on. Table 5.1 presents all families of syscalls used to filter processes handling delicate jobs. For each family, it provides the function synopsis with the most important arguments, the standard description and, finally, under the column named Inspiration, the reason why these system calls were selected.

Figure 5.1 shows the auditd report, returned by the available ausearch command, when a chown command line is executed (comm="chown"). These log entries gather useful information to be processed by miningbeat. The type=SYSCALL entry stores information concerning the process and user identification (PID and UID). type=CWD is triggered to record the current working directory from which the command line was executed. The type=PATH field records file path information, such as the name of the reached file (name="/tmp/when"). In this sense, miningbeat should be able to translate the information that the file /tmp/when has changed its owner into an importance for the system's status.

FIGURE 5.1: Syscall chown report on auditd.
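As a hypothetical sketch of the kind of processing miningbeat performs (not its actual implementation), auditd's key=value record format can be parsed as follows; the sample lines are illustrative, modeled on the report in Figure 5.1:

```python
import re

# key=value pairs as they appear in auditd records; quoted values keep spaces.
FIELD = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_audit_record(line):
    """Turn one auditd log line into a dict of its key=value fields."""
    return {k: v.strip('"') for k, v in FIELD.findall(line)}

# Illustrative SYSCALL/PATH lines modeled on the chown report.
syscall = ('type=SYSCALL msg=audit(1600000000.123:42): '
           'syscall=92 pid=1337 uid=1000 comm="chown"')
path = 'type=PATH msg=audit(1600000000.123:42): item=0 name="/tmp/when" mode=0100644'

rec = parse_audit_record(syscall)
print(rec["pid"], rec["uid"], rec["comm"])  # -> 1337 1000 chown
print(parse_audit_record(path)["name"])     # -> /tmp/when
```

Joining the SYSCALL and PATH records on the shared event identifier is what lets the tool relate "who and which process" to "which file".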

In order to come up with a more robust approach, the miningbeat solution takes into consideration different hardware architectures and the respective system call functions that might depend on them. In this sense, this framework runs on specific rules designed to collect system calls chosen based on which elements portray decisive roles in the communication process and performance of the system. Some of the most important syscalls are depicted in Figure 5.2. When monitoring the resulting audit logs, saved by default in /var/log/audit/audit.log, any event caused by the miningbeat framework's own behavior is discarded. The Audit system is configured by setting multiple log-entry parameters, from the log size to the action to take place when the maximum size is reached. In this sense, when the


Syscall: open / creat
Synopsis: int open(char *path, int flags); int openat(int dirfd, char *path, int flags); int creat(char *path, mode_t mode);
Description: These system calls open or create the file specified by path.
Inspiration: By tracing this system call, you will see when files are being created or changed. Combined with permission metrics, inferring how critical the file is, it can be useful to spot dangerous operations.

Syscall: read / write / send / recv
Synopsis: ssize_t read(int fd, void *buf, size_t count); ssize_t write(int fd, void *buf, size_t count); ssize_t send(int sockfd, void *buf, size_t len, int flags); ssize_t recv(int sockfd, void *buf, size_t len, int flags);
Description: These system calls attempt to read from or write to a file descriptor.
Inspiration: Observing these syscalls offers the chance to monitor file descriptors, the heart of Input/Output and a particular way of handling access to files, network data exchange, and activity on pipes and sockets on Linux.

Syscall: connect
Synopsis: int connect(int sockfd, sockaddr *addr, socklen_t addrlen);
Description: connect initiates a connection on a socket by referring a file descriptor to an address.
Inspiration: This is the only kernel entry point to establish a network connection. Every time a process on the machine tries to establish a connection, this system call will be launched.

Syscall: chmod / chown
Synopsis: int chmod(char *path, mode_t mode); int chown(char *path, uid_t owner, gid_t group);
Description: These system calls change the permissions and ownership of a file, respectively.
Inspiration: It is important to monitor changes in permissions or ownership of files in order to detect any upgrades in the files' capabilities of executing certain tasks.

Syscall: kill
Synopsis: int kill(pid_t pid, int sig);
Description: This system call sends a signal to a process.
Inspiration: Firstly, it allows tracing a process's lifetime and, secondly, it allows observing the power a process can have over others, based on its capability to take over their resources.

Syscall: exec
Synopsis: int execve(char *file, char *argv[], char *envp[]);
Description: This family of system calls executes the program pointed to by a file name.
Inspiration: exec is the only Linux kernel entry point to run a program. Monitoring exec invocations will tell what has been executed in the system, such as single programs, scripts, or cron jobs.

TABLE 5.1: Monitored Linux System Calls.


limit is reached, the log entries rotate so that the first file holds the latest records. When the maximum number of files is reached, the oldest file, the one with the highest identification number, is removed.

In brief, we provide a tool, designed for operating systems based on the Linux kernel, that is able to monitor IoT devices and machines without further configuration.

FIGURE 5.2: miningbeat architecture integrated with Elasticsearch and Kibana for visualization.

5.2 Features

One of the most important parts of a data science solution is obtaining valid, representative, and accurate data. Data collection, considered the fuel of this century, should be our main focus, as building a solution on top of data that is biased, skewed, or not representative of the context of the work produces unfit models.

As the power of communication increases, data is becoming a buzzword and is widely considered a driver of better decision making, improved profitability, and security. Data is also a mirror of a set of interactions, always connected to a certain time and context. As such, in order to fully understand the environment of our task, domain knowledge and data analysis cannot be separated [153].

The following discussion provides a study of the background context of cybersecurity to define the system metrics used to monitor the system's performance throughout its runtime. However, the real challenge lies in selecting appropriate metrics capable of monitoring the performance of the system and bringing forward possible relations with external influences. Depending on the goal, our monitoring needs may vary. In this spirit, our metrics were gathered and designed considering the main assets or targets an attack is looking for.

Since systems typically assume a hierarchical organization, with more complex layers building on top of more primitive ones, it can be useful to think about the metrics available at each level. At the bottom of the infrastructure, there are host-based indicators, comprising the usage or performance of the operating system or hardware, such as CPU, memory, or disk. These are usually the immediate suspects when identifying a server performance degradation issue, indicating the system's availability, reliability, and stability [154]. Classes of threats such as Denial of Service, where the attacker aims to cripple a service by exhausting bandwidth and CPU resources, compromise the system's performance and reputation. As a way to control the number of requests and the load of the system, metrics such as CPU usage, derived from the user, system, and idle values available in /proc/stat, and load average, measuring how many tasks are waiting in the run queue, are examined [155].
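As an illustration of how such a usage metric can be derived from the user, system, and idle counters, here is a minimal sketch (the two sample lines are synthetic; in practice they would be successive reads of the "cpu" line of /proc/stat):

```python
def cpu_usage(sample_t0, sample_t1):
    """CPU usage between two /proc/stat 'cpu' lines, derived from the
    counters (user, system, idle, ...) described in the text."""
    def split(line):
        # fields after the 'cpu' label: user nice system idle iowait ...
        return [int(v) for v in line.split()[1:]]
    a, b = split(sample_t0), split(sample_t1)
    delta = [y - x for x, y in zip(a, b)]
    idle = delta[3]      # the 4th counter is idle time
    total = sum(delta)
    return 1.0 - idle / total

t0 = "cpu  1000 0 500 8000 100 0 0 0 0 0"
t1 = "cpu  1600 0 700 8600 100 0 0 0 0 0"
print(round(cpu_usage(t0, t1), 2))  # -> 0.57
```

Because the counters are cumulative since boot, usage is always computed from the difference between two samples, never from a single read.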

Figure 5.3 illustrates the usage metric calculated for CPU, disk, and cache, colored by UID, when running miningbeat on a desktop. In particular, the increase in CPU usage is associated with firefox commands, which are also responsible for the increase in cache percentage in the last slope represented in the third plot. In these plots, it is possible to see the real-time, streaming environment of this monitoring tool, which is able to combine designed metrics with system performance metrics also available through commands like top or vmstat.


FIGURE 5.3: CPU, Disk and Cache usage per UID.

On the next layer, we can find application metrics, concerned with units of processing or work that depend on host-level resources. They function as indicators of the efficiency of the running services, such as success or failure rates [154]. Among these measures are values like major and minor faults, which occur when a process accesses a page that is mapped in the virtual address space but not loaded in physical memory. For instance, when starting the Firefox application, the system will search the cache and physical memory. If it is not able to find the data, a major fault will be issued [156]. These metrics are collected, according to each Process ID (PID), from /proc.
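A minimal sketch of collecting these fault counters, assuming the field layout documented in proc(5) (minflt and majflt are the 10th and 12th fields of /proc/[PID]/stat); the sample line is synthetic:

```python
def page_faults(stat_line):
    """Extract (minflt, majflt) from a /proc/[PID]/stat line. The comm
    field may contain spaces, so fields are counted after the last ')'."""
    rest = stat_line.rsplit(")", 1)[1].split()
    # rest[0] is field 3 (state); minflt is field 10, majflt field 12.
    return int(rest[7]), int(rest[9])

# Illustrative line in the format of /proc/[PID]/stat.
line = ("1337 (fire fox) S 1 1337 1337 0 -1 4194560 "
        "2512 0 3 0 10 4 0 0 20 0 4 0 100")
print(page_faults(line))  # -> (2512, 3)
```

In practice, the line would come from open(f"/proc/{pid}/stat").read(); splitting after the last ')' avoids being fooled by process names containing spaces or parentheses.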

For each process, in order to outline the process life cycle and to monitor the capabilities it is able to acquire, metrics such as PID, status, priority, and capability, the latter determined by the number of flags available in /proc/[PID]/status and used to ascertain a process's privileges [157], are some of the metrics collected regarding a program's history.

On the last essential layer, security-level metrics can be found. In essence, most of the tasks running in the background read from or write to files holding different levels of sensitive information. Therefore, at this layer, it is important to trace all accesses to files, to prevent unauthorized access that could result in the loss of sensitive data or any improper changes causing compliance failures [158]. File integrity is a very delicate component when dealing with security. When starting the system, malicious code can be added to boot loader files, such as /proc/cmdline, or to daemon and service files like /etc/system.d. Scripts can run when bash starts, updating environment variables through ~/.bashrc or ~/.bash_profile. DNS can be overridden, allowing the system to communicate with impostors, just by adding lines to /etc/hosts or /etc/resolv.conf.


Last but not least, changes to files such as /etc/passwd, /etc/groups, or /etc/shadow are important to monitor because these files hold user accounts and password hashes [159].

In Unix operating systems, files are one of the basic structural units, and most file-based attacks do not need to change a file's contents, just its attributes or permissions. Hence, it is important to discuss in detail how file management is conducted. For the next metric, we are considering two types of files, listed as regular files (-) and directories (d). In some permission systems, additional symbols can be added to represent extra permission features, such as an access control list or an SELinux context being present. This attribute extension will be considered when assigning the final file permission [160].

Thus, we propose a method to measure how critical the accessed file is. The intuition behind our formulation is that, if the file is sensitive, it will be highly protected, presenting restricted permissions. Therefore, our approach consists of using a numerical statistic inverse to the decimal value of the file's permissions, as well as an importance score of the User ID (UID) accessing the file. Accordingly, this metric is evaluated for each process, recording its movements and access history, along with the EUID, used to evaluate the privileges of a process to perform a particular action.

The file permission metric is calculated as the product of an importance, based on the groups to which users belong under the operating system, and the logarithm of the sum of the decimal values of the file's permissions for each component - user (u), group (g), and others (o) - divided by the maximum permission a file can have (7+7+7=21). In particular, setting the user to root yields an importance of 1, since root is considered sovereign over all programs and files. With the formula written in Equation 5.1, it is possible to assign a criticalness degree such that changes in files with more permissive values are not as dangerous as changes in those with restricted values. In this sense, we can trace not only restricted files but also the prominence of the logged-in user. Therefore, when a process opens a file descriptor invoked by one of the monitored syscalls, the file's permissions are inspected and returned as the sum over all files. When a new file descriptor is issued, this procedure is repeated so that the previous average can be updated. The final result is a sequence of inspections of file permissions while the process is still active.

CV = | Importance × log( (perm(u) + perm(g) + perm(o))_10 / (7 + 7 + 7) ) |          (5.1)


To illustrate how this formula works, let us picture two different files - /etc/passwd and an ordinary file, ~/pe_1.R, located in the home directory. Figure 5.4 shows the output of the command ls -l for both files, highlighting their permissions, user, and Group ID (GID).

FIGURE 5.4: Output of ls -l for the files /etc/passwd and ~/pe_1.R

Based on the definition presented above, the first step in identifying the criticalness of each file is to interpret the permissions in the first field, composed of three owners - user, group, and others.

rw- → (110)_2 = (6)_10
r-- → (100)_2 = (4)_10

In this sense, it is possible to write the sum of the decimal numbers of each permission, as well as to calculate the importance based on who owns the file and the number of groups it belongs to. For the user root, as mentioned before, the importance is 1. For the remaining users, by running commands such as groups and counting the available groups in the /etc/group file, there are 70 groups in total and the user belongs to 9 different ones. Table 5.2 shows all the results and, in the last column, the CV, the criticalness value of each file, calculated based on the definition proposed before. As a matter of fact, this final value, which will be added to miningbeat, states that a detected modification in /etc/passwd is more dangerous than one detected in ~/pe_1.R, as its criticalness value is higher.

File          Perm    User  Group  −log₂(∑perm/21)  Importance      CV
/etc/passwd   6+4+4   root  root   0.585            1               0.58
∼/pe 1.R      6+6+4   ines  ines   0.392            9/70 = 0.129    0.03

TABLE 5.2: Metrics calculation to assert a file’s criticalness value.

This metric is applied to all the processes running on the device to capture the criticality of the accessed files. In this sense, a final adjustment to detect when the UID does not belong to the GID that launched the process is added, following the same method as in the file’s user and group. Furthermore, this idea has also been used to assign an importance to the chmod and chown syscalls, by associating the mode argument, which is a bit mask, to determine the significance of the event, in a sense where lower values can be interpreted as an increase of accessibility.

Figure 5.5 shows the CV value, colored by UID, when running miningbeat on a desktop. For most of the performed tasks, the criticalness value is zero, not raising any suspicion. However, two sets of UIDs stand out from the rest. In fact, the highest value is related to the thermald daemon executed by root, and the second is associated with redis-server, the data structure store that can be used as a database, cache, and message broker.

FIGURE 5.5: Permission importance.

As a final point, Figure 5.6 presents the most significant metrics gathered for each system component, as well as the tree structure of /proc, an important asset in the data

collection framework design.

FIGURE 5.6: Tree structure and attributes collected from /proc for the main metrics.


To conclude, Figure 5.7 shows a screenshot of miningbeat as a beat integrated with elasticsearch and kibana. As part of the elasticsearch ecosystem, it is possible to monitor and perform queries based on the collected metrics to get a real-time look into the system. This figure validates our proposal of a real-time monitoring framework that can provide meaningful insights about the system’s status. In fact, the way monitoring is performed is one of the main characteristics that distinguishes it from the most recent developments in host-based solutions, such as OSSEC [66] or Wazuh [72], which emphasize simple host snapshots and system hash analysis alongside specific processes best known for specific attack techniques.

FIGURE 5.7: Miningbeat integrated with elasticsearch and kibana.


Chapter 6

Anomaly Detection in Cybersecurity

Cyberattacks usually aim at accessing, changing, or destroying sensitive information,

extorting money from users, or even interrupting normal business processes, compro-

mising the system performance, and installing fear and disbelief. Implementing effective

cybersecurity measures is particularly challenging today due to the increasing number of connected devices, without underestimating the power of attackers to be innovative and to overcome the defense lines that have been raised to control them [161].

As stated by the Online Trust Alliance (OTA), IoT offers consumers, businesses, and governments across the globe countless benefits and significant challenges. Through lead-

ership, innovation, and collaboration, it is possible to create a safer and more trustworthy

connected world. This requires a shared responsibility including industry embracing se-

curity and privacy by design, as well as adopting responsible privacy practices.

Data-driven anomaly detection systems have been introduced as complementary de-

tection systems. Such systems aim to detect any abnormal deviations from the normal

activity of an enterprise system. They are based on statistical and machine learning meth-

ods that model the normal activity of the network [162]. The previous chapters discussed the relevant topics regarding cybersecurity and intrusion detection strategies. Figure 6.1 depicts a real case scenario in IoT, applied to Smart Cities, aligned with

the data collection system described in the previous chapter. From bottom to top, it is pos-

sible to find the IoT domain with different and interoperable devices. As a first defense

line, working directly with data flows derived from the communication among devices,

it is found signature-based IDS that are fast but not able to deal with zero-day attacks. In

this sense, a second defense line is proposed to open a wider range of detectable threats

by focusing on the asset an attacker is looking to achieve and monitoring how the system


responds to such processes. In this sense, a time-aware host-based anomaly detection fed

by miningbeat is exhibited in this figure. To conclude, as it has been detailed previously,

a hybrid-based IDS is encouraged for any platform that wishes to stay in the vanguard of

development without neglecting performance and security.

FIGURE 6.1: Proposed framework of an anomaly detection system

In the current chapter, an anomaly detection algorithm, implemented in Golang and

integrated with miningbeat, will be discussed to work as a second defense line in IoT de-

vices. To begin with, some of the most common state-of-the-art solutions will be included,

from the model itself to concept drift detection, streaming environment, and computa-

tional concerns.

According to the Cambridge Dictionary, an anomaly is defined as something that is

unusual enough to be noticeable and does not fit in. As it is commonly defined, anomaly detection is the identification of events or observations which do not conform to an expected pattern or to other items in a data set. Capturing anomalous events through the

sensor data of a mobile device on an IoT platform can serve the purpose of detecting

accidents or potential frauds.

In cybersecurity, the time factor, the velocity and volume of the generated events, and the adversarial scenario resulting from a direct confrontation with hackers who constantly update their strategies impose challenges and restrictions on the design of an anomaly detection approach.

In this sense, the next sections will detail an unsupervised and non-parametric anomaly

detection solution based on the data generated with the miningbeat framework already

shared in the previous chapter. In the following, theoretical concepts will be presented

as well as the implemented characteristics. In order to validate this approach, the testing

environment, based on up to date CVE, will be run.

Most anomaly detection models find anomalies by unraveling the distribution of their

properties and isolating them from the rest of the data. By following this strategy, the

model is optimized to profile normal instances instead of detecting anomalies, causing too

many false alarms. Another critical aspect is that many existing methods are constrained

to low dimensional data and small data sizes due to their high computational complexity

and propensity to yield to the curse of dimensionality [163]. According to Bellman [164], the curse of dimensionality phenomenon stands for the exponential growth of the number of objects to be accessed with the underlying dimension, whereas too many dimensions cause every observation in the data set to appear equidistant from all the others.

In this regard, a machine learning model is proposed and adapted that isolates anomalies rather than profiling normal instances, based on the property that anomalies are few and different when compared to normal data. The idea of isolation contains

enough information to determine a distance-based metric without assigning a distance

function.

In an isolation model, a tree-based structure can be constructed efficiently by performing random cuts on subsamples in order to isolate every instance; anomalies should lie closer to the root, whereas normal instances are located at deeper levels. In random trees, instances’ partitioning is repeated recursively until all instances are isolated. In anomaly

detection, since anomalies are expected to be few and different, random partitioning pro-

duces shorter paths for particular points that are highly likely to be anomalies [165].

The original isolation-based model [163] builds an ensemble of independent isolation trees that, by randomly choosing attributes and splitting values, find the anomaly score of each point based on its path from the root. As the output of each

tree consists of a score, the association with the ensemble method in a regression problem

is inevitable and, thus, the final anomaly score is calculated as the average path length of

all scores.


In contrast to most methods, where large sample sizes are desirable to achieve high performance, isolation methods work best when the sampling size is kept small as, with larger sizes, normal instances can interfere and disguise anomalies. As stated previously,

these effects are known as swamping and masking effects alleviated by the unique char-

acteristic of sub-sampling that allows building possible specialized models on different

sets.

Isolation forest, as the first isolation-based ensemble of trees is known, gathers promising advantages, namely efficiency, scalability, an unsupervised and non-parametric anomaly score, and transparency; it can be adapted to a streaming setting and helps draw conclusions about the distribution based on the attributes selected throughout the ensemble.

The iForest algorithm, represented in Algorithm 1, has been favored over other

anomaly detection methods due to its scalability and efficiency designed for the sole pur-

pose of anomaly detection. This method is a tree-based unsupervised ensemble where

anomaly scores are generated in each tree, avoiding calculating computationally expen-

sive distance or density measures. The base learner of iForest, referred to as iTree,

consists of a proper binary tree constructed in a random fashion, making this proposal

computationally and memory efficient.

Algorithm 1 iForest

1: procedure IFOREST(X, ntrees, ψ)
2:     forest ← nil
3:     wg.Add(ntrees)
4:     for i := 0; i < ntrees; i++ do
5:         X′ ← sample(X, ψ, rep = True)
6:         forest ← forest ∪ iTree(X′, ψ, 0)
7:     return forest
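Mirroring Algorithm 1, the tree construction can run in parallel with a sync.WaitGroup, one goroutine per tree; the Tree type and buildTree below are placeholders for the full iTree recursion, not the thesis implementation:

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// Tree is a placeholder for a fully built isolation tree; here it only
// records the subsample it was trained on.
type Tree struct{ sample []float64 }

// buildTree stands in for the recursive iTree construction (Algorithm 2).
func buildTree(sample []float64) *Tree { return &Tree{sample: sample} }

// iForest builds ntrees isolation trees concurrently, each on a random
// subsample of size psi drawn with replacement, as in Algorithm 1.
func iForest(X []float64, ntrees, psi int) []*Tree {
	forest := make([]*Tree, ntrees)
	var wg sync.WaitGroup
	wg.Add(ntrees)
	for i := 0; i < ntrees; i++ {
		go func(i int) {
			defer wg.Done()
			sample := make([]float64, psi)
			for j := range sample {
				sample[j] = X[rand.Intn(len(X))] // sampling with replacement
			}
			forest[i] = buildTree(sample) // each goroutine writes its own slot
		}(i)
	}
	wg.Wait()
	return forest
}

func main() {
	X := make([]float64, 1000)
	for i := range X {
		X[i] = float64(i)
	}
	forest := iForest(X, 100, 256)
	fmt.Println(len(forest), len(forest[0].sample))
}
```

Because each goroutine writes only its own index of the forest slice, no additional synchronization beyond the WaitGroup is needed.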

In a divide and conquer paradigm, iTree recursively splits the input space, a subsam-

ple taken from the training data without replacement, into progressively smaller, axis-

parallel rectangles aiming to isolate instances. An iTree node is created by randomly

selecting an attribute along with a randomly drawn split value, which lies between the

minimum and maximum of the selected feature [163]. Therefore, this method is only meaningful for numeric attributes. Algorithm 2 represents the main recursive instructions of the described classifier.


Algorithm 2 iTree

1: procedure ITREE(X, ψ, h)
2:     if h > log2 ψ or |X| == 1 then
3:         n.left, n.right ← nil, nil
4:         n.height ← h
5:     else
6:         att, split ← n.Cut()
7:         Xright ← X.filter(att > split)
8:         Xleft ← X.filter(att <= split)
9:         n.right ← iTree(Xright, ψ, h + 1)
10:        n.left ← iTree(Xleft, ψ, h + 1)
11:    return n

When an instance passes through an iTree, the attribute value of each non-leaf node

is retrieved and tested in order to decide which direction, left or right node child, the

point should take. The anomaly score for a given point for each tree is determined by

the depth of a point in a tree as anomalies are expected to be isolated at earlier levels.

Therefore, given an isolation tree $h_t$, the path length is $h_t(x) = e + c(n_{leaf}) \in [1, \psi - 1]$, where $x$ is the instance to be evaluated, $\psi$ corresponds to the number of instances in the subsample, and $e$ stands for the number of edges separating the root from the chosen leaf.

The c(nlea f ) value is an adjustment to account for unsuccessful searches in the binary

search tree by considering the average path length of a random sub-tree given the size of

the leaf node [166]. In Algorithm 2, it is possible to note that the limit height of a tree is

limited to log2 ψ, which is approximately the average tree height. The reason behind this

threshold is that the points with path length higher than the average are more likely to be

considered as normal instances, shifting the attention to the lower levels of each structure.

Equation 6.1 details how this adjustment is calculated, where H(a) ≈ ln a + 0.5772156649

(Euler’s constant).

\[ c(n_{leaf}) = \begin{cases} 2H(n_{leaf}-1) - \dfrac{2(n_{leaf}-1)}{n_{leaf}}, & \text{if } n_{leaf} > 2 \\ 1, & \text{if } n_{leaf} = 2 \\ 0, & \text{otherwise} \end{cases} \tag{6.1} \]


The final score of an instance is calculated as the average path length reported by each

tree depicted in Equation 6.2. Empirically, the work published by [163] showed that the

average path length stabilizes and tends to be much lower for anomalies at a moderate

size. Finally, the final score - s(x, ψ) ∈ [0, 1] - is computed as stated in Equation 6.3,

where E(h(x)) is defined by Equation 6.2 and c(ψ) serves as a normalization allowing the

comparison of the algorithm’s outputs with different subsample sizes represented by ψ.

\[ E(h(x)) = \frac{1}{ntrees} \sum_{t=1}^{ntrees} h_t(x) \tag{6.2} \qquad\qquad s(x, \psi) = 2^{-\frac{E(h(x))}{c(\psi)}} \tag{6.3} \]

In this sense, as E(h(x)) → 0, that is, when an instance tends to be placed next to the root, the anomaly score will be close to 1. On the other hand, if E(h(x)) → ψ − 1, the final score will tend to 0. Accordingly, based on the fact that anomalies will be placed at higher levels, these results detail that anomalies are expected to be ranked with scores closer to 1.

The following topics present how our proposed method, inspired by iForest, tackles the important aspects detailed throughout this work, namely categorical features, streaming settings, concept drift, and computational complexity demands.

6.1 Categorical features

Commonly, intrusion detection data sets, especially host-based approaches, contain

both numerical and nominal attributes. Given the ease of expressiveness, the latter of-

ten encodes valuable expert knowledge. For instance, considering a host-based system

where the fundamental units are processes and file descriptors, the user ID and effective

user ID are considered categorical features with great importance as EUID determines the

effective access rights of the program and the real UID identifies the real owner of the pro-

cess and declares the permissions for sending signals [167]. Together with the capability attribute, designed to assert the signals, such as SETUID, a process is able to send, tracking possible fluctuations in these values might account for undesired privilege escalation.

Therefore, it is important to design an algorithm that is able to interpret both numerical and categorical attributes. As it has been suggested before, the original iForest algorithm is not suitable to incorporate nominal variables. Anomaly detection in quantitative data has received considerable attention in the literature. On the other hand, despite the widespread


availability of nominal data in practice, anomaly detection in categorical data has received

relatively less attention. This disparity can be explained by the fact that some anomaly detection techniques depend on identifying a representative pattern and measuring distances between objects, and that identifying patterns and measuring distances is not as easy in categorical data as in quantitative data [168].

Numerous efforts to deal with the problem of mixed attributes in anomaly detection have been discussed throughout the years. From what could be learned, it is possible to distinguish three techniques to handle mixed attribute input spaces. Categorical space

techniques are devised for categorical variables and transform any numerical variables

into categorical through a discretization phase. One example of this method is HOT [169]

in which an anomaly is detected using a hypergraph structure and a local test for the

anomaly score based on a frequent itemset. Another methodology to deal with mixed

data is the metric-centered technique, where the basic anomaly solutions for numerical data are kept but the similarity functions are extended to mixed attribute spaces [170]. At last, mixed-criteria techniques tackle the nature of numerical and categorical data separately

by trying to design a criterion that encompasses the analysis of an element in both spaces.

For instance, LOADED algorithm [171] blends categorical-categorical, categorical-numeric,

and numeric-numeric correlations in a single criterion using frequent itemsets concept

and local correlation matrices.

Based on an isolation methodology, Stripling et al. [172] proposed the iForestCAD

method that computes anomaly scores conditionally on the groups determined by the

categorical variables. It determines all distinct combinations of values of selected nominal attributes and splits the data set according to these combinations. For the conditional

approach, an iForest algorithm is trained on each partition. The selected variables are

then substituted by the returned anomaly score and used to train a binary classifier.

Hence, this solution is not suitable, particularly in a streaming setting: pre-processing the original data into combinations of different values, whose number grows immensely as the number of categories and variables increases, as well as training a second-line classifier, imposes a high computational complexity.

In this proposal, considering the idea of conditional isolation to handle qualitative

features, the splitting value will be drawn randomly out of the possible categories of the


chosen variable. In this sense, incorporating this idea into the original iForest algorithm described before, the cut variable will be chosen randomly out of the input feature space.

If the variable is numerical, then the original algorithm is preserved. If it is nominal,

then the split value will be selected randomly out of the possible categories. As in the

quantitative values, if an instance is equal to the split value, it should follow the left path

and the right otherwise.
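The mixed-attribute split rule just described can be sketched in Go; the Value encoding and the function names are illustrative assumptions, not the thesis implementation:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Value holds either a numeric measurement or a nominal category,
// an illustrative encoding for mixed attribute spaces.
type Value struct {
	Num     float64
	Cat     string
	Nominal bool
}

// goesLeft applies the split rule from the text: numeric values follow
// the original iForest comparison against a random cut point, while
// nominal values go left exactly when they equal the drawn category.
func goesLeft(v Value, numSplit float64, catSplit string) bool {
	if v.Nominal {
		return v.Cat == catSplit
	}
	return v.Num <= numSplit
}

// randomSplit draws a split for one attribute column of a subsample:
// a uniform point in [min, max] for numeric data, a random category
// observed in the column for nominal data.
func randomSplit(col []Value) (numSplit float64, catSplit string) {
	if col[0].Nominal {
		return 0, col[rand.Intn(len(col))].Cat
	}
	lo, hi := col[0].Num, col[0].Num
	for _, v := range col {
		if v.Num < lo {
			lo = v.Num
		}
		if v.Num > hi {
			hi = v.Num
		}
	}
	return lo + rand.Float64()*(hi-lo), ""
}

func main() {
	z := []Value{{Cat: "1", Nominal: true}, {Cat: "2", Nominal: true}}
	_, cat := randomSplit(z)
	fmt.Println("nominal split on category:", cat)
}
```

Keeping the numeric branch untouched preserves the original iForest behavior on quantitative attributes, while the nominal branch is the only addition.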

Let us consider the input data, represented in Figure 6.2, with three variables, X, Y,

and Z, where the latter is a nominal attribute.

(A) Data set (B) Plot data set

FIGURE 6.2: Example data set and plot.

Based on the explained methodology, a possible iTree structure can be drawn using

the split variables and values depicted at each node of Figure 6.3a. For each node, until

all instances have been isolated or the maximum height is reached, an attribute is selected

randomly. If the attribute is nominal, as depicted on the second layer, on the right side,

a split value is chosen from the possible categories and all the points equal to it should

follow the left branch. As it has been stated, the random cuts performed by this method-

ology draw axis-parallel splits in the data space. Figure 6.3b represents the random cuts

defined by the plotted isolation tree. In this figure, for each subspace, the depth of the tree

in which an instance has been isolated is written in the plot using the term h = i, where i

is the number of branches that lead to the leaf.


(A) Isolation tree structure. (B) Plotted random splits

FIGURE 6.3: Isolation tree and corresponding splits for the data set plotted in Figure 6.2b

Apart from computational efficiency, the major advantage of this method is the ability

to treat different dimensions independently and to be scale-invariant in a multidimen-

sional space. In fact, random subsampling, which successively leads to less populated areas, helps to mitigate disguising effects, as it breaks the space down into regions and filters instances, removing possible interference with the detection of anomalous points. However, as the selected features are drawn randomly, the majority of the splitting cuts are

expected to be irrelevant, especially when training with multiple dimensions. This par-

ticular trait can result in false alarms or poor recall [173].

Another shortcoming, already briefly mentioned, derives from the split values that result in cuts parallel to the axes. This aspect contributes to the inflexibility of the decision boundaries around the clusters in the data set. Thus, regions that do not necessarily

contain many points, but belong to high-density zones, manifest multiple cuts through

them and, therefore, different scores. To solve this problem, Hariri et al. [165] proposed an

extended version of the isolation forest algorithm where the branching procedure, instead

of being parallel to the axis, occurs in different directions based on a random slope.

In our work, to solve these shortcomings, without compromising the efficiency of the

isolation forest method, a random cut isolation forest has been designed derived from

the robust random cut tree splitting method [173]. In this sense, a feature is not chosen

uniformly at random but rather based on its ability to isolate instances. Diversity can be conceptualized as disparity, that is, differences among group members in the possession of valued resources, which can be used to identify outliers [174]. For a numerical variable, its

importance is expressed as the coefficient of variation (CV), also known as relative stan-

dard deviation, calculated as the ratio between standard deviation and the mean value.

This statistic is a standardized measure of the dispersion of a probability distribution or

frequency distribution. It shows the extent of variability around the mean and allows

us to compare different distributions measured on a ratio scale that have a meaningful

zero [175]. Equation 6.4 expresses the formulations used to compute CV, where the dis-

persion is calculated as the square root of variance. By simplifying the equation, the vari-

ance formula can be written as the difference between the expected value of the square of

variable X and the square of the expected value of X.

\[ \sigma(X) = \sqrt{Var(X)} \]
\[ \begin{aligned} Var(X) &= E((X - E(X))^2) \\ &= E(X^2 - 2XE(X) + E^2(X)) \\ &= E(X^2) - 2E(X)E(X) + E^2(X) \\ &= E(X^2) - E^2(X) \end{aligned} \tag{6.4} \]

According to this formulation, CV appears as a good measure of importance, as it is

sensitive to outliers. Therefore, the emergence of a discrepant value inflicts an increase in

the outcome.

For nominal variables, different metrics have been studied to assign an importance

value following a similar regimen to the information gain used in decision trees [176].

However, it is essential to assure that the scale of the importance value is the same for

both nominal and numerical variables. Therefore, as it is attempted to distinguish vari-

ables with discrepant values in their distribution, for qualitative attributes, the distribu-

tion used to assert its importance is the class frequency in a sense where a single category

or equal-numbered categories will assume zero importance. However, disparity measures are proven to be biased with respect to the sample size, particularly by underestimating diversity in smaller groups. Therefore, based on the work published in [177], the correction shown in Equation 6.5 was applied, as it revealed a much smaller maximum bias deviation than other methods.


\[ C_n = \frac{\sqrt{\frac{n-1}{2}}\; \Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)} \qquad q = \frac{n-1}{C_n^2} \qquad s(X)^* = \sqrt{\frac{\sum (x - \bar{x})^2}{q}} \qquad CV(X)^* = \frac{s(X)^*}{\bar{x}} \tag{6.5} \]

This formula is scale-free as changing measurement units won’t affect the frequency

distribution [178]. At each node, this method chooses the attribute that best enables isolation among data points. In particular, an attribute with few distinct values and a large range in its distribution indicates that an outlier might lie along its axis. For the nominal variables, an attribute with a class whose occurrence is considerably different from the rest suggests that at least two different concepts shaped the distribution. Finally, for the attribute cut, the variable with the highest importance is chosen.

In order to provide an illustrative example of how this strategy works, let us consider,

once more, the data set represented in Figure 6.2a. Table 6.1 shows the application of

the corrected attributes’ importance defined in Equation 6.5. The first two lines combine

the numerical features, leaving the last line to the only categorical variable present in the

dataset. Each column represents the most important terms playing in the final importance

calculation.

       Range          N   Cn     ∑(xi²)   x̄      CV*
  X    [1, 2.5]       6   1.05   12.91    1.38    0.43
  Y    [1.5, 2.2]     6   1.05   21.96    1.90    0.16
  Z    {1: 3, 2: 3}   2   1.25   18       3       0

(For Z, the statistics are computed over the class counts.)

TABLE 6.1: Attributes’ importance calculations.

To conclude, Figure 6.4 illustrates an isolation tree built considering the proportional feature importance, keeping the splitting value uniform as in the standard algorithm. The first node corresponds to the calculations stated in Table 6.1, where the chosen attribute is X, as it holds the highest importance.


(A) Isolation tree structure. (B) Plotted random splits

FIGURE 6.4: Isolation tree of the cForest method and corresponding splits for the data set plotted in Figure 6.2b

This last approach, due to the way the features are selected, has the ability to subsam-

ple the data set, revealing significant clusters in the distribution. Thus, the single outlier

point on the highest part of the X-axis receives the highest anomaly score. Next in line,

there are two clusters with a different number of points. In this scenario, the nominal

variable receives special attention, uncovering the point with a different category as an

outlier in this cluster. Although the modified CV importance returns values similar to those of the original CV metric in the first steps, for later nodes, particularly when the number of instances is close to one, the modified value struggles to distinguish attributes. Lastly, this

example shows how the random split value takes place to determine the anomaly scores.

In the last two ranks, three and four, depending on the X-value chosen, the blue point on

the largest cluster can be placed at a higher level. For this to happen, it is only required to

assign a splitting value larger than 1.15 when there are n=3 instances, forcing the outlier

in the upper cluster to have a smaller path length. These situations will be normalized by

averaging the scores an instance obtains in all trees.

6.2 Streaming analysis

Considering the limited storage capacity of devices and the infinite nature of a data stream, it is impossible to store all the objects. To introduce a valid streaming application,

the memory complexity of the machine learning algorithm should be independent of the


number of instances. In fact, it should be of a fixed size and far smaller when compared

to the amount of data already evaluated. The ordering and velocity in this setting require

the data instances to be accessed only once and processed in real time [179].

In the proposed method, the strategy applied to analyze streaming data uses a sliding window with a limited size, one of the basic approaches to discovering knowledge in a dynamic and infinite data stream. By processing finite-capacity windows of data, it also allows machine learning methods designed for static data sets to be transplanted to the dynamic data stream. Thus, the main idea behind this study is to adapt an originally static tree-based unsupervised ensemble into a dynamic and fast anomaly detection framework.
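A minimal sketch of such a fixed-size sliding window in Go; the Window type is illustrative, and a ring buffer would avoid the reslicing, though the memory bound is the same:

```go
package main

import "fmt"

// Window is a fixed-capacity sliding window over a data stream;
// its memory use is independent of the number of instances seen.
type Window struct {
	buf  []float64
	size int
}

func NewWindow(size int) *Window { return &Window{size: size} }

// Push appends an instance, evicting the oldest one once the window
// is full, and reports whether the window holds enough instances to
// (re)train a model.
func (w *Window) Push(x float64) bool {
	w.buf = append(w.buf, x)
	if len(w.buf) > w.size {
		w.buf = w.buf[1:] // drop the oldest instance
	}
	return len(w.buf) == w.size
}

func main() {
	w := NewWindow(3)
	for i := 0; i < 5; i++ {
		full := w.Push(float64(i))
		fmt.Println(w.buf, full)
	}
}
```

Each instance is touched exactly once on arrival, which satisfies the single-pass constraint stated above.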

In order to develop a streaming solution, where an instance should be processed only once while each component is being trained, a set of statistics, such as the depth of the node a point falls into or the set of attributes used to split the data set, is kept, avoiding a second pass through each tree to assign the path of a data point. Therefore, each tree should carry the information of the depth each instance falls into, as well as the set of attributes used to isolate it. These statistics can later be used to determine the rules that ranked points as anomalies.

An ensemble is a set of individual classifiers, base learners, typically weaker than

the ones used in single methods, whose predictions are combined to improve the per-

formance of individual classifiers. Furthermore, this strategy has been proving to be an

efficient way of decomposing a complex problem into easier tasks. Its main motivation

lies in the thought that each part has a domain of expertise and there is no single classifier

that is adequate for all jobs [180].

From a statistical perspective, the hypotheses space is often too large to explore and

there may be several different hypotheses with the same importance. Thus, a single model is biased, compromising its ability to predict future data, as it does not see the bigger picture. By combining hypotheses, it is possible to expand the field of view of the

solution, increasing diversity and performance. On the other hand, it is possible that a

single model does not include the correct hypothesis, making the model unfit to predict.

Accordingly, by expanding the prediction space, the probability of missing the true value

is reduced, attenuating bias [181]. In a computational perspective, combining weak learn-

ers offer better complexities scenarios in a sense that base learner is simpler and, due to

Page 80: Anomaly Detection in Cybersecurity

6. ANOMALY DETECTION IN CYBERSECURITY 67

the random selection of the input space, they can also be independent, allowing parallel

implementations. When it comes to the combining strategy in a regression setting, where

it is expected to obtain good results based on the belief that the majority of experts are

more likely to be correct when they are close in their opinions, the common solution is to

average the outputs of all the learners and, therefore, reducing variance [182]. Further-

more, since each component is expected to be focusing on a particular feature, ensemble

techniques can diminish the effects of getting stuck at a local optimum due to local search

methods, such as hill-climbing [183].

Techniques using ensemble methods fall into two categories - block-based and online ensembles. The first method segments the data stream into fixed-size blocks to train and update the ensemble. The second approach updates base classifiers after each instance, without the need to store or reprocess it. Ensemble methods are therefore an attractive approach to constructing data stream classifiers because, due to their compound structure, they can easily accommodate changes in the data stream distribution, offering gains in both flexibility and predictive power.

To sum up, the final method incorporates a finite sliding window with an ensemble of binary trees designed to isolate instances. Following the idea of the random forest algorithm, each isolation tree is kept as uncorrelated as possible so that the learners can protect each other from their individual errors or masking effects. Therefore, each classifier is trained on a sample taken from the sliding window with replacement. After the training phase, the sliding window is used one last time to predict all the instances' scores. For both splitting criteria implemented, the number of trees is equal to the one proposed by the standard algorithm - ntrees = 100.
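As an illustration of this training scheme, the following minimal sketch (not the thesis implementation; helper names such as build_itree and train_ensemble are hypothetical) grows an ensemble of isolation trees from samples drawn with replacement from a sliding window:

```python
import math
import random
from collections import deque

def build_itree(X, height=0, max_height=None):
    """Grow an isolation tree: random attribute, random split value."""
    if max_height is None:
        max_height = math.ceil(math.log2(max(len(X), 2)))
    if height >= max_height or len(X) <= 1:
        return {"size": len(X)}  # external node; deep points look normal
    d = random.randrange(len(X[0]))
    lo, hi = min(x[d] for x in X), max(x[d] for x in X)
    if lo == hi:
        return {"size": len(X)}
    split = random.uniform(lo, hi)
    return {"dim": d, "split": split,
            "left": build_itree([x for x in X if x[d] < split], height + 1, max_height),
            "right": build_itree([x for x in X if x[d] >= split], height + 1, max_height)}

def c(n):
    """Average unsuccessful-search path length in a BST of n points (iForest paper)."""
    return 2 * (math.log(n - 1) + 0.5772156649) - 2 * (n - 1) / n if n > 1 else 0.0

def path_length(x, node, depth=0):
    """Depth at which x is isolated, with the c(n) adjustment for unresolved leaves."""
    if "size" in node:
        return depth + c(node["size"])
    side = "left" if x[node["dim"]] < node["split"] else "right"
    return path_length(x, node[side], depth + 1)

def train_ensemble(window, ntrees=100, sample_size=64):
    """Each tree sees an independent sample drawn with replacement from the window."""
    return [build_itree(random.choices(list(window), k=sample_size)) for _ in range(ntrees)]

def anomaly_score(x, forest, sample_size=64):
    avg = sum(path_length(x, t) for t in forest) / len(forest)
    return 2 ** (-avg / c(sample_size))  # scores close to 1 suggest an anomaly
```

Under this sketch, the cForest variant described later would replace the uniform choice of the dimension d with a draw proportional to each attribute's ability to isolate points.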

6.3 Concept Drift

As previously stated, the adopted method to deal with infinite data streams divides the input into sliding windows, which can be interpreted as data chunks. The ensemble methodology therefore follows a chunk-based ensemble method for streaming data with nonstationary distributions.

A simple way to handle dynamic ensemble learning is to divide large data streams into small data chunks and train classifiers on each chunk independently. Each batch is relatively small, so the cost of training a classifier is not too high. In this method, instead of saving instances or summarized versions of clusters, a well-trained classifier is saved.

Therefore, this technique is able to cope with unlimited, increasing amounts of data, by partitioning it into small chunks, and with concept drift problems in data stream mining [184].

Chunk-based ensembles can adapt to concept drift by creating a new component classifier from each batch defined by a sliding window, where different distributions or concepts are assumed to hold. Online ensembles update their components' weights after each instance has been processed, offering quick responses to sudden changes [185]. Although block-based methods report some delay in reacting to sudden changes, learning a new component from the most recent batch provides a natural way of adapting to gradual concept drifts. By learning independent classifiers from complete windows, the defined solution is able to apply standard batch models [186]. One particular example is the work proposed by Ding et al. [187], which inspired our proposal. In order to adapt an algorithm originally designed for batch processing, this work feeds the incoming data into a sliding window of predefined size. Based on the experimental results reported by Ding et al., the size of the sliding window used to train our classifier will be ψ = 256.
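The chunk-feeding step can be sketched as follows; chunked is an illustrative helper, not part of the thesis code:

```python
from itertools import islice

PSI = 256  # sliding-window size, following the results of Ding et al.

def chunked(stream, size=PSI):
    """Yield successive fixed-size chunks from a possibly unbounded stream,
    so that a batch-oriented learner can be retrained on each chunk."""
    it = iter(stream)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk
```

Each yielded chunk plays the role of one sliding window: a batch model is trained on it as if it were a complete dataset, sidestepping the unbounded nature of the stream.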

At this point, it is clear that ensemble methods hold a natural way to adapt to concept drifts, either by adding components, removing outdated learners, or retraining and adapting classifiers. However, while domain experts delay providing ground truth for the model predictions, it is important to have a statistical method to detect whether a change in the distribution has occurred and it is time to update the model.

The method used aims to track the average of a stream of real values. It is based on the hypothesis that, if the distribution is statistically consistent, there are not enough indications to suspect a concept drift. Therefore, whenever two sliding windows exhibit sufficiently distinct statistics, it is assumed that the expected values are different and the model is no longer suitable [188].

Bifet et al. [189] proposed a concept drift detection method based on adaptive windows (ADWIN) that measures the inconsistency of distributions using the Hoeffding bound for independent bounded variables. ADWIN is parameter- and assumption-free, as it automatically detects and adapts to the current rate of change. The only parameter required is the confidence bound - δ ∈ [0, 1] - indicating the confidence level to account for in the algorithm's output, inherent to all algorithms dealing with random processes [190]. Some online ensemble methods adapted the ADWIN concept in order to estimate the weights used in the boosting ensemble method. When drift is detected, the worst component is removed and a new classifier is added to the ensemble [185, 191].

The Hoeffding bound [192] states that, with probability 1 − δ, for an i.i.d. random variable Z, where a ≤ Zi ≤ b, the estimated mean after n independent observations of range R = b − a will not differ from the true mean by more than ε, shown in Equation 6.6. According to the Hoeffding inequality, and assuming µ as the true mean of a window, when considering two sliding windows, the previously trained block and the current window, we can write |µW0 − µ| ≤ ε and |µW1 − µ| ≤ ε. By summing these inequalities, it is possible to obtain the relation between the arithmetic means of both windows detailed in Equation 6.7.

    ε = √(R² ln(1/δ) / (2n))    (6.6)

    |µW0 − µW1| ≤ √(2R² ln(1/δ) / n)    (6.7)

In this sense, a concept drift is reported whenever inequality 6.7 is violated, that is, when the difference between the window averages exceeds the bound. Therefore, in the proposed algorithm, which offers an inherent concept drift detection method, µW0 and µW1 are the averages of the variables' importances of the old and current sliding windows, respectively. The variables' importance was chosen to detect concept drift because a significant change in the average distribution might indicate that a discrepant value in one of the features appeared or disappeared, forcing the model to be updated. Finally, if the algorithm is not at a training stage when a concept drift is detected, the ensemble is reformed.
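This drift test follows directly from inequality 6.7 and can be sketched as below (the function name hoeffding_drift is illustrative; R and δ are the range and confidence bound defined above):

```python
import math

def hoeffding_drift(mean_old, mean_new, n, R=1.0, delta=0.05):
    """Flag a concept drift when the two window averages differ by more
    than the Hoeffding bound of Equation 6.7: sqrt(2 R^2 ln(1/delta) / n)."""
    eps_cut = math.sqrt(2 * R**2 * math.log(1 / delta) / n)
    return abs(mean_old - mean_new) > eps_cut
```

For a window of n = 256 importances in [0, 1], the cut threshold is roughly 0.15, so a jump of the average importance from 0.1 to 0.9 is flagged while 0.45 versus 0.5 is not.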

6.4 Complexity analysis

In the original paper of the standard iForest, it is shown that this method outperforms several other methodologies on various datasets. The scalability and complexity analysis performed in [193] showed that the worst-case time complexities for training and evaluation are O(ψ²ntrees) and O(nψntrees), respectively, where n is the number of points in the testing dataset.

In our proposed approach, for the time analysis, it is important to recall that this method is built on a binary tree structure that, given the isolation concept and splitting criteria, cannot be guaranteed to be balanced. However, a height limit threshold has been implemented, since instances that lie at deeper levels are more likely to be normal points. In this sense, it is reasonable to assume that the time complexity for the training

step per tree is O(ψ log2 ψ). For the ensemble iForest, it is thus possible to conclude O(ntrees ψ log2 ψ). As suggested before, the ensemble is a set of independent iTrees, allowing each component to be trained in parallel and the time complexity of the ensemble to equal that of a single learner. For the evaluation time, an instance has to pass through each tree in order to be assigned a path length used to determine its anomaly score. For each tree in the ensemble, the saved path lengths are added to determine the expected height of an instance in the isolation forest. Therefore, for n test instances, always less than or equal to ψ, the time complexity is O(n ntrees log2 ψ).

For the space complexity, each iTree carries an additional cost compared to the original implementation, as the array of scores, the set of attributes used to cut the branches, and the average score of each tree are kept to allow transparency and a fast processing time. In particular, each tree saves an array of size ψ, a set of attributes used to unveil the mechanism of a potential threat, which will be of size log2 ψ, equal to the maximal height, and a set of nodes. It is possible to distinguish between leaf nodes, also called external, and internal nodes. For an isolation tree, it is expected to find, at most, ψ external nodes and ψ − 1 internal nodes. Therefore, the space complexity to store the tree structure will be O(2ψ − 1 + log2 ψ + ψ) which, by applying the general properties of asymptotic analysis, is equivalent to O(ψ), linear in the sliding window size.

To finish the complexity analysis, it is important to compare the two splitting criteria mentioned before. The first method processes each dimension looking for the minimal and maximal values, in the case of a numerical variable, and the possible categories, when given a nominal attribute. On the other hand, the second method suggested - cForest - implies evaluating the data distribution of each attribute in order to choose the dimension that is more likely to isolate more points. In this sense, both strategies have a time complexity of O(dψ), where d is the number of dimensions or attributes in the dataset. For the concept drift detection method, both strategies benefit from a statistical test to infer whether the data distribution has changed considerably. Since the arithmetic mean is calculated as the sum of all values divided by the number of attributes, when processing each importance, the cumulative sum variable is updated in the auxiliary data structure, adding no extra time complexity to the computation of the attributes' importance.

To conclude, Figure 6.5 illustrates the proposed framework and the workflow of the anomaly detection, including the important steps not only of the isolation training and testing but also of the concept drift detection and anomaly report notification method

explained throughout this chapter.

FIGURE 6.5: Isolation model framework

Chapter 7

Experimental Studies

Scientific advances rely on the reproducibility of results so they can be independently

validated and compared. Many intrusion detection approaches have been evaluated

based on proprietary data and results are generally not reproducible [194].

An anomaly detection system is usually evaluated by its ability to make accurate pre-

dictions. Evaluation results usually affect the choice of a suitable approach for a particu-

lar environment, depending on several factors such as the ability to correctly distinguish

anomalies from normal points and the importance of the returned insights, not to mention

the delay rates between the reported anomaly and its first log [195].

Most well-known IDS solutions were analyzed, trained, and evaluated against datasets that are static or contain a small array of possible threats. In most cases, these datasets gather old data that is neither updated nor oriented to context. Currently available datasets lack real-time and real-life characteristics [8].

In this chapter, we discuss the evaluation setup, covering important topics such as the metrics used to validate our approach, the anomalies to be tested, and, finally, our results and comparisons with similar solutions.

7.1 Evaluation scheme

To generate a real-time and context-aware dataset to reflect our environment and sys-

tem, a data host collection framework, called miningbeat, has been designed. As this

solution is specific to the host on which it is running and measures the system's performance during execution, there is not a clear separation between anomalies and normal

data. Therefore, it is difficult to assess the model's performance in a fully unsupervised

environment.

The evaluation scheme of real-life anomaly detection that respects the ecosystem must rely on clear and precise criteria to validate the proposed method. In this sense, variables such as the delay time, defined as the difference between the input and the output of the report, the event loss, and the false alarm rate are studied to test our hypothesis.

In the proposed model, a threshold is used to separate anomalies from normal data.

Instances with an anomaly score higher than 0.75 are reported. Each report should be interpreted as a warning signal to notify domain experts of out-of-the-ordinary events. In these messages, alongside the PID and respective score, the time of arrival, the time of notification, the set of attributes that isolated the point, and the command executed are

also reported.
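The reporting step can be sketched as follows; the field names and the make_report helper are illustrative, not the actual miningbeat schema:

```python
import time

ANOMALY_THRESHOLD = 0.75  # instances scoring above this value are reported

def make_report(pid, score, time_of_arrival, rules, cmd):
    """Build a warning message with the fields described in the text."""
    return {
        "pid": pid,
        "score": score,
        "time_of_arrival": time_of_arrival,
        "time_of_notification": time.time(),
        "rules": rules,   # attributes that isolated the point
        "cmd": cmd,       # command executed by the process
    }

def maybe_report(pid, score, time_of_arrival, rules, cmd):
    """Return a report only for instances above the anomaly threshold."""
    if score > ANOMALY_THRESHOLD:
        return make_report(pid, score, time_of_arrival, rules, cmd)
    return None
```

The delay time of a report is then the difference between its time_of_notification and time_of_arrival fields.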

To measure the event loss and how efficient the evaluated model is, the F1-score was calculated considering TP as the number of reports concerning the malicious activity, FN as zero, since these experiments were conducted in a controlled environment, and, finally, FP as the number of entries in the report not related to the inflicted threat.
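Under these assumptions recall is always 1 whenever at least one report concerns the attack, so the score is driven by precision. A small sketch, with an illustrative f1_from_report helper:

```python
def f1_from_report(reported_pids, attack_pids):
    """F1 under the controlled-environment assumptions of the text:
    TP = reports tied to the malicious activity, FN = 0,
    FP = report entries unrelated to the inflicted threat."""
    tp = sum(1 for pid in reported_pids if pid in attack_pids)
    fp = len(reported_pids) - tp
    fn = 0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a report with six entries about the attack and one unrelated entry gives precision 6/7 and recall 1, hence an F1 of about 92.3%.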

In this study, we focus on the adapted isolation model detailed in the previous chapter. To compare both versions and to understand the differences between the standard random choice and the proportional ability to isolate instances, the two models, descendants of the batch-oriented isolation model, were executed on a simple machine not influenced by any malicious activity. Figure 7.1 shows the results of the streaming host-based isolation model, implemented following the original isolation paradigm (Figure 7.1a) and the adapted feature importance cForest (Figure 7.1b).

In both outputs, it is possible to see that miningbeat tracks a process through its life cycle, reporting each execution that is out of the ordinary. In fact, both models' chronological reports depict an increasing and stabilizing anomaly score, allowing an expert to watch critical processes and decide what should be done to control the situation. The average delay time of the first model (18 s), defined as the average of the delay times of each report, is much higher than the one reported by cForest (0.7 s). However, different delay times may suggest that these reports were issued at different stages, as the training phase takes more time than the prediction phase.

FIGURE 7.1: Anomaly reports by the isolation models. (A) iForest output. (B) cForest output.

The Rules metric declares the set of attributes used to isolate that instance. In the first example, it reports perm, which is the nomenclature used to express the permission criticalness variable, stating that the reason this process was flagged as an outlier is

that it had accessed critical content protected with high privileges. For the second model,

the chosen variable was a qualitative attribute (UID). In fact, based on the equation used to assign importance to an attribute, nominal variables are more likely to isolate data points, as they tend to have fewer categories than a numerical variable with a continuous domain of [min, max]. In this case, choosing UID over permission criticalness might imply that a distinct user was used to execute this process.

When looking at the CMD field, it is possible to confirm the importance of the insights suggested by the rules. In both settings, the executed command concerns a seccomp flag, which stands for secure computing mode. Seccomp restricts the system calls that an application can issue. By setting this flag, some Elastic Beats inherit the possibility

to access only a controlled range of system calls, restricting the consequences when malicious arbitrary commands are executed, typically in RCE attacks [196].

At last, cForest assigns higher scores than iForest, possibly because many trees are expected to choose the same attribute at each level with different splitting values, forcing an instance to be placed at similar levels throughout the ensemble. In this sense, anomalies in cForest are more likely to be ranked with values closer to each other, which might indicate that the anomaly threshold used to report instances needs to be higher than the one applied in iForest.

7.2 Evaluation setup

The experimental environment is designed according to the ecosystem in which the final model is supposed to work. Therefore, the setup was configured bearing in mind the specifications of an IoT actuator device. However, our model is expected to also work on servers or workstations, validating our approach in a wider range of applications. For the evaluation metrics, we are interested in calculating the event loss and false alarm rates, as well as interpreting the variables used to isolate instances. Therefore, to account for meaningful rules, our experimental tests were evaluated using the adapted version of the original isolation model, here denominated cForest, as, in this version, the attributes are chosen for their ability to isolate instances.

To evaluate the anomaly detection proficiency, it is vital to estimate how the model responds when facing cyber threats. Therefore, while running miningbeat and our host-based anomaly detection model, a considerable number of vulnerabilities is exploited so that anomalous traffic is generated in a controlled environment. This strategy provides ground truth for the evaluation scheme.

For the evaluation study, a set of vulnerabilities from different types of attacks currently threatening IoT infrastructures is exploited while the system performs its usual tasks. These vulnerabilities are available in the Common Vulnerabilities and Exposures (CVE) list, whose entries contain an identification number, a description, and at least one public reference for publicly known cybersecurity vulnerabilities [197]. The anomalies used to validate the model were chosen based on the typical vulnerabilities of ongoing services and applications an attacker wants to exploit to compromise or gain access to a particular device, as well as the base score assigned to each threat according to its impact [198]. In

this sense, with a base score higher than 7.5 (level high), improper file reads, privilege

escalation, and Remote Code Execution (RCE) were our priority. To attest how the model

behaves when the threat is already running on the machine, a DoS attempt that could

potentially compromise the system, resulting from a simple bug in one of the programs,

will also be evaluated.

To fully validate our approach, other IDSs will be compared. Among these, we can find systems such as OSSEC, an HIDS similar to our proposal; Fail2Ban [64], an intrusion prevention software mainly used with Secure Shell (SSH) services; and Tripwire, used to detect threats and harden configurations in real time, mostly to detect file modifications [67]. These methods were chosen over other mentioned intrusion systems because they are either close rivals, offering similar backgrounds and concepts of an HIDS, such as OSSEC, or specialized in particular exploits matching the ones tested in this chapter, such as Fail2Ban or Tripwire.

Since many vulnerabilities can be specific to a certain version, not only of the application where the issue is found but also of the operating system architecture and version, our first option was to implement each test using Docker containers so that the requirements could be fulfilled without interfering with other trials. However, the Docker solution could not be used in our experimental study, as Docker builds and stores images used only by containers, which do not share the same file system with the host system. As a result, miningbeat would not be able to monitor what is being accessed inside the container, pointing to our first obstacle.

Therefore, the next sections will describe the tested cyber threats in a configured en-

vironment to match the requirements of the anomalies and the proposed application

setting of the model.

7.3 Experimental trials

In this section, each trial is described according to the problems and resources that each vulnerability compromises, as well as how the proposed model reacts to each threat. To finish this analysis, the previously discussed methods are compared with the evaluated model. Figure 7.2 and Table 7.1 summarize the obtained results. In the last columns of Table 7.1, the effectiveness in detecting each test is depicted with bullets for each compared system. A filled bullet states that the vulnerability is detected by a more general approach than a signature rule. Three-quarter bullets are associated with intrusion systems that apply a personalized rule to seize each threat or that implement

signature rules depending on the configuration, scoring high FP rates. At last, empty or

times-signed circles are destined for systems that could not detect the anomaly and for

those that do not implement a detection mechanism for a particular threat, respectively.

The first experiment was conducted by running a simple Python program with a while True: loop writing to a common file placed in the home directory. This trial, unlike the other experiments, is not a consequence of a vulnerability in an installed application triggered by a connection between two entities. This process starts from the inside and, despite not compromising the system performance, is an outlier consuming unnecessary resources. In the proposed model, the reported anomalies are related to tasks regarding restricted files or operations, such as rsyslogd or dbus-daemon. However, the Python program obtained the highest number of occurrences. These results reflect the variables' importance used to choose the splitting attribute at each node, where, indeed, the model seeks to closely monitor tasks endured by a privileged user. As a result, none of the compared intrusion systems detects this scenario, while our model achieves 100% in F1-score. This information is represented with a filled circle for our model and empty bullets for the compared intrusion systems in Table 7.1.

To test the envisioned model against common vulnerabilities, the second experiment is an RCE attack via a crafted HTTP request (CVE-2019-16278). This can be classified as a point anomaly, as it only occurs when an isolated critical event is triggered. Therefore, before the attack, the system is at its normal stage, running intrinsic tasks. When the attack occurs, the model detects a change in the previous concept, reporting it as an anomaly. Finally, the machine stabilizes and returns to the stage before the attack. Before the attack, the model outputs, out of all the processes running, the one consuming the most resources, in this case rsyslog. As a result, rsyslog is notified before the attack, followed by commands related to this particular CVE, and, finally, when the system stabilizes, rsyslog appears once again as the model rearranges its perception. To sum up, since this experiment concerns an isolated anomaly, there are fewer anomaly reports for this attack than for the events that occurred before and after, giving this attack the lowest F1-score. This vulnerability is detected by other intrusion systems based on personalized rules that keep an alert on the www-data user and its permissions. As a result, in Table 7.1, this CVE appears with three-quarter filled bullets for all systems.

The second CVE trial was a privilege escalation attack that allows a sudoer account

to bypass certain policy blacklists (CVE-2019-14287). In this trial, a critical file such as

/etc/shadow was read. This particular attack is classified as contextual because the root UID has the power to perform these tasks without causing an out-of-the-ordinary event. However, in this context, the real UID was a forged user with common privileges, only allowed to perform non-root commands, that had been added to the sudoers file. When logging into this account and prompting /bin/bash, the UID is root while the group is a common user, making this event critical. In this trial, as the connection with this forged user was kept open, the anomaly reports only contained incidents related to this attack, achieving the highest F1-score possible. Given the type of network attack and the applications of the compared IDSs, OSSEC keeps a list of malicious or critical users, blocking them as they try to log in. Fail2Ban, despite implementing mechanisms to detect this type of attack, did not detect this particular threat. Therefore, in Table 7.1, Fail2Ban is marked with an empty circle. Tripwire, as it is mainly used for file integrity checking, does not provide a detection method for this anomaly and is thus represented with a times-signed circle.

The third CVE exploits an issue discovered in Webmin 1.840 and 1.880 when the default setting of "Can view any file as a log file" is enabled (CVE-2018-8712). In this experiment, to ensure that a critical file is accessed and the attack is fulfilled, /etc/shadow was introduced as an argument. According to the metrics proposed in miningbeat, this CVE was detected based on the permission value as well as the UID. Given its punctual character, it is classified as a point anomaly; as the model analyses streaming events, as soon as the attack is no longer active, the model stabilizes and returns the most consuming processes. Therefore, the anomaly reports show the exploit being active, followed by the gnome-shell process, which is not considered part of the performed anomaly, decreasing its F1-score. As a result of being a file access vulnerability, all IDSs have personalized rules to detect this threat. However, Tripwire only detects this vulnerability if the partition is mounted with strictatime, which explicitly requests full atime updates.

At last, an experiment using an unauthorized file read CVE (CVE-2020-1938) was conducted. This vulnerability allows a user to execute Tomcat to read restricted files that only Tomcat should have access to. As this process is executed by Tomcat, the model interprets the UID as the same as the file's owner. Therefore, despite the value assigned to the file's permission, the importance will not be significantly high. In this sense, this CVE was not detected, as this interaction is not considered out of the ordinary. However, it is important to mention that, if the attack escalates, the incident will fall into one of the previously analyzed exploits and will be reported. To sum up, this approach

is not a vulnerability detection method but rather a seriousness ranking and detection mechanism. This attack is neither detected by Fail2Ban nor practicable in Tripwire. However, OSSEC can detect this attack by monitoring Tomcat log files, which need to be continuously updated. Despite being complex, this strategy is expected to achieve a high FP rate since, given that the user is tomcat, it is difficult to distinguish normal requests from critical entries.

Attack                                   Anomaly     Delay (min)  F1-score  Our model  OSSEC  Fail2Ban  Tripwire
Python DoS                               Collective  4.41         100%      ●          ○      ○         ○
CVE-2019-16278¹ (RCE)                    Point       1.19         85.7%     ◕          ◕      ◕         ◕
CVE-2019-14287¹ (Privilege Escalation)   Contextual  0.72         100%      ●          ◕      ○         ⊗
CVE-2018-8712¹ (Improper file access)    Point       2.52         92.1%     ●          ◕      ◕         ◕
CVE-2020-1938¹ (Unauthorized file read)  -           -            -         ○          ◕      ○         ⊗

● Detected    ◕ Signature    ○ Undetected    ⊗ Impractical

TABLE 7.1: Experimental trials

To conclude, Figure 7.2 represents, for each experiment, the average delay time in minutes for each process that has been labeled, ordered chronologically by the reported timestamp. Each plot also depicts the time interval in which the tested vulnerabilities were active on the device.

Figure 7.2a shows the delay time of each reported process in the Python experiment. Although most of the events have an average delay time lower than one minute, the python PID reaches more than four minutes of delay. Another important aspect is the interval time, indicating that the commands notified at an early stage were not active concurrently with the real anomaly. In this sense, the first two commands were reported before any real threat was triggered. After that, the model updated its perspective to further alert to the vulnerability. This workflow might explain the differences in the delay times, as the model detected a concept drift and was forced to update its structure.

¹ https://nvd.nist.gov/vuln/detail/[ID]

Figure 7.2b shows that the commands rsyslog and CVE-2019-16278 were both active on the machine when the anomaly was reported, indicating the existence of a false alarm. When the CVE is no longer active, the previous command appears, once again, as an anomaly. Given the difference between the first two delay times and the last one, just like in the previous trial, a concept drift was detected and the model rearranged its components.

Figures 7.2c and 7.2d show that the anomalies were only reported after the vulnerability was no longer active. In the first plot, the CVE was the only anomaly reported, achieving a high score. However, in Figure 7.2d, all processes were reported at the same time, granting this test a lower score.

To sum up, these results reflect the system state determined by the processes running on the device according to its life cycle at the time the tests were conducted. As it is not possible to replicate a streaming real-time event, the experimental trials were reproduced on the same device with the same characteristics but at different times and process states. Each plot reinforces the idea of a behavioral model that is unique to a particular time and a particular set of tasks. As a result, this concept makes our model a more robust solution for detecting zero-day attacks, where common solutions fall short when the signature rule is not yet defined.

FIGURE 7.2: Delay time for the conducted experiments. (A) Python experiment. (B) CVE-2019-16278. (C) CVE-2019-14287. (D) CVE-2018-8712.

Chapter 8

Conclusion

IoT is growing faster and more complex every day, making security and privacy an urgent demand and challenge. In this work, we analyzed the environment and inherent requirements of the proposed assignment. Therefore, the IoT characteristics and risks have been detailed in order to design a solution contemplating its major features. Our priority is to provide a better and more robust solution.

Although we live in a connected and data-driven era, finding the right dataset is not always an easy task. The key components of a machine learning approach are the dataset and the time invested in data preparation. From the chosen attributes to the cleaning process, all steps deserve special attention since they play an important role in the ultimate performance. In the previous chapters, we discussed the potential risks and threats, as well as the ultimate and common target of each threat and layer, to have a clear perception of the architecture and complexity of the working environment. Accordingly, a data collection framework, called miningbeat, was designed to monitor system performance with carefully chosen metrics to filter and classify dangerous behaviors of a program. Every function and metric was developed based on the system architecture and the assets potentially exploited, with a special concern for time and memory performance.

Anomaly detection in IoT faces strict limitations regarding memory and time, not to mention the imbalanced classes it involves and the heterogeneous formats and volume of a real-time application. As a result, IoT is the perfect example of a streaming context. In this sense, the choice of the machine learning model ought to consider not only outlier detection but also the IoT application constraints, so that the final solution is as robust as possible.
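The memory and time constraints above can be made concrete with a minimal sliding-window sketch (illustrative Python; the thesis implementation itself is in Go). The detector keeps only a fixed-size buffer of recent metric values, so every update runs in constant time and bounded memory no matter how long the stream runs.

```python
from collections import deque


class SlidingWindow:
    """Fixed-size window over a metric stream: bounded memory and
    constant-time updates -- the kind of constraint a streaming IoT
    anomaly detector must respect."""

    def __init__(self, size):
        self.size = size
        self.items = deque(maxlen=size)
        self.total = 0.0

    def push(self, value):
        if len(self.items) == self.size:
            self.total -= self.items[0]   # subtract the value about to be evicted
        self.items.append(value)          # deque with maxlen evicts automatically
        self.total += value

    def mean(self):
        return self.total / len(self.items) if self.items else 0.0


w = SlidingWindow(3)
for v in [1.0, 2.0, 3.0, 10.0]:
    w.push(v)
print(w.mean())  # mean of the last 3 values: (2 + 3 + 10) / 3 = 5.0
```

Only the last `size` values ever reside in memory, which is why windowed summaries are the standard building block for the streaming setting described here.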


In terms of the true purpose and applicability of an artificial-intelligence-enhanced system, most of the state of the art does not offer ways to mitigate the cost of an incorrect decision. Therefore, most currently used frameworks do not provide a robustly designed machine learning system. In most cases, these systems are used only to visualize hidden patterns, leaving the last, and sometimes the only, call to be made by an expert. There is thus an opportunity to increase the power and reliability of current approaches in order to improve cyber defense in matters that are easily missed by humans yet perceptible by machines.

A HIDS, such as the one developed in this work, runs on independent hosts or devices on the network. It monitors the consequences of a particular task triggered by processes responsible for handling file descriptors, changes of permissions or ownership, and incoming and outgoing packets. ML-based methods have a better generalization property than signature-based IDS, as these models can be trained according to the applications and hardware configurations. In a hybrid solution, host agent or system data is combined with network information to develop a complete view of the network system. Therefore, throughout this work, we have encouraged combining signature-based detection, as a first defense line, with host-based anomaly detection, like the one discussed in this thesis, as a second defense line.

Current trends in the literature show that these two fields have not been able to overcome their respective drawbacks. While machine learning tends to ignore IoT resource limitations and the possibility of overflowing the network when deploying its solutions, cybersecurity often fails to evaluate or collect the right datasets when training a model. To overcome these drawbacks, we proposed an anomaly detection model built on background concepts that can match the complexity, and the insightfulness of reports, required of an anomaly detection system in cybersecurity.

The proposed algorithm is built in a tree structure that splits data into various possible outcomes, according to each node and its branches. These nodes lead to further splits until an external node is reached. The purpose of using a decision tree is therefore to present the required information to the user in the most efficient way, using a logical question structure, providing relevant information, and avoiding unnecessary questions or detours. Accordingly, the path each instance follows from the root to its external node represents the set of attributes used to isolate data points or, in the case of anomalies, the rules that distinguish them from the rest.
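The isolation principle behind this tree structure can be sketched as follows: the number of random splits needed to separate a point from the rest serves as an anomaly signal, since outliers end up alone after very few cuts. This is an illustrative Python sketch with made-up data, not the streaming variant proposed in this thesis.

```python
import random


def isolation_path_length(point, data, depth=0, max_depth=8):
    """Split the data on a random attribute at a random cut point and
    follow the side containing `point`; return the depth at which the
    point ends up alone (or the depth cap)."""
    if depth >= max_depth or len(data) <= 1:
        return depth
    attr = random.randrange(len(point))
    values = [row[attr] for row in data]
    lo, hi = min(values), max(values)
    if lo == hi:                       # this attribute cannot split the subset
        return depth
    cut = random.uniform(lo, hi)
    same_side = [r for r in data if (r[attr] < cut) == (point[attr] < cut)]
    return isolation_path_length(point, same_side, depth + 1, max_depth)


def avg_path_length(point, data, trees=200):
    """Average over many random trees to smooth out single-tree noise."""
    return sum(isolation_path_length(point, data) for _ in range(trees)) / trees


random.seed(0)
data = [(x, x % 7) for x in range(100)] + [(500, 500)]   # one clear outlier
outlier = avg_path_length((500, 500), data)
normal = avg_path_length((50, 1), data)
print(outlier < normal)   # the outlier needs fewer splits to isolate -> True
```

The path from root to leaf records exactly which attribute/cut pairs separated the point, which is what makes the anomaly explainable as a rule.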


The experimental studies and the proof of concept showed that our proposed method has a behavioral essence, adapting to how the system reacts to processes running vital system calls, while other intrusion systems focus their attention on specific files or directories. Although we were not able to test more vulnerabilities, the experimental trials were carefully chosen based on the immediate danger they inflict on the system and on the fact that many cyber threats might end up exploiting these attacks. As a result, even if a vulnerability is not detected as soon as it appears in the system, there is no reason to believe our model will not detect the problem once it escalates to a compromising level. In essence, our results depict our model as a generalized and more apt intrusion detection system.

In this work, we presented a data collection framework capable of gathering system metrics optimized for data science processing, as well as an efficient host-based anomaly detection system. The remaining idealized approaches belong to future research directions.

8.1 Contributions

In this work, the first research question aspired to study how anomaly detection, approached as a data science field, can help to improve current cybersecurity systems. Accordingly, the first contribution was the detailed literature review spread throughout the first three chapters. Chapter 2 presents all the important methodologies proposed to answer our research question. The following chapters deliver an elaborate contextualization, adding a complete section on the main approaches used in both cybersecurity and anomaly detection.

As the background concepts were analyzed, we concluded that most of the applied methods in cybersecurity are too specific, and the only machine learning capability provided is to simply compare files or hashes. On the other hand, in anomaly detection, the right dataset is sometimes neglected and the main target is not studied enough. To capture the system's dynamics, we designed a host-based monitoring tool, written in Golang and developed for the Elasticsearch family, that focuses on system calls responsible for thorough executions. This contribution, an Elastic Beat context-aware monitoring tool named miningbeat, was introduced in Chapter 5.

Furthermore, it is important to mention that the system developed in this thesis was implemented entirely in Golang and integrated into the Elastic Beats community. As the language is not oriented to data manipulation, an open-source library to handle data frames was used. Our contribution here lies in the adaptations added to this library to perform random subsampling, with or without replacement, on a data frame.
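The subsampling behavior described above can be sketched minimally as follows (in Python for brevity; the actual contribution extends a Go dataframe library, and the row layout shown is hypothetical).

```python
import random


def subsample(rows, n, replacement=False, seed=None):
    """Random subsample of a data frame's rows.

    With replacement, each draw is independent, so n may exceed the
    frame size and duplicates are allowed; without replacement, each
    row appears at most once."""
    rng = random.Random(seed)
    if replacement:
        return [rows[rng.randrange(len(rows))] for _ in range(n)]
    if n > len(rows):
        raise ValueError("cannot draw more rows than exist without replacement")
    return rng.sample(rows, n)


frame = [{"pid": i, "cpu": i * 0.5} for i in range(10)]
without = subsample(frame, 4, seed=1)
with_rep = subsample(frame, 12, replacement=True, seed=1)
print(len(without), len(with_rep))  # 4 12
```

Sampling with replacement is what bootstrap-style tree ensembles rely on, which is why both modes matter for the isolation model built on top of this library.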

Following the need for a robust machine learning method to improve an intrusion detection system in real time, an isolation model for streaming data is proposed in Chapter 6. This method can detect concept drifts and detect anomalies based on the current status of the system, adding generalization and efficiency when the device has been compromised. Beyond the initial implementation, this chapter also describes the variable importance measure, designed to order the variables according to their ability to isolate instances. Two more contributions therefore follow: the variation metric used in the attributes' cut criteria and the concept drift method used to determine significant changes in the distribution.
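As a simplified stand-in for such a drift detector (this is not the method proposed in Chapter 6; the window sizes, threshold, and metric are illustrative), one can freeze a reference window and flag the stream once the recent window's mean moves several reference standard deviations away:

```python
from collections import deque


class DriftDetector:
    """Flags a significant change in a metric's distribution by
    comparing a recent window against a frozen reference window."""

    def __init__(self, window=50, threshold=3.0):
        self.reference = deque(maxlen=window)
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(value)   # still building the reference
            return False
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False                   # recent window not yet full
        ref_mean = sum(self.reference) / len(self.reference)
        ref_var = sum((v - ref_mean) ** 2 for v in self.reference) / len(self.reference)
        ref_std = max(ref_var ** 0.5, 1e-9)
        cur_mean = sum(self.recent) / len(self.recent)
        return abs(cur_mean - ref_mean) / ref_std > self.threshold


d = DriftDetector(window=20)
stream = [0.0, 2.0] * 20 + [8.0] * 20   # stable regime, then a level shift
drifted = [d.update(v) for v in stream]
print(drifted.index(True))  # 48: flagged once the shifted values dominate the window
```

A real detector would also reset or re-learn the reference after a confirmed drift; this sketch only shows the detection side.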

As this model aims to isolate discrepant instances based on the current distribution, an attacker will find it difficult to bypass the system. To mask an anomaly, an attacker would need either to level its importance with the other entries or to force all processes on the machine to follow the same pattern. In this sense, our final contribution is the development of a host-based anomaly detection system capable of detecting zero-day attacks and possibly resilient to adversarial scenarios.

8.2 Future work

Although we achieved our goals and used stream mining to improve intrusion detection in real-world scenarios, our solution is not without limitations. As we began to test and worked through some of the exploited vulnerabilities, we gained different perspectives and perceptions on how our system captures the system dynamics.

Starting with miningbeat, it is still possible to tune some of the proposed metrics. For instance, we began to implement a duration rate metric to register the utilization rate of a particular application. This is very promising when dealing with a DoS attack, where a connection is kept alive for a long time without performing exceptional tasks. This metric still needs some adjustments.

Regarding the isolation model, future work could dedicate more time to the concept drift detection methods and the attribute importance metric; it would also be interesting to study an online ensemble solution to achieve high performance at lower computational cost. Regarding the model's outcome, it seems undeniable that an intrusion detection system, to be accepted, must allow some expert interaction to tune the anomaly threshold or guide the concept drift detection method towards a more robust solution.

An efficient and useful intrusion detection system requires continuously proven performance to be accepted in the cybersecurity community. As a result, despite having evaluated the designed model with high-ranked base scores, we are aware that the validation and tuning phase is not complete. An important future step is therefore to carefully test and study more vulnerabilities in order to build a robust intrusion detection system.

To sum up, we idealized a hybrid detection system composed of signature-based and host-based anomaly detection as the first and second defense lines, increasing flexibility and reusability while reducing complexity. Given the transparency of the model structure, the decision rules returned by the algorithm could be used and adapted to construct new rules and update the first defense line. This would yield an updated and efficient intrusion detection system, resilient to zero-days and adversarial threats.
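Lifting rules from the tree for the first defense line could work as sketched below: each root-to-leaf path becomes a conjunction of split conditions. The node structure and metric names here are hypothetical, chosen only for illustration; the thesis does not prescribe this exact format.

```python
# Hypothetical tree node: (attribute, cut, left, right) for internal
# nodes, or a leaf label string.
def extract_rules(node, path=()):
    """Walk a decision tree and return one (conditions, label) pair
    per leaf -- the kind of rule that could seed a signature-based
    first defense line."""
    if isinstance(node, str):                       # leaf: emit the path as a rule
        return [(" AND ".join(path) or "TRUE", node)]
    attr, cut, left, right = node
    rules = extract_rules(left, path + (f"{attr} < {cut}",))
    rules += extract_rules(right, path + (f"{attr} >= {cut}",))
    return rules


tree = ("syscalls_per_sec", 120,
        "normal",
        ("open_fds", 64, "normal", "anomaly"))

for conditions, label in extract_rules(tree):
    print(f"{label}: {conditions}")
# normal: syscalls_per_sec < 120
# normal: syscalls_per_sec >= 120 AND open_fds < 64
# anomaly: syscalls_per_sec >= 120 AND open_fds >= 64
```

Because each rule is a plain conjunction over observable metrics, it can be translated into the matching syntax of a signature engine and deployed without the model itself.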

Page 99: Anomaly Detection in Cybersecurity

Bibliography

[1] P. Suresh, J. V. Daniel, V. Parthasarathy, and R. H. Aswathy, “A state of the art review

on the internet of things (iot) history, technology and fields of deployment,” in 2014

International Conference on Science Engineering and Management Research (ICSEMR),

11 2014, pp. 1–8. [Cited on page 1.]

[2] Y. Lu and L. D. Xu, “Internet of things (iot) cybersecurity research: A review of

current research topics,” IEEE Internet of Things Journal, vol. 6, no. 2, pp. 2103–2115,

Apr. 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8462745

[Cited on page 1.]

[3] R. Roman, J. Zhou, and J. Lopez, “Internet of things (iot) cybersecurity research: A

review of current research topics,” Elsevier Computer Networks, vol. 57, no. 10, pp.

2266–2279, Jul. 2018. [Online]. Available: https://doi.org/10.1016/j.comnet.2012.

12.018 [Cited on page 1.]

[4] L. Leenen and T. Meyer, “Artificial intelligence and big data analytics in support of

cyber defense,” Developments in Information Security and Cybernetic Wars, pp. 42–63,

jan 2019. [Cited on page 1.]

[5] W. Ding, X. Jing, Z. Yana, and L. T. Yang, “Artificial intelligence and big data

analytics in support of cyber defense,” Information Fusion, vol. 51, pp. 129–244, dec

2018. [Online]. Available: https://doi.org/10.1016/j.inffus.2018.12.001 [Cited on

page 2.]

[6] E. Ismagilova, L. Hughes, Y. K.Dwivedi, and K. R. Raman, “Smart cities: Advances

in research—an information systems perspective,” IEEE International Conference on

Consumer Electronics (ICCE), vol. 47, pp. 88–100, Aug. 2019. [Cited on pages 2

and 12.]

86

Page 100: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 87

[7] M. Cherian and M. Chatterjee, “Survey of security threats in iot and emerging coun-

termeasures,” International Symposium on Security in Computing and Communication,

pp. 591–604, jan 2019. [Cited on page 2.]

[8] H. Hindy, D. Brosset, E. Bayne, A. Seeam, C. Tachtatzis, R. Atkinson, and

X. Bellekens, “A taxonomy of network threats and the effect of current datasets

on intrusion detection systems,” IEEE Access, vol. PP, pp. 1–1, 06 2020. [Cited on

pages 2, 13, and 72.]

[9] G. Apruzzese, M. Colajanni, L. Ferretti, and M. Marchetti, “Addressing adversarial

attacks against security systems based on machine learning,” 2019 11th International

Conference on Cyber Conflict (CyCon), Jul. 2019. [Cited on page 2.]

[10] M. Ahmed, A. N. Mahmood, and J. Hu, “A survey of network anomaly detection

techniques,” Journal of Network and Computer Applications, vol. 60, pp. 19–31, Jan.

2016. [Cited on pages 2 and 36.]

[11] M. Kantarcioglu and B. Xi, “Adversarial data mining: Big data meets

cyber security,” CCS ’16 Proceedings of the 2016 ACM SIGSAC Conference on Computer

and Communications Security, pp. 1866–1867, Oct. 2016. [Online]. Available:

http://dx.doi.org/10.1145/2976749.2976753 [Cited on page 3.]

[12] N. ning Zheng, Z. yi Liu, P. ju Ren, Y. qiang Ma, S. tao Chen, S. yu Yu, J. ru Xue,

B. dong Chen, and F. yue Wang, “Hybrid-augmented intelligence: collaboration

and cognition,” 2019 11th International Conference on Cyber Conflict (CyCon), vol. 18,

no. 2, Feb. 2017. [Online]. Available: https://doi.org/10.1631/FITEE.1700053

[Cited on page 3.]

[13] Elasticsearch. (2020, Aug.) Open source search: The creators of elasticsearch, elk.

[Online]. Available: https://www.elastic.co/ [Cited on pages 3 and 44.]

[14] R. Mitchell and I.-R. Chen, “A survey of intrusion detection in wireless network

applications,” Computer Communications, vol. 42, pp. 1–23, Apr. 2014. [Cited on

page 5.]

[15] D. Shreenivas, S. Raza, and T. Voigt, “Intrusion detection in the rpl-connected 6low-

pan networks,” IoTPTS’17, vol. 42, pp. 31–38, Apr. 2017. [Cited on page 5.]

Page 101: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 88

[16] P. Kasinathan, C. Pastrone, M. A. Spirito, and M. Vinkovits, “Denial-of-service de-

tection in 6lowpan based internet of things,” 2013 IEEE 9th International Conference

on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 600–

607, Oct. 2013. [Cited on pages 5 and 6.]

[17] A. Le, J. Loo, Y. Luo, and A. Lasebae, “Specification-based ids for securing rpl

from topology attacks,” 2011 IFIP Wireless Days (WD), pp. 1–3, Oct. 2011. [Cited

on page 5.]

[18] T. Abbes, A. Bouhoula, and M. Rusinowitch, “Efficient decision tree for protocol

analysis in intrusion detection,” International Journal of Security and Networks, vol. 5,

no. 4, pp. 220–235, Oct. 2010. [Cited on pages 6 and 9.]

[19] M. M. Rathore, A. Paul, S. Rho, M. Imran, and M. Guizani, “Hadoop based real-

time intrusion detection for high-speed networks,” 2016 IEEE Global Communica-

tions Conference (GLOBECOM), vol. 278, pp. 1–6, Dec. 2016. [Cited on pages 6 and 7.]

[20] L. Khan, M. Awad, and B. Thuraisingham, “A new intrusion detection system using

support vector machines and hierarchical clustering,” The VLDB Journal, vol. 16,

no. 4, pp. 507–521, Oct. 2007. [Cited on pages 6 and 9.]

[21] N. Chaabouni, M. Mosbah, A. Zemmari, and C. Sauvignac, “An intrusion detection

system for the onem2m service layer based on edge machine learning,” 18th Inter-

national Conference on Ad-Hoc Networks and Wireless, ADHOC-NOW, p. 508–523, Oct.

2019. [Cited on pages 6, 7, 8, and 9.]

[22] P. Dasgupta and J. B. Collins, “A survey of game theoretic approaches for adversar-

ial machine learning in cybersecurity tasks,” arXiv preprint arXiv:1912.02258, 2019.

[Cited on page 6.]

[23] A. G. P. Lobato, M. A. Lopez, I. J. Sanz, A. A. Cardenas, O. C. M. Duarte, and

G. Pujolle, “An adaptive real-time architecture for zero-day threat detection,” in

2018 IEEE international conference on communications (ICC). IEEE, 2018, pp. 1–6.

[Cited on pages 6, 7, and 9.]

[24] K. Singh, S. C. Guntuku, A. Thakur, and C. Hota, “Big data analytics framework for

peer-to-peer botnet detection using random forests,” Information Sciences, vol. 278,

pp. 488–497, Mar. 2014. [Cited on pages 7 and 9.]

Page 102: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 89

[25] P. Owezarski, “A near real-time algorithm for autonomous identification and char-

acterization of honeypot attacks,” ASIA CCS ’15 Proceedings of the 10th ACM Sympo-

sium on Information, Computer and Communications Security, pp. 531–542, Apr. 2015.

[Cited on page 9.]

[26] J. Noble and N. Adams, “Real-time dynamic network anomaly detection,” IEEE

Intelligent Systems, vol. 33, no. 2, pp. 5–18, Jun. 2018. [Cited on pages 7 and 9.]

[27] J. Noble and N. M. Adams, “Correlation-based streaming anomaly detection in

cyber-security,” 2016 IEEE 16th International Conference on Data Mining Workshops

(ICDMW), vol. 33, no. 2, pp. 311–318, Dec. 2016. [Cited on pages 6, 7, and 9.]

[28] J. Wang, L. Yang, and J. H. Abawajy, “Clustering analysis for malicious network

traffic,” 2017 IEEE International Conference on Communications (ICC), pp. 1–6, May

2017. [Cited on pages 6 and 9.]

[29] Z. Yu, J. J. P. Tsai, and T. Weigert, “An automatically tuning intrusion detection

system,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),

vol. 37, no. 2, pp. 373–384, Apr. 2007. [Cited on pages 6 and 9.]

[30] M. V. Mahoney and P. K. Chan, “An analysis of the 1999 darpa/lincoln laboratory

evaluation data for network anomaly detection.” Springer, Berlin, Heidelberg, vol.

2820, pp. 220–237, 2003. [Cited on page 7.]

[31] J. McHugh, “Testing intrusion detection systems: A critique of the 1998 and 1999

darpa intrusion detection system evaluations as performed by lincoln laboratory,”

ACM Trans. Inf. Syst. Secur., vol. 3, no. 4, p. 262–294, Nov. 2000. [Online]. Available:

https://doi.org/10.1145/382912.382923 [Cited on page 7.]

[32] V. Garcia-Font, C. Garrigues, and H. Rifa-Pous, “Difficulties and challenges of

anomaly detection in smart cities: A laboratory analysis,” Sensors, vol. 18, no. 10,

p. 3198, 2018. [Cited on pages 7 and 9.]

[33] H. Bostani and M. Sheikhan, “Hybrid of anomaly-based and specification-based

ids for internet of things using unsupervised opf based on mapreduce approach,”

Computer Comunications, vol. 98, pp. 52–71, Jan. 2017. [Cited on pages 7 and 9.]

Page 103: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 90

[34] A. Saied, R. E. Overill, and T. Radzik, “Detection of known and unknown ddos

attacks using artificial neural networks,” Neurocomputing, vol. 172, pp. 385–393, Jan.

2016. [Cited on pages 7 and 9.]

[35] S. G. Aksoy, K. E. Nowak, E. Purvine, and S. J. Young, “Relative hausdorff distance

for network analysis,” Applied Network Science, vol. 4, no. 1, p. 80, 2019. [Cited on

pages 7 and 9.]

[36] C. Wagner, J. Francois, R. State, and T. Engel, “Machine learning approach for ipflow

record anomaly detection,” 10th IFIP Networking Conference (NETWORKING), vol. 5,

no. 4, pp. 28–39, May 2011. [Cited on page 9.]

[37] K. El-Khatib, “Impact of feature reduction on the efficiency of wireless intrusion

detection systems,” EEE Transactions on Parallel and Distributed Systems, vol. 21, no. 8,

pp. 1143–1149, Aug. 2010. [Cited on page 9.]

[38] S. R. Bandre and J. N. Nandimath, “Design consideration of network intrusion de-

tection system using hadoop and gpgpu,” 2015 International Conference on Pervasive

Computing (ICPC), vol. 73, no. 10, pp. 1–6, Apr. 2015. [Cited on page 9.]

[39] S. Helal, A. Khaled, and W. Lindquist, “The importance of being thing:or

the trivial role of powering serious iot scenarios,” Proceedings of the IEEE

ICDCS Conference 2019, vol. 132, pp. 1–8, 2019. [Online]. Available: https:

//eprints.lancs.ac.uk/id/eprint/133960/ [Cited on page 10.]

[40] N. M. Kumar and P. K. Mallick, “The internet of things: Insights into the building

blocks, component interactions, and architecture layers,” Procedia Computer Science,

vol. 132, pp. 109–117, 2018. [Cited on page 10.]

[41] BankInfoSecurity. (2019, Jul.) Massive botnet attack used more than 400,000 iot

devices. [Online]. Available: https://www.bankinfosecurity.com/massive-botnet-

attack-used-more-than-400000-iot-devices-a-12841 [Cited on page 11.]

[42] Forbes. (2019, Sep.) Cyberattacks on iot devices surge 300% in 2019,

’measured in billions’, report claims. [Online]. Available: https:

//www.forbes.com/sites/zakdoffman/2019/09/14/dangerous-cyberattacks-

on-iot-devices-up-300-in-2019-now-rampant-report-claims/#57e031975892 [Cited

on page 11.]

Page 104: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 91

[43] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, “Internet of things (iot): A

vision, architectural elements, and future directions,” Future Generation Computer

Systems, vol. 29, no. 7, pp. 1645–1660, Sep. 2013. [Cited on page 11.]

[44] R. Doku and D. B. Rawat, “Big data in cybersecurity for smart city applications,”

in Smart Cities Cybersecurity and Privacy. Elsevier, 2019, pp. 103–112. [Cited on

page 11.]

[45] H. Lin and N. W. Bergmann, “Iot privacy and security challenges for smart home

environments,” IEEE International Conference on Consumer Electronics (ICCE), vol. 44,

no. 7, pp. 1–15, Jul. 2016. [Cited on page 12.]

[46] S. T. U. Shah, H. Yar, I. Khan, M. Ikram, and H. Khan, “Smart cities: Advances in

research—an information systems perspective,” EAI/Springer Innovations in Commu-

nication and Computing, pp. 153–162, Nov. 2018. [Cited on page 12.]

[47] L. Shinder and M. Cross, “Chapter 12 - understanding cybercrime prevention,”

in Scene of the Cybercrime (Second Edition), second edition ed., L. Shinder and

M. Cross, Eds. Burlington: Syngress, 2008, pp. 505 – 554. [Online]. Available:

http://www.sciencedirect.com/science/article/pii/B9781597492768000121 [Cited

on page 14.]

[48] Y. Liang, H. V. Poor, and S. Shamai, Information theoretic security. Now Publishers

Inc, 2009. [Cited on page 14.]

[49] S. Omar, A. Nagdi, and H. H. Jebur, “Machine learning techniques for

anomaly detection: An overview,” International Journal of Computer Applications

(0975-8887), vol. 79, no. 2, pp. 1–9, Oct. 2013. [Online]. Available: https://pdfs.

semanticscholar.org/0278/bbaf1db5df036f02393679d485260b1daeb7.pdf [Cited on

pages 15 and 37.]

[50] OWASP. (2014, May) Owasp testing guide appendix c: Fuzz vectors. [Online].

Available: https://wiki.owasp.org/index.php/OWASP Testing Guide Appendix

C: Fuzz Vectors#SQL Injection [Cited on page 15.]

[51] C. Zhang, F. Ruan, L. Yin, X. Chen, L. Zhai, and F. Liu, “A deep learning approach

for network intrusion detection based on nsl-kdd dataset,” in 2019 IEEE 13th Inter-

national Conference on Anti-counterfeiting, Security, and Identification (ASID), 2019, pp.

41–45. [Cited on page 15.]

Page 105: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 92

[52] M. Nawir, A. Amir, N. Yaakob, and O. B. Lynn, “Internet of things (iot): Taxonomy

of security attacks,” 2016 3rd International Conference on Electronic Design (ICED), pp.

321–326, Aug. 2016. [Cited on page 15.]

[53] B. Ali and A. I. Awad, “Cyber and physical security vulnerability assessment for

iot-based smart homes,” Sensors, vol. 18, pp. 1–17, Mar. 2018. [Cited on page 15.]

[54] A. Simmonds, P. Sandilands, and L. Van Ekert, “An ontology for network security

attacks,” in Asian Applied Computing Conference. Springer, 2004, pp. 317–323. [Cited

on page 15.]

[55] S. Dutta, S. Madnick, and G. Joyce, “Secureuse: Balancing security and usabil-

ity within system design,” in HCI International 2016 – Posters’ Extended Abstracts,

C. Stephanidis, Ed. Cham: Springer International Publishing, 2016, pp. 471–475.

[Cited on page 16.]

[56] W. Zhou, Y. Jia, A. Peng, Y. Zhang, and P. Liu, “The effect of iot new features on secu-

rity and privacy: New threats, existing solutions, and challenges yet to be solved,”

IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1606–1616, Apr. 2019. [Cited on

page 16.]

[57] S. Rizvi, A. Kurtz, J. Pfeffer, and M. Rizvi, “Securing the internet of things (iot): A

security taxonomy for iot,” 2018 17th IEEE International Conference On Trust, Security

And Privacy In Computing And Communications/ 12th IEEE International Conference

On Big Data Science And Engineering (TrustCom/BigDataSE), pp. 163–168, Aug. 2018.

[Cited on pages 16 and 17.]

[58] M. A. J. Jamali, B. Bahrami, A. Heidari, P. Allahverdizadeh, and F. Norouzi, “Iot

security,” Towards the Internet of Things, pp. 33–83, 2019. [Cited on page 17.]

[59] Comodo. (2018, May) Malware vs viruses: What is the difference between malware

and a virus? [Online]. Available: https://antivirus.comodo.com/blog/computer-

safety/malware-vs-viruses-whats-difference/ [Cited on page 17.]

[60] K. J. Kratkiewicz, “Evaluating static analysis tools for detecting buffer overflows in

c code,” HARVARD UNIV CAMBRIDGE MA, Tech. Rep., 2005. [Cited on page 17.]

Page 106: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 93

[61] E. Benkhelifa, T. Welsh, and W. Hamouda, “A critical review of practices and chal-

lenges in intrusion detection systems for iot: Toward universal and resilient sys-

tems,” IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 3496–3509, 2018.

[Cited on page 19.]

[62] N. Chakraborty, “Intrusion detection system and intrusion prevention system: A

comparative study,” International Journal of Computing and Business Research (IJCBR),

vol. 4, no. 2, pp. 1–8, 2013. [Cited on page 19.]

[63] A. Milenkoski, M. Vieira, S. Kounev, A. Avritzer, and B. D. Payne, “Evaluating com-

puter intrusion detection systems: A survey of common practices,” ACM Computing

Surveys (CSUR), vol. 48, no. 1, pp. 1–41, 2015. [Cited on page 19.]

[64] Fail2Ban. (2016, May) Fail2ban - main page. [Online]. Available: https:

//www.fail2ban.org/wiki/index.php/Main Page [Cited on pages 19 and 76.]

[65] M. Ford, C. Mallery, F. Palmasani, M. Rabb, R. Turner, L. Soles, and D. Snider, “A

process to transfer fail2ban data to an adaptive enterprise intrusion detection and

prevention system,” in SoutheastCon 2016. IEEE, 2016, pp. 1–4. [Cited on page 19.]

[66] R. Bray, D. Cid, and A. Hay, OSSEC host-based intrusion detection guide. Syngress,

2008. [Cited on pages 19 and 53.]

[67] Tripwire. (2020, Aug.) Cybersecurity for enterprise and industrial organizations.

[Online]. Available: https://www.tripwire.com/ [Cited on pages 20 and 76.]

[68] G. H. Kim and E. H. Spafford, “The design and implementation of tripwire: A file

system integrity checker,” in Proceedings of the 2nd ACM Conference on Computer and

Communications Security, 1994, pp. 18–29. [Cited on page 20.]

[69] Snort. (2020, Aug.) Snort - network intrusion detection and prevention system.

[Online]. Available: https://snort.org/ [Cited on page 20.]

[70] Suricata. (2020, Aug.) Suricata - open source ids and ips. [Online]. Available:

https://suricata-ids.org/ [Cited on page 20.]

[71] E. Albin and N. C. Rowe, “A realistic experimental comparison of the suricata and

snort intrusion-detection systems,” in 2012 26th International Conference on Advanced

Information Networking and Applications Workshops. IEEE, 2012, pp. 122–127. [Cited

on page 20.]

Page 107: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 94

[72] W. Inc. (2020, May) Wazuh - the open source security platform. [Online]. Available:

https://wazuh.com/ [Cited on pages 20, 21, and 53.]

[73] F. Anjum, D. Subhadrabandhu, and S. Sarkar, “Signature based intrusion detec-

tion for wireless ad-hoc networks: A comparative study of various routing proto-

cols,” in 2003 IEEE 58th Vehicular Technology Conference. VTC 2003-Fall (IEEE Cat. No.

03CH37484), vol. 3. IEEE, 2003, pp. 2152–2156. [Cited on page 20.]

[74] V. Kumar and O. P. Sangwan, “Signature based intrusion detection system using

snort,” International Journal of Computer Applications & Information Technology, vol. 1,

no. 3, pp. 35–41, 2012. [Cited on page 20.]

[75] D. B. Gothawal and S. Nagaraj, “Anomaly-based intrusion detection system in rpl

by applying stochastic and evolutionary game models over iot environment,” Wire-

less Personal Communications, vol. 110, no. 3, pp. 1323–1344, 2020. [Cited on page 21.]

[76] D. C. Nash, T. L. Martin, D. S. Ha, and M. S. Hsiao, “Towards an intrusion detection

system for battery exhaustion attacks on mobile computing devices,” in Third IEEE

international conference on pervasive computing and communications workshops. IEEE,

2005, pp. 141–145. [Cited on page 21.]

[77] A. Le, J. Loo, K. K. Chai, and M. Aiash, “A specification-based ids for detecting

attacks on rpl-based network topology,” Information, vol. 7, no. 2, p. 25, 2016. [Cited

on page 21.]

[78] J. P. Amaral, L. M. Oliveira, J. J. Rodrigues, G. Han, and L. Shu, “Policy and network-

based intrusion detection system for ipv6-enabled wireless sensor networks,” in

2014 IEEE international conference on communications (ICC). IEEE, 2014, pp. 1796–

1801. [Cited on page 21.]

[79] M. S. Mahdavinejad, M. Rezvan, M. Barekatain, P. Adibi, P. Barnaghi, and

A. P.Sheth, “Machine learning for internet of things data analysis: a survey,” Dig-

ital Communications and Networks, vol. 4, no. 3, pp. 161–175, Aug. 2018. [Cited on

page 23.]

[80] T. Elsaleh, M. Bermudez-Edo, S. Enshaeifar, S. T. Acton, R. Rezvani, and P. Barnaghi,

“Iot-stream: A lightweight ontology for internet of things data streams,” 2019 Global

IoT Summit (GIoTS), vol. 132, pp. 1–6, Jun. 2019. [Cited on page 23.]

Page 108: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 95

[81] P. H. dos Santos Teixeira and R. L. Milidiu, “Data stream anomaly detection through

principal subspace tracking,” in Proceedings of the 2010 ACM Symposium on Applied

Computing, 2010, pp. 1609–1616. [Cited on page 23.]

[82] J. Gama, “A survey on learning from data streams: current and future trends,”

Progress in Artificial Intelligence, vol. 1, no. 1, pp. 45–55, 2012. [Cited on pages 24

and 25.]

[83] aws. (2020, Aug.) What is streaming data? [Online]. Available: https:

//aws.amazon.com/streaming-data/ [Cited on page 24.]

[84] A. Bifet, J. Read, G. Holmes, and B. Pfahringer, “Streaming data mining with mas-

sive online analytics (moa),” Data Mining in Time Series and Streaming, pp. 1–25, 2018.

[Cited on page 24.]

[85] P. Mulinka and P. Casas, “Stream-based machine learning for network security

and anomaly detection,” Proceedings of the 2018 Workshop on Big Data Analytics and

Machine Learning for Data Communication Networks, pp. 1–7, Aug. 2018. [Cited on

page 25.]

[86] M. Garofalakis, J. Gehrke, and R. Rastogi, Data stream management: processing high-

speed data streams. Springer, 2016. [Cited on page 25.]

[87] C. C. Aggarwal, Data streams: models and algorithms. Springer Science & Business

Media, 2007, vol. 31. [Cited on page 25.]

[88] G. S. Manku and R. Motwani, “Approximate frequency counts over data streams,”

in VLDB’02: Proceedings of the 28th International Conference on Very Large Databases.

Elsevier, 2002, pp. 346–357. [Cited on page 25.]

[89] M. Datar, A. Gionis, P. Indyk, and R. Motwani, “Maintaining stream statistics over

sliding windows,” SIAM journal on computing, vol. 31, no. 6, pp. 1794–1813, 2002.

[Cited on page 25.]

[90] D. Brzezinski, “Block-based and online ensembles for concept-drifting data

streams,” 2015. [Cited on page 25.]

[91] A. Bifet and R. Gavalda, “Kalman filters and adaptive windows for learning in data

streams,” in International conference on discovery science. Springer, 2006, pp. 29–40.

[Cited on page 26.]

Page 109: Anomaly Detection in Cybersecurity

BIBLIOGRAPHY 96

[92] B. Babcock, M. Datar, and R. Motwani, “Sampling from a moving window over streaming data,” in 2002 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2002). Stanford InfoLab, 2001. [Cited on page 26.]

[93] D. M. Hawkins, “Identification of outliers,” Monographs on Statistics and Applied Probability, pp. 1–194, 1980. [Online]. Available: https://link.springer.com/content/pdf/10.1007/978-94-015-3994-4.pdf [Cited on page 26.]

[94] S. Ramírez-Gallego, B. Krawczyk, S. García, M. Woźniak, and F. Herrera, “A survey on data preprocessing for data stream mining: Current status and future directions,” Neurocomputing, vol. 239, pp. 39–57, 2017. [Cited on page 27.]

[95] I. Žliobaitė, M. Budka, and F. Stahl, “Towards cost-sensitive adaptation: When is it worth updating your predictive model?” Neurocomputing, vol. 150, pp. 240–249, 2015. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0925231214012971 [Cited on page 27.]

[96] Y. Dong and N. Japkowicz, “Threaded ensembles of supervised and unsupervised neural networks for stream learning,” in Advances in Artificial Intelligence, R. Khoury and C. Drummond, Eds. Cham: Springer International Publishing, 2016, pp. 304–315. [Cited on page 27.]

[97] M. Woźniak, P. Ksieniewicz, B. Cyganek, and K. Walkowiak, “Ensembles of heterogeneous concept drift detectors-experimental study,” in IFIP International Conference on Computer Information Systems and Industrial Management. Springer, 2016, pp. 538–549. [Cited on page 27.]

[98] G. Ma, Z. Wang, M. Zhang, J. Ye, M. Chen, and W. Zhu, “Understanding performance of edge content caching for mobile video streaming,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 5, pp. 1076–1089, 2017. [Cited on page 27.]

[99] “CVE-2019-12588,” Available from MITRE, CVE-ID CVE-2019-12588, Jun. 2019. [Online]. Available: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12588 [Cited on page 28.]


[100] V. Chandola, A. Banerjee, and V. Kumar, “Anomaly detection: A survey,” ACM Computing Surveys (CSUR), vol. 41, no. 3, pp. 1–58, 2009. [Cited on pages 28, 36, and 37.]

[101] “Network denial of service,” Available from MITRE ATT&CK, T1498. [Online]. Available: https://attack.mitre.org/techniques/T1498/ [Cited on page 28.]

[102] E. Portela, R. P. Ribeiro, and J. Gama, “Outliers and the Simpson’s paradox,” in Mexican International Conference on Artificial Intelligence. Springer, 2017, pp. 267–278. [Cited on page 29.]

[103] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, “A survey on concept drift adaptation,” ACM Computing Surveys (CSUR), vol. 46, no. 4, pp. 1–37, 2014. [Cited on page 29.]

[104] A. Bifet, G. Holmes, and B. Pfahringer, “Leveraging bagging for evolving data streams,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2010, pp. 135–150. [Cited on page 30.]

[105] A. Bifet and R. Gavaldà, “Adaptive learning from evolving data streams,” in International Symposium on Intelligent Data Analysis. Springer, 2009, pp. 249–260. [Cited on page 30.]

[106] D. M. dos Reis, P. Flach, S. Matwin, and G. Batista, “Fast unsupervised online drift detection using incremental Kolmogorov-Smirnov test,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1545–1554. [Cited on page 30.]

[107] A. Mihelič and S. Vrhovec, “Explaining the employment of information security measures by individuals in organizations: The self-protection model,” in CECC, 2017. [Cited on page 30.]

[108] B. Krawczyk and A. Cano, “Online ensemble learning with abstaining classifiers for drifting and noisy data streams,” Applied Soft Computing, vol. 68, pp. 677–692, 2018. [Cited on pages 30 and 31.]

[109] I. Žliobaitė, “Learning under concept drift: an overview,” arXiv preprint arXiv:1010.4784, 2010. [Cited on page 31.]


[110] J. Gama, P. Medas, G. Castillo, and P. Rodrigues, “Learning with drift detection,” in Brazilian Symposium on Artificial Intelligence. Springer, 2004, pp. 286–295. [Cited on page 31.]

[111] A. Bifet and R. Gavaldà, “Learning from time-changing data with adaptive windowing,” in Proceedings of the 2007 SIAM International Conference on Data Mining. SIAM, 2007, pp. 443–448. [Cited on page 31.]

[112] P. Sobolewski and M. Woźniak, “Concept drift detection and model selection with simulated recurrence and ensembles of statistical detectors,” J. UCS, vol. 19, no. 4, pp. 462–483, 2013. [Cited on page 31.]

[113] I. Khamassi, M. Sayed-Mouchaweh, M. Hammami, and K. Ghédira, “Discussion and review on evolving data streams and concept drift adapting,” Evolving Systems, vol. 9, no. 1, pp. 1–23, 2018. [Cited on page 31.]

[114] M. Woźniak, “A hybrid decision tree training method using data streams,” Knowledge and Information Systems, vol. 29, no. 2, pp. 335–347, 2011. [Cited on page 32.]

[115] E. Beyazit, J. Alagurajah, and X. Wu, “Online learning from data streams with varying feature spaces,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 3232–3239. [Cited on page 32.]

[116] D. M. Farid, L. Zhang, A. Hossain, C. M. Rahman, R. Strachan, G. Sexton, and K. Dahal, “An adaptive ensemble classifier for mining concept drifting data streams,” Expert Systems with Applications, vol. 40, no. 15, pp. 5895–5906, 2013. [Cited on page 32.]

[117] Y. Sun, A. K. Wong, and M. S. Kamel, “Classification of imbalanced data: A review,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 23, no. 04, pp. 687–719, 2009. [Cited on pages 33, 35, and 36.]

[118] S. Axelsson, “The base-rate fallacy and the difficulty of intrusion detection,” ACM Trans. Inf. Syst. Secur., vol. 3, no. 3, pp. 186–205, Aug. 2000. [Online]. Available: https://doi.org/10.1145/357830.357849 [Cited on page 34.]

[119] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: synthetic minority over-sampling technique,” Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002. [Cited on page 34.]


[120] C. Kreibich and J. Crowcroft, “Honeycomb: Creating intrusion detection signatures using honeypots,” SIGCOMM Comput. Commun. Rev., vol. 34, no. 1, pp. 51–56, Jan. 2004. [Online]. Available: https://doi.org/10.1145/972374.972384 [Cited on page 34.]

[121] C. G. Weng and J. Poon, “A new evaluation measure for imbalanced datasets,” in Proceedings of the 7th Australasian Data Mining Conference - Volume 87, 2008, pp. 27–32. [Cited on page 35.]

[122] J. Davis and M. Goadrich, “The relationship between precision-recall and ROC curves,” in Proceedings of the 23rd International Conference on Machine Learning, 2006, pp. 233–240. [Cited on page 35.]

[123] U. S. Shanthamallu, A. Spanias, C. Tepedelenlioglu, and M. Stanley, “A brief survey of machine learning methods and their sensor and IoT applications,” in 2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA), pp. 1–8, Aug. 2017. [Cited on page 36.]

[124] M. Goldstein and S. Uchida, “A comparative evaluation of unsupervised anomaly detection algorithms for multivariate data,” PLoS ONE, vol. 11, no. 4, p. e0152173, 2016. [Cited on pages 37 and 38.]

[125] I. Sutskever, R. Jozefowicz, K. Gregor, D. Rezende, T. Lillicrap, and O. Vinyals, “Towards principled unsupervised learning,” arXiv preprint arXiv:1511.06440, 2015. [Cited on page 37.]

[126] M. Salehi, C. Leckie, J. C. Bezdek, T. Vaithianathan, and X. Zhang, “Fast memory efficient local outlier detection in data streams,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 12, pp. 3246–3260, Dec. 2016. [Cited on pages 37 and 40.]

[127] B. R. Kiran, D. M. Thomas, and R. Parakkal, “An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos,” Journal of Imaging, vol. 4, no. 2, p. 36, 2018. [Cited on page 38.]

[128] T. Kolajo, O. Daramola, and A. Adebiyi, “Big data stream analysis: a systematic literature review,” Journal of Big Data, vol. 6, no. 1, p. 47, 2019. [Cited on page 39.]


[129] C. H. Park, “Outlier and anomaly pattern detection on data streams,” The Journal of Supercomputing, vol. 75, no. 9, pp. 6118–6128, 2019. [Cited on page 39.]

[130] L. Tran, L. Fan, and C. Shahabi, “Distance-based outlier detection in data streams,” Proceedings of the VLDB Endowment, vol. 9, no. 12, pp. 1089–1100, 2016. [Cited on page 39.]

[131] C. C. Aggarwal and P. S. Yu, “Outlier detection for high dimensional data,” in Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data, 2001, pp. 37–46. [Cited on page 39.]

[132] M. Kontaki, A. Gounaris, A. N. Papadopoulos, K. Tsichlas, and Y. Manolopoulos, “Continuous monitoring of distance-based outliers over data streams,” in 2011 IEEE 27th International Conference on Data Engineering. IEEE, 2011, pp. 135–146. [Cited on page 39.]

[133] F. Angiulli and F. Fassetti, “Detecting distance-based outliers in streams of data,” in Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, 2007, pp. 811–820. [Cited on page 39.]

[134] L. Cao, D. Yang, Q. Wang, Y. Yu, J. Wang, and E. A. Rundensteiner, “Scalable distance-based outlier detection over high-volume data streams,” in 2014 IEEE 30th International Conference on Data Engineering. IEEE, 2014, pp. 76–87. [Cited on page 39.]

[135] L. Duan, L. Xu, F. Guo, J. Lee, and B. Yan, “A local-density based spatial clustering algorithm with noise,” Information Systems, vol. 32, no. 7, pp. 978–986, 2007. [Cited on page 40.]

[136] D. Pokrajac, A. Lazarevic, and L. J. Latecki, “Incremental local outlier detection for data streams,” in 2007 IEEE Symposium on Computational Intelligence and Data Mining, 2007, pp. 504–515. [Cited on page 40.]

[137] T. Vaithianathan and X. Zhang, “Fast memory efficient local outlier detection in data streams,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 12, 2016. [Cited on page 40.]


[138] M.-F. Jiang, S.-S. Tseng, and C.-M. Su, “Two-phase clustering process for outliers detection,” Pattern Recognition Letters, vol. 22, no. 6-7, pp. 691–700, 2001. [Cited on page 40.]

[139] D. Yu, G. Sheikholeslami, and A. Zhang, “FindOut: finding outliers in very large datasets,” Knowledge and Information Systems, vol. 4, no. 4, pp. 387–412, 2002. [Cited on page 41.]

[140] Z. He, X. Xu, and S. Deng, “Discovering cluster-based local outliers,” Pattern Recognition Letters, vol. 24, no. 9-10, pp. 1641–1650, 2003. [Cited on page 41.]

[141] P. Dhaliwal, M. Bhatia, and P. Bansal, “A cluster-based approach for outlier detection in dynamic data streams (KORM: k-median outlier miner),” arXiv preprint arXiv:1002.4003, 2010. [Cited on page 41.]

[142] E. J. Spinosa, A. P. de Leon F. de Carvalho, and J. Gama, “OLINDDA: A cluster-based approach for detecting novelty and concept drift in data streams,” in Proceedings of the 2007 ACM Symposium on Applied Computing, 2007, pp. 448–452. [Cited on page 41.]

[143] L. Akoglu, H. Tong, and D. Koutra, “Graph based anomaly detection and description: a survey,” Data Mining and Knowledge Discovery, vol. 29, no. 3, pp. 626–688, 2015. [Cited on page 41.]

[144] H. Moonesinghe and P.-N. Tan, “Outlier detection using random walks,” in 2006 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’06). IEEE, 2006, pp. 532–539. [Cited on page 41.]

[145] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “On detecting clustered anomalies using SCiForest,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2010, pp. 274–290. [Cited on page 41.]

[146] ——, “Isolation-based anomaly detection,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 6, no. 1, pp. 1–39, 2012. [Cited on page 42.]

[147] S. C. Tan, K. M. Ting, and T. F. Liu, “Fast anomaly detection for streaming data,” in Twenty-Second International Joint Conference on Artificial Intelligence, 2011. [Cited on page 42.]


[148] L. Hemetsberger. (2018, Jan.) Get involved: SynchroniCity launches open call. [Online]. Available: https://oascities.org/synchronicity-launches-open-call/ [Cited on page 43.]

[149] Linux. (2020, Apr.) syscalls - Linux system calls. [Online]. Available: https://man7.org/linux/man-pages/man2/syscalls.2.html [Cited on page 44.]

[150] L. Degioanni. (2014, Dec.) Fascinating world of Linux system calls. [Online]. Available: https://sysdig.com/blog/fascinating-world-linux-system-calls/ [Cited on page 44.]

[151] Red Hat. (2020, May) Chapter 6 - System auditing. [Online]. Available: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/chap-system_auditing [Cited on page 44.]

[152] IONOS. (2020, May) What are system calls? [Online]. Available: https://www.ionos.com/digitalguide/server/know-how/what-are-system-calls/ [Cited on page 44.]

[153] M. A. Waller and S. E. Fawcett, “Data science, predictive analytics, and big data: a revolution that will transform supply chain design and management,” Journal of Business Logistics, vol. 34, no. 2, pp. 77–84, 2013. [Cited on page 47.]

[154] J. Ellingwood. (2017, Dec.) An introduction to metrics, monitoring, and alerting. [Online]. Available: https://www.digitalocean.com/community/tutorials/an-introduction-to-metrics-monitoring-and-alerting [Cited on pages 48 and 49.]

[155] D. Kshirsagar, S. Sawant, A. Rathod, and S. Wathore, “CPU load analysis & minimization for TCP SYN flood detection,” Procedia Computer Science, vol. 85, pp. 626–633, 2016. [Cited on page 48.]

[156] Linux. (2012, Nov.) 4 Linux commands to view page faults statistics. [Online]. Available: https://www.cyberciti.biz/faq/linux-command-to-see-major-minor-pagefaults/ [Cited on page 49.]

[157] Linux. (2020, Aug.) capabilities(7) - Linux manual page. [Online]. Available: https://man7.org/linux/man-pages/man7/capabilities.7.html [Cited on page 49.]


[158] M. K. (2019, Jun.) The 4 types of metrics you should monitor to keep your servers under control. [Online]. Available: https://www.site24x7.com/blog/the-4-types-of-metrics-you-should-monitor-to-keep-your-servers-under-control [Cited on page 49.]

[159] R. F. Smith. (2017, Sep.) The most important Linux files to protect (and how). [Online]. Available: https://www.beyondtrust.com/blog/entry/important-linux-files-protect [Cited on page 50.]

[160] Y. Liu, Y. Yue, and L. Guo, UNIX Operating System: The Development Tutorial via UNIX Kernel Services. Springer, 2011. [Cited on page 50.]

[161] Cisco. (2020, Aug.) What is cybersecurity? [Online]. Available: https://www.cisco.com/c/en/us/products/security/what-is-cybersecurity.html [Cited on page 54.]

[162] M. Evangelou and N. M. Adams, “An anomaly detection framework for cyber-security data,” Computers & Security, p. 101941, 2020. [Cited on page 54.]

[163] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation forest,” in 2008 Eighth IEEE International Conference on Data Mining. IEEE, 2008, pp. 413–422. [Cited on pages 56, 57, and 59.]

[164] R. E. Bellman, Adaptive Control Processes: A Guided Tour. Princeton University Press, 2015, vol. 2045. [Cited on page 56.]

[165] S. Hariri, M. C. Kind, and R. J. Brunner, “Extended isolation forest,” arXiv preprint arXiv:1811.02141, 2018. [Cited on pages 56 and 62.]

[166] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation-based anomaly detection,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 6, no. 1, pp. 1–39, 2012. [Cited on page 58.]

[167] Wikipedia. (2020, Aug.) User identifier. [Online]. Available: https://en.wikipedia.org/wiki/User_identifier [Cited on page 59.]

[168] A. Taha and A. Hadi, “Anomaly detection methods for categorical data: A review,” ACM Computing Surveys, vol. 52, pp. 1–35, May 2019. [Cited on page 60.]

[169] L. Wei, W. Qian, A. Zhou, W. Jin, and J. X. Yu, “HOT: Hypergraph-based outlier test for categorical data,” in Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, 2003, pp. 399–410. [Cited on page 60.]


[170] K. Zhang and H. Jin, “An effective pattern based outlier detection approach for mixed attribute data,” in AI 2010: Advances in Artificial Intelligence, J. Li, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 122–131. [Cited on page 60.]

[171] A. Ghoting, M. E. Otey, and S. Parthasarathy, “LOADED: link-based outlier and anomaly detection in evolving data sets,” in Fourth IEEE International Conference on Data Mining (ICDM’04), 2004, pp. 387–390. [Cited on page 60.]

[172] E. Stripling, B. Baesens, B. Chizi, and S. vanden Broucke, “Isolation-based conditional anomaly detection on mixed-attribute data to uncover workers’ compensation fraud,” Decision Support Systems, vol. 111, pp. 13–26, 2018. [Cited on page 60.]

[173] S. Guha, N. Mishra, G. Roy, and O. Schrijvers, “Robust random cut forest based anomaly detection on streams,” in International Conference on Machine Learning, 2016, pp. 2712–2721. [Cited on page 62.]

[174] D. A. Harrison and K. J. Klein, “What’s the difference? Diversity constructs as separation, variety, or disparity in organizations,” Academy of Management Review, vol. 32, no. 4, pp. 1199–1228, 2007. [Cited on page 63.]

[175] C. E. Brown, “Coefficient of variation,” in Applied Multivariate Statistics in Geohydrology and Related Sciences. Springer, 1998, pp. 155–157. [Cited on page 63.]

[176] S. Nowozin, “Improved information gain estimates for decision tree induction,” arXiv preprint arXiv:1206.4620, 2012. [Cited on page 63.]

[177] T. Biemann and E. Kearney, “Size does matter: How varying group sizes in a sample affect the most common measures of group diversity,” Organizational Research Methods, vol. 13, no. 3, pp. 582–599, 2010. [Cited on page 63.]

[178] A. G. Bedeian and K. W. Mossholder, “On the use of the coefficient of variation as a measure of diversity,” Organizational Research Methods, vol. 3, no. 3, pp. 285–297, 2000. [Cited on page 64.]

[179] R. Sun, S. Zhang, C. Yin, J. Wang, and S. Min, “Strategies for data stream mining method applied in anomaly detection,” Cluster Computing, vol. 22, no. 2, pp. 399–408, 2019. [Cited on page 66.]


[180] D. H. Wolpert, “The supervised learning no-free-lunch theorems,” in Proc. 6th Online World Conference on Soft Computing in Industrial Applications, 2001, pp. 25–42. [Cited on page 66.]

[181] Z.-H. Zhou, Ensemble Methods: Foundations and Algorithms. CRC Press, 2012. [Cited on page 66.]

[182] G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning. Springer, 2013, vol. 112. [Cited on page 67.]

[183] S. M. Goldfeld, R. E. Quandt, and H. F. Trotter, “Maximization by quadratic hill-climbing,” Econometrica: Journal of the Econometric Society, pp. 541–551, 1966. [Cited on page 67.]

[184] W. Zang, P. Zhang, C. Zhou, and L. Guo, “Comparative study between incremental and ensemble learning on data streams: Case study,” Journal of Big Data, vol. 1, no. 1, p. 5, 2014. [Cited on page 68.]

[185] Y. Sun, Z. Wang, H. Liu, C. Du, and J. Yuan, “Online ensemble using adaptive windowing for data streams with concept drift,” International Journal of Distributed Sensor Networks, vol. 12, no. 5, p. 4218973, 2016. [Cited on page 68.]

[186] B. Krawczyk, L. L. Minku, J. Gama, J. Stefanowski, and M. Woźniak, “Ensemble learning for data stream analysis: A survey,” Information Fusion, vol. 37, pp. 132–156, 2017. [Cited on page 68.]

[187] Z. Ding and M. Fei, “An anomaly detection approach based on isolation forest algorithm for streaming data using sliding window,” IFAC Proceedings Volumes, vol. 46, no. 20, pp. 12–17, 2013. [Cited on page 68.]

[188] J. Gama, Knowledge Discovery from Data Streams. CRC Press, 2010. [Cited on page 68.]

[189] A. Bifet and R. Gavaldà, “Learning from time-changing data with adaptive windowing,” in Proceedings of the 2007 SIAM International Conference on Data Mining. SIAM, 2007, pp. 443–448. [Cited on page 68.]

[190] A. Bifet, G. Holmes, B. Pfahringer, R. Kirkby, and R. Gavaldà, “New ensemble methods for evolving data streams,” in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2009, pp. 139–148. [Cited on page 68.]


[191] N. C. Oza and S. Russell, “Online bagging and boosting,” in Artificial Intelligence and Statistics 2001. Morgan Kaufmann, 2001, pp. 105–112. [Cited on page 68.]

[192] W. Hoeffding, Probability Inequalities for Sums of Bounded Random Variables. New York, NY: Springer New York, 1994, pp. 409–426. [Online]. Available: https://doi.org/10.1007/978-1-4612-0865-5_26 [Cited on page 69.]

[193] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation-based anomaly detection,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 6, no. 1, pp. 1–39, 2012. [Cited on page 69.]

[194] M. A. Aydın, A. H. Zaim, and K. G. Ceylan, “A hybrid intrusion detection system design for computer network security,” Computers & Electrical Engineering, vol. 35, no. 3, pp. 517–526, 2009. [Cited on page 72.]

[195] M. Elhamahmy, H. N. Elmahdy, and I. A. Saroit, “A new approach for evaluating intrusion detection system,” CiiT International Journal of Artificial Intelligent Systems and Machine Learning, vol. 2, no. 11, pp. 290–298, 2010. [Cited on page 72.]

[196] Elasticsearch. (2020, Jul.) Use Linux Secure Computing Mode (seccomp). [Online]. Available: https://www.elastic.co/guide/en/beats/filebeat/master/linux-seccomp.html [Cited on page 75.]

[197] CVE. (2020, Sep.) Common Vulnerabilities and Exposures. [Online]. Available: https://cve.mitre.org/ [Cited on page 75.]

[198] NIST. (2020, Aug.) Common Vulnerability Scoring System Calculator. [Online]. Available: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator [Cited on page 75.]