
Research Document

Compressive Sensing Based Network Monitoring For Large-scale Data Centre

Mingjie Shao
C00188468

A document submitted in part fulfilment of the degree of

BSc. (Hons.) in Software Development

Supervisor: Dr. Lei Shi

Institute of Technology Carlow

November 6, 2017


Contents

1 Introduction

2 The Python programming language
  2.1 Introduction
  2.2 Data types
  2.3 Functions
  2.4 Classes

3 The Experimental Platform: Kubernetes
  3.1 Introduction
  3.2 Components
  3.3 Architecture
  3.4 Workloads

4 Compressive Sensing Algorithm
  4.1 Introduction
  4.2 History
  4.3 Method
  4.4 Application

5 Cloud Monitoring and Tools
  5.1 Cloud Monitoring
  5.2 Heapster
  5.3 cAdvisor
  5.4 Collectd
  5.5 Prometheus
  5.6 InfluxDB
  5.7 Grafana


Chapter 1: Introduction

Continuous monitoring of server status in a cloud data centre is important because it enables rapid response to anomalies. However, gathering status information from a large number of servers is expensive, and handling the resulting torrent of measurement data poses a significant challenge. For example, in a MapReduce computation, a few Map stragglers can significantly delay the start of the Reduce phase. Since stragglers are a relative notion, detecting them requires a global decision. Fortunately, such issues are often sparse, and a few algorithms exploit this sparsity.

Compressive sensing is an emerging technology that has drawn considerable attention recently for its capability to acquire and extract critical information efficiently. Candès, Tao, Romberg, and Donoho proposed the compressive sensing concept to capture and represent compressible signals at a rate significantly below the Nyquist rate. It has found applications in various fields such as signal and image processing, wireless communication, sensor networks, and cloud computing.

In this research document, we review the relevant theory, algorithms, platforms and tools in order to assess the possibility of implementing compressive sensing for continuous status monitoring in a data centre.


Chapter 2: The Python programming language

2.1 Introduction

Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Python's syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C, and the language provides constructs intended to enable clear programs on both a small and large scale.

Python supports multiple programming paradigms, including object-oriented, imperative and functional programming styles. It features a fully dynamic type system and automatic memory management, similar to that of Scheme, Ruby, Perl and Tcl, and has a large and comprehensive standard library.

Like other dynamic languages, Python is often used as a scripting language, but is also used in a wide range of non-scripting contexts. Using third-party tools, Python code can be packaged into standalone executable programs. Python interpreters are available for many operating systems.

CPython, the reference implementation of Python, is free and open source software and has a community-based development model, as do nearly all of its alternative implementations. CPython is managed by the non-profit Python Software Foundation.[16]

2.2 Data types

2.2.1 Numeric types

In Python, data takes the form of objects, either built-in objects that Python provides, or objects we create using Python tools and other languages such as C. In fact, objects are the basis of every Python program you will ever write, because they are the most fundamental notion in Python programming.

Most of Python's number types are fairly typical and will probably seem familiar. In Python, numbers are not really a single object type, but a category of similar types. Python supports the usual numeric types (integers and floating points), as well as literals for creating numbers and expressions for processing them. In addition, Python provides more advanced numeric programming support and objects for more advanced work. A complete inventory of Python's numeric toolbox includes:

• Integer and floating-point objects

• Complex number objects

• Decimal: fixed-precision objects

• Fraction: rational number objects


• Sets: collections with numeric operations

• Booleans: true and false

• Built-in functions and modules: round, math, random, etc.

• Expressions; unlimited integer precision; bitwise operations; hex, octal, and binary formats

• Third-party extensions: vectors, libraries, visualization, plotting, etc.

Among its basic types, Python provides integers, which are positive and negative whole numbers, and floating-point numbers, which are numbers with a fractional part (sometimes called "floats" for verbal economy). Python also allows us to write integers using hexadecimal, octal, and binary literals; offers a complex number type; and allows integers to have unlimited precision: they can grow to have as many digits as your memory space allows. Table 2.1 shows what Python's numeric types look like when written out in a program as literals or constructor function calls.

Table 2.1: Numeric literals and constructors.

Literal                                  Interpretation
1234, -24, 0, 99999999999999             Integers (unlimited size)
1.23, 1., 3.14e-10, 4E210, 4.0e+210      Floating-point numbers
0o177, 0x9ff, 0b101010                   Octal, hex, and binary literals in 3.X
0177, 0o177, 0x9ff, 0b101010             Octal, octal, hex, and binary literals in 2.X
3+4j, 3.0+4.0j, 3J                       Complex number literals
set('spam'), {1, 2, 3, 4}                Sets: 2.X and 3.X construction forms
Decimal('1.0'), Fraction(1, 3)           Decimal and fraction extension types
bool(X), True, False                     Boolean type and constants
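Several of these behaviours can be demonstrated directly. The short sketch below (all values chosen purely for illustration) shows unlimited integer precision, the alternative literal bases, and the Decimal and Fraction extension types:

```python
# Integers have unlimited precision: they grow as large as memory allows.
big = 2 ** 100
print(big)                      # 1267650600228229401496703205376

# Hex, octal, and binary literals all denote plain integers.
print(0x9ff, 0o177, 0b101010)   # 2559 127 42

# Complex numbers have real and imaginary parts.
z = 3 + 4j
print(abs(z))                   # 5.0

# Decimal and Fraction give exact arithmetic where floats lose precision.
from decimal import Decimal
from fractions import Fraction
print(Decimal('0.1') + Decimal('0.2'))   # 0.3
print(Fraction(1, 3) + Fraction(1, 6))   # 1/2
```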

2.2.2 Strings

From a functional perspective, strings can be used to represent just about anything that can be encoded as text or bytes. In the text department, this includes symbols and words (e.g., your name), contents of text files loaded into memory, Internet addresses, Python source code, and so on. Strings can also be used to hold the raw bytes used for media files and network transfers, and both the encoded and decoded forms of non-ASCII Unicode text used in internationalized programs.

Python's strings serve the same role as character arrays in languages such as C, but they are a somewhat higher-level tool than arrays. Unlike in C, in Python, strings come with a powerful set of processing tools. Also unlike languages such as C, Python has no distinct type for individual characters; instead, we just use one-character strings.

Python strings are categorized as immutable sequences, meaning that the characters they contain have a left-to-right positional order and that they cannot be changed in place. In fact, strings are the first representative of the larger class of objects called sequences.[11]

Strings in Python can be enclosed in single quotes ('...') or double quotes ("...") with the same result; the backslash can be used to escape quotes:


>>> 'spam eggs'  # single quotes
'spam eggs'
>>> 'doesn\'t'  # use \' to escape the single quote...
"doesn't"
>>> "doesn't"  # ...or use double quotes instead
"doesn't"
>>> '"Yes," he said.'
'"Yes," he said.'
>>> "\"Yes,\" he said."
'"Yes," he said.'
>>> '"Isn\'t," she said.'
'"Isn\'t," she said.'
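The immutability and positional ordering described above can also be demonstrated with a short sketch (variable names are illustrative):

```python
s = 'spam'

# Strings are sequences: positional indexing and slicing work left to right.
print(s[0])          # s
print(s[1:3])        # pa
print(s + ' eggs')   # concatenation builds a new string: spam eggs

# Strings are immutable: assigning to a position raises TypeError.
try:
    s[0] = 'z'
except TypeError:
    print('cannot change a string in place')

# To "change" a string, build a new one instead.
s2 = 'z' + s[1:]
print(s2)            # zpam
```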

2.2.3 Lists

Lists are Python's most flexible ordered collection object type. Unlike strings, lists can contain any sort of object: numbers, strings, and even other lists. Also, unlike strings, lists may be changed in place by assignment to offsets and slices, list method calls, deletion statements, and more: they are mutable objects.[11]

Because they are sequences, lists support many of the same operations as strings. For example, lists respond to the + and * operators much like strings, meaning concatenation and repetition here too, except that the result is a new list, not a string:

>>> len([1, 2, 3])  # Length
3
>>> [1, 2, 3] + [4, 5, 6]  # Concatenation
[1, 2, 3, 4, 5, 6]
>>> ['Ni!'] * 4  # Repetition
['Ni!', 'Ni!', 'Ni!', 'Ni!']
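The mutability mentioned above, assignment to offsets and slices, method calls, and deletion statements, can be sketched as follows (values are illustrative):

```python
L = [1, 2, 3]

# Unlike strings, lists can be changed in place.
L[0] = 99            # index assignment
L[1:3] = [8, 9, 10]  # slice assignment can grow or shrink the list
print(L)             # [99, 8, 9, 10]

# Method calls and deletion statements also modify the list in place.
L.append(11)
del L[0]
print(L)             # [8, 9, 10, 11]
```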

2.2.4 Dictionaries

The dict type is not only widely used in our programs but also a fundamental part of the Python implementation. Module namespaces, class and instance attributes and function keyword arguments are some of the fundamental constructs where dictionaries are deployed. The built-in functions live in __builtins__.__dict__.

Because of their crucial role, Python dicts are highly optimized. Hash tables are the engines behind Python's high-performance dicts.[13]

In a Python dictionary, each key is separated from its value by a colon (:), the items are separated by commas, and the whole thing is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this: {}.

Keys are unique within a dictionary while values may not be. The values of a dictionary can be of any type, but the keys must be of an immutable data type such as strings, numbers, or tuples.


Here is a small example using a dictionary:

>>> tel = {'jack': 4098, 'sape': 4139}
>>> tel['guido'] = 4127
>>> tel
{'sape': 4139, 'guido': 4127, 'jack': 4098}
>>> tel['jack']
4098
>>> del tel['sape']
>>> tel['irv'] = 4127
>>> tel
{'guido': 4127, 'irv': 4127, 'jack': 4098}
>>> list(tel.keys())
['irv', 'guido', 'jack']
>>> sorted(tel.keys())
['guido', 'irv', 'jack']
>>> 'guido' in tel
True
>>> 'jack' not in tel
False

2.2.5 Tuples

A tuple is defined in the same way as a list, except that the whole set of elements is enclosed in parentheses instead of square brackets. The elements of a tuple have a defined order, just like a list. Although they don't support as many methods, tuples share most of their properties with lists. Tuple indices are zero-based, just like a list, so the first element of a non-empty tuple is always tuple[0]. Negative indices count from the end of the tuple, just like a list.

A tuple consists of a number of values separated by commas, for instance:

>>> t = 12345, 54321, 'hello!'
>>> t[0]
12345
>>> t
(12345, 54321, 'hello!')
>>> # Tuples may be nested:
... u = t, (1, 2, 3, 4, 5)
>>> u
((12345, 54321, 'hello!'), (1, 2, 3, 4, 5))
>>> # Tuples are immutable:
... t[0] = 88888
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> # but they can contain mutable objects:
... v = ([1, 2, 3], [3, 2, 1])
>>> v
([1, 2, 3], [3, 2, 1])


The major difference between tuples and lists is that tuples cannot be changed. In technical terms, tuples are immutable.[12]

2.2.6 Sets

A set is an unordered collection of unique elements. It is also an iterable, mutable data type. Python's set class represents the mathematical notion of a set. The major advantage of using a set is that it has a highly optimized method for checking whether a specific element is contained in the set. The set data type in Python is based on a hash table data structure, so set elements must be hashable.[6]

Because a set is a collection of unique objects, a basic use case is removing duplication:

>>> l = ['spam', 'spam', 'eggs', 'spam']
>>> set(l)
{'eggs', 'spam'}
>>> list(set(l))
['eggs', 'spam']

In addition to guaranteeing uniqueness, the set types implement the essential set operations as infix operators, so, given two sets a and b, a | b returns their union, a & b computes the intersection, and a - b the difference. Smart use of set operations can reduce both the line count and the run time of Python programs, at the same time making code easier to read and reason about, by removing loops and lots of conditional logic.[13]
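The infix operators just described can be sketched as follows (the values are illustrative):

```python
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

print(a | b)   # union: {1, 2, 3, 4, 5, 6}
print(a & b)   # intersection: {3, 4}
print(a - b)   # difference: {1, 2}

# Membership testing is fast, backed by the underlying hash table.
print(3 in a)  # True
```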

2.2.7 Statements

If statement

Perhaps the most well-known statement type is the if statement. For example:

>>> x = int(input("Please enter an integer: "))
Please enter an integer: 42
>>> if x < 0:
...     x = 0
...     print('Negative changed to zero')
... elif x == 0:
...     print('Zero')
... elif x == 1:
...     print('Single')
... else:
...     print('More')
...
More

There can be zero or more elif parts, and the else part is optional. The keyword elif is short for 'else if', and is useful to avoid excessive indentation. An if ... elif ... elif ... sequence is a substitute for the switch or case statements found in other languages.[15]


For statement

The for statement in Python differs a bit from what you may be used to in C or Pascal. Rather than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the user the ability to define both the iteration step and halting condition (as in C), Python's for statement iterates over the items of any sequence (a list or a string), in the order that they appear in the sequence. For example:

>>> # Measure some strings:
... words = ['cat', 'window', 'defenestrate']
>>> for w in words:
...     print(w, len(w))
...
cat 3
window 6
defenestrate 12

2.3 Functions

Functions are a means by which we can package up and parameterize functionality. Four kinds of functions can be created in Python: global functions, local functions, lambda functions, and methods.

Functions can be grouped into global functions and local functions. Global functions are accessible to any code in the same module (i.e., the same .py file) in which the function is created. Global functions can also be accessed from other modules. Local functions (also called nested functions) are functions that are defined inside other functions. These functions are visible only to the function where they are defined; they are especially useful for creating small helper functions that have no use elsewhere.
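A minimal sketch of a local (nested) helper function of the kind described above; the function and variable names here are purely illustrative:

```python
def format_names(names):
    # 'clean' is a local function: it is visible only inside
    # format_names, a small helper with no use elsewhere.
    def clean(name):
        return name.strip().title()
    return [clean(n) for n in names]

print(format_names(['  alice', 'BOB  ']))   # ['Alice', 'Bob']
```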

The general syntax for creating a (global or local) function is:

def functionName(parameter1, parameter2):
    """Your code goes here"""
    return

The return statement is not necessary at the end of a function if no value is to be returned; the Python interpreter will return None automatically. In addition, Python allows us to assign a default value to a parameter when defining the function. For example:

>>> def functionName(parameter1=1, parameter2=2):
...     """Your code goes here"""
...     return
...
>>> functionName(parameter2=2)


In this case, parameter1 takes its default value of 1, so we can invoke the function by passing parameter2 alone as a keyword argument, without any error.

Python provides many built-in functions, and the standard library and third-party libraries add hundreds more (thousands if we count all the methods), so in many cases the function we want has already been written. For this reason, it is always worth checking Python's online documentation to see what is already available.

2.4 Classes

Compared with other programming languages, Python's class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation.[15]

Defining a class in Python is simple. As with functions, there is no separate interface definition. Just define the class and start coding. A Python class starts with the reserved word class, followed by the class name. Technically, that's all that's required, since a class doesn't need to inherit from any other class. Just like def statements, class is a statement, so we can create classes dynamically if we want to.[12]

Here is an example of defining a custom class:

>>> class FirstClass:  # Define a class object
...     def setdata(self, value):  # Define class's methods
...         self.data = value  # self is the instance
...     def display(self):
...         print(self.data)  # self.data: per instance

Functions inside a class are usually called methods. They are coded with normal defs, and they support everything we have learned about functions already (they can have defaults, return values, yield items on request, and so on).[11]
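Continuing the FirstClass example, instances are created by calling the class, and each instance carries its own data attribute; a short sketch (the sample values are illustrative):

```python
class FirstClass:
    def setdata(self, value):
        self.data = value      # self.data is stored per instance
    def display(self):
        print(self.data)

x = FirstClass()               # make two instances;
y = FirstClass()               # each gets its own namespace
x.setdata("King Arthur")       # calling a method through an instance
y.setdata(3.14159)             # binds self to that instance
x.display()                    # King Arthur
y.display()                    # 3.14159
```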


Chapter 3: The Experimental Platform: Kubernetes

3.1 Introduction

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.

Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.

Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.

3.2 Components

3.2.1 Master Components

Master components provide the cluster's control plane. Master components make global decisions about the cluster (for example, scheduling), and detect and respond to cluster events (such as starting up a new pod when a replication controller's 'replicas' field is unsatisfied).

Master components can be run on any machine in the cluster. However, for simplicity, set-up scripts typically start all master components on the same machine, and do not run user containers on this machine.

kube-apiserver

The component on the master that exposes the Kubernetes API. It is the front end for the Kubernetes control plane.


etcd

A consistent and highly available key-value store used as Kubernetes' backing store for all cluster data.

kube-scheduler

The component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.

kube-controller-manager

Component on the master that runs controllers.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

These controllers include the Node Controller, Replication Controller, Endpoints Controller, and Service Account & Token Controllers.

cloud-controller-manager

cloud-controller-manager runs controllers that interact with the underlying cloud providers and cloud-provider-specific controller loops. cloud-controller-manager allows cloud vendors' code and the Kubernetes core to evolve independently of each other.

The following controllers have cloud provider dependencies: Node Controller, Route Controller, Service Controller, and Volume Controller.

3.2.2 Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.

kube-proxy

kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.

Container Runtime

The container runtime is the software that is responsible for running containers. Kubernetes supports several runtimes: Docker, rkt, runc and any OCI runtime-spec implementation.


3.2.3 Addons

Addons are pods and services that implement cluster features. The pods may be managed by Deployments, ReplicationControllers, and so on. Namespaced addon objects are created in the kube-system namespace.

DNS

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

Web UI (Dashboard)

Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

Container Resource Monitoring

Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.

3.3 Architecture

3.3.1 Node

A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components. The services on a node include Docker, kubelet and kube-proxy.

Unlike pods and services, a node is not inherently created by Kubernetes: it is created externally by cloud providers like Google Compute Engine, or exists in your pool of physical or virtual machines. What this means is that when Kubernetes creates a node, it is really just creating an object that represents the node. After creation, Kubernetes will check whether the node is valid or not.

3.3.2 Master Node Communications

The communication paths between the master (really the apiserver) and the Kubernetes cluster allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider).


Cluster Talks To Master

All communication paths from the cluster to the master terminate at the apiserver (none of the other master components are designed to expose remote services). In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443) with one or more forms of client authentication enabled. One or more forms of authorization should be enabled, especially if anonymous requests or service account tokens are allowed.

As a result, connections from the cluster (nodes and pods running on the nodes) to the master are secured by default and can run over untrusted and/or public networks.

Master Talks To Cluster

There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver's proxy functionality.

3.3.3 Cloud Controller Manager

The cloud controller manager (CCM) concept was originally created to allow cloud-specific vendor code and the Kubernetes core to evolve independently of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and the scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes.

The cloud controller manager's design is based on a plugin mechanism that allows new cloud providers to integrate with Kubernetes easily by using plugins.

Figure 3.1: Architecture of A Kubernetes Cluster With Cloud Controller Manager


3.4 Workloads

3.4.1 Pods

A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.

Pods in a Kubernetes cluster can be used in two main ways:

• Pods that run a single container.

• Pods that run multiple containers that need to work together.
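As a sketch of the single-container case, a minimal Pod manifest might look like the following (the object name and container image are illustrative, not taken from this project; the manifest style matches the Job example shown later in this chapter):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9      # the single application container
    ports:
    - containerPort: 80
```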

3.4.2 Controllers

ReplicaSet

ReplicaSet is the next-generation Replication Controller. ReplicaSet supports the new set-based selector requirements as described in the labels user guide, whereas a Replication Controller only supports equality-based selector requirements.

ReplicationController

A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.

Deployments

A Deployment controller provides declarative updates for Pods and ReplicaSets.

StatefulSets

StatefulSet is the workload API object used to manage stateful applications.

DaemonSet

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

• running a cluster storage daemon, such as glusterd, ceph, on each node.

• running a logs collection daemon on every node, such as fluentd or logstash.

• running a node monitoring daemon on every node, such as Prometheus Node Exporter,collectd, Datadog agent, New Relic agent, or Ganglia gmond.

In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and cpu requests for different hardware types.

Garbage Collection

The role of the Kubernetes garbage collector is to delete certain objects that once had an owner, but no longer have an owner.

Jobs

A Job creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will clean up the pods it created.

A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).

A Job can also be used to run multiple pods in parallel.

Here is an example Job config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

We can run the example job by running the following command:


$ kubectl create -f ./job.yaml
job "pi" created

We can also check the job status by running the following command:

$ kubectl describe jobs/pi
Name:           pi
Namespace:      default
Selector:       controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
Labels:         controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
                job-name=pi
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Tue, 07 Jun 2016 10:56:16 +0200
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:       controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
                job-name=pi
  Containers:
   pi:
    Image:      perl
    Port:
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen  LastSeen  Count  From              Reason
  ---------  --------  -----  ----              ------
  1m         1m        1      {job-controller}  Normal

CronJob

A CronJob manages time-based Jobs, namely:

• Once at a specified point in time

• Repeatedly at a specified point in time

One CronJob object is like one line of a crontab (cron table) file. It runs a Job periodically on a given schedule, written in Cron format.

A typical use case is:

• Schedule a job execution at a given point in time.

• Create a periodic job, e.g. database backup, sending emails.
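Following the style of the Job config above, a CronJob config has the same shape with a Cron-format schedule added. The sketch below is illustrative only: the name, image, command, and schedule are hypothetical, and the apiVersion (batch/v1beta1 here) depends on the cluster version.

```yaml
apiVersion: batch/v1beta1       # version-dependent; batch/v1beta1 as of Kubernetes 1.8
kind: CronJob
metadata:
  name: nightly-backup          # hypothetical name
spec:
  schedule: "0 2 * * *"         # Cron format: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo running backup"]
          restartPolicy: OnFailure
```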


Chapter 4: Compressive Sensing Algorithm

4.1 Introduction

The traditional approach of reconstructing signals or images from measured data follows the well-known Shannon sampling theorem, which states that the sampling rate must be twice the highest frequency. Similarly, the fundamental theorem of linear algebra suggests that the number of collected samples of a discrete finite-dimensional signal should be at least as large as its length in order to ensure reconstruction. This principle underlies most devices of current technology, such as analog-to-digital conversion, medical imaging, or audio and video electronics. The novel theory of compressive sensing (also known under the terminology of compressed sensing, compressive sampling or sparse recovery) provides a fundamentally new approach to data acquisition which overcomes this common wisdom.[10] It predicts that certain signals or images can be recovered from what was previously believed to be highly incomplete measurements, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Shannon-Nyquist sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the restricted isometry property, a sufficient condition for recovering sparse signals.[2]

In many practical problems of science and technology, one encounters the task of inferring quantities of interest from measured information. For instance, in signal and image processing, one would like to reconstruct a signal from measured data. When the information acquisition process is linear, the problem reduces to solving a linear system of equations. In mathematical terms, the observed data y ∈ C^m is connected to the signal x ∈ C^N of interest via

Ax = y. (4.1)

The matrix A ∈ C^{m×N} models the linear measurement process. Then one tries to recover the vector x ∈ C^N by solving the above linear system. Traditional wisdom suggests that the number m of measurements, i.e., the amount of measured data, must be at least as large as the signal length N. This principle is the basis for most devices used in current technology, such as analog-to-digital conversion, medical imaging, radar, and mobile communication. Indeed, if m < N, then classical linear algebra indicates that the linear system (4.1) is underdetermined and that there are infinitely many solutions (provided, of course, that there exists at least one). In other words, without additional information, it is impossible to recover x from y in the case m < N. This fact also relates to the Shannon sampling theorem, which states that the sampling rate of a continuous-time signal must be twice its highest frequency in order to ensure reconstruction.
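The underdetermination argument can be checked numerically: when m < N, any direction in the null space of A can be added to one solution of (4.1) without changing the measurements. A minimal NumPy sketch (the sizes and the random matrix are arbitrary illustrations, not part of the original text):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
m, N = 10, 30                       # fewer measurements than unknowns (m < N)
A = rng.standard_normal((m, N))     # measurement matrix A
x = rng.standard_normal(N)          # some signal
y = A @ x                           # observed data

# One particular solution of A x = y (the minimum-norm least-squares solution).
x_p = np.linalg.lstsq(A, y, rcond=None)[0]

# Any direction n in the null space of A yields another solution x_p + t*n,
# since A n = 0: the system has infinitely many solutions.
n = null_space(A)[:, 0]
x_other = x_p + 2.0 * n             # a second, different solution
```

Both x_p and x_other reproduce the same data y, which is exactly why additional information (such as sparsity) is needed to single out the intended signal.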


4.2 History

Compressed sensing relies on L1 techniques, which several other scientific fields have used historically.[4] In statistics, the least squares method was complemented by the L1-norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the L1-norm was used in computational statistics. In statistical theory, the L1-norm was used by George W. Brown and later writers on median-unbiased estimators. It was used by Peter J. Huber and others working on robust statistics. The L1-norm was also used in signal processing, for example in the 1970s, when seismologists constructed images of reflective layers within the earth based on data that did not seem to satisfy the Nyquist-Shannon criterion.[1] It was used in matching pursuit in 1993, the LASSO estimator by Robert Tibshirani in 1996[14] and basis pursuit in 1998.[9] There were theoretical results describing when these algorithms recovered sparse solutions, but the required type and number of measurements were sub-optimal and subsequently greatly improved by compressed sensing.

At first glance, compressed sensing might seem to violate the sampling theorem, because compressed sensing depends on the sparsity of the signal in question and not its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. Sparse signals with high frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling.[7]

4.3 Method

4.3.1 Sparse Solutions of Underdetermined Systems

Given a signal x ∈ R^N, we consider a measurement system that acquires M linear measurements, with M < N. We can represent this process mathematically as:

y = Φx (4.2)

In practical instances, the vector x may be the coefficients of a signal f ∈ R^N in an orthonormal basis Ψ:

f(t) = ∑_{i=1}^{N} x_i ψ_i(t),    t = 1, 2, ..., N.    (4.3)

For example, we might choose to expand the signal as a superposition of spikes, sinusoids, B-splines, wavelets [36], and so on. As a side note, it is not important to restrict attention to orthogonal expansions, as the theory and practice of compressive sampling accommodates other types of expansions. For example, x might be the coefficients of a digital image in a tight frame of curvelets [5]. To keep on using convenient matrix notations, one can write the decomposition (4.3) as f = Ψx, where Ψ is the N by N matrix with the waveforms ψ_i as columns, or equivalently x = Ψ^T f.[8]
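Equation (4.3) is easy to exercise with a concrete orthonormal basis. The sketch below uses the orthonormal DCT-II matrix as Ψ (an arbitrary choice for illustration; the sizes and sparse coefficients are likewise assumptions) and checks that synthesis and analysis round-trip:

```python
import numpy as np
from scipy.fft import dct

N = 64
# Orthonormal DCT-II transform matrix; its columns serve as the waveforms ψ_i.
Psi = dct(np.eye(N), norm='ortho', axis=0)

# A 3-sparse coefficient vector x: the signal f is sparse in the Ψ-domain.
x = np.zeros(N)
x[[3, 17, 40]] = [1.0, -2.0, 0.5]

f = Psi @ x          # synthesize f(t) = sum_i x_i ψ_i(t), as in (4.3)
x_back = Psi.T @ f   # orthonormality recovers the coefficients exactly
```

Since Ψ is orthogonal, Ψ^T Ψ = I, so x_back equals x to machine precision.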

We will say that a signal f is sparse in the Ψ-domain if the coefficient sequence is supported on a small set, and compressible if the sequence is concentrated near a small set. Suppose we have available undersampled data about f of the same form as before


y = Φ f . (4.4)

Expressed in a different way, we collect partial information about x via y = Φ′x, where Φ′ = ΦΨ. In this setup, one would recover f by finding, among all coefficient sequences consistent with the data, the decomposition with minimum ℓ1-norm:

min ‖x‖_{ℓ1} such that Φ′x = y
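This ℓ1 minimization (basis pursuit) can be posed as a linear program by splitting x into its positive and negative parts, x = u − v with u, v ≥ 0. The sketch below uses scipy.optimize.linprog; the problem sizes, the random Gaussian matrix, and the chosen sparse signal are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, N = 20, 50
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # random measurement matrix

x_true = np.zeros(N)                              # a 3-sparse signal
x_true[[5, 22, 41]] = [1.5, -2.0, 0.7]
y = Phi @ x_true                                  # undersampled data (m < N)

# Basis pursuit: min ||x||_1 s.t. Phi x = y, with x = u - v, u, v >= 0,
# so the objective sum(u) + sum(v) equals the l1-norm at the optimum.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=(0, None), method='highs')
x_hat = res.x[:N] - res.x[N:]
```

With these sizes (20 Gaussian measurements of a 3-sparse vector in dimension 50), ℓ1 minimization typically recovers x_true exactly, illustrating recovery well below the Nyquist count.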

4.3.2 Sparse Signal Recovery

Given noisy compressive measurements y = Φx + e of a signal x, a core problem in compressive sensing is to recover a sparse signal x from a set of measurements y. Considerable efforts have been directed towards developing algorithms that perform fast, accurate, and stable reconstruction of x from y. A good compressive sensing matrix Φ typically satisfies certain geometric conditions, such as the restricted isometry property. Practical algorithms exploit this fact in various ways in order to drive down the number of measurements, enable faster reconstruction, and ensure robustness to both numerical and stochastic errors.

The design of sparse recovery algorithms is guided by various criteria. Some important ones are listed as follows.

• Minimal number of measurements. Sparse recovery algorithms must require approximately the same number of measurements required for the stable embedding of K-sparse signals.

• Robustness to measurement noise and model mismatch. Sparse recovery algorithms must be stable with regard to perturbations of the input signal, as well as noise added to the measurements; both types of errors arise naturally in practical systems.

• Speed. Sparse recovery algorithms must strive towards expending minimal computational resources, since many applications in compressive sensing deal with very high-dimensional signals.

• Performance guarantees. We can choose to focus on algorithm performance for the recovery of exactly K-sparse signals x, or consider performance for the recovery of general signals x. Alternately, we can also consider algorithms that are accompanied by performance guarantees in either the noise-free or noisy settings.
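As one concrete instance of such an algorithm, the greedy Orthogonal Matching Pursuit (OMP) method recovers a K-sparse signal by repeatedly selecting the column of Φ most correlated with the current residual and re-fitting by least squares. The sketch below is a minimal illustration; the sizes, random matrix, and sparse signal are assumptions:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy K-sparse recovery from y = Phi x."""
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit y on the selected columns by least squares.
        coeffs = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(42)
Phi = rng.standard_normal((25, 50)) / np.sqrt(25)
x_true = np.zeros(50)
x_true[[4, 19, 33]] = [2.0, -1.0, 0.5]
x_hat = omp(Phi, Phi @ x_true, k=3)
```

OMP is much faster than ℓ1 minimization for small K, at the cost of weaker worst-case guarantees, which is exactly the speed-versus-guarantees trade-off listed above.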

4.4 Application

Compressive sensing can be used in all applications where the task is the reconstruction of a signal or an image from linear measurements, and where there is reason to believe that the signal is sparse in a suitable basis.

Magnetic Resonance Imaging

MRI is a medical imaging technique used in radiology to visualise detailed internal structures. In MRI, samples are collected directly in the Fourier frequency domain (k-space) of the object. The scan time in MRI is proportional to the number of Fourier coefficients. Using compressive sensing techniques, the number of samples and the scan time can be reduced.

Further applications include analogue-to-digital conversion, single-pixel imaging, data compression, astronomical signal processing, geophysical data analysis and compressive radar imaging. The point of compressive sensing is that even though the amount of data is very small, we can have most of the information contained in the object. Thus, compressive sensing has many potential applications in various fields.


Chapter 5: Cloud Monitoring and Tools

5.1 Cloud Monitoring

Monitoring is one of the operational tasks in a cloud-based environment for fault detection, correction, and system maintenance. For example, utilization of servers and storage capacity may be regularly monitored. Monitoring of data may be useful for short-term management as well as for long-term capacity planning. Machine images run from the service catalog may also need to be monitored. Systems administrators may need to know which applications are used frequently. Monitoring may also include security monitoring, such as monitoring user activity, suspicious events, authentication failures or repeated unauthorized access attempts, and scanning of inbound and outbound network traffic.

At its very simplest, monitoring is a three-stage process illustrated by Figure 1: the collection of relevant state, the analysis of the aggregated state, and decision making as a result of the analysis. The more trivial monitoring tools are simple programs which interrogate system state, such as the UNIX tools df, uptime or top. These tools are run by a user who in turn analyses the system state and makes an informed decision as to what action, if any, to take. Thus, in fact, the user is performing the vast majority of the monitoring process, not the software. As computing systems continue to grow in size and complexity, there is an increasing need for automated tools to perform monitoring with a reduced, or removed, need for human interaction. These systems implement all or some of the three-stage monitoring process. Each of these stages has its own challenges, especially with regard to cloud computing.[17]
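The collect, analyse, decide stages can be sketched in a few lines of Python. Disk utilization via the standard library stands in for "relevant state", and the 90% threshold is an arbitrary illustrative policy, not something prescribed by the text:

```python
import shutil

def collect(path="/"):
    """Stage 1: gather relevant state (here, disk utilization of one mount)."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def analyse(utilisation, threshold=0.90):
    """Stage 2: analyse the collected state against a policy."""
    return utilisation >= threshold

def decide(over_threshold):
    """Stage 3: make a decision based on the analysis."""
    return "alert" if over_threshold else "ok"

action = decide(analyse(collect()))
```

An automated monitor simply runs this loop on a schedule; the tools discussed below differ mainly in how richly they implement each of the three stages.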

5.2 Heapster

Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' Kubelets, the on-machine Kubernetes agents. The Kubelet itself fetches the data from cAdvisor. Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include InfluxDB (with Grafana for visualization) and Google Cloud Monitoring. The overall architecture of the service can be seen below:


Figure 5.1: Heapster Monitoring Architecture in Kubernetes

5.3 cAdvisor

cAdvisor is an open source container resource usage and performance analysis agent. It is purpose-built for containers and supports Docker containers natively. In Kubernetes, cAdvisor is integrated into the Kubelet binary. cAdvisor auto-discovers all containers on the machine and collects CPU, memory, filesystem, and network usage statistics. cAdvisor also provides the overall machine usage by analyzing the 'root' container on the machine.

On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194.Here is a snapshot of part of cAdvisor’s UI that shows the overall machine usage:


Figure 5.2: cAdvisor Dashboard

5.4 Collectd

collectd is a daemon which collects system and application performance metrics periodically and provides mechanisms to store the values in a variety of ways. collectd gathers metrics from various sources, e.g. the operating system, applications, logfiles and external devices, and stores this information or makes it available over the network. Those statistics can be used to monitor systems, find performance bottlenecks (i.e. performance analysis) and predict future system load (i.e. capacity planning). It is written in C for performance and portability, allowing it to run on systems without a scripting language or cron daemon, such as embedded systems.

Here’s a graph showing the CPU utilization of a system over the last 60 minutes:

Figure 5.3: collectd 60 minutes CPU utilization

5.5 Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company.[5]

5.5.1 Features

• a multi-dimensional data model with time series data identified by metric name and key/value pairs

• a flexible query language to leverage this dimensionality

• no reliance on distributed storage; single server nodes are autonomous

• time series collection happens via a pull model over HTTP

• pushing time series is supported via an intermediary gateway

• targets are discovered via service discovery or static configuration

• multiple modes of graphing and dashboarding support


5.5.2 Components

• the main Prometheus server which scrapes and stores time series data

• client libraries for instrumenting application code

• a push gateway for supporting short-lived jobs

• special-purpose exporters for services like HAProxy, StatsD, Graphite, etc.

• an alertmanager to handle alerts

• various support tools

Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts.

5.6 InfluxDB

InfluxDB is used as a data store for any use case involving large amounts of timestamped data, including DevOps monitoring, application metrics, IoT sensor data, and real-time analytics. Space on the machine can be conserved by configuring InfluxDB to keep data for a defined length of time, automatically expiring and deleting any unwanted data from the system. InfluxDB also offers a SQL-like query language for interacting with data.[3]


Figure 5.4: TICK Stack Architecture

5.6.1 High Performance

InfluxDB is a custom high-performance data store written specifically for time series data. It allows for high-throughput ingest, compression and real-time querying of that same data. InfluxDB is written entirely in Go and compiles into a single binary with no external dependencies. InfluxDB provides a high performance write and query HTTP(S) API and supports plugins for data ingestion protocols like Telegraf, Graphite, collectd, and OpenTSDB.

5.6.2 SQL-Like Queries

InfluxDB provides InfluxQL as a SQL-like query language for interacting with your data. It has been lovingly crafted to feel familiar to those coming from other SQL or SQL-like environments, while also providing features specific to storing and analyzing time series data. InfluxQL also supports regular expressions, arithmetic expressions, and time series specific functions to speed up data processing.


5.6.3 Downsampling and Data Retention

InfluxDB can handle millions of data points per second. Working with that much data over a long period of time can create storage concerns. A natural solution is to downsample the data: keeping the high-precision raw data for only a limited time, and storing the lower-precision, summarized data for much longer or forever.
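The downsampling idea (replace runs of raw samples with a summary statistic) can be illustrated independently of InfluxDB. The sketch below averages each bucket of raw points; the bucket size and sample values are arbitrary assumptions:

```python
def downsample(points, bucket):
    """Replace each run of `bucket` raw samples with their mean."""
    return [sum(points[i:i + bucket]) / len(points[i:i + bucket])
            for i in range(0, len(points), bucket)]

raw = [1.0, 3.0, 2.0, 4.0, 10.0]
summary = downsample(raw, bucket=2)   # -> [2.0, 3.0, 10.0]
```

A retention policy then keeps `raw` only briefly while `summary` (half the size here, and far smaller for realistic bucket sizes) is kept long-term.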

5.7 Grafana

Grafana is an open source, feature-rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus and InfluxDB. Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture.

Figure 5.5: Grafana Dashboard


Bibliography

[1] The best bits | American Scientist. https://www.americanscientist.org/article/the-best-bits. (Accessed on 10/27/2017).

[2] Compressed sensing - Wikipedia. https://en.wikipedia.org/wiki/Compressed_sensing. (Accessed on 10/26/2017).

[3] InfluxDB | The time series database in the TICK stack. https://www.influxdata.com/time-series-platform/influxdb/. (Accessed on 10/30/2017).

[4] Slides: goyal_talk.pdf. https://faculty.math.illinois.edu/~laugesen/imaha10/goyal_talk.pdf. (Accessed on 10/26/2017).

[5] Overview | Prometheus. https://prometheus.io/docs/introduction/overview/. (Accessed on 10/30/2017).

[6] Sets in Python - GeeksforGeeks. https://www.geeksforgeeks.org/sets-in-python/. (Accessed on 10/27/2017).

[7] Emmanuel J. Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.

[8] Emmanuel J. Candès and Michael B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21–30, 2008.

[9] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.

[10] Massimo Fornasier and Holger Rauhut. Compressive sensing. In Handbook of Mathematical Methods in Imaging, pages 187–228. Springer, 2011.

[11] Mark Lutz. Learning Python, 5th edition, 2011.

[12] Mark Pilgrim and Simon Willison. Dive Into Python 3, volume 2. Springer, 2009.

[13] Luciano Ramalho. Fluent Python: Clear, Concise, and Effective Programming. O'Reilly Media, Inc., 2015.

[14] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.

[15] Guido van Rossum and Fred L. Drake Jr. Python Tutorial. Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands, 1995.

[16] Guido van Rossum et al. Python programming language. In USENIX Annual Technical Conference, volume 41, page 36, 2007.

[17] Jonathan Stuart Ward and Adam Barker. Observing the clouds: a survey and taxonomy of cloud monitoring. Journal of Cloud Computing, 3(1):24, 2014.
