Registration number 100160371
2019
Ethical Hacking Trainer - Project Portfolio
Supervised by Dr Oliver Buckley
University of East Anglia
Faculty of Science
School of Computing Sciences
Abstract
As the rate of cyber crime increases, so too does the demand for cyber professionals to
help combat this threat. Many educational tools have been created and used to help train
the next generation of cybersecurity professionals, including testbeds and competitive
platforms for users to test and hone skills with. Despite the large range of solutions
available, they are often impractical for use in the teaching and training of students
due to their complexity, required maintenance and limited supporting educational materials.
This report details the planning and development of a testbed to aid teachers in the
teaching and training of application-layer cyber attacks using exercises and challenges,
using gamification as a technique to increase effectiveness.
Acknowledgements
I would like to thank Dr Oliver Buckley for his help and support throughout this project,
and for giving me opportunities to further develop my skills.
CMP-6013Y
Contents
1 Introduction
1.1 Cyber security education background
1.2 Related Work
1.2.1 The DETER Project
1.2.2 CyTrONE
1.3 Gamification
1.4 Simple Cyber Vulnerabilities to be covered in the project
2 Design of Software System
2.1 MoSCoW Requirements
2.2 Development Methodology
2.3 UML
2.4 Local Exercise Testbed
2.4.1 Exercise Testbed GUI
2.4.2 Exercise-specific tools
2.4.3 Exercise Testbed Technical Documentation
2.4.4 Exercise design
2.4.5 Integration with Central Server
2.5 Central Website
2.5.1 Account and Session Management
2.5.2 Viewing learning resources
2.5.3 Submitting progress
2.5.4 Implementation of Gamification Features
2.6 Changes to Design
3 System Implementation
3.1 Exercise Testbed
3.2 User Emulation
3.3 Adding new exercises
3.4 Central Website
4 Discussion, evaluation and conclusion
4.1 Testing and project success evaluation
4.2 Limitations
4.3 Future Work
4.4 Conclusion
References
List of Figures
1 DETERLab Virtual Machine configuration for a DDOS exercise
2 Architecture of the CyTrONE cybersecurity training framework
3 The effects of Gamification on a 3 day conventional word learning course, taken from Vaibhav and Gupta (2014)
4 Simple Use Case diagram showing user interaction with the system
5 Exercise testbed for launching exercises
6 Basic file structure for exercise testbed
7 Exercise Testbed GUI Database
8 Sequence diagram for launching exercises from exercise testbed
9 User emulation used for XSS and CSRF exercises
10 Class Diagram for exercise testbed
11 Database UML for exercises in exercise testbed
12 Syntax of keys in the system
13 Use Case diagram depicting user interaction with the central server
14 Database UML for Central Website
15 A simple example of the leaderboard
16 Exercise Testbed
17 User Emulation Window for Exercise Testbed
18 Example error message created by the new exercise GUI
19 GUI for adding custom exercises to the exercise testbed
20 Centrally managed website for tracking user progress, not logged in
21 Centrally managed website for tracking user progress, logged in
List of Tables
1 MoSCoW requirements analysis, agreed upon with Dr Oliver Buckley
2 Exercise & corresponding environments to be implemented
3 Simple examples of achievements that will be added to the system
4 Revised MoSCoW requirements analysis
1 Introduction
Cyber crime has existed since the dawn of the internet, and has appeared in many forms
with many different motives throughout the internet’s development. As the underlying
technology evolves and becomes more complex, so too have cyber attacks, and as more
vulnerabilities in software and protocols are found every year, cyber security has become
a greater issue around the globe. To combat this, educational resources and
testbeds have been created to provide a platform for people to learn these skills. The aim
of this project is to produce a simple cyber security testbed to be used in university labs,
to aid students in learning about application-layer cyber security attacks, and to give
those students a practical introduction to and experience in attacking web servers, within
a safe, fun and engaging environment. Furthermore, the project aims to implement a successful
learning environment through different channels of motivational affordances, also known as
gamification. This report will cover related research, design, implementation and the
outcome of the software system, as well as discussion of its success and future work.
1.1 Cyber security education background
Education in cyber security has lagged behind cyber adversaries since the start of the
world wide web. Attacks on US government systems increased by more than 650% between
2001 and 2006 (Conklin et al., 2014), to which the then President of the United States, George
W. Bush, responded by launching an initiative to expand cyber education (The White
House, 2008). Cyber threats are growing at an exponential rate; however, we are still
struggling to educate enough people to fill cyber security roles. A global shortage of
“1.8 million cyber security professionals by the year 2022” has been predicted (GISWS,
2017), so to combat this, educational institutions are using new techniques and tools to
attempt to meet the demand. Research has been conducted into different learning
approaches, such as challenge-based cyber education (Cheung et al., 2011), gamification
(Jovi Umawing, 2018), and game-like cyber attack simulations (HM Government, 2015).
1.2 Related Work
In the last few years, cyber security has become a global issue, and with it, so has
the number of testbeds for training and research purposes. Many different testbeds exist
for many different target audiences, and they can be split into three main categories: Capture
the Flag (CTF) systems and competitions, research-based testbeds, and educational
testbeds. In the first category sit systems such as Google CTF (Google), DEFCON
CTF (DEFCON), CTF365 (CTF365), and many more. These systems provide users
with challenges of increasing difficulty, often in a race against each other in compet-
itive environments. Research testbeds are often used for malware analysis, as well as
research into critical infrastructure such as SCADA systems (Davis et al., 2006). The
last category includes testbeds that are used in conjunction with the passive, traditional
method of teaching via lectures and textbooks. Examples
include the DETER Project (Benzel, 2011) and CyTrONE (Beuran et al., 2017).
1.2.1 The DETER Project
DETER, as an organisation, provide frameworks for experimental research to help com-
bat asymmetric cyber warfare. Cyber criminals have the entire world as their testbed,
with endless time and largely undetected activity, while security researchers have limited
experimental environments and limited time. This imbalance is what DETER aims to
combat, using large-scale UNIX virtual machines (DETERLab) with custom
emulation software based on the Emulab (Emulab, 2002) testbed. An example of
this can be seen in figure 1, where a network has been emulated in preparation for an ex-
ercise on DDOS attacks. As well as research, they provide security education tools for
colleges and universities. These tools include teaching materials, student progress mon-
itoring, homework/project assignments and access to the DETERLab system. Although
these teaching materials and exercises are predominantly on lower-level vulnerabilities,
their structure of learning materials is of great relevance to this project. Although not
all identical, most exercises include an overview of the learning outcomes, background
information on the topic, helpful examples, additional reading, and guidance in setting
up the DETERLab environment. As each exercise is written individually, some exer-
cises include small snippets of humour (such as a topic-related comic), giving students a
more personal connection with the exercise. One downside of the educational materials
provided by DETER is that to use them in conjunction with the DETERLab testbed (as
intended), an educational institution must register on the user’s behalf. This excludes those
studying outside of colleges and universities.
Figure 1: DETERLab Virtual Machine configuration for a DDOS exercise
1.2.2 CyTrONE
CyTrONE (Cybersecurity Training and Operation Network Environment) addresses the
tedious and error-prone manual setup and configuration required for hands-on exercises.
It is an integrated, open-source cybersecurity training framework that automates
the exercise content generation and environment setup tasks. The framework is designed
to accommodate a range of abilities, as well as allowing the modification of exercises
and the addition of new training content.
CyTrONE uses the open source learning management system Moodle (Moodle, 2002)
to provide learning materials on each exercise, as well as tracking student engagement.
This can be seen in CyTrONE’s system architecture, in figure 2.
Figure 2: Architecture of the CyTrONE cybersecurity training framework
The underlying issue with these larger testbeds, especially educational ones, is summarised
well in a paper written at the Austrian Institute of Technology Center for Digital Safety
and Security:
"Testbeds have been well established within the information security community (e.g.,
malware analysis, cyber security experimentation, etc.). However, these testbeds often
require a certain level of maintenance or resources and were therefore not often used
in non-expert communities." - Frank et al. (2017)
This restricts smaller courses and modules to using self-made learning environments, or
often none at all.
1.3 Gamification
Although gamification has existed in some form since the early 20th century (Lloyd,
2014), the first gamification summit was held only in 2011 (Orland, 2010). As of today, there
remain very few educational cyber security testbeds that incorporate elements of gamification.
Those that do, such as CTF competitions (Google, CTF365, DEFCON), are
unsuitable as teaching tools to be used in conjunction with lectures and textbooks due to
their steep learning curves and overwhelming complexity, with no additional resources
to aid users.
terials outside the realm of information security. Research (Vaibhav and Gupta, 2014)
has assessed the effectiveness of gamification on online educational courses, and found
an increased pass rate of 28% on a 3 day conventional word learning course. Although
it does not discuss how this gamification was integrated into their learning environment
directly, the study shows a definite increase in pass rates and in user engagement and
interest in the course (figure 3). Literature reviews on gamification (Conklin et al., 2014)
shed light on the types of gamification being used in industry, highlighting points-based
leaderboards and the combination of achievements and badges as the most popular
in research. Although these reviews do not discuss the effectiveness of these types of
gamification directly, they catalogue a wide variety of possible gamification techniques.
Figure 3: The effects of Gamification on a 3 day conventional word learning course,
taken from Vaibhav and Gupta (2014)
1.4 Simple Cyber Vulnerabilities to be covered in the project
The application-layer vulnerabilities relevant to my system are based on the OWASP
Top 10 (OWASP, 2001). The Open Web Application Security Project (OWASP) is a
non-profit charitable organization that focuses on improving the security of software. As
well as supporting a number of open-source projects, they maintain a “Top 10” list of
the most critical security risks to web applications, which is updated every few years
(the most recent iteration being 2017).
“The OWASP Top 10 - 2017 is based primarily on 40+ data submissions from firms
that specialize in application security and an industry survey that was completed by
over 500 individuals. [...] The Top 10 items are selected and prioritized according
to this prevalence data, in combination with consensus estimates of exploitability,
detectability, and impact” (OWASP, 2001)
Below we will discuss the categories in OWASP’s Top 10 2017 relevant to my project,
what vulnerabilities are labelled under each category and how each vulnerability works.
A1:2017 – Injection
While this category technically includes other forms of injection, the relevant application-layer
attack is SQL injection, as it is the most common of this type. SQL
injection attacks occur when an attacker enters malicious SQL statements through an
unfiltered HTML entry field, which are then executed against the database. As well as
sending commands to the SQL server, attackers are often able to get results back through
the page they attacked from. Attackers are thus able to view, edit, create and delete
sensitive data, as well as execute administration operations on the database and, in
some cases, issue commands to the operating system.
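As an illustration of the flaw described above, the following sketch (hypothetical table and data, standard library only) shows a login check that concatenates user input straight into a SQL string, and the payload that defeats it:

```python
import sqlite3

# Hypothetical in-memory users table standing in for a real application's
# database; the attack works the same way against any SQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # User input is concatenated straight into the SQL string with no
    # filtering or parameterisation -- the flaw behind SQL injection.
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

print(login_vulnerable("alice", "wrong"))  # [] -- wrong password
# The payload closes the string and comments out the password check,
# so the query matches alice's row regardless of password:
print(login_vulnerable("alice' --", "anything"))  # [('alice', 's3cret')]
```

Parameterised queries (e.g. `conn.execute("... WHERE username = ?", (username,))`) remove the vulnerability entirely, since input is never spliced into the SQL text.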
A2:2017 – Broken Authentication
Broken Authentication is a broad category of vulnerabilities; however, session hijacking
and brute force attacks are the most common application-layer attacks, and are therefore
the authentication-related attacks relevant to this project. Session hijacking is the
exploitation of a web session control mechanism (OWASP, 2001). It comes in many
different flavours, but most commonly appears as predictable session tokens, session
sniffing, and Cross Site Scripting (addressed below).
The first of these occurs when a session ID is made up of predictable,
unencrypted personal data. An attacker who knows the correct personal information
about the victim and the syntax of the session ID can then work out that victim’s
session ID.
Session sniffing is the process of monitoring a network for unencrypted HTTP traffic,
and viewing what could be another user’s sensitive information (including but not limited
to session IDs). Brute force attacks involve an attacker trying all possible values
until a successful one is reached, such as trying character or word combinations to crack
a password. Although brute force is less common against a web server
directly, a website can still be vulnerable if precautions are not taken (password reset
codes, for example).
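A minimal sketch of the brute force idea above, assuming a hypothetical short lowercase reset code as the target:

```python
import itertools
import string

def brute_force(check, charset=string.ascii_lowercase, max_len=4):
    """Try every candidate string up to max_len until check() accepts one."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if check(candidate):
                return candidate
    return None

# Hypothetical target: a short lowercase reset code. The search space grows
# exponentially with length, which is why rate limiting and lockouts matter.
secret = "abc"
print(brute_force(lambda guess: guess == secret))  # 'abc'
```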
A7:2017 – Cross-Site Scripting (XSS)
Cross Site Scripting (XSS) occurs when a web server allows untrusted and unfiltered
user input onto a web page. An attacker is then able to exploit this to inject malicious
scripts (usually in the form of JavaScript) into the page. XSS comes in two flavours:
persistent and reflective. Persistent XSS is where the user input is stored on the web server
(usually in a database). Whenever this data is retrieved from the database
and displayed on the web page, the unescaped code will run in the client’s browser.
Reflective XSS differs from persistent XSS in that the malicious script is not stored in the
database. An example of this is a web page with a search function, where the search
term that the user entered is displayed on the page with the results.
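To make the reflective case concrete, this sketch (illustrative function names, no web framework) shows a search page that reflects its input unescaped, and an escaped variant that neutralises the payload:

```python
import html

# Illustrative sketch of the search-page example above, with no web
# framework: the page template simply interpolates the search term.
def search_page_vulnerable(term):
    # The term is reflected unescaped -- any <script> tag in it is
    # returned verbatim and runs in the victim's browser.
    return "<p>Results for: " + term + "</p>"

def search_page_escaped(term):
    # Escaping the input turns the payload into harmless text.
    return "<p>Results for: " + html.escape(term) + "</p>"

payload = "<script>alert('XSS')</script>"
print(search_page_vulnerable(payload))  # script tag reflected intact
print(search_page_escaped(payload))     # &lt;script&gt;... shown as text
```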
A8:2013 - Cross-Site Request Forgery (CSRF)
Cross-Site Request Forgery (CSRF for short) is an attack that tricks or forces an end
user into executing unwanted actions on a web application in which they are currently
authenticated (OWASP, 2001). This differs from Cross-Site Scripting in that the
attacker gets no data or feedback from the attack at all, and a user must
be authenticated for a CSRF attack to be successful.
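The attack and the usual defence can be sketched as follows; the endpoint, page and function names here are hypothetical, and the token scheme shown is the standard per-session CSRF token rather than anything specified in this project:

```python
import secrets

# Hypothetical attacker page: a logged-in victim who visits it has their
# browser auto-submit the form, with their session cookie attached, to a
# state-changing endpoint -- and no feedback ever reaches the attacker.
ATTACK_PAGE = """
<form action="http://127.0.0.1:5000/change_email" method="POST" id="f">
  <input type="hidden" name="email" value="attacker@example.com">
</form>
<script>document.getElementById('f').submit();</script>
"""

def issue_csrf_token(session):
    # The usual defence: a per-session random token embedded in every
    # legitimate form. The attacker's page cannot read it, so forged
    # requests arrive without it.
    session["csrf_token"] = secrets.token_hex(16)
    return session["csrf_token"]

def verify_csrf_token(session, submitted):
    # compare_digest avoids leaking the token through timing differences.
    return secrets.compare_digest(session.get("csrf_token", ""), submitted or "")
```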
2 Design of Software System
2.1 MoSCoW Requirements
Created by Dai Clegg, the MoSCoW method is a requirement gathering technique that
allows users to prioritise certain tasks over others. MoSCoW analysis is often advan-
tageous when working within a fixed time frame, as it reduces the risk of having an
incomplete non-functional system at the end of the project. This requirements analysis
methodology was preferred over a traditional approach, as this project has a fixed time
frame, and the project scope allows for expansion. The MoSCoW requirements were
agreed upon with Dr Oliver Buckley, the organiser of the module in which the project is
intended to be deployed, and were also informed by the analysed literature. They can be
seen in table 1.
2.2 Development Methodology
Both agile and waterfall development approaches were considered during the planning
of this project. While the structured nature of the waterfall development approach suited
the fixed requirements list given, an agile approach was preferred as an iterative devel-
opment approach fitted well with the format of the requirements list. A test-driven
development approach was finally decided upon, for the following reasons:
1. Test-driven development works efficiently in small development teams. This is
especially effective given a development team size of one.
2. It works well in parallel with a MoSCoW requirements analysis, the chosen
analysis type for this project.
3. It often leads to simpler, modular code, with better documentation and maintainability,
making it easier for the system to last after deployment.
4. Simple mistakes are caught quickly, increasing efficiency and ease of development.
Priority  Requirement
Must      Provide a testbed to allow students to pentest using simple application-layer cyber attacks
Must      Provide literature to inform and teach students the history and a step-by-step tutorial in attacking the system
Must      Provide exercises for the testbed on SQL injection and Cross Site Scripting
Should    Include a simple central website to allow students to upload progress and view tutorials
Should    Include gamification features to increase user engagement
Should    Provide exercises for Cross Site Request Forgery, Account Enumeration and Brute Force
Could     Provide exercises with less obvious vulnerabilities (sandbox environment)
Could     Track user engagement, to enable some gamification features to be more interactive
Won't     Provide exercises on attacks lower than application-layer
Table 1: MoSCoW requirements analysis, agreed upon with Dr Oliver Buckley
2.3 UML
Before development of the project began, UML diagrams of databases and Class Dia-
grams were drawn up. Using UML enabled the easy communication of design ideas,
providing a good programming reference while encouraging good, modular code with
high maintainability. The system is split into two logical parts: a local exercise
testbed on which users load exercises to pentest, and a central web server which
controls all user account data, user progress tracking, and gamification techniques.
User interaction with both systems is shown in figure 4.
Figure 4: Simple Use Case diagram showing user interaction with the system
2.4 Local Exercise Testbed
The first half of this project is a downloadable application that allows users to load
various pentesting environments. This exercise testbed will include a GUI written using
the Tkinter Python library, with a built-in web server using the Flask Python library to
host exercises. The GUI will act as an admin console, from which the user can launch
and interact with different exercises for different labs. Figure 5 shows the basic design
of the application. This half of the project can be split into three main sections: the main
application, the exercise design, and exercise-specific tools.
Figure 5: Exercise testbed for launching exercises
/
    app.py
    addExercise.py
    flaskThread.py
    userThread.py
    settings.py
    exercises/
        exercise1/
            exercise1_server.py
            templates/
            static/
            database.db
        exercise2_server/
            app.py
            templates/
            static/
            database.db
        exercise3_server/
            ...
Figure 6: Basic file structure for exercise testbed
2.4.1 Exercise Testbed GUI
The main application (figure 5) will be the user’s main window from which they can
load pentesting environments. It is split into two panels. The left panel allows users
to select exercises, with a brief description appearing at the bottom explaining what is
included in the exercise. The right panel of the application is for web server control.
A widget shows output from the exercise, for Flask server messages or simply for
debugging purposes. A small notice box displays the exercise currently loaded, below
which are buttons that allow the server to be stopped and started. The exercise testbed
uses a database to store details of exercises, such as file names and configuration
settings. This is all kept in a single table, and can be seen in figure 7. A basic layout of
the file structure can be seen in figure 6, where app.py is the main window, and
exercise1 to exercise3 hold the relevant files for each exercise.
Figure 7: Exercise Testbed GUI
Database
The main application uses two additional threads: one to handle and launch the Flask
web server responsible for running exercises, and one to emulate background user
interaction. These threads are started when the user launches an exercise from
the exercise testbed. As seen in figure 8, the launched exercise is first checked to see
if it requires background user interaction emulation (stored under selenium_thread in
the exercise testbed database, seen in figure 7). Once this is complete, a thread is launched
to manage the Flask server for the exercise, and the corresponding details (the Settings
class shown in figure 10) are sent to the thread to launch the server.
Figure 8: Sequence diagram for launching exercises from exercise testbed
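The launch sequence in figure 8 can be sketched as follows. The project uses Flask; to keep this sketch dependency-free it uses the standard library’s http.server in place of a Flask app, but the threading pattern (a daemon thread serving requests while the GUI thread stays responsive) is the same, and the function names are illustrative:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ExerciseHandler(BaseHTTPRequestHandler):
    """Stand-in for the exercise's Flask routes."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"exercise loaded")

    def log_message(self, *args):
        pass  # keep the GUI's output widget as the only log sink

def launch_exercise(port=0):
    # Port 0 asks the OS for a free port; the real testbed uses 5000.
    server = HTTPServer(("127.0.0.1", port), ExerciseHandler)
    # Daemon thread: the server dies with the main (GUI) thread, and the
    # Tkinter event loop stays responsive while the exercise runs.
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server, thread

server, thread = launch_exercise()
port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()  # the GUI's "stop" button would call this
```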
2.4.2 Exercise-specific tools
For some exercises to work as intended, it may be necessary to emulate background
user engagement. This is most prominent in Cross Site Scripting (XSS) and Cross
Site Request Forgery (CSRF) attacks, which often cannot work without a victim user
with an active session accessing a certain webpage or clicking a certain
link. This tool will be used to emulate that user interaction. When the user loads an
exercise from the exercise testbed that is documented to require this tool (stored
under selenium_thread in figure 7), a small instant-messaging style window will start, as
shown in figure 9. This messaging window mimics very simple interaction with
a fictitious character, Bob. When the user messages a hyperlink to Bob (e.g. a payload in the
case of reflective XSS), the system sends a request to the link. XSS 2 (persistent XSS)
also requires a level of user interaction, so to satisfy this need the system will emulate
general user interaction, running scripts and HTML on pages.
Figure 9: User emulation used for XSS and CSRF exercises
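The link-handling step of this emulation could be sketched as below. The real tool drives a browser (so injected JavaScript actually executes in Bob’s session); this stripped-down sketch only shows extracting hyperlinks from a message and visiting them as the victim would, with illustrative names throughout:

```python
import re

# Matches http/https URLs up to the next whitespace.
LINK_PATTERN = re.compile(r"https?://\S+")

def extract_links(message):
    """Pull every hyperlink out of a message sent to Bob."""
    return LINK_PATTERN.findall(message)

def visit_as_victim(message, fetch):
    # fetch() stands in for the browser request that carries Bob's
    # session cookie -- exactly what a reflective XSS payload needs.
    for url in extract_links(message):
        fetch(url)

visited = []
visit_as_victim("check this out: http://127.0.0.1:5000/search?q=payload",
                visited.append)
print(visited)  # ['http://127.0.0.1:5000/search?q=payload']
```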
2.4.3 Exercise Testbed Technical Documentation
Figure 10: Class Diagram for exercise testbed
2.4.4 Exercise design
Once an exercise is selected and started from the main window, the user’s default
browser will be opened to 127.0.0.1:5000 (the default location and port for Flask). The
pentesting environment is modelled around an online e-commerce store. This allows a
wide range of attacks to be possible on a platform that is often targeted in the real
world. Each exercise that is loaded from the exercise testbed loads a
different version of the e-commerce server, with a corresponding SQLite3 database.
Name Description
SQLi tutorial An introduction to SQL Injection
SQLi 1 Targeting individual records through simple SQLi
SQLi 2 Complex SQL Injection attacks explored
XSS tutorial An introduction to XSS
XSS 1 Reflective XSS attacks introduced
XSS 2 Complex persistent XSS attacks explored
CSRF tutorial An introduction to CSRF
CSRF 1 Simple CSRF attacks explored
CSRF 2 Complex CSRF attacks explored
Table 2: Exercise & corresponding environments to be implemented
Each exercise will cater to a specific attack, with three exercises being designated to
each of the most significant attacks. The module organiser is able to decide whether to
timetable lab sessions for each exercise, or set some as possible extra-curricular exer-
cises. The following table (table 2) of exercises has been selected from the MoSCoW
requirements list. SQLi, XSS and CSRF will each be split into three logical sections.
The first exercise of each attack acts as material to be used in conjunction with a
tutorial. In these sections students will learn the basics of each attack type; following
instructions written in supporting teaching material will teach users how to interact with
the system, as well as how each attack type works. The following exercises for each
attack type focus on students completing certain tasks. Each exercise will be based on
the same e-commerce shop, so that students will not have to relearn the layout of the
website for each exercise. Each exercise will therefore use the same relational database to
store related e-commerce data. Having access to the UML for this database can be of
great advantage to students completing these exercises, so it may be included in supporting
materials depending on the difficulty required. The UML for these can be seen in figure
11.
Figure 11: Database UML for exercises in exercise testbed
SQLi 1 and SQLi 2
SQLi 1 and SQLi 2 (SQL injection) exercises will feature a Flask server with multiple
unfiltered inputs, allowing users to pentest through a search function in the e-commerce
shop. SQLi 1 will centre around obtaining the usernames and passwords of users. Any
internal server errors encountered during this exercise will be displayed to users, to
aid them in the exercise. One of the users’ passwords will be a code, which will be
uploaded to the central server to update progress. SQLi 2 will involve users revealing
credit card details from user accounts in the database; however, server errors will not be
displayed to users. This will require users to execute blind SQL injection, one of the
more common forms of SQL injection.
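The character-by-character inference behind blind SQL injection can be sketched as follows, against a hypothetical in-memory stand-in for the exercise database (table and column names are invented for illustration):

```python
import sqlite3
import string

# Hypothetical stand-in for the exercise database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (number TEXT)")
conn.execute("INSERT INTO cards VALUES ('4929')")

def page_shows_results(injected):
    # Models a search page that only reveals whether the query matched;
    # errors are suppressed, as in SQLi 2.
    query = "SELECT 1 FROM cards WHERE number LIKE '%' " + injected
    try:
        return bool(conn.execute(query).fetchall())
    except sqlite3.Error:
        return False

def extract_secret(max_len=16):
    # With results and errors hidden, infer the value one character at a
    # time from the page's true/false behaviour.
    secret = ""
    for _ in range(max_len):
        for ch in string.digits:
            probe = (f"AND substr((SELECT number FROM cards), "
                     f"{len(secret) + 1}, 1) = '{ch}'")
            if page_shows_results(probe):
                secret += ch
                break
        else:
            break  # no character matched: end of the value
    return secret

print(extract_secret())  # '4929'
```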
XSS 1 and XSS 2
XSS 1 and XSS 2 exercises will involve a Flask server with various unescaped inputs,
with each exercise focusing on a different form of XSS. XSS 1 will focus on reflective XSS,
while XSS 2 will focus on persistent XSS. In XSS 1, users will attempt to steal the
session cookie data from the fictitious character Bob by embedding malicious JavaScript
on the e-commerce website. This will involve students creating their own small Flask
application to collect data sent from victims’ browsers. XSS 2 will focus on users
altering forms to capture credit card information. For any form of XSS to take place, a victim
must be actively interacting with the site. To do this, both XSS 1 and XSS 2 will utilise a tool
to emulate background user interaction, as discussed in section 2.4.2.
CSRF 1 and CSRF 2
CSRF 1 and CSRF 2 exercises will involve a Flask server with multiple state-changing
CSRF vulnerabilities. Both exercises will make use of the messaging function introduced
in XSS 1 (figure 9). CSRF 1 will focus on account-related state-changing attacks
(such as changing a user’s password or email), and CSRF 2 will focus on making
state-changing requests through the use of forms from a small Flask application made by
students. Both of these exercises will require the use of background user emulation,
discussed in section 2.4.2.
2.4.5 Integration with Central Server
As users complete exercises, they will uncover codes which they can use to submit their
progress on the central system. The codes must be easily identifiable from the rest of
the data, to ensure users are able to easily see them after completing the necessary steps
for an attack. This syntax can be seen in the example shown in figure 12.
CTF{xbeutrnshfiw}
Figure 12: Syntax of keys in the system
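Server-side, keys in this format are cheap to spot and validate; a minimal sketch follows (the exact allowed characters and length of the key body are assumptions generalised from the example above, not stated requirements):

```python
import re

# Assumed key format, generalised from the example in figure 12: a literal
# CTF{...} wrapper around a short lowercase alphanumeric body.
KEY_PATTERN = re.compile(r"CTF\{[a-z0-9]+\}")

def is_valid_key(candidate):
    # fullmatch ensures the wrapper spans the entire submitted string.
    return KEY_PATTERN.fullmatch(candidate) is not None

print(is_valid_key("CTF{xbeutrnshfiw}"))  # True
print(is_valid_key("xbeutrnshfiw"))       # False
```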
2.5 Central Website
The central server will be a web server built using the Flask Python web framework.
It will act as a hub for students to track their progress, as well as to view other students’
progress. Students will be able to log into the website over HTTPS, where they can
view supporting materials and update their current progress using codes obtained from
the attacks carried out on their exercise testbeds. The central server can be split into
five main sections: account management, viewing learning resources, gamification
features, submitting progress, and downloading the exercise testbed. Users must be
logged in with an active session to download the testbed.
Figure 13: Use Case diagram depicting user interaction with the central server
2.5.1 Account and Session Management
Users must register using their forename, surname, a username and a password, and are
advised not to reuse an existing password due to the nature of the project. Users'
passwords will be hashed using the bcrypt algorithm, due to its incorporation of a salt
and its resistance to brute-force attacks. Once a user logs in successfully, a session
will be created allowing them to navigate the website freely. The session key will be
generated using Flask's in-built hashing algorithm, and will have the Secure, HttpOnly
and SameSite (Strict) attributes to mitigate any possibility of cross-site request
forgery or session
hijacking. As per requirement 8.1.8 of the Payment Card Industry Data Security
Standard (2016), sessions on the system expire after 15 minutes of inactivity.
Although this standard is aimed at the payment card industry rather than education, it
serves as a good marker that can be adjusted if it affects the user experience.
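In the real system these attributes would be set through Flask's session configuration; the standard-library sketch below simply shows the three cookie attributes themselves on a `Set-Cookie` header (the cookie value is illustrative, not how Flask generates keys):

```python
from http.cookies import SimpleCookie

# Build a hardened session cookie: Secure (HTTPS only), HttpOnly
# (hidden from JavaScript) and SameSite=Strict (not sent on
# cross-site requests) together mitigate session hijacking and CSRF.
cookie = SimpleCookie()
cookie["session"] = "generated-session-key"  # illustrative value
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True
cookie["session"]["samesite"] = "Strict"

header = cookie.output(header="Set-Cookie:")
```

The `samesite` attribute is supported by `http.cookies` from Python 3.8 onwards; in Flask the equivalent settings are the `SESSION_COOKIE_SECURE`, `SESSION_COOKIE_HTTPONLY` and `SESSION_COOKIE_SAMESITE` configuration keys.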
Figure 14: Database UML for Central Website
2.5.2 Viewing learning resources
The central server will include resources to be used in conjunction with the
pen-testing exercises. These resources will be static pages, with content written by,
or in collaboration with, the module organiser. These pages will be available to anyone
using the site, even if they are not logged in. Downloadable versions of these pages
will also exist, allowing users to save them in PDF format for later consumption.
2.5.3 Submitting progress
Users will uncover codes while engaging with their local pen-testing environment
(explained in section 2.4.5), which they can then submit to the central website. Once a
user submits a code to the central site, the user's total points are increased by the
number of points available for that code, and their account is checked to determine
whether they qualify for any new achievements and, by extension, any new badges.
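A minimal sketch of this submission flow, assuming a hypothetical in-memory store (the `CODES` table, point values and achievement threshold are illustrative, not the system's actual schema):

```python
# Hypothetical code table: each code maps to the points it is worth.
CODES = {"CTF{xbeutrnshfiw}": 50, "CTF{abcdefgh}": 100}

def submit_code(account, code):
    """Award points for a valid, unredeemed code, then re-check
    whether the account now qualifies for any achievements."""
    points = CODES.get(code)
    if points is None or code in account["submitted"]:
        return False  # unknown code, or already redeemed
    account["submitted"].add(code)
    account["points"] += points
    check_achievements(account)
    return True

def check_achievements(account):
    # Illustrative threshold-based achievement with its matching badge.
    if account["points"] >= 100:
        account["badges"].add("master of XSS")
```

Rejecting already-redeemed codes keeps the leaderboard honest, since the same code can otherwise be replayed for points.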
2.5.4 Implementation of Gamification Features
As explored in section 1.3 of this report, there are many potential gamification
features that can be introduced. This system will implement a points-based
leaderboard, achievements and badges. The leaderboard will list all users alongside
their selected badges and corresponding scores. Badges will be coloured for emphasis
and fun. A simple example of this can be seen in figure 15.
Figure 15: A simple example of the leaderboard
Achievements can be split into two logical groups: passive and non-passive.
Non-passive achievements are awarded to a user once they have completed a goal, and
once received will remain tied to their account indefinitely. Passive achievements are
held only while a certain criterion is met. An example of this is the King
achievement, which a user retains only while they are first on the leaderboard. Once a
user unlocks an achievement, they will receive a message displaying the achievement
notice, as well as a short description and any reward points they may receive.
Badges and achievements will be implemented together, and directly relate to one
another. With every achievement that a user earns, they will receive a corresponding
badge. For passive achievements, the badge is retained only for as long as the
corresponding achievement is held. The achievements that will be implemented are
listed in table 3. While this list is bound to grow over time and through the
development stage of this system, these achievements provide a good basis to start with.
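The distinction between the two groups might be sketched as follows; the usernames and rules here are illustrative, drawn from the King and Envious examples in table 3:

```python
def award(account, achievement):
    """Non-passive: once earned, the achievement stays on the account."""
    account.setdefault("achievements", set()).add(achievement)

def passive_achievements(username, leaderboard):
    """Passive: recomputed from current state on every check, so an
    achievement is held only while its criterion is still met."""
    held = set()
    if leaderboard and leaderboard[0] == username:
        held.add("King")          # first on the leaderboard
    elif len(leaderboard) > 1 and leaderboard[1] == username:
        held.add("the Envious")   # second-highest score
    return held
```

Because passive achievements are recomputed rather than stored, a user loses King the moment someone overtakes them, with no extra bookkeeping needed.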
2.6 Changes to Design
As development continued, it became apparent that a change in requirements was
desirable. After careful consideration, it was decided that a framework for adding
custom exercises would be preferable to a system with a fixed number of exercises. If
the curriculum changes, more vulnerabilities need to be covered, or some
vulnerabilities are taught differently, a user can add their own Flask application
(exercise) to the system. This requires an extra UI tool, to ensure that adding
exercises is quick and simple, and that if any discrepancies are found in the
configuration, the user knows what they are. The revised MoSCoW requirements list can
be seen in table 4. Another addition to the MoSCoW requirements list was the inclusion
of a reset database button on the exercise testbed GUI. This allows users to roll back
the database of an exercise to its initial form if irreversible changes have been made
to it. This gives users greater freedom when completing exercises, as they do not need
to worry about breaking the database.
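One straightforward way to implement such a rollback is to keep a pristine copy of each exercise database and restore it on demand. A minimal sketch, assuming file-based (e.g. SQLite) databases; the filenames `initial.db` and `exercise.db` are assumptions for illustration:

```python
import shutil
from pathlib import Path

def reset_exercise_database(exercise_dir):
    """Roll an exercise database back to its initial form by restoring
    the pristine copy shipped alongside it, overwriting whatever state
    the student's attacks have left behind."""
    exercise_dir = Path(exercise_dir)
    shutil.copyfile(exercise_dir / "initial.db",
                    exercise_dir / "exercise.db")
```

Since the pristine copy is never opened by the exercises themselves, no student action can corrupt it, so the reset button always works.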
Achievements and Badges available to students

Achievement                     | Passive | Short Desc                                | Reward | Badge/title
You suck!                       | N       | Have a negative score                     | 0      | the Jaest
Awesome! Now try the Sandbox    | N       | Complete all XSS tasks                    | 100    | master of XSS
Awesome! Now try the Sandbox    | N       | Complete all SQLi tasks                   | 100    | master of SQLi
Awesome! Now try the Sandbox    | N       | Complete all CSRF tasks                   | 100    | master of CSRF
N/A                             | Y       | Have the highest score                    | 0      | King
If at first you don't succeed   | N       | Submit more than 5 wrong codes            | -50    | brute forcer
N/A                             | Y       | Never submit a wrong code                 | 0      | the Perfect
N/A                             | Y       | (default badge)                           | 0      | the Feeble
Your actions have been reported | N       | Attempt to attack the central server      | -2000  | the Naughty
N/A                             | Y       | Have the second highest score             | 0      | the Envious
You have beaten the game        | N       | Complete all extra-curricular activities  | 100    | the Extra-curricular

Table 3: Simple examples of achievements that will be added to the system
Priority | Requirement
Must     | Provide a testbed to allow students to pentest using simple application-layer cyber attacks
Must     | Provide supporting material to show how to use the system
Must     | Provide exercises for the testbed on SQL injection and Cross Site Scripting
Should   | Provide functionality to add custom exercises
Should   | Include a simple central website to allow students to upload progress and view tutorials
Should   | Include gamification features to increase user engagement
Should   | Provide exercises for Cross Site Request Forgery
Could    | Allow functionality for custom exercises to use the Selenium window
Could    | Allow users to reset exercise databases
Could    | Provide exercises with less obvious vulnerabilities (sandbox environment)
Could    | Track user engagement, to enable some gamification features to be more interactive
Could    | Provide exercises for Account Enumeration and Brute Force
Won't    | Provide exercises on attacks lower than application-layer

Table 4: Revised MoSCoW requirements analysis
3 System Implementation
3.1 Exercise Testbed
Figure 16 shows the final version of the exercise testbed. It includes all of the
relevant requirements, excluding exercises covering account enumeration and
brute-force attacks. This graphical user interface closely follows its design
document, though it includes additional functionality to accommodate the adjusted
requirements list.
Figure 16: Exercise Testbed
3.2 User Emulation
Unlike the exercise testbed, the attached message box (pictured in figure 17)
underwent some minor alterations. In the original design, it was intended to follow
links both to pages embedded with XSS code and to pages mounting CSRF attacks.
Although this was possible for simple GET links, more complex attacks that involved
filling out HTML forms were both resource-intensive and broke the functionality of the
website: it was found impracticable to follow a link and then search for and submit
every form on that page.
Functionality was instead added so that the attacker indicates which form the emulated
user should fill out. While in the real world a cyber criminal does not tell a victim
the ID of a form to fill out (success is often a matter of luck), this was found to be
the most practical implementation for this system.
Figure 17: User Emulation Window for Exercise Testbed
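In the real system the emulated user is driven through the Selenium library; the standard-library sketch below shows only the form-selection step, locating the form whose ID the attacker supplied (the HTML and function names are illustrative):

```python
from html.parser import HTMLParser

class FormFinder(HTMLParser):
    """Find the <form> with a given id, mirroring how the emulated user
    is told which form on a page to fill out and submit."""
    def __init__(self, form_id):
        super().__init__()
        self.form_id = form_id
        self.action = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and attrs.get("id") == self.form_id:
            self.action = attrs.get("action")

def find_form_action(html, form_id):
    """Return the action URL of the requested form, or None if absent."""
    finder = FormFinder(form_id)
    finder.feed(html)
    return finder.action
```

Selecting a single form by ID is what avoids the original design's problem of blindly submitting every form on the page.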
3.3 Adding new exercises
As previously mentioned, another design change was adding functionality to allow users
to add custom exercises. The final graphical user interface design can be seen in
figure 19. While not completely seamless, this simple tool ensures users do not need
to manually change the exercise testbed database and risk misconfiguring the existing
testbed. It also enables users to see configuration errors in real time, as showcased
in figure 18.
Figure 18: Example error message created by the new exercise GUI
Figure 19: GUI for adding custom exercises to the exercise testbed
3.4 Central Website
The latter part of this project focussed on gamification. As explored in the design,
the intention was a website that allowed users to log in and submit codes found in
exercises to receive points, which placed them on a leaderboard. This design was
closely followed, and the leaderboard seen in figure 21 closely follows the original
design. As previously discussed, both achievements and badges were implemented in the
final release. Users receive achievements, and the matching badges, as they enter
uncovered codes. Once a user has received a badge they are able to change it at any
time, and it will appear on the leaderboard for public viewing.
Figure 20: Centrally managed website for tracking user progress, not logged in
Figure 21: Centrally managed website for tracking user progress, logged in
4 Discussion, evaluation and conclusion
4.1 Testing and project success evaluation
Testing this system is difficult, as accurately quantifying the project's complete
success requires full deployment, something not possible until the related module
begins. Smaller unit testing is possible, however, and although it does not evaluate
the exact success of the project, comparing the final system against the MoSCoW
requirements list provides a good measure of the system's success. The system fulfils
all requirements listed under Must (requirements from the revised requirements list,
as seen in table 4). Evidence of these requirements can be seen in section 3. As
listed under the Should requirements, the system allows users to add and modify
exercises on the testbed, as well as providing a central website for gamification and
progress tracking. The final Should requirement is also satisfied, as the exercise
testbed includes exercises on SQL injection, XSS and CSRF. From the Could
requirements, the system includes a solution for emulating user interaction with a web
server to aid in XSS and CSRF exercises, as well as the ability to reset the database
used by an exercise.
4.2 Limitations
Despite the success of this system, its design does have some limitations. Although
the system is written in Python, a language with a large selection of built-in
libraries, it requires extra libraries to be installed for it to function. The web
server used by exercises uses the Flask Python library, and background user emulation
is completed using the Selenium library. Having users install these libraries on their
own machines is often error-prone, especially when they are new to the field of
computer science. To mitigate this, material has been provided to guide users through
the installation process. Another limitation lies with the Python language itself. The
exercise testbed utilises three threads: one running the local Flask server, one
handling the application GUI, and one handling background user emulation. These three
threads all working simultaneously can be resource intensive, so the graphical user
interface of the exercise testbed occasionally slows down.
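The three-thread layout can be sketched as follows; the function bodies are placeholders standing in for the testbed's real Flask server, GUI event loop and Selenium emulation:

```python
import threading

started = []

def run_flask_server():
    # Placeholder for the thread serving the local Flask exercises.
    started.append("flask")

def run_user_emulation():
    # Placeholder for the Selenium background user-emulation thread.
    started.append("emulation")

# Background work runs on daemon threads so that closing the GUI
# (which occupies the main thread) shuts the whole testbed down.
workers = [threading.Thread(target=run_flask_server, daemon=True),
           threading.Thread(target=run_user_emulation, daemon=True)]
for worker in workers:
    worker.start()
for worker in workers:
    worker.join()  # in the real testbed, the GUI main loop runs here
```

Because CPython's global interpreter lock serialises Python bytecode execution, three busy threads share one core's worth of interpreter time, which is consistent with the occasional GUI slowdown described above.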
4.3 Future Work
There is a wide range of possible further development for this project. The exercise
testbed includes exercises focussing on SQL injection, Cross Site Scripting and Cross
Site Request Forgery; as suggested in the revised MoSCoW requirements list, this could
be extended to include account enumeration and brute-force attacks. The exercise
testbed need not be limited to application-layer attacks either: session sniffing
through man-in-the-middle attacks would introduce users to network-layer attacks.
Another area for future development is an exercise hosted on a global server. This
could allow multiple users to conduct simultaneous attacks on the same server,
increasing the types and complexity of possible attacks, as well as introducing a
strong element of competition.
4.4 Conclusion
In an information era where cyber security education is becoming more prevalent by the
week, a greater demand is constantly being put on educational materials to aid in the
training of the next generation of cyber professionals. The implementation of gamifica-
tion in education, although not a new idea, is gaining popularity due to the connectivity
brought by the world wide web. This project sought to implement a cyber security
testbed to aid education in application-layer attacks, using gamification to increase its
effectiveness. By completing all Must and Should requirements, as well as two Could
requirements, from the MoSCoW list, this project was able to fulfil this aim, while
still providing the code maintainability needed for future work to be easily implemented.
References
Benzel, T. (2011). The science of cyber security experimentation: the deter project. In
Proceedings of the 27th Annual Computer Security Applications Conference, pages
137–148. ACM.
Beuran, R., Pham, C., Tang, D., Chinen, K.-i., Tan, Y., and Shinoda, Y. (2017). Cytrone:
An integrated cybersecurity training framework.
Cheung, R. S., Cohen, J. P., Lo, H. Z., and Elia, F. (2011). Challenge based learning in
cybersecurity education. In Proceedings of the International Conference on Security
and Management (SAM), page 1. The Steering Committee of The World Congress in
Computer Science, Computer . . . .
Conklin, W. A., Cline, R. E., and Roosa, T. (2014). Re-engineering cybersecurity edu-
cation in the us: An analysis of the critical factors. In 2014 47th Hawaii International
Conference on System Sciences, pages 2006–2014.
CTF365 (2012). Ctf365. https://ctf365.com/. Accessed: 2018-12-01.
Davis, C., Tate, J., Okhravi, H., Grier, C., Overbye, T., and Nicol, D. (2006). Scada
cyber security testbed development. In 2006 38th North American Power Symposium,
pages 483–488. IEEE.
DEFCON (1996). Defcon. https://www.defcon.org/html/defcon-26/dc-26-ctf.html. Accessed:
2018-12-01.
Emulab (2002). Emulab. www.emulab.net/. Accessed: 2019-05-01.
Frank, M., Leitner, M., and Pahi, T. (2017). Design considerations for cy-
ber security testbeds: A case study on a cyber security testbed for education.
In 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Comput-
ing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf
on Big Data Intelligence and Computing and Cyber Science and Technology
Congress(DASC/PiCom/DataCom/CyberSciTech), pages 38–46.
GISWS (2017). 2017 global information security workforce study.
Google (2016). googlectf. https://capturetheflag.withgoogle.com/. Accessed: 2018-12-01.
HM Government (2015). A guide to programmes and resources for schools and further
education. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/
attachment_data/file/410221/bis-15-77-Guide-to-cyber-security-schools-programmes-and-resources.pdf.
Jovi Umawing, M. (2018). Engaging students in cybersecurity: a primer for educators.
Lloyd, V. (2014). A brief history of gamification. https://www.thehrdirector.com/
features/gamification/a-brief-history-of-gamification/.
Moodle (2002). Moodle. https://www.howtomoodle.com/. Accessed: 2019-05-01.
Orland, K. (2010). Gamification summit 2011 announced.
OWASP (2001). Owasp top 10 2017. https://www.owasp.org/index.php/Top_10-2017_Top_10.
Accessed: 2018-12-01.
PCI Security Standards Council (2016). The prioritized approach to pursue pci dss
compliance.
The White House (2008). The comprehensive national cybersecurity initiative.
https://obamawhitehouse.archives.gov/issues/foreign-policy/cybersecurity/national-initiative.
Accessed: 2019-05-01.
Vaibhav, A. and Gupta, P. (2014). Gamification of moocs for increasing user engage-
ment. In 2014 IEEE International Conference on MOOC, Innovation and Technology
in Education (MITE), pages 290–295.