Linköping University | Department of Computer and Information Science
Master thesis, 30 ECTS | Datateknik
2018 | LIU-IDA/LITH-EX-A--18/012--SE

Penetration testing for the inexperienced ethical hacker
A baseline methodology for detecting and mitigating web application vulnerabilities

Penetrationstestning för den oerfarne etiska hackaren
En gedigen grundmetodologi för detektering och mitigering av sårbarheter i webbapplikationer

Per Lindquist
Henrik Ottosson

Supervisor: Jan Karlsson, Ulf Kargén
Examiner: Nahid Shahmehri

Linköpings universitet, SE-581 83 Linköping, +46 13 28 10 00, www.liu.se




Upphovsrätt

Detta dokument hålls tillgängligt på Internet – eller dess framtida ersättare – under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår. Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art. Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart. För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Per Lindquist, Henrik Ottosson


Abstract

Having a proper method of defense against attacks is crucial for web applications to ensure the safety of both the application itself and its users. Penetration testing (or ethical hacking) has long been one of the primary methods to detect vulnerabilities against such attacks, but is costly and requires considerable ability and knowledge. As this expertise remains largely individual and undocumented, the industry remains dependent on individual experts. A lack of comprehensive methodologies at levels that are accessible to inexperienced ethical hackers is clearly observable.

While attempts at automating the process have yielded some results, automated tools are often specific to certain types of flaws, and lack contextual flexibility. A clear, simple and comprehensive methodology using automatic vulnerability scanners complemented by manual methods is therefore necessary to get a basic level of security across the entirety of a web application. This master's thesis describes the construction of such a methodology.

In order to define the requirements of the methodology, a literature study was performed to identify the types of vulnerabilities most critical to web applications, and the applicability of automated tools for each of them. These tools were tested against various existing applications, both intentionally vulnerable ones, and ones that were intended to be secure.

The methodology was constructed as a four-step process: Manual Review, Testing, Risk Analysis, and Reporting. Further, the testing step was defined as an iterative process in three parts: Tool/Method Selection, Vulnerability Testing, and Verification. In order to verify the sufficiency of the methodology, it was subjected to peer review and field experiments.


Sammanfattning

Att ha en gedigen metodologi för att försvara mot attacker är avgörande för att upprätthålla säkerheten i webbapplikationer, både vad gäller applikationen själv och dess användare. Penetrationstestning (eller etisk hacking) har länge varit en av de främsta metoderna för att upptäcka sårbarheter mot sådana attacker, men det är kostsamt och kräver stor personlig förmåga och kunskap. Eftersom denna expertis förblir i stor utsträckning individuell och odokumenterad, fortsätter industrin vara baserad på expertis. En brist på omfattande metodiker på nivåer som är tillgängliga för oerfarna etiska hackare är tydligt observerbar.

Även om försök att automatisera processen har givit visst resultat är automatiserade verktyg ofta specifika för vissa typer av sårbarheter och lider av bristande flexibilitet. En tydlig, enkel och övergripande metodik som använder sig av automatiska sårbarhetsverktyg och kompletterande manuella metoder är därför nödvändig för att få till en grundläggande och heltäckande säkerhetsnivå. Denna masteruppsats beskriver konstruktionen av en sådan metodik.

För att definiera metodologin genomfördes en litteraturstudie för att identifiera de typer av sårbarheter som är mest kritiska för webbapplikationer, samt tillämpligheten av automatiserade verktyg för var och en av dessa sårbarhetstyper. Verktygen i fråga testades mot olika befintliga applikationer, både mot avsiktligt sårbara, och sådana som var utvecklade med syfte att vara säkra.

Metodiken konstruerades som en fyrstegsprocess: manuell granskning, sårbarhetstestning, riskanalys och rapportering. Vidare definierades sårbarhetstestningen som en iterativ process i tre delar: val av verktyg och metoder, sårbarhetsprovning och sårbarhetsverifiering. För att verifiera metodens tillräcklighet användes metoder såsom peer-review och fältexperiment.


Acknowledgments

The authors of this thesis would like to make the following acknowledgements:

Ulf Kargén, thank you for great academic supervision and excellence in answering our questions during our thesis work.

Jan Karlsson and Secure State Cyber, thank you for great support and for making it possible to complete this work.

Peter Österberg, Staffan Huslid and Jonas Lejon, thank you for your intelligent answers during our industry and expert interviews. Your insight into this field helped us tremendously.

Edward Nsolo, thank you for bringing constructive criticism and knowledge by proofreading and questioning.

Lastly, the authors would like to thank each other for great support, hard work and excellent company.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
   1.1 Motivation
   1.2 Aim
   1.3 Method Overview
   1.4 Research questions
   1.5 Delimitations
   1.6 Secure State Cyber

2 Notable Vulnerabilities
   2.1 National Vulnerability Database (NVD)
   2.2 Open Web Application Security Project (OWASP) Top 10
      1 - Injection (I)
      2 - Broken Authentication (BA)
      3 - Sensitive Data Exposure (SDE)
      4 - XML External Entities (XXE)
      5 - Broken Access Control (BAC)
      6 - Security Misconfiguration (SM)
      7 - Cross-Site Scripting (XSS)
      8 - Insecure Deserialization (ID)
      9 - Using Components with Known Vulnerabilities (UCwKV)
      10 - Insufficient Logging and Monitoring (IL&M)

3 Penetration Testing
   3.1 Information Gathering
   3.2 Vulnerability Testing
   3.3 Risk Assessment
   3.4 Reporting

4 Existing Tools, Methods and Environments
   4.1 Methods of Analysis
   4.2 Vulnerability scanning requirements
      FedRAMP
      Experienced Penetration Testers
   4.3 Existing Tools
      Zed Attack Proxy
      Burp Suite Community Edition
      Sqlmap
      IronWASP
      Vega
      Arachni
      Wapiti
   4.4 Related Works: Tools being tested against special environments
   4.5 Existing Methods
      Manual Crawling
      Logical Flaws
      Vulnerability Verification
   4.6 Existing Environments
      DVWA
      OWASP Bricks
      WACKOPICKO
      Peruggia
      OrangeHRM
      Bila Nu & Presentshoppen

5 Method
   5.1 Literature study
      Interviews
   5.2 Requirements
   5.3 Selection, Testing and Evaluation of Tools and Features
   5.4 Comprehensive Methodology
   5.5 Design of Interactive Algorithm
   5.6 Verification of Comprehensive Methodology
      Empirical Validation
      Peer-Review
      Field Experiment

6 Results
   6.1 Literature Study
   6.2 Requirements
   6.3 Selection, Testing and Evaluation of Tools and Features
      VEGA
      Zed Attack Proxy
      Arachni
      Sqlmap
      IronWASP
      Wapiti
      BURPsuite Community Edition (CE)
      Non-viable Tools
   6.4 Comprehensive Methodology
      Manual Review
      Tools and Method Selection
      Vulnerability Testing and Verification
      Risk Analysis
      Reporting
   6.5 Design of Interactive Algorithm
   6.6 Verification of Comprehensive Methodology
      Peer-Review
      Empirical Validation
      Field Experiment

7 Discussion
   7.1 Results
      Literature Study and Sources
      Interviews
      Requirements
      Selection, Testing and Evaluation of Tools and Features
      Comprehensive Methodology
      Design of Interactive Algorithm
      Verification of Comprehensive Methodology
   7.2 Method
      Literature Study and Sources
      Requirements
      Selection, Testing and Evaluation of Tools and Features
      Comprehensive Methodology
      Design of Interactive Algorithm
      Verification of Comprehensive Methodology
   7.3 The work in a wider context
      Existing Methodology
      Ethical Aspect: The potential for malicious use
      Societal Aspect: Increasing ease of penetration testing
      Societal Aspect: Increasing Cybersecurity Threats

8 Conclusion
   8.1 Main Contributions
   8.2 Research Question Summary
   8.3 Future Work

A Appendix 1 - Vulnerability Scanner evaluation

B Appendix 2 - Interviews with industry experts
   Interview with Peter Österberg, 2018-02-19
   Interview with Staffan Huslid, 2018-02-26
   Interview with Jonas Lejon, 2018-03-21

C Appendix 3 - OrangeHRM Field Experiment
   Manual Review
   Tools and Method Selection
   Vulnerability testing and Verification
   Risk Analysis
   Reporting

Bibliography


List of Figures

3.1 Risk Severity
6.1 Comprehensive Methodology


List of Tables

2.1 Log States
4.1 Scanners and OWASP top 10 vulnerability mapping as claimed by each scanner's documentation
4.2 Environment-Vulnerability mapping
6.1 Tools and their respective fulfillment of requirements
A.1 Initial testing
A.2 Wackopicko
A.3 DVWA
A.4 Peruggia


1 Introduction

The digital age has resulted in a considerable number of practical web applications, giving users access to large quantities of information and functionality. With this increase of availability, the concerns about application security increase accordingly, as concluded by Post et al. [1] in 1991, before many of the concepts behind the modern web application had been introduced. Ease of access to an application could be considered inversely correlated to application security, in the sense that an increase in the exposure of an application is likely to create new attack vectors [2].

Still, organizations tend to mainly design their applications to be as user-friendly, accessible and visually interesting as possible, which may well result in the introduction of new security flaws, as pointed out by professional penetration tester Peter Österberg [2] in an interview conducted as part of this thesis. Accordingly, Jang et al. [3] reported findings on 12 practical attacks that they claim were made possible specifically by the introduction of certain accessibility features in modern operating systems. As attackers work quickly to discover and exploit such vulnerabilities, owners of web applications need to act even faster towards detecting and resolving those same vulnerabilities. With the speed at which the situation evolves, David Talbot [4] from the MIT Technology Review notes that organizations wishing to protect their sensitive data can find it increasingly hard to even determine the level of security they have achieved at any given time.

Of course, there are professionals available to assist with such matters. There are two distinct titles within the industry that describe the concept of non-malicious hacking: the title of ethical hacker¹ and the title of penetration tester. Universities²,³ and other organizations⁴ have created education programs to meet the increasing market demands for this type of professional [2].

Moreover, there are organizations dedicated to the structural analysis of common vulnerabilities and attack vectors, such as the non-profit Open Web Application Security Project (OWASP). OWASP maintains a Top 10 list of the security issues currently considered the most notable, such as Injections, Cross-Site Scripting and Sensitive Data Exposure [5].

1. https://en.oxforddictionaries.com/definition/ethical_hacker
2. EP3370: https://www.kth.se/student/kurser/kurs/EP3370?l=en
3. EN2720: https://www.kth.se/student/kurser/kurs/EN2720?l=en
4. EC-Council: https://www.eccouncil.org/programs/certified-ethical-hacker-ceh/


Due to the considerable variation of potential attacks, purely manual testing tends to become unwieldy, especially in larger applications, as noted by Kupsch et al. [6]. Purely automated testing, on the other hand, has its own set of issues, regardless of the specific field in which it is applied. While effective as a time saving effort in largely unchanging and repetitive situations, automated systems tend to lack sufficient flexibility, which creates the need for skilled operators to manually verify the results [6].

Unfortunately, skilled operators may soon be in short supply, given the increased market demands [7], [8]. As the penetration testing profession remains largely individualistic and experience-driven, testers generally choose their own tools and work according to their own methods [9], which causes an almost complete lack of documented general standards, both regarding methodology and results. Due to this, an upcoming generational shift may lead to knowledge loss among penetration testers, as senior experience is not sufficiently transferred to juniors [2].

1.1 Motivation

While penetration testing is an established field, the expertise remains largely undocumented. Studies have been performed regarding specific (or closely coupled) vulnerabilities and how to neutralize them, but no comprehensive methodology appears to have been published regarding the combination of the existing automated tools with manual penetration testing. In order to allow inexperienced testers to perform at acceptable levels, comprehensive standard methodologies, upon which expertise and specialization may be built, must be introduced.

1.2 Aim

This master's thesis is expected to result in a methodology for web application security analysis, including automated and manual tools and methods, covering common critical flaws, as per industry definitions. In order to enhance the credibility of the methodology, it will be verified by security experts.

1.3 Method Overview

The general approach will be as follows:

1. Perform a literature study of common attack vectors and their impacts, based on established industry data.

2. Create a set of requirements for complete vulnerability detection and mitigation services, including both automated and manual tools and methods.

3. Research and investigate currently existing automated services and manual methods for performing vulnerability analysis of web applications.

4. Based on the identified automated services and manual methods, develop a comprehensive methodology that could be used to get a full overview of critical vulnerabilities, including recommended mitigating measures.

5. Verify the validity of the solution.

1.4 Research questions

This master’s thesis is intended to answer the following questions:

• What notable vulnerabilities are most critical for web applications?


• For what types of vulnerabilities are existing automated scanners suitable to use?

• What manual methods are needed to complement the automated services to develop a comprehensive methodology?

• How can the effectiveness of the methodology be verified?

1.5 Delimitations

This master's thesis will not involve the construction of any new vulnerability scanners or specific methods, nor concern any matters of design and development of new web applications, nor will it concern matters of information security outside of web application boundaries, such as social engineering or physical security.

The methodology presented in the Master's thesis is intended for use by knowledgeable but inexperienced penetration testers, and will therefore require basic information security and penetration testing knowledge. It will not go into technical detail regarding the use of any tools or methods included within it.

1.6 Secure State Cyber

Secure State Cyber⁵, for which this thesis work was performed, specializes in cyber security and information security. They offer certified cybersecurity services based on the customer's needs and conditions. They help customers solve complex issues in a structured way throughout all stages of the cybersecurity process: to identify, protect, detect, respond and recover.

Secure State Cyber was founded in 2005 and operates from their headquarters in Sweden. At the beginning of 2018, they had roughly 20 employees.

5. https://securestate.se/en


2 Notable Vulnerabilities

There are many potential threats against web applications and many ways in which they can be important to consider, be it due to high frequency, high potential harm, or the ease of acting them out. In order to provide a comprehensive methodology for detection and mitigation, the most critical and common threats must be defined. In this chapter, organizations devoted to the documentation and ranking of vulnerability types are introduced. Descriptions of these vulnerabilities, and ways they can be specifically counteracted, are also detailed.

2.1 National Vulnerability Database (NVD)

The Department of Homeland Security's National Cyber Security Division¹ has sponsored an open-source database for common vulnerabilities. It is based on predefined standards for identification (Common Vulnerabilities and Exposures (CVE)) and rating (Common Vulnerability Scoring System (CVSS)).

These two standards have been used to pioneer other vulnerability databases. CVE is maintained by the non-profit Mitre Corporation², while CVSS is an open industry standard³.

CVE⁴ is a list of known vulnerabilities. Each entry details the specific technical vulnerability, a practical description of how the vulnerability can be exploited, and references to relevant documentation. Each such vulnerability is given a CVE number and added to the list when reported. The primary purpose of this is to allow independent security organizations and tools to have a common reference to specific issues, removing the need for extensive internal databases, and easing communication between entities which may each have given the same issue a unique designation. While extensive and detailed, the CVE library is generally of limited use for web application security assessment, as such systems are rarely prominent or large enough to warrant inclusion [2].
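Since CVE numbers serve as this common reference, tools frequently need to recognize and normalize them. A small illustrative sketch of matching the standard CVE-YYYY-NNNN identifier format (the sequence part may exceed four digits; the example report text is invented):

```python
import re

# CVE identifiers look like CVE-2017-5638: a four-digit year followed by
# a sequence number of four or more digits.
CVE_PATTERN = re.compile(r"\bCVE-(\d{4})-(\d{4,})\b", re.IGNORECASE)

def extract_cves(text):
    """Pull normalized CVE identifiers out of free text, e.g. a scan report."""
    return [f"CVE-{year}-{seq}" for year, seq in CVE_PATTERN.findall(text)]

report = "The host is affected by cve-2017-5638 and CVE-2014-0160."
print(extract_cves(report))  # ['CVE-2017-5638', 'CVE-2014-0160']
```

Normalizing identifiers this way lets independent tools correlate findings against the same database entries, which is exactly the purpose of the shared CVE numbering described above.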

1. NIST, National Vulnerability Database: https://www.nist.gov/programs-projects/national-vulnerability-database-nvd
2. Mitre Corporation: https://mitre.org/
3. CVSS: https://nvd.nist.gov/vuln-metrics/cvss
4. CVE: http://cve.mitre.org/


In turn, CVSS⁵ provides a way to calculate a vulnerability's severity numerically, based on a number of factors.
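To make the calculation concrete, the following is a sketch of the CVSS v3.0 base-score equations for the Scope: Unchanged case, with metric weights taken from the public specification. It covers only a subset of the standard (Scope: Changed uses different formulas and weights) and is illustrative rather than a complete implementation:

```python
import math

# Metric weights from the CVSS v3.0 specification (Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

def roundup(x):
    """Round up to one decimal place, as the specification requires."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Impact sub-score: combined loss of confidentiality, integrity, availability.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A network-reachable, low-complexity flaw needing no privileges or user
# interaction, with high impact across the board, scores 9.8 (Critical).
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

The factors the formula combines are exactly the "number of factors" mentioned above: how reachable and easy the flaw is to exploit, and how badly it harms confidentiality, integrity and availability.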

2.2 Open Web Application Security Project (OWASP) Top 10

The OWASP Top 10 list of critical risks, which according to penetration tester Peter Österberg [2] covers 95-99% of actually occurring vulnerabilities, was updated in November of 2017⁶. Notable changes from the 2013 version⁷ include the introduction of three new issues: XML External Entities, Insecure Deserialization and Insufficient Logging and Monitoring [5]. The current list is as follows:

1 - Injection (I)

Injection flaws generally allow attackers to submit data which is interpreted as commands to be executed by the system, but may also be used in attempts to overflow buffers or cause other harm to a system [5]. Depending on the design of the system, injection flaws include various types, such as SQL, XPath and LDAP injection [10]. Such attacks are suitable to launch across networks and are often used to gain access or create insertion vectors for various types of malware or further attacks [11].

Countermeasures and detection

There are established methods of protection against injection type attacks, including parameterizing statements and validating input [10], [12]. By constructing limitations of what is accepted as input, the potential for harmful content being processed is limited. Such validation can also be achieved with the use of whitelisting, accepting only input which follows an established set of rules with regards to traits such as content and size.
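As an illustration of both techniques, the following is a minimal sketch using Python's standard-library sqlite3 module; the table name, schema, and whitelist rules are hypothetical and chosen only for demonstration:

```python
import sqlite3

# In-memory demo database; the schema and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Attacker-controlled input attempting a classic injection.
user_input = "alice' OR '1'='1"

# Parameterized query: the driver binds the value as data, so the
# payload is never interpreted as SQL syntax.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

# Whitelist-style validation as a complementary layer: accept only
# short alphanumeric names before a query is even constructed.
def is_valid_username(name: str) -> bool:
    return name.isalnum() and 1 <= len(name) <= 32
```

With the parameterized query, the injection payload matches no row, and the whitelist rejects it before any query is built at all.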

Other methods focus less on the rejection of suspicious input, opting instead for preventing the system from executing the received instructions, such as the one proposed by Gaurav et al. [11], which employs the use of randomized instruction sets. This method is proclaimed to protect against any type of injection, by making the executable code unique to the particular environment, preventing attackers from providing accepted instructions to the system without first having access to the randomization key.

While manual penetration testing against injection flaws is entirely possible, attacks of this kind are easiest to perform using automated tools, as confirmed by the proprietary web vulnerability scanner vendor Netsparker [13].

2 - Broken Authentication (BA)

Broken Authentication refers to an event where an attacker successfully deduces the credentials of other users through any of a multitude of methods, such as by stealing keys, session tokens or passwords [5]. Often, simple password deduction is achieved by brute-forcing log-in credentials using external tools in combination with known vulnerabilities and/or dictionaries of common passwords [14], bypassing the need to actually intercept any communication.

Countermeasures and detection

Huluka et al. [15] have researched the root causes of broken authentication. They have identified some causes, including lack of security knowledge, mixing of confidential data with non-confidential data, and the use of in-house developed functions instead of well tested

5 CVSS: https://www.first.org/cvss/
6 OWASP Top 10 2017: https://www.owasp.org/images/7/72/OWASP_Top_10-2017_(en).pdf.pdf
7 OWASP Top 10 2013: https://www.owasp.org/images/f/f8/OWASP_Top_10_-_2013.pdf


external alternatives. Strong and secure defence mechanisms can be very costly and cause inconveniences for both owners and users, but the consequences can be severe if they are not implemented. Suggestions on what a proper defence mechanism should include are: validation to ensure that users use strong passwords, avoiding the use of session IDs in URLs, using image recognition tasks to detect non-human behavior, and not giving attackers useful information in error messages [15], [16]. In addition to this, broken authentication also concerns how passwords are stored, whether a good hashing algorithm is being used, and if the passwords are salted [13].

Automated scanners are able to find some divergence from safe patterns for some of these mechanisms, such as session IDs or passwords being sent in clear text, but many of the vulnerabilities require logical analysis to discover, based on behavior which cannot be detected/verified by an automated scanner [13]. As such, while automated tools are able to discover some blatantly insecure circumstances, they cannot be relied upon to provide sufficient security on their own, requiring additional manual analysis.

3 - Sensitive Data Exposure (SDE)

Without specific protection, sensitive data is at risk of interception. Such data may include financial details and health records, possibly allowing for a variety of further crimes if stolen [5]. According to Risk Based Security [17], the yearly number of exposed records saw a marked increase in the early 2010s, most notably between 2012 and 2013 (a 212% increase).

Countermeasures and detection

The potential exposure of sensitive data is an important aspect of any data handling system. Poorly encrypted or even unencrypted data is at great risk from a number of types of attack. In order to prevent such issues, some basic methods exist [5]: Firstly, sensitive data should be stored to as small an extent as possible, depending on the purpose of the application in question. If the data is not stored, it cannot be stolen from the repository. Secondly, any data that is stored must be sufficiently encrypted. Thirdly, to prevent interception attacks, any sensitive data must also be sufficiently encrypted during transit.

Apart from those types of data which are required by applicable law to be especially protected, such as social security numbers, credit card details and user credentials, exactly what constitutes sensitive data is highly subjective, and depends on each individual stakeholder. While automated scanners are able to scan for patterns matching legally protected structures, the subjectivity of the matter makes automation a very limited solution [13]. In cases where known sensitive data is stored, transmitted, or displayed, policies are likely already in place. It is therefore much simpler to review documentation and practices manually, rather than attempting to automate the process [10].

4 - XML External Entities (XXE)

Incorrectly configured XML processors may allow for external actors to access internal files and functionality, resulting in the potential of backdoors being placed to enable further attacks [5]. Additionally, sensitive data may be exposed, and there is potential for DOS attacks, a risk made more prevalent due to this vulnerability remaining present in the default configurations of many current XML parsers [18].

Countermeasures and detection

The vulnerability of XXE consists of the possibility for a malicious user to cause a variety of issues by supplying the parser with a reference to an external entity. These vulnerabilities may exist in any parser, including those sold by high-profile vendors [19]. Any customer of parsers should therefore pay attention to known exploits and weaknesses continuously, as


the situation may well change over time. As this type of vulnerability is highly technical, automated scanners can detect these types of vulnerabilities with high accuracy [13].

Notably, removing the threat of external entities is often as simple as configuring the parser to not accept them [18], which will often have the additional benefit of preventing many other types of malicious XML injection attacks (such as the Billion Laughs DOS attack). During a manual assessment, it is often worth questioning whether the XML parsing is actually providing any key business value, and removing the threat entirely if possible.
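As one concrete example of such configuration, Python's standard-library SAX parser exposes feature flags for external entity resolution; this sketch disables them (hardened libraries such as defusedxml exist as an alternative, but are not assumed here):

```python
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

def make_hardened_parser():
    """Create a stdlib SAX parser with external entity resolution disabled."""
    parser = xml.sax.make_parser()
    # Refuse external general entities (the classic XXE file/URL disclosure)...
    parser.setFeature(feature_external_ges, False)
    # ...and external parameter entities (used in blind XXE variants).
    parser.setFeature(feature_external_pes, False)
    return parser

parser = make_hardened_parser()
```

Equivalent switches exist in most XML libraries; the general principle is to disable DTD/entity processing unless it is explicitly required.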

5 - Broken Access Control (BAC)

Broken Access Control occurs when non-authorized users or attackers are able to gain access to data by tampering with access rights or compromising user accounts that have access to confidential files [5]. Notably, static analysis is very difficult to perform, as most attacks are highly dynamic, and the vulnerabilities may leave very limited traces in the actual source code [20].

Countermeasures and detection

If the validation of user access rights is manipulated or the access control system is misconfigured, confidential information could end up in the wrong hands. Mitigation for Broken Access Control has considerable inherent overlap with mitigation of Broken Authentication, as well as various types of injection flaws, since such vulnerabilities may allow for access to restricted data. This means that mitigation against injection flaws and broken authentication may also mitigate risks relating to access control, as shown by Zhu et al. [20].

Sandhu et al. [21] state that access control should be accompanied by log auditing, where logs track what users do in the system to confirm that the access control system works in the correct manner. They further point out that the use of log auditing is also helpful when verifying that administrators do not misuse their privileges. Such auditing can generally be handled by automatic tools, to detect situations like a non-admin user being able to access restricted pages like /admin/.
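A minimal sketch of such an automated audit check follows; the log format and role names are hypothetical and would in practice be dictated by the application's own logging scheme:

```python
# Hypothetical access-log format: "user role METHOD path status".
LOG_LINES = [
    "alice admin GET /admin/ 200",
    "bob user GET /admin/ 200",    # violation: non-admin reached /admin/
    "carol user GET /profile 200",
]

def find_access_violations(lines, restricted_prefix="/admin/"):
    """Flag successful requests to restricted paths made by non-admin users."""
    violations = []
    for line in lines:
        user, role, _method, path, status = line.split()
        if path.startswith(restricted_prefix) and status == "200" and role != "admin":
            violations.append((user, path))
    return violations
```

Running such a check periodically over real access logs is one simple way to turn log auditing into an automated access-control verification.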

However, access control is primarily an organizational field, as poor access control is often possible to detect (and exploit) even without intrusive software manipulation, making manual analysis preferable, possibly aided by the use of a proxy [10]. In addition to this, it is difficult to design automated systems that recognize expected levels of access, further complicating the use of automated tools [13].

6 - Security Misconfiguration (SM)

Security Misconfiguration is often caused by the use of default configurations, deploying applications with debugging messages and detailed error reports still in place, or making poorly planned changes, resulting in a variety of issues [5], [10]. It can also easily be caused by connected systems, frameworks, and libraries not being maintained and updated. Default installations are often publicly known, and their attack vectors are easily made visible by using an automated scan [13], [22].

This issue is further propagated, in part, by many web applications being dependent on the same frameworks [22]. If such frameworks allow for easy mistakes when configuring security, the same specific vulnerabilities are likely to be represented in many otherwise independent applications.

Countermeasures and detection

OWASP [5] suggests that protection against security misconfiguration should be codified as a company standard, so that new development environments can be deployed with ease, making security configuration very efficient.

8 Billion Laughs: https://en.wikipedia.org/wiki/Billion_laughs_attack


As correctly configuring security is often not as technically complex as many other aspects of security, many mitigation techniques, such as the one proposed by Eshete et al. [22], are primarily focused on simply identifying and reporting such issues.

7 - Cross-Site Scripting (XSS)

Input fields that do not have decent validation in place are vulnerable to a variety of attacks, which can allow the attacker to execute scripts or insert elements into the browsers of other users without their knowledge or consent. The users can then face identity theft or be redirected to malicious sites [5]. XSS attacks are commonly grouped into DOM-based (Type 0), reflected/non-persistent (Type 1), or stored/persistent (Type 2).

Stored XSS refers to malicious code being stored on a server, often hidden within user-supplied material such as images or comments, which will then be executed by any browser which attempts to display the content [23].

Reflected XSS describes attacks in which the malicious code is provided by a client and then reflected from the server back to the client's browser, which executes it. The attack requires the input to be part of the response, which often means that error messages or search results are used as attack vectors. Attacks generally involve tricking the victim into doing this to themselves through links or similar means [10], [23], [24].

DOM-based XSS vulnerabilities exist within the DOM (Document Object Model), rather than the actual HTML. The primary difference from the other types of XSS is that the HTML source code and responses are completely unchanged, making detection and prevention more difficult. As the malicious parts of the code are never sent to the server, server-side filtering also has no effect [25].

Many automatic tools are capable of detecting the potential for performing XSS attacks, making such detection quite suitable for automation [13], [26]. Manual analysis, while applicable, will generally be considerably less effective.

Countermeasures and detection

Detecting XSS vulnerabilities is fairly straightforward through penetration testing. Once potential insertion points are identified, vulnerabilities can often be confirmed non-intrusively by attempting to insert basic special characters such as < or > [10].

To prevent XSS there are, according to Jeremiah Grossman, several methods [24]. Of these, the most common is to filter or block user input [24]. It should be noted that error messages resulting from blocking invalid user input could themselves become an attack vector for XSS, for example if the input is incorporated into the response. Filtering input is often preferred for its relative simplicity, but has its own set of issues, as it can often be circumvented. As with many other types of input handling, whitelisting is preferable to blacklisting.
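The two approaches can be sketched briefly in Python; the whitelist pattern and the comment-rendering function are hypothetical examples, and html.escape stands in for whatever output-encoding facility the application's framework provides:

```python
import html
import re

# Example whitelist: short comments of letters, digits and basic punctuation.
ALLOWED_COMMENT = re.compile(r"^[A-Za-z0-9 .,!?_-]{1,280}$")

def render_comment(user_input: str) -> str:
    """Escape user content before embedding it in an HTML response."""
    return "<p>" + html.escape(user_input) + "</p>"
```

Escaping on output neutralizes a payload even if it slipped past input filtering, which is why the two layers are usually combined rather than treated as alternatives.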

On an architectural level, OWASP [5] recommends a basic separation of verified and unverified application content, through frameworks, escaping or context-sensitive encoding.

8 - Insecure Deserialization (ID)

If applications deserialize objects supplied by users, those objects may have been written or modified by an attacker, potentially altering user rights or tampering with other aspects of the application [5].

For example, as shown by Charlie Lai [27], Java classes implementing the standard Java interface Serializable generally allow objects to be created from serialized streams, avoiding any checks that are part of the class's constructor, potentially allowing malicious objects to be instantiated within an application.


Countermeasures and detection

According to OWASP themselves, the only way to be truly safe from insecure deserialization is to simply not deserialize any user-submitted objects, except from trusted sources (whitelisting) [5]. Secondary solutions include requiring digital signatures, which would be applied as part of legitimate and approved serialization methods, or isolating the deserialization process from sensitive or important systems.
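The whitelisting idea can be illustrated with Python's pickle module, whose Unpickler class allows class resolution to be overridden; the whitelist contents here are hypothetical, and a real application would list only the classes it legitimately needs:

```python
import io
import pickle

# Example whitelist of (module, class) pairs permitted during deserialization.
ALLOWED_CLASSES = {("builtins", "dict"), ("builtins", "list"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that refuses to instantiate non-whitelisted classes."""

    def find_class(self, module, name):
        if (module, name) in ALLOWED_CLASSES:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"{module}.{name} is not on the whitelist")

def restricted_loads(data: bytes):
    """Deserialize untrusted bytes under the class whitelist."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain data structures deserialize normally, while a stream referencing any other class is rejected before it can be instantiated.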

Automation, while somewhat applicable for this type of vulnerability, has limited uses. Tools may search for patterns implying serialized objects, or probe the application with its own serialized objects, but generally require some degree of specific source-code knowledge [13].

9 - Using Components with Known Vulnerabilities (UCwKV)

The use of components with known vulnerabilities in applications implies that the system as a whole has the same vulnerabilities as the component, as components are often running with the same levels of privilege as the application itself [5]. For the same reasons, attacks using such vulnerabilities may succeed in spite of existing security measures, which might be bypassed by the privileged components [10].

A study by Inoue et al. [28], which included a total of 123 open-source projects using one or more of three chosen libraries, showed that 84 (68.3%) used outdated versions of these libraries, potentially exposing them to various vulnerabilities. This would imply that, at least for small-scale projects, the issue of vulnerable components is fairly common.

Countermeasures and detection

Preventing exposure to known vulnerabilities of used components is largely centered on choosing components carefully and handling them properly. Avoiding the use of third-party components to the extent possible, only using components that are truly needed, and making sure that they are kept up-to-date at all times are some basic mitigation methods [5]. Automated scanners can be used to map specific components to certain known vulnerabilities, but access to a database with all vulnerabilities cannot be expected, limiting the usefulness of automated mapping [13]. Presuming the existence of documentation of used components and their version numbers, manually checking them against vulnerability databases or their own respective documentation regularly is necessary to ensure continued security. If possible, use of third-party components should be avoided, or limited to cases where they provide considerable business value.
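The core of such a version check is simple to sketch; the component names, versions, and advisory data below are entirely hypothetical, standing in for entries a real assessment would pull from a vulnerability database such as the CVE/NVD feeds:

```python
# Hypothetical advisory data: component -> first version containing the fix.
KNOWN_FIXES = {"examplelib": (2, 4, 1), "otherlib": (1, 0, 3)}

def parse_version(text: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. '2.3.0' -> (2, 3, 0)."""
    return tuple(int(part) for part in text.split("."))

def vulnerable_components(inventory: dict) -> list:
    """Return the components whose installed version predates the first fix."""
    return [
        name
        for name, version in inventory.items()
        if name in KNOWN_FIXES and parse_version(version) < KNOWN_FIXES[name]
    ]
```

Given a documented inventory of components and versions, the same comparison can be rerun on every release or on a schedule, which is exactly the continuous manual check the paragraph above calls for.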

10 - Insufficient Logging and Monitoring (IL&M)

Lack of sufficient logging and monitoring will likely increase the time to detect when something has gone wrong, thereby delaying reactive countermeasures and damage control. Events to log and monitor can be either intended or unintended behaviors that need reactive controls. Lack of a correctly configured Intrusion Detection System (IDS) to monitor the logs will make this problem increasingly severe [5].

Countermeasures and detection

Insufficient logging and monitoring is stated to be a vulnerability that is impossible to detect automatically [13], since a detection system may monitor a web application from a remote server and thus be invisible to a scanner. Therefore, when attempting to verify the quality of logging and monitoring processes, one first needs to ask the system administrator whether a detection system exists at all.

Logs can reduce the detection time of an attack or an intrusion. Such intrusions may not be the actual attack itself, but general vulnerability probing in preparation for a more substantial


Table 2.1: Log States

State     Description
Normal    Normal user behavior.
Alert     Something has diverged from the normal site, such as a series of failed log-in attempts.
Incident  Something is not working as intended, potentially disrupting the service.
Critical  Something critical has changed, like root privileges or root commands, either with malicious intent or accidentally.

breach [5]. If the detection system is properly configured, the time to detect a breach will be almost negligible.

However, logs on their own are not enough to reduce detection time, as proper monitoring tools are also needed. A monitoring tool will have the logs as an input and then categorize them into different states, notifying the operator when there is actually something that needs to be taken care of. Possible types of states, as defined by the European Commission Information System Security Policy [29], are shown in table 2.1.
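Such categorization can be sketched as a simple rule-based classifier over the four states of table 2.1; the regular expressions below are toy illustrations, since real rules would come from organizational policy rather than keyword matching:

```python
import re

# Illustrative rules mapping log entries to the states of table 2.1,
# checked from most to least severe.
RULES = [
    ("Critical", re.compile(r"root|privilege")),
    ("Incident", re.compile(r"error|unavailable")),
    ("Alert", re.compile(r"failed login")),
]

def classify(entry: str) -> str:
    """Return the first matching state for a log entry, defaulting to Normal."""
    for state, pattern in RULES:
        if pattern.search(entry):
            return state
    return "Normal"
```

A monitoring tool built around such a classifier would notify the operator only for entries above the Normal state.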

Preferably, logging should be performed consistently across all aspects of an application, an organization, or even an entire industry, to ease analysis between instances. However, it is important to note that the quality of this system is not guaranteed just by its presence.

NIST, the National Institute of Standards and Technology, has presented a Guide to Computer Security Log Management [30] where they propose a four-step process, which administrators should follow for the systems they are responsible for:

• Configure the log sources, including log generation, storage and security.
Depending on organizational policies, administrators must decide what components of their host systems should be part of the logging infrastructure. Log sources must be coupled to their respective logs, and the particular data to be logged for each type of event. It is prudent to limit logging to interesting events, given that the logs may otherwise grow excessively large. In addition to this, the storage of entries must be formalized, as different events may be suitable to store for longer periods of time, or in different systems altogether.

• Perform analysis of log data.
In order to make any use of the logs, the administrators need to obtain a good grasp of the logs. For this to be as simple as possible, the logs need to be constructed in a good way, providing enough context and clarity to be easily understood. Entries should be prioritized according to time of day of occurrence, frequency, log source, and other factors, and marked as interesting for different levels of the application when applicable.

• Initiate appropriate responses to identified events.
When analyzing logs, there must be procedures in place to identify the need for response. Administrators should not only make sure that such procedures are in place, but also be ready to assist in the efforts.

• Manage the long-term storage of log data.
Some data may be necessary to store for long periods of time, perhaps even for several years. In such cases, additional factors must be taken into consideration. The data logs must have a format decided for such storage, storage devices must be chosen, and the method of storage, regardless of technical details, must be physically secure.


3 Penetration Testing

Penetration testing, the process of simulating attacks on a system in order to detect any potential vulnerabilities, and usually confirm their existence by exploiting them (penetrating the system), is a common technique throughout the industry. The InfoSec Institute [10] defines the process as having four phases, each of which is expanded upon below:

• Information Gathering

• Vulnerability Testing

• Risk Assessment

• Reporting

Many other methodologies exist, such as those presented by OWASP in their testing guide [31], which also goes into considerably greater technical detail, but the general outline of the penetration testing methodology does not deviate much from the InfoSec Institute's description.

3.1 Information Gathering

The information gathering phase of penetration testing consists primarily of a manual review of the application and its environment. To be able to understand how a specific web application operates and what kind of vulnerabilities might exist, such reviews are a crucial part of the initiation phase of penetration testing [2], [10]. A manual reviewer, as opposed to an automated tool, has the advantage of being able to have concepts and design decisions explained to them, to better identify and assess potential security concerns [31]. Solving such problems requires a great deal of flexibility and knowledge, something which automated systems would struggle to handle [32].

The review may generally investigate matters such as the verification of the established application requirements, the information classification, the security and logging policies, and the risk analysis. These activities make it possible to detect logical vulnerabilities and to identify vulnerabilities such as insecure deserialization and insufficient logging and monitoring, before any resources have been spent on intrusive testing.


While the review should include documentation, policies, requirements and other non-code circumstances, one of the core activities is the actual source code review [31]. For this purpose, implementing some manner of automation often saves considerable time and effort. Specifically, Biffl et al. [33] point out the worth of automation in cases of large code bases, and when looking for simple and recurring problems relating to code quality itself. Still, as automatic tools are unable to independently surmise the intent of any particular piece of code, they cannot reliably identify flaws within the design, and are likely to produce large amounts of false positives. Even though automation may find security issues based on coding errors, manual review will always be necessary [31], [34].

The review also allows the reviewer to get a comprehensive idea of how the web application works and understand what parts may be of particular interest for further analysis. The primary areas of interest for the reviewer should be the same as for a potential attacker. Regarding many of the more organizational vulnerabilities, a basic rhetorical survey may be enough to detect them, including questions like: Is there a correctly configured Web Application Firewall in place? Are the activities within the web application being logged? Does it seem like the web application has client-server requests being sent to a database? What type of database is being used?

Such surveys would generally be a good and inexpensive first step, in preparation for the planning of more complex analysis and mitigation.

3.2 Vulnerability Testing

During the vulnerability testing phase, the penetration tester investigates the system while it is running, attempting to identify vulnerabilities, generally by applying various exploits. While the manual review will have provided guidance towards some of the existing vulnerabilities, there is a large degree of experimentation involved in vulnerability testing [2]. Initially, focus should be on mapping all available paths and points of interaction. Performing such mapping manually may provide great insight into potential flaws and exploits, but it can become unwieldy for larger applications [10].

To mitigate this issue, tools can be used to automate the process, either partially or completely. Relevant features to support the dynamic analysis include intercepting proxies, manual request editors, fuzzers, crawlers and active/passive scanners.

An intercepting proxy is located between a user and the internet. During malicious attacks, the user/client would not normally be aware of the proxy, but the client/server communication is being intercepted. The proxy can then perform actions, including redirection, input modification, or simply logging the communication [35].

Manual request editors can make requests to specific targets at any time with desired parameters. They are also capable of modifying parameters of current requests, potentially bypassing front-end validation.

A fuzzer is a black-box functionality commonly included in vulnerability tools, which allows an attacker to submit illegitimate data to a target in the hope of finding an exploitable vulnerability. Such data could include all sorts of basic SQL injection code, XSS scripts and common passwords [36]. This malicious input is often supplied in a high-speed brute-force fashion, though application-specific knowledge may allow for the input types to be specifically chosen, depending on the suspected vulnerability.
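The core loop of a fuzzer can be sketched in a few lines; the payload list and the deliberately naive input handler below are hypothetical stand-ins for a real wordlist and a real application endpoint:

```python
# Illustrative payload list; real fuzzers ship far larger wordlists.
PAYLOADS = [
    "' OR '1'='1",                 # basic SQL injection probe
    "<script>alert(1)</script>",   # basic XSS probe
    "admin",                        # common credential guess
    "A" * 10_000,                   # oversized input / overflow probe
]

def naive_handler(value: str) -> bool:
    """Toy stand-in for an application endpoint: accepts any short input."""
    return len(value) < 100

def fuzz(handler, payloads):
    """Return the payloads the handler accepted, i.e. potential findings."""
    return [p for p in payloads if handler(p)]
```

Each accepted payload marks an input that reached the application unfiltered and therefore warrants manual follow-up, which mirrors how fuzzer output feeds the verification step described later in this chapter.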

A crawler works by crawling the web pages of a website to map resources, using existing hyperlinks. Benefits include a faster and more extensive mapping process than regular manual mapping and the possibility of finding hidden resources that are not meant to be public. This type of feature is a common type of automated scan, and is often a basic feature of web scanners.
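The first step of any such crawler, extracting hyperlinks from a fetched page, can be sketched with Python's standard-library HTML parser; the sample markup is hypothetical:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags, the seed of a crawl queue."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

collector = LinkCollector()
collector.feed('<a href="/admin/">admin</a> <a href="/about">about</a>')
```

A full crawler would fetch each collected link in turn, deduplicate already-visited URLs, and repeat until no new resources are found.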

An active scanner will scan a website and perform common attacks against it, trying to find potential vulnerabilities. Active scanners suffer when there is dynamic content in live


Figure 3.1: Risk Severity

applications, such as if user-supplied content is added or modified during scanning. In such cases, the scanner will likely give a report of limited accuracy.

A passive scanner, on the other hand, is used to monitor requests and responses without modifying them. This causes considerably less strain on network capabilities, while still allowing for a lot of information to be gathered [37]. Drawbacks to such scanners concern the lack of thoroughness and accuracy, as passive scanning is inherently limited to observing existing traffic.

Unfortunately, black-box penetration testing cannot be guaranteed or even expected to find all security flaws, even those that may be obvious upon manual source code analysis. In addition to this, automated scanners can only detect the existence of known vulnerabilities, and have no capability to detect operational or procedural issues [34].

Moreover, verification of any identified potential vulnerabilities is an important part of vulnerability scanning and must be performed manually. There is considerable risk of any automated system issuing alerts and warnings due to issues that do not actually present any vulnerability or issue worth considering for mitigation. While relevant in many fields of study, the issue of false positives is of particular interest to this field and must be taken into consideration [38].

3.3 Risk Assessment

Not all findings from a vulnerability test are necessarily relevant to the specific evaluated web application, or cause enough damage to business values to motivate taking action against them. A risk assessment must be done in order to decide which findings are relevant to the studied web application and to decide what risk level each finding has [10]. One method for determining the severity of a specific risk is to estimate the likelihood of an attack and the expected impact, rating them as either low, medium or high, and rating the result in accordance with the severity levels shown in figure 3.1 [10].
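Since figure 3.1 is not reproduced here, the sketch below uses one common likelihood × impact matrix; the exact cell values are an assumption and may differ in detail from the figure:

```python
# One common 3x3 likelihood x impact severity matrix (assumed values,
# not necessarily identical to figure 3.1).
LEVELS = {"low": 0, "medium": 1, "high": 2}
MATRIX = [
    # impact:  low       medium    high
    ["low",    "low",    "medium"],    # likelihood: low
    ["low",    "medium", "high"],      # likelihood: medium
    ["medium", "high",   "critical"],  # likelihood: high
]

def risk_severity(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact ratings into an overall severity."""
    return MATRIX[LEVELS[likelihood]][LEVELS[impact]]
```

Whatever the precise cell values, the point of the matrix is that severity grows with both factors, so a low-impact but highly likely finding can outrank a severe but improbable one.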

3.4 Reporting

A well-written report of the findings for each studied web application is desired. The report should contain the list of found vulnerabilities, along with the severity and likelihood of each vulnerability, sorted in order of priority. Moreover, it needs to clearly provide a list of relevant reading as well as a description of how to mitigate each vulnerability [10].


4 Existing Tools, Methods and Environments

Different types of vulnerabilities require different approaches to detect and mitigate. To aid with this, a variety of tools and methods are available, as well as specifically constructed environments against which to test them.

4.1 Methods of Analysis

Industry experts [39] point out different issues as being applicable for different types of analysis, such as by differentiating between logical and technical vulnerabilities. Technical vulnerabilities refer to issues best found by automated systems, as they are rule-intensive and time-consuming. Examples include scanning for potential injection points via HTML forms or other user input. Logical vulnerabilities, on the other hand, generally concern design choices, such as what data is passed via URLs, or what data is stored by the application. These logical vulnerabilities generally require human analysis to identify.

Within the industry, companies are generally promoting automated systems for vulnerability scanning as an alternative to, or even replacement of, manual penetration testing [40], [41].

Nevertheless, methods of analysis depend on the specific type of task being performed. As with other areas of analysis, vulnerability scanning has mainly two types: static and dynamic analysis [33], [38], [42].

Static analysis of software refers to the act of checking whether some piece of software matches a set of requirements, without executing the software in question.

Dynamic analysis, in turn, is performed during run-time, observing how the system behaves when interacted with, often in explicitly malicious or otherwise non-standard ways, as described by Arkin et al. [42]. They also point out the efficient use of automated tools doing the grunt work, but that they cannot be fully relied upon without a skilled security analyst. While dynamic analysis and penetration testing are widely used by many companies as the primary, if not the only, approach to test security, this does not provide complete safety from all vulnerabilities [31]. Unless external circumstances prevent it, security analysis should always contain a mixture of many techniques and methods.

These different types of analysis require different tool features. A given feature may be usable in both automatic and manual workflows, or in only one of the two.


Through the types of analysis, individual tool features can further be mapped to relevant vulnerabilities, to verify the extent of the coverage.

4.2 Vulnerability scanning requirements

The requirements relevant to choosing vulnerability scanners for the comprehensive methodology are presented below.

FedRAMP

FedRAMP, the Federal Risk and Authorization Management Program, is an American governmental program which provides standardization of security measures for cloud-based products and services.

Its mission is to raise the quality of governmental IT infrastructure by moving from insecure legacy systems to modern cloud-based ones. By providing an authorization program for private cloud service providers, it allows other American agencies to trust those CSPs to deliver high-quality cloud-based IT solutions.

In order to maintain high quality as a part of their business practices, FedRAMP requires that Cloud Service Providers (CSPs) identify and apply security evaluation tools for three distinct categories: Operating Systems and Networks, Databases, and Web Applications [43]. CSPs must perform these scans monthly and submit the reports to authorization officials at FedRAMP. These reports are critical, as they allow FedRAMP to continuously monitor the state of each CSP's systems.

For the reports to fulfill their purpose, FedRAMP has set up a list of requirements for vulnerability scanners, as shown below:

• Authenticated/credential Scans: Scans must be performed with full access credentials, as a lack thereof is likely to produce limited results.

• Enable all Non-destructive Plug-ins: Scans must be run with all non-destructive plug-ins and add-ons enabled.

• Full System Boundary Scanning: Scans must include any and all components that are part of the system.

• Scanner Signatures Up to Date: Scanners must be continuously maintained with current versions of vulnerability data, as well as being updated themselves.

• Provide Summary of Scanning: Each scan report must be complete with various data, including what objects/components have been scanned, what specific tools were used, and scanner settings.

• POA&M All Findings: All findings must be noted in a Plan of Action and Milestones (POA&M) until the vulnerability has been dealt with. [43]
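The summary and POA&M requirements above can be sketched as a simple automated report check. The report schema (dictionary keys) is an assumption for illustration, not a real FedRAMP format:

```python
# Hypothetical check of a scan report against the summary and POA&M
# requirements: required summary fields must be present, and every
# unresolved finding must carry a POA&M entry.
REQUIRED_SUMMARY = {"scanned_components", "tools_used", "scanner_settings"}

def report_gaps(report):
    """Return human-readable gaps found in a scan report dict."""
    gaps = [f"missing summary field: {k}"
            for k in sorted(REQUIRED_SUMMARY - report.keys())]
    for finding in report.get("findings", []):
        if not finding.get("resolved") and not finding.get("poam"):
            gaps.append(f"finding {finding['id']} has no POA&M entry")
    return gaps

report = {
    "scanned_components": ["web", "db"],
    "tools_used": ["scanner-a"],
    "findings": [{"id": "V-1", "resolved": False}],
}
print(report_gaps(report))
# ['missing summary field: scanner_settings', 'finding V-1 has no POA&M entry']
```

An empty result would indicate a report that satisfies both checks.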

Several points in the above list are corroborated by other, similar requirement guidelines. For example, a Hong Kong Government overview of vulnerability scanners [34] points out the importance of vulnerability scanners being frequently updated. It also makes note of the potential for overlapping instances of the same vulnerability, and the need for consistent and well-structured scan reports. Similarly, Netsparker [44] points out that if there are parts of the


web application that require authentication of any type to access, scanners should not only be able to investigate the authentication mechanism, but should also be configurable to log in automatically and test the application components it protects.
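Why authenticated scanning matters can be shown with a toy in-memory "application": a crawler that cannot log in never reaches the protected component at all. All pages and names below are invented for illustration:

```python
# Toy site map: /admin sits behind authentication and is only
# discoverable through the /login page.
PAGES = {
    "/":       {"auth": False, "links": ["/login", "/public"]},
    "/public": {"auth": False, "links": []},
    "/login":  {"auth": False, "links": ["/admin"]},
    "/admin":  {"auth": True,  "links": []},
}

def crawl(start, authenticated):
    """Breadth-unordered crawl; skips auth-protected pages if unauthenticated."""
    seen, queue = set(), [start]
    while queue:
        url = queue.pop()
        page = PAGES.get(url)
        if url in seen or page is None:
            continue
        if page["auth"] and not authenticated:
            continue  # the scanner is turned away at the login wall
        seen.add(url)
        queue.extend(page["links"])
    return sorted(seen)

print(crawl("/", authenticated=False))  # ['/', '/login', '/public']
print(crawl("/", authenticated=True))   # ['/', '/admin', '/login', '/public']
```

The unauthenticated crawl never reaches /admin, so any vulnerability behind the login wall would go untested, which is exactly what the requirement above guards against.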

Experienced Penetration Testers

In addition, experienced penetration testers prioritize the configurability of tools as an important factor [45]. While focused tools are often optimized for their particular purposes, being able to modify and add features creates flexibility.

Another requirement of note is the cost factor [9]. Smaller businesses often do not have the resources to buy licences for state-of-the-art penetration testing tools, which can range from a few hundred dollars per year to tens of thousands, depending on the number of licences required1,2,3,4.

4.3 Existing Tools

The market for automated vulnerability scanning software is already well established, and white papers from companies selling such services, including Beyond Security [40] and the for-profit SANS Institute [41], present many benefits of using them.

Independent research, however, points out that while automated vulnerability services are indeed useful and continuously improving, there is still a need for a skilled operator to sort out the false positives and to prioritize the identified vulnerabilities [6].

Below, such services and tools are listed and described.

Zed Attack Proxy

Zed Attack Proxy (ZAP)5 was developed based on the OWASP guidelines. It is marketed as a beginner's product, supporting both automated services and manual security testing. It includes a variety of features, including active and passive scanners, an intercepting proxy with a manual request editor, a port scanner, and a fuzzer [14].

Burp Suite Community Edition

Burp Suite Community Edition6 is a free edition of the commercial Burp Suite, including the essential manual tools, though not the advanced set nor the vulnerability scanner. In an evaluation based on the Web Application Security Scanner Evaluation Criteria (WASSEC), the full version placed in the top three out of 13 web application scanners [46].

Sqlmap

The open-source sqlmap7 performs automatic detection scans for SQL injection flaws in specific URLs and provides tools for exploiting these same flaws. It supports a number of techniques, including stacked queries, time-based blind injections and error-based attacks, as well as dictionary attacks on automatically identified hashed passwords, among other features. While lacking any automated crawling capabilities, these specific features allow sqlmap to be used as an effective verification and exploitation tool [47].
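The error-based technique that sqlmap automates can be reduced to a minimal sketch: inject a quote, then compare the response against a baseline and look for database error signatures. The endpoint below is a simulation standing in for a real HTTP request, and the signature list is illustrative:

```python
# Simulated vulnerable endpoint: concatenates input into SQL unsafely,
# so a stray quote produces a database error in the response.
def vulnerable_endpoint(user_id):
    if "'" in user_id:
        return "500: You have an error in your SQL syntax"
    return "200: profile page"

SQL_ERROR_SIGNATURES = ["SQL syntax", "ORA-01756", "unterminated quoted string"]

def looks_injectable(send):
    """Probe with a quote and compare against a known-good baseline."""
    baseline = send("1")
    probed = send("1'")
    return probed != baseline and any(sig in probed for sig in SQL_ERROR_SIGNATURES)

print(looks_injectable(vulnerable_endpoint))  # True
```

A real tool adds many refinements (encoding, DBMS fingerprinting, blind and time-based variants), but the core comparison logic is the same.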

1Netsparker Pro: https://www.netsparker.com/pricing/
2Acunetix: https://www.acunetix.com/ordering/
3Detectify: https://detectify.com/pricing
4Nstalker: http://www.nstalker.com/buy/
5ZAP: https://github.com/zaproxy/zaproxy
6Burp Suite: https://portswigger.net/burp
7Sqlmap: http://sqlmap.org/


Table 4.1: Scanners and OWASP top 10 vulnerability mapping as claimed by each scanner's documentation

Column abbreviations (the OWASP Top 10, 2017): I = Injection, BA = Broken Authentication, SDE = Sensitive Data Exposure, XXE = XML External Entities, BAC = Broken Access Control, SM = Security Misconfiguration, XSS = Cross-Site Scripting, ID = Insecure Deserialization, UCwKV = Using Components with Known Vulnerabilities, IL&M = Insufficient Logging & Monitoring

I BA SDE XXE BAC SM XSS ID UCwKV IL&M
Vega X X X X X X
ZAP X X X X X X X
Arachni X X X
Sqlmap X X X
IronWASP X X X X X X X
Wapiti X X X X
Burp Suite CE X X

IronWASP

The IronWASP8 (Iron Web Application Advanced Security testing Platform) framework can be used to scan web applications for a variety of well-known vulnerabilities. It gives fairly detailed reports of the errors found and is capable of detecting false positives/negatives. In a test run [48] against the WackoPicko9 web application, it detected the most vulnerabilities of the scanners under test, which included OWASP ZAP, IronWASP and Vega.

Vega

Vega10 is capable of assisting with the detection and validation of SQL injection and XSS, among other vulnerabilities and threats. It is still in development and does not support post-authentication scanning in its current version (1.0) [48].

Arachni

The Arachni11 web application security scanner framework is a combined crawler and penetration testing tool, checking for a variety of common vulnerabilities, including injection, XSS and XXE, among many other issues. It is also capable of detecting and adapting to dynamic changes during path traversal, which is not a common feature in scanners [49]. Arachni comes in two versions, one with a graphical user interface (GUI) and one executed from the command line.

Wapiti

The web application vulnerability scanner Wapiti12 is designed purely for black-box scanning through crawling, attempting to discover a variety of injection flaws through fuzzing [50].

4.4 Related Works: Tools being tested against special environments

In general, research within the field of automatic vulnerability scanning consists of comparisons between existing tools and frameworks. These are almost exclusively academic, and rarely discuss matters of practicality, instead focusing on structured testing in specifically designed environments. Given the potential security risks involved, real-life industry methodologies and processes are rarely made public.

8IronWASP: https://ironwasp.org/index.html
9WackoPicko: https://github.com/adamdoupe/WackoPicko
10Vega: https://subgraph.com/vega/
11Arachni: http://www.arachni-scanner.com/
12Wapiti: http://wapiti.sourceforge.net/


The 2010 paper Why Johnny Can't Pentest: An Analysis of Black-Box Web Vulnerability Scanners [26] describes the testing of 11 scanners, three of which were open-source, against the vulnerable application WackoPicko, which was specifically constructed for this experiment. It was found that, at the time, none of the scanners were able to find even half of the vulnerabilities, while a group of students described as being of "average" skill were able to manually find all but one. The limited performance of the scanners was concluded to be due to a small set of critical limitations. Foremost among these is the scanners' general inability to detect logical vulnerabilities, along with great difficulties in performing multi-step attacks. In addition, they were unable to access parts of the application that required authorization.

In the years following the above study, the field was studied further, generally with similar conclusions. Antunes and Vieira [51], for example, performed a study on four scanners, three of which were commercial. That study reports that even when used together, these four scanners were collectively able to find less than a third of the known vulnerabilities, compared to a group of experts who were able to manually identify 98%. Notably, the scanners also showed a considerable number of false positives, with over half of the vulnerabilities reported by two of the scanners not actually being considered valid. The authors use this to point out that trusting automated scanners is likely to result in large amounts of work being put into solving problems that do not actually pose any threat, or possibly do not exist at all. As such, neither the coverage nor the precision of the available tools was found to be satisfactory, nor comparable to human expertise.

Alsaleh et al. [52] point out that, based on their testing of four scanners, there was a noticeable discrepancy between the reported results, even when performance levels were similar, highlighting a lack of comprehensive studies on the subject. While generally satisfied with the quality of the tools' respective user interfaces, they make note of questionable crawler coverage, as well as accuracy, speed, and false positive rates being little more than "acceptable".

In the master's thesis of David A. Shelly, Using a Web Server Test Bed to Analyze the Limitations of Web Application Vulnerability Scanners [53], a method for evaluating the effectiveness of automatic vulnerability scanners is presented. It used a specifically modified environment in two versions (secure and insecure) to test six scanners, primarily focusing on the analysis of false negatives and false positives. Interestingly, even vulnerabilities normally considered very suitable for automation, such as basic form-based SQL injection and reflected XSS vulnerabilities, were found to have quite low detection rates, averaging 59.7% and 43.3% respectively. For less simple vulnerabilities, the scanners were found to be far from sufficient. Regarding false positives, many such cases were found to be caused by duplicate reports, rather than truly incorrect results. While false positives were generally not deemed very common, it was speculated that this was strongly correlated with a generally limited capability of the scanners to find vulnerabilities at all, whether they were actually present or not.
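The metrics recurring in these studies (detection rate, false positives, duplicate reports) can be computed with a short sketch once a ground truth of known vulnerabilities exists. The vulnerability identifiers below are invented:

```python
# Compare a scanner's reported findings against a known ground truth.
# Duplicate reports are collapsed before counting false positives,
# mirroring the duplicate-report effect noted above.
def evaluate(reported, ground_truth):
    unique = set(reported)             # duplicates collapse here
    detected = unique & ground_truth
    return {
        "detection_rate": len(detected) / len(ground_truth),
        "false_positives": len(unique - ground_truth),
        "duplicates": len(reported) - len(unique),
    }

truth = {"sqli:/login", "xss:/search", "xss:/comment"}
reported = ["sqli:/login", "xss:/search", "xss:/search", "csrf:/profile"]

result = evaluate(reported, truth)
print(result["false_positives"], result["duplicates"])  # 1 1
# result["detection_rate"] is 2/3: two of three known issues were found
```

This is only possible against purpose-built environments where the full vulnerability list is known, which is why such environments are constructed in the first place.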

4.5 Existing Methods

Some vulnerabilities are difficult to detect with automated tools, instead requiring the use of various manual techniques. Some such methods are described below.

Manual Crawling

An automated spider crawl is likely to add irrelevant URLs to its findings, especially in the case of large-scale web applications. Occasionally, some URLs are also considered irrelevant due to specific scope restrictions having been set for the penetration test. Tools like Burp Suite can record the URLs visited during a manual browsing session, helping the operator optimize the run-time and vulnerability findings of, for example, active scanners.
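The pruning an operator performs via a manual browse can be approximated by filtering a spider's findings against the agreed scope. The scope rules and URLs below are illustrative:

```python
# Keep only in-scope URLs from a crawl, dropping duplicates,
# off-host links, and explicitly excluded path prefixes.
from urllib.parse import urlparse

def in_scope(url, host, excluded_prefixes=()):
    p = urlparse(url)
    return p.netloc == host and not any(
        p.path.startswith(x) for x in excluded_prefixes)

def prune(urls, host, excluded_prefixes=()):
    kept, seen = [], set()
    for u in urls:
        if u not in seen and in_scope(u, host, excluded_prefixes):
            seen.add(u)
            kept.append(u)
    return kept

found = [
    "https://shop.example/cart",
    "https://shop.example/cart",           # duplicate
    "https://cdn.example/lib.js",          # off-host
    "https://shop.example/admin/backup",   # excluded by scope
]
print(prune(found, "shop.example", excluded_prefixes=("/admin",)))
# ['https://shop.example/cart']
```

Feeding only the pruned list to an active scanner shortens run-time and keeps the test within the agreed scope.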


Logical Flaws

Logical flaws are generally not something an automated tool is able to detect [13]. For example, if a discount code can be applied multiple times, or modified, this is considered a logical flaw which a human penetration tester could easily detect.
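The discount-code example can be made concrete: the toy checkout below is syntactically unremarkable, so a rule-based scanner has nothing to flag, yet the logic accepts the same code twice. The code name and prices are invented:

```python
# A logical flaw: nothing here is a "technical" vulnerability, but the
# checkout never checks whether a discount code was already applied.
def checkout(price, codes):
    applied = []  # flaw: repeated codes are not rejected
    for code in codes:
        if code == "SAVE10":
            price *= 0.9
            applied.append(code)
    return round(price, 2), applied

print(checkout(100.0, ["SAVE10", "SAVE10"]))  # (81.0, ['SAVE10', 'SAVE10'])
```

A human tester finds this simply by submitting the code twice; a fixed version would reject any code already present in `applied`.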

Many logical vulnerabilities require creative thinking to detect and exploit. As two penetration testers have stated in separate interviews performed as part of this thesis, penetration testing can be considered as much an art form as a technical skill [2], [9].

Vulnerability Verification

Whenever a vulnerability is suspected to exist, whether because it has been reported by scanners or through intuition, it must be thoroughly verified before being reported. Oftentimes, the most practical way to do this is to perform the appropriate attack. The specific methods are as varied as the vulnerabilities they exploit, and may be performed with or without tools, depending on the situation.

4.6 Existing Environments

The following applications were chosen for use in the testing of various tools and methods. With the exception of Bila Nu, Presentshoppen and OrangeHRM, they were deliberately designed for such testing.

DVWA

The Damn Vulnerable Web Application (DVWA)13 is a deliberately vulnerable web application intended primarily for educational penetration testing, both manual and tool-based. The application contains ten sets of vulnerabilities (each with three difficulty levels), including Broken Authentication, SQL injection and XSS attacks. DVWA has seen some use in academic research on the subject of vulnerability testing, such as by Lebau et al. [54], who attempted to automate model-based vulnerability testing of web applications.

OWASP Bricks

OWASP Bricks14, a currently inactive project, is a small set of specific vulnerability exercises. Totalling 13, the "Bricks" are divided into "Login", "File Upload" and "Content", focusing on various forms of code injection.

WACKOPICKO

As part of a paper on automated penetration testing [26], the WACKOPICKO application was constructed for the purpose of creating a completely known list of vulnerabilities against which to measure scanner results. It attempts to mimic a real web application, with user creation, user-submitted content (including image files) and other similar features. As a result, it has a range of different vulnerabilities in naturally occurring environments.

Peruggia

Peruggia15 is described as a web application environment intended for basic educational purposes, allowing its users to apply basic attacks on a realistic application. The application is small and developed by a single person, and at the time of writing it has not been updated since 2015.

13DVWA: http://www.dvwa.co.uk/
14Bricks: https://www.owasp.org/index.php/OWASPBricks
15Peruggia: https://sourceforge.net/projects/peruggia/


Table 4.2: Environment-vulnerability mapping (column abbreviations as in Table 4.1)

I BA SDE XXE BAC SM XSS ID UCwKV IL&M
DVWA X X X X X X
OWASP Bricks X X X
WACKOPICKO X X X X X X
Peruggia X X X X X X
OrangeHRM X X X X X X

OrangeHRM

OrangeHRM16 is a continuously updated open-source Human Resources management application, which is not in any way intended to be vulnerable. Old versions of the software are, however, still available, complete with vulnerabilities that have been patched out of the current version.

Bila Nu & Presentshoppen

For their respective Bachelor's theses, the authors of this master's thesis took part in the development of two separate single-page web applications, Bila Nu and Presentshoppen. They were constructed with minimal focus on security, but using well-established Python and JavaScript libraries and frameworks such as Flask and SQLAlchemy, making them very interesting to test against.

16OrangeHRM: https://www.orangehrm.com/


5 Method

In this chapter, the methods used for creating the complete vulnerability scanning methodology are described. This includes the process of the literature study, the creation of the requirements, and the testing and evaluation of automated and manual vulnerability scanning methods. Finally, the design, construction and verification of the comprehensive methodology, including the underlying logic, is presented.

5.1 Literature study

In order to form an understanding of the subject at large, to find relevant tools and methods to use, and to be able to define suitable requirements, a literature study was conducted.

Primary subjects of interest included existing vulnerability scanning tools and methods, existing environments for evaluating these same tools and methods, a basis for effectiveness requirements, notable existing vulnerabilities (including their respective mitigation and detection techniques), as well as the differences in applicability between manual and automated vulnerability analysis.

These subjects were split into two categories: practical industry phenomena, and primarily academic subjects.

For the more academic subjects, such as the general applicability of automation, peer-reviewed scientific sources were found by searching journal and conference repositories such as the IEEE Xplore Digital Library and publishing houses like Springer Link, mainly through aggregate search engines like Google Scholar and the LiU Digital Library.

To find examples of existing software and methodologies, as well as environments against which to test them, similar methods were used, though mainly to find white papers, specific software documentation and industry blogs related to current industry phenomena. The actual finding of the software examples was not strictly part of the literature study, but the literature study provided a better understanding of how well any specific piece of software could be expected to perform when used to scan different web applications, serving as an initial reference point for deciding whether it was worth considering for further experimentation.

Regarding the categorization and internal ranking of known vulnerabilities, there were already established industry standards in place. The OWASP top 10 list, which since its first version in 2007 has been updated approximately every third year, is often used for similar


purposes in other works, both academic and industrial [2]. Due to this, the primary focus of this part of the literature study became to collect data on the definitions of the ten vulnerabilities included in the November 2017 version of the list. OWASP's prevalence within both academia and industry, and the community peer-review involved in the creation of its material, lend credibility to its use. In addition, the penetration testers who were interviewed as part of this master's thesis [2], [9], [45] have all confirmed that it is accurate and provides extensive coverage of the industry situation.

While OWASP provides detection and mitigation techniques for their top vulnerabilities, other industry sources present their own methods for many specific vulnerabilities, examples of which are included in the results of the literature study. These methods have been incorporated into this thesis so as to be applicable individually, or as part of an extensive methodology.

Given the initial choice of combining manual and automatic processes, their respective applicability for different types of vulnerabilities needed to be ascertained. The two methods of analysis, static and dynamic, as described in section 4.1, were therefore used as a basis alongside the differentiation between automatic and manual analysis, while the general approach to penetration testing was based on the four-step process described by InfoSec [10].

In order to assess the quality of the methodology, relevant scientific methods for verifying a methodology needed to be established. While the scope of the thesis did not allow for their thorough application, three suitable techniques were identified: empirical validation, peer-review and field experiments. Each of these techniques is described further in section 5.6.

Interviews

In connection with the literature study, a number of interviews were performed with experienced penetration testers, in order to confirm or counter the findings of the literature study, as well as to provide new information and feedback regarding the general aim and approach of this master's thesis. In preparation for the first interview, a set of questions was written down. The set was primarily focused on the general work structure of professional penetration testing and the real-world relevance of various concepts which had been encountered through the literature study. The full set of questions can be seen in table B.

The interviews were performed over distance, with one person asking the majority of the questions, while the other was responsible for documentation and note-keeping. Following each interview, the notes were translated and compiled for future referencing. The finalized notes were sent to the respondent for approval, to make sure that any references made to the interview would be accurate. The notes are provided in appendix B.

5.2 Requirements

For the purpose of this thesis, requirements are divided into two types: effectiveness requirements and practical requirements. Effectiveness requirements concern matters of tool effectiveness, such as the degree to which a tool is likely to find the vulnerabilities it is designed to detect. Practical requirements concern external matters, such as ease of use, run-time, memory allocation, the level of required manual supervision and interaction, and whether the tools are designed for black- or white-box scanning.

Requirements needed to be defined both for the included tools and for the comprehensive methodology. Since the tools would not be constructed as part of this thesis, their requirements centered on usability and on providing sufficiently detailed reports.

The requirements for the comprehensive methodology were based on the vulnerabilities that were intended to be scanned for, which in turn had been chosen from the risks and vulnerabilities considered most critical by OWASP [5]. The system was required to be able to identify and alert the user to the presence of any of the most critical vulnerabilities, rank them by severity, and provide guidance on mitigation.


In addition, the requirements of the FedRAMP program [43], as described in section 4.2, were taken into account where applicable.

5.3 Selection, Testing and Evaluation of Tools and Features

The literature study resulted in a short list of well-known open-source or otherwise free-of-charge tools for web application vulnerability scanning and/or penetration testing. Based on their documentation, 11 of them were chosen for initial testing, with seven being accepted into the methodology. Many of them included functionality which could be considered fully or mostly automated. An automatic tool or functionality was defined as one that did not require manual input or analysis beyond preparatory investigation and post-scan analysis, with limited or no manual input during run-time.

As an example of an edge case, running sqlmap on an input form to detect potentially injectable parameters required specifying the URL in question and a valid input combination for comparison with failed attempts. During run-time, sqlmap would suggest different courses of action depending on its findings, such as ignoring other database configurations once environmental factors had been deduced, which the user would be required to respond to. In addition, if the user wished for the tool to access the database and display its contents, or crack passwords found within it, this would need to be explicitly requested. Given the above definition, the tool was deemed manual for the purposes of this thesis, despite the attack itself being performed by the tool and requiring limited personal knowledge of SQLi exploitation.

According to Acunetix [55], the first step in evaluating vulnerability scanners is to run them against web applications with well-known and specific vulnerabilities. This not only gives a good idea of the level of accuracy, through analysis of missed vulnerabilities and false positives, but also greatly limits the risk of disrupting the operations of any actively used applications one might wish to scan [9].

For these reasons, the automatic functionalities were run against a variety of vulnerable web applications from the OWASP Broken Web App virtual machine. The expected and actual results of the runs were noted down, along with comments regarding software requirements/dependencies, usability issues, relative run-time, and other relevant observations. The testing was done over several sessions, starting out with the primary intent of becoming familiar with the tools, environments and vulnerabilities involved. Once some familiarity had been achieved, sessions were held during which a number of tools were run against a single environment, for which the intended vulnerabilities were explicitly known, allowing for calculation of detection rates, as well as analysis of vulnerabilities that were reported without being represented in the documentation. The specific results for each session can be found in Appendix A.

Once this had been done, manual attempts were made against the vulnerabilities which remained. Several of the tools had both automatic and manual features, in which case they counted towards the same coverage, although marked as separate in the statistics. For the known vulnerabilities of each environment, the tools were marked as either having identified the issue automatically, having provided some assistance to manual detection, or as not having been able to do either. For automatic detection, a tool was considered successful if it identified the URL and basic type of vulnerability and reported it to the user. For manual assistance, any type of usefulness was considered enough, including proxies being marked useful for having request editors capable of serving as vectors for various injection-based attacks.
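The bookkeeping described above (automatic detection, manual assistance, or neither) can be aggregated per tool as follows; all tool names and marks in the example are made up:

```python
# Aggregate per-tool coverage from (tool, vulnerability) -> outcome marks,
# where the outcome is "auto", "assisted", or "none".
from collections import Counter

marks = {
    ("tool-a", "sqli"): "auto",
    ("tool-a", "xss"):  "assisted",
    ("tool-a", "csrf"): "none",
    ("tool-b", "sqli"): "auto",
    ("tool-b", "xss"):  "auto",
    ("tool-b", "csrf"): "none",
}

def coverage(marks):
    per_tool = {}
    for (tool, _), outcome in marks.items():
        per_tool.setdefault(tool, Counter())[outcome] += 1
    return {t: dict(c) for t, c in sorted(per_tool.items())}

print(coverage(marks))
# {'tool-a': {'auto': 1, 'assisted': 1, 'none': 1}, 'tool-b': {'auto': 2, 'none': 1}}
```

Keeping the automatic and assisted tallies separate preserves the distinction made in the statistics while still allowing them to count towards the same overall coverage.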

An attempt was also made, successfully, to use larger-scale functionality directly to identify points of interest for more specific analysis with more specialized tools and methods.


5.4 Comprehensive Methodology

Based on the gathered data, the comprehensive methodology was constructed to follow a basic structure of manual review, vulnerability detection/verification, risk analysis and reporting. The actual composition of the comprehensive methodology was primarily agreed upon through internal discussion, based on established work methods found through the literature study and interviews.

5.5 Design of Interactive Algorithm

The initial idea regarding the structure, application, and implementation of the intended methodology was founded on use cases based on the business of Secure State Cyber. In this context, the idea was that an experienced security analyst or penetration tester should perform a basic mapping of the web application in question, the results of which would be entered into an algorithm that would construct a suitable method out of relevant tools and methods. As an early proof-of-concept prototype, a basic JavaScript application was constructed, into which traits of the system under test were entered. These traits included purely practical components, such as user input fields and back-end databases, as well as business aspects, such as whether various types of sensitive data were being stored. Based on the input, the application would then produce a recommended set of tools and methods to apply when testing the application, although at this stage, the solutions were rudimentary and incomplete.
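The core of that proof-of-concept can be sketched as a trait-to-recommendation mapping (the original prototype was written in JavaScript; this sketch is in Python, and the trait names and recommendations are illustrative only):

```python
# Map observed traits of the system under test to a recommended set of
# tools and methods. The rules below are invented for illustration.
RULES = {
    "has_input_fields":      {"injection scanner", "manual request editor"},
    "has_database":          {"SQL injection tool"},
    "stores_sensitive_data": {"transport/storage encryption review"},
    "has_authentication":    {"authenticated crawl", "session handling review"},
}

def recommend(traits):
    recommended = set()
    for trait in traits:
        recommended |= RULES.get(trait, set())
    return sorted(recommended)

print(recommend(["has_input_fields", "has_database"]))
# ['SQL injection tool', 'injection scanner', 'manual request editor']
```

A fuller version would weight and order the recommendations, but the principle of mapping mapped traits to testing activities is the same.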

5.6 Verification of Comprehensive Methodology

In any field of science, there are numerous methods for verifying results. When attempting to verify the effectiveness of the proposed methodology, a few were of particular interest.

Empirical Validation

Empirical validation, in essence, describes the gathering and analysis of empirical evidence to verify the truth or falsity of a given claim. While the methods involved in the collection of evidence may vary considerably depending on the field of study, the validation relies on the possibility of evidence verification, only allowing objectively ascertainable data to be considered valid. While such validation provides excellent levels of confidence, it is not applicable to many types of experimentation, primarily in cases where subjective experiences are in any way relevant.

For this thesis, empirical validation could not be included in the scope, but it is noted here and elsewhere as potential future work.

Peer-Review

Peer-review, in its general sense, describes the evaluation of the work of some individual or group by another individual or group (of similar or greater knowledge of the field), in order to enhance (or question) the validity of the results and conclusions of the work. An independent review process is likely to uncover issues with the work through the introduction of additional perspectives [56].

A specific type of this process, scholarly peer-review, describes this process within the environment of academic work. In such cases, including this master's thesis, the reviewer may be an industry expert, rather than an academic one, to ensure that the methodology is sound from an industry standpoint rather than a purely academic one. While otherwise similar to ordinary industry peer-review, the process is somewhat more difficult, as scholarly work may often describe results from unique instances of research, with little previous

24

Page 35: Penetration testing for the in- experienced ethical hacker1217958/FULLTEXT01.pdfUpphovsrätt Detta dokument hålls tillgängligt på Internet – eller dess framtida ersättare –

5.6. Verification of Comprehensive Methodology

knowledge against which to compare. Unfortunately, this also increases the possibility of thereview process failing to identify errors, as well as introduces an increased risk of bias, wherereviewers may favor works which strengthens their own beliefs, or even suppress work thatruns counter to established theories, such as the supposed cases presented by Martin [57].

While the interviews performed during the literature study served as early confirmation of the general ideas and concepts relating to the methodology, as well as confirmation regarding the relevance and accuracy of the representation of industry phenomena within this thesis work, they did not apply to the final methodology. Instead, peer-review sessions were held with employees of Secure State Cyber.

Field Experiment

A field experiment is a research method in which the interaction between some test subject (such as a computer program or system) and a naturally occurring environment is studied, to investigate the result. It is differentiated from ordinary field research by the introduction of this external entity. While the natural environment may be artificially limited, a field experiment is distinctly different from a laboratory test, as the researcher does not have nearly the same level of control over the natural environment. The primary reason for using such a methodology is simply that the results can generally be trusted to match an actual release of the subject fairly well, giving the results implicit external validity [58]. Still, the lack of control and high level of randomization mean that the results may be contaminated by situations beyond the control of the researchers. True repeatability is almost impossible to guarantee.

Several of the environments used for testing could be considered natural environments, such as OrangeHRM, which provided a much different kind of testing from the specifically constructed ones.


6 Results

6.1 Literature Study

The results of the literature study are detailed in chapters 2, 3 and 4 of this Master’s thesis.

6.2 Requirements

Scanners and tools needed to be usable with limited knowledge of the specific vulnerabilities, and/or have easily accessible documentation. They also needed to provide a report with enough detail to easily understand what type of vulnerability had been found and where exactly it had been encountered, using well-established terms, such as the ones used in the OWASP top ten list. In addition, automated scanners had to be reasonably effective at discovering the vulnerabilities that were mentioned in their respective documentation, without considerable amounts of false positives being reported.

The Federal Risk and Authorization Management Program (FedRAMP) has been considered to cohere with the requirements established for this paper’s comprehensive methodology. Whenever applicable, such as when a tool includes a crawler or active scanner, it needs to be able to perform authenticated scans of the entirety of the application. Furthermore, the scanners should provide a summary of their findings in order to make it possible to measure each scanner’s effectiveness.

Given that no tool can be expected to work flawlessly or find every potential flaw in an application, effectiveness was largely measured by comparison between tools, depending on the range and depth of their results regarding particular vulnerabilities. For the complete methodologies, it was concluded that any given solution must include at least one measure against each of the vulnerability types from the OWASP top 10 to be considered effective.

In summary, the comprehensive methodology had to contain only tools which fulfill the below requirements (when applicable), and provide guidance regarding each of the vulnerabilities presented in the latest OWASP top 10 (November 2017).

• Authenticated Scanning
Automated tools which include crawlers and active scanners need to be able to perform these tasks in an authenticated state.

• Ease-of-use
Tools need to either be intuitively designed to a degree that an inexperienced user can apply them, or have easily accessible and understandable documentation.

• Tool reports
Tool reports need to present their findings with sufficient detail and accuracy.

• Tool Effectiveness
Tools need to have been confirmed as functional at detecting the vulnerabilities for which they are designed.

6.3 Selection, Testing and Evaluation of Tools and Features

The results of the testing against environments with known and documented vulnerabilities are included in table form in appendix A, but will be described in greater detail below, categorized by tool.

VEGA

The VEGA tool was able to detect many types of vulnerabilities using its automated features, especially various types of XSS, with decent accuracy. While false positives were reported, it was deemed to be within acceptable boundaries, given that even those instances were reported based on the acceptance of special characters as input. SQL injection and XXE vulnerabilities were also found without much issue. The inclusion of a request editor and proxy also meant that the tool could be used to ease the manual exploitation of many other vulnerabilities. Authenticated scans were limited in applicability, being difficult to configure and occasionally impossible due to a reported "high complexity" of application authentication. The report was displayed with clear priority levels and included specific URLs and the requests used to find the vulnerabilities, when applicable.

Zed Attack Proxy

ZAP was the ultimate tool when performing crawling, scanning, and fuzzing. Its ability to manually intercept requests between a client and a server made it easy to verify the findings of the active scanners. ZAP was able to do fully automated credential scans on all test environments, either with a session cookie or user credentials. Further, the summary provided sufficient detail, with distinct severity levels, in order to measure its effectiveness against the environments’ known vulnerabilities.

Arachni

The Arachni tool was highly effective, but somewhat difficult to work with. It was not uncommon to see the run-time pass 24 hours, nor was it intuitive to configure. Still, the report was of high quality, making very clear distinctions between levels of criticality, and providing some recommendations regarding mitigation.

Sqlmap

While specifically targeting only one type of vulnerability, sqlmap proved to be very capable within its particular scope. The inclusion of actual exploitation of the found flaws, including database display and password cracking, allowed for easy and effective verification, eliminating the possibility of false positives entirely. While the tool is command-line based, it allows for extensive customization, and documentation is easily available.


Table 6.1: Tools and their respective fulfillment of requirements
(columns: Authenticated Scanning, Ease-of-use, Tool reports, Tool Effectiveness)

Vega          X X X X
ZAP           X X X X
Arachni       X X X
Sqlmap        X X X
IronWasp      X X
Wapiti        X X
BurpSuite CE  X X X
Grabber       X
w3af          X
Webscarab     X
Skipfish      X

Figure 6.1: Comprehensive Methodology

IronWASP

While similar to Vega and ZAP, IronWASP was found to be considerably more complex, containing several features the aforementioned tools did not. However, actual results were comparatively lacking, and active scans had much longer run-times, even when manually guided.

Wapiti

As a completely automated tool, Wapiti employs a crawler and an active scanner, reportedly capable of detecting a variety of vulnerabilities, including XSS and SQLi. When tested, though, it was unable to detect the easily exploited SQLi flaws of Wackopicko and did not find any of the flaws in DVWA, as that environment requires authentication to access any of the vulnerabilities.

BurpSuite Community Edition (CE)

BurpSuite CE only provided essential manual tools. It helped to verify manually found vulnerabilities with the intercepting proxy and to map out a custom site map of the targeted web application.

Non-viable Tools

After initial testing, Grabber, w3af, WebScarab and Skipfish were found not to fulfill enough of the requirements to be included in the methodology, as shown in table 6.1.

6.4 Comprehensive Methodology

This section presents the methodology, as depicted in figure 6.1, in full. Specific motivations for each step will be detailed in their respective subsections below.


Manual Review

In order to start off the testing, one must get a clear and comprehensive overview of the application. The first step of pre-testing investigation should be to answer the below questions. Some are easy to answer by oneself or by simply looking through any documentation that you have been given access to. Other questions require the aid of someone related to the web application being tested. It is important to note that different questions may require different sets of knowledge to answer, and different decisions may require different levels of authority to make. As such, always try to make sure to speak to the right person at the right time, be it a manager, an engineer or a legal representative. Below, a list of general questions that can aid with the manual review is presented.

Further, it is very important to be aware of the possibility for both customer knowledge and documentation to be faulty or incomplete. The results of the manual review should be used as a foundation for the testing, not as a complete guide to be followed blindly.

1. What kind of web application are you dealing with? Is it an informative web application or an interactive web application?

The incentive behind this general question is to get the user of this comprehensive methodology to understand the main purpose of the web application. As defined in this context, an informative web application does not have authentication, access control or any database. Therefore, there is no need to spend time on checking for broken authentication, broken access control, sensitive data exposure or injection.

2. What tools are available? Scanners, fuzzers, specialized tools? Will the customer supply any software to be incorporated?

In cases where the user needs to conduct tests in-house, tools could be limited. Different tools are good at different things, and the tool arsenal should be verified beforehand. Since this algorithm is not strictly connected to any particular set of tools or methods, this question also serves to define the potential tools that could be included in the solution.

3. What is the scope of the testing? What kind of preferences and instructions has the customer supplied you with?

General questions are here to help the user of this comprehensive methodology to understand the great advantage of knowing the testing environments inside and out before actually conducting any tests. This particular question reminds the user to get the customer to sufficiently explain the scope of the testing as well as provide existing documentation.

4. Who are you dealing with? Is your customer military, governmental or commercial?
Different laws, threats, and scopes are applicable to different customer segments. Such external factors have to be known in order to determine at which level of security the testing should be conducted. The organization will probably provide details in such matters, based on which particular circumstances can be discussed.

5. Is there any prioritization order? Is there any particular attack or functionality that is especially important to consider?

If preferences regarding the priority of various application components exist, it is better to know this before conducting tests, in order to spend the testing time wisely. It is recommended to prioritize in accordance with the OWASP top 10, unless otherwise instructed by the customer, especially in the case of scope limitations.

6. What knowledge/experience/competence is available within the customer’s organization?


What professional title/position does the primary customer contact have? Is it a technician or a manager, and what insight do they have? Is the contact eligible to provide sufficiently detailed answers and documentation, and in a position to make executive decisions? In cases where limited knowledge/experience/competence is provided, the user has to make the best of the situation and manually check for certain vulnerability categories. It may also be necessary to request additional contacts within particular fields.

7. Is there sensitive data being stored within the application’s database? If so, what specific kind of data?

Regardless of the kind of web application, it is not always necessary for an application to have a database. If there is no database, or if there is a database which is not storing sensitive data, there is no need to spend time on finding sensitive data exposure. If there is such data, the level of sensitivity must be considered.

8. Is there any logging and monitoring on the web application?
If no logging and monitoring on the web application can be found, the only way to mitigate this is by having a dialog with the system owner. On the other hand, if there is logging and monitoring on the web application, it can only be reviewed manually, by verifying it against the countermeasures and detection section of the literature study (section 2.2).

9. Is there any deserialization of user-supplied objects?
Whether XML snippets, Java objects, or any other potentially deserializable user input, any use of deserialization needs to be identified and scrutinized. As mentioned in section 2.2, such input should not be allowed to be deserialized at all, if possible, and must otherwise be manually examined and made secure.
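As a minimal illustration of the principle behind question 9 (in JavaScript rather than the XML/Java contexts mentioned above), a parser that treats user input strictly as data cannot be made to execute it:

```javascript
// Illustrative only: JSON.parse never executes code found in the payload.
// A value that looks like a script simply becomes an inert string, in
// contrast to eval() or deserializers that reconstruct arbitrary object
// types, which is exactly the risk behind insecure deserialization.
const userSupplied = '{"displayName": "alert(document.cookie)"}';

const data = JSON.parse(userSupplied);
console.log(typeof data.displayName); // a plain string; nothing has run
```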

Once the above has been sufficiently covered, one must perform a comprehensive mapping of the application in question. Preferably, a good idea of the layout has already been established through documentation or source code analysis, but no matter the extent of knowledge, a practical web crawl, either manual or automatic, is greatly beneficial.

Tools and Method Selection

Once the manual review has been performed, there should be enough information to figure out what vulnerabilities are likely to be present in the application, and what actions should be prioritized. To aid with this, an algorithm was constructed as part of this Master’s thesis. This algorithm requires the questions from the manual review to be answered, based on which it will recommend a particular set of actions. While the algorithm should be helpful, the rest of the methodology does not strictly require its use. As the process is iterative, and the results of testing may change one’s understanding of the application, this step should also be returned to several times to expand the list of interesting tools and methods.

Vulnerability Testing and Verification

The specific nature and procedures of vulnerability tests vary immensely from situation to situation. Each iteration of testing will involve different tools and methods, but some generalizations can be made depending on the results from the manual review.

Firstly, question 1 from the manual review has a major impact on the proceedings. An informative web page can be presumed to be safe from a large number of attacks, given the limited interactivity. A basic scan with some automatic crawler and scanner will still be worth performing, but many of the more specialized technical tests are unlikely to be worthwhile. Since the only real user input is the URL itself, it may be worth attempting to fuzz it, to look for potential public access to back-end features or administrative details, however limited those may be. Otherwise, for web applications that are interactive, the recommended method is to start out with general scanning, using primarily automatic tools with active scanners, such as ZAP or Vega. If possible, the use of multiple such tools is highly beneficial, as it allows for substantial inter-tool verification of vulnerabilities. If several tools provide matching results, it is very likely that the vulnerability is actually present, and it should be easily verifiable by manual means, presuming personal understanding of whichever particular vulnerability is being reported. Such cases should of course take priority, with potential vulnerabilities reported only by one scanner being of limited immediate interest. Depending on personal ability and previous experience, different vulnerabilities will be more or less difficult to verify. For simpler and common vulnerabilities, immediate verification should be performed, while more unfamiliar or inherently more complex vulnerabilities may need to await further investigation at a later time.
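The inter-tool verification step can be sketched as follows; the finding format used here is an assumption for illustration, not the output format of any actual scanner:

```javascript
// Illustrative sketch: cross-referencing findings from two scanners so
// that matching results are verified first. Finding objects are invented
// examples, not real tool output.
function crossReference(toolA, toolB) {
  const key = (f) => `${f.type}@${f.url}`;
  const seenByB = new Set(toolB.map(key));
  return {
    confirmed: toolA.filter((f) => seenByB.has(key(f))),   // reported by both
    unconfirmed: toolA.filter((f) => !seenByB.has(key(f))), // only one tool
  };
}

const vegaFindings = [{ type: "XSS", url: "/search" },
                      { type: "SQLi", url: "/login" }];
const zapFindings  = [{ type: "XSS", url: "/search" }];

const { confirmed, unconfirmed } = crossReference(vegaFindings, zapFindings);
// "XSS@/search" is confirmed by both tools and should be verified first;
// "SQLi@/login" was reported by one tool only and is of lower priority.
```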

Similarly, question 2 affects what specific scans and techniques can be applied. While the comprehensive methodology takes into consideration the tools described in chapter 4, the algorithm is highly modifiable and expandable with regards to potential tools.

During testing, care must be taken to adhere to whatever specification the customer has given with regards to questions 3, 4 and 5. Given the wide array of potential circumstances, it is impossible to apply general rules, and every decision must be based on the information supplied by the customer. However, unless these circumstances prevent it, it is recommended to focus on the vulnerabilities in the order by which they are listed in the OWASP top 10.

When the initial scans have been run, the automatic tools will no doubt have reported some vulnerabilities, particularly ones of a more technical nature, but a large amount of potential issues must be handled manually. This should be done in parallel with the verification of the low-priority findings from the automatic tools. There are some vulnerabilities on the OWASP top 10 which are almost purely noticeable through analysis of documentation. During the manual review, some potential vulnerabilities should already have been confirmed, if present. This includes Insufficient Logging and Monitoring, many aspects of Security Misconfiguration, and Using Components with Known Vulnerabilities. Some security risks can also have been shown by the manual review to be very unlikely, such as insecure deserialization if there is no deliberate deserialization present in the application to begin with (although this does not exclude the possibility entirely). The same goes for Broken Authentication, Broken Access Control and Injection flaws, all of which require different features to be present within the application. Depending on questions 6, 7, 8 and 9, aid can be requested from whatever experts are available from the customer regarding specific contexts, and specific vulnerabilities can be identified, prioritized and discussed in the following risk analysis and reporting steps.

Risk Analysis

Prior to the reporting, a risk analysis needs to be done to create a prioritization order of the findings, as a recommendation regarding which findings the customer should mitigate first. Since the mitigation itself is not part of the penetration testing, the final decisions will need to be up to the customer, although supplying recommendations would generally be considered to be within the scope of this methodology. Firstly, the risk of exploitation must be estimated. By identifying potential attackers, one may estimate how likely it is for a flaw to be discovered by that same attacker, and how easy it would be for them to exploit it. Secondly, the potential damages must be calculated. Such damages include direct monetary losses and potential reputation risk. If a flaw could constitute non-compliance with established law, this must also be taken into account.

Notably, a penetration tester may not have sufficient knowledge to estimate some of these factors, in which case it is important to get assistance from professionals within the specific fields. This could include legal counsel, marketing professionals or economists. If possible, these should be part of the web application owner’s organization, to maximize relevance to their particular situation.

Given the potential for a considerable number of findings to analyze, it is important that the risk analysis is given enough time to ensure the quality of the assessment. Since mitigation can be costly, not all vulnerabilities will necessarily be handled, making prioritization very important. It is crucial to consider the costs of implementing mitigation techniques versus the potential cost of an exploitation. A single flaw in the application may cause a multitude of vulnerabilities, in which case it should be given high priority.
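A common way to operationalize this kind of prioritization is a simple likelihood-times-impact score. The following sketch uses an assumed 1-5 scale and invented example findings, purely for illustration:

```javascript
// Illustrative sketch of likelihood-times-impact risk prioritization.
// The 1-5 scale and the findings themselves are example assumptions,
// not results from the thesis's testing.
function prioritize(findings) {
  return findings
    .map((f) => ({ ...f, risk: f.likelihood * f.impact })) // score each finding
    .sort((a, b) => b.risk - a.risk);                       // highest risk first
}

const ranked = prioritize([
  { name: "Verbose error pages",          likelihood: 4, impact: 2 }, // risk 8
  { name: "SQL injection in login form",  likelihood: 3, impact: 5 }, // risk 15
  { name: "Missing security headers",     likelihood: 2, impact: 2 }, // risk 4
]);
// The SQL injection finding ends up first in the mitigation recommendation.
```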

Reporting

The final stage of the comprehensive methodology is to write a report to the customer. The contents of the report should be sorted in risk severity order, together with a notation of how important it is to mitigate each particular issue. All content in the report has to be verified and should include suitable mitigation techniques.

6.5 Design of Interactive Algorithm

Based on the comprehensive methodology, a JavaScript-based algorithm was developed in order to provide a visible and understandable prototype. It was built upon the manual review questions, in order to accumulate enough detail to optimize and automate the choice of tools and methods, derived from the likelihood of the occurrence of specific vulnerabilities.

6.6 Verification of Comprehensive Methodology

While methods for verifying the quality and effectiveness of the comprehensive methodology have been identified, the scope of this thesis work did not allow for sufficient application of these methods. To the extent possible, the intended or performed methods are described below.

Peer-Review

For the peer-review process, individuals from Secure State Cyber were chosen.

Jakob Hägle, a Certified Information Systems Security Professional (CISSP), noted that the comprehensive methodology was a good starting point for inexperienced ethical hackers, and that it appeared to be generally applicable to web application penetration testing. He further agreed with the importance of being open to specifications from the customer, given that the level of interest any particular vulnerability deserves is highly dependent on a vast number of circumstances. In many cases, he speculated, a web application and its robustness may be of very little importance in the larger scope of the customer’s business.

Security Advisor and ISTQB-certified Tester Jonas Molin agreed thoroughly with the industry need for knowledge and experience transferal. He further noted the importance of not blindly trusting the information given by the owners of web applications, as they may well provide incorrect information. Reasons for this can include simple misunderstandings or lack of insight, but may also be a result of professional pride. Jonas further found the risk analysis to be well constructed, noting the necessity of allowing the customer to make any decisions regarding actual mitigation.

This validation method could have been made much more substantial with the inclusion of input from additional peers, something which was not possible as part of this thesis work due to time constraints.


Empirical Validation

For any data to be considered empirical, it would have to be gathered by persons not involved with the construction of the methodology, and not include results of the initial tool testing. Such arrangements were not possible to make within the time restrictions of this thesis work.

Field Experiment

As most of the environments available to test against were designed with deliberate vulnerabilities, an effort was made to run tests against environments that were not designed in this way. Since these applications were entirely natural, these tests could not be verified against any list of known vulnerabilities. While limited in scope, field experiments of the comprehensive methodology were run against Bila Nu, Presentshoppen and OrangeHRM, none of which had any intentional flaws.

Bila Nu & Presentshoppen

Since these two applications are single-page applications (SPAs), they use a lot of dialog boxes for both display and user interaction. Dialog boxes are by nature hard for an automatic tool to evaluate, since they create a disconnect between the user and the actual web page, rather than the kind of direct requests most scanners and crawlers are designed to take advantage of.

Given the frameworks used to develop these applications, particularly SQLAlchemy, security against certain attacks was already in place. The biggest advantage was the limitation of SQLi exploits, to the point where specialized tools like sqlmap were unable to find any vulnerabilities.
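SQLAlchemy is a Python ORM, but the principle that blunted the SQLi attempts, parameterization, is language-agnostic. The following JavaScript sketch illustrates the difference conceptually; no real database driver is involved, and the query shapes are invented examples:

```javascript
// Illustrative only: why parameterized queries blunt SQL injection.
// With string concatenation, the payload becomes part of the SQL text;
// with placeholders (what ORMs like SQLAlchemy emit), the SQL text is
// fixed and the payload travels separately as a bound value.
const payload = "' OR '1'='1";

// Vulnerable pattern: the quote characters break out of the string literal.
const concatenated = "SELECT * FROM users WHERE name = '" + payload + "'";

// Parameterized pattern: the SQL text never changes, whatever the input.
const parameterized = { sql: "SELECT * FROM users WHERE name = ?",
                        params: [payload] };

console.log(concatenated.includes("OR '1'='1'")); // true: injected into SQL
console.log(parameterized.sql.includes("OR"));    // false: SQL text unchanged
```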

OrangeHRM

Due to OrangeHRM’s status as an HR management tool, it has a considerable number of access points to the database, but not much else. The results of the field experiment for this application are included in appendix C.


7 Discussion

For each aspect of the work performed as part of this thesis, the results and methods used are discussed below.

7.1 Results

The work performed was successful with regards to the research questions and goals set up at the start of the thesis. Each specific matter is discussed in its respective section below.

Literature Study and Sources

The literature study resulted in a good amount of reliable data from a broad set of sources, both academic and industrial. Due to the prevalence of the many projects within OWASP, and particularly the Top 10 vulnerability list [5], the vast majority of sources used specific and well-established terms. This allowed for extensive overlap between different studies and projects, as far as terminology was concerned.

Interviews

The interviews [2], [9], [45] that were performed in parallel with the literature study provided considerable insight into the real-life situation of penetration testers and the methods they apply. They were overwhelmingly supportive of the information that had been gathered up to that point, and internally consistent with each other to an almost complete degree. As such, they served as excellent sources of credibility for the results of the literature study, in addition to the new information they provided. The interviewees were also positive to the general idea behind the thesis work, particularly the concept of standardizing basic penetration testing.

Requirements

The requirements constructed and used for the quality verification of both the tools and the comprehensive methodology were founded in practical necessities and established industry baselines, such as the ones by FedRAMP [43].


The delimitations made regarding scope meant that basic performance and comparative effectiveness regarding specific vulnerabilities were the most important. Had the scope of the thesis been larger, allowing for the inclusion of more tools and methods, it would have become necessary to create stricter and more formal requirements with regards to the above metrics. However, with the current scope, the end result is considered sufficient, providing a solid foundation for potential expansion.

Apart from the effectiveness requirements, which were completely necessary for the purposes of the methodology, practical circumstances were also chosen as a deciding factor for which tools to include. Given that the target users of the solution were professionals of limited experience, ease-of-use was given more consideration than would have been the case if another, more experienced, demographic had been selected. However, ease-of-use implies very little regarding the effectiveness of a tool, which motivated the separation and equal consideration of both types of requirements.

Selection, Testing and Evaluation of Tools and Features

One of the most interesting observations made during the testing was the difficulty many active scanners and crawlers had in accessing authenticated pages, or even detecting many interactive elements. While many crawlers had functionality to perform authenticated scans, given that they were supplied with authentication to begin with, the effectiveness of these functionalities was found to vary greatly from crawler to crawler and application to application. In general, the amount of manual work required to point even the more sophisticated scanners towards vulnerable components of the test environments was surprisingly high, pointing towards a need for a fair bit of specific application knowledge when using them. There is also the issue of active scanners being unable to report the extent to which they traversed the entirety of any given application, once more requiring manual verification of coverage levels.

In addition, the results were often questionable. Fairly often, the active scanners were able to detect the possibility of various exploits, but their verification processes varied greatly in both scope and accuracy. For example, the XSS detection within the Vega active scanner verified the potential for XSS attacks by attempting to insert various special characters, but never by implementing any actual scripts. While this serves well to imply potential vulnerabilities, it does little to confirm their actual existence, as other security measures may be in place to prevent full-fledged attacks. For comparison, sqlmap allows for the completion of SQLi attacks, including the detection and cracking of passwords, leaving no room for interpretation as to the existence of an actual security flaw.
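The gap between accepting special characters and actually executing a script can be shown with a minimal output-encoding sketch. This is illustrative only, not Vega's internal logic:

```javascript
// Illustrative sketch: an application that HTML-encodes reflected input
// will "accept" special characters (which a naive probe flags as XSS),
// yet an actual script payload is rendered harmless.
function htmlEncode(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
          .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

const probe = "<script>alert(1)</script>";
const reflected = htmlEncode(probe);

// The special characters were accepted and reflected back...
console.log(reflected);
// ...but no executable <script> tag survives, so a real exploit fails.
console.log(reflected.includes("<script>")); // false
```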

Interestingly, during scanning, many tools left considerable traces in the application. Primarily, this consisted of tools creating large numbers of new users and/or leaving comments and posts wherever possible, sometimes resulting in the input showing up as new URL paths in later scans. While not catastrophic, this had to be taken into consideration, given that the environment could not be considered completely static when scans were run concurrently or in succession without applications being reset. It also points to potential issues if testing is performed in live environments.

Unsurprisingly, some types of vulnerabilities were much easier for tools to detect than others. Straightforward insertion flaws of various types were by far the most commonly detected by scanners, and easily verified by either using fuzzers or simple manual attacks. For many of the slightly more complex attacks, such as stored SQLi and multi-step XSS attacks, most tools were completely unable to detect the problem. This is likely due to the lack of a proper fail state, meaning that the tools may have successfully exploited the flaw, but did not recognize or report it. Some vulnerabilities that could be considered somewhere between logical and technical, such as common user credentials or URL paths, were often cracked with little manual effort using fuzzers, although run-time could become an issue. It is worth noting that while the run-time for fuzzing a username/password pair was somewhat impractical during this thesis, and could pose some minor issues under time-limited contracts, it would cause very little concern for actual hackers, who are rarely under any considerable pressure regarding time. However, in practice, fuzzing input into web applications is easily prevented by limiting how many attempts are allowed in rapid succession, a mitigation technique already in place for many applications, especially for authentication input.
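The rate-limiting mitigation mentioned above can be sketched as a per-client attempt counter over a sliding time window. This is a minimal illustration under invented parameters (5 attempts per 60 seconds), not a recommendation of specific limits or a description of any particular application's implementation.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class LoginRateLimiter:
    """Sliding-window attempt counter. The limits (5 attempts per 60 s)
    are invented for illustration, not taken from any real application."""

    def __init__(self, max_attempts: int = 5, window_seconds: float = 60.0):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self._attempts = defaultdict(deque)  # client id -> attempt timestamps

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        """Return True if this authentication attempt may proceed."""
        now = time.monotonic() if now is None else now
        q = self._attempts[client_id]
        # Forget attempts that have slid out of the time window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # a fuzzer firing in rapid succession is cut off here
        q.append(now)
        return True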

Comprehensive Methodology

While the proposed comprehensive methodology is rather general, there is a noticeable lack of established, general methods for complete vulnerability detection and mitigation within the industry. Despite not providing any new specific methods or tools, the methodology delivers a solid foundation for inexperienced ethical hackers, providing several inherent benefits.

Primarily, it describes a structured and understandable sequence of steps to perform, from the very beginning of the information gathering phase all the way to the reporting of the findings. This way, the process is kept simple and high in quality, and should make it possible to raise the security level of any evaluated web application.

It was based on the InfoSec Institute methodology rather than any other alternative primarily due to its limited length and clear distinction between activities. For this thesis work, the information gathering phase was changed to a comprehensive manual review, complete with an algorithm that leads into the vulnerability testing phase. The testing has been expanded and made explicitly iterative. The risk analysis and reporting stages have in turn been designed to be distinctly customer-oriented.

Design of Interactive Algorithm

The interactive algorithm, while sufficient for the scope and purpose of this thesis, remains highly expandable. Notably, the exclusion of commercial tools leaves much to be desired, although the delimitation remains sound. With the field of web security remaining active, any algorithm will, by its very nature, be subject to degradation over time, and require continuous upkeep.

Representing the process of penetration testing and vulnerability scanning as a short questionnaire could be considered an over-simplification. While a valid point, the purpose of this representation is to provide a simple, baseline methodology, not an extensive and complete technical guide.
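The questionnaire-style representation can be sketched as a short list of yes/no questions mapped to suggested tool categories. The questions and suggestions below are invented examples in the spirit of the interactive algorithm, not its actual content.

```python
# A toy decision questionnaire in the spirit of the interactive algorithm;
# the questions and suggested tool categories are invented examples,
# not the actual content of the thesis's algorithm.
QUESTIONS = [
    ("Does the application build database queries from user input?", "SQLi tool (e.g. sqlmap)"),
    ("Is user input rendered back into pages?", "XSS-capable scanner (e.g. ZAP)"),
    ("Is a login form exposed to unauthenticated visitors?", "credential fuzzer"),
]

def recommend(answers):
    """Map a list of yes/no answers onto suggested tool categories."""
    return [tool for (question, tool), yes in zip(QUESTIONS, answers) if yes]
```

The point of the representation is exactly this simplicity: an inexperienced tester answers a handful of questions and receives a baseline starting point, while anything beyond the baseline remains manual work.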

Verification of Comprehensive Methodology

As there was no opportunity to perform tests against actual customers, the results of the field experiments were limited to largely functional aspects, regarding the effectiveness of the penetration testing and the applicability of the selection process, risk analysis and reporting. Due to the personal insight into the construction of Bila Nu and Presentshoppen, it was possible to confirm with great certainty that, simply by using modern libraries and frameworks, a web application under development is likely to be considerably secure against the great majority of simple technical attacks, even without expressly taking security into account. As a fitting counterargument, the example of the older OrangeHRM made clear that a web application that is secure at one point in time is unlikely to remain so. Because of this, the methodology could be shown to be fully capable of continuously verifying that an application remains secure, so long as the technical content of the methodology itself is kept current.

The peer-review process, though limited to security professionals working for Secure State Cyber, resulted in a positive response. In combination with the equally positive response to the basic concept, received during the earlier interviews [2], [9], [45], it shows an interest from professional parties which strongly suggests a potential demand for methodologies such as this.


7.2 Method

The methods used to achieve the goals discussed above are themselves discussed below.

Literature Study and Sources

While care was taken to use peer-reviewed sources wherever possible, many of the subjects of the literature study required extensive use of industry sources, largely due to the fast pace with which the industry is evolving. In such cases, though, there were often several sources available with which to corroborate any given claim. Notably, one of the central pieces of the literature study, the OWASP Top 10, is constructed through a massive industry peer-review process¹, lending it considerable credibility despite its industry origins. As with any literature study in an active field, replicability is somewhat of a concern, especially considering the fairly broad scope of this particular thesis. While general points are unlikely to become incorrect within the foreseeable future, the field is being continuously researched, and new studies will become available.

Interviews

The way the interviews are presented in this thesis, as paraphrased appendices, was a necessity caused by the structure of the interviews. Firstly, since they were performed in Swedish, they needed to be translated, ruling out pure transcripts. Secondly, during the interviews the respondents were allowed to speak freely and expand on issues not present in the questionnaire. As such, many specific topics were returned to at different points in the interviews, requiring paraphrasing to ensure that the representation of the interviews had an understandable flow and structure. Since this allowed for various faults, primarily mistranslations and misinterpretations, the interview notes were sent to the respective respondents for confirmation and comment, mitigating this issue and ensuring adequate accuracy.

Requirements

As the intended user of the methodology was of similar experience and education as the authors of this thesis, most of the requirements that did not directly concern effectiveness were constructed based on the experiences during test executions. While this makes them somewhat less structured and more informal than would have been ideal, the scope and focus of this thesis did not allow for extensive research or external tests within the particular field of requirement construction.

Selection, Testing and Evaluation of Tools and Features

When documenting the results of the tests, a number of decisions had to be made regarding their representation. One issue was to determine how close an automatic tool needed to get to identifying the actual issue in order to be considered successful for any particular vulnerability. The final decision, to consider it successful if it marked the correct URL with the correct general type of attack, means that some of the results marked as correct may be considered to border on being false positives. Still, it was determined that such reporting would enable manual penetration testing that is likely to detect the actual attack potential. For manual assistance, credit was given so long as the tool was determined to be of any assistance when detecting or executing any given exploit. This is admittedly somewhat problematic, as it marks any tools that contain proxies or request editors as capable of determining a wide variety of exploits, even those that could be performed without the use of any tools.
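The success criterion described here, correct URL plus correct general attack type, can be expressed as a small matching function. The label-to-category mapping and the example entries are invented for this sketch and are not the evaluation data of the thesis.

```python
# Hypothetical normalization from specific scanner labels to the general
# attack categories used when grading results; the mapping and the example
# entries below are invented for this sketch.
GENERAL_CATEGORY = {
    "SQL Injection": "SQLi",
    "Blind SQL Injection": "SQLi",
    "Reflected XSS": "XSS",
    "Stored XSS": "XSS",
}

def is_successful(finding: dict, documented: dict) -> bool:
    """Credit a finding that names the correct URL and the correct
    *general* type of attack, even if the specific label differs."""
    general = GENERAL_CATEGORY.get(finding["type"], finding["type"])
    return finding["url"] == documented["url"] and general == documented["category"]

documented = {"url": "/comment.php", "category": "SQLi"}
findings = [
    {"url": "/comment.php", "type": "Blind SQL Injection"},  # credited
    {"url": "/comment.php", "type": "Stored XSS"},           # wrong category
    {"url": "/index.php", "type": "SQL Injection"},          # wrong URL
]
credited = [f for f in findings if is_successful(f, documented)]
```

Under this criterion only the first finding is credited, which captures both the strength of the decision (near-misses still guide manual testing to the right place) and its weakness (a credited finding is not necessarily a verified exploit).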

¹ OWASP Top 10 Issues: https://github.com/OWASP/Top10/issues


All of the tests were performed over a network where the host machine and the machines performing the scans and attacks co-existed. In real-world black-box hacking situations, this is rarely the case, and additional circumstances become relevant. Unfortunately, performing the tests over the Internet was not feasible, as it would have introduced practical issues, as well as opened up the host machine to potential external threats.

In addition, there is no real upper limit to the potential extent of experimentation of this type. The scope with which it was carried out was decided based on the available subjects and the general scope of the thesis, but more testing could always be performed.

Comprehensive Methodology

The intended user of the methodology was defined quite early as an educated security analyst with limited or no professional experience of penetration testing, but with a decent understanding of a few core concepts. This idea had its origin in that such a user would be of similar ability as the authors of this Master's thesis, but was further cemented by the interviews [2], [9], [45] held during the early parts of the thesis. Comments regarding the lack of established methodology, and the difficulty of gaining experience in the early stages of a professional penetration testing career, lent credence to the choice of focusing on this particular demographic.

Design of Interactive Algorithm

The method by which the algorithm was designed was limited by the availability of well-documented tools and environments, as well as by the vulnerabilities that were chosen for consideration. From the literature study, it became apparent that most open-source tools and environments are no longer being updated, implying that the market may have moved on to almost exclusively commercial tools. In addition to this, individual vulnerabilities will, over time, shift in their level of criticality to the industry, further increasing the need for the algorithm to be kept updated.

Verification of Comprehensive Methodology

While the interviewees had all been positive to the idea of the complete methodology, they were not involved in its peer-review verification process. Instead, systems security professionals from Secure State Cyber involved with the thesis were used. The limited scale of this peer-review process is unfortunate, and had there been opportunity for it, it would have been expanded.

As field experiments with real-world customers were not possible within the scope of the thesis, the experiments had to be run against other real environments. While the lack of external feedback was unfortunate, this required and allowed for a different mindset and approach, providing different types of results and conclusions. Still, the method by which they were performed allowed for the construction of the example procedure shown in Appendix C, which serves well to visualize a potential application of the methodology.

7.3 The work in a wider context

Existing Methodology

In 2001, a patent was claimed regarding a method for designing and constructing a System for Determining Web Application Vulnerabilities [59]. The patent describes a web crawler that obtains the application interface structure, and a detection process that produces a list of the web application's vulnerabilities. Furthermore, it contains a mutation process which corresponds to the potential attack vectors that could be used to exploit the vulnerabilities in the aforementioned list. Lastly, it describes a report stage, which generates a report recommending fixes and other relevant pieces of advice.

Stuttard et al., the authors of The Web Application Hacker's Handbook [60], have published a technical methodology for how to attack a web application. They arrange the described techniques into a step-by-step process. The methodology is divided into a recon and analysis part, followed by four distinct vulnerability test areas. The described techniques are grouped under the vulnerability test areas, which are: application logic, access handling, input handling, and application hosting. Lastly, miscellaneous checks and information leakage tests are performed. The general concept, as with the methodology of this master's thesis, is to use the methodology as a guide and not follow it too strictly.

Ethical Aspect: The potential for malicious use

Concerning research in this area, precautions need to be taken considering the potential illegal application of the methodology. Since both attackers and defenders are highly concerned with the same types of tools and vulnerabilities, knowledge contributions in the area benefit both types of hackers. However, raising the baseline can be expected to benefit ethical hacking primarily, as it makes low-level attacks much harder to perform.

Societal Aspect: Increasing ease of penetration testing

While raising the baseline of security by creating a methodology that inexperienced people can use effectively is likely to increase the average quality of work, it may potentially also serve to reduce incentives for developing expertise. Still, there is little reason to expect this outcome, given that the field will remain highly technical and encourage personal knowledge no matter how standardized the basic concepts become.

Societal Aspect: Increasing Cybersecurity Threats

With the increase in potential attack vectors and access points as a result of the growing number of internet-connected devices, and the high likelihood of such products being produced as cheaply as possible for economic and marketing purposes, it is reasonable to expect security to be a lower than optimal priority in their development. Making low-level security easier to implement through standardization should serve to mitigate the potential effect of this development.


8 Conclusion

8.1 Main Contributions

From the onset, the primary purpose of this Master's thesis was to give the current and future generations of penetration testers a methodology, and a mind-set, for facing the huge problem of vulnerabilities in web applications, without requiring a great amount of personal experience in the field.

The business value of the presented methodology is quite clear, given the current state of the industry. The same few vulnerabilities remain on the OWASP Top 10 list [5] for years on end, despite many of them being fairly simple to mitigate. At the same time, developers are focused on creating functional applications, while managerial staff is interested in the applications being user-friendly and easy to market. Far too rarely is security a primary concern [9]. The reason for this is, at least in part, that web application security is a complex field, with individual expertise being a critical factor for many professional penetration testers. This makes it expensive to perform high-quality tests, further hurting the likelihood of sufficient tests being run at all.

The proposed methodology allows people of limited security knowledge to perform baseline vulnerability scanning and penetration testing, without the need for an expensive expert. Intruders will, in the vast majority of cases, be most interested in attacks which achieve their goals with minimal effort. By increasing the general security of web applications by only a small amount, the required effort to penetrate the system may be increased to the point that an attacker is unlikely to even attempt a black-box penetrative attack. When that potential attack vector is removed by the developers themselves, security experts are free to use their time to focus on security measures within the greater scope of the organization or system, rather than attempting to fortify its perimeter.

8.2 Research Question Summary

This thesis set out to answer the following four questions:

• What notable vulnerabilities are most critical for web applications?

• For what types of vulnerabilities are existing automated scanners suitable to use?


• What manual methods are needed to complement the automated services to develop a comprehensive methodology?

• How can the effectiveness of the methodology be verified?

The first research question was answered during the literature study. While there are countless specific vulnerabilities that could be present in any given web application, they can often be comfortably sorted into categories. The OWASP Top 10 provides not only such a categorization, but also prioritizes the categories based on a number of factors, including prevalence and potential impact. As noted during the interviews, it provides considerable coverage, being estimated to cover 95% of real-world vulnerabilities by one of the interviewees [2]. Being updated every few years also means that it adapts to changes in the industry, making it a very sound basis upon which to build a methodology.

Regarding questions two and three, there is a level of subjectivity and overlap involved, but some distinctions can be made. The literature study showed some established terminology regarding technical and logical vulnerabilities [13], [39], where automatic tools are more capable of handling the technical ones. It is generally practical to automate where possible, leaving manual methods to complement where the automatic tools stop being sufficiently dependable. Based on the literature study and the practical experimentation, the following distinctions were found to be generally applicable:

• Automatic: Injection, XSS, XXE

• Mixed: Broken Authentication, Sensitive Data Exposure, Broken Access Control, Security Misconfiguration, Insecure Deserialization.

• Manual: Insufficient Logging and Monitoring, Using Components with Known Vulnerabilities.

The fourth and final research question was answered during the verification of the comprehensive methodology, section 5.6. In this thesis, a combination of three verification methods was used: peer review, empirical validation and field experimentation. Each of these serves a different specific purpose, and has its own coverage area. When used in combination, they serve to verify that the methodology appears sound, to the extent possible within the given scope.

8.3 Future Work

While the methodology presented in this master's thesis may be considered fit for purpose at the time of writing, this is unlikely to remain the case for any considerable amount of time. The security situation is under constant development, with new technologies resulting in new possibilities for malicious actors to exploit. Not only does this mean that owners of web applications need to perform recurring testing, but it also means that any tools used need to be updated as well. Any future use of this kind of methodology is no exception. For it to be of any use at all, it must be kept continuously up to date, including the removal or introduction of tools and methods.

Even beyond such practical matters, this thesis leaves some areas unexplored. For example, the use of commercial tools has been ignored, for economic and scope-related reasons. Future work including, or even focusing on, such tools is highly likely to introduce new insights.

There is also the matter of the verification process, of which only some aspects could be performed as part of this thesis work. In particular, it would be highly beneficial for the credibility of the methodology to perform large-scale empirical validation by external parties. The peer-review sessions and field experiments that were performed would also benefit from expansion in the future.


A Appendix 1 - Vulnerability Scanner evaluation

In the below tables, "A" refers to Automatic, and "M" to Manual, where applicable. All environments are of the versions included in the OWASP BWA v1.2.

Table A.1: Initial testing

ID | Scanner | Environment | Comment
1 | Grabber | WackoPicko | Was easy to run, but further investigation against other environments is necessary.
2 | sqlmap | WackoPicko | Login page vulnerable to SQL injection. sqlmap seems easy to use, needs to be tested against other environments.
3 | sqlmap | DVWA | Found and read the database through SQLi, easily cracked the passwords. Seems to function well against weak protection.
4 | sqlmap | Bricks | Was able to inject against all 6 login pages, each with their own vulnerabilities.
5 | Vega | Bricks | Found many problems, including cleartext over HTTP, insecure cookies, XSS, and SQLi. XSS warnings seem likely to be false positives. SQLi is implied, but never tested. Did not find all SQLi points.
7 | Vega | OrangeHRM | Mainly notes issues with HTTP traffic: cookies and plaintext passwords. Seems to have logged in, but unclear whether the entirety of the application is being scanned.
8 | sqlmap | OrangeHRM | Might not actually be injectible, at least not the login screen.
9 | Grabber | Wordpress | Same as above. Found it difficult to generate a report.
10 | ZAP | Peruggia | Found URL to SQL injection.
11 | sqlmap | Peruggia | Was able to hack the URL that was found by ZAP in test 10.


Table A.2: WackoPicko

Documented Vulnerability | Comments | Vega ZAP Arachni Sqlmap IronWasp Wapiti BURPSuite CE
Reflected XSS | Vulnerable query parameter | A A A A A M
Stored XSS | Vulnerable comment field | A A A M
SessionID vulnerability | Session cookie value "admin-session", auto-incrementing | A A M M
Stored SQLi | Stored SQLi triggered by "similar user" functionality | A A M
Reflected SQLi | Vulnerable Username field | A A A M
Directory Traversal | Tag field allows traversal | A A A A
Multi-step Stored XSS | Comment field vulnerable, but requires a preview confirmation | M M M M
Forceful Browsing | High-quality photos available through URL without paying |
Command-line Injection | Name field vulnerable | A A A A
File Inclusion | File Inclusion | A A A A A M
Reflected XSS behind JavaScript | Value parameter vulnerable | M A A M M
Logic Flaw | Coupon code is repeatable | M
Reflected XSS behind Flash form | Value field vulnerable | M M M M
Weak username/password | admin/admin | M M M M

Table A.3: DVWA

Documented Vulnerability | Vega ZAP Arachni Sqlmap IronWasp Wapiti BURPSuite CE
Brute Force | M A M
Command Execution | M A M M
CSRF | M
File Inclusion | M A A M
SQL Injection | M A A M M M
Blind SQL Injection | M A A M M
Reflected XSS | M A A M M
Stored XSS | M A A M M

Page 54: Penetration testing for the in- experienced ethical hacker1217958/FULLTEXT01.pdfUpphovsrätt Detta dokument hålls tillgängligt på Internet – eller dess framtida ersättare –

Table A.4: Peruggia

Documented Vulnerability | Comment | Vega ZAP Arachni Sqlmap IronWasp Wapiti BURPSuite CE
Persistent XSS | Username | M M M
Local File Inclusion | Index.php | A A A
SQLi | Add User | A A M
SQLi | Delete User |
SQLi | Comment Picture | A A M M
Persistent XSS | Comment Picture | A A M
Reflected XSS | Pic ID | A A M
SQLi | Pic ID | A A M M
Remote File Inclusion | Learn.php | A A
Reflected XSS | Learn.php | A A A
SQLi | Login | A A M A M
Persistent XSS | Upload |
SQLi | Upload |


B Appendix 2 - Interviews with industry experts

Below, the respondents' statements and comments regarding the listed questions are presented in translated and paraphrased form. The text below has been verified in its current form by the respective respondents.

Questions for the first set of interviews

1. Describe a generic industry case (theoretical, if necessary).
2. Describe your general approach towards a web application penetration test. What steps are performed?
3a. Would you consider it possible to create a general/generic methodology, using open-source tools? What would you consider the main drawbacks?
3b. Are you aware of any such general methodology that is already established? If so, have you made use of them?
4. We have limited the scope to only include web applications; how much of your work is directed towards this type of application?
5. What tools/features do you use? How have you chosen them?
6. To what extent do you use automatic tools, as opposed to manual penetration testing? For what purposes/steps do you use them?
7. Do you have any experience with test environments or deliberately vulnerable applications? If so, how accurate did you find them to be?
8. Static or dynamic analysis, what is usually the main focus? Do you read source code at all as part of normal contractual work?
9. Do you normally work in live environments, or are you supplied with your own instances of the applications?
10. How relevant is the OWASP Top 10? Do you search for other types of vulnerabilities as a standard, or are they defined by the customer's scope? In general, how much of the scope is actually defined by the customer? Business value etc.?

Interview with Peter Österberg, 2018-02-19

Peter Österberg first started working with penetration testing in 1997, and has been heavily focused on web application and server penetration tests.


The Methodology

Peter's methodology has been developed through experience, and he likes to work by his own standard method. He notes that, at an earlier penetration testing position, the seniors did not like it when management tried to standardize a methodology; they were more into the creative parts of finding vulnerabilities. Despite this, Peter believes that a standardized methodology would be a good thing, and notes that universities have started ethical hacking courses where they try to standardize a penetration testing methodology. Of course, there are common practices in place already. For example, using several tools in parallel is generally considered a good idea. Just like penetration testers, tools are individually good at different things. Even free versions of commercial tools are often quite handy, in Peter's opinion.

Regarding false positives/negatives, Peter notes that, while it is unfortunate when things are missed, false positives occur much more often. He claims to always make a deliberate effort to verify all reported vulnerabilities manually, to avoid this problem. Often, in his experience, many of the found issues are related to a very small number of actual design flaws, and by determining those specific flaws, he is able to provide much more help to the customer than by simply pointing out the consequences.

The Customers

Peter states that, in the general case, when customers want help with securing their web application, they do not know what specific modules need to be tested. In Peter's experience, customers are often mostly concerned about the "outside" of the web application, i.e. the components that are visible to a non-authenticated user, despite this representing only a fraction of the potential issues. Further, he points out that they also have very vague ideas of what these specific components are. In order to start off and help the customer secure their web application, a tester needs to get a feel for the web application itself by browsing around and identifying the areas where it appears to be most vulnerable. Since the tester normally works according to a contract, he needs to optimize his time in order to help the customer as much as possible. Such concerns do not generally apply to a potential hacker, who could spend thousands of hours attempting to breach an application, if they so wished.

Penetration testing often occurs during final acceptance testing of new applications or features. Sometimes, though not as often, the customer wants recurring penetration tests at specific intervals. By a considerable margin, Peter observes, the organizations interested in information security and penetration testing are primarily those that are financial or governmental in nature.

While Peter prefers to do his testing against local copies of applications, he is often "forced" to test in live environments. Occasionally, he is given VPN access to specifically constructed test versions of the systems under test, although if the application contains data of considerable confidentiality, this is rarely an option. The primary benefit of live environments, he claims, is the possibility of testing additional security measures such as firewalls, which are often set up somewhat differently in test environments.

Peter is rarely given access to source code, but is often able to identify recurring problems, allowing him to pinpoint specific pieces of code that cause problems, with the help of the developers themselves.

The Industry

CVE identifiers are not very applicable to web applications, Peter claims. The CVE entries are often specific to certain pieces of software (such as Facebook, OS X, Apache, etc.) and are often very technical. Oftentimes, they therefore relate to issues outside the scope of what he considers to be the web application itself.

The OWASP Top 10 categories cover, in Peter's estimation, about 95-99% of the actual vulnerabilities found in the field. With the limited time given to penetration testers, focusing on these vulnerabilities is a very good idea. Since the list is sorted, in part, by occurrence, even the top five positions provide good coverage. Regarding some vulnerabilities remaining relevant for so long, he believes it is an issue of poor design processes. He notes that the modules that tend to be most often vulnerable are the active functions, like user input. In the early days of his career, web applications were much simpler, and therefore much less likely to be vulnerable. This has changed considerably in the last twenty years, moving security focus from servers and hardware to the applications themselves. He points out the benefits gained from the standardization of using third-party frameworks, which are often quite secure. Still, designers often modify these, or add their own modules, which is much more likely to introduce issues. Peter believes designers are often too focused on the aesthetic elements of applications, and management too focused on usability, giving too little priority to matters of security.

The Future

Peter notes that the web application industry is constantly evolving with regards to security. For example, regarding online purchasing, he claims that many of the issues that were previously prominent have become much smaller problems with time. He believes one of the primary reasons is the introduction of the PCI standards in the mid-00s, and the requirements these put on the owner of any website with money transfer functionality of any type. This has led to far fewer websites having such functionality specifically implemented by their designers, instead opting for third-party plug-ins. The requirements further include periodical third-party testing, though some companies still opt for in-house testers in addition to this.

Peter does not believe that the scenario of complete automation of the field of penetration testing can be expected to come to pass within the foreseeable future. He does, however, point out that the consultation market often demands veteran consultants, making it difficult for new consultants to get into the business properly. In addition to this, the older generation of penetration testers has not made a sufficient effort to educate the younger generation, leading to a potential loss of knowledge in the next few years. Different penetration testers will also, by nature, find different vulnerabilities (though, of course, with considerable overlap). This variance causes a problem wherever compliance demands automated, standardized approaches, which in turn are expected to provide a comprehensive and uniform level of defense. Pretty much every penetration tester wants to automate things for practical reasons, but the best ones are definitely those with a strong passion for creative thinking. According to Peter, penetration testing is a craftsmanship and not something that can be entirely automated without a very good AI, much better than anything currently available. For instance, automated tools have a hard time dealing with multi-step modules, where user input is supplied on one page and the result shows up on another. In essence, there is a lack of flexibility and high-level understanding within the automated tools, giving them a highly limited understanding of the logical flow of applications.

Interview with Staffan Huslid, 2018-02-26

Staffan Huslid, a certified ethical hacker, began penetration testing in a professional capacity in 2006. The primary focus of his work is not web applications; he concentrates more on various other fields of security, such as credential handling, component analysis and similar specific areas. He claims to be more interested in "what kind of systems are in place" rather than actual source-code analysis.

The Methodology

Staffan uses a variety of tools, all of which are open-source or free versions of commercial tools, citing the difficulty of arguing their relative monetary value when weighed against their costs.


While commercial tools will yield greater results, their cost is an issue throughout the field, especially in markets with a limited customer base for penetration testing, in his experience. Examples of the tools he uses include sqlmap, Nessus, OpenVAS, BURPSuite, Zenmap, and Metasploit. He points out that many of these tools are very useful, especially for penetration testers who do not spend the majority of their time experimenting themselves, as they maintain a good baseline quality and create a platform from which to make decisions regarding specific tests.

While test environments can be useful for testing out tools and techniques, they do not, in Staffan's estimation, have nearly the same level of security as live sites, even disregarding the deliberate weaknesses, which limits their usefulness.

Staffan considers penetration testing to be a largely independent field of work, with testers individually choosing the tools they find most useful. Different backgrounds mean different specializations, leading to personal preferences when choosing tools. In general, Staffan does apply a general method of vulnerability scanning and information gathering before seeing what potential there is for penetration tests to be useful. He often finds this to be very necessary, as customers rarely have any detailed understanding of vulnerabilities and security. He makes particular note that the terms "penetration testing" and "vulnerability scanning" are often confused with each other, leading to occasional misunderstandings.

Staffan usually, by a considerable margin, works with white-box testing rather than black-box. He states that no machine is ever truly safe from attackers gaining access through phishing and other types of social engineering, making black-box testing less useful. He claims that real attacks are very rarely performed completely unauthenticated, as that is much more difficult and resource-consuming.

He is occasionally allowed to perform his tests against live environments rather than his own copies, but this is entirely up to how much the customer is willing to risk. A live test will be a better representation of real situations, but it is often not practical, given the risk of disrupting the live environment. Obviously, there are also potential legal issues; the agreement must clearly state what the penetration tester should test and what happens if something goes wrong.

The Customers

Most customers are governmental, although commercial sites are not unusual. Generally, though, Staffan speculates that commercial entities do not bother much with security before they are subjected to a costly breach. He notes that the back-end is often the most sensitive component of any system and most often becomes the priority of attackers. Once an attacker has access to the back-end, preferably including administrative rights or root access, they are able to do pretty much anything. General naivety among designers and developers is an issue, with third-party components being used without enough consideration, among other similar issues. Testing is also not done to a sufficient extent, often being part of general testing towards the end of the production cycle. Staffan would much prefer a continuous testing process well beyond release, given that information security is a highly dynamic field. He also believes that this is the reason that the OWASP Top 10 sees such limited changes over time. He believes that, if testing and updating were done more seriously and more often, or security were a bigger concern during development, many of the issues would become considerably less common.

Interview with Jonas Lejon, 2018-03-21

Jonas Lejon, founder of IT security supplier Triop AB1, has been involved in the information security business since the 1990s. While much of his previous work centered on the physical aspects of information security, his focus has since shifted towards internet-connected systems.

1Triop AB: https://triop.se/


The Industry

In his experience, issues often center around various implemented APIs, as well as web-application-based administrative tools and interfaces, which he claims can introduce many attack vectors that are not immediately obvious. For example, admin access to firewall configuration through web applications could allow for XSS attacks.

Generally, the vulnerabilities from the OWASP Top 10 are in focus, and Jonas makes note of the new additions to the 2017 version: XXE, Insecure Deserialization, and Insufficient Logging and Monitoring. He notes that Insufficient Logging and Monitoring, in particular, is difficult to detect as an external analyst and needs to be checked with customers.

Regarding XXE, which Jonas points out as having been around for a while before becoming prominent, he notes that the injection points are often not known by system owners. Further, he points out that many tools are, at this time, highly lacking in their capability to detect XXE, especially blind XXE.
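The shape of such an injection point can be illustrated with a short sketch (the payload and parser below are illustrative, not taken from any system discussed in the interviews): a classic XXE probe declares an external entity pointing at a local file, and a parser that refuses to resolve external entities fails safely instead of leaking the file's contents. Python's standard-library ElementTree behaves this way by default.

```python
import xml.etree.ElementTree as ET

# Classic XXE probe: an external entity pointing at a local file.
# A vulnerable parser expands &xxe;, leaking the file into the response.
XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<data>&xxe;</data>"""

def parse_untrusted(xml_text: str):
    """Parse untrusted XML with stdlib ElementTree, which does not fetch
    external entities; the parse fails instead of leaking the file."""
    return ET.fromstring(xml_text)

try:
    parse_untrusted(XXE_PAYLOAD)
    print("entity expanded (vulnerable)")
except ET.ParseError:
    print("external entity rejected (safe default)")
```

Blind XXE is harder still for tools to confirm, since the expanded entity never appears in any response; detection then relies on out-of-band signals, which is part of why Jonas finds current tool support lacking.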

He uses a variety of tools, including BURPSuite, which he has extended with third-party components.

The Methodology

Often, his general methodology includes a fair amount of preparation, with about half of the total time being spent mapping the application, including crawling, studying documentation, and going through whatever APIs may be present. According to Jonas, a good understanding of the development process is key to knowing where faults can be expected to occur. In his opinion, there is no such thing as a truly secure piece of software, and it often only takes one vulnerability to gain access. Isolating software and running everything at minimal privileges should be common practice, but this is often not the case. In addition to this, event logging is critical, as an attack can often go unnoticed for many months, meaning the logs need to be not only thorough, but also kept around for as long as is practical.
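Jonas's advice on event logging, thorough logs retained for as long as practical, can be sketched with Python's standard logging module. The logger name, file name, and one-year retention below are illustrative assumptions, not his configuration:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Illustrative sketch only: "webapp.audit" and "audit.log" are placeholders.
handler = TimedRotatingFileHandler(
    "audit.log",       # one file per day...
    when="midnight",
    backupCount=365,   # ...retained for roughly a year, reflecting the advice
)                      # that logs be kept for as long as is practical
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"
))

audit = logging.getLogger("webapp.audit")
audit.setLevel(logging.INFO)
audit.addHandler(handler)

# Record security-relevant events with enough context (who, from where)
# to reconstruct an incident months later.
audit.info("login_failure user=%s src_ip=%s", "alice", "203.0.113.7")
```

The point of the timestamped, structured event line is exactly the one Jonas makes: an intrusion discovered months later can only be traced if the relevant records still exist and carry enough context.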

On the issue of some vulnerabilities remaining prominent for many years, Jonas states that modern frameworks almost always have safeguards, but that it takes time for such things to be adopted to a sufficient extent.

Most of the time, Jonas is allowed to perform his tests against live environments in order to maximize the validity of the tests being run, although this depends on the customer. Occasionally, he is only permitted to run tests at low-traffic hours of the day, to avoid accidental disruptions of business. It is not uncommon for him to run his tests while the applications are in the final stages of development, although this has on occasion led to delays, especially with governmental clients. Such clients often have high security requirements and tend to have development done by external actors.


C Appendix 3 - OrangeHRM Field Experiment

In this appendix, the field experiment concerning the application of the comprehensive methodology to the web application OrangeHRM, version 2.4.2, released in 2009, is described. The field experiment was performed by the authors of this thesis and did not have a customer, preventing certain details of the methodology from being realistically tested.

Manual Review

Below, each of the questions from the manual review is answered, and their implications discussed.

1. What kind of web application are you dealing with? Is it an informative web application or an interactive web application?

It is interactive. It has a large amount of form-based interactions with a database, with multiple authority levels for users.

2. What tools are available? Scanners, fuzzers, specialized tools? Will the customer supply any software to be incorporated?

For this experiment, we have access to SQLmap, Vega, ZAP, Arachni, Wapiti, and Burp CE, including scanners and fuzzers.

3. What is the scope of the testing? What kind of preferences and instructions has the customer supplied you with?

For this experiment, we have the goal of attempting to verify the application as secure to the point where we cannot find any vulnerabilities that could cause considerable harm to its users or owners. We have been given a copy of the environment.

4. Who are you dealing with? Is your customer military, governmental or commercial?

The customer is commercial.

5. Is there any prioritization order? Is there any particular attack or functionality that is especially important to consider?

The customer is primarily worried about database integrity and user-on-user attacks such as XSS.


6. What knowledge/experience/competence are available within the customer's organization?

The customer does not have any resources of this kind to supply us with.

7. Is there sensitive data being stored within the application's database? If so, what specific kind of data?

User credentials, as well as personal data regarding salaries, professional positions and other private data, are stored within the database.

8. Is there any logging and monitoring on the web application?

There is no logging and monitoring of the traffic; the customer simply sells the application to private companies, and the responsibility for logging and monitoring will fall on the purchasing organization instead.

9. Is there any deserialization of user supplied objects?

There is no known deserialization of user supplied objects.

Tools and Method Selection

Given the available tools, and the specifications and requests received during the manual review, it was decided to start off using the ZAP active scanner on the base URL of the application, with the intention of repeating the same procedure with the Vega and Arachni tools for verification. The same tools will be used to verify any finding. If SQL injection flaws are implied by the scanners, sqlmap will serve to verify them. Manual crawling was also performed, especially making note of specific application components that are not reached by the automatic crawlers for any reason.

Vulnerability Testing and Verification

While all of the active scanners made note of some minor insecurities, such as cookies being handled over HTTP, none of them were able to detect any major vulnerabilities. As such, manual crawling was performed in an attempt to find any particular paths that had been missed by the active scanners. For the sake of completeness, sqlmap was deployed on every access point to the database that was discovered, despite the active scanners not having reported any issues with them. However, some user input was handled via forms in separate dialog boxes. Since these are commonly missed or insufficiently analyzed by active scanners, we pointed the ZAP scanner at these particular URLs, at which point it alerted us to potential XSS vulnerabilities in three separate locations. For efficiency, a manual attempt with a rudimentary XSS attack was performed, resulting in a successful verification of the vulnerabilities.
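A rudimentary manual XSS check of the kind described above, together with the standard output-encoding mitigation, can be sketched as follows. The rendering functions are hypothetical illustrations, not OrangeHRM code:

```python
import html

# Illustrative probe of the kind used to confirm a reflected XSS finding.
PROBE = "<script>alert(1)</script>"

def render_unsafe(user_input: str) -> str:
    # Vulnerable pattern: user-controlled data interpolated into markup as-is.
    return "<p>" + user_input + "</p>"

def render_safe(user_input: str) -> str:
    # Mitigation: HTML-escape all user-controlled data on output.
    return "<p>" + html.escape(user_input) + "</p>"

print(render_unsafe(PROBE))  # the payload survives into the page
print(render_safe(PROBE))    # the payload is neutralized
```

If the probe comes back in the response unescaped, as in the first function, the vulnerability is confirmed; consistent escaping at the output boundary, as in the second, is the corresponding fix.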

Risk Analysis

Given the extreme ease with which the XSS attack could be performed, and the high potential consequences of this type of user-on-user attack regarding both monetary losses and reputational harm, the vulnerabilities were rated as critical. The potential costs of mitigation would be determined with the aid of customer representatives.

Reporting

In the complete report, the above findings would be described in a similar manner. It would be recommended that the application not be sold in its current state, as the XSS flaws are much too severe.


Bibliography

[1] G. V. Post and K.-A. Kievit, “Accessibility vs. security: A look at the demand for computer security”, Computers & Security, vol. 10, no. 4, pp. 331–344, 1991, ISSN: 0167-4048. DOI: 10.1016/0167-4048(91)90109-Q.

[2] P. Österberg, Interview 2018-02-19, Available in section B.

[3] Y. Jang, C. Song, S. P. Chung, T. Wang, and W. Lee, “A11y attacks: Exploiting accessibility in operating systems”, in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’14, Scottsdale, Arizona, USA: ACM, 2014, pp. 103–115, ISBN: 978-1-4503-2957-6. DOI: 10.1145/2660267.2660295.

[4] D. Calbot. (Jun. 2011). The costs of bad security, [Online]. Available: https://www.technologyreview.com/s/424165/the-costs-of-bad-security (Accessed: 19 Apr. 2018).

[5] OWASP. (Nov. 2017). Owasp top 10 - 2017: The ten most critical web application security risks, [Online]. Available: https://www.owasp.org/images/7/72/OWASP_Top_10-2017_(en).pdf.pdf (Accessed: 19 Apr. 2018).

[6] J. A. Kupsch and B. P. Miller, “Manual vs. automated vulnerability assessment: A case study”, in First International Workshop on Managing Insider Security Threats (MIST), West, 2009.

[7] “Enhancing the cybersecurity workforce”, IT Professional, no. 1, p. 12, 2011, ISSN: 1520-9202.

[8] S. Furnell, P. Fischer, and A. Finch, “Feature: Can’t get the staff? The growing need for cyber-security skills”, Computer Fraud & Security, vol. 2017, pp. 5–10, 2017, ISSN: 1361-3723.

[9] S. Huslid, Interview 2018-02-26, Available in section B.

[10] Infosec Institute, Web Application Penetration Testing Methodology. Aug. 2015. [Online]. Available: http://resources.infosecinstitute.com/wp-content/uploads/Web-Application-Penetrating-Testing-Methodology.pdf.

[11] G. S. Kc, “Countering code-injection attacks with instruction-set randomization”, in Proceedings of the ACM Computer and Communications Security (CCS) Conference, ACM Press, 2003, pp. 272–280.

[12] J. Clarke-Salt, SQL Injection Attacks and Defense. Elsevier Science, 2014.


[13] Netsparker. (n.d.). An automated scanner that finds all owasp top 10 security flaws, really?, [Online]. Available: https://www.netsparker.com/blog/web-security/owasp-top-10-web-security-scanner/ (Accessed: 19 Apr. 2018).

[14] J. Pauli, The basics of web hacking: tools and techniques to attack the Web. Elsevier, 2013.

[15] D. Huluka and O. Popov, “Root cause analysis of session management and broken authentication vulnerabilities”, in World Congress on Internet Security (WorldCIS-2012), Jun. 2012, pp. 82–86.

[16] K. Naresh and S. Kanika, “Study of web application attacks & their countermeasures.”,2014.

[17] Risk Based Security and Open Security Foundation, “An executive’s guide to 2013 databreach trends”, Presentation, Risk Based Security, 2014.

[18] C. Roberts, “A hands-on xml external entity vulnerability training module”, SANS Institute, 2014.

[19] S. Jan, C. D. Nguyen, and L. C. Briand, “Automated and effective testing of web services for xml injection attacks”, in Proceedings of the 25th International Symposium on Software Testing and Analysis, ser. ISSTA 2016, Germany: ACM, 2016, pp. 12–23, ISBN: 978-1-4503-4390-9. DOI: 10.1145/2931037.2931042.

[20] J. Zhu, B. Chu, H. Lipford, and T. Thomas, “Mitigating access control vulnerabilities through interactive static analysis”, in Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, ser. SACMAT ’15, Vienna, Austria: ACM, 2015, pp. 199–209, ISBN: 978-1-4503-3556-0. DOI: 10.1145/2752952.2752976.

[21] R. S. Sandhu and P. Samarati, “Access control: Principle and practice”, IEEE Communications Magazine, vol. 32, no. 9, pp. 40–48, Sep. 1994, ISSN: 0163-6804. DOI: 10.1109/35.312842.

[22] B. Eshete, A. Villafiorita, and K. Weldemariam, “Early detection of security misconfiguration vulnerabilities in web applications”, in 2011 Sixth International Conference on Availability, Reliability and Security, Aug. 2011, pp. 169–174. DOI: 10.1109/ARES.2011.31.

[23] OWASP. (Mar. 2017). Types of cross-site scripting, [Online]. Available: https://www.owasp.org/index.php/Types_of_Cross-Site_Scripting (Accessed: 19 Apr. 2018).

[24] J. Grossman, XSS Attack: Cross-Site Scripting Exploits and Defence. Burlington, Mass.: Syngress, 2007, ISBN: 9781597491549.

[25] Netsparker. (n.d.). Dom based cross-site scripting vulnerability, [Online]. Available: https://www.netsparker.com/blog/web-security/dom-based-cross-site-scripting-vulnerability (Accessed: 19 Apr. 2018).

[26] A. Doupé, M. Cova, and G. Vigna, “Why johnny can’t pentest: An analysis of black-box web vulnerability scanners”, in International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, Springer, 2010, pp. 111–131.

[27] C. Lai, “Java insecurity: Accounting for subtleties that can compromise code”, IEEESoftware, vol. 25, no. 1, pp. 13–19, Jan. 2008, ISSN: 0740-7459. DOI: 10.1109/MS.2008.9.

[28] P. Xia, M. Matsushita, N. Yoshida, and K. Inoue, “Studying reuse of out-dated third-party code in open source projects”, Information and Media Technologies, vol. 9, no. 2, pp. 155–161, 2014.

[29] European Commission - Human Resources and Security, Standard on Logging and Monitoring, Coordination and Informatics Security, 2010.


[30] K. Kent and M. Souppaya, “Guide to computer security log management”, NIST Special Publication, vol. 92, 2006.

[31] M. Meucci and A. Muller, “OWASP testing guide”, v4.0, OWASP Foundation, 2014.

[32] L. Allen, Advanced penetration testing for highly-secured environments. The ultimate security guide: learn to perform professional penetration testing for highly-secured environments with this intensive hands-on guide. Ser. Open source: community experience distilled. Birmingham, U.K.: Packt Pub., 2012, ISBN: 9781849517744.

[33] S. Biffl, D. Winkler, and J. Bergsmann, Software quality. Process automation in software development; 4th International Conference, SWQD 2012, Vienna, Austria, January 17-19, 2012, proceedings. Ser. Lecture notes in business information processing: 94. Berlin; New York: Springer, 2012, ISBN: 9783642272134.

[34] The Government of the Hong Kong Special Administrative Region. (2008). An overview of vulnerability scanners, [Online]. Available: http://www.infosec.gov.hk/english/technical/files/vulnerability.pdf (Accessed: 19 Apr. 2018).

[35] MaxCDN. (Mar. 2015). What is a transparent proxy?, [Online]. Available: https://www.maxcdn.com/one/visual-glossary/transparent-proxy/?utm_source=text (Accessed: 19 Apr. 2018).

[36] M. Sutton, A. Greene, and P. Amini, Fuzzing: brute force vulnerability discovery. Pearson Education, 2007.

[37] J. Snyder. (Jun. 2006). Active vs. passive scanning, [Online]. Available: https://www.networkworld.com/article/2305289/network-security/active-vs--passive-scanning.html (Accessed: 19 Apr. 2018).

[38] H. Shahriar and M. Zulkernine, “Mitigating program security vulnerabilities: Approaches and challenges”, ACM Computing Surveys, vol. 44, no. 3, 11–11:46, 2012, ISSN: 0360-0300.

[39] FireWall.cx. (n.d.). Web application vulnerabilities – benefits of automated tools penetration testers, [Online]. Available: http://www.firewall.cx/general-topics-reviews/web-application-vulnerability-scanners/1149-web-vulnerability-scanning-using-automated-tools-and-penetration-testers.html (Accessed: 19 Apr. 2018).

[40] N. Rathaus. (n.d.). Automating vulnerability assessment, [Online]. Available: http://www.beyondsecurity.cn/pdf/AVDS_Whitepaper.pdf (Accessed: 19 Apr. 2018).

[41] MultiMedia LLC. (n.d.). Vulnerabilities; vulnerability scanning, [Online]. Available: https://www.sans.org/reading-room/whitepapers/threats/vulnerabilities-vulnerability-scanning-1195 (Accessed: 19 Apr. 2018).

[42] B. Arkin, S. Stender, and G. McGraw, “Software penetration testing”, IEEE Security & Privacy, vol. 3, no. 1, pp. 84–87, Jan. 2005, ISSN: 1540-7993. DOI: 10.1109/MSP.2005.23.

[43] FedRAMP, “FedRAMP JAB P-ATO vulnerability scan requirements guide”, 2015. [Online]. Available: https://s3.amazonaws.com/sitesusa/wp-content/uploads/sites/482/2015/01/FedRAMP-JAB-P-ATO-Vulnerability-Scan-Requirements-Guide-v1-0.pdf (Accessed: 19 Apr. 2018).

[44] Netsparker. (n.d.). How to evaluate web application security scanners, [Online]. Available: https://www.netsparker.com/blog/web-security/how-to-evaluate-web-application-security-scanners-tools/ (Accessed: 19 Apr. 2018).


[45] J. Lejon, Interview 2018-03-21, Available in section B.

[46] F. A. Saeed, “Using wassec to evaluate commercial web application security scanners”, International Journal of Soft Computing and Engineering (IJSCE), vol. 4, no. 1, pp. 177–181, 2014.

[47] “Advanced automated sql injection attacks and defensive mechanisms”, in 2016 Annual Connecticut Conference on Industrial Electronics, Technology & Automation (CT-IETA), 2016, ISBN: 978-1-5090-0799-8.

[48] N. Suteva, D. Zlatkovski, and A. Mileva, “Evaluation and testing of several free/open source web vulnerability scanners”, in Proceedings of the 10th International Conference on Informatics and Information Technology (CIIT 2013), 2013.

[49] I. Mukhopadhyay, S. Goswami, and E. Mandal, “Web penetration testing using nessus and metasploit tool”, IOSR Journal of Computer Engineering, vol. 16, no. 3, pp. 126–129, 2014.

[50] S. Wagner, “The use of application scanners in software product quality assessment”, in Proceedings of the 8th International Workshop on Software Quality, ser. WoSQ ’11, Szeged, Hungary: ACM, 2011, pp. 42–49, ISBN: 978-1-4503-0851-9. DOI: 10.1145/2024587.2024597.

[51] N. Antunes and M. Vieira, “Penetration testing for web services”, Computer, vol. 47,no. 2, pp. 30–36, Feb. 2014, ISSN: 0018-9162. DOI: 10.1109/MC.2013.409.

[52] M. Alsaleh, A. Alarifi, N. Alomar, M. Alshreef, and A. Al-Salman, “Performance-based comparative assessment of open source web vulnerability scanners”, Security and Communication Networks, 2017, ISSN: 1939-0114.

[53] D. A. Shelly, “Using a web server test bed to analyze the limitations of web applicationvulnerability scanners”, 2010.

[54] F. Lebeau, B. Legeard, F. Peureux, and A. Vernotte, “Model-based vulnerability testing for web applications”, 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation Workshops, p. 445, 2013, ISBN: 978-1-4799-1324-4.

[55] Acunetix. (Jan. 2011). How to choose a web vulnerability scanner, [Online]. Available: https://www.acunetix.com/blog/articles/how-to-choose-web-vulnerability-scanner/ (Accessed: 19 Apr. 2018).

[56] G. K. Sims, “Student peer review in the classroom: A teaching and grading tool”, Journal of Agronomic Education (JAE), vol. 18, no. 2, pp. 105–108, 1989.

[57] B. Martin, Suppression Stories. Research Online, 1997. [Online]. Available: http://www.bmartin.cc/dissent/documents/ss/ss.pdf.

[58] ReviseSociology. (Jan. 2016). Field experiments: Definition, examples, advantages and disadvantages, [Online]. Available: https://revisesociology.com/2016/01/17/field-experiments-definition-examples-advantages-and-disadvantages (Accessed: 19 Apr. 2018).

[59] R. Eran, E.-H. Yuval, R. Gil, and T. Tsarfati, “System for determining web applicationvulnerabilities”, US 6584569B2, Jun. 2003. [Online]. Available: https://patents.google.com/patent/US6584569B2/en (Accessed: 19 Apr. 2018).

[60] D. Stuttard and M. Pinto, The Web Application Hacker’s Handbook: Finding and Exploiting Security Flaws, Second Edition. John Wiley & Sons, Inc., 2011, ISBN: 978-1-118-02647-2.
