Threat Modeling: Applied on a Publish-Subscribe Architectural Style
TRANSCRIPT
Dr. Dharma Ganesan, Ph.D., [email protected]
Context of the Slides
• I was a Lecturer for “Secure Software Testing and Construction” course (Fall 2015)
–at University of Maryland, College Park
• Threat modeling was introduced to graduate students of this course
–Hands-on approach to modeling and security
Agenda
• Threat Modeling – Introduction
– First 30 slides are from a threat modeling book
– Got permission from the author of the book
– https://threatmodelingbook.com/
• Applying it to a simplified real-world system
– Publish-Subscribe architectural style
– Software Enterprise Bus
• Conclusion
Wouldn’t it be better to find security issues before we write a line of code? So how can we do that?
Ways to Find Security Issues
• Static analysis of code
• Fuzzing or other dynamic testing
• Pen test/red team
• Wait for bug reports after release
• These issues are detected later in the process
Ways to Find Security Issues (2)
• Threat modeling!
– Think about security issues early
– Understand our requirements better
– Don’t write security bugs into the code
How to Threat Model (Summary)
• What are we building?
• What can go wrong?
• What are we going to do about it?
What Are We Building?
• Create a model of the software/system/technology
• A model abstracts away the details so you can look at the whole
– Diagramming is a key approach
– Mathematical models are rare in commercial environments
• Software models for threat modeling usually focus on data flows and boundaries
• DFDs, “swim lanes,” and state machines can all help (next slides)
DFD (Data Flow Diagram)
• Developed in the early 70s, and still useful
– Simple: easy to learn and sketch
– Threats often follow data
• Abstracts programs into:
– Processes: your code
– Data stores: files, databases, shared memory
– Data flows: connect processes to other elements
– External entities: everything but your code & data; includes people & cloud software
– Trust boundaries are now made explicit
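The DFD elements above can be captured as plain data, which turns “which flows cross a trust boundary?” into a checkable question. Below is a minimal, hypothetical sketch; the element names and the bus/application scenario are invented for illustration, not taken from the analyzed system.

```python
# Hypothetical sketch of a DFD as plain data structures.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    kind: str   # "process", "data_store", or "external_entity"
    zone: str   # trust zone the element lives in

@dataclass(frozen=True)
class DataFlow:
    source: Element
    sink: Element
    payload: str

def crosses_trust_boundary(flow: DataFlow) -> bool:
    """A flow crosses a trust boundary when its endpoints sit in different zones."""
    return flow.source.zone != flow.sink.zone

bus = Element("message bus", "process", "internal")
app = Element("publisher app", "external_entity", "external")
store = Element("subscription table", "data_store", "internal")

flows = [
    DataFlow(app, bus, "publish request"),
    DataFlow(bus, store, "subscription update"),
]

# Flows that cross a boundary deserve the most STRIDE attention.
risky = [f for f in flows if crosses_trust_boundary(f)]
```

Enumerating boundary-crossing flows like this is one way to make the “threats often follow data” heuristic mechanical.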
Swim Lane Diagrams
• Show two or more entities communicating, each “in a lane”
• Useful for network communication
• Lanes have implicit boundaries between them
State Machines
• Helpful for considering what changes security state
– For example, unauthenticated to authenticated
– User to root/admin
• Rarely shows boundaries
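A security state machine like the one described can be sketched as a transition table, where anything not explicitly listed is an illegal transition. The states and event names below are illustrative assumptions, not part of the analyzed system.

```python
# Minimal security state machine: which transitions raise privilege,
# and are any privilege-raising transitions reachable without a guard?
TRANSITIONS = {
    ("unauthenticated", "login_ok"): "authenticated",
    ("authenticated", "sudo_ok"): "admin",
    ("authenticated", "logout"): "unauthenticated",
    ("admin", "logout"): "unauthenticated",
}

def step(state: str, event: str) -> str:
    # Unknown (state, event) pairs are rejected rather than silently ignored.
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} from {state!r}") from None

state = step("unauthenticated", "login_ok")  # reaches "authenticated"
```

Note that there is deliberately no entry for ("unauthenticated", "sudo_ok"): jumping straight to admin must fail.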
How to Threat Model (Summary)
• What are we building?
• What can go wrong?
• What are we going to do about it?
What Can Go Wrong?
• Fun to brainstorm
• Mnemonics, trees, or libraries of threats can all help structure thinking
• Structure helps get you towards completeness and predictability
• STRIDE is a mnemonic
– Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege
– Easy, right?
STRIDE
• Spoofing (violates: Authentication)
– Definition: Impersonating something or someone else
– Example: Pretending to be any of Bill Gates, Paypal.com, or ntdll.dll
• Tampering (violates: Integrity)
– Definition: Modifying data or code
– Example: Modifying a DLL on disk or DVD, or a packet as it traverses the network
• Repudiation (violates: Non-repudiation)
– Definition: Claiming to have not performed an action
– Example: “I didn’t send that email,” “I didn’t modify that file,” “I certainly didn’t visit that web site, dear!”
• Information Disclosure (violates: Confidentiality)
– Definition: Exposing information to someone not authorized to see it
– Example: Allowing someone to read the Windows source code; publishing a list of customers to a web site
• Denial of Service (violates: Availability)
– Definition: Denying or degrading service to users
– Example: Crashing Windows or a web site, sending a packet that absorbs seconds of CPU time, or routing packets into a black hole
• Elevation of Privilege (violates: Authorization)
– Definition: Gaining capabilities without proper authorization
– Example: Allowing a remote internet user to run commands is the classic example, but going from a limited user to admin is also EoP
Using STRIDE
• Consider how each STRIDE threat could impact each part of the model
– “How could a clever attacker spoof this part of the system? ...tamper with it? ...etc.”
• Easier with aids
– Elevation of Privilege game
– Attack trees (see Threat Modeling: Designing for Security, Appendix B)
– Experience
What Can Go Wrong?
• Track issues as we find them
– “attacker could pretend to be a client & connect”
• Track assumptions
– “I think that connection is always over SSL”
• Both lists are inputs to “what are we going to do about it”
How to Threat Model (Summary)
• What are we building?
• What can go wrong?
• What are we going to do about it?
What Are You Going to Do About It?
• For each threat:
– Fix it!
– Mitigate with standard or custom approaches
– Accept it?
– Transfer the risk?
• For each assumption:
– Check it
– Wrong assumptions lead us to reconsider what can go wrong
Fix It!
• The best way to fix a security bug is to remove functionality
– For example, if SSL didn’t have a “heartbeat” message, the “Heartbleed” bug couldn’t exist
– You can only take this so far
– Oftentimes you end up making risk tradeoffs
• Mitigate the risk in various ways (next slide)
Mitigate
• Add/use technology to prevent attacks
• For example, to prevent tampering:
– Network: digital signatures, cryptographic integrity tools, crypto tunnels such as SSH or IPsec
• Developers and sysadmins have different toolkits for mitigating problems
• Standard approaches are available which have been tested & worked through
• Sometimes you need a custom approach
Some Technical Ways to Address Threats
• Spoofing
– Mitigation technology: Authentication
– Developer example: Digital signatures, passwords, crypto tunnels
– Sysadmin example: Active Directory, LDAP
• Tampering
– Mitigation technology: Integrity, permissions
– Developer example: Digital signatures
– Sysadmin example: ACLs/permissions, crypto tunnels
• Repudiation
– Mitigation technology: Fraud prevention, logging, signatures
– Developer example: Customer history risk management
– Sysadmin example: Logging
• Information disclosure
– Mitigation technology: Permissions, encryption
– Developer example: Permissions (local), PGP, SSL
– Sysadmin example: Crypto tunnels
• Denial of service
– Mitigation technology: Availability
– Developer example: Elastic cloud design
– Sysadmin example: Load balancers, more capacity
• Elevation of privilege
– Mitigation technology: Authorization, isolation
– Developer example: Roles, privileges, input validation for purpose, (fuzzing*)
– Sysadmin example: Sandboxes, firewalls
* Fuzzing/fault injection is not a mitigation, but a great testing technique. See chapter 8 of Threat Modeling for more.
Agenda
• Threat Modeling – Introduction
• Applying it to a simplified real-world system
• Conclusions
Can we identify threats of this sample architecture?
The system we analyzed is quite similar to this architectural style
Threat modeling of a bus
• A software bus for component/application communication
• Ideal for developing distributed systems
• Publish-subscribe architectural style
– Components publish messages
– The bus routes the messages to subscribers based on the message subject/topic
• Let us enumerate STRIDE for this architecture
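To make the STRIDE walk-through concrete, here is a deliberately insecure, in-process sketch of such a bus. Every name is illustrative; the real system's APIs are not shown in the slides. Note that it routes purely by topic, with no authentication, authorization, or encryption, which is exactly what the threats that follow exploit.

```python
# Deliberately insecure publish-subscribe bus sketch (illustrative only).
from collections import defaultdict
from typing import Callable

class Bus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        # No check on WHO subscribes or to WHICH topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> int:
        # Deliver to every subscriber of the topic; return the delivery count.
        handlers = self._subscribers.get(topic, [])
        for handler in handlers:
            handler(message)
        return len(handlers)

bus = Bus()
received: list[str] = []
bus.subscribe("orders", received.append)
bus.publish("orders", "<order id='1'/>")
```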
Input artifacts to our review
• Software architecture
• API documentation
• Test cases
• Source code
S - Spoofing (sample threats)
• We reviewed the initial design and APIs
• It turned out that there is no method for verifying the authenticity of the bus; another system could impersonate the bus, responding to calls as if it were the bus.
• This could be mitigated by using a system of authentication (e.g., public key cryptography) between the applications and the bus.
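One shape such authentication could take is a challenge-response handshake: an application sends a fresh random challenge, and only the genuine bus can answer it. The slide suggests public-key cryptography; the sketch below uses an HMAC over a pre-shared key purely to stay self-contained with the standard library, and all names are hypothetical.

```python
# Hypothetical challenge-response sketch for verifying the bus's identity.
# A real deployment would likely use public-key mutual authentication (TLS).
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # provisioned out of band in a real deployment

def bus_prove_identity(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Only the genuine bus holds the key, so only it can compute this answer.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def app_verify_bus(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids a timing side channel.
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)          # fresh per handshake, so replays fail
response = bus_prove_identity(challenge)
```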
S - Spoofing (sample)
• An unauthorized application could impersonate another application by publishing messages which would normally be published only by that particular application.
• An attacker could unsubscribe legitimate applications from the bus.
• These issues could be mitigated by using authentication and authorization controls.
T - Tampering (sample)
• Because the messages are not encrypted, they can be intercepted and modified.
• For example, while a legitimate application tries to subscribe to a particular topic, the message is intercepted and the subscriber is subscribed to a different topic.
• This could be mitigated by encrypting and integrity-protecting (e.g., with a MAC or digital signature) messages between the applications and the bus.
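Worth noting: encryption hides content, but it is an integrity tag that actually detects modification. A minimal sketch of such tamper detection, using a stdlib HMAC with illustrative names and a hypothetical pre-shared key:

```python
# Tamper-detection sketch: an HMAC tag travels with each bus message.
import hmac, hashlib

KEY = b"pre-shared integrity key"  # provisioned out of band (assumption)

def seal(message: bytes, key: bytes = KEY) -> bytes:
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return tag + message  # 32-byte tag prepended to the message

def open_sealed(blob: bytes, key: bytes = KEY) -> bytes:
    tag, message = blob[:32], blob[32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was tampered with in transit")
    return message

blob = seal(b"SUBSCRIBE topic=orders")
tampered = blob[:-6] + b"admins"  # attacker rewrites the topic in flight
```

The rewritten topic no longer matches the tag, so the receiver rejects the message instead of silently subscribing to the wrong topic.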
R - Repudiation (sample)
• There does not appear to be any method to enforce non-repudiation in the system.
• For example, there does not appear to be any logging of published messages, or tracking of who originally sent them.
• It would be possible for someone to create a fake message and claim that it was a published message received from the bus.
R - Repudiation (sample)
• An application could also claim that it published a message when in fact it did not do so.
• Alternatively (or in addition), messages could be digitally signed and timestamped so as to guarantee the sender and recipient of the data, and the time of the occurrence.
I - Information Disclosure (sample)
• The APIs show no evidence of encryption of data-in-transit
• Because messages are not encrypted, it is possible to eavesdrop on the messages sent between the bus and the applications.
• This could be mitigated through encryption.
I - Information Disclosure (sample)
• An application could subscribe to the topic “.*” (a regular expression for “everything”), thereby matching all messages destined for all applications.
• This would be a way for an evil application to view all messages without even knowing the available topics.
• Since XML is the messaging format, XML entity injection could be used to steal files.
I - Information Disclosure (sample)
• This could be mitigated by adding restrictions to wildcard usage in message subjects
• Or by limiting the set of message subjects to a pre-defined set rather than allowing regular expressions.
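The second mitigation is straightforward to enforce at subscription time: reject regex metacharacters outright, then check a fixed topic whitelist. The topic names below are illustrative assumptions.

```python
# Subscription validation sketch: no regex patterns, only pre-defined topics.
ALLOWED_TOPICS = {"orders", "invoices", "shipping"}

def validate_subscription(topic: str) -> str:
    # Reject wildcard/regex metacharacters, so ".*" can never match everything.
    if any(ch in topic for ch in ".*+?[](){}|\\^$"):
        raise ValueError(f"wildcard or regex pattern rejected: {topic!r}")
    # Then enforce the whitelist of known topics.
    if topic not in ALLOWED_TOPICS:
        raise ValueError(f"unknown topic: {topic!r}")
    return topic
```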
D - Denial of Service (sample)
• An application could prevent other applications from accessing the bus by impersonating the bus and sending disconnect messages to other applications.
• Similarly, an application could send unsubscribe messages to prevent the other applications from receiving data.
D - Denial of Service (sample)
• Because messages are in XML, the system may be vulnerable to XML bombs, which could crash the bus
– This could be mitigated by carefully ensuring proper parsing of inputs to the bus.
• An application could make too many connections to the bus.
– This could be mitigated by limiting the number of connections per application.
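One way to sketch "proper parsing of inputs" against the classic "billion laughs" XML bomb is a pre-parse guard: cap the message size and refuse any DTD or entity declaration before handing the bytes to the parser. The limits are illustrative assumptions; in production a hardened parser (e.g. the third-party defusedxml package) is the more robust choice.

```python
# Pre-parse guard sketch against XML bombs and oversized bus messages.
import xml.etree.ElementTree as ET

MAX_MESSAGE_BYTES = 64 * 1024  # illustrative cap

def parse_bus_message(raw: bytes) -> ET.Element:
    if len(raw) > MAX_MESSAGE_BYTES:
        raise ValueError("message too large")
    upper = raw.upper()
    # Entity-expansion bombs require a DTD; refuse them wholesale.
    if b"<!DOCTYPE" in upper or b"<!ENTITY" in upper:
        raise ValueError("DTD/entity declarations are not allowed")
    return ET.fromstring(raw)

root = parse_bus_message(b"<msg topic='orders'>hello</msg>")
```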
E - Elevation of Privilege (sample)
• It may be possible to craft a particular XML input which would be incorrectly parsed
– For example, XML injection to run remote code
• Large messages may trigger buffer overflows and remote code execution
– This could be mitigated by introducing appropriate compiler flags (e.g., DEP, stack canaries, etc.)
– Of course, a length check in the source code helps, too
Deriving security requirements using threat modeling - (sample)
• Based on the threats described above, below are recommended high-level security requirements for the software bus:
1) All traffic between the bus and the applications must be encrypted using strong encryption.
2) There must be mutual authentication between the bus and each application.
Deriving security requirements using threat modeling - (sample)
3) Messages transmitted between the bus and the applications should be digitally signed and timestamped in order to prevent repudiation and spoofing.
4) The bus should contain a whitelist of applications which are allowed to subscribe to particular applications. This will prevent information disclosure by ensuring that messages are only seen by the proper applications.
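Requirement 4 in particular is directly enforceable in the bus. A minimal sketch, with topic and application names invented for illustration:

```python
# Per-topic whitelist of applications allowed to subscribe (sketch).
SUBSCRIBE_WHITELIST = {
    "invoices": {"billing-app", "audit-app"},
    "orders": {"fulfillment-app"},
}

def authorize_subscribe(app: str, topic: str) -> bool:
    """Only whitelisted applications may subscribe to a given topic."""
    return app in SUBSCRIBE_WHITELIST.get(topic, set())
```

A bus enforcing this would call the check (after authenticating the caller, per requirement 2) before honoring any subscribe request.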
Conclusion
• Threat Modeling using STRIDE helps in identifying security requirements
• STRIDE facilitates systematic enumeration of threats based on software architecture
• For every architectural style, the list of threats and mitigation strategies can be reused!
Conclusion ...
• An organization could build a library of threats for each architectural style
• The identified threats become security bugs to address
• 4 questions to remember
– What are you building?
– What can go wrong?
– What are you going to do about it?
– Did you check your work?
• Reference: https://threatmodelingbook.com