Human Factor vs. Technology
Joanna Rutkowska
Invisible Things Lab
Gartner IT Security Summit, London, 17 September, 2007.
Basic Definitions…
3 © Invisible Things Lab, http://invisiblethingslab.com, 2007
Message of this talk
The human factor is not the weakest link in IT security:
The technology factor is as weak as the human factor!
“Human factor” is usually used to describe:
User’s unawareness (“stupidity”)
Admin’s incompetence
NOT developer’s incompetence
NOT system designer’s incompetence
Security Consumers: the “Human Factor”
Security Vendors: the “Technology Factor”
Getting Into System
Exploiting User’s Unawareness/Incompetence
Social engineering
Bad configuration
Exploiting Technological Weakness
Software flaw (e.g. buffer overflow)
Protocol weakness (e.g. MitM)
Usual Goal: arbitrary code execution on target system
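The software-flaw path above is most often a buffer overflow. A minimal C sketch of the bug pattern and its fix; the function names and buffer size are illustrative only, not from any real product:

```c
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 16

/* The classic bug: attacker-controlled input is copied into a
 * fixed-size stack buffer with no length check. Input longer than
 * the buffer overwrites adjacent stack data, including the saved
 * return address - the basis of arbitrary code execution. */
void parse_request_unsafe(const char *input) {
    char buf[BUF_SIZE];
    strcpy(buf, input);              /* overflows if input >= 16 bytes */
    printf("parsed: %s\n", buf);
}

/* The fix: validate the length before copying. */
int parse_request_safe(const char *input, char *out, size_t out_len) {
    if (strlen(input) >= out_len)
        return -1;                   /* too long: refuse to copy */
    strcpy(out, input);
    return 0;
}
```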
After Getting In…
“Break and Escape”
E.g. website defacement, file deletion
Introduce damage, not compromise!
“Steal and Escape”
Steal confidential files, database records, etc.
Do not compromise the system – escape after data theft!
Problems: encrypted data,
passwords – only hashes stored
“Install Some Malware”
Compromise the system for full control!
Prevention Approaches…
Prevention Approaches
Signature-based
User’s education
AI-based (anomaly detection)
Host IPSes
OS hardening (anti-exploitation)
Host IPSes
Least privilege design
Code verification
Signature based approaches
Protect against “user’s stupidity” by blacklisting known attack patterns – e.g. certain “phishing mails”
Protect against technological weaknesses by having a signature for an exploit (the majority) or a generic signature for an attack (the minority, unfortunately)
No protection against unknown (targeted) attacks!
All major A/V vendors have been alerting about an increasing number of targeted attacks since 2006
For a targeted attack we usually don’t have a signature
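A signature-based scanner is, at its core, a byte-pattern search. A minimal sketch in C (illustrative only) also makes the limitation plain: with no signature in the database, a targeted attack scans clean:

```c
#include <stddef.h>
#include <string.h>

/* Return 1 if the byte pattern `sig` (of length sig_len) occurs
 * anywhere inside `data`, 0 otherwise - the core of a blacklist
 * scanner. */
int matches_signature(const unsigned char *data, size_t data_len,
                      const unsigned char *sig, size_t sig_len) {
    if (sig_len == 0 || sig_len > data_len)
        return 0;
    for (size_t i = 0; i + sig_len <= data_len; i++)
        if (memcmp(data + i, sig, sig_len) == 0)
            return 1;
    return 0;  /* unknown (targeted) attack: no signature, no detection */
}
```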
User’s education
Increase awareness among users and competences of system administrators
Should eliminate most social-engineering-based attacks, e.g. sending malware via email
Cannot protect against attacks exploiting flaws in software, i.e. exploits
“Keeping your A/V up to date” does not address the problem of targeted attacks
AI (anomaly based)
Using “Artificial Intelligence” (heuristics) to detect “abnormal” patterns of:
… behavior (e.g. iexplore.exe starting cmd.exe)
… network traffic (e.g. suspicious connections)
Problems:
No guarantee to detect anything!
False positives!
Do you think “AI” can solve problems better than “HI” (Human Intelligence)? ;)
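An anomaly heuristic of the “iexplore.exe starting cmd.exe” kind can be sketched as an allow-list of parent/child process pairs. The list here is hypothetical; anything not on it gets flagged, which is exactly where the false positives come from:

```c
#include <string.h>

/* Hypothetical allow-list of known-good parent -> child process
 * pairs. Anything outside the list is treated as "abnormal". */
static const char *allowed[][2] = {
    { "explorer.exe", "iexplore.exe" },
    { "services.exe", "svchost.exe"  },
};

/* Return 1 (anomalous) unless the pair appears on the allow-list. */
int is_anomalous(const char *parent, const char *child) {
    for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
        if (strcmp(allowed[i][0], parent) == 0 &&
            strcmp(allowed[i][1], child)  == 0)
            return 0;
    return 1;  /* not on the list: flag it (rightly or wrongly) */
}
```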
Anti Exploitation
Make exploitation process (very) hard!
Stack Protection
Stack Guard for UNIX-like systems (1998)
Microsoft /GS stack protection (2003)
Address Space Layout Randomization (ASLR)
PaX project for Linux (2001)
Vista ASLR (Microsoft, 2007)
Non-Executable pages
PaX project for Linux (2000)
OpenBSD’s W^X (2003)
Windows NX (2005-06)
Other technologies
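The stack-protection idea behind StackGuard and /GS can be sketched with a toy frame model in C. The fixed canary constant and the explicit struct layout are illustrative only; a real compiler inserts a random per-run guard and the check automatically:

```c
#include <string.h>

/* The compiler places a guard value between local buffers and the
 * saved return address; a linear overflow must trample the guard
 * before reaching the return address, and the function epilogue
 * checks it. 0xDEADBEEF stands in for the random per-run value. */
#define CANARY 0xDEADBEEFu

struct frame {                 /* toy model of a stack frame layout */
    char buf[16];              /* local buffer at the lowest address */
    unsigned int canary;       /* guard placed just above the buffer */
};

/* Returns 0 on success, -1 if the guard was clobbered (overflow
 * detected - a real epilogue would abort the process here). */
int guarded_copy(struct frame *f, const char *input, size_t len) {
    f->canary = CANARY;              /* "prologue": plant the guard */
    if (len > sizeof *f)             /* keep the sketch well-defined */
        len = sizeof *f;
    memcpy((char *)f, input, len);   /* copy targets buf at offset 0;
                                        a long copy runs into the guard */
    return (f->canary == CANARY) ? 0 : -1;  /* "epilogue": verify */
}
```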
Least Privilege and Privilege Separation
Limit scope of an attack by limiting the rights/privileges of the components exposed to the attack (e.g. processes)
Least Privilege Principle: every process (or other entity) has the minimal set of rights necessary to do its job
How many people work using the Administrator’s account?
Privilege Separation
Different programs have different, non-overlapping, competences…
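The Least Privilege Principle can be illustrated with a hypothetical rights bitmask: grant the browser process only the right to browse, and an exploit running inside it cannot reach admin operations. All names and rights here are invented for illustration:

```c
/* Hypothetical rights, one bit each: grant every process only the
 * bits its job requires (least privilege). */
enum {
    RIGHT_BROWSE    = 1 << 0,   /* open network connections */
    RIGHT_READ_MAIL = 1 << 1,   /* read the mail store */
    RIGHT_ADMIN     = 1 << 2    /* modify system configuration */
};

/* The browser gets exactly what browsing needs - nothing more. */
static const unsigned int browser_rights = RIGHT_BROWSE;

/* Return 1 if all bits in `needed` are present in `granted`. */
int has_right(unsigned int granted, unsigned int needed) {
    return (granted & needed) == needed;
}
```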
Example: Vista’s User Account Control
Attempt to force people to adhere to the LP Principle
All user’s processes run with restricted privileges by default,
When the user wants to perform an operation which requires more privileges, a popup appears asking for credentials,
Goal: if restricted process gets exploited, attacker does not automatically get administrator’s rights!
Many implementation problems though:
February 2007: Microsoft announced that UAC is not… a security feature!
Example: Privilege Separation
Different accounts for different tasks, e.g.:
joanna – main account used to log in
joanna.web – used to run Firefox
joanna.email – used to run Thunderbird
joanna.sensitive – access to /projects directory, run password manager and another instance of web browser for banking.
Easy to implement on Linux or even on Vista!
In Vista we rely on User Interface Privilege Isolation (UIPI)
Problems with priv-separation
If attacker exploits a bug in kernel or one of kernel drivers (e.g. graphics card driver)…
… then she has full control over the system and can bypass all the protection offered by the OS!
This is a common problem of all general purpose OSes based on monolithic kernel – e.g. Linux, Windows.
Drivers are the weakest point in OS security!
Hundreds of 3rd party drivers,
All run with kernel privileges!
We will get back to this later…
Avoiding Bugs and Code Verification
Developer education
e.g. Microsoft and the Security Development Lifecycle (SDL)
Fuzzing
Generate random “situations” and see when the software crashes… Currently the favorite bughunter’s technique…
Code auditing
Very expensive – requires experienced experts,
Few automatic tools exist to support the process.
Formal verification methods
Manual methods only for very small projects (a few k-lines)
No mature automatic tools yet (still 5-10 years?)
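Fuzzing as described above fits in a few lines of C: generate random inputs, feed them to the target, and watch for failures. The toy parser and its planted bug below are invented for illustration; a nonzero return code stands in for a real crash:

```c
#include <stdlib.h>
#include <stddef.h>

/* Toy parser with a planted bug: a malformed "header" (two equal
 * leading bytes, standing in for a rare edge case) makes it fail. */
int parse(const unsigned char *data, size_t len) {
    if (len >= 2 && data[0] == data[1])
        return -1;                   /* the "crash" */
    return 0;
}

/* The essence of fuzzing: throw random inputs at the parser and
 * report the iteration at which it first misbehaved (-1 if none). */
int fuzz(unsigned int seed, int iterations) {
    unsigned char buf[8];
    srand(seed);
    for (int i = 0; i < iterations; i++) {
        for (size_t j = 0; j < sizeof buf; j++)
            buf[j] = (unsigned char)(rand() & 0xFF);
        if (parse(buf, sizeof buf) != 0)
            return i;                /* failure found */
    }
    return -1;                       /* nothing found within the budget */
}
```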
How Prevention Fails In Practice…
Example: the ANI bug
ANI bug (MS07-017, April 2007)
“This vulnerability can be exploited by a malicious web page or HTML email message and results in remote code execution with the privileges of the logged-in user. The vulnerable code is present in all versions of Windows up to and including Windows Vista. All applications that use the standard Windows API for loading cursors and icons are affected. This includes Windows Explorer, Internet Explorer, Mozilla Firefox, Outlook and others.”
Source: Determina Security, http://www.determina.com/
ANI Bug vs. Vista
Code Review and Testing Process?
MS admitted their fuzzers were not tuned to catch this bug in their code…
Anti-Exploitation technologies?
/GS stack protection failed, because compiler “heuristics” decided not to include it for the buggy function!
NX usually fails, because IE and explorer have DEP disabled by default!
ASLR could be bypassed due to implementation weaknesses!
ANI Bug vs. Vista UAC?
UAC allows running IE in so-called Protected Mode (PM)
However:
PM is not designed to protect the user’s information!
It only protects against modification of the user’s data!
Also, MS announced that UAC/Protected Mode cannot be treated as a security boundary!
i.e. expect that it will be easy to break out of Protected Mode…
ANI Bug vs. educated user?
To exploit this bug it’s enough to redirect a user to a compromised page (or get them to open an email)…
No special action from the user is required!
The exploit can be very reliable – even an experienced user might not realize that he or she has just been attacked!
ANI vs. A/V
The attack was discovered in December 2006
The information was published in April 2007
What if it was discovered by a “black hat” even earlier?
Do you really believe that there was only 1 person on the planet capable of discovering it?
Why would A/V block/detect such an attack when the information about it was not public?
Going further…
So, now we see that the technology cannot protect (even smart) users from being exploited…
We saw an attack scenario in which an exploit bypasses various anti-exploitation techniques and eventually gets admin access to the system…
The next goal is usually to install some rootkit – in other words, to get into the kernel…
But we have Kernel Protection on Vista!
Digital Drivers Signing…
“Digital signatures for kernel-mode software are an important way to ensure security on computer systems.”
“Windows Vista relies on digital signatures on kernel mode code to increase the safety and stability of the Microsoft Windows platform”
“Even users with administrator privileges cannot load unsigned kernel-mode code on x64-based systems.”
Quotes from the official Microsoft documentation: Digital Signatures for Kernel Modules on Systems Running Windows Vista, http://www.microsoft.com/whdc/system/platform/64bit/kmsigning.mspx
Example: Vista Kernel Protection Bypassing
Presented by Invisible Things Lab at Black Hat in August
Exploiting bugs in 3rd party kernel drivers, e.g.:
ATI Catalyst driver
NVIDIA nTune driver
It’s not important whether the buggy driver is present on the target system – a rootkit might always bring it there!
There are hundreds of vendors providing kernel drivers for Windows…
All those drivers share the same address space with the kernel…
Buggy Drivers: Solution?
Today we do not have tools to automatically analyze binary code for the presence of bugs
Binary Code Validation/Verification
There are only some heuristics which produce too many false positives and also omit more subtle bugs
There are some efforts for validation of C programs
e.g. ASTREE (http://www.astree.ens.fr/)
Still very limited – e.g. it assumes no dynamic memory allocation in the input program
Effective binary code verification is a very distant future
Buggy Drivers: Solutions?
Drivers in ring 1 (address space shared among drivers)
Not a good solution today (lack of IOMMU)
Drivers in usermode
Drivers execute in their own address spaces in ring 3
Very good isolation of faulty/buggy drivers from the kernel
Examples:
MINIX3 supports all drivers, but still without IOMMU
Vista UMDF supports only drivers for a small subset of devices (PDAs, USB sticks). Most drivers cannot be written using UMDF though.
Message
I believe it’s not possible to implement effective kernel protection on general purpose OSes based on a monolithic kernel architecture
Establishing a 3rd party driver verification authority might raise the bar, but will not solve the problem
Move on towards a microkernel-based architecture!
Moral
Today’s prevention technology does not always work…
In how many cases does it work vs. fail?
How secure is our system?
In how many cases does our prevention fail?
This is a meaningless question!
If you know that a certain type of attack is possible (i.e. practical), then the system is simply insecure!
“System is not compromised with probability = 98%”?!
“The cat is alive with probability of 50%”?!
What does it mean?
Detection for the Rescue!
Detection
Detection is used to verify that prevention works
Detection cannot replace prevention
E.g. data theft – even if we detect it, we cannot make the attacker “forget” the data she has stolen!
Detection
Host-Based
Tries to find out whether the current OS and applications have been compromised or not
A/V products
Network-Based
Tries to detect attacks by analyzing network traffic
E.g. detect known exploits, or suspicious connections
Network IDS
Sometimes combined with firewall – IPS systems
Stealth Malware
rootkits, backdoors, keyloggers, etc…
stealth is a key feature!
stealth means that legitimate processes can’t see it (A/V)
stealth means that the administrator can’t see it (admin tools)
stealth means that we should never know whether we’re infected or not!
Paradox…
If a stealth malware does its job well…
…then we can not detect it…
…so how can we know that we are infected?
How do we know that we were infected?
We count on a bug in the malware! We hope that the author forgot about something!
We use hacks to detect some known stealth malware (e.g. hidden processes).
We need to change this!
We need a systematic way to check for system integrity!
We need a solution which would allow us to detect malware which is not buggy!
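One systematic approach that does not count on malware bugs is cross-view detection: enumerate processes once through the high-level API and once from low-level (e.g. kernel) structures, then diff the two views; any PID visible only in the low-level view is being hidden. A sketch in C, with the two views passed in as plain PID arrays for illustration:

```c
#include <stddef.h>

/* Cross-view detection: `low` is the process list obtained from
 * low-level structures, `high` is what the standard API reports.
 * Every PID present in `low` but missing from `high` is being
 * hidden - the classic symptom of a rootkit. Returns the number
 * of hidden PIDs written into `hidden`. */
int find_hidden(const int *low, size_t n_low,
                const int *high, size_t n_high,
                int *hidden, size_t max_hidden) {
    size_t count = 0;
    for (size_t i = 0; i < n_low; i++) {
        int seen = 0;
        for (size_t j = 0; j < n_high; j++)
            if (low[i] == high[j]) { seen = 1; break; }
        if (!seen && count < max_hidden)
            hidden[count++] = low[i];   /* hidden from the API view */
    }
    return (int)count;
}
```

The hard part in practice is not the diff but obtaining a trustworthy low-level view, which is exactly the memory-reading problem discussed below.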
State of Detection
Current detection products cannot deal well with targeted stealth malware,
We need a systematic way of checking for system compromises, but,
Unfortunately current OSes are too complex!
We can’t even reliably read system memory!
Due to various attacks, e.g. against DMA
But… maybe we should not be afraid of targeted stealth malware? Maybe it’s just FUD?
Targeted Stealth Malware?
Gartner: 10 Key Predictions for 2007:
#5: By the end of 2007, 75 percent of enterprises will be infected with undetected, financially motivated, targeted malware that evaded their traditional perimeter and host defenses. (source: eWeek based on Gartner)
Prevention vs. Detection
Prevention is not perfect as we saw,
Detection is very immature,
We should have better detection to verify our prevention mechanisms,
OS complexity is a problem when verifying system integrity
There is no way to implement effective detection without cooperation with the OS vendors!
Human Factor vs. Technology
“User stupidity” is only part of the problem (a small part)
Many modern attacks do not require the user to do anything “stupid” or suspicious (e.g. WiFi driver exploitation)
There is no technology on the market that offers unbreakable prevention
Even competent admins cannot do much about it
Current technology does not even allow for detecting much modern stealth malware!
Conscious users cannot find out whether their systems have been compromised – they can only count on the attacker’s mistakes!
Final Message
Human Factor is a weak link in computer security,
But the technology is also flawed!
We should work on improving the technology just as we work on educating users…
Unfortunately the challenges here are much bigger, mostly due to the over-complexity of current OSes.
As a savvy user, I would like to have technology that would protect me!
I don’t have it today! Not even effective detection!
Cooperation from OS vendors required!
Invisible Things Lab
Focus on Operating System Security
In contrast to application security and network security
Targeting 3 groups of customers
Vendors – assessing their products, advising
Corporate Customers (security consumers) – unbiased advice about which technology to deploy
Law enforcement/forensic investigators – educating about current threats (e.g. stealth malware)
Thank You
Joanna Rutkowska, Invisible Things Lab
joanna@invisiblethingslab.com
Topics For Roundtable Discussion
1. Virtualization-based malware (a-little-bit-technical topic)
how different from “classic” kernel malware?
should we be afraid?
defense approaches
2. Tricky tricks!
why should we avoid tricks when building security?
built-in security vs. 3rd party-provided security?
3. “Dumb users”
human factor vs. technology
Can users be educated in security? Should they?