
Page 1:

Adaptive Fraud Detection

by Tom Fawcett and Foster Provost

Presented by: Yunfei Zhao (updated from last year's presentation by Adam Boyer)

Page 2:

Outline
• Problem Description
  - Cellular cloning fraud problem
  - Why it is important
  - Current strategies
• Construction of Fraud Detector Framework
  - Rule learning, monitor construction, evidence combination
• Experiments and Evaluation
  - Data used in this study
  - Data preprocessing
  - Comparative results
• Conclusion
• Exam Questions

Page 3:

The Problem

Page 4:

Cellular Fraud - Cloning
• Cloning fraud is a kind of superimposition fraud: fraudulent usage is superimposed upon (added to) the legitimate usage of an account.
• It causes inconvenience to customers and great expense to cellular service providers.
• Other examples of superimposition fraud: credit card fraud, calling card fraud, and some types of computer intrusion.

Page 5:

Cellular Communications and Cloning Fraud
• The Mobile Identification Number (MIN) and Electronic Serial Number (ESN) identify a specific account.
• They are periodically transmitted, unencrypted, whenever the phone is on.
• Cloning occurs when a customer's MIN and ESN are programmed into a cellular phone not belonging to the customer.

Page 6:

Interest in Reducing Cloning Fraud
Fraud is detrimental in several ways:
• Fraudulent usage congests cell sites.
• Fraud incurs land-line usage charges.
• Cellular carriers must pay costs to other carriers for usage outside the home territory.
• The crediting process is costly to the carrier and inconvenient to the customer.

Page 7:

Strategies for Dealing with Cloning Fraud
• Pre-call methods
  - Identify and block fraudulent calls as they are made.
  - Validate the phone or its user when a call is placed.
• Post-call methods
  - Identify fraud that has already occurred on an account so that further fraudulent usage can be blocked.
  - Periodically analyze call data on each account to determine whether fraud has occurred.

Page 8:

Pre-call Methods

• Personal Identification Number (PIN)
  - PIN cracking is possible with more sophisticated equipment.
• RF Fingerprinting
  - A method of identifying phones by their unique transmission characteristics.
• Authentication
  - A reliable and secure private-key encryption method.
  - Requires special hardware capability.
  - An estimated 30 million non-authenticatable phones were in use in the US alone (in 1997).

Page 9:

Post-call Methods

• Collision detection
  - Analyze call data for temporally overlapping calls.
• Velocity checking
  - Analyze the locations and times of consecutive calls.
• Disadvantage of both methods
  - Their usefulness depends upon a moderate level of legitimate activity.

Page 10:

Another Post-call Method (Main Focus of This Paper)
• User profiling
  - Analyze calling behavior to detect usage anomalies suggestive of fraud.
  - Works well with low-usage customers.
  - A good complement to collision and velocity checking because it covers cases the others might miss.

Page 11:

Sample Defrauded Account

Date     Time      Day  Duration    Origin          Destination     Fraud
1/01/95  10:05:01  Mon  13 minutes  Brooklyn, NY    Stamford, CT
1/05/95  14:53:27  Fri  5 minutes   Brooklyn, NY    Greenwich, CT
1/08/95  09:42:01  Mon  3 minutes   Bronx, NY       Manhattan, NY
1/08/95  15:01:24  Mon  9 minutes   Brooklyn, NY    Brooklyn, NY
1/09/95  15:06:09  Tue  5 minutes   Manhattan, NY   Stamford, CT
1/09/95  16:28:50  Tue  53 seconds  Brooklyn, NY    Brooklyn, NY
1/10/95  01:45:36  Wed  35 seconds  Boston, MA      Chelsea, MA     Bandit
1/10/95  01:46:29  Wed  34 seconds  Boston, MA      Yonkers, NY     Bandit
1/10/95  01:50:54  Wed  39 seconds  Boston, MA      Chelsea, MA     Bandit
1/10/95  11:23:28  Wed  24 seconds  Brooklyn, NY    Congers, NY
1/11/95  22:00:28  Thu  37 seconds  Boston, MA      Boston, MA      Bandit
1/11/95  22:04:01  Thu  37 seconds  Boston, MA      Boston, MA      Bandit

Page 12:

The Need to be Adaptive

• Patterns of fraud are dynamic: bandits constantly change their strategies in response to new detection techniques.
• Levels of fraud can change dramatically from month to month.
• The costs of missing fraud and of dealing with false alarms change with inter-carrier contracts.

Page 13:

Automatic Construction of Profiling Fraud Detectors

Page 14:

One Approach

Build a fraud detection system by classifying calls as fraudulent or legitimate.

However, there are two problems that make simple classification techniques infeasible.

Page 15:

Problems with simple classification

• Context
  - A call that would be unusual for one customer may be typical for another. (For example, a call placed from Brooklyn is not unusual for a subscriber who lives there, but might be very strange for a Boston subscriber.)
• Granularity
  - At the level of the individual call, the variation in calling behavior is large, even for a particular user.

Page 16:

The Learning Problem

1. Which call features are important?

2. How should profiles be created?

3. When should alarms be raised?

Page 17:

Detector Constructor Framework

Page 18:

Use of a Detector (DC-1)

A single account-day of calls:

Day  Time   Duration  Origin         Destination
Tue  1:42   10 min    Bronx, NY      Miami, FL
Tue  10:05  3 min     Scarsdale, NY  Bayonne, NJ
Tue  11:23  24 sec    Scarsdale, NY  Congers, NY
Tue  14:53  5 min     Tarrytown, NY  Greenwich, CT
Tue  15:06  5 min     Manhattan, NY  Westport, CT
Tue  16:28  53 sec    Scarsdale, NY  Congers, NY
Tue  23:40  17 min    Bronx, NY      Miami, FL

Profiling monitors applied to the account-day (examples):
• "Number of calls from the Bronx at night exceeds daily threshold"
• "Airtime from the Bronx at night"
• "Sunday airtime exceeds daily threshold"

The monitor outputs (e.g., 1, 27, 0) go through value normalization and weighting, and the evidence-combining step decides whether to raise a FRAUD ALARM (here: Yes).

Page 19:

Rule Learning – the 1st stage

• Rule generation
  - Rules are generated locally, based on differences between fraudulent and normal behavior for each account.
• Rule selection
  - The local rules are then combined in a rule selection step.

Page 20:

Rule Generation
• DC-1 uses the RL program to generate rules with certainty factors above a user-defined threshold.
• For each account, RL generates a "local" set of rules describing the fraud on that account.
• Example:

  (Time-of-Day = Night) AND (Location = Bronx) ==> FRAUD
  Certainty factor = 0.89
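
The paper's DC-1 uses the RL rule-learning program; the sketch below is only a rough, hypothetical illustration of the idea, not RL itself. It enumerates one- and two-condition conjunctions over a single account's calls and keeps those whose certainty factor (here simply the fraction of matching calls that are fraudulent) exceeds a user-defined threshold. The field names and toy data are invented for the example.

```python
from itertools import combinations

# Each call is a dict of categorical features plus a 'fraud' flag (hypothetical schema).
calls = [
    {"time_of_day": "night", "location": "Bronx", "fraud": True},
    {"time_of_day": "night", "location": "Bronx", "fraud": True},
    {"time_of_day": "day", "location": "Brooklyn", "fraud": False},
    {"time_of_day": "evening", "location": "Brooklyn", "fraud": False},
]

def certainty_factor(rule, calls):
    """Fraction of calls matching the rule that are fraudulent (a simple CF stand-in)."""
    matched = [c for c in calls if all(c.get(f) == v for f, v in rule)]
    if not matched:
        return 0.0
    return sum(c["fraud"] for c in matched) / len(matched)

def generate_local_rules(calls, features, min_cf=0.8):
    """Enumerate 1- and 2-condition conjunctions and keep the high-certainty ones."""
    values = {f: sorted({c[f] for c in calls}) for f in features}
    candidates = []
    # single-condition rules
    for f in features:
        candidates += [((f, v),) for v in values[f]]
    # two-condition conjunctions over distinct features
    for f1, f2 in combinations(features, 2):
        candidates += [((f1, v1), (f2, v2)) for v1 in values[f1] for v2 in values[f2]]
    rules = []
    for rule in candidates:
        cf = certainty_factor(rule, calls)
        if cf >= min_cf:
            rules.append((rule, cf))
    return rules

for rule, cf in generate_local_rules(calls, ["time_of_day", "location"]):
    conds = " AND ".join(f"({f} = {v})" for f, v in rule)
    print(f"{conds} ==> FRAUD   (CF = {cf:.2f})")
```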

Page 21:

Rule Selection

• The rule generation step typically yields tens of thousands of rules.
• If a rule is found in (covers) many accounts, it is probably worth using.
• The selection algorithm identifies a small set of general rules that cover the accounts.
• The resulting set of rules is used to construct specific monitors.
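
The paper describes selection as finding a small set of general rules that cover the accounts; a minimal greedy set-cover sketch of that idea (not necessarily the authors' exact algorithm) might look like this, where each candidate rule is mapped to the set of accounts whose local rule sets produced it.

```python
def select_rules(rule_to_accounts, min_new_coverage=1):
    """Greedy covering: repeatedly pick the rule that covers the most
    not-yet-covered accounts, until no rule adds enough new coverage."""
    covered = set()
    selected = []
    remaining = dict(rule_to_accounts)
    while remaining:
        # rule covering the largest number of still-uncovered accounts
        best_rule = max(remaining, key=lambda r: len(remaining[r] - covered))
        gain = len(remaining[best_rule] - covered)
        if gain < min_new_coverage:
            break
        selected.append(best_rule)
        covered |= remaining.pop(best_rule)
    return selected

# Hypothetical example: rules keyed by a readable name, valued by account ids.
rule_to_accounts = {
    "(time_of_day = night) AND (location = Bronx) ==> FRAUD": {"A1", "A2", "A7"},
    "(location = Bronx) ==> FRAUD": {"A1", "A2"},
    "(day = Sunday) ==> FRAUD": {"A3"},
}
print(select_rules(rule_to_accounts))
```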

Page 22:

Profiling Monitors – the 2nd stage
A monitor has two distinct steps:
• Profiling step: the monitor is applied to an account's non-fraud usage to measure the account's normal activity. The resulting statistics are saved with the account.
• Use step: the monitor processes a single account-day, references the normality measure from profiling, and generates a numeric value describing how abnormal the current account-day is.

Page 23:

Most Common Monitor Templates
• Threshold
• Standard deviation

Page 24:

Threshold Monitors

Page 25:

Standard Deviation Monitors

Page 26:

Example for a Standard Deviation Monitor

Rule: (TIME-OF-DAY = NIGHT) AND (LOCATION = BRONX) ==> FRAUD

Profiling Step - the subscriber called from the Bronx an average of 5 minutes per night with a standard deviation of 2 minutes. At the end of the Profiling step, the monitor would store the values (5,2) with that account.

Use step - if the monitor processed a day containing 3 minutes of airtime from the Bronx at night, the monitor would emit a zero; if the monitor saw 15 minutes, it would emit (15 - 5)/2 = 5. This value denotes that the account is five standard deviations above its average (profiled) usage level.
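
As a concrete illustration, the hypothetical Python sketch below implements a standard-deviation monitor with the profiling and use steps described above, and reproduces the slide's numbers: a stored profile of (5, 2) and a 15-minute night from the Bronx yielding (15 - 5)/2 = 5. The class and field names are invented; the paper instantiates monitors from templates rather than hand-coding them.

```python
import statistics

class StdDevMonitor:
    """Standard-deviation monitor template for one rule, e.g.
    (time_of_day = night) AND (location = Bronx) ==> FRAUD.
    Names and record layout are illustrative, not from the paper."""

    def __init__(self, matches_rule):
        self.matches_rule = matches_rule  # predicate over a single call
        self.mean = 0.0
        self.std = 1.0

    def _daily_minutes(self, day_calls):
        return sum(c["minutes"] for c in day_calls if self.matches_rule(c))

    def profile(self, normal_days):
        """Profiling step: measure the account's normal activity on fraud-free days."""
        daily = [self._daily_minutes(day) for day in normal_days]
        self.mean = statistics.mean(daily)
        self.std = statistics.pstdev(daily) or 1.0  # avoid division by zero
        return self.mean, self.std

    def use(self, day_calls):
        """Use step: how many standard deviations above normal is this account-day?"""
        minutes = self._daily_minutes(day_calls)
        return max(0.0, (minutes - self.mean) / self.std)

# Reproduce the slide's example: profile stored (mean, std) = (5, 2);
# a day with 15 minutes of night airtime from the Bronx emits (15 - 5) / 2 = 5.
monitor = StdDevMonitor(lambda c: c["time_of_day"] == "night" and c["location"] == "Bronx")
monitor.mean, monitor.std = 5.0, 2.0
day = [{"minutes": 15, "time_of_day": "night", "location": "Bronx"}]
print(monitor.use(day))  # 5.0
```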

Page 27:

Combining Evidence from the Monitors – the 3rd stage
• Train a classifier whose attributes are the monitor outputs and whose class label is fraudulent or legitimate.
• The classifier weights the monitor outputs and learns a threshold on the sum to produce high-confidence alarms.
• DC-1 uses a Linear Threshold Unit (LTU): simple and fast.
• Feature selection chooses a small set of useful monitors for the final detector.
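
DC-1's evidence-combining stage is a Linear Threshold Unit over the monitor outputs. The sketch below is a generic perceptron-style LTU, shown only to make the idea concrete; the paper does not specify this exact training procedure, and the data here are made up.

```python
import numpy as np

def train_ltu(X, y, epochs=50, lr=0.1):
    """Train a Linear Threshold Unit on monitor outputs (X) and fraud/legitimate
    labels (y in {0, 1}) with a simple perceptron update rule.
    An illustrative stand-in for DC-1's LTU training, not its exact procedure."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = float(w @ xi + b > 0)
            err = yi - pred
            w += lr * err * xi
            b += lr * err
    return w, b

def alarm(w, b, monitor_outputs):
    """Raise a fraud alarm when the weighted sum of monitor outputs crosses the threshold."""
    return float(np.dot(w, monitor_outputs) + b) > 0

# Hypothetical account-days, each described by 3 monitor outputs.
X = [[0, 0.5, 0], [5, 3.0, 1], [0, 0.2, 0], [4, 2.5, 1]]
y = [0, 1, 0, 1]
w, b = train_ltu(X, y)
print(alarm(w, b, [5, 2.8, 1]))  # expected to alarm on a high-output account-day
```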

Page 28:

Data used in the study

Page 29:

Data Information
• 4 months of call records from the New York City area.
• Each call is described by 31 original attributes.
• Some derived attributes are added, e.g. Time-of-Day and To-Payphone.
• Each call is given a class label of fraudulent or legitimate.

Page 30:

Data Cleaning

• Credited calls made to numbers that are not in the created block were eliminated; the destination number must be called only by the legitimate user.
• Days with 1-4 minutes of fraudulent usage were discarded.
• Call times were normalized to Greenwich Mean Time for chronological sorting.

Page 31:

Data Selection
• Once the monitors are created and accounts profiled, the system transforms raw call data into a series of account-days, using the monitor outputs as features.
• Rule learning and selection: 879 accounts comprising over 500,000 calls.
• Profiling, training, and testing: 3,600 accounts with at least 30 fraud-free days of usage before any fraudulent usage.
  - The initial 30 days of each account were used for profiling.
  - The remaining days were used to generate 96,000 account-days.
  - Distinct training and testing accounts: 10,000 account-days for training and 5,000 for testing, with 20% fraud days and 80% non-fraud days.
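
As a hypothetical sketch of this transformation (reusing the monitor interface from the earlier standard-deviation example), raw calls could be grouped into account-days and turned into feature vectors as follows; the record layout is invented.

```python
from collections import defaultdict

def to_account_days(calls):
    """Group raw call records into account-days. Hypothetical layout: each call
    has 'account', 'date', and whatever fields the profiled monitors need."""
    days = defaultdict(list)
    for call in calls:
        days[(call["account"], call["date"])].append(call)
    return days

def featurize(account_days, monitors, labels):
    """Turn each account-day into (monitor outputs, fraud label) for the LTU."""
    dataset = []
    for key, day_calls in account_days.items():
        features = [m.use(day_calls) for m in monitors]  # one output per monitor
        dataset.append((features, labels[key]))
    return dataset

calls = [
    {"account": "A1", "date": "1/10/95", "minutes": 15, "time_of_day": "night", "location": "Bronx"},
    {"account": "A1", "date": "1/11/95", "minutes": 2, "time_of_day": "day", "location": "Brooklyn"},
]
# With monitors profiled as in the earlier sketch and a labels dict keyed by (account, date):
# dataset = featurize(to_account_days(calls), monitors, labels)
```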

Page 32:

Experiments and Evaluation

Page 33:

Output of DC-1 Components
• Rule learning: 3,630 rules, each covering at least two accounts.
• Rule selection: 99 rules.
• 2 monitor templates, yielding 198 monitors.
• Final feature selection: 11 monitors.

Page 34:

The Importance of Error Cost
• Classification accuracy is not sufficient to evaluate performance; misclassification costs should be taken into account.
• Estimated error costs:
  - False positive (false alarm): $5
  - False negative (letting a fraudulent account-day go undetected): $0.40 per minute of fraudulent airtime
• Factoring in error costs requires a second training pass by the LTU.
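
Using the estimated costs above, a simple (hypothetical) evaluation routine might total the cost of a detector's decisions over a set of account-days as follows; the data layout is invented for the example.

```python
def detection_cost(account_days, alarms, fp_cost=5.00, fn_cost_per_minute=0.40):
    """Total error cost using the slide's estimates: $5 per false alarm and
    $0.40 per minute of fraudulent airtime that goes undetected.
    `account_days` is a list of (is_fraud_day, fraud_minutes) pairs;
    `alarms` is the detector's decision for each day."""
    cost = 0.0
    for (is_fraud, fraud_minutes), alarmed in zip(account_days, alarms):
        if alarmed and not is_fraud:
            cost += fp_cost                              # false positive
        elif not alarmed and is_fraud:
            cost += fn_cost_per_minute * fraud_minutes   # false negative
    return cost

days = [(False, 0), (True, 30), (True, 10), (False, 0)]
alarms = [True, True, False, False]
print(detection_cost(days, alarms))  # 5 + 0 + 4.0 + 0 = 9.0
```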

Page 35:

Alternative Detection Methods
• Collisions + Velocities
  - Errors are almost entirely due to false positives.
• High Usage
  - Detects a sudden large jump in account usage.
• Best individual DC-1 monitor
  - (Time-of-day = Evening) ==> Fraud
• SOTA (State Of The Art)
  - Incorporates 13 hand-crafted profiling methods.
  - The best detectors identified in a previous study.

Page 36:

DC-1 vs. Alternatives

Detector                  Accuracy (%)   Cost ($)         Accuracy at Cost
Alarm on all              20             20000            20
Alarm on none             80             18111 +/- 961    80
Collisions + Velocities   82 +/- 0.3     17578 +/- 749    82 +/- 0.4
High Usage                88 +/- 0.7     6938 +/- 470     85 +/- 1.7
Best DC-1 monitor         89 +/- 0.5     7940 +/- 313     85 +/- 0.8
State of the art (SOTA)   90 +/- 0.4     6557 +/- 541     88 +/- 0.9
DC-1 detector             92 +/- 0.5     5403 +/- 507     91 +/- 0.8
SOTA plus DC-1            92 +/- 0.4     5078 +/- 319     91 +/- 0.8

Page 37:

Shifting Fraud Distributions
• A fraud detection system should adapt to shifting fraud distributions.
• To illustrate this point:
  - One non-adaptive DC-1 detector was trained on a fixed distribution (80% non-fraud) and tested against a range of 75-99% non-fraud.
  - Another DC-1 detector was allowed to adapt (re-train its LTU threshold) for each fraud distribution.
• The second detector was more cost effective than the first.
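
One way to picture the adaptive variant is re-fitting only the alarm threshold to the current fraud distribution while the learned weights stay fixed. The sketch below chooses the cutoff that minimizes total error cost on a sample from the new distribution; it illustrates the idea and is not the paper's exact retraining procedure, and the data are invented.

```python
import numpy as np

def retune_threshold(scores, labels, fraud_minutes, fp_cost=5.00, fn_cost_per_minute=0.40):
    """Re-fit only the alarm threshold on data from the current fraud distribution,
    choosing the cutoff that minimizes total error cost; the LTU weights stay fixed."""
    candidates = np.unique(scores)
    best_threshold, best_cost = None, float("inf")
    for t in candidates:
        cost = 0.0
        for s, y, minutes in zip(scores, labels, fraud_minutes):
            alarmed = s >= t
            if alarmed and not y:
                cost += fp_cost
            elif not alarmed and y:
                cost += fn_cost_per_minute * minutes
        if cost < best_cost:
            best_threshold, best_cost = t, cost
    return best_threshold, best_cost

# scores = weighted monitor sums produced by the fixed LTU weights (hypothetical values)
scores = np.array([0.2, 3.5, 0.4, 2.8, 0.1])
labels = [False, True, False, True, False]
minutes = [0, 40, 0, 25, 0]
print(retune_threshold(scores, labels, minutes))
```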

Page 38:

Effects of Changing Fraud Distribution

[Chart: detection cost (roughly 0-1.4) versus percentage of non-fraud account-days (75-100%), comparing the adaptive detector against the fixed 80/20 detector.]

Page 39:

DC-1 Component Contributions (1)
• High Usage detector
  - Profiles with respect to undifferentiated account usage.
  - Comparison with DC-1 demonstrates the benefit of using rule learning.
• Best individual DC-1 monitor
  - Demonstrates the benefit of combining evidence from multiple monitors.

Page 40:

DC-1 Component Contributions (2)
• Call classifier detectors
  - Represent rule learning without the benefit of account context.
  - Demonstrate the value of DC-1's rule generation step, which preserves account context.
• Shifting fraud distributions
  - Show the benefit of making evidence combination sensitive to the fraud distribution.

Page 41:

Conclusion
• DC-1 uses a rule-learning program to uncover indicators of fraudulent behavior from a large database of customer transactions.
• Then the indicators are used to create a set of monitors, which profile legitimate customer behavior and indicate anomalies.
• Finally, the outputs of the monitors are used as features in a system that learns to combine evidence to generate high-confidence alarms.

Page 42:

Conclusion

• Adaptability to dynamic patterns of fraud can be achieved by generating fraud detection systems automatically from data, using data mining techniques.
• DC-1 can adapt to the changing conditions typical of fraud detection environments.
• Experiments indicate that DC-1 performs better than other methods for detecting fraud.

Page 43:

Exam Questions

Page 44:

Question 1

• What are the two major fraud detection categories, how do they differ, and under which does DC-1 fall?

• Pre-call methods
  - Validate the phone or its user when a call is placed.

• Post-call methods (DC-1 falls here)
  - Analyze call data on each account to determine whether cloning fraud has occurred.

Page 45:

Question 2

• What are the three stages of DC-1?

• Rule learning and selection

• Profiling monitors

• Combining evidence from the monitors

Page 46:

Question 3

• Profiling monitors have two distinct stages associated with them. Describe them.

• Profiling step: The monitor is applied to a segment of an account's typical (non-fraud) usage in order to measure the account's normal activity.

• Use step: The monitor processes a single account-day at a time, references the normalcy measure from the profiling step, and generates a numeric value describing how abnormal the current account-day is.

Page 47:

The End. Questions?