chap5.ppt
Data Mining
Dr. Brahim Medjahed
brahim@umich.edu
CIS 4262 Information Systems Design and Analysis
Why Mining Data? (1)
• Commercial Point of View
  – Lots of data is being collected
    • Web data, e-commerce
    • Purchases at department/grocery stores
    • Bank/credit card transactions
  – High competitive pressure
    • Provide better, customized services for an edge (e.g., in Customer Relationship Management)
Why Mining Data? (2)
• Scientific Point of View
  – Data collected and stored at enormous speeds (TB/hour)
    • Remote sensors on a satellite
    • Telescopes scanning the skies
    • Microarrays generating gene expression data
    • Scientific simulations generating terabytes of data
  – Data mining may help scientists in:
    • Classifying and segmenting data
    • Hypothesis formation
Mining Large Data Sets - Motivation
• There is often information "hidden" in the data that is not readily evident
• Human analysts may take weeks to discover useful information
• Much of the data is never analyzed at all
[Figure: "The Data Gap" - total new disk capacity (TB) since 1995, plotted on a scale of 0 to 4,000,000 TB over 1995-1999, far outpaces the number of analysts]
From: R. Grossman, C. Kamath, V. Kumar, “Data Mining for Scientific and Engineering Applications”
Applications of Data Mining
• Marketing:
  – Identify likely responders to sales promotions
  – Analyze consumer behavior based on buying patterns
  – Determine marketing strategies, including advertising, store location, and targeted mailing
• Fraud detection
  – Which types of transactions are likely to be fraudulent, given the demographics and transactional history of a particular customer?
• Customer relationship management
  – Which of my customers are likely to be the most loyal, and which are most likely to leave for a competitor?
• Banking: loan/credit card approval
  – Predict good customers based on the history of past customers
• Healthcare
  – Analysis of effectiveness of certain treatments
  – Relating patient wellness data with doctor qualifications
  – Analyzing side effects of drugs
• ….
What is Data Mining?
• Process of semi-automatically analyzing large databases to find patterns that are:
  – Valid: hold on new data with some certainty
  – Novel: non-obvious to the system
  – Useful: it should be possible to act on the pattern
  – Understandable: humans should be able to interpret the pattern
• Another definition
  – Exploration and analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns
Data Mining and Data Warehousing
Data Warehousing provides the Enterprise with a memory
Data Mining provides the Enterprise with intelligence
Data Mining as Part of the Knowledge Discovery Process (1)
• Example
  – Transaction database maintained by a specialty consumer goods retailer
  – Client data includes:
    • Customer name, zip code, phone number, date of purchase, item code, price, quantity, total amount
• Problem Formulation
  – Plan additional store locations based on demographics
  – Run store promotions
  – Combine items in advertisements
  – Plan seasonal marketing strategies
  – etc.
Data Mining as Part of the Knowledge Discovery Process (2)
• Data Selection
  – Data about specific items or categories of items, or from stores in a specific region or area of the country, may be selected
• Data Cleansing
  – Correct invalid zip codes or eliminate records with incorrect phone prefixes
• Enrichment
  – Enhances the data with additional sources of information
  – The store may purchase data about age, income, and credit rating and append them to each record
Data Mining as Part of the Knowledge Discovery Process (3)
• Data Transformation and Encoding
  – Done to reduce the data
  – Zip codes may be aggregated into geographic regions, incomes may be divided into ten ranges, …
• Data Mining
  – Mine different rules and patterns
    • Whenever a customer buys video equipment, he/she also buys another electronic gadget
• Reporting and Display of the Discovered Information
  – Results may be reported in a variety of formats, such as listings, graphic outputs, summary tables, or visualizations
Goals of Data Mining
• Prediction
• Identification
• Optimization
• Classification
Goals of Data Mining (1)
• Prediction
  – How certain attributes within the data will behave in the future
    • Predict what consumers will buy under certain discounts
    • Predict how much sales volume a store would generate in a given period
    • Predict whether deleting a product line would yield more profits
    • Predict an earthquake based on certain seismic wave patterns
Goals of Data Mining (2)
• Identification
  – Use data patterns to identify the existence of an item, an event, or an activity
    • Intruders trying to break into a system may be identified by the programs executed, files accessed, and CPU time per session
    • Existence of a gene may be identified by certain sequences of nucleotide symbols in the DNA sequence
• Optimization
  – Optimize the use of limited resources such as time, space, money, or materials, and maximize output variables such as sales or profits under certain constraints
Goals of Data Mining (3)
• Classification
  – Partition data so that different classes or categories can be identified based on combinations of parameters
    • Customers in a supermarket may be categorized into:
      – Discount-seeking shoppers
      – Shoppers in a rush
      – Loyal regular shoppers
      – Infrequent shoppers
Data Mining Tasks
• Association Rule Discovery
• Classification
• Clustering
• Detection of Sequential Patterns
• Detection of Patterns within Time Series
• Etc.
Association Rules
• Given a set of records, each of which contains some number of items from a given collection:
  – Produce dependency rules that predict the occurrence of an item based on occurrences of other items

TID | Items
1   | Bread, Coke, Milk
2   | Beer, Bread
3   | Beer, Coke, Diaper, Milk
4   | Beer, Bread, Diaper, Milk
5   | Coke, Diaper, Milk

Rules Discovered:
{Milk} --> {Coke}
{Diaper, Milk} --> {Beer}
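As an illustrative sketch (not part of the original slides), the two discovered rules can be checked against the five transactions with a few lines of Python; `support` and `confidence` here follow the definitions given later in the deck:

```python
# Sketch: verify the two discovered rules on the five transactions above.
transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]

def support(itemset, txns):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in txns) / len(txns)

def confidence(lhs, rhs, txns):
    """Support of lhs ∪ rhs divided by support of lhs."""
    return support(lhs | rhs, txns) / support(lhs, txns)

# {Milk} --> {Coke}: Milk in txns 1,3,4,5; Coke with Milk in txns 1,3,5
print(confidence({"Milk"}, {"Coke"}, transactions))           # ≈ 0.75
# {Diaper, Milk} --> {Beer}: Diaper+Milk in txns 3,4,5; Beer in 3,4
print(confidence({"Diaper", "Milk"}, {"Beer"}, transactions))  # ≈ 0.67
```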
Association Rules – Applications (1)
• Marketing and Sales Promotion:
  – Let the rule discovered be {Bagels, … } --> {Potato Chips}
  – Potato Chips as consequent => can be used to determine what should be done to boost its sales
  – Bagels in the antecedent => can be used to see which products would be affected if the store discontinues selling bagels
  – Bagels in antecedent and Potato Chips in consequent => can be used to see what products should be sold with Bagels to promote the sale of Potato Chips!
Association Rules – Applications (2)
• Supermarket Shelf Management:
  – Goal: identify items that are bought together by sufficiently many customers
  – Approach: process the point-of-sale data collected with barcode scanners to find dependencies among items
• Inventory Management:
  – Goal: a consumer appliance repair company wants to anticipate the nature of repairs on its consumer products and keep the service vehicles equipped with the right parts, to reduce the number of visits to consumer households
  – Approach: process the data on tools and parts required in previous repairs at different consumer locations and discover the co-occurrence patterns
Prevalent Rules vs. Interesting Rules
• Analysts already know about prevalent rules
• Interesting rules are those that deviate from prior expectation
• Mining’s payoff is in finding surprising phenomena
[Cartoon: in 1995 an analyst exclaims "Milk and cereal sell together!"; by 1998 the same finding draws only a yawn - "Zzzz... Milk and cereal sell together!"]
Classification
• Given old data about customers and payments, predict new applicant’s loan eligibility.
[Diagram: previous customers' records (Age, Salary, Profession, Location, Customer type) feed a classifier, which produces decision rules such as "Salary > 5 L" and "Prof. = Exec"; a new applicant's data is then classified as good/bad]
Classification: Definition
• Predictive task: Use some variables to predict unknown or future values of other variables
• Given a collection of records (training set):
  – Each record contains a set of attributes; one of the attributes is the class
• Find a model for the class attribute as a function of the values of the other attributes
• Goal: previously unseen records should be assigned a class as accurately as possible
  – A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it
Classification Example
Training Set (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class):

Tid | Refund | Marital Status | Taxable Income | Cheat
1   | Yes    | Single         | 125K           | No
2   | No     | Married        | 100K           | No
3   | No     | Single         | 70K            | No
4   | Yes    | Married        | 120K           | No
5   | No     | Divorced       | 95K            | Yes
6   | No     | Married        | 60K            | No
7   | Yes    | Divorced       | 220K           | No
8   | No     | Single         | 85K            | Yes
9   | No     | Married        | 75K            | No
10  | No     | Single         | 90K            | Yes

Test Set (class unknown):

Refund | Marital Status | Taxable Income | Cheat
No     | Single         | 75K            | ?
Yes    | Married        | 50K            | ?
No     | Married        | 150K           | ?
Yes    | Divorced       | 90K            | ?
No     | Single         | 40K            | ?
No     | Married        | 80K            | ?

A classifier learns a model from the training set and applies it to the test set.
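As a minimal sketch of the learn-then-classify flow (not the slides' method - the single rule below is hand-crafted by inspection and merely happens to fit these ten training records):

```python
# Training records from the slide: (Refund, MaritalStatus, TaxableIncome, Cheat)
training = [
    ("Yes", "Single",   125, "No"),  ("No",  "Married", 100, "No"),
    ("No",  "Single",    70, "No"),  ("Yes", "Married", 120, "No"),
    ("No",  "Divorced",  95, "Yes"), ("No",  "Married",  60, "No"),
    ("Yes", "Divorced", 220, "No"),  ("No",  "Single",   85, "Yes"),
    ("No",  "Married",   75, "No"),  ("No",  "Single",   90, "Yes"),
]

def classify(refund, marital, income):
    """Hand-crafted rule: unmarried, no-refund filers with income >= 80K
    are predicted to cheat. Chosen by inspection, for illustration only."""
    if refund == "No" and marital in {"Single", "Divorced"} and income >= 80:
        return "Yes"
    return "No"

# The rule reproduces every training label...
assert all(classify(r, m, i) == c for r, m, i, c in training)

# ...and can then be applied to the unlabeled test records.
test_records = [("No", "Single", 75), ("Yes", "Married", 50),
                ("No", "Married", 150), ("Yes", "Divorced", 90),
                ("No", "Single", 40), ("No", "Married", 80)]
print([classify(*rec) for rec in test_records])
```

In practice the model is learned automatically (e.g., as a decision tree) rather than written by hand, and its accuracy is measured on the held-out test set.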
Classification - Application (1)
• Direct Marketing
  – Goal: reduce cost of mailing by targeting a set of consumers likely to buy a new cell-phone product
  – Approach:
    • Use the data for a similar product introduced before
    • We know which customers decided to buy and which decided otherwise. This {buy, don't buy} decision forms the class attribute
    • Collect various demographic, lifestyle, and company-interaction related information about all such customers
      – Type of business, where they stay, how much they earn, etc.
    • Use this information as input attributes to learn a classifier model
Classification - Application (2)
• Fraud Detection
  – Goal: predict fraudulent cases in credit card transactions
  – Approach:
    • Use credit card transactions and the information on the account-holder as attributes
      – When does a customer buy, what does he buy, how often does he pay on time, etc.
    • Label past transactions as fraudulent or fair. This forms the class attribute
    • Learn a model for the class of the transactions
    • Use this model to detect fraud by observing credit card transactions on an account
Classification - Application (3)
• Customer Loyalty:
  – Goal: predict whether a customer is likely to be lost to a competitor
  – Approach:
    • Use detailed records of transactions with each of the past and present customers to find attributes
      – How often the customer calls, where he calls, what time of day he calls most, his financial status, marital status, etc.
    • Label the customers as loyal or disloyal
    • Find a model for loyalty
Clustering
• Descriptive task: Find human-interpretable patterns that describe the data
• Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that:
  – Data points in one cluster are more similar to one another
  – Data points in separate clusters are less similar to one another
• Key requirement: a good measure of similarity between instances
  – Similarity measures:
    • Euclidean distance if attributes are continuous
    • Other problem-specific measures
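A minimal sketch of Euclidean distance as the similarity measure, used to assign points to the nearest of two illustrative cluster centers (the centers and points below are made-up example values):

```python
import math

def euclidean(p, q):
    """Euclidean distance between two points with continuous attributes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Assumed cluster centers and data points, for illustration only.
centers = [(0.0, 0.0), (10.0, 10.0)]
points = [(1, 2), (9, 8), (0, 1), (11, 10)]

# Assign each point to the index of its nearest center.
assignments = [min(range(len(centers)),
                   key=lambda k: euclidean(p, centers[k]))
               for p in points]
print(assignments)  # [0, 1, 0, 1]
```

Algorithms such as k-means repeat this assignment step while also updating the centers; only the distance-based assignment is shown here.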
Clustering - Application
• Document Clustering:
  – Goal: find groups of documents that are similar to each other based on the important terms appearing in them
  – Approach: identify frequently occurring terms in each document; form a similarity measure based on the frequencies of different terms; use it to cluster
  – Gain: information retrieval can utilize the clusters to relate a new document or search term to clustered documents
Detection of Sequential Patterns
• A sequence of actions or events is sought
• Given a set of objects, each associated with its own timeline of events, find rules that predict strong sequential dependencies among different events
(A B) (C) (D E)
Sequential Patterns - Examples
• In healthcare
  – If a patient underwent cardiac bypass surgery for blocked arteries and an aneurysm, and later developed high blood urea within a year of surgery, he or she is likely to suffer from kidney failure within the next 18 months
• In point-of-sale transaction sequences
  – Computer bookstore:
    (Intro_To_Visual_C) (C++_Primer) --> (Perl_for_dummies, Tcl_Tk)
  – Athletic apparel store:
    (Shoes) (Racket, Racketball) --> (Sports_Jacket)
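A small sketch of the containment test underlying such patterns: a sequential pattern (an ordered list of event sets) matches a customer's timeline if its elements occur in order, each contained in some later transaction. The function name and data are illustrative:

```python
def matches(pattern, timeline):
    """True if the elements of `pattern` occur in order in `timeline`,
    each element contained in some strictly later transaction."""
    pos = 0
    for element in pattern:
        # Advance to the next transaction containing this element.
        while pos < len(timeline) and not element <= timeline[pos]:
            pos += 1
        if pos == len(timeline):
            return False
        pos += 1
    return True

# Timeline from the athletic apparel example above.
timeline = [{"Shoes"}, {"Racket", "Racketball"}, {"Sports_Jacket"}]
pattern = [{"Shoes"}, {"Racket"}, {"Sports_Jacket"}]
print(matches(pattern, timeline))                          # True
print(matches([{"Sports_Jacket"}, {"Shoes"}], timeline))   # False
```

Mining then consists of counting, over many customers' timelines, which patterns match sufficiently often.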
Detection of Patterns within Time Series
• Similarities can be detected within positions of the time series
• Examples
  – Stocks of a utility company ABC Power and a financial company XYZ Securities show the same pattern during 1998 in terms of closing stock price
  – Two products show the same selling pattern in summer but a different one in winter
More about Association Rules
• Retail Shops
  – Someone who buys bread is quite likely to buy milk
  – A person who bought the book "Database System Concepts" is quite likely to buy the book "Operating Systems Concepts"
• Data consists of 2 parts:
  – Transactions: customers' purchases
  – Items: things that were bought
• For each transaction, there is a list of items
Example

Transaction ID | Time | Items Bought
101  | 6:35 | Milk, bread, juice
792  | 7:38 | Milk, juice
1130 | 8:05 | Milk, eggs
1735 | 8:40 | Bread, cookies, coffee

[Venn diagram: customers who buy milk, customers who buy juice, and their overlap - customers who buy both]
Association Rule Form
• X --> Y
  – Where X = {x1, …, xn} and Y = {y1, …, ym} are sets of items
  – If a customer buys X, he/she is likely to buy Y
• We need to automatically discover such association rules
• Two concepts measure the strength of an association:
  – Support
  – Confidence
Support
• Support of X --> Y
  – Also called prevalence
  – Percentage of transactions that contain all the items of the union X ∪ Y
  – Probability that the two item sets occur together
  – Estimated by:

    support(X --> Y) = (transactions that contain every item in X and Y) / (all transactions)
Example

Transaction ID | Time | Items Bought
101  | 6:35 | Milk, bread, juice
792  | 7:38 | Milk, juice
1130 | 8:05 | Milk, eggs
1735 | 8:40 | Bread, cookies, coffee

- Support of Milk --> Juice?
- Support of Bread --> Juice?
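The two questions can be worked out directly from the table; a short sketch of the support estimate over the four transactions:

```python
# Sketch: support over the four transactions in the example table.
transactions = [
    {"milk", "bread", "juice"},
    {"milk", "juice"},
    {"milk", "eggs"},
    {"bread", "cookies", "coffee"},
]

def support(itemset, txns):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in txns) / len(txns)

print(support({"milk", "juice"}, transactions))   # 0.5  (2 of 4 transactions)
print(support({"bread", "juice"}, transactions))  # 0.25 (1 of 4 transactions)
```

So Milk --> Juice has 50% support and Bread --> Juice has 25% support.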
Large and Small Itemsets
• Support threshold: user-specified
• Large Itemsets
  – Sets of items whose support exceeds the threshold
• Small Itemsets
  – Sets of items whose support is below the threshold
Confidence
• Confidence of the rule X --> Y
  – Conditional probability of a transaction containing item set Y given that it contains item set X
  – Estimated by:

    confidence(X --> Y) = (transactions that contain every item in X and Y) / (transactions that contain the items in X)

• Confidence threshold: user-specified
Example

Transaction ID | Time | Items Bought
101  | 6:35 | Milk, bread, juice
792  | 7:38 | Milk, juice
1130 | 8:05 | Milk, eggs
1735 | 8:40 | Bread, cookies, coffee

- Confidence of Milk --> Juice?
- Confidence of Bread --> Juice?
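A short sketch of the confidence estimate over the same four transactions, using raw counts so the ratio matches the formula above:

```python
# Sketch: confidence over the four transactions in the example table.
transactions = [
    {"milk", "bread", "juice"},
    {"milk", "juice"},
    {"milk", "eggs"},
    {"bread", "cookies", "coffee"},
]

def count(itemset, txns):
    """Number of transactions containing every item in the set."""
    return sum(itemset <= t for t in txns)

def confidence(lhs, rhs, txns):
    """Transactions containing lhs ∪ rhs over transactions containing lhs."""
    return count(lhs | rhs, txns) / count(lhs, txns)

print(confidence({"milk"}, {"juice"}, transactions))   # 2/3 ≈ 0.667
print(confidence({"bread"}, {"juice"}, transactions))  # 1/2 = 0.5
```

Milk appears in 3 transactions and juice co-occurs in 2 of them (confidence 2/3); bread appears in 2 transactions and juice co-occurs in 1 (confidence 1/2).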
Goal of Mining Association Rules
• Generate all possible rules that exceed some minimum user-specified support and confidence thresholds
Generating Large Itemsets – Exploratory Method
• Consider all possibilities
• Example: three items a, b, and c– {a}, {b}, {c}, {a,b}, {b,c}, {a,c}, {a,b,c}
• Works for a very small number of items
• Very computation intensive if the number of items becomes large (thousands)
  – If the number of items is m, then the number of distinct itemsets is 2^m
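The exploratory method can be sketched as a brute-force enumeration; even listing the candidates is exponential in m:

```python
from itertools import combinations

def all_itemsets(items):
    """Every non-empty subset of `items` - the exploratory method's
    candidate space, of size 2^m - 1 (2^m counting the empty set)."""
    return [set(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

subsets = all_itemsets(["a", "b", "c"])
print(subsets)       # {a}, {b}, {c}, {a,b}, {a,c}, {b,c}, {a,b,c}
print(len(subsets))  # 7 = 2^3 - 1
```

With m = 30 items this is already over a billion candidate itemsets, which is why the level-wise A Priori method below is used instead.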
Generating Large Itemsets – A Priori Method
• Idea– A subset of a large itemset must also be large
• if {juice, milk, bread} is frequent, so is {juice, bread}• Every transaction having {juice, milk, bread} also contains
{juice, bread}– Conversely, an extension of a small itemset is also
small• If there is any itemset which is infrequent, its superset should
not be generated/tested!
• Overview:– Only sets with single items are considered in the first
pass.– In the second pass, sets with two items are
considered, and so on.
Generating Large Itemsets – A Priori Method (cont’d)
1. Test the support for itemsets of length 1 (1-itemsets) by scanning the database
   a. Discard those that do not exceed the threshold
2. Extend the large 1-itemsets into 2-itemsets by appending one item at a time, to generate all candidate itemsets of length two
   a. Test the support for all candidate itemsets by scanning the database and eliminate those 2-itemsets that do not meet the minimum support
3. Repeat the above steps
   a. At step k, the previously found (k-1)-itemsets are extended into k-itemsets and tested for minimum support
4. The process is repeated until no large itemsets can be found
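The steps above can be sketched in a few dozen lines; this is a simplified illustration (with an absolute minimum-support count rather than a percentage), not a production implementation:

```python
from itertools import combinations

def apriori(transactions, min_count=2):
    """Level-wise generation of large (frequent) itemsets, following the
    four steps above. Returns all itemsets with support >= min_count."""
    items = sorted({i for t in transactions for i in t})
    # Pass 1: large 1-itemsets.
    large = [frozenset([i]) for i in items
             if sum(i in t for t in transactions) >= min_count]
    result = list(large)
    k = 2
    while large:
        # Extend (k-1)-itemsets into candidate k-itemsets.
        candidates = {a | b for a in large for b in large if len(a | b) == k}
        # Prune candidates that have an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in large
                             for s in combinations(c, k - 1))}
        # Scan the database to keep only candidates with enough support.
        large = [c for c in candidates
                 if sum(c <= t for t in transactions) >= min_count]
        result += large
        k += 1
    return result

# Database TDB from the example on the next slide.
db = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
for s in apriori(db):
    print(sorted(s))
```

On this database the sketch recovers exactly the itemsets of the worked example: {A}, {B}, {C}, {E}, {A,C}, {B,C}, {B,E}, {C,E}, and {B,C,E}.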
The A Priori Method - Example
Database TDB:

Tid | Items
10  | A, C, D
20  | B, C, E
30  | A, B, C, E
40  | B, E

1st scan - candidate 1-itemsets C1 with support counts:
{A}: 2, {B}: 3, {C}: 3, {D}: 1, {E}: 3

Large 1-itemsets L1 (support ≥ 50%, i.e., count ≥ 2):
{A}: 2, {B}: 3, {C}: 3, {E}: 3

Candidate 2-itemsets C2:
{A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}

2nd scan - support counts for C2:
{A,B}: 1, {A,C}: 2, {A,E}: 1, {B,C}: 2, {B,E}: 3, {C,E}: 2

Large 2-itemsets L2:
{A,C}: 2, {B,C}: 2, {B,E}: 3, {C,E}: 2

Candidate 3-itemsets C3:
{B,C,E}

3rd scan - large 3-itemsets L3:
{B,C,E}: 2

Rules (frequency ≥ 50%, confidence 100%):
A --> C, B --> E, BC --> E, CE --> B, BE --> C