CSE 5331/7331 Fall 2007
DATA MINING
Introductory and Related Topics

Margaret H. Dunham
Department of Computer Science and Engineering
Southern Methodist University

Slides extracted from Data Mining, Introductory and Advanced Topics, Prentice Hall, 2002.




Data Mining Outline

PART I
– Introduction
– Related Concepts

PART II
– Classification
– Clustering
– Association Rules


Introduction Outline

Define data mining
Data mining vs. databases
Basic data mining tasks
Data mining development
Data mining issues

Goal: Provide an overview of data mining.


Introduction

Data is growing at a phenomenal rate
Users expect more sophisticated information
How?

UNCOVER HIDDEN INFORMATION
DATA MINING


Data Mining Definition

Finding hidden information in a database
Fit data to a model
Similar terms
– Exploratory data analysis
– Data driven discovery
– Deductive learning


Data Mining Algorithm

Objective: Fit data to a model
– Descriptive
– Predictive
Preference – technique to choose the best model
Search – technique to search the data
– “Query”


Database Processing vs. Data Mining Processing

         Database                      Data Mining
Query    Well defined; SQL             Poorly defined; no precise query language
Data     Operational data              Not operational data
Output   Precise; subset of database   Fuzzy; not a subset of database


Query Examples

Database:
– Find all customers who have purchased milk.
– Find all credit applicants with last name of Smith.
– Identify customers who have purchased more than $10,000 in the last month.

Data Mining:
– Find all items which are frequently purchased with milk. (association rules)
– Find all credit applicants who are poor credit risks. (classification)
– Identify customers with similar buying habits. (clustering)

Data Mining Models and Tasks


Basic Data Mining Tasks

Classification maps data into predefined groups or classes
– Supervised learning
– Pattern recognition
– Prediction
Regression is used to map a data item to a real valued prediction variable.
Clustering groups similar data together into clusters.
– Unsupervised learning
– Segmentation
– Partitioning


Basic Data Mining Tasks (cont’d)

Summarization maps data into subsets with associated simple descriptions.
– Characterization
– Generalization
Link Analysis uncovers relationships among data.
– Affinity Analysis
– Association Rules
– Sequential Analysis determines sequential patterns.


Ex: Time Series Analysis

Example: Stock Market
Predict future values
Determine similar patterns over time
Classify behavior


Data Mining vs. KDD

Knowledge Discovery in Databases (KDD): process of finding useful information and patterns in data.
Data Mining: use of algorithms to extract the information and patterns derived by the KDD process.


KDD Process

Selection: Obtain data from various sources.
Preprocessing: Cleanse data.
Transformation: Convert to common format. Transform to new format.
Data Mining: Obtain desired results.
Interpretation/Evaluation: Present results to user in meaningful manner.

Modified from [FPSS96C]
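
To make the five steps concrete, here is a minimal Python sketch of the process as a pipeline; every function name and body is an illustrative placeholder, not anything prescribed by the slides.

# Minimal sketch of the KDD steps as a pipeline of functions.
# All bodies below are illustrative stand-ins (hypothetical).

def selection(sources):
    # Selection: obtain data from various sources.
    return [record for source in sources for record in source]

def preprocessing(data):
    # Preprocessing: cleanse data (here: drop records with missing fields).
    return [r for r in data if None not in r.values()]

def transformation(data):
    # Transformation: convert to a common format (here: normalize keys).
    return [{k.lower(): v for k, v in r.items()} for r in data]

def data_mining(data):
    # Data Mining: obtain desired results (here: trivial frequency counts).
    counts = {}
    for r in data:
        counts[r["item"]] = counts.get(r["item"], 0) + 1
    return counts

def interpretation(results):
    # Interpretation/Evaluation: present results in a meaningful manner.
    for item, count in sorted(results.items(), key=lambda kv: -kv[1]):
        print(item, count)

logs = [[{"Item": "milk"}, {"Item": "bread"}, {"Item": "milk"}]]
interpretation(data_mining(transformation(preprocessing(selection(logs)))))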


KDD Process Ex: Web Log

Selection:
– Select log data (dates and locations) to use
Preprocessing:
– Remove identifying URLs
– Remove error logs
Transformation:
– Sessionize (sort and group)
Data Mining:
– Identify and count patterns
– Construct data structure
Interpretation/Evaluation:
– Identify and display frequently accessed sequences.
Potential User Applications:
– Cache prediction
– Personalization


Data Mining Development

• Similarity Measures • Hierarchical Clustering • IR Systems • Imprecise Queries • Textual Data • Web Search Engines
• Bayes Theorem • Regression Analysis • EM Algorithm • K-Means Clustering • Time Series Analysis
• Neural Networks • Decision Tree Algorithms
• Algorithm Design Techniques • Algorithm Analysis • Data Structures
• Relational Data Model • SQL • Association Rule Algorithms • Data Warehousing • Scalability Techniques


KDD Issues

Human Interaction
Overfitting
Outliers
Interpretation
Visualization
Large Datasets
High Dimensionality


KDD Issues (cont’d)

Multimedia Data
Missing Data
Irrelevant Data
Noisy Data
Changing Data
Integration
Application


Social Implications of DM

Privacy
Profiling
Unauthorized use


Data Mining Metrics

Usefulness
Return on Investment (ROI)
Accuracy
Space/Time


Visualization Techniques

Graphical
Geometric
Icon-based
Pixel-based
Hierarchical
Hybrid


Models Based on Summarization

Visualization: frequency distribution, mean, variance, median, mode, etc.

Box Plot: [figure]

Scatter Diagram


Related Concepts Outline

Database/OLTP Systems
Fuzzy Sets and Logic
Information Retrieval (Web Search Engines)
Dimensional Modeling
Data Warehousing
OLAP/DSS
Statistics
Machine Learning
Pattern Matching

Goal: Examine some areas which are related to data mining.


DB & OLTP Systems

Schema
– (ID, Name, Address, Salary, JobNo)
Data Model
– ER
– Relational
Transaction
Query:

SELECT Name
FROM T
WHERE Salary > 100000

DM: Only imprecise queries


Fuzzy Sets and Logic

Fuzzy Set: set membership function is a real valued function with output in the range [0,1].
– f(x): probability x is in F.
– 1 − f(x): probability x is not in F.
Ex:
– T = {x | x is a person and x is tall}
– Let f(x) be the probability that x is tall
– Here f is the membership function

DM: Prediction and classification are fuzzy.
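
As a small illustration of a membership function in code, the following Python sketch defines a piecewise-linear f(x) for the set of tall people; the 1.5m and 2.0m breakpoints are assumed values chosen for the example, not numbers from the slides.

def tall_membership(height_m):
    # Fuzzy membership f(x) for T = {x | x is tall}; output in [0, 1].
    # Breakpoints are illustrative assumptions.
    lo, hi = 1.5, 2.0
    if height_m <= lo:
        return 0.0
    if height_m >= hi:
        return 1.0
    return (height_m - lo) / (hi - lo)  # linear ramp between lo and hi

f = tall_membership(1.8)
print(f, 1 - f)  # ~0.6 and ~0.4: probability tall vs. probability not tall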

Fuzzy Sets


Classification/Prediction is Fuzzy

[Figure: Accept/Reject decision vs. loan amount, contrasting a simple (crisp) threshold with a fuzzy boundary]


Information Retrieval

Information Retrieval (IR): retrieving desired information from textual data.
– Library Science
– Digital Libraries
– Web Search Engines
Traditionally keyword based
Sample query: Find all documents about “data mining”.

DM: Similarity measures; mine text/Web data.


Information Retrieval (cont’d)

Similarity: measure of how close a query is to a document.
Documents which are “close enough” are retrieved.
Metrics:
– Precision = |Relevant and Retrieved| / |Retrieved|
– Recall = |Relevant and Retrieved| / |Relevant|
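
Both metrics translate directly into code; a sketch, assuming documents are identified by simple ids:

def precision_recall(relevant, retrieved):
    # Precision = |Relevant and Retrieved| / |Retrieved|
    # Recall    = |Relevant and Retrieved| / |Relevant|
    hits = len(relevant & retrieved)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 3 of the 4 retrieved documents are relevant; 6 documents are relevant overall.
print(precision_recall({1, 2, 3, 4, 5, 6}, {1, 2, 3, 99}))  # (0.75, 0.5)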


IR Query Result Measures and Classification

[Figure: parallel diagrams of IR result measures and classification outcomes]


Dimensional Modeling

View data in a hierarchical manner more as business executives might
Useful in decision support systems and mining
Dimension: collection of logically related attributes; axis for modeling data.
Facts: data stored
Ex: Dimensions – products, locations, date; Facts – quantity, unit price

DM: May view data as dimensional.


Relational View of Data

ProdID  LocID       Date    Quantity  UnitPrice
123     Dallas      022900  5         25
123     Houston     020100  10        20
150     Dallas      031500  1         100
150     Dallas      031500  5         95
150     Fort Worth  021000  5         80
150     Chicago     012000  20        75
200     Seattle     030100  5         50
300     Rochester   021500  200       5
500     Bradenton   022000  15        20
500     Chicago     012000  10        25


Dimensional Modeling Queries

Roll Up: more general dimension
Drill Down: more specific dimension
Dimension (Aggregation) Hierarchy
SQL uses aggregation
Decision Support Systems (DSS): computer systems and tools to assist managers in making decisions and solving problems.

Cube View of Data

Aggregation Hierarchies

Star Schema


Data Warehousing

“Subject-oriented, integrated, time-variant, nonvolatile” – William Inmon
Operational Data: data used in day-to-day needs of company.
Informational Data: supports other functions such as planning and forecasting.
Data mining tools often access data warehouses rather than operational data.

DM: May access data in warehouse.


Operational vs. Informational

              Operational Data    Data Warehouse
Application   OLTP                OLAP
Use           Precise queries     Ad hoc
Temporal      Snapshot            Historical
Modification  Dynamic             Static
Orientation   Application         Business
Data          Operational values  Integrated
Size          Gigabits            Terabits
Level         Detailed            Summarized
Access        Often               Less often
Response      Few seconds         Minutes
Data Schema   Relational          Star/Snowflake


OLAP

Online Analytic Processing (OLAP): provides more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
Dimensional data; cube view
Visualization of operations:
– Slice: examine sub-cube.
– Dice: rotate cube to look at another dimension.
– Roll Up/Drill Down

DM: May use OLAP queries.


OLAP Operations

[Figure: operations on a cube – single cell, multiple cells, Slice, Dice, Roll Up, Drill Down]


Statistics

Simple descriptive models
Statistical inference: generalizing a model created from a sample of the data to the entire dataset.
Exploratory Data Analysis:
– Data can actually drive the creation of the model
– Opposite of traditional statistical view.
Data mining targeted to business user

DM: Many data mining methods come from statistical techniques.


Machine Learning

Machine Learning: area of AI that examines how to write programs that can learn.
Often used in classification and prediction
Supervised Learning: learns by example.
Unsupervised Learning: learns without knowledge of correct answers.
Machine learning often deals with small static datasets.

DM: Uses many machine learning techniques.


Pattern Matching (Recognition)

Pattern Matching: finds occurrences of a predefined pattern in the data.
Applications include speech recognition, information retrieval, time series analysis.

DM: Type of classification.


DM vs. Related Topics

Area     Query     Data              Results  Output
DB/OLTP  Precise   Database          Precise  DB objects or aggregation
IR       Precise   Documents         Vague    Documents
OLAP     Analysis  Multidimensional  Precise  DB objects or aggregation
DM       Vague     Preprocessed      Vague    KDD objects


Data Mining Outline

PART I
– Introduction
– Related Concepts

PART II
– Classification
– Clustering
– Association Rules


Classification Outline

Classification Problem Overview
Classification Techniques
– Regression
– Distance
– Decision Trees
– Rules
– Neural Networks

Goal: Provide an overview of the classification problem and introduce some of the basic algorithms.


Classification Problem

Given a database D = {t1, t2, …, tn} and a set of classes C = {C1, …, Cm}, the Classification Problem is to define a mapping f: D → C where each ti is assigned to one class.
Actually divides D into equivalence classes.
Prediction is similar, but may be viewed as having an infinite number of classes.


Classification Examples

Teachers classify students’ grades as A, B, C, D, or F.
Identify mushrooms as poisonous or edible.
Predict when a river will flood.
Identify individuals with credit risks.
Speech recognition
Pattern recognition


Classification Ex: Grading

If x >= 90 then grade = A.
If 80 <= x < 90 then grade = B.
If 70 <= x < 80 then grade = C.
If 60 <= x < 70 then grade = D.
If x < 60 then grade = F.

[Figure: decision tree splitting on x at 90, 80, 70, and 60, with leaves A, B, C, D, F]
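
The grading rules form a complete partitioning-based classifier, which makes them a convenient one-function example in Python:

def grade(x):
    # Maps a numeric score to one of the five predefined classes.
    if x >= 90: return "A"
    if x >= 80: return "B"
    if x >= 70: return "C"
    if x >= 60: return "D"
    return "F"

print([grade(x) for x in (95, 85, 75, 65, 55)])  # ['A', 'B', 'C', 'D', 'F']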


Classification Ex: Letter Recognition

View letters as constructed from 5 components:

[Figure: letters A, B, C, D, E, F built from the five components]


Classification Techniques

Approach:
1. Create specific model by evaluating training data (or using domain experts’ knowledge).
2. Apply model developed to new data.
Classes must be predefined
Most common techniques use DTs, NNs, or are based on distances or statistical methods.


Defining Classes

[Figure: partitioning-based and distance-based definitions of classes]


Issues in Classification

Missing Data
– Ignore
– Replace with assumed value
Measuring Performance
– Classification accuracy on test data
– Confusion matrix
– OC Curve


Height Example Data

Name       Gender  Height  Output1  Output2
Kristina   F       1.6m    Short    Medium
Jim        M       2m      Tall     Medium
Maggie     F       1.9m    Medium   Tall
Martha     F       1.88m   Medium   Tall
Stephanie  F       1.7m    Short    Medium
Bob        M       1.85m   Medium   Medium
Kathy      F       1.6m    Short    Medium
Dave       M       1.7m    Short    Medium
Worth      M       2.2m    Tall     Tall
Steven     M       2.1m    Tall     Tall
Debbie     F       1.8m    Medium   Medium
Todd       M       1.95m   Medium   Medium
Kim        F       1.9m    Medium   Tall
Amy        F       1.8m    Medium   Medium
Wynette    F       1.75m   Medium   Medium


Classification Performance

True Positive
False Positive
True Negative
False Negative


Confusion Matrix Example

Using height data example with Output1 correct and Output2 actual assignment

            Actual Assignment
Membership  Short  Medium  Tall
Short       0      4       0
Medium      0      5       3
Tall        0      1       2
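
A sketch of how such a matrix can be tallied from (correct, actual) label pairs; the two label lists below are arranged to reproduce the counts in the table:

def confusion_matrix(correct, actual, labels):
    # Rows: correct membership; columns: actual assignment.
    m = {c: {a: 0 for a in labels} for c in labels}
    for c, a in zip(correct, actual):
        m[c][a] += 1
    return m

correct = ["Short"] * 4 + ["Medium"] * 8 + ["Tall"] * 3
actual = (["Medium"] * 4 + ["Medium"] * 5 + ["Tall"] * 3
          + ["Medium"] * 1 + ["Tall"] * 2)
cm = confusion_matrix(correct, actual, ["Short", "Medium", "Tall"])
for row, cols in cm.items():
    print(row, cols)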

Operating Characteristic Curve


Regression

Assume data fits a predefined function
Determine best values for regression coefficients c0, c1, …, cn
Assume an error: y = c0 + c1x1 + … + cnxn + ε
Estimate the error using the mean squared error over the training set:

MSE = (1/m) Σi (yi − yi′)², where yi′ is the predicted value for the i-th training tuple
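
A minimal least-squares sketch using NumPy; the sample points are made up for illustration:

import numpy as np

# Fit y = c0 + c1*x by least squares; a column of ones estimates c0.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])  # illustrative data
X = np.column_stack([np.ones_like(x), x])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((y - X @ coeffs) ** 2)  # mean squared error on the training set
print(coeffs, mse)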

Linear Regression Poor Fit


Classification Using Regression

Division: use regression function to divide area into regions.
Prediction: use regression function to predict a class membership function. Input includes desired class.

Division

Prediction


Classification Using Distance

Place items in class to which they are “closest”.
Must determine distance between an item and a class.
Classes represented by
– Centroid: central value.
– Medoid: representative point.
– Individual points
Algorithm: KNN


Distance Measures

Measure dissimilarity between objects
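
The slide leaves the specific measures to a figure; Euclidean and Manhattan distance are the standard choices, sketched here:

import math

def euclidean(a, b):
    # Square root of the sum of squared coordinate differences.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute coordinate differences.
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

print(euclidean((0, 0), (3, 4)), manhattan((0, 0), (3, 4)))  # 5.0 7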


K Nearest Neighbor (KNN)

Training set includes classes.
Examine K items near item to be classified.
New item placed in the class with the largest number of close items.
O(q) for each tuple to be classified. (Here q is the size of the training set.)
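
A minimal KNN sketch matching the description above; training tuples are (point, class) pairs, and Euclidean distance is an assumed choice (any dissimilarity measure works):

import math
from collections import Counter

def knn_classify(training, item, k=3):
    # Sort the training set by distance to item: O(q) distance
    # computations for a training set of size q.
    nearest = sorted(training, key=lambda tc: math.dist(tc[0], item))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]  # class with the most close items

data = [((1.6,), "Short"), ((1.88,), "Medium"), ((1.9,), "Medium"),
        ((2.1,), "Tall"), ((2.2,), "Tall")]
print(knn_classify(data, (2.0,), k=3))  # 'Medium': two of the three nearest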

KNN

KNN Algorithm


Classification Using Decision Trees

Partitioning based: divide search space into rectangular regions.
Tuple placed into class based on the region within which it falls.
DT approaches differ in how the tree is built: DT Induction
Internal nodes associated with attribute and arcs with values for that attribute.
Algorithms: ID3, C4.5, CART

Twenty Questions Game


Decision Trees

Decision Tree (DT):
– Tree where the root and each internal node is labeled with a question.
– The arcs represent each possible answer to the associated question.
– Each leaf node represents a prediction of a solution to the problem.
Popular technique for classification; leaf node indicates class to which the corresponding tuple belongs.

Decision Tree Example


Decision Trees

A Decision Tree Model is a computational model consisting of three parts:
– Decision tree
– Algorithm to create the tree
– Algorithm that applies the tree to data
Creation of the tree is the most difficult part.
Processing is basically a search similar to that in a binary search tree (although a DT may not be binary).

Decision Tree Algorithm


DT Advantages/Disadvantages

Advantages:
– Easy to understand.
– Easy to generate rules
Disadvantages:
– May suffer from overfitting.
– Classifies by rectangular partitioning.
– Does not easily handle nonnumeric data.
– Can be quite large – pruning is necessary.


Decision Tree

Given:
– D = {t1, …, tn} where ti = <ti1, …, tih>
– Database schema contains {A1, A2, …, Ah}
– Classes C = {C1, …, Cm}
Decision or Classification Tree is a tree associated with D such that
– Each internal node is labeled with an attribute, Ai
– Each arc is labeled with a predicate which can be applied to the attribute at its parent
– Each leaf node is labeled with a class, Cj

DT Induction


DT Splits Area

[Figure: splits on Gender (M, F) and Height partition the search space into rectangular regions]


Comparing DTs

[Figure: balanced vs. deep decision trees]


DT Issues

Choosing splitting attributes
Ordering of splitting attributes
Splits
Tree structure
Stopping criteria
Training data
Pruning


Decision Tree Induction is often based on Information Theory

So…

Information


DT Induction

When all the marbles in the bowl are mixed up, little information is given.
When the marbles in the bowl are all from one class and those in the other two classes are on either side, more information is given.

Use this approach with DT Induction!


Information/Entropy

Given probabilities p1, p2, …, ps whose sum is 1, Entropy is defined as:

H(p1, p2, …, ps) = Σi pi log(1/pi)

Entropy measures the amount of randomness or surprise or uncertainty.
Goal in classification:
– no surprise
– entropy = 0
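
A direct transcription of the definition in Python; the worked examples on the following slides use base-10 logarithms, so that base is the default here:

import math

def entropy(probabilities, base=10):
    # H(p1, ..., ps) = sum of p * log(1/p); a p of 0 contributes 0.
    return sum(p * math.log(1 / p, base) for p in probabilities if p > 0)

print(entropy([4/15, 8/15, 3/15]))  # ~0.4385 (the slides round to 0.4384)
print(entropy([1.0]))               # 0.0: one class, no surprise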


Entropy

[Figure: log(1/p) and H(p, 1−p) as functions of p]


ID3

Creates tree using information theory concepts and tries to reduce the expected number of comparisons.
ID3 chooses the split attribute with the highest information gain:

Gain(D, S) = H(D) − Σi P(Di) H(Di)


ID3 Example (Output1)

Starting state entropy:
4/15 log(15/4) + 8/15 log(15/8) + 3/15 log(15/3) = 0.4384
Gain using gender:
– Female: 3/9 log(9/3) + 6/9 log(9/6) = 0.2764
– Male: 1/6 log(6/1) + 2/6 log(6/2) + 3/6 log(6/3) = 0.4392
– Weighted sum: (9/15)(0.2764) + (6/15)(0.4392) = 0.34152
– Gain: 0.4384 − 0.34152 = 0.09688
Gain using height:
0.4384 − (2/15)(0.301) = 0.3983
Choose height as first splitting attribute
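
The gender branch of this computation can be checked mechanically with a few lines of Python (base-10 logs, matching the slide up to rounding):

import math

def entropy(ps, base=10):
    return sum(p * math.log(1 / p, base) for p in ps if p > 0)

start = entropy([4/15, 8/15, 3/15])           # 4 Short, 8 Medium, 3 Tall -> ~0.4385
female = entropy([3/9, 6/9])                  # 9 F: 3 Short, 6 Medium -> ~0.2764
male = entropy([1/6, 2/6, 3/6])               # 6 M: 1 Short, 2 Medium, 3 Tall -> ~0.4392
gain = start - (9/15 * female + 6/15 * male)  # -> ~0.0969
print(start, gain)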


C4.5

ID3 favors attributes with large number of divisions
Improved version of ID3:
– Missing data
– Continuous data
– Pruning
– Rules
– GainRatio:

GainRatio(D, S) = Gain(D, S) / H(|D1|/|D|, …, |Ds|/|D|)


CART

Create binary tree
Uses entropy
Formula to choose split point, s, for node t:

Φ(s|t) = 2 PL PR Σj |P(Cj | tL) − P(Cj | tR)|

PL, PR: probability that a tuple in the training set will be on the left or right side of the tree.
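
Under the formula as reconstructed above, the measure is a one-liner; a sketch that takes the left/right probabilities and the per-class conditional probabilities as inputs:

def phi(p_left, p_right, left_class_probs, right_class_probs):
    # 2 * PL * PR * sum over classes of |P(Cj|tL) - P(Cj|tR)|
    spread = sum(abs(l - r) for l, r in zip(left_class_probs, right_class_probs))
    return 2 * p_left * p_right * spread

# A perfectly separating split of two equally likely classes scores highest:
print(phi(0.5, 0.5, [1.0, 0.0], [0.0, 1.0]))  # 1.0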


CART Example

At the start, there are six choices for split point (right branch on equality):
– P(Gender) = 2(6/15)(9/15)(2/15 + 4/15 + 3/15) = 0.224
– P(1.6) = 0
– P(1.7) = 2(2/15)(13/15)(0 + 8/15 + 3/15) = 0.169
– P(1.8) = 2(5/15)(10/15)(4/15 + 6/15 + 3/15) = 0.385
– P(1.9) = 2(9/15)(6/15)(4/15 + 2/15 + 3/15) = 0.256
– P(2.0) = 2(12/15)(3/15)(4/15 + 8/15 + 3/15) = 0.32
Split at 1.8


Classification Using Neural Networks

Typical NN structure for classification:
– One output node per class
– Output value is class membership function value
Supervised learning
For each tuple in training set, propagate it through NN. Adjust weights on edges to improve future classification.
Algorithms: Propagation, Backpropagation, Gradient Descent


Neural Networks

Based on observed functioning of human brain. (Artificial Neural Networks – ANN)
Our view of neural networks is very simplistic.
We view a neural network (NN) from a graphical viewpoint.
Alternatively, a NN may be viewed from the perspective of matrices.
Used in pattern recognition, speech recognition, computer vision, and classification.


Neural Networks

Neural Network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, …, n} and arcs A = {<i,j> | 1 <= i, j <= n}, with the following restrictions:
– V is partitioned into a set of input nodes, VI, hidden nodes, VH, and output nodes, VO.
– The vertices are also partitioned into layers.
– Any arc <i,j> must have node i in layer h−1 and node j in layer h.
– Arc <i,j> is labeled with a numeric value wij.
– Node i is labeled with a function fi.

Neural Network Example

NN Node


NN Activation Functions

Functions associated with nodes in graph.
Output may be in range [−1, 1] or [0, 1]
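
Common activation choices, sketched in Python; the sigmoid maps into [0, 1] and tanh into [−1, 1], matching the two output ranges mentioned:

import math

def sigmoid(s):
    # Logistic activation: output in [0, 1].
    return 1 / (1 + math.exp(-s))

def step(s):
    # Threshold activation: fires once the weighted sum is positive.
    return 1 if s > 0 else 0

# A node applies its activation f to the weighted sum of its inputs:
weights, inputs = [0.5, -0.25], [1.0, 2.0]
s = sum(w * x for w, x in zip(weights, inputs))  # 0.0
print(sigmoid(s), math.tanh(s), step(s))         # 0.5 0.0 0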

NN Activation Functions


NN Learning

Propagate input values through graph.
Compare output to desired output.
Adjust weights in graph accordingly.


Neural Networks

A Neural Network Model is a computational model consisting of three parts:
– Neural network graph
– Learning algorithm that indicates how learning takes place.
– Recall techniques that determine how information is obtained from the network.
We will look at propagation as the recall technique.


NN Advantages

Learning
Can continue learning even after training set has been applied.
Easy parallelization
Solves many problems


NN Disadvantages

Difficult to understand
May suffer from overfitting
Structure of graph must be determined a priori.
Input values must be numeric.
Verification difficult.


NN Issues

Number of source nodes
Number of hidden layers
Training data
Number of sinks
Interconnections
Weights
Activation functions
Learning technique
When to stop learning

Decision Tree vs. Neural Network


Propagation

[Figure: tuple input propagated through the network to produce the output]

NN Propagation Algorithm


Example Propagation


NN Learning

Adjust weights to perform better with the associated test data.
Supervised: use feedback from knowledge of the correct classification.
Unsupervised: no knowledge of the correct classification needed.

NN Supervised Learning

Supervised Learning

Possible error values assume the output from node i is y_i but should be d_i.
Change weights on arcs based on the estimated error.

NN Backpropagation

Propagate changes to weights backward from the output layer to the input layer.
Delta Rule: Δw_ij = c · x_ij · (d_j – y_j)
Gradient Descent: technique to modify the weights in the graph.
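To make the delta rule concrete, here is a minimal sketch (not from the slides) of one weight update for a single output node j; the weights, inputs, and learning rate below are hypothetical:

# Sketch of the delta rule w_ij += c * x_ij * (d_j - y_j) for one output node.
# w: weights on arcs into the node, x: input values on those arcs,
# c: learning rate, d: desired output, y: actual output.
def delta_rule_update(w, x, c, d, y):
    return [w_i + c * x_i * (d - y) for w_i, x_i in zip(w, x)]

print(delta_rule_update(w=[0.5, -0.2], x=[1.0, 0.3], c=0.1, d=1, y=0))
# approximately [0.6, -0.17]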

Backpropagation

[Figure: error propagated backward through the layers]

Backpropagation Algorithm

Gradient Descent

Gradient Descent Algorithm

Output Layer Learning

Hidden Layer Learning

Types of NNs

Different NN structures are used for different problems.
Perceptron
Self Organizing Feature Map
Radial Basis Function Network

Perceptron

The perceptron is one of the simplest NNs.
No hidden layers.

Perceptron Example

Suppose:
– Summation: S = 3x_1 + 2x_2 – 6
– Activation: if S > 0 then 1 else 0
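A minimal sketch of this example perceptron; the two test inputs are hypothetical:

# Sketch of the example perceptron: S = 3*x1 + 2*x2 - 6 with a step activation.
def perceptron(x1, x2):
    s = 3 * x1 + 2 * x2 - 6     # summation
    return 1 if s > 0 else 0    # activation

print(perceptron(2, 1))  # S = 2, output 1
print(perceptron(1, 1))  # S = -1, output 0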

Self Organizing Feature Map (SOFM)

Competitive unsupervised learning
Observe how neurons work in the brain:
– Firing impacts firing of those near
– Neurons far apart inhibit each other
– Neurons have specific nonoverlapping tasks
Ex: Kohonen Network

Kohonen Network

Kohonen Network

Competitive layer – viewed as a 2D grid
Similarity between competitive nodes and input nodes:
– Input: X = <x_1, …, x_h>
– Weights: <w_1i, …, w_hi>
– Similarity defined based on the dot product
The competitive node most similar to the input "wins".
The winning node's weights (as well as surrounding node weights) are increased.
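A minimal sketch of one such training step; the learning rate, neighborhood radius, and grid layout below are hypothetical choices, not from the slides:

# Sketch of one SOFM step: pick the winner by dot-product similarity,
# then move the winner's and its grid neighbors' weights toward the input.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sofm_step(x, weights, grid, lr=0.5, radius=1):
    winner = max(range(len(weights)), key=lambda i: dot(x, weights[i]))
    wr, wc = grid[winner]
    for i, (r, c) in enumerate(grid):
        if abs(r - wr) + abs(c - wc) <= radius:   # winner and its neighbors
            weights[i] = [w + lr * (xj - w) for w, xj in zip(weights[i], x)]
    return winner

w = [[0.1, 0.9], [0.9, 0.1], [0.5, 0.5]]
print(sofm_step([1.0, 0.0], w, grid=[(0, 0), (0, 1), (0, 2)]))  # winner: 1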

Radial Basis Function Network

The RBF function has a Gaussian shape.
RBF Networks:
– Three layers
– Hidden layer – Gaussian activation function
– Output layer – Linear activation function

Radial Basis Function Network

Classification Using Rules

Perform classification using If-Then rules.
Classification Rule: r = <a, c>, with antecedent a and consequent c.
May be generated from other techniques (DT, NN) or generated directly.
Algorithms: Gen, RX, 1R, PRISM

Generating Rules from DTs

Generating Rules Example

Generating Rules from NNs

1R Algorithm

1R Example

PRISM Algorithm

PRISM Example

Decision Tree vs. Rules

Tree has an implied order in which splitting is performed.
Tree is created based on looking at all classes.
Rules have no ordering of predicates.
Only need to look at one class to generate its rules.

Clustering Outline

Clustering Problem Overview
Clustering Techniques
– Hierarchical Algorithms
– Partitional Algorithms
– Genetic Algorithm
– Clustering Large Databases

Goal: Provide an overview of the clustering problem and introduce some of the basic algorithms.

Clustering Examples

Segment a customer database based on similar buying patterns.
Group houses in a town into neighborhoods based on similar features.
Identify new plant species.
Identify similar Web usage patterns.

Similarity Measures

Determine the similarity between two objects.
Similarity characteristics:
Alternatively, a distance measure measures how unlike or dissimilar objects are.

Similarity Measures

Clustering Example

Clustering Houses

[Figure: houses clustered two ways – size based vs. geographic distance based]

Clustering vs. Classification

No prior knowledge
– Number of clusters
– Meaning of clusters
Unsupervised learning

Clustering Issues

Outlier handling
Dynamic data
Interpreting results
Evaluating results
Number of clusters
Data to be used
Scalability

Impact of Outliers on Clustering

Clustering Problem

Given a database D = {t_1, t_2, …, t_n} of tuples and an integer value k, the Clustering Problem is to define a mapping f: D → {1, …, k} where each t_i is assigned to one cluster K_j, 1 ≤ j ≤ k.
A Cluster, K_j, contains precisely those tuples mapped to it.
Unlike the classification problem, the clusters are not known a priori.

Types of Clustering

Hierarchical – Nested set of clusters created.
Partitional – One set of clusters created.
Incremental – Each element handled one at a time.
Simultaneous – All elements handled together.
Overlapping/Non-overlapping

Clustering Approaches

Clustering
– Hierarchical: Agglomerative, Divisive
– Partitional
– Categorical
– Large DB: Sampling, Compression

Cluster Parameters

Distance Between Clusters

Single Link: smallest distance between points
Complete Link: largest distance between points
Average Link: average distance between points
Centroid: distance between centroids
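A minimal sketch of these four inter-cluster distances (illustrative; 1-D points for brevity, and any metric could replace abs(a - b)):

# Sketch of the four inter-cluster distance measures for two clusters.
from itertools import product

def single_link(K1, K2):
    return min(abs(a - b) for a, b in product(K1, K2))

def complete_link(K1, K2):
    return max(abs(a - b) for a, b in product(K1, K2))

def average_link(K1, K2):
    return sum(abs(a - b) for a, b in product(K1, K2)) / (len(K1) * len(K2))

def centroid(K1, K2):
    return abs(sum(K1) / len(K1) - sum(K2) / len(K2))

K1, K2 = [1, 2], [4, 8]
print(single_link(K1, K2), complete_link(K1, K2))   # 2 7
print(average_link(K1, K2), centroid(K1, K2))       # 4.5 4.5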

Hierarchical Clustering

Clusters are created in levels, actually creating sets of clusters at each level.
Agglomerative
– Initially each item is in its own cluster
– Iteratively clusters are merged together
– Bottom up
Divisive
– Initially all items are in one cluster
– Large clusters are successively divided
– Top down

Hierarchical Algorithms

Single Link
MST Single Link
Complete Link
Average Link

Dendrogram

Dendrogram: a tree data structure which illustrates hierarchical clustering techniques.
Each level shows the clusters for that level.
– Leaf – individual clusters
– Root – one cluster
A cluster at level i is the union of its children clusters at level i+1.

Levels of Clustering

Agglomerative Example

Distance matrix:

     A  B  C  D  E
A    0  1  2  2  3
B    1  0  2  4  3
C    2  2  0  1  5
D    2  4  1  0  3
E    3  3  5  3  0

[Figure: graph of the five items and the dendrogram for A, B, C, D, E at thresholds 1 through 5]

MST Example

Distance matrix:

     A  B  C  D  E
A    0  1  2  2  3
B    1  0  2  4  3
C    2  2  0  1  5
D    2  4  1  0  3
E    3  3  5  3  0

[Figure: minimum spanning tree over A, B, C, D, E]

Agglomerative Algorithm

Single Link

View all items with links (distances) between them.
Finds maximal connected components in this graph.
Two clusters are merged if there is at least one edge which connects them.
Uses threshold distances at each level.
Could be agglomerative or divisive.

MST Single Link Algorithm

Single Link Clustering

Partitional Clustering

Nonhierarchical
Creates clusters in one step as opposed to several steps.
Since only one set of clusters is output, the user normally has to input the desired number of clusters, k.
Usually deals with static sets.

Partitional Algorithms

MST
Squared Error
K-Means
Nearest Neighbor
PAM
BEA
GA

MST Algorithm

Squared Error

Minimize the squared error.
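The slide's formula was lost in extraction; the standard squared-error criterion for a clustering K = {K_1, …, K_k}, assuming C_j denotes the center of cluster K_j, can be written in LaTeX as:

se_K = \sum_{j=1}^{k} \sum_{t_i \in K_j} \lVert t_i - C_j \rVert^2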

Squared Error Algorithm

K-Means

Initial set of clusters randomly chosen.
Iteratively, items are moved among sets of clusters until the desired set is reached.
A high degree of similarity among elements in a cluster is obtained.
Given a cluster K_i = {t_i1, t_i2, …, t_im}, the cluster mean is m_i = (1/m)(t_i1 + … + t_im).

K-Means Example

Given: {2, 4, 10, 12, 3, 20, 30, 11, 25}, k = 2
Randomly assign means: m_1 = 3, m_2 = 4
K_1 = {2, 3}, K_2 = {4, 10, 12, 20, 30, 11, 25}, m_1 = 2.5, m_2 = 16
K_1 = {2, 3, 4}, K_2 = {10, 12, 20, 30, 11, 25}, m_1 = 3, m_2 = 18
K_1 = {2, 3, 4, 10}, K_2 = {12, 20, 30, 11, 25}, m_1 = 4.75, m_2 = 19.6
K_1 = {2, 3, 4, 10, 11, 12}, K_2 = {20, 30, 25}, m_1 = 7, m_2 = 25
Stop, as the clusters with these means are the same.
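A minimal sketch of this 1-D k-means iteration (initial means 3 and 4, as in the example):

# Sketch of 1-D k-means reproducing the example above.
def kmeans_1d(items, means):
    while True:
        # assign each item to the cluster with the nearest mean
        clusters = [[] for _ in means]
        for t in items:
            nearest = min(range(len(means)), key=lambda i: abs(t - means[i]))
            clusters[nearest].append(t)
        new_means = [sum(c) / len(c) for c in clusters]
        if new_means == means:          # stop when the means no longer change
            return clusters, means
        means = new_means

clusters, means = kmeans_1d([2, 4, 10, 12, 3, 20, 30, 11, 25], [3, 4])
print(clusters)  # [[2, 4, 10, 12, 3, 11], [20, 30, 25]]
print(means)     # [7.0, 25.0]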

K-Means Algorithm

Nearest Neighbor

Items are iteratively merged into the existing clusters that are closest.
Incremental
A threshold, t, is used to determine if items are added to existing clusters or a new cluster is created.
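A minimal sketch of this scheme, with hypothetical 1-D items and threshold:

# Sketch of nearest-neighbor clustering: each item joins the cluster
# containing its nearest already-placed item, unless that distance
# exceeds the threshold t, in which case it starts a new cluster.
def nearest_neighbor_clustering(items, t):
    clusters = [[items[0]]]
    for item in items[1:]:
        # distance to each cluster = distance to its nearest member
        dists = [min(abs(item - m) for m in c) for c in clusters]
        best = min(range(len(clusters)), key=lambda i: dists[i])
        if dists[best] <= t:
            clusters[best].append(item)
        else:
            clusters.append([item])
    return clusters

print(nearest_neighbor_clustering([2, 4, 10, 12, 3, 20, 30, 11, 25], t=5))
# [[2, 4, 3], [10, 12, 11], [20, 25], [30]]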

Nearest Neighbor Algorithm

PAM

Partitioning Around Medoids (PAM) (K-Medoids)
Handles outliers well.
Ordering of input does not impact results.
Does not scale well.
Each cluster is represented by one item, called the medoid.
Initial set of k medoids randomly chosen.

PAM

PAM Cost Calculation

At each step in the algorithm, medoids are changed if the overall cost is improved.
C_jih – cost change for an item t_j associated with swapping medoid t_i with non-medoid t_h.

PAM Algorithm

BEA

Bond Energy Algorithm
Database design (physical and logical)
Vertical fragmentation
Determine affinity (bond) between attributes based on common usage.
Algorithm outline:
1. Create affinity matrix
2. Convert to BOND matrix
3. Create regions of close bonding

BEA

Modified from [OV99]

Genetic Algorithms

Optimization search type algorithms.
Creates an initial feasible solution and iteratively creates new "better" solutions.
Based on human evolution and survival of the fittest.
Must represent a solution as an individual.
Individual: string I = I_1, I_2, …, I_n where I_j is in a given alphabet A.
Each character I_j is called a gene.
Population: set of individuals.

Genetic Algorithms

A Genetic Algorithm (GA) is a computational model consisting of five parts:
– A starting set of individuals, P.
– Crossover: technique to combine two parents to create offspring.
– Mutation: randomly change an individual.
– Fitness: determine the best individuals.
– Algorithm which applies the crossover and mutation techniques to P iteratively, using the fitness function to determine the best individuals in P to keep.

Crossover Examples

a) Single crossover:
   Parents:  111 111   000 000
   Children: 111 000   000 111

b) Multiple crossover:
   Parents:  11 11 11   00 00 00
   Children: 11 00 11   00 11 00

Genetic Algorithm

GA Advantages/Disadvantages

Advantages
– Easily parallelized
Disadvantages
– Difficult to understand and explain to end users.
– Abstraction of the problem and method to represent individuals is quite difficult.
– Determining the fitness function is difficult.
– Determining how to perform crossover and mutation is difficult.

Genetic Algorithm Example

{A, B, C, D, E, F, G, H}
Randomly choose an initial solution:
{A, C, E} {B, F} {D, G, H} or
10101000, 01000100, 00010011
Suppose crossover at point four and choose the 1st and 3rd individuals:
10100011, 01000100, 00011000
What should the termination criteria be?
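A minimal sketch of the single-point crossover used in this example:

# Sketch of single-point crossover on bit-string individuals.
def crossover(parent1, parent2, point):
    """Swap the tails of two individuals after the given point."""
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# The example above: cross the 1st and 3rd individuals at point four.
print(crossover("10101000", "00010011", 4))  # ('10100011', '00011000')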

GA Algorithm

Clustering Large Databases

Most clustering algorithms assume a large data structure which is memory resident.
Clustering may be performed first on a sample of the database, then applied to the entire database.
Algorithms
– BIRCH
– DBSCAN
– CURE

Desired Features for Large Databases

One scan (or less) of DB
Online
Suspendable, stoppable, resumable
Incremental
Work with limited main memory
Different techniques to scan (e.g. sampling)
Process each tuple once

BIRCH

Balanced Iterative Reducing and Clustering using Hierarchies
Incremental, hierarchical, one scan
Save clustering information in a tree
Each entry in the tree contains information about one cluster
New nodes are inserted in the closest entry in the tree

Clustering Feature

CF Triple: (N, LS, SS)
– N: Number of points in the cluster
– LS: Sum of the points in the cluster
– SS: Sum of squares of the points in the cluster
CF Tree
– Balanced search tree
– Node has a CF triple for each child
– Leaf node represents a cluster and has a CF value for each subcluster in it.
– Subcluster has a maximum diameter
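A minimal sketch (1-D points for brevity) of building and merging CF triples; the additivity shown here is the property that lets BIRCH combine subclusters in constant time:

# Sketch of BIRCH clustering features for 1-D points.
# CF = (N, LS, SS): count, linear sum, and sum of squares.
def cf(points):
    return (len(points), sum(points), sum(p * p for p in points))

def cf_merge(cf1, cf2):
    # CF triples are additive, so two subclusters merge in O(1)
    return tuple(a + b for a, b in zip(cf1, cf2))

a, b = cf([1, 2]), cf([4])
print(cf_merge(a, b))   # (3, 7, 21)
print(cf([1, 2, 4]))    # (3, 7, 21) -- the same triple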

BIRCH Algorithm

Improve Clusters

DBSCAN

Density Based Spatial Clustering of Applications with Noise
Outliers will not affect the creation of a cluster.
Input
– MinPts – minimum number of points in a cluster
– Eps – for each point in a cluster there must be another point in it less than this distance away.

DBSCAN Density Concepts

Eps-neighborhood: points within Eps distance of a point.
Core point: Eps-neighborhood dense enough (MinPts).
Directly density-reachable: a point p is directly density-reachable from a point q if the distance is small (Eps) and q is a core point.
Density-reachable: a point is density-reachable from another point if there is a path from one to the other consisting of only core points.
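A minimal sketch (1-D points; Eps and MinPts values are hypothetical) of the Eps-neighborhood and core-point tests:

# Sketch of the Eps-neighborhood and core-point concepts for 1-D points.
def eps_neighborhood(points, p, eps):
    """All points within Eps distance of p (including p itself)."""
    return [q for q in points if abs(q - p) <= eps]

def is_core(points, p, eps, min_pts):
    """p is a core point if its Eps-neighborhood is dense enough."""
    return len(eps_neighborhood(points, p, eps)) >= min_pts

pts = [1, 2, 3, 10]
print(eps_neighborhood(pts, 2, eps=1.5))     # [1, 2, 3]
print(is_core(pts, 2, eps=1.5, min_pts=3))   # True
print(is_core(pts, 10, eps=1.5, min_pts=3))  # False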

Density Concepts

DBSCAN Algorithm

CURE

Clustering Using Representatives
Use many points to represent a cluster instead of only one.
Points will be well scattered.

CURE Approach

CURE Algorithm

CURE for Large Databases

Comparison of Clustering Techniques

Association Rules Outline

Association Rules Problem Overview
– Large itemsets
Association Rules Algorithms
– Apriori
– Sampling
– Partitioning
– Parallel Algorithms
Comparing Techniques
Incremental Algorithms
Advanced AR Techniques

Goal: Provide an overview of basic Association Rule mining techniques.

Example: Market Basket Data

Items frequently purchased together:
Bread ⇒ PeanutButter
Uses:
– Placement
– Advertising
– Sales
– Coupons
Objective: increase sales and reduce costs

Association Rule Definitions

Set of items: I = {I_1, I_2, …, I_m}
Transactions: D = {t_1, t_2, …, t_n}, t_j ⊆ I
Itemset: {I_i1, I_i2, …, I_ik} ⊆ I
Support of an itemset: percentage of transactions which contain that itemset.
Large (Frequent) itemset: itemset whose number of occurrences is above a threshold.

Association Rules Example

I = {Beer, Bread, Jelly, Milk, PeanutButter}
[Table: five transactions t1–t5 over these items]
Support of {Bread, PeanutButter} is 60%
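A minimal sketch of the support count (and of the confidence measure defined on the next definitions slide). The five transactions below are an assumption, reconstructed from the textbook's running example, not recoverable from this slide:

# Sketch of support/confidence counting over a small transaction database.
D = [
    {"Bread", "Jelly", "PeanutButter"},   # t1 (assumed)
    {"Bread", "PeanutButter"},            # t2 (assumed)
    {"Bread", "Milk", "PeanutButter"},    # t3 (assumed)
    {"Beer", "Bread"},                    # t4 (assumed)
    {"Beer", "Milk"},                     # t5 (assumed)
]

def support(itemset):
    return sum(itemset <= t for t in D) / len(D)

def confidence(X, Y):
    return support(X | Y) / support(X)

print(support({"Bread", "PeanutButter"}))       # 0.6
print(confidence({"Bread"}, {"PeanutButter"}))  # 0.75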

Association Rule Definitions

Association Rule (AR): implication X ⇒ Y where X, Y ⊆ I and X ∩ Y = ∅
Support of AR (s) X ⇒ Y: percentage of transactions that contain X ∪ Y
Confidence of AR (α) X ⇒ Y: ratio of the number of transactions that contain X ∪ Y to the number that contain X

Association Rules Ex (cont’d)

Association Rule Problem

Given a set of items I = {I_1, I_2, …, I_m} and a database of transactions D = {t_1, t_2, …, t_n} where t_i = {I_i1, I_i2, …, I_ik} and I_ij ∈ I, the Association Rule Problem is to identify all association rules X ⇒ Y with a minimum support and confidence.
Link Analysis
NOTE: Support of X ⇒ Y is the same as support of X ∪ Y.

Association Rule Techniques

1. Find Large Itemsets.
2. Generate rules from frequent itemsets.

Algorithm to Generate ARs

Apriori

Large Itemset Property: any subset of a large itemset is large.
Contrapositive: if an itemset is not large, none of its supersets are large.

Large Itemset Property

Apriori Ex (cont’d)

s = 30%, α = 50%

Apriori Algorithm

1. C_1 = itemsets of size one in I;
2. Determine all large itemsets of size 1, L_1;
3. i = 1;
4. Repeat
5.   i = i + 1;
6.   C_i = Apriori-Gen(L_{i-1});
7.   Count C_i to determine L_i;
8. until no more large itemsets found;

Apriori-Gen

Generate candidates of size i+1 from large itemsets of size i.
Approach used: join large itemsets of size i if they agree on the first i–1 items.
May also prune candidates which have subsets that are not large.

Apriori-Gen Example

Apriori-Gen Example (cont’d)

Apriori Adv/Disadv

Advantages:
– Uses the large itemset property.
– Easily parallelized
– Easy to implement.
Disadvantages:
– Assumes transaction database is memory resident.
– Requires up to m database scans.

Sampling

Large databases
Sample the database and apply Apriori to the sample.
Potentially Large Itemsets (PL): large itemsets from the sample
Negative Border (BD–):
– Generalization of Apriori-Gen applied to itemsets of varying sizes.
– Minimal set of itemsets which are not in PL, but whose subsets are all in PL.

Negative Border Example

[Figure: itemset lattice showing PL and BD–(PL)]

Sampling Algorithm

1. D_s = sample of database D;
2. PL = large itemsets in D_s using smalls;
3. C = PL ∪ BD–(PL);
4. Count C in database using s;
5. ML = large itemsets in BD–(PL);
6. If ML = ∅ then done
7. else C = repeated application of BD–;
8. Count C in database;

Sampling Example

Find AR assuming s = 20%
D_s = {t1, t2}
smalls = 10%
PL = {{Bread}, {Jelly}, {PeanutButter}, {Bread, Jelly}, {Bread, PeanutButter}, {Jelly, PeanutButter}, {Bread, Jelly, PeanutButter}}
BD–(PL) = {{Beer}, {Milk}}
ML = {{Beer}, {Milk}}
Repeated application of BD– generates all remaining itemsets.

Sampling Adv/Disadv

Advantages:
– Reduces the number of database scans to one in the best case and two in the worst.
– Scales better.
Disadvantages:
– Potentially large number of candidates in the second pass.

Partitioning

Divide the database into partitions D_1, D_2, …, D_p
Apply Apriori to each partition
Any large itemset must be large in at least one partition.

Partitioning Algorithm

1. Divide D into partitions D_1, D_2, …, D_p;
2. For i = 1 to p do
3.   L_i = Apriori(D_i);
4. C = L_1 ∪ … ∪ L_p;
5. Count C on D to generate L;

Partitioning Example

s = 10%
D1: L1 = {{Bread}, {Jelly}, {PeanutButter}, {Bread, Jelly}, {Bread, PeanutButter}, {Jelly, PeanutButter}, {Bread, Jelly, PeanutButter}}
D2: L2 = {{Bread}, {Milk}, {PeanutButter}, {Bread, Milk}, {Bread, PeanutButter}, {Milk, PeanutButter}, {Bread, Milk, PeanutButter}, {Beer}, {Beer, Bread}, {Beer, Milk}}

Partitioning Adv/Disadv

Advantages:
– Adapts to available main memory
– Easily parallelized
– Maximum number of database scans is two.
Disadvantages:
– May have many candidates during the second scan.

Parallelizing AR Algorithms

Based on Apriori
Techniques differ:
– What is counted at each site
– How data (transactions) are distributed
Data Parallelism
– Data partitioned
– Count Distribution Algorithm
Task Parallelism
– Data and candidates partitioned
– Data Distribution Algorithm

Count Distribution Algorithm (CDA)

1. Place a data partition at each site.
2. In parallel at each site do
3.   C_1 = itemsets of size one in I;
4.   Count C_1;
5.   Broadcast counts to all sites;
6.   Determine global large itemsets of size 1, L_1;
7.   i = 1;
8.   Repeat
9.     i = i + 1;
10.    C_i = Apriori-Gen(L_{i-1});
11.    Count C_i;
12.    Broadcast counts to all sites;
13.    Determine global large itemsets of size i, L_i;
14.  until no more large itemsets found;

CDA Example

Data Distribution Algorithm (DDA)

1. Place a data partition at each site.
2. In parallel at each site do
3.   Determine local candidates of size 1 to count;
4.   Broadcast local transactions to other sites;
5.   Count local candidates of size 1 on all data;
6.   Determine large itemsets of size 1 for local candidates;
7.   Broadcast large itemsets to all sites;
8.   Determine L_1;
9.   i = 1;
10.  Repeat
11.    i = i + 1;
12.    C_i = Apriori-Gen(L_{i-1});
13.    Determine local candidates of size i to count;
14.    Count, broadcast, and find L_i;
15.  until no more large itemsets found;

DDA Example

Comparing AR Techniques

Target
Type
Data Type
Data Source
Technique
Itemset Strategy and Data Structure
Transaction Strategy and Data Structure
Optimization
Architecture
Parallelism Strategy

Comparison of AR Techniques

Hash Tree

Incremental Association Rules

Generate ARs in a dynamic database.
Problem: algorithms assume a static database
Objective:
– Know large itemsets for D
– Find large itemsets for D ∪ {ΔD}
Must be large in either D or ΔD
Save L_i and counts

Note on ARs

Many applications outside market basket data analysis
– Prediction (telecom switch failure)
– Web usage mining
Many different types of association rules
– Temporal
– Spatial
– Causal

Advanced AR Techniques

Generalized Association Rules
Multiple-Level Association Rules
Quantitative Association Rules
Using multiple minimum supports
Correlation Rules

Measuring Quality of Rules

Support
Confidence
Interest
Conviction
Chi Squared Test