Diagnosing Breast Cancer with Ensemble Strategies for a Medical Diagnostic Decision Support System
1
Diagnosing Breast Cancer with Ensemble Strategies for a Medical
Diagnostic Decision Support System
David West
East Carolina University
Paul Mangiameli
University of Rhode Island
Rohit Rampal
Portland State University
2
Introduction
• Breast cancer is one of the most prevalent illnesses among women over 40
• Society must do all it can to reduce the frequency and severity of this disease
• Early and accurate detection is critical
• Our MDSS aids physicians in this task
3
Introduction
• Critical to performance and acceptance of any MDSS is the selection of the underlying models
• This paper investigates the performance of MDSS comprised of:
– single (individual) models and
– ensembles (groups) of models
• How should ensembles be formed?
4
Overview of Presentation
• Discussion of Literature on Model Selection for MDSS
• Research Methodology
– Breast cancer data sets
– Model description
• Experimental Design
• Results of Generalized Classification Errors
• Greedy Model Selection Strategy
• Conclusion
5
Literature Review
• Most MDSS studies use a single best model strategy
– Parametric:
• Linear discriminant analysis
• Logistic regression
– Non-parametric:
• K nearest neighbor
• Kernel density
6
Literature Review
– Neural Networks
• Multilayer perceptron
• Radial Basis Function
• Mixture of experts
– Classification and Regression Trees (CART)
7
Literature Review
• A very limited number of studies use ensembles of models
• The ensembles are comprised of one type of model – we call these baseline ensembles
• These ensembles are usually bagging or bootstrap aggregating ensembles
– Bootstrap is a way of perturbing the data so as to create diversity in the decision making of each model in the ensemble
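As a concrete illustration, here is a minimal sketch of the bootstrap resampling step in Python; the function name and use of NumPy are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def bootstrap_sample(X, y, rng=None):
    """Draw one bootstrap replicate: n records sampled with replacement.

    Training each ensemble member on a different replicate perturbs the
    data and creates diversity in the members' decisions.
    """
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, len(X), size=len(X))  # indices drawn with replacement
    return X[idx], y[idx]
```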
8
Research Methodology
Breast Cancer Data Sets
Model Descriptions
Experimental Design
9
Breast Cancer Data Sets
• Cytology Data
– 699 records of fine needle aspirates
– Biopsied tumor mass is benign or malignant
• 458 benign and 241 malignant
• Prognostic Data
– 198 records of invasive, non-metastasized, malignant tumors
– Tumor is either recurrent or non-recurrent
• 151 non-recurrent and 47 recurrent
10
Model Description
24 Models in Total
• Linear Discriminant Analysis (LDA) - 1
• Logistic Regression (LR) - 1
• Nearest Neighbor Classifiers (KNN) - 4
• Kernel Density (KD) - 4
• Multilayer Perceptron (MLP) - 4
• Mixture of Experts (MOE) - 4
• Radial Basis Function (RBF) - 4
• Classification and Regression Trees (CART) - 2
11
Experimental Design
Single Best Model Generalization Error
• Split data into 3 partitions
– Training set
– Validation set
– Independent test set - the test set is randomly selected and equals 10% of the data
• Partition remaining data into 10 mutually exclusive sets – ten fold cross validation
• One partition is the validation set
12
Experimental Design
Single Best Model Generalization Error
• Collapse other 9 partitions and use as training set
• Train each of the 24 models and then compute error on the validation set
• Repeat 10 times using each partition as the validation set
13
Experimental Design
Single Best Model Generalization Error
• Model with the lowest error across the 10 fold cross validation runs is called the single best model
• Test the single best model against the hold out test set and compute the error
• Repeat previous steps 100 times
• Determine the Single Best Model generalized error over all 100 runs
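The three slides above describe a repeatable procedure, so a compact sketch may help. It assumes scikit-learn style classifiers with fit/predict and NumPy arrays; the two-model pool stands in for the paper's 24 models, and all names are illustrative placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Placeholder pool; the study uses 24 models (LDA, LR, KNN, KD, MLP, MOE, RBF, CART).
MODELS = {"LR": LogisticRegression(max_iter=1000),
          "KNN3": KNeighborsClassifier(3)}

def single_best_generalization_error(X, y, runs=100, seed=0):
    rng = np.random.RandomState(seed)
    test_errors = []
    for _ in range(runs):
        # Hold out a random 10% test set, then run 10-fold CV on the remainder.
        X_rem, X_test, y_rem, y_test = train_test_split(
            X, y, test_size=0.10, random_state=rng)
        cv_error = {name: [] for name in MODELS}
        folds = KFold(10, shuffle=True, random_state=rng.randint(1 << 30))
        for train_idx, val_idx in folds.split(X_rem):
            for name, model in MODELS.items():
                model.fit(X_rem[train_idx], y_rem[train_idx])
                err = np.mean(model.predict(X_rem[val_idx]) != y_rem[val_idx])
                cv_error[name].append(err)
        # Single best model = lowest mean validation error across the 10 folds.
        best = min(cv_error, key=lambda n: np.mean(cv_error[n]))
        MODELS[best].fit(X_rem, y_rem)
        test_errors.append(np.mean(MODELS[best].predict(X_test) != y_test))
    return np.mean(test_errors)  # generalized error over all runs
```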
14
Experimental Design
Baseline Bagging Ensembles
• Bagging means Bootstrap Aggregating
• Baseline means the ensemble is composed of just one of the 24 models
• There are 24 baseline bagging ensembles – one for each model
15
Experimental Design
Baseline Bagging Ensembles
• Essentially, the same ten fold cross validation approach is used
• Perturb the training data (sample with replacement) for each of the 24 models in the ensemble
– Creates 24 uniquely weighted models with the same general architecture within the ensemble
• Use majority vote to determine aggregate decisions for the test set
• Use 500 runs to determine mean generalization error for each ensemble
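A minimal sketch of one baseline bagging ensemble as described on this slide: bootstrap-trained copies of a single model type, aggregated by majority vote on the test set. The scikit-learn estimator and helper name are assumptions for illustration, not the paper's code.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def baseline_bagging_predict(base_model, X_train, y_train, X_test,
                             n_members=24, seed=0):
    """Train n_members bootstrap replicas of one model type and return
    the majority-vote class labels for X_test (assumes integer labels)."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)            # bootstrap resample
        member = clone(base_model).fit(X_train[idx], y_train[idx])
        votes.append(member.predict(X_test))
    votes = np.array(votes)                          # shape: (members, test cases)
    # Majority vote: the most frequent predicted label for each test record.
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

# Example: a baseline CART bagging ensemble (one of the 24 model types).
# preds = baseline_bagging_predict(DecisionTreeClassifier(), X_tr, y_tr, X_te)
```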
16
Experimental Design
Diverse Bagging Ensembles
• Same as the baseline bagging procedure but now pre-select the models in the ensemble
• From the 24 models, randomly choose ensembles of 24, 12, 8, 6, 4, 3, and 2 different models
• Also, restrict the choice of models for the ensemble to the top 50% and top 25% of the single best models
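A small sketch of the selection step described above, assuming a dict of single best model errors keyed by model name; the function and parameter names are placeholders, not the paper's code.

```python
import numpy as np

def pick_diverse_ensemble(model_names, single_best_error, k, top_fraction=1.0, seed=0):
    """Randomly choose k distinct model types, optionally restricted to the
    top fraction of models ranked by single best model generalization error."""
    rng = np.random.default_rng(seed)
    ranked = sorted(model_names, key=lambda m: single_best_error[m])
    pool = ranked[:max(k, int(round(top_fraction * len(ranked))))]
    return list(rng.choice(pool, size=k, replace=False))

# e.g. a 3-model ensemble drawn from the top 25% of the 24 models:
# members = pick_diverse_ensemble(names, errors, k=3, top_fraction=0.25)
```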
17
Results
Generalized Error
Initial Results
Design and Results of Greedy Algorithm
18
Initial Results: Comparison of Generalized Error

Model Selection Strategy            Prognostic Data   Cytology Data
Single Best Model                   0.226             0.029
Baseline Bagging Ensemble           0.209             0.028
Diverse Random Ensembles            0.215             0.033
Top 50% Diverse Bagging Ensemble    0.209             0.031
Top 25% Diverse Bagging Ensemble    0.203             0.027
19
Discussion of Initial Results
• The existing literature uses either the single best or baseline bagging model selection strategy
• By using a diverse initial group of 24 models and then creating diverse ensembles comprised of the top 25% of the single best models, the generalized errors were significantly lowered
20
Some Observations about Initial Results
• A large number of single models were the best at least once over the 100 runs
• Baseline bagging ensembles did poorly on the cytology data compared to the single best model
– We found near unanimous consensus among the models, indicated by high levels of error correlation (see the sketch below)
– Indicates that bootstrapping may not create model instability
• The best diverse ensembles were comprised of 3 different models, and the worst all had more than 6
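For reference, a hedged sketch of one way the error correlation mentioned above can be computed: correlate the 0/1 error indicators of the ensemble members on a common test set (high correlation means near unanimous decisions and low instability). This illustrates the idea, not the authors' exact measure.

```python
import numpy as np

def mean_error_correlation(member_predictions, y_true):
    """member_predictions: (n_members, n_cases) array of predicted labels.
    Returns the mean pairwise correlation of the members' error indicators."""
    errors = (member_predictions != y_true).astype(float)  # 1 where a member errs
    corr = np.corrcoef(errors)                             # n_members x n_members
    upper = np.triu_indices_from(corr, k=1)                # distinct pairs only
    return np.nanmean(corr[upper])                         # NaNs arise for error-free members
```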
21
Development of Greedy Ensemble Algorithm
Create ensembles with:
• Diversity of models (3 to 6 different models in the ensemble)
• Each model has a low generalization error in the single best model results
• Each model has low error correlation levels (i.e., high model instability) in the baseline bagging results
22
Model Selection for Greedy Algorithm
• Regressing generalized error as a function of model instability, we found a few models with high instability and low generalized error for each data set
• We chose 3 different models for the greedy algorithm ensemble
– Prognostic: RBFd, LR, CARTb
– Cytology: RBFc, RBFb, MOEa
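A short sketch of the selection rule these two slides describe: keep model types that combine low single best generalization error with low error correlation (high instability). The additive score is an assumption made for illustration; the paper identifies the models by regressing generalized error on instability.

```python
def greedy_pick(model_names, gen_error, error_corr, k=3):
    """Rank model types by low generalization error plus low error correlation
    (i.e. high instability) and keep the k best for the ensemble."""
    return sorted(model_names, key=lambda m: gen_error[m] + error_corr[m])[:k]

# e.g. greedy_pick(names, gen_error, error_corr) would ideally recover the
# slide's choices: RBFd, LR, CARTb (prognostic) or RBFc, RBFb, MOEa (cytology).
```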
23
Results: Comparison of Greedy Algorithm Ensemble to Top 25% Ensemble

Model Selection Strategy            Prognostic Data   Cytology Data
Top 25% Diverse Bagging Ensemble    0.203             0.027
Greedy Algorithm Ensemble           0.194             0.026
24
Concluding Discussion
25
Investigated the Performance of MDSS Comprised of:
• Single models
• Baseline bagging ensembles
• Diverse random bagging ensembles
• Selected bagging ensembles
• Greedy Algorithm bagging ensembles
26
We Found That
• MDSS comprised of a single model or an ensemble of single models (baseline) do not perform well
• Diverse ensembles are better
• Diverse ensembles comprised of models with low generalized errors from the single best model results are still better
• The best results are achieved by diverse ensembles comprised of models with low generalized error from the single best model results and high instability from the baseline bagging results
27
Limitations
• Limitations of the Methodology
– Use of plurality voting
– Use of only bootstrap
• Limitations of Applicability
– Only 2 data sets
• Both data sets regard breast cancer
– MDSS is crude – not for commercial applications