
Chapter 1

Streak-Detection in Dermoscopic Color Images using Localized Radial Flux of Principal Intensity Curvature

HENGAMEH MIRZAALIAN

Medical Image Analysis Lab, Simon Fraser University, BC, Canada. [email protected]

TIM K. LEE

Photomedicine Institute, Department of Dermatology and Skin Science, University of British Columbia and Vancouver Coastal Health Research Institute, and Cancer Control Research Program, BC Cancer Agency, BC, Canada. [email protected]

GHASSAN HAMARNEH

Medical Image Analysis Lab, Simon Fraser University, BC, Canada. [email protected]

1.1 Introduction

Malignant melanoma (MM) is one of the most frequent types of cancer among the world's white population [1]. Dermoscopy is a noninvasive method for the early recognition of MM, allowing a better visualization of skin structures.


Figure 1.1: Examples of a real (a) and a simulated (e) hair-occluded image. (b) and (f) are the hair masks of the images in (a) and (e), respectively. (c) and (g) are the hair-free images of (a) and (e), respectively: (c) is generated by applying the in-painting approach in [2], whereas (g) was the original skin image processed by the HairSim software to generate the simulated hair-occluded image in (e). (d) and (h) are masks of the segmented lesions in (c) and (g).

Studies have shown that dermoscopic features can improve the diagnostic accuracy for melanoma, but these features must be interpreted by trained physicians; without proper training, they can be confusing. Recently, a considerable amount of research has focused on automating the feature extraction and classification of dermoscopic images. Among these features, streaks are important: although streaks in children are likely to be benign, adults with streaks should be examined carefully, and irregular streaks in particular are often associated with melanoma. In this chapter, we present a fully automated method for streak detection based on a machine learning approach, which is useful for a computer-aided diagnosis (CAD) system for pigmented skin lesions. In the following paragraphs, the typical pipeline of such a CAD system is described.

Artifact Removal. The first step in a CAD system for lesions in dermoscopic images is artifact removal as a pre-processing step; dermoscopic images are often occluded by artifacts such as black frames, rulers, air bubbles, and hairs (Figure 1.1). In particular, hairs are the most common artifacts. The existence of


such artifacts complicates lesion segmentation, feature detection, and classification. Although artifact removal has been investigated extensively [2, 3, 4, 5, 6], the problem has not been fully solved. One difficulty is the lack of validation tools: since hairs are very thin and of spatially varying width, manually preparing a ground truth for a large number of hair pixels is exorbitantly tedious, let alone for a large number of images. To assist the validation and benchmarking of hair enhancement and segmentation, we developed a simulator, HairSim, which is publicly available at www.cs.sfu.ca/~hamarneh/software/hairsim. An example of a hair-occluded image simulated by HairSim is shown in Figure 1.1(e).

In-painting. After identifying the artifacts, the next step is to replace the pixels comprising the artifact with new values estimating the underlying color of the scene. In computer vision, the technique of reconstructing lost or deteriorated parts of an image is called in-painting. There exist only limited works on in-painting of dermoscopic images, e.g. using color [7] and texture [8] diffusion. An example of an in-painted (hair-disoccluded) image is shown in Figure 1.1(c).
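For orientation, the following is a minimal sketch of this step in Python, using OpenCV's generic Telea in-painting rather than the diffusion-based methods of [7, 8]; the file names and the source of the hair mask are placeholders:

import cv2

# Load the dermoscopic image and a binary hair mask (nonzero = hair pixel);
# both file names are hypothetical.
image = cv2.imread("dermoscopy.png")
hair_mask = cv2.imread("hair_mask.png", cv2.IMREAD_GRAYSCALE)

# Slightly dilate the mask so the thin, spatially varying hair borders are covered.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
hair_mask = cv2.dilate(hair_mask, kernel)

# Replace the masked pixels with an estimate of the underlying skin/lesion color.
hair_free = cv2.inpaint(image, hair_mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("hair_free.png", hair_free)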

Lesion Segmentation. The next step in the traditional pipeline is lesion segmentation, for which there exist considerable works, mostly using color clustering [3, 9, 10], region growing [11, 12, 13, 14, 15], active contours [16, 17], and graph labeling [18, 19, 20] approaches (recent surveys: [21, 22]). Examples of lesion segmentation masks are shown in Figure 1.1.

Feature Extraction and Classification. The last step of a CAD system is feature extraction and classification. A notable number of methods have been proposed to this end. In general, we classify the existing feature descriptors into three major groups:

• Color-based features, which are simple statistics of pixel intensities, such as means and variances in different color spaces (e.g. RGB, HSI, and Luv) [23, 24, 25, 26].

• Statistical texture descriptors, which measure texture properties such as smoothness, coarseness, and regularity of the lesion, e.g. intensity distribution descriptors [27, 28, 29, 30], wavelet-based (WT) descriptors [31, 32, 26, 10] (the mean and variance of the WT coefficients of the different sub-bands), SIFT descriptors [32], and gray-level dependence matrix (GLDM) descriptors [27] (rotation-invariant descriptors that consider the relation between a pixel and all of its neighbors).

• Geometric-based features, which describe the shape or the spatial relationship of a lesion, mainly with respect to the segmented border, e.g. elongation or border irregularity [23, 24, 25, 33, 34]. Recently, geometric information has also been extracted from lesions for orientation analysis of structures, e.g. by considering histograms of oriented gradients [18], computing the local isotropy of textures using the image gradient [35], and detecting cyclic subgraphs corresponding to skin texture structures [36].


In this chapter, we focus on the extraction of a new geometric-based feature for streak detection. Streaks, also referred to as radial streamings, appear as linear structures located at the periphery of a lesion and are classified as either regular or irregular depending on the appearance of their intensity, texture, and color distribution [37]. Examples of dermoscopic images in the absence and presence of streaks are shown in Figure 1.2. We note that the appearance of vasculature in biomedical images resembles, to some degree, the appearance of streaks in dermoscopic images. Despite notable differences (e.g. vessel images are typically single channel, whereas dermoscopic images are in color), methods for dermoscopic image analysis stand to benefit from methods for the detection and analysis of tubular structures, as witnessed in state-of-the-art research on vascular image analysis. In the following, we describe our streak detection approach, which is based on our earlier work [38].

We extract the orientation information of streaks through the eigenvalue decomposition of the Hessian matrix (Section 1.2.2). After estimating tubularness and streak direction, we define a vector field in order to quantify the radial component of the streak pattern. In particular, we compute the amount of flux of the calculated vector field passing through iso-distance lesion contours. We construct our appearance descriptor based on the mean and variance of this flux through different concentric bands of the lesion, which in turn allows for more localized features without the prerequisite of explicitly calculating a point-to-point correspondence between the lesion shapes (Section 1.2.3). We validate the classification accuracy of a support vector machine (SVM) classifier based on our extracted features (Section 1.3). Our results on 99 dermoscopic images show that we obtain improved classification, by up to 9% in terms of area under the receiver operating characteristic (ROC) curve, compared to the state of the art (Section 1.4).

1.2 Methods

1.2.1 Tubularness Filter for Streak Enhancement

Frangi et al. [39] proposed to measure the tubularness ν(x, s) at pixel x = (x, y) for scale s using:

\nu(x,s) = e^{\frac{-R^{2}(x,s)}{2\beta^{2}}}\left(1 - e^{\frac{-S^{2}(x,s)}{2c^{2}}}\right), \qquad
R(x,s) = \frac{\lambda_{1}(x,s)}{\lambda_{2}(x,s)}, \qquad
S(x,s) = \sqrt{\textstyle\sum_{i \le 2} \lambda_{i}^{2}(x,s)}    (1.1)

where λ_i(x, s), i = 1, 2 (|λ_1| ≤ |λ_2|) are the eigenvalues, resulting from singular value decomposition (SVD), of the Hessian matrix (the 2×2 matrix of second partial derivatives of I(x, y)) of the image I computed at scale s. R and S are measures of blobness and second-order structureness, respectively, and β and c are parameters that control the sensitivity of the filter to the measures R and S.


Figure 1.2: (a) Dermoscopic images overlaid with the segmented lesion borders; a close-up of the region inside the blue box is shown in the row underneath each image. (b, c) Frangi filter responses ν+ (b) and ν− (c) of (1.2), which are encoded in the red and green channels in (d). (e) Direction of the minimum intensity curvature. The rows show images in the absence of streaks (ABS, first two rows), in the presence of regular streaks (REG, middle two rows), and in the presence of irregular streaks (IRG, bottom two rows).


Since the sign of the largest eigenvalue indicates the brightness or darkness of the pixels (i.e. dark-on-bright vs. bright-on-dark), the following sign tests are used to determine the tubularness of the light, ν−, and dark, ν+, structures [39], where the superscripts − and + indicate that the pixels lie inside light and dark tubular objects, respectively:

\nu^{-}(x,s) = \begin{cases} 0 & \text{if } \lambda_{2}(x,s) > 0 \\ \nu(x,s) & \text{if } \lambda_{2}(x,s) < 0 \end{cases}
\qquad
\nu^{+}(x,s) = \begin{cases} 0 & \text{if } \lambda_{2}(x,s) < 0 \\ \nu(x,s) & \text{if } \lambda_{2}(x,s) > 0 \end{cases}    (1.2)

Note that ν(x, s) = ν−(x, s) + ν+(x, s). Figure 1.2(b-d) shows examples of the computed ν+ and ν− for dermoscopic images of the different types: in the absence of streaks, in the presence of regular streaks, and in the presence of irregular streaks, denoted by ABS, REG, and IRG, respectively.
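To make the computation concrete, the following is a minimal single-scale sketch of (1.1) and (1.2) in Python (NumPy/SciPy); the parameter values (sigma, beta, c) are illustrative assumptions rather than the chapter's settings, and the function name is ours:

import numpy as np
from scipy.ndimage import gaussian_filter

def tubularness(gray, sigma=3.0, beta=0.5, c=15.0):
    """Single-scale Frangi tubularness (1.1) and sign test (1.2) on a gray image."""
    g = gray.astype(float)
    # Scale-space Hessian: second Gaussian derivatives, scale-normalized by sigma^2.
    Hxx = gaussian_filter(g, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(g, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(g, sigma, order=(1, 1)) * sigma**2

    # Eigenvalues of the symmetric 2x2 Hessian, ordered so that |lam1| <= |lam2|.
    root = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy**2)
    mu1, mu2 = 0.5 * (Hxx + Hyy + root), 0.5 * (Hxx + Hyy - root)
    swap = np.abs(mu1) > np.abs(mu2)
    lam1 = np.where(swap, mu2, mu1)            # smaller-magnitude eigenvalue
    lam2 = np.where(swap, mu1, mu2)            # larger-magnitude eigenvalue

    # Blobness R, structureness S, and tubularness nu as in (1.1).
    R = lam1 / (lam2 + 1e-12)
    S = np.sqrt(lam1**2 + lam2**2)
    nu = np.exp(-R**2 / (2 * beta**2)) * (1.0 - np.exp(-S**2 / (2 * c**2)))

    # Sign test (1.2): dark (nu_plus) vs. light (nu_minus) tubular structures.
    nu_plus = np.where(lam2 > 0, nu, 0.0)
    nu_minus = np.where(lam2 < 0, nu, 0.0)

    # Streak direction: angle of the eigenvector of lam1 (minimum intensity curvature).
    theta = np.arctan2(lam1 - Hxx, Hxy + 1e-12)
    return nu_plus, nu_minus, theta

In practice the response is taken over several scales (e.g. the maximum over a range of sigma values); a related multiscale implementation of the Frangi filter is available as skimage.filters.frangi in scikit-image.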

1.2.2 Flux Analysis of the Streaks' Principal Curvature Vectors

While computing the tubularness of the streaks using (1.2), we also obtain an estimate of the streak direction θ(x, s), computed as the angle between the x-axis and the eigenvector corresponding to λ_1(x, s), which points along the direction of the minimum intensity curvature. Given θ and ν, we define a "streak vector field" as:

\vec{E}^{+} = \big(\nu^{+}\cos\theta,\; \nu^{+}\sin\theta\big), \qquad
\vec{E}^{-} = \big(\nu^{-}\cos\theta,\; \nu^{-}\sin\theta\big)    (1.3)

To quantify the radial "streaming pattern" of the vector field with respect to a lesion contour C, we measure the amount of flow of E parallel and perpendicular to C, denoted by φ_∥ and φ_⊥, respectively, using:

\phi^{+}_{\parallel}(\vec{E}^{+}, C) = \oint_{C} \|\vec{E}^{+} \times \vec{n}\| \, dc, \qquad
\phi^{-}_{\parallel}(\vec{E}^{-}, C) = \oint_{C} \|\vec{E}^{-} \times \vec{n}\| \, dc,

\phi^{+}_{\perp}(\vec{E}^{+}, C) = \oint_{C} |\langle \vec{E}^{+}, \vec{n}\rangle| \, dc, \qquad
\phi^{-}_{\perp}(\vec{E}^{-}, C) = \oint_{C} |\langle \vec{E}^{-}, \vec{n}\rangle| \, dc    (1.4)

where n is the normal vector to C, × and ⟨·,·⟩ denote the cross and dot products between vectors, and ‖·‖ and |·| denote the L2 norm of a vector and the absolute value of a scalar, respectively.


By computing (1.4), we can state our hypothesis as follows: in the presence of streaks along the contour C, φ_∥ and φ_⊥ take low and high values, respectively, capturing the known radial characteristic of streaks. In Section 1.2.3, we discuss how to utilize the measured flux to construct a feature vector for streak detection.
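As an illustration of how (1.4) can be approximated on a discrete image grid, the sketch below sums the field-normal and field-tangential components over the points of a contour; it assumes the field is given as two component images (Ex, Ey), that the region bounded by C is available as a binary mask, and that a gradient of the distance map is an acceptable stand-in for the contour normal. The helper name contour_flux is ours:

import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from skimage.measure import find_contours

def contour_flux(Ex, Ey, region_mask):
    """Approximate (phi_parallel, phi_perp) of the field (Ex, Ey) through the
    boundary of the binary region_mask."""
    # Outward normal direction ~ gradient of the distance-to-region map.
    dist_out = distance_transform_edt(~region_mask.astype(bool))
    gy, gx = np.gradient(dist_out.astype(float))

    phi_par, phi_perp = 0.0, 0.0
    for contour in find_contours(region_mask.astype(float), 0.5):
        rows, cols = contour[:, 0], contour[:, 1]
        # Sample field and normal at the (sub-pixel) contour points.
        ex = map_coordinates(Ex, [rows, cols], order=1)
        ey = map_coordinates(Ey, [rows, cols], order=1)
        nx = map_coordinates(gx, [rows, cols], order=1)
        ny = map_coordinates(gy, [rows, cols], order=1)
        norm = np.hypot(nx, ny) + 1e-12
        nx, ny = nx / norm, ny / norm
        # Arc length of each contour segment (dc), closing the loop.
        dc = np.hypot(np.diff(rows, append=rows[:1]), np.diff(cols, append=cols[:1]))
        phi_perp += np.sum(np.abs(ex * nx + ey * ny) * dc)   # |<E, n>|
        phi_par += np.sum(np.abs(ex * ny - ey * nx) * dc)    # ||E x n|| in 2D
    return phi_par, phi_perp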

Note that in our implementation we obtain an initial estimate of C by applying a binary graph cut segmentation [40], where the data and regularization terms are set using the distribution of the pixel intensities and the Potts model, respectively [41]. The intensity distributions of the foreground and background are estimated by clustering the image pixels, in color space, into two distinct clusters.
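A rough sketch of this initialization is given below, assuming k-means color clustering for the foreground/background models and the third-party PyMaxflow package for the graph cut; the Gaussian likelihood models, the choice of the darker cluster as the lesion, and the Potts weight are all illustrative assumptions:

import numpy as np
import maxflow
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal

def segment_lesion(rgb, potts_weight=2.0):
    """Return a boolean lesion mask for an RGB dermoscopic image."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)

    # Two clusters in color space; the darker cluster is taken as the lesion.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    lesion_label = np.argmin(km.cluster_centers_.sum(axis=1))

    # Per-pixel negative log-likelihoods under Gaussian fg/bg color models.
    costs = []
    for label in (lesion_label, 1 - lesion_label):
        cluster = pixels[km.labels_ == label]
        model = multivariate_normal(cluster.mean(0), np.cov(cluster.T) + 1e-3 * np.eye(3))
        costs.append(-model.logpdf(pixels).reshape(h, w))
    d_fg, d_bg = costs

    # Graph cut with a Potts regularizer on the 4-connected grid.
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((h, w))
    g.add_grid_edges(nodes, potts_weight)      # pairwise (Potts) term
    g.add_grid_tedges(nodes, d_fg, d_bg)       # unary data terms
    g.maxflow()
    return g.get_grid_segments(nodes)          # True = lesion pixels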

1.2.3 Streak Detection Features

We measure φ_∥ and φ_⊥ according to (1.4) over iso-distance contours of the lesion, where each contour is the locus of the pixels that have equal distance from the outer lesion contour C_o. We calculate the distance transform (DT) of the lesion mask to extract the iso-distance contours, denoted C_d, where d represents the distance between C_d and C_o. Figure 1.3 shows an example of the computed DT of a lesion mask and the iso-distance contours C_d. We compute the mean and variance of the flux over the different bands of the lesion, where the Kth band of thickness Δ, B_{K,Δ}, is defined as the region limited between the contours C_{KΔ} and C_{(K−1)Δ} and is given by:

B_{K,\Delta}(x) = \delta(C_{K\Delta}(x)) \cap \big(1 - \delta(C_{(K-1)\Delta}(x))\big)    (1.5)

where δ(C) denotes the region inside contour C. Therefore, the mean and variance of the flux over band B_{K,Δ} are given by:

\mu^{\pm}_{K,\parallel} = \sum_{d=(K-1)\Delta}^{K\Delta} \phi^{\pm}_{\parallel}(C_{d}) \Big/ \int_{x\in\Omega} B_{K,\Delta}(x)\,dx, \qquad
\sigma^{\pm}_{K,\parallel} = \sqrt{\, \sum_{d=(K-1)\Delta}^{K\Delta} \big(\phi^{\pm}_{\parallel}(C_{d}) - \mu^{\pm}_{K,\parallel}\big)^{2} \Big/ \int_{x\in\Omega} B_{K,\Delta}(x)\,dx }    (1.6)


Figure 1.3: The iso-distance contours and sub-bands of a lesion. (a) Lesion mask. (b) Distance transform of (a). (c) Iso-distance contours C_d of the lesion (d = 0, 30, 60, 90), where d represents the distance between C_d and the lesion border in (a). (d) Bands of the lesion defined according to (1.5) between the contours in (c).

\mu^{\pm}_{K,\perp} = \sum_{d=(K-1)\Delta}^{K\Delta} \phi^{\pm}_{\perp}(C_{d}) \Big/ \int_{x\in\Omega} B_{K,\Delta}(x)\,dx, \qquad
\sigma^{\pm}_{K,\perp} = \sqrt{\, \sum_{d=(K-1)\Delta}^{K\Delta} \big(\phi^{\pm}_{\perp}(C_{d}) - \mu^{\pm}_{K,\perp}\big)^{2} \Big/ \int_{x\in\Omega} B_{K,\Delta}(x)\,dx }    (1.7)

where Ω is the image domain. Note that the denominator in (1.6) and (1.7) corresponds to the area of the Kth band, which is used to normalize the extracted features. After computing µ and σ of the flux for the N different bands (K = 1, 2, ..., N), our SVD-flux-based feature vector, denoted SVD-FLX, is constructed by concatenating the measurements of the different bands and is given by:

SVD\text{-}FLX = \big[\, \mu^{+}_{1,\parallel}\; \mu^{-}_{1,\parallel}\; \sigma^{+}_{1,\parallel}\; \sigma^{-}_{1,\parallel}\; \mu^{+}_{1,\perp}\; \mu^{-}_{1,\perp}\; \sigma^{+}_{1,\perp}\; \sigma^{-}_{1,\perp}\; \ldots\; \mu^{+}_{N,\parallel}\; \mu^{-}_{N,\parallel}\; \sigma^{+}_{N,\parallel}\; \sigma^{-}_{N,\parallel}\; \mu^{+}_{N,\perp}\; \mu^{-}_{N,\perp}\; \sigma^{+}_{N,\perp}\; \sigma^{-}_{N,\perp} \,\big]    (1.8)

Note that, to make use of the color information, the tubularness in (1.1) can also be measured using the eigenvalues of the quaternion Hessian matrix of the color image [42]. We denote the feature vector utilizing the quaternion Hessian matrix by QSVD-FLX and provide a comparison between the classification accuracies of SVD-FLX and QSVD-FLX in Section 1.4.
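The band-localized statistics of (1.5)-(1.8) can be sketched as follows, reusing the contour_flux helper from the previous sketch; the band thickness, the number of bands, and the function name are illustrative assumptions, and the E+ and E- fields are passed as (Ex, Ey) pairs:

import numpy as np
from scipy.ndimage import distance_transform_edt

def svd_flx_features(E_plus, E_minus, lesion_mask, delta=30, n_bands=3):
    """E_plus, E_minus: (Ex, Ey) pairs; returns the concatenated feature vector."""
    dt = distance_transform_edt(lesion_mask)            # distance from the border C_o
    features = []
    for k in range(1, n_bands + 1):
        band = (dt > (k - 1) * delta) & (dt <= k * delta)    # band B_{K,delta} of (1.5)
        area = float(band.sum()) + 1e-12                     # band area (the normalizer)
        flux = {"+par": [], "-par": [], "+perp": [], "-perp": []}
        for d in range((k - 1) * delta, k * delta):
            region = dt > d                                  # region bounded by C_d
            if not region.any():
                break
            p_par, p_perp = contour_flux(*E_plus, region)
            m_par, m_perp = contour_flux(*E_minus, region)
            flux["+par"].append(p_par)
            flux["-par"].append(m_par)
            flux["+perp"].append(p_perp)
            flux["-perp"].append(m_perp)
        stats = {}
        for key, values in flux.items():
            v = np.asarray(values)
            mu = v.sum() / area                              # mean as in (1.6)-(1.7)
            stats[key] = (mu, np.sqrt(((v - mu) ** 2).sum() / area))
        # Concatenate per band following the order of (1.8).
        features += [stats["+par"][0], stats["-par"][0], stats["+par"][1], stats["-par"][1],
                     stats["+perp"][0], stats["-perp"][0], stats["+perp"][1], stats["-perp"][1]]
    return np.asarray(features)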

1.3 Machine Learning for Streak Classification

The final step in our approach is to learn how the extracted descriptors can best distinguish the three different classes: the absence (ABS), presence of regular (REG), or presence of irregular (IRG) streaks in the


dermoscopic images. The three-class classification task is realized using an efficient pairwise classification scheme, where each pairwise classifier is a non-linear SVM trained and then validated according to a leave-one-out scheme [43].

The SVM classifier requires the setting of two parameters: ξ, which assigns a penalty to errors, and γ, which defines the width of the radial basis function kernel [44]. We compute the false positive (FP) and true positive (TP) rates of the classifier for different values of ξ and γ in a logarithmic grid search (from 2^-8 to 2^8) to create an ROC curve. Each pair of parameters (ξ_i, γ_j) therefore generates a point (FP_ij, TP_ij) in the graph. The ROC curve is constructed by selecting the set of optimal operating points: a point (FP_ij, TP_ij) is optimal if there is no other point (FP_mn, TP_mn) such that FP_mn ≤ FP_ij and TP_mn ≥ TP_ij. We use the area under the ROC curves obtained from classification with the different descriptors to compare their discriminatory power.
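A rough sketch of this procedure with scikit-learn is given below; the penalty parameter ξ corresponds to the SVC parameter C and the RBF width to gamma, and X and y are placeholders for the feature vectors and binary labels of one pairwise problem:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def roc_points(X, y):
    """Return the Pareto-optimal (FP rate, TP rate) points over the (C, gamma) grid."""
    points = []
    for C in 2.0 ** np.arange(-8, 9):
        for gamma in 2.0 ** np.arange(-8, 9):
            clf = SVC(C=C, gamma=gamma, kernel="rbf")
            # Leave-one-out predictions for this parameter pair.
            pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
            tp = np.mean(pred[y == 1] == 1)      # true positive rate
            fp = np.mean(pred[y == 0] == 1)      # false positive rate
            points.append((fp, tp))
    # Keep only operating points that are not dominated by any other point.
    optimal = [p for p in points
               if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in points)]
    return sorted(optimal)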

1.4 Results

The proposed algorithm has been tested on 99 dermoscopic images of 768×512 pixels from Argenziano et al.'s atlas of dermoscopy [37], including 33 images in which streaks are absent (ABS), 33 images in which regular streaks are present (REG), and 33 images with irregular streaks (IRG). Note that the whole dataset in [37] consists of 527 images of different resolutions, ranging from 0.033 to 0.5 mm/pixel. The 99 images were selected from the 527 such that a complete lesion occupying more than 10% of the image is visible, since only then is the lesion texture reasonably visible and suitable for analysis.

Figure 1.4 and Table 1.1 show the classification accuracies of the different descriptors in terms of ROC curves and the areas under them, where GLOB, WT, SVD-FLX, and QSVD-FLX denote the global descriptors used in [24], the WT-based descriptors used in [26], and our flux-based descriptors using the eigenvalues of the luminance image and of the RGB image in (1.1), respectively. GLOB is constructed using the mean and variance of pixel intensities in different color spaces (RGB, HSI, and Luv) together with border irregularity measures, the latter computed from the change in the lesion contour pixels' coordinates relative to the lesion's centroid and from the ratio between the lesion contour length and the maximum axis of the contour's convex hull (details in [24]). The WT-based descriptors are constructed by concatenating the mean and variance of the WT coefficients of the different sub-bands, using Haar wavelets and three decomposition levels (details in [26]). In the last row of Table 1.1, we provide multi-class classification accuracies as the geometric mean of the pairwise classifiers, as suggested in [45].

The results indicate that, averaged over all the groups (last row of Table 1.1), we obtain 92.66% accuracy for QSVD-FLX, an increase of up to 9% in terms of area under the ROC curves compared with GLOB and WT. Furthermore, QSVD-FLX, which takes the color information into account, yields an average improvement of 2.5% in classification accuracy compared with SVD-FLX.
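For reference, a minimal sketch of the WT baseline descriptor just described (mean and variance of the Haar wavelet coefficients over three decomposition levels), using the PyWavelets package; it mirrors the cited construction only approximately:

import numpy as np
import pywt

def wt_descriptor(gray, wavelet="haar", levels=3):
    """Concatenate the mean and variance of every wavelet sub-band."""
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=levels)
    feats = []
    for band in coeffs:
        # The first element is the approximation; the rest are (cH, cV, cD) tuples.
        arrays = band if isinstance(band, tuple) else (band,)
        for a in arrays:
            feats.extend([a.mean(), a.var()])
    return np.asarray(feats)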


Figure 1.4: ROC curves (false positive rate vs. true positive rate) of the pairwise classifiers, (a) ABS vs. REG, (b) REG vs. IRG, and (c) ABS vs. IRG, obtained using the different descriptors (GLOB, WT, SVD-FLX, and QSVD-FLX). The areas under the ROC curves are reported in Table 1.1.

Group 1 × Group 2    Area under the ROC curves                        Selected Descriptor(s)
                     GLOB      WT        SVD-FLX   QSVD-FLX
ABS × REG            0.8913    0.9130    0.9130    0.9565             SVD-FLX
ABS × IRG            0.8091    0.8478    0.8478    0.8696             QSVD-FLX
REG × IRG            0.8781    0.7439    0.9783    0.9565             SVD-FLX
Geometric Mean       0.8587    0.8319    0.9115    0.9266             QSVD-FLX

Table 1.1: Areas under the ROC curves in Figure 1.4. The last column shows the descriptor(s) that resulted in the highest AUC. Note that we report multi-class classification accuracy in terms of the geometric mean (GM) of the pairwise classifiers (as done in [45]) in the last row.
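As a quick check on the last row, the multi-class figure is simply the geometric mean of the three pairwise AUCs in the same column; for QSVD-FLX, for instance,

\left(0.9565 \times 0.8696 \times 0.9565\right)^{1/3} \approx 0.9266.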

1.5 Summary

We reviewed the general pipeline of a CAD system for dermoscopic images, and described our appearance descriptor, which captures the tubularness in color dermoscopic images, is sensitive to the radial character of streaks, and is localized to different lesion bands (e.g. the most peripheral band, where streaks commonly appear). The experimental results show that we achieve improved classification compared to the state-of-the-art global and wavelet-transform-based descriptors. We plan to extend our method to detect and classify the presence of other dermoscopic features (e.g. pigment network, dots, vascular structures), an important step towards a machine-learning-based computer-aided diagnosis system for the early detection of MM.

Note that, besides dermoscopic imaging for the early detection of MM, dermatologists advocate total body photography for high-risk patients to identify newly appearing, disappearing, and changing pigmented skin lesions (PSL) by tracking PSL in the 2D digital color images (e.g. 8 pixels/mm) collected during regular examinations. However, manual inspection and matching of PSL is time consuming, error prone, and suffers from inter- and intra-rater variability. A computer program for tracking corresponding PSL would greatly improve the matching process. Although computer-based systems provide sophisticated functionalities for automated feature extraction and quantitative lesion assessment, there exist only limited works on computerized matching between lesions. We refer the reader to our earlier works [46, 47, 48], in which we focus on building an automatic PSL tracking system.

Acknowledgements

This work was supported in part by a scholarship from the Canadian Institutes of Health Research (CIHR) Skin Research Training Centre and by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Dermatology Foundation.

Bibliography

[1] “Canadian cancer society’s steering committee. Canadian cancer statistics, Toronto, Canada,” 2009.

[2] T. Lee, V. Nguyen, R. Gallagher, A. Coldman, and D. McLean, “DullRazor: a software approach to hair removal

from images,” Comput Biol Med, vol. 27, no. 3, pp. 533–543, 1997.

[3] P. Schmid-Saugeon, J. Guillod, and J. Thiran, “Towards a computer-aided diagnosis system for pigmented skin

lesions,” CMIG, vol. 27, no. 1, pp. 65 – 78, 2003.

[4] F. Xie, S. Qin, Z. Jiang, and R. Meng, “PDE-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma,” CMIG, vol. 33, no. 4, pp. 275–282, 2009.

[5] M. Fiorese, E. Peserico, and A. Silletti, “Virtualshave: Automated hair removal from digital dermatoscopic

images,” IEEE EMBC, pp. 4378–4381, 2011.

[6] A. Afonso and M. Silveira, “Hair detection in dermoscopic images using percolation,” IEEE EMBC, pp. 4378–

4381, 2012.

[7] P. Wighton, T. L. A, and S. Atkins, “Dermascopic hair disocclusion using inpainting,” SPIE, vol. 8, pp. 1–8,

2008.

[8] H. Zhoua, M. Chenb, R. Gassb, and J. Rehg, “Feature-preserving artifact removal from dermoscopy images,”

SPIE, vol. 6914, pp. 1–9, 2008.

[9] M. Celebi, S. Hwang, H. Iyatomi, and G. Schaefer, “Robust border detection in dermoscopy images using threshold fusion,” ICIP, 2010.


[10] A. Chiem, A. Al-Jumaily, and R. Khushaba, “A novel hybrid system for skin lesion detection,” ISSNP, pp. 567

–572, 2007.

[11] E. Zagrouba and W. Barhoumi, “A prelimary approach for the automated recognition of malignant melanoma,” Image Anal Stereol, vol. 23, pp. 121–135, 2004.

[12] H. Iyatomi, H. Oka, M. Celebi, M. Hashimoto, M. Hagiwara, M. Tanaka, and K. Ogawa, “An improved internet-based melanoma screening system with dermatologist-like tumor area extraction algorithm,” CMIG, vol. 32, no. 7, 2008.

[13] M. Celebi, H. Kingravi, H. Iyatomi, Y. Aslandogan, W. Stoecker, R. Moss, J. Malters, J. Grichnik, A. Marghoob,

and H. Rabinovitz, “Border detection in dermoscopy images using statistical region merging,” Skin Research and

Technology, vol. 14, no. 3, pp. 347–353, 2008.

[14] M. Celebi, Y. Aslandogan, W. Stoecker, H. Iyatomi, H. Oka, and X. Chen, “Unsupervised border detection in

dermoscopy images,” Skin Research and Technology, vol. 13, no. 4, pp. 454–462, 2007.

[15] M. Celebi, H. Kingravi, B. Uddin, H. Iyatomi, Y. Aslandogan, W. Stoecker, and R. Moss, “A methodological

approach to the classification of dermoscopy images,” CMIG, vol. 31, no. 6, pp. 362–373, 2007.

[16] B. Erkol, R. Moss, R. Joe, W. Stoecker, and E. Hvatum, “Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes,” Skin Research and Technology, vol. 11, no. 1, pp. 17–26, 2005.

[17] K. Taouil, N. Ben Romdhane, and M. Bouhlel, “A new automatic approach for edge detection of skin lesion

images,” vol. 1, pp. 212 –220, 2006.

[18] P. Wighton, T. Lee, H. Lui, D. McLean, and M. Atkins, “Generalizing common tasks in automated skin lesion

diagnosis,” Transactions on Information Technology in BioMedicine, pp. 1–8, 2011.

[19] P. Wighton, T. Lee, G. Mori, H. Lui, D. McLean, and M. Atkins, “Conditional random fields and supervised

learning in automated skin lesion diagnosis,” International journal of biomedical imaging, 2011.

[20] P. Wighton, M. Sadeghi, T. Lee, and M. Atkins, “A fully automatic random walker segmentation for skin lesions

in a supervised setting,” MICCAI, pp. 1108–1115, 2009.

[21] E. Celebi, H. Iyatomi, G. Schaefer, and V. Stoecker, “Lesion border detection in dermoscopy images,” CMIG,

vol. 33, no. 2, pp. 148 – 153, 2009.

[22] M. Silveira, J. Nascimento, J. Marques, A. Marcal, T. Mendonca, S. Yamauchi, J. Maeda, and J. Rozeira, “Comparison of segmentation methods for melanoma diagnosis in dermoscopy images,” IEEE Journal of Selected Topics in Signal Processing, vol. 3, pp. 35–45, Feb. 2009.

[23] G. Betta, G. D. Leo, G. Fabbrocini, A. Paolillo, and M. Scalvenzi, “Automated application of the 7-point check-list diagnosis method for skin lesions: Estimation of chromatic and shape parameters,” Instrumentation and Measurement Technology Conference, vol. 3, pp. 1818–1822, 2005.


[24] G. Fabbrocini, G. Betta, G. Leo, C. Liguori, A. Paolillo, A. Pietrosanto, P. Sommella, O. Rescigno, S. Cacciapuoti, F. Pastore, V. Vita, I. Mordente, and F. Ayala, “Epiluminescence image processing for melanocytic skin lesion diagnosis based on 7-point check-list: A preliminary discussion on three parameters,” The Open Dermatology Journal, vol. 4, pp. 110–115, 2010.

[25] A. Tenenhaus, A. Nkengne, J. Horn, C. Serruys, A. Giron, and B. Fertil, “Detection of melanoma from dermoscopic images of naevi acquired under uncontrolled conditions,” Skin Research and Technology, vol. 16, no. 1, pp. 85–97, 2009.

[26] G. Surowka and K. Grzesiak-Kopec, “Different learning paradigms for the classification of melanoid skin lesions

using wavelets,” in IEEE EMBS, pp. 3136 –3139, 2007.

[27] M. Anantha, R. H. Moss, and W. V. Stoecker, “Detection of pigment network in dermatoscopy images using

texture analysis,” Computerized Medical Imaging and Graphics, vol. 28, no. 5, pp. 225–234, 2004.

[28] H. Iyatomi, K. Norton, M. Celebi, G. Schaefer, M. Tanaka, and K. Ogawa, “Classification of melanocytic skin

lesions from non-melanocytic lesions,” IEEE EMBS, pp. 540–544, 2010.

[29] S. Tasoulis, C. Doukas, I. Maglogiannis, and V. Plagianakos, “Classification of dermatological images using

advanced clustering techniques,” IEEE EMBC, pp. 672–676, 2010.

[30] M. Celebi, A. Kingravi, Y. Alp, A. Ogan, and V. Stoecker, “Detection of blue-white veil areas in dermoscopy

images using machine learning techniques,” SPIE, vol. 6144, no. 5, pp. 1861–1868, 2006.

[31] M. Elbaum, A. Kopf, H. Rabinovitz, R. Langley, H. Kamino, M. Mihm, A. Sober, G. Peck, A. Bogdan,

D. Gutkowicz, M. Greenebaum, S. Keem, M. Oliviero, and S. Wang, “Automatic differentiation of melanoma

from melanocytic nevi with multispectral digital dermoscopy: A feasibility study,” Journal of the American

Academy of Dermatology, vol. 44, no. 2, pp. 207 – 218, 2001.

[32] N. Situ, T. Wadhawan, X. Yuan, and G. Zouridakis, “Modeling spatial relation in skin lesion images by the graph

walk kernel,” IEEE EMBC, pp. 613–616, 2010.

[33] T. K. Lee, D. McLean, and S. Atkins, “Irregularity index: a new border irregularity measure for cutaneous

melanocytic lesions,” MIA, vol. 7, no. 1, pp. 47–64, 2003.

[34] V. Ng, B. Fung, and T. Lee, “Determining the asymmetry of skin lesion with fuzzy borders,” Computers in

Biology and Medicine, vol. 35, no. 2, pp. 103–120, 2005.

[35] Z. She and P. S. Excell, “Skin pattern analysis for lesion classification using local isotropy,” Skin Research and

Technology, vol. 17, pp. 206–212, 2011.

[36] M. Sadeghi, T. Lee, D. McLean, H. Lui, and S. Atkins, “Detection and analysis of irregular streaks in dermoscopic images of skin lesions,” IEEE TMI, vol. 32, no. 5, pp. 849–861, 2013.

[37] G. Argenziano, H. Soyer, V. Giorgio, D. Piccolo, P. Carli, M. D. A. Ferrari, R. Hofmann, D. Massi, G. Mazzocchetti, M. Scalvenzi, and H. Wolf, Interactive Atlas of Dermoscopy. Edra Medical Publishing and New Media, 2000.


[38] H. Mirzaalian, T. Lee, and G. Hamarneh, “Learning features for streak detection in dermoscopic color images

using localized radial flux of principal intensity curvature,” IEEE MMBIA, pp. 97–101, 2012.

[39] A. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever, “Multiscale vessel enhancement filtering,” MICCAI, pp. 130–137, 1998.

[40] Y. Boykov and G. Funka-Lea, “Graph cuts and efficient N-D image segmentation,” Int. J. Comput. Vision, vol. 70,

pp. 109–131, Nov. 2006.

[41] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE TPAMI,

vol. 23, p. 2001, 1999.

[42] L. Shi, B. Funt, and G. Hamarneh, “Quaternion color curvature,” in Color Imaging, pp. 338–341, 2008.

[43] S. Park and J. Fürnkranz, “Efficient pairwise classification,” in ECML, vol. 4701, pp. 658–665, Springer, 2007.

[44] V. Vapnik, Statistical Learning Theory. Wiley, 1998.

[45] Y. Sun, M. Kamel, and Y. Wang, “Boosting for learning multiple classes with imbalanced class distribution,” in

IEEE ICDM, pp. 592–602, 2006.

[46] H. Mirzaalian, G. Hamarneh, and T. Lee, “Graph-based approach to skin mole matching incorporating template-normalized coordinates,” IEEE CVPR, pp. 2152–2159, 2009.

[47] H. Mirzaalian, T. Lee, and G. Hamarneh, “Uncertainty-based feature learning for skin lesion matching using a

high order MRF optimization framework,” MICCAI, pp. 98–105, 2012.

[48] H. Mirzaalian, T. Lee, and G. Hamarneh, “Spatial normalization of human back images for dermatological

studies,” IEEE JBHI, pp. 1–8, 2013.