
A Unified Regularized Group PLS Algorithm Scalable to Big Data

Pierre Lafaye de Micheaux (1), Benoit Liquet (2), Matthew Sutton (3)

(1) CREST, ENSAI.
(2) Université de Pau et des Pays de l'Adour, LMAP.
(3) Queensland University of Technology, Brisbane, Australia.

Big Data PLS Methods, JSTAR 2016, Rennes

Contents

1. Motivation: integrative analysis for grouped data
2. Application to an HIV vaccine study
3. PLS approaches: SVD, PLS-W2A, canonical, regression
4. Sparse models: lasso penalty, group penalty, group and sparse group PLS
5. R package: sgPLS
6. Regularized PLS scalable to big data
7. Concluding remarks

Integrative Analysis

Wikipedia. Data integration "involves combining data residing in different sources and providing users with a unified view of these data. This process becomes significant in a variety of situations, which include both commercial and scientific domains."

Systems biology. Integrative analysis: the analysis of heterogeneous types of data produced by inter-platform technologies.

Goal. Combining multiple types of data can:
- contribute to a better understanding of biological mechanisms;
- potentially improve the diagnosis and treatment of complex diseases.

Example: Data definition

Two data matrices measured on the same n observations:
- X: n × p (n observations, p variables)
- Y: n × q (n observations, q variables)

- "Omics." Y matrix: gene expression; X matrix: SNPs (single nucleotide polymorphisms). Many other types exist, such as proteomic and metabolomic data.
- "Neuroimaging." Y matrix: behavioral variables; X matrix: brain activity (e.g., EEG, fMRI, NIRS).
- "Neuroimaging Genetics." Y matrix: DTI (Diffusion Tensor Imaging); X matrix: SNPs.


Data: Constraints and Aims

- Main constraint: collinearity among the variables, or situations where p > n or q > n. But p and q are assumed not to be too large.
- Two aims:
  1. Symmetric situation. Analyze the association between two blocks of information; the analysis focuses on shared information.
  2. Asymmetric situation. X matrix = predictors, Y matrix = response variables; the analysis focuses on prediction.
- The Partial Least Squares family consists of dimension-reduction approaches.
- PLS finds pairs of latent vectors ξ = Xu, ω = Yv with maximal covariance, e.g., ξ = u1 × SNP1 + u2 × SNP2 + · · · + up × SNPp.
- It covers both the symmetric and the asymmetric situation.
- It decomposes the matrices X and Y into successive latent variables.

Latent variables are not directly observed but are inferred (through a mathematical model) from other variables that are observed (directly measured). They capture an underlying phenomenon (e.g., health).


PLS and sparse PLS

Classical PLS
- Output of PLS: H pairs of latent variables (ξh, ωh), h = 1, ..., H.
- A dimension-reduction method (H << min(p, q)), but with no variable selection for extracting the most relevant (original) variables from each latent variable.

sparse PLS
- sparse PLS selects the relevant SNPs.
- Some coefficients u_l are set exactly to 0, e.g., with u2 = u3 = 0:
  ξh = u1 × SNP1 + 0 × SNP2 + 0 × SNP3 + · · · + up × SNPp
- The sPLS components are linear combinations of the selected variables only.


Group structures within the data

- Natural example: categorical variables form a group of dummy variables in a regression setting.
- Genomics: genes within the same pathway have similar functions and act together in regulating a biological system. These genes can add up to produce a larger effect, and can therefore be detected as a group (i.e., at a pathway or gene set/module level).

We consider that the variables are divided into groups (see the encoding sketch after the examples):

- Example: p SNPs grouped into K genes (Xj = SNPj):
  X = [ SNP1, ..., SNPk (gene 1) | SNPk+1, SNPk+2, ..., SNPh (gene 2) | ... | SNPl+1, ..., SNPp (gene K) ]
- Example: p genes grouped into K pathways/modules (Xj = genej):
  X = [ X1, X2, ..., Xk (M1) | Xk+1, Xk+2, ..., Xh (M2) | ... | Xl+1, Xl+2, ..., Xp (MK) ]
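As a concrete illustration, here is one way such a group structure can be encoded in R. The block-boundary convention (cumulative indices of the last variable of each block, final block omitted) follows the ind.block.x argument of the sgPLS package used later in this talk; the group sizes below are made up for the example.

## Hypothetical example: p = 12 SNPs split into K = 3 genes
## of sizes 5, 4 and 3 (sizes chosen arbitrarily).
group.sizes <- c(5, 4, 3)

## Block boundaries as used by sgPLS: index of the last variable
## of each block, omitting the final block.
ind.block.x <- cumsum(group.sizes)[-length(group.sizes)]
ind.block.x   # 5 9  -> gene 1 = variables 1:5, gene 2 = 6:9, gene 3 = 10:12

## A membership vector is often handier for computations:
group.id <- rep(seq_along(group.sizes), times = group.sizes)
group.id      # 1 1 1 1 1 2 2 2 2 3 3 3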


Group PLS

Aim: select groups of variables while taking into account the data structure.

- PLS components:
  ξh = u1 × X1 + u2 × X2 + u3 × X3 + · · · + up × Xp
- sparse PLS components (sPLS): individual coefficients may vanish, e.g.
  ξh = u1 × X1 + 0 × X2 + 0 × X3 + · · · + up × Xp   (u2 = u3 = 0)
- group PLS components (gPLS): coefficients vanish by whole modules, e.g., with modules 1 and K dropped and module 2 fully retained:
  ξh = (0 × X1 + 0 × X2) + (u3 × X3 + u4 × X4 + u5 × X5) + · · · + (0 × Xp−1 + 0 × Xp),
  with u3, u4, u5 all nonzero.

→ gPLS selects groups of variables: either all the variables within a group are selected, or none of them are.
... but it does not achieve sparsity within each group ...


Sparse Group PLS

Aim: combine sparsity between groups and within each group.
Example: X matrix = genes. We might be interested in identifying particularly important genes within pathways of interest.

- sparse PLS components (sPLS):
  ξh = u1 × X1 + 0 × X2 + 0 × X3 + · · · + up × Xp   (u2 = u3 = 0)
- group PLS components (gPLS): modules 1 and K dropped, module 2 fully retained:
  ξh = (0 × X1 + 0 × X2) + (u3 × X3 + u4 × X4 + u5 × X5) + · · · + (0 × Xp−1 + 0 × Xp)
- sparse group PLS components (sgPLS): modules 1 and K dropped, and within the retained module 2 only X3 is kept:
  ξh = (0 × X1 + 0 × X2) + (u3 × X3 + 0 × X4 + 0 × X5) + · · · + (0 × Xp−1 + 0 × Xp)


Aims in a regression setting

- Select groups of variables while taking into account the data structure: either all the variables within a group are selected, or none of them are.
- Combine sparsity between groups and within each group: only the relevant variables within a selected group are retained.

Illustration: Dendritic Cells in Addition to Antiretroviral Treatment (DALIA) trial

- Evaluation of the safety and immunogenicity of a vaccine in n = 19 HIV-1 infected patients.
- The vaccine was injected at weeks 0, 4, 8 and 12 while patients received antiretroviral therapy. The antiretrovirals were interrupted at week 24.
- After vaccination, an in-depth evaluation of the immune response was performed at week 16.
- Repeated measurements of the main immune markers and of gene expression were taken every 4 weeks until the end of the trial.

DALIA trial: Question

First results were obtained using groups of genes:
- Significant changes in gene expression among 69 modules over time, before antiretroviral treatment interruption.
- How does the gene abundance of these 69 modules, as measured at week 16, correlate with the immune markers measured at week 16?


sPLS, gPLS and sgPLS

- Response variables Y = immune markers, composed of q = 7 cytokines (IL21, IL2, IL13, IFNg, Luminex score, TH1 score, CD4).
- Predictor variables X = expression of p = 5399 genes extracted from the 69 modules.
- gPLS and sgPLS use the structure of the data (the modules); each gene belongs to one of the 69 modules.
- Asymmetric situation.

Results: Modules and number of genes selected

[Figure: p = 5399 genes; 24 modules selected by gPLS or sgPLS across the 3 scores.]


Results: Venn diagram

- sgPLS selects slightly more genes than sPLS (487 and 420 genes selected, respectively).
- But sgPLS selects fewer modules than sPLS (21 and 64 groups of genes selected, respectively).
- Note: all 21 groups of genes selected by sgPLS were included in those selected by sPLS.
- sgPLS selects slightly more modules than gPLS (4 more, with 14/21 in common).
- However, gPLS selects more genes (944) than sgPLS.
- In this application, the sgPLS approach led to a parsimonious selection of modules and genes that appear biologically very relevant.

Chaussabel's functional modules: http://www.biir.net/public wikis/module annotation/V2 Trial 8 Modules


Stability of the variable selection (100 bootstrap samples)

[Figure: for each procedure (sgPLS, sPLS, gPLS; component 1), a bar plot of the module selection frequency (y-axis: frequency, 0 to 1; x-axis: modules) over 100 bootstrap samples, with modules colored by annotation: Apoptosis/Survival, Cell Cycle, Cell Death, Cytotoxic/NK Cell, Erythrocytes, Inflammation, Mitochondrial Respiration, Mitochondrial Stress/Proteasome, Monocytes, Neutrophils, Plasma Cells, Platelets, Protein Synthesis, T cells, Undetermined.]

Stability of the variable selection assessed on 100 bootstrap samples of the DALIA-1 trial data, for the gPLS, sgPLS and sPLS procedures respectively. For each procedure, the modules selected on the original sample are separated from those that were not.

Now some mathematics ...

PLS family

PLS = Partial Least Squares, or Projection to Latent Structures.

Four main methods coexist in the literature:
(i) Partial Least Squares Correlation (PLSC), also called PLS-SVD;
(ii) PLS in mode A (PLS-W2A, for Wold's Two-Block, Mode A PLS);
(iii) PLS in mode B (PLS-W2B), also called Canonical Correlation Analysis (CCA);
(iv) Partial Least Squares Regression (PLSR, or PLS2).

- (i), (ii) and (iii) are symmetric, while (iv) is asymmetric.
- They optimise different objective functions.
- Good news: all of them rely on the singular value decomposition (SVD).


Singular Value Decomposition (SVD)

Definition 1. Let M be a p × q matrix of rank r:

  M = U Δ V^T = Σ_{l=1}^{r} δ_l u_l v_l^T,   (1)

where
- U = (u_l): p × p and V = (v_l): q × q are two orthogonal matrices containing the normalised left (resp. right) singular vectors;
- Δ = diag(δ1, ..., δr, 0, ..., 0) holds the ordered singular values δ1 > δ2 > · · · > δr > 0.

Note: fast and efficient algorithms exist to compute the SVD.
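A quick numerical illustration in R (toy matrix made up for the example; base R's svd() is one of the fast implementations alluded to above):

set.seed(1)
M <- matrix(rnorm(5 * 3), nrow = 5, ncol = 3)   # a toy 5 x 3 matrix

s <- svd(M)            # s$u: left singular vectors, s$d: singular values, s$v: right
r <- sum(s$d > 1e-12)  # numerical rank

## Check the decomposition M = U diag(d) V^T
max(abs(M - s$u %*% diag(s$d) %*% t(s$v)))  # ~ 0 (machine precision)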

Connexion between SVD and maximum covariance

We were able to express the optimization problem of all four PLS methods as

  (u*, v*) = argmax_{‖u‖2 = ‖v‖2 = 1} Cov(X_{h−1} u, Y_{h−1} v),   h = 1, ..., H,

where the matrices X_h and Y_h are obtained recursively from X_{h−1} and Y_{h−1}.

The four methods differ only in the deflation process, which is chosen so that the above scores or weight vectors satisfy given constraints.

The solution at step h is obtained by computing only the first triplet (δ1, u1, v1) of singular elements of the SVD of M_{h−1} = X^T_{h−1} Y_{h−1}:

  (u*, v*) = (u1, v1).

Why is this useful?
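A minimal R sketch of this step, assuming centered data matrices X and Y (simulated here): the first PLS weight pair is read directly off the SVD of M = t(X) %*% Y.

set.seed(2)
n <- 50; p <- 8; q <- 4
X <- scale(matrix(rnorm(n * p), n, p), center = TRUE, scale = FALSE)
Y <- scale(matrix(rnorm(n * q), n, q), center = TRUE, scale = FALSE)

M <- crossprod(X, Y)            # M = t(X) %*% Y, a p x q matrix
s <- svd(M, nu = 1, nv = 1)
u1 <- s$u[, 1]; v1 <- s$v[, 1]  # first pair of singular vectors

xi    <- X %*% u1   # latent score for X
omega <- Y %*% v1   # latent score for Y
cov(xi, omega)      # maximal among all unit-norm weight pairs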


SVD properties

Theorem 2 (Eckart–Young, 1936). The (truncated) SVD of a given matrix M (of rank r) provides the best reconstitution (in a least-squares sense) of M by a matrix of lower rank k:

  min_{A of rank k} ‖M − A‖²_F = ‖M − Σ_{ℓ=1}^{k} δ_ℓ u_ℓ v_ℓ^T‖²_F = Σ_{ℓ=k+1}^{r} δ²_ℓ.

If the minimum is sought over matrices A of rank 1, which are of the form u v^T where u, v are non-zero vectors, we obtain

  min_{u,v} ‖M − u v^T‖²_F = Σ_{ℓ=2}^{r} δ²_ℓ = ‖M − δ1 u1 v1^T‖²_F.
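This is easy to verify numerically in R (another toy matrix): the residual of the best rank-1 approximation matches the sum of the squared trailing singular values.

set.seed(3)
M <- matrix(rnorm(6 * 4), 6, 4)
s <- svd(M)

M1 <- s$d[1] * s$u[, 1] %*% t(s$v[, 1])   # best rank-1 approximation
sum((M - M1)^2)                           # ||M - M1||_F^2
sum(s$d[-1]^2)                            # sum of delta_l^2 for l >= 2: identical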

SVD properties

Thus, solving

  argmin_{u,v} ‖M_{h−1} − u v^T‖²_F   (2)

and norming the resulting vectors gives us u1 and v1. This is another way to solve the PLS optimization problem.

Towards sparse PLS

Shen and Huang (2008) connected (2) (in a PCA context) to least-squares minimisation in regression:

  ‖M_{h−1} − u v^T‖²_F = ‖vec(M_{h−1}) − (I_q ⊗ u) v‖²_2 = ‖vec(M_{h−1}) − (v ⊗ I_p) u‖²_2,

where in each case vec(M_{h−1}) plays the role of the response y and the second term plays the role of Xβ.

→ This makes it possible to use many existing variable-selection techniques based on regularization penalties.

We propose iterative alternating algorithms to find normed vectors u/‖u‖ and v/‖v‖ that minimise the following penalised sum-of-squares criterion

  ‖M_{h−1} − u v^T‖²_F + Pλ(u, v),

for various penalization terms Pλ(u, v).

→ We obtain several sparse versions (in terms of the weights u and v) of the four methods (i)–(iv).


Sparse PLS models

For cases (i)–(iv):

- Aim: obtain sparse weight vectors u_h and v_h.
- Associated component scores (i.e., latent variables): ξ_h := X_{h−1} u_h and ω_h := Y_{h−1} v_h, h = 1, ..., H, for a small number of components.
- The recursive procedure, with an objective function involving X_{h−1} and Y_{h−1}, yields a decomposition (approximation) of the original matrices X and Y:

  X = Ξ_H C^T_H + F_{X,H},   Y = Ω_H D^T_H + F_{Y,H},   (3)

  where Ξ = (ξ_h) and Ω = (ω_h).
- For the regression mode, we have the multivariate linear regression model

  Y = X B_PLS + E,

  with B_PLS = U_H (C^T_H U_H)^{−1} D^T_H, where E is a matrix of residuals.

Example, case (ii): PLS-W2A

Definition 3. The objective function at step h is

  (u_h, v_h) = argmax_{‖u‖2 = ‖v‖2 = 1} Cov(X_{h−1} u, Y_{h−1} v),

subject to the constraints

  Cov(ξ_h, ξ_j) = Cov(ω_h, ω_j) = 0,   1 ≤ j < h.

To satisfy these constraints, deflation projects onto the orthogonal complements of the scores:

  X_h = P_{ξ_h⊥} X_{h−1} and Y_h = P_{ω_h⊥} Y_{h−1},   (X_0 = X, Y_0 = Y),

where ξ_h = X_{h−1} u_h and ω_h = Y_{h−1} v_h are built from the first left (resp. right) singular vector obtained by applying an SVD to M_{h−1} := X^T_{h−1} Y_{h−1}, h = 1, ..., H.
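In R, the projection deflation P_{ξ⊥} X = X − ξ (ξ^T X)/(ξ^T ξ) is a one-liner; a small sketch, reusing the toy X, Y, xi, omega from the earlier snippet:

deflate <- function(A, score) {
  ## Project the columns of A onto the orthogonal complement of `score`:
  ## A - score (score^T A) / (score^T score)
  A - score %*% (crossprod(score, A) / sum(score^2))
}

X1 <- deflate(X, xi)         # PLS-W2A deflation of X by xi
Y1 <- deflate(Y, omega)      # ... and of Y by omega
max(abs(crossprod(X1, xi)))  # columns of X1 are now orthogonal to xi (~0)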

Regression mode (iv): PLSR, PLS2

- The aim of this asymmetric model is prediction.
- PLS2 finds latent variables that model X and simultaneously predict Y.
- The only difference with PLS-W2A is the deflation step: both matrices are deflated with respect to the X-score,

  X_h = P_{ξ_h⊥} X_{h−1} and Y_h = P_{ξ_h⊥} Y_{h−1}.

The algorithm

Main steps of the iterative algorithm (an R sketch follows):

1. Set X_0 = X, Y_0 = Y, h = 1.
2. Compute M_{h−1} := X^T_{h−1} Y_{h−1}.
3. SVD: extract the first pair of singular vectors u_h and v_h.
4. Sparsity step: produce sparse weights u_sparse and v_sparse.
5. Latent variables: ξ_h = X_{h−1} u_sparse and ω_h = Y_{h−1} v_sparse.
6. Slope coefficients:
   - c_h = X^T_{h−1} ξ_h / (ξ^T_h ξ_h) for both modes;
   - d_h = Y^T_{h−1} ξ_h / (ξ^T_h ξ_h) for "PLSR regression mode";
   - e_h = Y^T_{h−1} ω_h / (ω^T_h ω_h) for "PLS mode A".
7. Deflation:
   - X_h = X_{h−1} − ξ_h c^T_h for both modes;
   - Y_h = Y_{h−1} − ξ_h d^T_h for "PLSR regression mode";
   - Y_h = Y_{h−1} − ω_h e^T_h for "PLS mode A".
8. If h = H, stop; else set h = h + 1 and go to step 2.
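A compact R sketch of these steps, under simplifying assumptions: simulated centered data, lasso-style soft-thresholding as the sparsity step (a single pass of the alternating update, for brevity), and regression-mode deflation. The thresholds lambda.u and lambda.v are illustrative tuning parameters, not values recommended in the talk.

gsoft <- function(x, lambda) sign(x) * pmax(abs(x) - lambda, 0)

sparse_pls <- function(X, Y, H = 2, lambda.u = 0.1, lambda.v = 0.1) {
  Xh <- scale(X, scale = FALSE); Yh <- scale(Y, scale = FALSE)
  U <- V <- Xi <- NULL
  for (h in seq_len(H)) {
    M <- crossprod(Xh, Yh)                 # step 2: M = t(X) %*% Y
    s <- svd(M, nu = 1, nv = 1)            # step 3: first singular pair
    u <- gsoft(M %*% s$v[, 1], lambda.u)   # step 4: sparsify u ...
    u <- u / sqrt(sum(u^2))                # (assumes not all coefficients were zeroed)
    v <- gsoft(crossprod(M, u), lambda.v)  # ... and v
    v <- v / sqrt(sum(v^2))
    xi <- Xh %*% u                         # step 5: latent variable
    ch <- crossprod(Xh, xi) / sum(xi^2)    # step 6: slope coefficients
    dh <- crossprod(Yh, xi) / sum(xi^2)
    Xh <- Xh - xi %*% t(ch)                # step 7: deflation (regression mode)
    Yh <- Yh - xi %*% t(dh)
    U <- cbind(U, u); V <- cbind(V, v); Xi <- cbind(Xi, xi)
  }
  list(U = U, V = V, scores = Xi)
}

## Usage on toy data:
set.seed(4)
fit <- sparse_pls(matrix(rnorm(40 * 10), 40), matrix(rnorm(40 * 3), 40), H = 2)
round(fit$U, 2)   # sparse X-weights, one column per component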

Introducing sparsity

"Sparsity" means that a vector or matrix contains many zeros.

[Figure: illustration of sparse vectors and matrices. Credits: Jun Liu, Shuiwang Ji, and Jieping Ye.]

Introducing sparsity

Let θ denote the model parameters to be estimated. A commonly employed estimation method is

  min_θ [ loss(θ) + λ penalty(θ) ].

This is equivalent to the constrained formulation

  min_θ loss(θ) subject to penalty(θ) ≤ z (for some z).

Example: loss(θ) = 0.5 ‖θ − v‖²_2 for some fixed vector v.

Why does L1 induce sparsity?

Analysis in 1D (comparison with L2). Minimising

  0.5 (θ − v)² + λ|θ|   (nondifferentiable at 0)

gives the soft-thresholding solution:
- if v > λ, then θ = v − λ;
- if v ≤ −λ, then θ = v + λ;
- else θ = 0 (sparsity!).

Minimising

  0.5 (θ − v)² + λθ²   (differentiable at 0)

gives θ = v / (1 + 2λ): no sparsity here.
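These two closed forms are easy to compare in R (values of v and λ made up for the demonstration):

soft  <- function(v, lambda) sign(v) * pmax(abs(v) - lambda, 0)  # L1 solution
ridge <- function(v, lambda) v / (1 + 2 * lambda)                # L2 solution

v <- seq(-2, 2, by = 0.5)
soft(v, lambda = 1)    # exact zeros for |v| <= 1: sparsity
ridge(v, lambda = 1)   # shrunk towards 0, but never exactly 0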

Why does L1 induce sparsity?

Understanding from the projection viewpoint.

[Figure omitted.]

Why does L1 induce sparsity?

Understanding from constrained optimization.

[Figure omitted.]

sparse PLS (sPLS)

In sPLS, the optimisation problem to solve is

  min_{u_h, v_h} ‖M_h − u_h v^T_h‖²_F + P_{λ1,h}(u_h) + P_{λ2,h}(v_h),

where
- ‖M_h − u_h v^T_h‖²_F = Σ_{i=1}^{p} Σ_{j=1}^{q} (m_ij − u_ih v_jh)²,
- M_h = X^T_h Y_h at each iteration h,
- P_{λ1,h}(u_h) = Σ_{i=1}^{p} 2 λ1,h |u_i| and P_{λ2,h}(v_h) = Σ_{j=1}^{q} 2 λ2,h |v_j|.

Iterative solution: apply the soft-thresholding function gsoft(x, λ) = sign(x)(|x| − λ)+
- componentwise to the vector M v_h to get u_h;
- componentwise to the vector M^T u_h to get v_h.
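With the gsoft function defined earlier, one alternating sPLS update reads as follows in R (a sketch, reusing the toy M from the SVD snippet; the thresholds are arbitrary and assumed small enough that not all coefficients are zeroed):

## One alternating update of the sPLS weights for a given M = t(X) %*% Y.
sPLS_update <- function(M, u, v, lambda1, lambda2) {
  u <- gsoft(M %*% v, lambda1);         u <- u / sqrt(sum(u^2))
  v <- gsoft(crossprod(M, u), lambda2); v <- v / sqrt(sum(v^2))
  list(u = u, v = v)
}

## Iterate until the weights stabilise:
s <- svd(M, nu = 1, nv = 1); u <- s$u[, 1]; v <- s$v[, 1]
for (it in 1:50) {
  new <- sPLS_update(M, u, v, lambda1 = 0.1, lambda2 = 0.1)
  if (max(abs(new$u - u), abs(new$v - v)) < 1e-8) break
  u <- new$u; v <- new$v
}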


group PLS (gPLS)

- X and Y can be divided into K and L sub-matrices (groups) X^(k): n × p_k and Y^(l): n × q_l, respectively.
- Following the same idea as Yuan and Lin (2006), we use group lasso penalties:

  P_{λ1}(u) = λ1 Σ_{k=1}^{K} √p_k ‖u^(k)‖2 and P_{λ2}(v) = λ2 Σ_{l=1}^{L} √q_l ‖v^(l)‖2,

  where u^(k) (resp. v^(l)) is the weight sub-vector associated with the k-th (resp. l-th) block.

In gPLS, the optimisation problem to solve is

  min Σ_{k=1}^{K} Σ_{l=1}^{L} ‖M^(k,l) − u^(k) v^(l)T‖²_F + P_{λ1}(u) + P_{λ2}(v),

with M^(k,l) = X^(k)T Y^(l).

Remark: if the k-th block consists of a single variable, then ‖u^(k)‖2 = √(u^(k))² = |u^(k)|, so the group penalty reduces to the lasso penalty.

group PLS (gPLS)

The previous objective function can be written as

  Σ_{k=1}^{K} { ‖M^(k,·) − u^(k) v^T‖²_F + λ1 √p_k ‖u^(k)‖2 } + P_{λ2}(v),

where M^(k,·) = X^(k)T Y. For fixed v, we can therefore optimise over the groupwise components of u separately. The first term above expands as

  trace[M^(k,·) M^(k,·)T] − 2 trace[u^(k) v^T M^(k,·)T] + trace[u^(k) u^(k)T],

using ‖v‖2 = 1. The optimal u^(k) thus minimises

  trace[u^(k) u^(k)T] − 2 trace[u^(k) v^T M^(k,·)T] + λ1 √p_k ‖u^(k)‖2.

This objective function is convex, so the optimal solution is characterized by the subgradient equations (subdifferential equal to 0).

Subdifferential

Subderivatives, subgradients, and subdifferentials generalize the derivative to functions that are not differentiable everywhere (e.g., |x| is nondifferentiable at 0). The subdifferential of a function is set-valued.

[Figure: in blue, a convex function, nondifferentiable at x0; the slope of each red line is a subderivative at x0.] The set [a, b] of all subderivatives is called the subdifferential of the function f at x0. If f is convex and its subdifferential at x0 contains exactly one subderivative, then f is differentiable at x0.

The endpoints of the subdifferential are the one-sided derivatives:

  a = lim_{x → x0−} [f(x) − f(x0)] / (x − x0)   and   b = lim_{x → x0+} [f(x) − f(x0)] / (x − x0).

Example: consider the convex function f(x) = |x|. The subdifferential at the origin is the interval [a, b] = [−1, 1]. The subdifferential at any point x0 < 0 is the singleton {−1}, while the subdifferential at any point x0 > 0 is the singleton {1}.

For group k, u^(k) must make the subdifferential vanish:

  −2 u^(k) + 2 M^(k,·) v = λ1 √p_k θ,   (4)

where θ is a subgradient of ‖u^(k)‖2 evaluated at u^(k). So

  θ = u^(k) / ‖u^(k)‖2 if u^(k) ≠ 0;   θ ∈ {θ : ‖θ‖2 ≤ 1} if u^(k) = 0.

We can see that the subgradient equations (4) are satisfied with u^(k) = 0 if

  ‖M^(k,·) v‖2 ≤ 2^{−1} λ1 √p_k.   (5)

For u^(k) ≠ 0, equation (4) gives

  −2 u^(k) + 2 M^(k,·) v = λ1 √p_k u^(k) / ‖u^(k)‖2.   (6)

Combining equations (5) and (6), we find

  u^(k) = ( 1 − (λ1/2) √p_k / ‖M^(k,·) v‖2 )+ M^(k,·) v,   k = 1, ..., K,   (7)

where (a)+ = max(a, 0).

In the same vein, optimisation over v for fixed u is also carried out over groupwise components:

  v^(l) = ( 1 − (λ2/2) √q_l / ‖M^(·,l)T u‖2 )+ M^(·,l)T u,   l = 1, ..., L.   (8)

We thus obtain the following theorem.

group PLS (gPLS)

Theorem 4. The solution of the group PLS optimisation problem is given by

  u^(k) = ( 1 − (λ1/2) √p_k / ‖M^(k,·) v‖2 )+ M^(k,·) v   (for fixed v)

and

  v^(l) = ( 1 − (λ2/2) √q_l / ‖M^(·,l)T u‖2 )+ M^(·,l)T u   (for fixed u).

Note: we iterate until convergence of u^(k) and v^(l), alternately applying one of the above formulas. An R sketch of the u-update follows.
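A sketch of the u-update of Theorem 4 in R, under stated assumptions: group.id is a membership vector as built earlier, lambda1 is an arbitrary illustrative value, and the v-update is obtained symmetrically by applying the same function to t(M).

## Groupwise soft-thresholding update for u (Theorem 4), v held fixed.
## M: p x q matrix t(X) %*% Y; group.id: length-p vector of group labels.
gPLS_update_u <- function(M, v, group.id, lambda1) {
  Mv <- M %*% v
  u  <- numeric(nrow(M))
  for (k in unique(group.id)) {
    idx    <- which(group.id == k)
    pk     <- length(idx)
    shrink <- 1 - (lambda1 / 2) * sqrt(pk) / sqrt(sum(Mv[idx]^2))
    u[idx] <- max(shrink, 0) * Mv[idx]   # whole group set to 0 if shrink <= 0
  }
  u / sqrt(sum(u^2))                     # normalise (assumes u != 0)
}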

sparse group PLS: sparsity within groups

Following Simon et al. (2013), we introduce sparse group lasso penalties:

  P_{λ1}(u) = (1 − α1) λ1 Σ_{k=1}^{K} √p_k ‖u^(k)‖2 + α1 λ1 ‖u‖1,
  P_{λ2}(v) = (1 − α2) λ2 Σ_{l=1}^{L} √q_l ‖v^(l)‖2 + α2 λ2 ‖v‖1.

Page 66: A Unified Regularized Group PLS Algorithm Scalable to Big Data · Big Data PLS MethodsJSTAR 2016, Rennes 7/54. Group PLS Aim:select groups of variables taking into account the data

sparse group PLS (sgPLS)

Theorem 5. The solution of the sparse group PLS optimisation problem is given by: u^(k) = 0 if

$$\left\|\mathrm{gsoft}\!\left(M^{(k,\cdot)}v,\ \lambda_1\alpha_1/2\right)\right\|_2 \le \lambda_1(1-\alpha_1)\sqrt{p_k},$$

otherwise

$$u^{(k)} = \frac{1}{2}\left(\mathrm{gsoft}\!\left(M^{(k,\cdot)}v, \lambda_1\alpha_1/2\right) - \lambda_1(1-\alpha_1)\sqrt{p_k}\,\frac{\mathrm{gsoft}\!\left(M^{(k,\cdot)}v, \lambda_1\alpha_1/2\right)}{\left\|\mathrm{gsoft}\!\left(M^{(k,\cdot)}v, \lambda_1\alpha_1/2\right)\right\|_2}\right).$$

We have v^(l) = 0 if

$$\left\|\mathrm{gsoft}\!\left(M^{(\cdot,l)T}u,\ \lambda_2\alpha_2/2\right)\right\|_2 \le \lambda_2(1-\alpha_2)\sqrt{q_l},$$

and otherwise

$$v^{(l)} = \frac{1}{2}\left(\mathrm{gsoft}\!\left(M^{(\cdot,l)T}u, \lambda_2\alpha_2/2\right) - \lambda_2(1-\alpha_2)\sqrt{q_l}\,\frac{\mathrm{gsoft}\!\left(M^{(\cdot,l)T}u, \lambda_2\alpha_2/2\right)}{\left\|\mathrm{gsoft}\!\left(M^{(\cdot,l)T}u, \lambda_2\alpha_2/2\right)\right\|_2}\right).$$

Here gsoft(x, λ) = sign(x)(|x| − λ)_+ denotes the coordinate-wise soft-thresholding operator.

Similar proof (see our paper in Bioinformatics, 2016).
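As a companion to Theorem 5, a minimal R sketch of the u-update for one group (gsoft and sgpls_update_uk are our own names; the sgPLS package implements this internally):

# Coordinate-wise soft thresholding: gsoft(x, lambda) = sign(x) (|x| - lambda)_+
gsoft <- function(x, lambda) sign(x) * pmax(abs(x) - lambda, 0)

# Sketch of the sgPLS u-update of Theorem 5 for one group k;
# Mv_k stands for the subvector M^(k,.) v.
sgpls_update_uk <- function(Mv_k, lambda1, alpha1) {
  pk <- length(Mv_k)
  g  <- gsoft(Mv_k, lambda1 * alpha1 / 2)
  ng <- sqrt(sum(g^2))
  if (ng <= lambda1 * (1 - alpha1) * sqrt(pk)) {
    numeric(pk)                            # the whole group is zeroed out
  } else {
    0.5 * (g - lambda1 * (1 - alpha1) * sqrt(pk) * g / ng)
  }
}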


R package: sgPLS

I The sgPLS package implements the sPLS, gPLS and sgPLS methods: http://cran.r-project.org/web/packages/sgPLS/index.html

I Includes functions for choosing the tuning parameters related to the predictor matrix for the different sparse PLS models (regression mode).

I Some simple code to fit an sgPLS model:

model.sgPLS <- sgPLS(X, Y, ncomp = 2, mode = "regression",
                     keepX = c(4, 4), keepY = c(4, 4),
                     ind.block.x = ind.block.x,
                     ind.block.y = ind.block.y,
                     alpha.x = c(0.5, 0.5),
                     alpha.y = c(0.5, 0.5))

I The latest version also includes sparse group Discriminant Analysis.
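The grouping arguments ind.block.x and ind.block.y encode the block structure of the variables: following the package documentation, they give the positions of the last variable of each block except the final one. For instance, assuming (made-up sizes) 400 X-variables arranged in 4 consecutive blocks of 100:

# Blocks end at columns 100, 200 and 300; the remaining columns form block 4.
ind.block.x <- seq(100, 300, 100)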


Regularized PLS scalable for BIG-DATA

What happens in a MASSIVE DATA SET context?

Massive datasets. The size of the data is large and analysing it takesa significant amount of time and computer memory.

Emerson & Kane (2012): a dataset is considered large if it exceeds 20% of the RAM (Random Access Memory) on a given machine, and massive if it exceeds 50%.


Case of a large number of observations: two massive data sets, X (an n × p matrix) and Y (an n × q matrix).

We suppose here that n is very large, but not p nor q.

The PLS algorithm is mainly based on the SVD of M_{h−1} = X_{h−1}^T Y_{h−1}, which is only a p × q matrix!

This matrix fits into memory, but X and Y do not. For instance, with p = 400 and q = 500, M requires only 400 × 500 × 8 bytes ≈ 1.6 MB in double precision, regardless of n.


Computation of M = X^T Y by chunks

$$M = X^{T}Y = \sum_{g=1}^{G} X_{(g)}^{T} Y_{(g)}$$

All terms fit (successively) into memory!
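A minimal base-R check of this identity, with made-up sizes:

# The sum of per-chunk cross-products equals the full X^T Y.
set.seed(1)
n <- 10000
X <- matrix(rnorm(n * 5), n, 5)
Y <- matrix(rnorm(n * 3), n, 3)
chunks <- split(seq_len(n), rep(1:10, each = n / 10))   # G = 10 row blocks
M <- Reduce(`+`, lapply(chunks, function(idx) crossprod(X[idx, ], Y[idx, ])))
all.equal(M, crossprod(X, Y))                           # TRUE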


Computation of M = X^T Y by chunks using R

I No need to load the big matrices X and Y.
I Use memory-mapped files (called "filebacking") through the bigmemory package to allow matrices to exceed the RAM size.
I A big.matrix object is created, which supports the use of shared memory for efficiency in parallel computing.
I foreach: package for running the computation of M by chunks in parallel (see the sketch below).

Regularized PLS algorithm:
I Computation of the components ("scores"): Xu (n × 1) and Yv (n × 1).
I Easy to compute by chunks and store in a big.matrix object.
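A hedged sketch of this pipeline (the file names, the number of chunks and the doParallel backend are our choices for illustration; with a PSOCK cluster, the big.matrix objects would need to be re-attached on each worker from their descriptor files):

library(bigmemory)
library(foreach)
library(doParallel)

# Assumption: X and Y were created earlier as file-backed big.matrix objects;
# here we attach them through their descriptor files.
X <- attach.big.matrix("X.desc")
Y <- attach.big.matrix("Y.desc")

registerDoParallel(cores = 4)
n <- nrow(X)
chunks <- split(seq_len(n), cut(seq_len(n), 10, labels = FALSE))  # G = 10 row chunks

# Each iteration loads a single chunk of rows into RAM and returns one p x q term.
M <- foreach(idx = chunks, .combine = `+`) %dopar% {
  crossprod(X[idx, ], Y[idx, ])
}

# The scores Xu (n x 1) are filled chunk by chunk into a file-backed big.matrix:
u  <- rnorm(ncol(X))  # placeholder weight vector, for illustration only
xi <- big.matrix(n, 1, backingfile = "xi.bin", descriptorfile = "xi.desc")
for (idx in chunks) xi[idx, 1] <- X[idx, ] %*% u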


Illustration of group PLS with Big-Data

I Simulated data: X (5 GB) and Y (5 GB);
I n = 560,000 observations, p = 400 and q = 500;
I X and Y are linked by two latent variables, made up of sparse linear combinations of the original variables;
I both X and Y have a group structure: 20 groups of 20 variables for X, and 25 groups of 20 variables for Y;
I only 4 groups in each data set are relevant, and within each of these groups 5 variables are irrelevant.


Figure 1: Comparison of gPLS and BIG-gPLS (for small n = 1,000).


Figure 2: Use of BIG-gPLS. Left: small n. Right: large n. Blue: truth. Red: recovered.


Regularised PLS Discriminant Analysis

In the PLS algorithms, a categorical response variable is first recoded as a dummy (indicator) matrix, as illustrated below.
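For example, with base R (the class labels are made up):

# A categorical response recoded as an indicator ("dummy") matrix:
y <- factor(c("healthy", "sick", "sick", "healthy"))
Y <- model.matrix(~ y - 1)   # n rows, one indicator column per class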


Concluding Remarks and Take Home Message

We were able to derive a simple unified algorithm that performs the standard, sparse, group and sparse group versions of the four classical PLS algorithms (i)–(iv) (and also PLS-DA).

We used bigmemory objects and a simple trick that makes our procedure scalable to big data (large n).

We also parallelized the code for faster computation.

This will soon be made available in our new R package: bigsgPLS.


References

I Yuan M. and Lin Y. (2006). Model Selection and Estimation in Regression with Grouped Variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1), 49–67.

I Simon N., Friedman J., Hastie T. and Tibshirani R. (2013). A Sparse-Group Lasso. Journal of Computational and Graphical Statistics, 22(2), 231–245.

I Liquet B., Lafaye de Micheaux P., Hejblum B. and Thiebaut R. (2016). Group and Sparse Group Partial Least Square Approaches Applied in Genomics Context. Bioinformatics, 32(1), 35–42.

I Lafaye de Micheaux P., Liquet B. and Sutton M. A Unified Parallel Algorithm for Regularized Group PLS Scalable to Big Data (in progress).

Thank you! Questions?
