
Chapter 5

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)

5.1 Basics of SVD

5.1.1 Review of Key Concepts

We review some key definitions and results about matrices that will be used in this section.

• The transpose of a matrix A, denoted $A^T$, is the matrix obtained from A by switching its rows and columns. In other words, if $A = (a_{ij})$ then $A^T = (a_{ji})$.

• The conjugate transpose of a matrix A, denoted $A^*$, is obtained from A by switching its rows and columns and taking the conjugate of its entries. In other words, if $A = (a_{ij})$ then $A^* = (\overline{a_{ji}})$.

• A matrix A is said to be symmetric if $A = A^T$. Symmetric matrices have the following properties:

—Their eigenvalues are always real.

—They are always diagonalizable.

—Eigenvectors corresponding to distinct eigenvalues are orthogonal.

—A is orthogonally diagonalizable, that is, there exists an orthogonal matrix P such that $P^{-1}AP$ is diagonal.


• A matrix A is said to be Hermitian if $A = A^*$. For matrices with real entries, being Hermitian is the same as being symmetric.

• An n × n matrix A is said to be normal if $A^*A = AA^*$. Obviously, Hermitian matrices are also normal.

• A matrix A is said to be unitary if $AA^* = A^*A = I$. Unitary matrices have the following properties:

—They preserve the dot product, that is, $\langle A\mathbf{x}, A\mathbf{y}\rangle = \langle \mathbf{x}, \mathbf{y}\rangle$.

—Their columns and rows are orthonormal.

—They are always diagonalizable.

—$|\det A| = 1$.

—$A^{-1} = A^*$.

• A matrix A is said to be orthogonal if $AA^T = A^TA = I$. Orthogonal matrices have the following properties:

—They preserve the dot product, that is, $\langle A\mathbf{x}, A\mathbf{y}\rangle = \langle \mathbf{x}, \mathbf{y}\rangle$.

—Their columns and rows are orthonormal.

—$|\det A| = 1$.

—$A^{-1} = A^T$.

• A quadratic form on $\mathbb{R}^n$ is a function Q defined on $\mathbb{R}^n$ by $Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}$ for some n × n matrix A. Here are a few important facts about quadratic forms:

— In the case A is symmetric, there exists a change of variable $\mathbf{x} = P\mathbf{y}$ that transforms $\mathbf{x}^T A \mathbf{x}$ into $\mathbf{y}^T D \mathbf{y}$, where D is a diagonal matrix.

— In the case A is symmetric, the maximum value of $\mathbf{x}^T A \mathbf{x}$ over unit vectors x is the largest eigenvalue $\lambda_1$ of A, and it is attained in the direction of $\mathbf{u}_1$, the corresponding unit eigenvector (see the numerical sketch below).
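These facts translate directly into computation. The following MATLAB sketch, using a small made-up symmetric matrix (the matrix and variable names are ours, not from the text), checks the orthogonal diagonalization and the quadratic form maximum numerically.

\begin{verbatim}
% A small symmetric matrix (made-up example for illustration).
A = [2 1 0; 1 3 1; 0 1 2];

% Orthogonal diagonalization: the columns of P are orthonormal
% eigenvectors, so P'*P = I and P^(-1)*A*P is diagonal.
[P, D] = eig(A);
norm(P'*P - eye(3))      % ~0 : P is orthogonal
norm(P\A*P - D)          % ~0 : P^(-1)AP is diagonal

% The maximum of the quadratic form x'*A*x over unit vectors x
% is the largest eigenvalue, attained at its eigenvector.
[lmax, i] = max(diag(D));
u1 = P(:, i);
u1'*A*u1                 % equals lmax
\end{verbatim}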

5.1.2 Introduction to the Singular Value Decomposition (SVD) of an m × n Matrix

Recall that an n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. In particular, symmetric matrices are diagonalizable. If P is a matrix whose columns are the eigenvectors of A, then $P^{-1}AP = D$, where D is the n × n diagonal matrix whose entries on the diagonal are the eigenvalues of A and all other entries are 0. This means that in the case A is diagonalizable, we can write $A = PDP^{-1}$. This factorization, as we know, is not possible for all matrices. However, a factorization of the form $A = QDP^{-1}$ is always possible for any m × n matrix A. A special factorization of this type,


the Singular Value Decomposition (SVD), is a very powerful tool in applied linear algebra. We discuss this factorization in this section.

The SVD is based on the following property of diagonalization, which can be imitated for rectangular matrices. If A is symmetric, then its eigenvalues are real. Moreover, if $A\mathbf{x} = \lambda\mathbf{x}$ and $\|\mathbf{x}\| = 1$, then $\|A\mathbf{x}\| = \|\lambda\mathbf{x}\| = |\lambda|\,\|\mathbf{x}\| = |\lambda|$. Hence, $|\lambda|$ measures the amount by which A stretches (or shrinks) vectors which have the same direction as the corresponding eigenvector. If $\lambda_1$ is the eigenvalue with largest magnitude and $\mathbf{v}_1$ is its corresponding eigenvector, then $\mathbf{v}_1$ gives the direction in which the stretching effect of A is the greatest. It is this property of eigenvalues we will use to derive the SVD. We begin with an example.

Example 5.1.1 Let $A = \begin{bmatrix} 4 & 11 & 14 \\ 8 & 7 & -2 \end{bmatrix}$ and consider the linear transformation

$$T : \mathbb{R}^3 \to \mathbb{R}^2, \quad \mathbf{x} \mapsto A\mathbf{x}.$$

Find a unit vector x at which the length $\|A\mathbf{x}\|$ is maximized and compute this maximum length.

The problem here is that A is not a square matrix, so we cannot use what we said above; we cannot find its eigenvalues. However, we note that $\|A\mathbf{x}\|^2$ is maximized at the same x that maximizes $\|A\mathbf{x}\|$. Also note that $\|A\mathbf{x}\|^2 = (A\mathbf{x})^T A\mathbf{x} = \mathbf{x}^T A^T A \mathbf{x} = \mathbf{x}^T (A^TA) \mathbf{x}$. This is a quadratic form defined by the matrix $A^TA$. Note that $A^TA$ is symmetric since $(A^TA)^T = A^T (A^T)^T = A^TA$. We know the largest value of this quadratic form, over unit vectors, is the largest eigenvalue of its matrix. Since

$$A^TA = \begin{bmatrix} 80 & 100 & 40 \\ 100 & 170 & 140 \\ 40 & 140 & 200 \end{bmatrix},$$

its eigenvalues are 360, 90, 0 and its corresponding eigenvectors are

$$\begin{bmatrix} 1/2 \\ 1 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} -1 \\ -1/2 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 2 \\ -2 \\ 1 \end{bmatrix}.$$

Unit vectors in the same directions are

$$\begin{bmatrix} 1/3 \\ 2/3 \\ 2/3 \end{bmatrix}, \quad \begin{bmatrix} -2/3 \\ -1/3 \\ 2/3 \end{bmatrix}, \quad \begin{bmatrix} 2/3 \\ -2/3 \\ 1/3 \end{bmatrix}.$$

In conclusion, we see that for $\|\mathbf{x}\| = 1$, the largest value of $\|A\mathbf{x}\|$ is $\sqrt{360}$, and it is attained at the unit eigenvector $\mathbf{v}_1 = (1/3, 2/3, 2/3)^T$ corresponding to $\lambda_1 = 360$.
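A quick numerical check of this example in MATLAB (a sketch; eig does not guarantee an eigenvalue order, so we sort explicitly):

\begin{verbatim}
A = [4 11 14; 8 7 -2];

% Eigenvalues and eigenvectors of the quadratic form matrix A'*A.
[V, D] = eig(A'*A);
[lambda, order] = sort(diag(D), 'descend');
V = V(:, order);         % lambda = [360; 90; 0]

v1 = V(:, 1);            % direction of greatest stretch (up to sign)
norm(A*v1)               % = sqrt(360) = 6*sqrt(10), about 18.974
\end{verbatim}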

Remark 5.1.2 In this example, we note the importance of the matrix $A^TA$. To find the direction in which Ax stretches (or shrinks) the most, we computed the eigenvalues and eigenvectors of $A^TA$. That direction is the direction of the eigenvector corresponding to the eigenvalue of $A^TA$ with the largest absolute value.

5.1.3 The Singular Values of an m × n Matrix

Let A be an m × n matrix. Then, as noted in the example, $A^TA$ is an n × n symmetric matrix, hence orthogonally diagonalizable. Let $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ be an orthonormal basis for $\mathbb{R}^n$ consisting of the eigenvectors of $A^TA$, and let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the corresponding eigenvalues of $A^TA$. Then, for $1 \leq i \leq n$, we have:

$$\|A\mathbf{v}_i\|^2 = (A\mathbf{v}_i)^T (A\mathbf{v}_i) = \mathbf{v}_i^T A^TA \mathbf{v}_i = \mathbf{v}_i^T \lambda_i \mathbf{v}_i \quad \text{since } A^TA\mathbf{v}_i = \lambda_i\mathbf{v}_i$$
$$= \lambda_i \mathbf{v}_i^T \mathbf{v}_i = \lambda_i \quad \text{since } \mathbf{v}_i^T\mathbf{v}_i = \|\mathbf{v}_i\|^2 = 1 \ (\mathbf{v}_i \text{ is a unit vector})$$

Thus, we see that all the eigenvalues of $A^TA$ are nonnegative. By renumbering, we may assume that $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n \geq 0$. We give this as a theorem as it is an important result.

Theorem 5.1.3 Let A be an m × n matrix. Let $\lambda_i$ be the n eigenvalues of $A^TA$. Then $\lambda_i \geq 0$ for $1 \leq i \leq n$.

Definition 5.1.4 The singular values of A are the square roots of the eigenvalues $\lambda_i$ of $A^TA$, denoted $\sigma_i$, and they are arranged in decreasing order. In other words,

$$\sigma_i = \sqrt{\lambda_i}$$

Since $\|A\mathbf{v}_i\|^2 = \lambda_i$, we see that $\sigma_i$ is the length of the vector $A\mathbf{v}_i$, where $\mathbf{v}_i$ is the corresponding eigenvector of $A^TA$.

Example 5.1.5 Find the singular values of A where A is as in example 5.1.1. Recall that we found $\lambda_1 = 360$, $\lambda_2 = 90$ and $\lambda_3 = 0$, hence $\sigma_1 = \sqrt{360} = 6\sqrt{10}$, $\sigma_2 = \sqrt{90} = 3\sqrt{10}$ and $\sigma_3 = 0$.
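In MATLAB, the agreement between the two computations can be checked directly (a sketch; the max(..., 0) guards against a tiny negative rounding error in the zero eigenvalue):

\begin{verbatim}
A = [4 11 14; 8 7 -2];

% Singular values from the eigenvalues of A'*A ...
sqrt(max(sort(eig(A'*A), 'descend'), 0))  % [18.9737; 9.4868; 0]

% ... agree with MATLAB's svd, which returns the min(m,n) = 2
% singular values of the 2 x 3 matrix A.
svd(A)                                    % [18.9737; 9.4868]
\end{verbatim}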

We have the following important theorem.

Theorem 5.1.6 Suppose that $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is an orthonormal basis for $\mathbb{R}^n$ consisting of the eigenvectors of $A^TA$, arranged so that the corresponding eigenvalues of $A^TA$ satisfy $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n$, and suppose that A has r nonzero singular values. Then $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_r\}$ is an orthogonal basis for col A (the space spanned by the columns of A), and rank A = r.

Proof. There are three things to prove.

1. $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_r\}$ is orthogonal. For $i \neq j$, $\mathbf{v}_i \perp \mathbf{v}_j$, hence $\mathbf{v}_i \perp (\lambda_j\mathbf{v}_j)$, thus $\mathbf{v}_i^T \lambda_j \mathbf{v}_j = 0$. It follows that for $i \neq j$,

$$(A\mathbf{v}_i)^T A\mathbf{v}_j = \mathbf{v}_i^T A^TA \mathbf{v}_j = \mathbf{v}_i^T \lambda_j \mathbf{v}_j = 0$$

Thus $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_n\}$ is orthogonal. Since A has exactly r nonzero singular values and the lengths of the vectors $A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_n$ are these singular values, $A\mathbf{v}_i \neq 0 \iff 1 \leq i \leq r$. So $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_r\}$ is an orthogonal set of nonzero vectors.


2. $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_r\}$ is linearly independent. Since these vectors are orthogonal and nonzero, they are linearly independent.

3. $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_r\}$ spans col A. If $\mathbf{y} \in \operatorname{col} A$ then $\mathbf{y} = A\mathbf{x}$ for some vector x. Since $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is an orthonormal basis for $\mathbb{R}^n$, we can write $\mathbf{x} = \sum_{i=1}^{n} c_i \mathbf{v}_i$, thus $\mathbf{y} = \sum_{i=1}^{n} c_i A\mathbf{v}_i = \sum_{i=1}^{r} c_i A\mathbf{v}_i$ since $A\mathbf{v}_i = 0$ for $i > r$. Thus $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_r\}$ spans col A.

4. It also follows that rank A = r.
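The conclusion of the theorem is also easy to test numerically; a sketch, again with the matrix of example 5.1.1:

\begin{verbatim}
A = [4 11 14; 8 7 -2];
[V, D] = eig(A'*A);
[~, order] = sort(diag(D), 'descend');
V = V(:, order);

AV = A*V;      % columns are A*v1, A*v2, A*v3
AV'*AV         % ~diag(360, 90, 0): the nonzero columns are
               % orthogonal and ||A*vi||^2 = lambda_i
rank(A)        % r = 2, the number of nonzero singular values
\end{verbatim}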

5.1.4 The Singular Value Decomposition of an m × n Matrix

Let A be an m × n matrix with r nonzero singular values, where $r \leq \min(m, n)$. Define D to be the r × r diagonal matrix consisting of these r nonzero singular values of A such that $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_r$. Let

$$\Sigma = \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix} \tag{5.1}$$

be an m × n matrix. The SVD of A will involve Σ. More specifically, we have the following theorem.

Theorem 5.1.7 Let A be an m × n matrix with rank r. Then there exists an m × n matrix Σ as in 5.1, as well as an m × m orthogonal matrix U and an n × n orthogonal matrix V, such that

$$A = U\Sigma V^T$$

Proof. We outline the proof by constructing the various matrices involved and by showing they satisfy the requirements of the theorem. Let $\lambda_i$ and $\mathbf{v}_i$ be as in theorem 5.1.6. Then $\{A\mathbf{v}_1, A\mathbf{v}_2, \ldots, A\mathbf{v}_r\}$ is an orthogonal basis for col A. We normalize each $A\mathbf{v}_i$ to obtain an orthonormal basis $\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_r\}$, where

$$\mathbf{u}_i = \frac{1}{\|A\mathbf{v}_i\|} A\mathbf{v}_i = \frac{1}{\sigma_i} A\mathbf{v}_i$$

thus

$$A\mathbf{v}_i = \sigma_i \mathbf{u}_i \quad (1 \leq i \leq r) \tag{5.2}$$

Next, we extend $\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_r\}$ to an orthonormal basis $\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_m\}$ of $\mathbb{R}^m$, and let $U = [\mathbf{u}_1\ \mathbf{u}_2\ \cdots\ \mathbf{u}_m]$ and $V = [\mathbf{v}_1\ \mathbf{v}_2\ \cdots\ \mathbf{v}_n]$. By construction, both U and V are orthogonal matrices. Also, from 5.2,

$$AV = [A\mathbf{v}_1\ A\mathbf{v}_2\ \cdots\ A\mathbf{v}_r\ 0\ \cdots\ 0] = [\sigma_1\mathbf{u}_1\ \sigma_2\mathbf{u}_2\ \cdots\ \sigma_r\mathbf{u}_r\ 0\ \cdots\ 0]$$


Let D and Σ be as above; then

$$U\Sigma = [\mathbf{u}_1\ \mathbf{u}_2\ \cdots\ \mathbf{u}_m] \begin{bmatrix} \sigma_1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & \sigma_2 & & \vdots & \vdots & & \vdots \\ \vdots & & \ddots & 0 & 0 & & \vdots \\ 0 & \cdots & 0 & \sigma_r & 0 & \cdots & 0 \\ 0 & \cdots & \cdots & 0 & 0 & \cdots & 0 \end{bmatrix} = [\sigma_1\mathbf{u}_1\ \sigma_2\mathbf{u}_2\ \cdots\ \sigma_r\mathbf{u}_r\ 0\ \cdots\ 0] = AV$$

Therefore,

$$U\Sigma V^T = AVV^T = A \quad \text{since } V \text{ is orthogonal } (VV^T = I)$$

Definition 5.1.8 Any decomposition $A = U\Sigma V^T$, with U and V orthogonal, Σ as in 5.1 and positive diagonal entries for D, is called a singular value decomposition (SVD) of A. The matrices U and V are not uniquely determined, but the diagonal entries of Σ are necessarily the singular values of A. The columns of U are called the left singular vectors of A and the columns of V are called the right singular vectors of A.

Example 5.1.9 Find the SVD of the matrix in example 5.1.1. This example is designed to give the reader an idea of where the various components come from. An algorithm developed for a computer would proceed differently. The construction can be divided into three steps.

Recall that $A = \begin{bmatrix} 4 & 11 & 14 \\ 8 & 7 & -2 \end{bmatrix}$.

Step 1: Find an orthogonal diagonalization of $A^TA$. For small matrices, this computation can be done by hand. For larger matrices, it will be done with a computer. Since A is 2 × 3, $A^TA$ is 3 × 3. Since the eigenvalues are 360, 90, 0, the diagonalization of $A^TA$ is $\begin{bmatrix} 360 & 0 & 0 \\ 0 & 90 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.

Step 2: Set up V and Σ. Σ is m × n, that is 2 × 3 in our case. V is n × n, that is 3 × 3 in our case. The eigenvectors corresponding to the eigenvalues of $A^TA$, written in decreasing order of eigenvalue, are

$$[\mathbf{v}_1\ \mathbf{v}_2\ \mathbf{v}_3] = \begin{bmatrix} 1/3 & -2/3 & 2/3 \\ 2/3 & -1/3 & -2/3 \\ 2/3 & 2/3 & 1/3 \end{bmatrix}$$

hence $V = \begin{bmatrix} 1/3 & -2/3 & 2/3 \\ 2/3 & -1/3 & -2/3 \\ 2/3 & 2/3 & 1/3 \end{bmatrix}$. The singular values are $\sigma_1 = 6\sqrt{10}$, $\sigma_2 = 3\sqrt{10}$ and $\sigma_3 = 0$, thus

$$D = \begin{bmatrix} 6\sqrt{10} & 0 \\ 0 & 3\sqrt{10} \end{bmatrix} \quad \text{and} \quad \Sigma = \begin{bmatrix} 6\sqrt{10} & 0 & 0 \\ 0 & 3\sqrt{10} & 0 \end{bmatrix}.$$

Step 3: Construct U. U is going to be m × m, that is 2 × 2 in our case. The columns of $U = [\mathbf{u}_1\ \mathbf{u}_2]$ are $\frac{1}{\sigma_i} A\mathbf{v}_i$ for i = 1, 2, thus

$$\mathbf{u}_1 = \frac{1}{6\sqrt{10}} \begin{bmatrix} 4 & 11 & 14 \\ 8 & 7 & -2 \end{bmatrix} \begin{bmatrix} 1/3 \\ 2/3 \\ 2/3 \end{bmatrix} = \begin{bmatrix} 3\sqrt{10}/10 \\ \sqrt{10}/10 \end{bmatrix}$$

and

$$\mathbf{u}_2 = \frac{1}{3\sqrt{10}} \begin{bmatrix} 4 & 11 & 14 \\ 8 & 7 & -2 \end{bmatrix} \begin{bmatrix} -2/3 \\ -1/3 \\ 2/3 \end{bmatrix} = \begin{bmatrix} \sqrt{10}/10 \\ -3\sqrt{10}/10 \end{bmatrix}.$$

Note that $\{\mathbf{u}_1, \mathbf{u}_2\}$ is already a basis for $\mathbb{R}^2$, so we do not need to extend it. Hence

$$A = U\Sigma V^T = \begin{bmatrix} 3\sqrt{10}/10 & \sqrt{10}/10 \\ \sqrt{10}/10 & -3\sqrt{10}/10 \end{bmatrix} \begin{bmatrix} 6\sqrt{10} & 0 & 0 \\ 0 & 3\sqrt{10} & 0 \end{bmatrix} \begin{bmatrix} 1/3 & 2/3 & 2/3 \\ -2/3 & -1/3 & 2/3 \\ 2/3 & -2/3 & 1/3 \end{bmatrix}$$
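The three steps of this construction carry over to MATLAB almost line by line. The following is a sketch of the hand construction above (not how the built-in svd works); it assumes, as in this example, that no extension of U is needed (r = m).

\begin{verbatim}
A = [4 11 14; 8 7 -2];
[m, n] = size(A);

% Steps 1-2: orthogonally diagonalize A'*A, eigenvalues in
% decreasing order; V holds the right singular vectors.
[V, L] = eig(A'*A);
[lambda, order] = sort(diag(L), 'descend');
V = V(:, order);
sigma = sqrt(max(lambda, 0));     % [6*sqrt(10); 3*sqrt(10); 0]

r = sum(sigma > 1e-10);           % rank = number of nonzero sigma_i
Sigma = zeros(m, n);
Sigma(1:r, 1:r) = diag(sigma(1:r));

% Step 3: u_i = (1/sigma_i) * A * v_i for i = 1, ..., r.
U = A*V(:, 1:r) ./ sigma(1:r)';   % here r = m, so no extension needed

norm(A - U*Sigma*V')              % ~0 : A = U*Sigma*V'
\end{verbatim}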

5.1.5 Summary

If A is m × n with rank A = r, then $A = U\Sigma V^T$ where:

• $V = [\mathbf{v}_1\ \mathbf{v}_2\ \cdots\ \mathbf{v}_n]$ is an n × n matrix which consists of the unit eigenvectors of $A^TA$, arranged such that the corresponding eigenvalues of $A^TA$ satisfy $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n \geq 0$.

• $\Sigma = \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix}$ is an m × n matrix such that $D = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$ is an r × r diagonal matrix, where $\sigma_i = \sqrt{\lambda_i}$.

• U is an m × m matrix whose first r columns are the normalized vectors $\frac{1}{\sigma_i} A\mathbf{v}_i$, extended to an orthonormal basis of $\mathbb{R}^m$.

• This decomposition gives us information about the directions in which the stretching (shrinking) of A is the largest, second largest, third largest, ..., as well as the actual largest, second largest, third largest, ... amount of this stretching (shrinking).


• From this, we can simplify A (make it smaller) by eliminating the directions in which the stretching (shrinking) is small, less than a fixed threshold. This is what we will do in the applications.

You should recognize the similarity between this approach and what we have done before. To reduce the size of the information we have, we:

1. Express that information in terms of "essential components".

2. Identify and eliminate the components which only play a small role in the data.

5.1.6 The SVD and MATLAB

The MATLAB command is svd. Two useful formats for this command are:

1. X = svd(A) returns a vector X containing the singular values of A.

2. [U,S,V] = svd(X) produces a diagonal matrix S of the same dimension as X, with nonnegative diagonal elements in decreasing order, and unitary matrices U and V so that $X = USV^T$.
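For example, with the matrix of example 5.1.1 (a brief sketch):

\begin{verbatim}
A = [4 11 14; 8 7 -2];

s = svd(A)            % singular values: [18.9737; 9.4868]

[U, S, V] = svd(A);   % U is 2 x 2, S is 2 x 3, V is 3 x 3
norm(A - U*S*V')      % ~0 : the factors reproduce A
\end{verbatim}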

5.1.7 An Application: Image Compression

Suppose that A is the m × n matrix which represents an image. We describe a technique which compresses the image using the SVD of A. Using the notation of this section, we write the SVD of A as $A = U\Sigma V^T$, where U is m × m, Σ is m × n and V is n × n.

Σ consists of zeros except for the elements of its diagonal, which contain the singular values of A in decreasing order. The number of non-zero diagonal elements of Σ is the rank of A. Let r be the rank of A. Then $r \leq \min(m, n)$.

The idea behind the compression technique is that the larger a singular value of A is, the more it contributes to A, hence to the image. Singular values which are very small can therefore be omitted and the image reconstructed without them. More specifically, suppose we decide to keep k of the r singular values of A and set the other ones to 0. We note the following:

1. Σ is a matrix which consists of zeros and a block at the upper left corner. This block is a k × k diagonal matrix containing the singular values of A we kept, in decreasing order. For storage purposes, Σ is k × 1 (only the k kept singular values need to be stored).

2. When we do the product $\Sigma V^T$, we are only using the first k rows of $V^T$, hence the first k columns of V. So $V^T$, for storage purposes, is k × n.

3. The result of the product $\Sigma V^T$ will be a matrix in which only the first k rows contain non-zero entries. The remaining rows contain only zeros. Hence, for storage purposes, $\Sigma V^T$ is k × n.


Figure 5.1: Full Rank Image (399)

4. When we do the product $U\Sigma V^T$, we only use the first k columns of U, hence for storage purposes, U is m × k.

We conclude that the total storage needed for $U\Sigma V^T$, if we only keep k singular values of A, is $m \times k + k + k \times n = k(m + n + 1)$. The size of the original image was $m \times n$. So, we even have a formula for the compression ratio as a function of m, n, k. It is

$$\frac{\text{original image}}{\text{compressed image}} = \frac{mn}{k(m + n + 1)} \tag{5.3}$$

We look at a specific example. Consider the image in Figure 5.1. This image is 617 × 928. The rank of the corresponding matrix is 399. Below are the results of compressing this image by reducing the number of singular values we keep. For each image, we give how many singular values we kept (the rank), the percentage of the original rank it represents, as well as the compression ratio. The code which performs this can be downloaded here. A nice website which explains this compression, and where I got this information, can be found here.
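Since the linked code is not reproduced here, the following MATLAB sketch shows the idea; the file name is a placeholder, and with the dimensions of the image above (m = 617, n = 928) the printed ratio matches formula 5.3 (for instance, k = 99 gives about 3.7).

\begin{verbatim}
% Sketch of SVD image compression; 'image.jpg' is a placeholder name.
X = double(rgb2gray(imread('image.jpg')));   % m x n grayscale matrix
[m, n] = size(X);

k = 99;                                      % singular values to keep
[U, S, V] = svd(X);
Xk = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';   % rank-k reconstruction

ratio = m*n / (k*(m + n + 1));               % formula 5.3
imshow(uint8(Xk));
title(sprintf('Rank %d - Compression %.1f x', k, ratio));
\end{verbatim}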

5.1.8 Exercises

1. Using your old Algebra book and/or doing some research in other books or the internet, prove the following results.


Figure 5.2: Rank 299 Image (75%) - Compression 1.2 x


Figure 5.3: Rank 199 Image (50%) - Compression 1.9 x


Figure 5.4: Rank 159 Image (35%) - Compression 2.3 x


Figure 5.5: Rank 99 Image (25%) - Compression 3.7 x


Figure 5.6: Rank 79 Image (20%) - Compression 4.7 x


Figure 5.7: Rank 39 Image (10%) - Compression 9.5 x


Figure 5.8: Rank 19 Image (5%) - Compression 19.5 x


Figure 5.9: Rank 7 Image (2%) - Compression 52.9 x


Figure 5.10: Rank 3 Image (1%) - Compression 123.5 x


(a) If A is a symmetric matrix, then the eigenvalues of A are real.

(b) If A is orthogonal, then $|\det A| = 1$.

2. Suppose the factorization below is an SVD of a matrix A, with the entries in U and V rounded to two decimal places.

$$A = \begin{bmatrix} .40 & -.78 & .47 \\ .37 & -.33 & -.87 \\ -.84 & -.52 & -.16 \end{bmatrix} \begin{bmatrix} 7.10 & 0 & 0 \\ 0 & 3.10 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} .30 & -.51 & -.81 \\ .76 & .64 & -.12 \\ .58 & -.58 & .58 \end{bmatrix}$$

(a) What is the rank of A?

(b) Use this decomposition of A to write, without calculations, a basis for col A.

(c) What are the singular values of A?

(d) What are the eigenvalues of $A^TA$?

(e) What is the largest value ‖Ax‖ can have for any unit vector x?

3. Suppose that A is an n × n square and invertible matrix. Find an SVD for $A^{-1}$. (Hint: if A is invertible, rank A = n; this also gives information about Σ.)

4. Find an SVD for $A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{bmatrix}$ by following the steps in example 5.1.9. You cannot use the MATLAB svd function, but you can use MATLAB for the intermediate computations. Show all the intermediate steps.
