Exploratory Factor Analysis
Michel Tenenhaus
1. Kendall's data

48 applicants for a certain position are rated on 15 variables:

(1) Form of letter of application, (2) Appearance, (3) Academic ability, (4) Likeability, (5) Self-confidence, (6) Lucidity, (7) Honesty, (8) Salesmanship, (9) Experience, (10) Drive, (11) Ambition, (12) Grasp, (13) Potential, (14) Keenness to join, (15) Suitability
Case Summaries (rows = candidates 1 to 48, columns = variables X1 to X15)

     X1  X2  X3  X4  X5  X6  X7  X8  X9  X10 X11 X12 X13 X14 X15
1     6   7   2   5   8   7   8   8   3   8   9   7   5   7  10
2     9  10   5   8  10   9   9  10   5   9   9   8   8   8  10
3     7   8   3   6   9   8   9   7   4   9   9   8   6   8  10
4     5   6   8   5   6   5   9   2   8   4   5   8   7   6   5
5     6   8   8   8   4   4   9   2   8   5   5   8   8   7   7
6     7   7   7   6   8   7  10   5   9   6   5   8   6   6   6
7     9   9   8   8   8   8   8   8  10   8  10   8   9   8  10
8     9   9   9   8   9   9   8   8  10   9  10   9   9   9  10
9     9   9   7   8   8   8   8   5   9   8   9   8   8   8  10
10    4   7  10   2  10  10   7  10   3  10  10  10   9   3  10
11    4   7  10   0  10   8   3   9   5   9  10   8  10   2   5
12    4   7  10   4  10  10   7   8   2   8   8  10  10   3   7
13    6   9   8  10   5   4   9   4   4   4   5   4   7   6   8
14    8   9   8   9   6   3   8   2   5   2   6   6   7   5   6
15    4   8   8   7   5   4  10   2   7   5   3   6   6   4   6
16    6   9   6   7   8   9   8   9   8   8   7   6   8   6  10
17    8   7   7   7   9   5   8   6   6   7   8   6   6   7   8
18    6   8   8   4   8   8   6   4   3   3   6   7   2   6   4
19    6   7   8   4   7   8   5   4   4   2   6   8   3   5   4
20    4   8   7   8   8   9  10   5   2   6   7   9   8   8   9
21    3   8   6   8   8   8  10   5   3   6   7   8   8   5   8
22    9   8   7   8   9  10  10  10   3  10   8  10   8  10   8
23    7  10   7   9   9   9  10  10   3   9   9  10   9  10   8
24    9   8   7  10   8  10  10  10   2   9   7   9   9  10   8
25    6   9   7   7   4   5   9   3   2   4   4   4   4   5   4
26    7   8   7   8   5   4   8   2   3   4   5   6   5   5   6
27    2  10   7   9   8   9  10   5   3   5   6   7   6   4   5
28    6   3   5   3   5   3   5   0   0   3   3   0   0   5   0
29    4   3   4   3   3   0   0   0   0   4   4   0   0   5   0
30    4   6   5   6   9   4  10   3   1   3   3   2   2   7   3
31    5   5   4   7   8   4  10   3   2   5   5   3   4   8   3
32    3   3   5   7   7   9  10   3   2   5   3   7   5   5   2
33    2   3   5   7   7   9  10   3   2   2   3   6   4   5   2
34    3   4   6   4   3   3   8   1   1   3   3   3   2   5   2
35    6   7   4   3   3   0   9   0   1   0   2   3   1   5   3
36    9   8   5   5   6   6   8   2   2   2   4   5   6   6   3
37    4   9   6   4  10   8   8   9   1   3   9   7   5   3   2
38    4   9   6   6   9   9   7   9   1   2  10   8   5   5   2
39   10   6   9  10   9  10  10  10  10  10   8  10  10  10  10
40   10   6   9  10   9  10  10  10  10  10  10  10  10  10  10
41   10   7   8   0   2   1   2   0  10   2   0   3   0   0  10
42   10   3   8   0   1   1   0   0  10   0   0   0   0   0  10
43    3   4   9   8   2   4   5   3   6   2   1   3   3   3   8
44    7   7   7   6   9   8   8   6   8   8  10   8   8   6   5
45    9   6  10   9   7   7  10   2   1   5   5   7   8   4   5
46    9   8  10  10   7   9  10   3   1   5   7   9   9   4   4
47    0   7  10   3   5   0  10   0   0   2   2   0   0   0   0
48    0   6  10   1   5   0  10   0   0   2   2   0   0   0   0
Correlation table

One of the questions of interest here is how the variables cluster, in the sense that some of the qualities may be correlated or confused in the judge's mind. (There was no purpose in clustering the candidates - only one was to be chosen.)
Correlation Matrix

      X1     X2     X3     X4     X5     X6     X7     X8     X9     X10    X11    X12    X13    X14    X15
X1   1.000   .239   .044   .306   .092   .228  -.107   .269   .548   .346   .285   .338   .367   .467   .586
X2    .239  1.000   .123   .380   .431   .371   .354   .477   .141   .341   .550   .506   .507   .284   .384
X3    .044   .123  1.000   .002   .001   .077  -.030   .046   .266   .094   .044   .198   .290  -.323   .140
X4    .306   .380   .002  1.000   .302   .483   .645   .347   .141   .393   .347   .503   .606   .685   .327
X5    .092   .431   .001   .302  1.000   .808   .410   .816   .015   .704   .842   .721   .672   .482   .250
X6    .228   .371   .077   .483   .808  1.000   .356   .826   .147   .698   .758   .883   .777   .527   .416
X7   -.107   .354  -.030   .645   .410   .356  1.000   .231  -.156   .280   .215   .386   .416   .448   .003
X8    .269   .477   .046   .347   .816   .826   .231  1.000   .233   .811   .860   .766   .735   .549   .548
X9    .548   .141   .266   .141   .015   .147  -.156   .233  1.000   .337   .195   .299   .348   .215   .693
X10   .346   .341   .094   .393   .704   .698   .280   .811   .337  1.000   .780   .714   .788   .613   .623
X11   .285   .550   .044   .347   .842   .758   .215   .860   .195   .780  1.000   .784   .769   .547   .435
X12   .338   .506   .198   .503   .721   .883   .386   .766   .299   .714   .784  1.000   .876   .549   .528
X13   .367   .507   .290   .606   .672   .777   .416   .735   .348   .788   .769   .876  1.000   .539   .574
X14   .467   .284  -.323   .685   .482   .527   .448   .549   .215   .613   .547   .549   .539  1.000   .396
X15   .586   .384   .140   .327   .250   .416   .003   .548   .693   .623   .435   .528   .574   .396  1.000
2. Agglomerative hierarchical clustering of the variables

* * * HIERARCHICAL CLUSTER ANALYSIS * * *

Dendrogram using Complete Linkage (farthest-neighbour method)

[Dendrogram: rescaled distance cluster combine, scale 0 to 25. Variables in leaf order, top to bottom: X6, X12, X8, X11, X5, X10, X13, X2, X4, X14, X7, X9, X15, X1, X3]
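The clustering step can be sketched numerically. Below is a minimal pure-Python sketch of complete-linkage (farthest-neighbour) agglomerative clustering with dissimilarity 1 - Cor(Xi, Xj), run for illustration on the six variables of blocks 2 and 3 only (the correlations come from the slides; the function name `complete_linkage` is ours, not SPSS's):

```python
# Complete-linkage (farthest-neighbour) agglomerative clustering on
# dissimilarities d(i, j) = 1 - Cor(Xi, Xj). Illustrative sketch on the
# six variables of blocks 2 and 3; correlations taken from the slides.
labels = ["X1", "X9", "X15", "X4", "X7", "X14"]
R = [[1.000, 0.548, 0.586, 0.306, -0.107, 0.467],
     [0.548, 1.000, 0.693, 0.141, -0.156, 0.215],
     [0.586, 0.693, 1.000, 0.327, 0.003, 0.396],
     [0.306, 0.141, 0.327, 1.000, 0.645, 0.685],
     [-0.107, -0.156, 0.003, 0.645, 1.000, 0.448],
     [0.467, 0.215, 0.396, 0.685, 0.448, 1.000]]
# Dissimilarity between variables, indexed by label
D = {a: {b: 1 - R[i][j] for j, b in enumerate(labels)}
     for i, a in enumerate(labels)}

def complete_linkage(D, labels, k):
    """Repeatedly merge the two closest clusters (closest = smallest
    maximum pairwise dissimilarity) until k clusters remain."""
    clusters = [[l] for l in labels]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(D[a][b] for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

clusters = complete_linkage(D, labels, k=2)
print(clusters)  # recovers the two blocks {X1, X9, X15} and {X4, X7, X14}
```

Cut at two clusters, the procedure recovers exactly the "experience" block and the "openness" block interpreted on the next slide.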
Interpretation of the blocks

Block 1: personal qualities favourable for the position
(Appearance), Self-confidence, Lucidity, Salesmanship, Drive, Ambition, Grasp, Potential

Block 2: openness and communication qualities
Likeability, Honesty, Keenness to join

Block 3: experience
Form of letter of application, Experience, Suitability

Block 4: academic record
Academic ability
3. Unidimensionality of a block of variables

Question: Is a block of variables Xj essentially one-dimensional?

Answer:
1) The first eigenvalue λ1 of the principal component analysis of the block is greater than 1, and the other eigenvalues are less than 1.
2) Each variable is more strongly correlated with the first principal component than with any of the other principal components.
3) Each variable Xj has a correlation greater than 0.5 in absolute value with the first component.
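These three criteria can be checked numerically from a correlation matrix. A numpy-based sketch, using the Block 2 correlation matrix (X4, X7, X14) from the slides:

```python
import numpy as np

# Unidimensionality check for a block, per the three criteria above.
# Block 2 correlation matrix (X4, X7, X14) from the slides.
R = np.array([[1.000, 0.645, 0.685],
              [0.645, 1.000, 0.448],
              [0.685, 0.448, 1.000]])

eigval, eigvec = np.linalg.eigh(R)        # eigh returns ascending order
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Loadings: correlation of each standardized variable with each component
loadings = eigvec * np.sqrt(eigval)

crit1 = eigval[0] > 1 and np.all(eigval[1:] < 1)   # lambda_1 > 1, rest < 1
crit2 = np.all(np.abs(loadings[:, 0]) >= np.abs(loadings[:, 1:]).max(axis=1))
crit3 = np.all(np.abs(loadings[:, 0]) > 0.5)       # |cor with PC1| > 0.5
print(crit1, crit2, crit3)
```

The slides report eigenvalues 2.191, .553, .255 for this block, so all three criteria hold and the block is unidimensional.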
Application: PCA of each block

Block 1

Total Variance Explained

Component   Total   % of Variance   Cumulative %
1           5.977   74.706          74.706
2            .759    9.491          84.198
3            .434    5.428          89.626
4            .369    4.606          94.233
5            .179    2.239          96.471
6            .134    1.677          98.148
7            .090    1.129          99.277
8            .058     .723         100.000

Extraction Method: Principal Component Analysis.

Component Matrix (component 1)

X2    .576
X5    .877
X6    .900
X8    .920
X10   .857
X11   .924
X12   .913
X13   .893

Block 1 is unidimensional.
Application

Block 2 (X4, X7, X14):

Total Variance Explained

Component   Total   % of Variance   Cumulative %
1           2.191   73.050          73.050
2            .553   18.440          91.490
3            .255    8.510         100.000

Component Matrix (component 1): X4 .918, X7 .810, X14 .832

Block 3 (X1, X9, X15):

Total Variance Explained

Component   Total   % of Variance   Cumulative %
1           2.220   74.002          74.002
2            .476   15.853          89.856
3            .304   10.144         100.000

Component Matrix (component 1): X1 .819, X9 .872, X15 .888

Extraction Method: Principal Component Analysis.
4. Reliability of the measurement instrument
A global measure of the homogeneity of a block of positively correlated variables: Cronbach's Alpha

Question: How can the reliability of the measurement instrument, that is, the degree of homogeneity of a block of variables xi positively correlated with one another, be measured globally?

Answer: use Cronbach's Alpha.
The model: $x_i = \tau + e_i, \quad i = 1, \dots, p$

where:
- τ = true measure
- xi = item number i
- ei = measurement error

with the ei and τ independent.
Definition of Cronbach's α

Score: $H = \sum_{i=1}^{p} x_i$

Cronbach's α $= \mathrm{Cor}^2(H, \tau)$

Computing formula for Cronbach's α:

$$\alpha = \frac{p}{p-1}\,\frac{\sum_{i \neq j} \mathrm{Cov}(x_i, x_j)}{\mathrm{Var}(H)} = \frac{p}{p-1}\left(1 - \frac{\sum_i \mathrm{Var}(x_i)}{\mathrm{Var}(H)}\right)$$

α ≤ 1, and α = 1 when all the correlations between the xi are equal to 1 and all the xi have equal variances.
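As a sanity check on the computing formula, here is a small numpy sketch that evaluates α both ways on a made-up toy data set (the numbers are purely illustrative), and verifies the limiting claim that perfectly correlated items with equal variances give α = 1 exactly:

```python
import numpy as np

# Toy illustration of the two equivalent formulas for Cronbach's alpha.
# The data matrix (rows = subjects, columns = items) is made up.
X = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 2.0],
              [3.0, 4.0, 3.0],
              [4.0, 3.0, 5.0],
              [5.0, 5.0, 4.0]])
n, p = X.shape
H = X.sum(axis=1)                      # score H = sum of the items

var_items = X.var(axis=0, ddof=1)      # Var(x_i), sample variances
var_H = H.var(ddof=1)                  # Var(H)

# Form 1: p/(p-1) * (1 - sum Var(x_i) / Var(H))
alpha1 = p / (p - 1) * (1 - var_items.sum() / var_H)
# Form 2: p/(p-1) * sum_{i != j} Cov(x_i, x_j) / Var(H)
C = np.cov(X, rowvar=False)
alpha2 = p / (p - 1) * (C.sum() - np.trace(C)) / var_H

# With perfectly correlated items of equal variance, alpha = 1 exactly
X_perfect = np.column_stack([np.arange(1.0, 6.0)] * 3)
Hp = X_perfect.sum(axis=1)
alpha_perfect = 1.5 * (1 - X_perfect.var(axis=0, ddof=1).sum() / Hp.var(ddof=1))
```

The two forms agree identically because Var(H) decomposes into the sum of the item variances plus the sum of the pairwise covariances.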
Cronbach's α for standardized (centred-reduced) items

We have the decomposition:

$$\mathrm{Var}\!\left(\sum_{i=1}^{p} x_i\right) = \sum_{i=1}^{p} \mathrm{Var}(x_i) + \sum_{j \neq k} \mathrm{Cov}(x_j, x_k)$$

If the variables are standardized, this gives:

$$\mathrm{Var}(H) = p + \sum_{j \neq k} \mathrm{Cor}(x_j, x_k)$$

A block of positively correlated variables is homogeneous if the mean correlation

$$\bar r = \frac{1}{p(p-1)} \sum_{j \neq k} \mathrm{Cor}(x_j, x_k)$$

is large.
Cronbach's α for standardized items

The ratio

$$\alpha = \frac{p}{p-1}\,\frac{\sum_{i \neq j} \mathrm{Cov}(x_i, x_j)}{\mathrm{Var}(H)}$$

becomes:

$$\alpha = \frac{p}{p-1}\,\frac{p(p-1)\,\bar r}{\mathrm{Var}(H)} = \frac{p\,\bar r}{1 + (p-1)\,\bar r}$$

A block is considered homogeneous if:
- α ≥ 0.6 for exploratory research
- α ≥ 0.7 for confirmatory research
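For standardized items the whole computation thus reduces to the mean off-diagonal correlation. A sketch applying this formula to the Block 2 correlation matrix (X4, X7, X14), whose SPSS standardized item alpha is reported later as .8138:

```python
import numpy as np

# Standardized Cronbach's alpha from a correlation matrix:
# alpha = p * rbar / (1 + (p - 1) * rbar), rbar = mean off-diagonal correlation.
def cronbach_alpha_standardized(R):
    R = np.asarray(R)
    p = R.shape[0]
    rbar = (R.sum() - np.trace(R)) / (p * (p - 1))
    return p * rbar / (1 + (p - 1) * rbar)

# Block 2 correlation matrix (X4, X7, X14) from the slides
R_block2 = [[1.000, 0.645, 0.685],
            [0.645, 1.000, 0.448],
            [0.685, 0.448, 1.000]]
alpha_std = cronbach_alpha_standardized(R_block2)
print(round(alpha_std, 4))  # close to the .8138 reported by SPSS
```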
Application: Cronbach's α of each block

Block 1

Correlation Matrix

       X2     X5     X6     X8     X10    X11    X12    X13
X2   1.000   .431   .371   .477   .341   .550   .506   .507
X5    .431  1.000   .808   .816   .704   .842   .721   .672
X6    .371   .808  1.000   .826   .698   .758   .883   .777
X8    .477   .816   .826  1.000   .811   .860   .766   .735
X10   .341   .704   .698   .811  1.000   .780   .714   .788
X11   .550   .842   .758   .860   .780  1.000   .784   .769
X12   .506   .721   .883   .766   .714   .784  1.000   .876
X13   .507   .672   .777   .735   .788   .769   .876  1.000

All the correlations are positive.
Block 1: RELIABILITY ANALYSIS - SCALE (ALPHA)

Item-total Statistics

      Scale Mean    Scale Variance   Corrected     Squared       Alpha
      if Item       if Item          Item-Total    Multiple      if Item
      Deleted       Deleted          Correlation   Correlation   Deleted
X2    41.2708       364.1591         .5052         .4435         .9599
X5    41.4167       327.0142         .8356         .7957         .9435
X6    42.0417       300.9344         .8633         .8823         .9404
X8    43.5625       289.2726         .8883         .8530         .9391
X10   43.0417       312.5940         .8122         .7783         .9438
X11   42.3750       305.6011         .8937         .8493         .9384
X12   42.1042       303.3293         .8834         .8853         .9390
X13   42.6667       301.1206         .8570         .8345         .9409

Reliability Coefficients, 8 items
Alpha = .9503    Standardized item alpha = .9489

Scale = sum of the variables.
Block 2

Correlation Matrix

       X4     X7     X14
X4   1.000   .645   .685
X7    .645  1.000   .448
X14   .685   .448  1.000

Item-total Statistics

      Scale Mean    Scale Variance   Corrected     Squared       Alpha
      if Item       if Item          Item-Total    Multiple      if Item
      Deleted       Deleted          Correlation   Correlation   Deleted
X4    13.6042       19.5208          .7823         .6127         .6185
X7    11.7083       25.1472          .5986         .4166         .8125
X14   14.1875       23.4747          .6312         .4695         .7820

Reliability Coefficients, 3 items
Alpha = .8153    Standardized item alpha = .8138
Block 3

Correlation Matrix

       X1     X9     X15
X1   1.000   .548   .586
X9    .548  1.000   .693
X15   .586   .693  1.000

Item-total Statistics

      Scale Mean    Scale Variance   Corrected     Squared       Alpha
      if Item       if Item          Item-Total    Multiple      if Item
      Deleted       Deleted          Correlation   Correlation   Deleted
X1    10.1875       36.9641          .6165         .3824         .8184
X9    11.9583       28.3812          .7043         .5107         .7287
X15   10.2292       27.7974          .7318         .5405         .6981

Reliability Coefficients, 3 items
Alpha = .8223    Standardized item alpha = .8237
5. PCA of Kendall's data

Total Variance Explained

            Initial Eigenvalues                    Extraction Sums of Squared Loadings
Component   Total   % of Variance   Cumulative %   Total   % of Variance   Cumulative %
1           7.499   49.996          49.996         7.499   49.996          49.996
2           2.058   13.717          63.713         2.058   13.717          63.713
3           1.462    9.750          73.462         1.462    9.750          73.462
4           1.207    8.049          81.511         1.207    8.049          81.511
5            .739    4.928          86.439
6            .493    3.285          89.724
7            .351    2.342          92.066
8            .310    2.066          94.132
9            .256    1.706          95.838
10           .198    1.322          97.159
11           .149     .995          98.154
12           .093     .620          98.775
13           .085     .564          99.338
14           .064     .429          99.768
15           .035     .232         100.000

Extraction Method: Principal Component Analysis.
PCA of Kendall's data

Component Matrix (a)

        1      2      3      4
X13   .912
X12   .908
X8    .881
X11   .873
X6    .865
X10   .864
X5    .799
X14   .710          .560
X15   .646   .605
X4    .616          .575
X2    .583
X9           .795
X1           .618
X7          -.576
X3                         .710

Extraction Method: Principal Component Analysis. a. 4 components extracted.

Correlations below 0.5 in absolute value are not shown.
PCA + "Varimax rotation"

Only the correlation largest in absolute value in each row is shown.

Rotated Component Matrix (a)

        1      2      3      4
X5    .918
X11   .917
X8    .917
X6    .863
X12   .806
X10   .798
X13   .741
X2    .436
X9           .852
X1           .830
X15          .797
X4                  .872
X7                  .863
X14                 .538
X3                         .928

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.
6. Orthogonal Factor Analysis
6.1. The data

p random variables X1, …, Xp, in general standardized (centred-reduced).

6.2. The model

$$\begin{aligned}
X_1 &= \lambda_{11} Y_1 + \dots + \lambda_{1m} Y_m + e_1\\
&\;\;\vdots\\
X_i &= \lambda_{i1} Y_1 + \dots + \lambda_{im} Y_m + e_i\\
&\;\;\vdots\\
X_p &= \lambda_{p1} Y_1 + \dots + \lambda_{pm} Y_m + e_p
\end{aligned}$$

where: Yj = standardized common factors; ei = specific factors, centred, with variance ψi.

The factors Y1, …, Ym, e1, …, ep are all mutually uncorrelated.
6.3. Factor Analysis (principal components option)

The data: p standardized variables X1, …, Xp.

Estimating the factors Y1, …, Ym: the first m standardized principal components.

Choosing m: the number of eigenvalues greater than 1.
Application Kendall

Total Variance Explained: same table as in Section 5. Four eigenvalues are greater than 1 (7.499, 2.058, 1.462, 1.207), so m = 4 factors are retained.

Extraction Method: Principal Component Analysis.
Computing the saturations (loadings) λij

The loadings λij are the regression coefficients of the Yj in the regression of Xi on the factors Y1, …, Ym. The factors being orthogonal (= uncorrelated), we have:

λij = Cor(Xi, Yj)

Computing the communalities hi²:

$$h_i^2 = R^2(X_i; Y_1, \dots, Y_m) = \sum_{j=1}^{m} \lambda_{ij}^2 = \sum_{j=1}^{m} \mathrm{cor}^2(X_i, Y_j)$$
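Loadings and communalities are easy to compute from the correlation matrix. A numpy sketch on the six variables of blocks 2 and 3 (for which the slides report communalities .704, .753, .765, .844, .808, .750 with m = 2):

```python
import numpy as np

# Loadings (correlations variable/component) and communalities h_i^2
# for an m-component PCA solution, from a correlation matrix.
# Six-variable example (X1, X9, X15, X4, X7, X14) from the slides.
R = np.array([[1.000, 0.548, 0.586, 0.306, -0.107, 0.467],
              [0.548, 1.000, 0.693, 0.141, -0.156, 0.215],
              [0.586, 0.693, 1.000, 0.327, 0.003, 0.396],
              [0.306, 0.141, 0.327, 1.000, 0.645, 0.685],
              [-0.107, -0.156, 0.003, 0.645, 1.000, 0.448],
              [0.467, 0.215, 0.396, 0.685, 0.448, 1.000]])
m = 2
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

loadings = eigvec[:, :m] * np.sqrt(eigval[:m])  # lambda_ij = Cor(Xi, Yj)
communalities = (loadings ** 2).sum(axis=1)     # h_i^2 = sum_j lambda_ij^2
```

Note that the sum of the communalities equals the sum of the first m eigenvalues, i.e. the variance explained by the retained components.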
Application Kendall

Component Matrix (a)

        1      2      3      4
X1    .445   .618   .372  -.119
X2    .583  -.048  -.017   .289
X3    .109   .340  -.500   .710
X4    .616  -.180   .575   .361
X5    .799  -.358  -.295  -.178
X6    .865  -.188  -.182  -.070
X7    .433  -.576   .361   .448
X8    .881  -.056  -.245  -.230
X9    .365   .795   .099   .070
X10   .864   .067  -.100  -.165
X11   .873  -.098  -.256  -.206
X12   .908  -.031  -.135   .092
X13   .912   .035  -.078   .213
X14   .710  -.114   .560  -.234
X15   .646   .605   .103  -.028

Communalities (Extraction): X1 .732, X2 .426, X3 .882, X4 .873, X5 .886, X6 .822, X7 .851, X8 .893, X9 .780, X10 .788, X11 .879, X12 .851, X13 .885, X14 .885, X15 .795

Extraction Method: Principal Component Analysis. a. 4 components extracted.

The Component Matrix is the matrix of correlations λij between the variables and the factors; the Extraction column gives the communalities hi².
Computing the specificities ψi

$$\mathrm{Var}(X_i) = 1 = \sum_{j=1}^{m} \lambda_{ij}^2 + \psi_i$$

hi² = communality; Var(ei) = ψi = specificity.

Quality of the decomposition:

$$\sum_{i=1}^{p} \mathrm{Var}(X_i) = p = \sum_i \lambda_{i1}^2 + \dots + \sum_i \lambda_{im}^2 + \sum_i \psi_i$$

that is: variance explained by Y1 (= first eigenvalue) + … + variance explained by Ym (= m-th eigenvalue) + residual variance = total variance.
Application Kendall with m = 4

Component Matrix: the 15 × 4 loading matrix shown above.

Communality of variable i: $h_i^2 = \sum_{j=1}^{m} \lambda_{ij}^2$ (sum of the squared loadings along row i).

Variance explained by component j: $\sum_{i=1}^{p} \lambda_{ij}^2$ (sum of the squared loadings down column j).
6.4. Decomposition of R in orthogonal FA

Model: $X_i = \lambda_{i1} Y_1 + \dots + \lambda_{im} Y_m + e_i$

Decomposition formulas:

$$\mathrm{Var}(X_i) = 1 = \lambda_{i1}^2 + \dots + \lambda_{im}^2 + \psi_i$$

$$\mathrm{Cor}(X_i, X_k) = \mathrm{Cov}(X_i, X_k) = \lambda_{i1}\lambda_{k1} + \dots + \lambda_{im}\lambda_{km}$$
General formula

$$R = \begin{pmatrix} 1 & \mathrm{Cor}(X_1,X_2) & \cdots & \mathrm{Cor}(X_1,X_p)\\ \mathrm{Cor}(X_2,X_1) & 1 & \cdots & \mathrm{Cor}(X_2,X_p)\\ \vdots & & \ddots & \vdots\\ \mathrm{Cor}(X_p,X_1) & \cdots & & 1 \end{pmatrix} = \Lambda\,\Lambda' + \Psi$$

where $\Lambda = (\lambda_{ij})$ is the $p \times m$ matrix of saturations and $\Psi = \mathrm{diag}(\psi_1, \dots, \psi_p)$.
6.5. The objectives of orthogonal FA

Orthogonal factor analysis consists of looking for a decomposition of the correlation matrix R of the form:

R = Λ Λ′ + Ψ

The λij are the saturations and the ψi the specificities.

Usual extraction methods for the saturations:

- Principal component analysis
- Principal axis factoring
- Unweighted least squares
- Weighted least squares
- Maximum likelihood
Application Kendall

R = the observed 15 × 15 correlation matrix (the Correlation Matrix table shown in Section 1).
Component Matrix: the 15 × 4 loading matrix shown above, with m = 4.

$$\hat r_{ik} = \hat\lambda_{i1}\hat\lambda_{k1} + \dots + \hat\lambda_{i4}\hat\lambda_{k4}$$

= the correlation reproduced by the model.
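The reproduced correlations and residuals follow directly from the loading matrix. A sketch on the six-variable example (loadings taken from the slides, m = 2); for instance the reproduced X4-X7 correlation is .730 × .354 + .558 × .826 ≈ .719, against an observed .645:

```python
import numpy as np

# Reproduced correlations rhat_ik = sum_j lambda_ij * lambda_kj, and
# residuals r_ik - rhat_ik, for the six-variable example from the slides.
Lam = np.array([[0.740, -0.395],   # X1
                [0.633, -0.594],   # X9
                [0.777, -0.402],   # X15
                [0.730,  0.558],   # X4
                [0.354,  0.826],   # X7
                [0.790,  0.355]])  # X14
R = np.array([[1.000, 0.548, 0.586, 0.306, -0.107, 0.467],
              [0.548, 1.000, 0.693, 0.141, -0.156, 0.215],
              [0.586, 0.693, 1.000, 0.327, 0.003, 0.396],
              [0.306, 0.141, 0.327, 1.000, 0.645, 0.685],
              [-0.107, -0.156, 0.003, 0.645, 1.000, 0.448],
              [0.467, 0.215, 0.396, 0.685, 0.448, 1.000]])

R_hat = Lam @ Lam.T        # diagonal entries = reproduced communalities
residual = R - R_hat       # off-diagonal residuals, observed minus reproduced
```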
Reproduced Correlations

Reproduced Correlation (b on the diagonal):

      X1     X2     X3     X4     X5     X6     X7     X8     X9     X10    X11    X12    X13    X14    X15
X1   .732b  .189  -.012   .334   .046   .210  -.082   .294   .682   .408   .257   .324   .373   .482   .703
X2    .189  .426b  .261   .462   .437   .496   .404   .455   .193   .455   .459   .560   .593   .342   .338
X3   -.012  .261  .882b  -.025  -.014   .072  -.011   .036   .310   .050   .043   .221   .302  -.408   .205
X4    .334  .462  -.025  .873b   .323   .436   .740   .329   .164   .403   .333   .520   .588   .695   .338
X5    .046  .437  -.014   .323  .886b   .825   .366   .838  -.035   .726   .845   .760   .702   .485   .274
X6    .210  .496   .072   .436   .825  .822b   .386   .834   .143   .765   .834   .809   .782   .550   .429
X7   -.082  .404  -.011   .740   .366   .386  .851b   .223  -.233   .226   .250   .404   .443   .470  -.044
X8    .294  .455   .036   .329   .838   .834   .223  .893b   .236   .820   .885   .814   .772   .549   .517
X9    .682  .193   .310   .164  -.035   .143  -.233   .236  .780b   .347   .200   .299   .368   .207   .725
X10   .408  .455   .050   .403   .726   .765   .226   .820   .347  .788b   .807   .781   .763   .588   .593
X11   .257  .459   .043   .333   .845   .834   .250   .885   .200   .807  .879b   .811   .769   .536   .484
X12   .324  .560   .221   .520   .760   .809   .404   .814   .299   .781   .811  .851b   .857   .551   .551
X13   .373  .593   .302   .588   .702   .782   .443   .772   .368   .763   .769   .857  .885b   .550   .596
X14   .482  .342  -.408   .695   .485   .550   .470   .549   .207   .588   .536   .551   .550  .885b   .454
X15   .703  .338   .205   .338   .274   .429  -.044   .517   .725   .593   .484   .551   .596   .454  .795b

Residual (a) (diagonal empty):

      X1     X2     X3     X4     X5     X6     X7     X8     X9     X10    X11    X12    X13    X14    X15
X1          .050   .056  -.027   .046   .018  -.024  -.025  -.133  -.063   .027   .014  -.006  -.014  -.117
X2    .050        -.137  -.083  -.006  -.125  -.050   .023  -.052  -.114   .091  -.053  -.086  -.058   .046
X3    .056  -.137         .026   .015   .005  -.019   .010  -.045   .044   .001  -.023  -.011   .084  -.065
X4   -.027  -.083   .026        -.020   .046  -.094   .018  -.023  -.010   .013  -.017   .018  -.009  -.011
X5    .046  -.006   .015  -.020        -.017   .044  -.021   .050  -.021  -.003  -.039  -.030  -.002  -.024
X6    .018  -.125   .005   .046  -.017        -.030  -.008   .004  -.067  -.077   .074  -.004  -.023  -.012
X7   -.024  -.050  -.019  -.094   .044  -.030         .008   .077   .055  -.035  -.018  -.027  -.022   .047
X8   -.025   .023   .010   .018  -.021  -.008   .008        -.004  -.009  -.025  -.048  -.037   .000   .032
X9   -.133  -.052  -.045  -.023   .050   .004   .077  -.004        -.009  -.005   .000  -.019   .008  -.032
X10  -.063  -.114   .044  -.010  -.021  -.067   .055  -.009  -.009        -.027  -.066   .025   .025   .030
X11   .027   .091   .001   .013  -.003  -.077  -.035  -.025  -.005  -.027        -.027   .000   .011  -.049
X12   .014  -.053  -.023  -.017  -.039   .074  -.018  -.048   .000  -.066  -.027         .019  -.001  -.023
X13  -.006  -.086  -.011   .018  -.030  -.004  -.027  -.037  -.019   .025   .000   .019        -.010  -.023
X14  -.014  -.058   .084  -.009  -.002  -.023  -.022   .000   .008   .025   .011  -.001  -.010        -.058
X15  -.117   .046  -.065  -.011  -.024  -.012   .047   .032  -.032   .030  -.049  -.023  -.023  -.058

Extraction Method: Principal Component Analysis.
a. Residuals are computed between observed and reproduced correlations. There are 24 (22.0%) nonredundant residuals with absolute values greater than 0.05.
b. Reproduced communalities.

Reproduced correlations and residuals: $\hat R = \hat\Lambda \hat\Lambda'$, residual $= R - \hat R$.
6.6. Rotation methods

Decomposition formula (p = 3, m = 2):

$$R = \begin{pmatrix} 1 & \mathrm{Cor}(X_1,X_2) & \mathrm{Cor}(X_1,X_3)\\ \mathrm{Cor}(X_2,X_1) & 1 & \mathrm{Cor}(X_2,X_3)\\ \mathrm{Cor}(X_3,X_1) & \mathrm{Cor}(X_3,X_2) & 1 \end{pmatrix} = \Lambda\,T\,T'\,\Lambda' + \Psi, \qquad T\,T' = I_2$$

with $\Lambda$ the $3 \times 2$ matrix of saturations $\lambda_{ij}$ and $\Psi = \mathrm{diag}(\psi_1, \psi_2, \psi_3)$: inserting any rotation matrix T between Λ and Λ′ leaves the decomposition unchanged.
Rotation methods

Rotation matrix through an angle θ:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$

[Figure: a point A with coordinates (x, y) in the axes X, Y; after rotating the axes by the angle θ, x′ = projection of A onto the axis X′ and y′ = projection of A onto the axis Y′.]

Rotation matrix T: T′T = T T′ = I.
Indeterminacy of the decomposition

$$R = \Lambda\,\Lambda' + \Psi = (\Lambda T)(\Lambda T)' + \Psi \qquad \text{since } T\,T' = I$$

New matrix of saturations after rotation:

$$\Lambda^* = \Lambda\,T = \begin{pmatrix} \lambda_{11} & \lambda_{12}\\ \lambda_{21} & \lambda_{22}\\ \lambda_{31} & \lambda_{32} \end{pmatrix} \begin{pmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{pmatrix}$$
The VARIMAX and QUARTIMAX rotation methods

$\Lambda^* = \Lambda\,T$

Objectives:

(1) In each column of Λ*, the |λ*ij| are close to 0 or to 1 ==> well-typed (clearly defined) factors. This is the objective of VARIMAX.

(2) In each row of Λ* there is one |λ*ij| close to 1 and all the others are close to 0 ==> a typology of the variables. This is the objective of QUARTIMAX.
Example with blocks 2 and 3

Correlation Matrix

       X1     X9     X15    X4     X7     X14
X1   1.000   .548   .586   .306  -.107   .467
X9    .548  1.000   .693   .141  -.156   .215
X15   .586   .693  1.000   .327   .003   .396
X4    .306   .141   .327  1.000   .645   .685
X7   -.107  -.156   .003   .645  1.000   .448
X14   .467   .215   .396   .685   .448  1.000

Component Matrix (a)        Communalities (Extraction)

       1      2
X1   .740  -.395                .704
X9   .633  -.594                .753
X15  .777  -.402                .765
X4   .730   .558                .844
X7   .354   .826                .808
X14  .790   .355                .750

Extraction Method: Principal Component Analysis. a. 2 components extracted.

The communalities equal R²(Xj; Y1, Y2) (only in the PCA option).
Example with blocks 2 and 3

[Component Plot: loadings of x1, x9, x15, x4, x7, x14 on components 1 and 2; both axes run from -1.0 to 1.0.]
Using the Varimax rotation

Component Matrix (a) (unrotated):

       1      2
X1   .740  -.395
X9   .633  -.594
X15  .777  -.402
X4   .730   .558
X7   .354   .826
X14  .790   .355

Extraction Method: Principal Component Analysis. a. 2 components extracted.

Component Transformation Matrix T (with T T′ = I):

        1      2
1    .778   .628
2   -.628   .778

Rotated Component Matrix (a) = Λ T:

       1      2
X1   .824   .157
X9   .866  -.064
X15  .857   .175
X4   .218   .893
X7  -.243   .865
X14  .391   .773

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. a. Rotation converged in 3 iterations.
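The rotated loadings are just Λ* = Λ T. A sketch multiplying the unrotated loadings by the Component Transformation Matrix from the SPSS output above, recovering the rotated matrix (e.g. for X1: .740 × .778 + (−.395) × (−.628) ≈ .824):

```python
import numpy as np

# Rotated loadings Lambda* = Lambda @ T; unrotated loadings and the
# Component Transformation Matrix T taken from the SPSS output above.
Lam = np.array([[0.740, -0.395],   # X1
                [0.633, -0.594],   # X9
                [0.777, -0.402],   # X15
                [0.730,  0.558],   # X4
                [0.354,  0.826],   # X7
                [0.790,  0.355]])  # X14
T = np.array([[0.778, 0.628],
              [-0.628, 0.778]])

Lam_rot = Lam @ T      # first column matches .824, .866, .857, .218, -.243, .391
# T is (up to rounding of the printed output) an orthogonal rotation: T T' = I
```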
Using the Varimax rotation

[Component Plot in Rotated Space: after rotation, x1, x9, x15 lie along component 1 and x4, x7, x14 along component 2; both axes run from -1.0 to 1.0.]
Complete Kendall example: Application (PCA + Varimax)

Rotated Component Matrix (a)

        1      2      3      4
X1    .116   .830   .108  -.136
X2    .436   .152   .401   .228
X3    .061   .128   .007   .928
X4    .216   .245   .872  -.082
X5    .918  -.104   .166  -.062
X6    .863   .099   .259   .004
X7    .216  -.242   .863   .001
X8    .917   .206   .087  -.051
X9    .083   .852  -.052   .212
X10   .798   .352   .161  -.049
X11   .917   .162   .106  -.038
X12   .806   .257   .338   .146
X13   .741   .329   .419   .227
X14   .437   .364   .538  -.522
X15   .381   .797   .077   .085

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.
Application (PCA + Varimax): improved presentation

Correlations below 0.4 in absolute value are not shown.

Rotated Component Matrix (a)

        1      2      3      4
X5    .918
X11   .917
X8    .917
X6    .863
X12   .806
X10   .798
X13   .741          .419
X2    .436          .401
X9           .852
X1           .830
X15          .797
X4                  .872
X7                  .863
X14   .437          .538  -.522
X3                         .928

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.
6.7. Estimating the common factors (orthogonal FA)

We look for a variable (or score)

$$\hat Y_j = a_{j1} X_1 + \dots + a_{jp} X_p$$

as close as possible to Yj.

The regression of Yj on X1, …, Xp gives:

$$\hat a_j = (X'X)^{-1} X' \hat Y_j = \hat R^{-1} \hat\lambda_j$$

where $\hat\lambda_j$ is the j-th column of $\hat\Lambda$.
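Since the Xi are standardized, X′X/n is the correlation matrix, so the score coefficients reduce to R̂⁻¹λ̂j, i.e. one linear solve per factor. A hedged sketch computing the score-coefficient matrix A = R⁻¹Λ on the six-variable example (the SPSS Component Score Coefficient Matrix on the next slide is the analogous computation for the full 15-variable analysis):

```python
import numpy as np

# Regression estimate of the factor-score coefficients:
# a_j = R^{-1} lambda_j, i.e. A = R^{-1} Lambda (one column per factor).
# Six-variable example; R and rotated loadings taken from the slides.
R = np.array([[1.000, 0.548, 0.586, 0.306, -0.107, 0.467],
              [0.548, 1.000, 0.693, 0.141, -0.156, 0.215],
              [0.586, 0.693, 1.000, 0.327, 0.003, 0.396],
              [0.306, 0.141, 0.327, 1.000, 0.645, 0.685],
              [-0.107, -0.156, 0.003, 0.645, 1.000, 0.448],
              [0.467, 0.215, 0.396, 0.685, 0.448, 1.000]])
Lam = np.array([[0.824, 0.157],    # rotated (Varimax) loadings
                [0.866, -0.064],
                [0.857, 0.175],
                [0.218, 0.893],
                [-0.243, 0.865],
                [0.391, 0.773]])

A = np.linalg.solve(R, Lam)   # score coefficients, one column per factor
# Estimated scores would then be Yhat = Z @ A, with Z the standardized data.
```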
Application (PCA + Varimax)

Component Score Coefficient Matrix

        1      2      3      4
X1   -.097   .372   .013  -.141
X2    .016  -.009   .167   .189
X3   -.020   .002   .064   .697
X4   -.158   .070   .478  -.008
X5    .249  -.171  -.101  -.048
X6    .184  -.075  -.031   .001
X7   -.093  -.158   .490   .079
X8    .224  -.026  -.155  -.062
X9   -.083   .372  -.050   .110
X10   .154   .055  -.086  -.060
X11   .226  -.048  -.141  -.048
X12   .126  -.004   .039   .108
X13   .078   .032   .109   .174
X14  -.026   .126   .186  -.381
X15  -.013   .311  -.045   .020

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. Component Scores.

The coefficients are applied to the standardized (centred-reduced) variables.
Estimating the factors

Case Summaries (a)

       Factor 1   Factor 2   Factor 3   Factor 4
1       .86178     .15784    -.74318   -2.20281
2      1.10076     .69387     .26214    -.96989
3       .91036     .35676    -.19108   -1.82176
4      -.43999     .25308     .23857     .60429
5      -.83430     .88855    1.06611     .67769
6      -.01945     .51131     .30657     .12490
7       .62943    1.48060     .20267     .25407
8       .86439    1.45197     .23711     .45751
9       .36865    1.39632     .33369    -.11445
10     2.01276    -.48584   -1.61259    1.45302
11     1.95937    -.53124   -2.65795    1.53271
12     1.57364    -.81012    -.93716    1.55880
13     -.84271     .42179    1.28155     .55237
14     -.85894     .52630     .96316     .69400
15     -.86601     .11793     .85848    1.04007
16      .66933     .72651    -.12675    -.16103
17      .27014     .64031    -.08353    -.40225
18      .20549    -.42199    -.65175     .10155
19      .12961    -.22923    -.89729     .28080
20      .48935    -.33081     .88878    -.01435

a. Limited to first 20 cases.
7. Bartlett's sphericity test

Test: H0 : R = Identity (no correlation among the X's)

H0 is rejected, at risk α of being wrong, if

$$\chi^2 = -\left(n - 1 - \frac{2p+5}{6}\right)\ln|R| \;\approx\; \left(n - 1 - \frac{2p+5}{6}\right)\sum_{i<k} r_{ik}^2$$

exceeds the quantile $\chi^2_{1-\alpha}$ of the chi-square distribution with $\frac{p(p-1)}{2}$ degrees of freedom.
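A sketch of the test statistic (the six-variable block with n = 48 is used here for illustration; the slides apply the test to all 15 variables, where SPSS reports chi-square 648.400 with 105 degrees of freedom):

```python
import numpy as np

# Bartlett's sphericity test: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|,
# with p(p-1)/2 degrees of freedom.
def bartlett_sphericity(R, n):
    R = np.asarray(R)
    p = R.shape[0]
    _, logdet = np.linalg.slogdet(R)          # ln|R| (stable log-determinant)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * logdet
    df = p * (p - 1) // 2
    return chi2, df

# Six-variable illustration (blocks 2 and 3), n = 48 candidates
R = np.array([[1.000, 0.548, 0.586, 0.306, -0.107, 0.467],
              [0.548, 1.000, 0.693, 0.141, -0.156, 0.215],
              [0.586, 0.693, 1.000, 0.327, 0.003, 0.396],
              [0.306, 0.141, 0.327, 1.000, 0.645, 0.685],
              [-0.107, -0.156, 0.003, 0.645, 1.000, 0.448],
              [0.467, 0.215, 0.396, 0.685, 0.448, 1.000]])
chi2, df = bartlett_sphericity(R, n=48)
```

With p = 15 the degrees of freedom come out to 105, matching the SPSS output on the next slide; for an identity matrix the statistic is exactly 0.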
Application

KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy          .783
Bartlett's Test of Sphericity    Approx. Chi-Square   648.400
                                 df                       105
                                 Sig.                    .000
8. Kaiser-Meyer-Olkin Measure of Sampling Adequacy

Partial correlation. For the regressions

$$X_i = \beta_{i0} + \beta_{i1}Y_1 + \dots + \beta_{im}Y_m + \varepsilon_i$$
$$X_k = \beta_{k0} + \beta_{k1}Y_1 + \dots + \beta_{km}Y_m + \varepsilon_k$$

$$\Rightarrow\; \mathrm{Cor}(X_i, X_k \mid Y_1, \dots, Y_m) = \mathrm{Cor}(\varepsilon_i, \varepsilon_k)$$

For a factor model:

$$X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$$

$$\Rightarrow\; \mathrm{Cor}(X_i, X_k \mid Y_1, \dots, Y_m) = \mathrm{Cor}(e_i, e_k) = 0,$$

since the specific factors are uncorrelated with one another.

"Anti-image correlation" $-a_{ik}$: if the factor model is true, the $a_{ik} = \mathrm{cor}(X_i, X_k \mid \text{other } X)$ are small in absolute value.
![Page 51: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/51.jpg)
51
Application to the Kendall data

Anti-image Correlation Matrix

| | X1 | X2 | X3 | X4 | X5 | X6 | X7 | X8 | X9 | X10 | X11 | X12 | X13 | X14 | X15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| X1 | .787a | -.151 | -.131 | .017 | -.041 | -.034 | .310 | .174 | -.136 | .094 | .034 | .020 | -.092 | -.415 | -.237 |
| X2 | -.151 | .768a | -.016 | -.118 | -.004 | .270 | -.304 | -.189 | .140 | .320 | -.361 | -.147 | .031 | .194 | -.274 |
| X3 | -.131 | -.016 | .354a | -.200 | -.006 | .172 | .040 | -.124 | -.251 | -.176 | .146 | -.197 | -.180 | .540 | .252 |
| X4 | .017 | -.118 | -.200 | .643a | .351 | -.436 | -.460 | .158 | .088 | .305 | -.133 | .415 | -.440 | -.566 | -.175 |
| X5 | -.041 | -.004 | -.006 | .351 | .822a | -.449 | -.493 | -.130 | -.023 | -.048 | -.502 | .237 | -.023 | -.025 | .149 |
| X6 | -.034 | .270 | .172 | -.436 | -.449 | .775a | .264 | -.431 | .095 | -.020 | .282 | -.695 | .083 | .239 | .082 |
| X7 | .310 | -.304 | .040 | -.460 | -.493 | .264 | .583a | .066 | .064 | -.188 | .509 | -.271 | -.040 | -.110 | .145 |
| X8 | .174 | -.189 | -.124 | .158 | -.130 | -.431 | .066 | .892a | .070 | -.160 | -.274 | .207 | .032 | -.208 | -.316 |
| X9 | -.136 | .140 | -.251 | .088 | -.023 | .095 | .064 | .070 | .765a | .111 | -.017 | -.054 | -.064 | -.150 | -.510 |
| X10 | .094 | .320 | -.176 | .305 | -.048 | -.020 | -.188 | -.160 | .111 | .843a | -.232 | .228 | -.399 | -.355 | -.426 |
| X11 | .034 | -.361 | .146 | -.133 | -.502 | .282 | .509 | -.274 | -.017 | -.232 | .820a | -.286 | -.181 | -.063 | .259 |
| X12 | .020 | -.147 | -.197 | .415 | .237 | -.695 | -.271 | .207 | -.054 | .228 | -.286 | .797a | -.444 | -.294 | -.168 |
| X13 | -.092 | .031 | -.180 | -.440 | -.023 | .083 | -.040 | .032 | -.064 | -.399 | -.181 | -.444 | .882a | .272 | .003 |
| X14 | -.415 | .194 | .540 | -.566 | -.025 | .239 | -.110 | -.208 | -.150 | -.355 | -.063 | -.294 | .272 | .721a | .227 |
| X15 | -.237 | -.274 | .252 | -.175 | .149 | .082 | .145 | -.316 | -.510 | -.426 | .259 | -.168 | .003 | .227 | .755a |

a. Measures of Sampling Adequacy (MSA)
![Page 52: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/52.jpg)
52
Kaiser-Meyer-Olkin Measure of Sampling Adequacy

Comparison between the correlations $r_{ik}$ and the partial correlations $a_{ik}$:

$$\mathrm{KMO} = \frac{\displaystyle\sum_{i \ne k} r_{ik}^2}{\displaystyle\sum_{i \ne k} r_{ik}^2 + \sum_{i \ne k} a_{ik}^2}$$

| KMO | Expected quality of the factor analysis |
|---|---|
| .90 | Marvelous |
| .80 | Meritorious |
| .70 | Middling |
| .60 | Mediocre |
| .50 | Miserable |
| < .50 | Unacceptable |
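The KMO index combines the raw correlations with the anti-image partial correlations, which can be read off the inverse of R. A minimal sketch on a toy correlation matrix (hypothetical values, not the Kendall matrix):

```python
import numpy as np

def kmo(R):
    """Kaiser-Meyer-Olkin measure computed from a correlation matrix.

    The anti-image (partial) correlations are obtained from the inverse
    of R: a_ik = -inv_ik / sqrt(inv_ii * inv_kk).
    """
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    A = -inv / d                       # partial correlations
    np.fill_diagonal(A, 0.0)
    R_off = R - np.eye(R.shape[0])     # drop the unit diagonal
    r2 = (R_off ** 2).sum()
    a2 = (A ** 2).sum()
    return r2 / (r2 + a2)

# Hypothetical 3-variable correlation matrix.
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
```

With only two variables there is nothing to partial out, the partial correlation equals the raw one, and the KMO is exactly .50; the index is only informative with several variables.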
![Page 53: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/53.jpg)
53
9. CONCLUSION (!!!!)

… we find ourselves in sympathy with the growing group of statisticians who doubt if FA is worth using except in a few particular types of application. For example Hills (1977) has said that FA is not « worth the time necessary to understand it and carry it out ». He goes on to say that he regards FA as an « elaborate way of doing something which can only be crude, namely picking out clusters of inter-related variables, and then finding some sort of average of the variables in a cluster in spite of the fact that the variables may be measured on different scales. »
C. Chatfield & A.J. Collins, 1980
![Page 54: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/54.jpg)
54
10. Other methods for extracting the loadings

- Principal factors method
- Least squares method
- Weighted least squares method
- Maximum likelihood
![Page 55: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/55.jpg)
55
10.1 The matrix of loadings

Model: $X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$

The $\lambda_{ij}$ are the saturations (or loadings):

$$\Lambda = \begin{pmatrix} \lambda_{11} & \cdots & \lambda_{1m} \\ \vdots & & \vdots \\ \lambda_{i1} & \cdots & \lambda_{im} \\ \vdots & & \vdots \\ \lambda_{p1} & \cdots & \lambda_{pm} \end{pmatrix}$$

The matrix of loadings in SPSS:

- $Y_j$ = standardized principal components: Component Matrix
- $Y_j$ orthogonal: Factor Matrix
- $Y_j$ correlated: Pattern Matrix

If the $Y_j$ are orthogonal: $\lambda_{ij} = \mathrm{Cor}(X_i, Y_j)$.
If the $Y_j$ are correlated, the $\mathrm{Cor}(X_i, Y_j)$ are given in the "Structure Matrix".
![Page 56: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/56.jpg)
56
10.2 Communality and specificity in orthogonal FA

Model: $X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$

Variance decomposition:

$$\mathrm{Var}(X_i) = 1 = \underbrace{\sum_{j=1}^m \lambda_{ij}^2}_{h_i^2 \,=\, \text{communality}} + \underbrace{\mathrm{Var}(e_i)}_{\text{specificity}}$$

Initial and final communality:

$$h_i^2 = \sum_{j=1}^m \mathrm{cor}^2(X_i, Y_j) = R^2(X_i; Y_1, \dots, Y_m) = \text{final communality (extraction)}$$

$$R^2(X_i; \text{other } X) = \text{initial communality (for extraction options other than PCA)}$$
![Page 57: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/57.jpg)
57
10.3 Quality of the decomposition in orthogonal FA

Model: $X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$

From the variance decomposition

$$\mathrm{Var}(X_i) = 1 = \lambda_{i1}^2 + \dots + \lambda_{im}^2 + \psi_i$$

one deduces:

$$\underbrace{\sum_{i=1}^p \mathrm{Var}(X_i)}_{\text{total variance}} = p = \underbrace{\sum_i \lambda_{i1}^2}_{\text{variance explained by } Y_1} + \dots + \underbrace{\sum_i \lambda_{im}^2}_{\text{variance explained by } Y_m} + \underbrace{\sum_i \psi_i}_{\text{residual variance}}$$
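The decomposition above is easy to check numerically: with orthogonal factors, the sums of squared loadings per factor plus the specificities add back to the total variance p. A sketch with hypothetical loadings for p = 4, m = 2:

```python
import numpy as np

# Hypothetical orthogonal-factor loadings (4 variables, 2 factors).
L = np.array([[0.8, 0.3],
              [0.7, 0.4],
              [0.2, 0.9],
              [0.1, 0.8]])

h2 = (L ** 2).sum(axis=1)         # communalities h_i^2
psi = 1.0 - h2                    # specificities Var(e_i)
explained = (L ** 2).sum(axis=0)  # variance explained by each factor
p = L.shape[0]                    # total variance of p standardized variables
```

`explained.sum() + psi.sum()` recovers p, mirroring the slide's identity.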
![Page 58: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/58.jpg)
58
10.4 Principal factors method

Model: $X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$

Use of the decomposition formulas:

$$\mathrm{Var}(X_i) - \psi_i = h_i^2 = \lambda_{i1}^2 + \dots + \lambda_{im}^2$$

$$\mathrm{Cor}(X_i, X_k) = \lambda_{i1}\lambda_{k1} + \dots + \lambda_{im}\lambda_{km}$$
![Page 59: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/59.jpg)
59
Principal factors method: example p = 3 and m = 2

The reduced correlation matrix (communalities on the diagonal) factors as $\Lambda\Lambda'$:

$$R - \Psi = \begin{pmatrix} h_1^2 & \mathrm{Cor}(X_1,X_2) & \mathrm{Cor}(X_1,X_3) \\ \mathrm{Cor}(X_2,X_1) & h_2^2 & \mathrm{Cor}(X_2,X_3) \\ \mathrm{Cor}(X_3,X_1) & \mathrm{Cor}(X_3,X_2) & h_3^2 \end{pmatrix} = \begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \\ \lambda_{31} & \lambda_{32} \end{pmatrix} \begin{pmatrix} \lambda_{11} & \lambda_{21} & \lambda_{31} \\ \lambda_{12} & \lambda_{22} & \lambda_{32} \end{pmatrix}$$

Iterative algorithm: start from the initial communalities, estimate the loadings, then recompute the communalities from the loadings. Iterate until the communalities converge.
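The iterative algorithm just described can be sketched as follows. The starting values (squared multiple correlations) and the stopping rule mirror common practice; this is a simplified sketch, not SPSS's exact implementation:

```python
import numpy as np

def principal_axis(R, m, n_iter=500, tol=1e-8):
    """Iterated principal-factor extraction (sketch).

    Alternate between (loadings from the reduced correlation matrix)
    and (communalities from the loadings) until convergence.
    """
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))  # SMC initial communalities
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)                 # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        order = np.argsort(vals)[::-1][:m]       # m largest eigenvalues
        lam = vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))
        h2_new = (lam ** 2).sum(axis=1)          # updated communalities
        if np.max(np.abs(h2_new - h2)) < tol:
            h2 = h2_new
            break
        h2 = h2_new
    return lam, h2

# Exact one-factor toy matrix (hypothetical): the algorithm should
# recover the communalities .81, .64, .49.
L0 = np.array([[0.9], [0.8], [0.7]])
R = L0 @ L0.T
np.fill_diagonal(R, 1.0)
lam, h2 = principal_axis(R, m=1)
```

On this exact one-factor matrix the fixed point of the iteration is the true solution, so the recovered communalities match the generating loadings squared.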
![Page 60: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/60.jpg)
60
Application to the Kendall data

Communalities

| | Initial | Extraction |
|---|---|---|
| X1 | .573 | .551 |
| X2 | .588 | .314 |
| X3 | .522 | .478 |
| X4 | .813 | .820 |
| X5 | .878 | .876 |
| X6 | .910 | .785 |
| X7 | .741 | .698 |
| X8 | .877 | .881 |
| X9 | .603 | .669 |
| X10 | .848 | .745 |
| X11 | .900 | .863 |
| X12 | .912 | .843 |
| X13 | .894 | .910 |
| X14 | .831 | .989 |
| X15 | .785 | .756 |

Extraction Method: Principal Axis Factoring.
Factor Matrix (a)

| | Factor 1 | Factor 2 | Factor 3 | Factor 4 |
|---|---|---|---|---|
| X1 | .422 | .530 | .274 | -.130 |
| X2 | .536 | -.029 | -.001 | .160 |
| X3 | .101 | .275 | -.289 | .556 |
| X4 | .609 | -.175 | .585 | .277 |
| X5 | .798 | -.354 | -.312 | -.130 |
| X6 | .852 | -.172 | -.171 | -.004 |
| X7 | .422 | -.531 | .353 | .336 |
| X8 | .879 | -.042 | -.265 | -.189 |
| X9 | .351 | .730 | .096 | .062 |
| X10 | .846 | .077 | -.098 | -.117 |
| X11 | .868 | -.086 | -.272 | -.167 |
| X12 | .900 | -.020 | -.114 | .141 |
| X13 | .912 | .046 | -.044 | .270 |
| X14 | .719 | -.116 | .568 | -.369 |
| X15 | .631 | .592 | .088 | -.017 |

Extraction Method: Principal Axis Factoring.
a. 4 factors extracted. 21 iterations required.
![Page 61: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/61.jpg)
61
Application to the Kendall data

Total Variance Explained

| Factor | Total | % of Variance | Cumulative % | Total | % of Variance | Cumulative % | Total | % of Variance | Cumulative % |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 7.499 | 49.996 | 49.996 | 7.310 | 48.737 | 48.737 | 6.771 | 45.141 | 45.141 |
| 2 | 2.058 | 13.717 | 63.713 | 1.740 | 11.599 | 60.336 | 1.984 | 13.224 | 58.365 |
| 3 | 1.462 | 9.750 | 73.462 | 1.260 | 8.401 | 68.737 | 1.410 | 9.401 | 67.765 |
| 4 | 1.207 | 8.049 | 81.511 | .868 | 5.790 | 74.527 | 1.014 | 6.761 | 74.527 |
| 5 | .739 | 4.928 | 86.439 | | | | | | |
| 6 | .493 | 3.285 | 89.724 | | | | | | |
| 7 | .351 | 2.342 | 92.066 | | | | | | |
| 8 | .310 | 2.066 | 94.132 | | | | | | |
| 9 | .256 | 1.706 | 95.838 | | | | | | |
| 10 | .198 | 1.322 | 97.159 | | | | | | |
| 11 | .149 | .995 | 98.154 | | | | | | |
| 12 | .093 | .620 | 98.775 | | | | | | |
| 13 | .085 | .564 | 99.338 | | | | | | |
| 14 | .064 | .429 | 99.768 | | | | | | |
| 15 | .035 | .232 | 100.000 | | | | | | |

Columns 2-4: Initial Eigenvalues (PCA). Columns 5-7: Extraction Sums of Squared Loadings (principal factors). Columns 8-10: Rotation Sums of Squared Loadings (principal factors + varimax rotation).

Extraction Method: Principal Axis Factoring.
![Page 62: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/62.jpg)
62
Application to the Kendall data

Rotated Factor Matrix (a) (15 variables × 4 factors; loadings below the display threshold are suppressed, and the column positions of the printed loadings were lost in extraction)

Variables in output order, with their surviving loadings: X8 (.922), X11 (.919), X5 (.900), X12 (.884), X6 (.883), X13 (.861), X10 (.826), then X2, X9, X15, X1, X4, X7, X3, X14 with loadings .508, .769, .491, .714, .685, .443, .762, .696, .672, .575, .419, -.627.

Extraction Method: Principal Axis Factoring. Rotation Method: Quartimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.
Factor Transformation Matrix

| Factor | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | .954 | .237 | .183 | -.037 |
| 2 | -.150 | .897 | -.326 | .257 |
| 3 | -.257 | .367 | .772 | -.451 |
| 4 | -.049 | -.066 | .513 | .854 |

Extraction Method: Principal Axis Factoring. Rotation Method: Quartimax with Kaiser Normalization.
![Page 63: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/63.jpg)
63
10.5 Least squares method

Minimize

$$\sum_{i<k} \left(r_{ik} - \hat r_{ik}\right)^2$$

where

$$\hat r_{ik} = \hat\lambda_{i1}\hat\lambda_{k1} + \dots + \hat\lambda_{im}\hat\lambda_{km}$$

is the correlation reproduced by the model.
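The least-squares criterion is simple to write down; the sketch below only evaluates the objective (a full extraction would minimize it over the loadings). The one-factor matrices are hypothetical:

```python
import numpy as np

def uls_objective(L, R):
    """Unweighted least-squares criterion: sum over i < k of
    (r_ik - rhat_ik)^2, with rhat = L @ L.T (off-diagonal only)."""
    resid = R - L @ L.T
    iu = np.triu_indices_from(R, k=1)   # strict upper triangle: i < k
    return (resid[iu] ** 2).sum()

# Hypothetical exact one-factor model: the criterion is zero at the truth.
L0 = np.array([[0.9], [0.8], [0.7]])
R = L0 @ L0.T
np.fill_diagonal(R, 1.0)
print(uls_objective(L0, R))  # 0.0
```

Any perturbation of the loadings makes the reproduced correlations differ from R and the criterion strictly positive.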
![Page 64: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/64.jpg)
64
Application to the Kendall data

Communalities (a)

| | Initial | Extraction |
|---|---|---|
| X1 | .573 | .551 |
| X2 | .588 | .314 |
| X3 | .522 | .471 |
| X4 | .813 | .818 |
| X5 | .878 | .876 |
| X6 | .910 | .785 |
| X7 | .741 | .699 |
| X8 | .877 | .881 |
| X9 | .603 | .669 |
| X10 | .848 | .745 |
| X11 | .900 | .863 |
| X12 | .912 | .843 |
| X13 | .894 | .910 |
| X14 | .831 | .997 |
| X15 | .785 | .756 |

Extraction Method: Unweighted Least Squares.
a. One or more communality estimates greater than 1 were encountered during iterations. The resulting solution should be interpreted with caution.
![Page 65: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/65.jpg)
65
10.6 Generalized least squares method

Model: $X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$, with $\mathrm{Var}(e_i) = \psi_i$

Minimize

$$\sum_{i<k} \frac{\left(r_{ik} - \hat r_{ik}\right)^2}{\hat\psi_i \hat\psi_k}$$

where

$$\hat r_{ik} = \hat\lambda_{i1}\hat\lambda_{k1} + \dots + \hat\lambda_{im}\hat\lambda_{km}$$

is the correlation reproduced by the model, and

$$\frac{r_{ik} - \hat r_{ik}}{\sqrt{\hat\psi_i}\sqrt{\hat\psi_k}} \approx \mathrm{Cor}(X_i, X_k \mid Y_1, \dots, Y_m) = \mathrm{Cor}(e_i, e_k)$$

computed on the data.
![Page 66: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/66.jpg)
66
10.7 Maximum likelihood method

Model: $X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$, with $\mathrm{Var}(e_i) = \psi_i$

Hypothesis: the variables $X_j$ follow a multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$.

Notation: $S$ = covariance matrix observed on a sample of size $n$; $C = \hat\Lambda\hat\Lambda' + \hat\Psi$ = covariance matrix reconstructed by the model.

Maximization: one seeks the estimated loadings and specificities that maximize the log-likelihood of the data:

$$L(S, C) = \text{cste} + \tfrac{1}{2}(n - p - 2)\ln|S| - \tfrac{1}{2}(n - 1)\ln|C| - \tfrac{1}{2}(n - 1)\,\mathrm{Tr}\!\left(C^{-1}S\right)$$
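The log-likelihood on this slide can be evaluated directly with numpy. The one-factor `lam` and `psi` below are hypothetical values, chosen so that C = ΛΛ' + Ψ reproduces S exactly:

```python
import numpy as np

def log_likelihood(S, C, n):
    """Log-likelihood of the slide, dropping the additive constant:
    (n-p-2)/2 ln|S| - (n-1)/2 ln|C| - (n-1)/2 Tr(C^{-1} S)."""
    p = S.shape[0]
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_C = np.linalg.slogdet(C)
    trace = np.trace(np.linalg.solve(C, S))
    return 0.5 * (n - p - 2) * logdet_S - 0.5 * (n - 1) * (logdet_C + trace)

# Hypothetical one-factor fit reproducing S exactly.
lam = np.array([[0.9], [0.8], [0.7]])
psi = np.diag([0.19, 0.36, 0.51])
C = lam @ lam.T + psi
S = np.array([[1.00, 0.72, 0.63],
              [0.72, 1.00, 0.56],
              [0.63, 0.56, 1.00]])
```

Since the unconstrained maximum over C is reached at C = S, this exactly reproducing model attains a higher likelihood than, say, the identity matrix.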
![Page 67: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/67.jpg)
67
10.8 Validity test of the m-factor model

The m-factor model is rejected at risk α of being wrong if

$$\chi^2 = \left(n - 1 - \frac{2p+5}{6} - \frac{2}{3}m\right)\ln\frac{|C|}{|S|} > \chi^2_{1-\alpha}(d)$$

where $C$ is computed by maximum likelihood and $d = \tfrac{1}{2}\left[(p-m)^2 - p - m\right]$.

Remark:

$$\chi^2 \approx \left(n - 1 - \frac{2p+5}{6} - \frac{2}{3}m\right)\sum_{i<k} \frac{(s_{ik} - c_{ik})^2}{\hat\psi_i \hat\psi_k}$$

where $\dfrac{s_{ik} - c_{ik}}{\sqrt{\hat\psi_i}\sqrt{\hat\psi_k}}$ is an estimate of $\mathrm{cor}(X_i, X_k \mid Y_1, \dots, Y_m)$.
![Page 68: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/68.jpg)
68
Application to the Kendall data

Goodness-of-fit Test

| m | Chi-Square | df | Sig. |
|---|---|---|---|
| 4 | 86.610 | 51 | .001 |
| 5 | 62.425 | 40 | .013 |
| 6 | 42.868 | 30 | .060 |

This test is known to reject the model too easily.
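The statistic and its degrees of freedom are easy to reproduce; the df values below match the SPSS outputs for the Kendall data (51, 40, 30 for m = 4, 5, 6 with p = 15). A minimal sketch:

```python
import numpy as np

def factor_model_test(S, C, n, m):
    """Chi-square statistic and degrees of freedom of the m-factor
    validity test: coef * ln(|C|/|S|), with d = 1/2[(p-m)^2 - p - m]."""
    p = S.shape[0]
    coef = n - 1 - (2 * p + 5) / 6 - (2.0 / 3.0) * m
    _, ld_C = np.linalg.slogdet(C)
    _, ld_S = np.linalg.slogdet(S)
    df = ((p - m) ** 2 - p - m) // 2
    return coef * (ld_C - ld_S), df

# Degrees of freedom for p = 15 (identity matrices used only as dummies).
for m in (4, 5, 6):
    print(m, factor_model_test(np.eye(15), np.eye(15), 48, m)[1])
# prints: 4 51, 5 40, 6 30
```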
![Page 69: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/69.jpg)
69
11. Oblique factor analysis

11.1 The data

p random variables $X_1, \dots, X_p$, in general standardized (centred and scaled).

11.2 The model

$$X_1 = \lambda_{11}Y_1 + \dots + \lambda_{1m}Y_m + e_1$$
$$\vdots$$
$$X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i$$
$$\vdots$$
$$X_p = \lambda_{p1}Y_1 + \dots + \lambda_{pm}Y_m + e_p$$

where:
- the common factors $Y_j$ may be correlated with one another;
- the specific factors $e_1, \dots, e_p$ are uncorrelated with one another and with the common factors.
![Page 70: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/70.jpg)
70
The model

$$X_i = \lambda_{i1}Y_1 + \dots + \lambda_{im}Y_m + e_i, \qquad i = 1, \dots, p$$

can also be written

$$\underbrace{\begin{pmatrix} X_1 \\ \vdots \\ X_p \end{pmatrix}}_{X} = \underbrace{\begin{pmatrix} \lambda_{11} & \cdots & \lambda_{1m} \\ \vdots & & \vdots \\ \lambda_{p1} & \cdots & \lambda_{pm} \end{pmatrix}}_{\Lambda} \underbrace{\begin{pmatrix} Y_1 \\ \vdots \\ Y_m \end{pmatrix}}_{Y} + \underbrace{\begin{pmatrix} e_1 \\ \vdots \\ e_p \end{pmatrix}}_{e}$$

that is, $X = \Lambda Y + e$.
![Page 71: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/71.jpg)
71
The oblique factor analysis model:

$$X = \Lambda Y + e$$

$$\Sigma = E(XX') = E\left[(\Lambda Y + e)(\Lambda Y + e)'\right] = \Lambda\,E(YY')\,\Lambda' + E(ee') = \Lambda\Phi\Lambda' + \Psi$$

where $\Phi = E(YY')$ is the matrix of correlations between the common factors and $\Psi = E(ee')$ is the covariance matrix of the specific factors.
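The decomposition Σ = ΛΦΛ' + Ψ can be checked numerically; all three matrices below are hypothetical:

```python
import numpy as np

# Hypothetical oblique two-factor model for p = 3 variables.
L = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.2, 0.9]])
Phi = np.array([[1.0, 0.4],
                [0.4, 1.0]])        # correlations between the common factors
Psi = np.diag([0.30, 0.40, 0.15])   # specific-factor covariance (diagonal)

Sigma = L @ Phi @ L.T + Psi         # implied covariance of the X's
```

Because Φ is no longer the identity, the cross-products of loadings from different factors now contribute to the covariances, unlike in the orthogonal model.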
![Page 72: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/72.jpg)
72
11.3 Oblique rotation methods

Decomposition formula (p = 3, m = 2):

$$R = \begin{pmatrix} 1 & \mathrm{Cor}(X_1,X_2) & \mathrm{Cor}(X_1,X_3) \\ \mathrm{Cor}(X_2,X_1) & 1 & \mathrm{Cor}(X_2,X_3) \\ \mathrm{Cor}(X_3,X_1) & \mathrm{Cor}(X_3,X_2) & 1 \end{pmatrix} = \begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \\ \lambda_{31} & \lambda_{32} \end{pmatrix} \Phi \begin{pmatrix} \lambda_{11} & \lambda_{21} & \lambda_{31} \\ \lambda_{12} & \lambda_{22} & \lambda_{32} \end{pmatrix} + \begin{pmatrix} \psi_1 & 0 & 0 \\ 0 & \psi_2 & 0 \\ 0 & 0 & \psi_3 \end{pmatrix}$$

with $\Lambda\Phi\Lambda' = (\Lambda_0 T)\,(T'T)^{-1}\,(\Lambda_0 T)'$ for an orthogonal solution $\Lambda_0$, where $\Phi = (T'T)^{-1}$ is a correlation matrix.
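A useful sanity check on this decomposition: transforming the loadings of an orthogonal solution by any invertible T, while taking Φ = (T'T)⁻¹, leaves the reproduced matrix ΛΦΛ' unchanged (the actual rotation methods further normalize T so that Φ has a unit diagonal). A sketch with hypothetical Λ and T:

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(5, 2))      # loadings of an orthogonal solution (hypothetical)
T = np.array([[1.0, 0.3],
              [0.2, 1.0]])       # hypothetical invertible transformation

L_oblique = L @ T                # obliquely transformed loadings
Phi = np.linalg.inv(T.T @ T)     # implied factor correlation structure
```

Since T (T'T)⁻¹ T' = I for any invertible square T, `L_oblique @ Phi @ L_oblique.T` equals `L @ L.T` exactly.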
![Page 73: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/73.jpg)
73
SPSS options

• Direct Oblimin Method: a method for oblique (nonorthogonal) rotation. When delta equals 0 (the default), solutions are most oblique. As delta becomes more negative, the factors become less oblique. To override the default delta of 0, enter a number less than or equal to -0.8.

• Promax Rotation: an oblique rotation, which allows factors to be correlated. It can be calculated more quickly than a direct oblimin rotation, so it is useful for large datasets.
![Page 74: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/74.jpg)
74
Application to the Kendall data: matrix of correlations between the factors

Component Correlation Matrix

| Component | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 1.000 | .325 | .452 | .046 |
| 2 | .325 | 1.000 | .156 | .074 |
| 3 | .452 | .156 | 1.000 | -.021 |
| 4 | .046 | .074 | -.021 | 1.000 |

Extraction Method: Principal Component Analysis. Rotation Method: Oblimin with Kaiser Normalization.
![Page 75: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/75.jpg)
75
Pattern Matrix (a)

| | Component 1 | Component 2 | Component 3 | Component 4 |
|---|---|---|---|---|
| x5 | .999 | -.259 | -.015 | -.054 |
| x11 | .968 | .020 | -.087 | -.037 |
| x8 | .967 | .068 | -.110 | -.053 |
| x6 | .871 | -.041 | .096 | .010 |
| x10 | .787 | .237 | -.007 | -.052 |
| x12 | .750 | .126 | .195 | .150 |
| x13 | .637 | .206 | .298 | .232 |
| x9 | -.049 | .870 | -.094 | .191 |
| x1 | -.038 | .853 | .066 | -.153 |
| x15 | .263 | .767 | -.018 | .069 |
| x7 | .026 | -.304 | .911 | .027 |
| x4 | -.056 | .209 | .903 | -.067 |
| x2 | .332 | .071 | .349 | .236 |
| x3 | .000 | .085 | .016 | .929 |
| x14 | .291 | .321 | .472 | -.519 |

Extraction Method: Principal Component Analysis. Rotation Method: Oblimin with Kaiser Normalization.
a. Rotation converged in 8 iterations.

Matrix of the loadings $\lambda_{ih}$. Difficult to interpret because the factors are correlated with one another.
![Page 76: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/76.jpg)
76
Structure Matrix

| | Component 1 | Component 2 | Component 3 | Component 4 |
|---|---|---|---|---|
| x8 | .937 | .361 | .339 | .000 |
| x11 | .934 | .318 | .354 | .011 |
| x5 | .906 | .059 | .398 | -.027 |
| x6 | .902 | .257 | .483 | .045 |
| x12 | .886 | .411 | .550 | .190 |
| x10 | .858 | .488 | .386 | .002 |
| x13 | .849 | .477 | .613 | .270 |
| x2 | .523 | .251 | .505 | .249 |
| x15 | .507 | .855 | .219 | .138 |
| x9 | .200 | .853 | .016 | .255 |
| x1 | .261 | .839 | .185 | -.093 |
| x4 | .417 | .326 | .912 | -.072 |
| x7 | .339 | -.152 | .874 | -.013 |
| x14 | .585 | .451 | .665 | -.492 |
| x3 | .078 | .157 | .010 | .935 |

Extraction Method: Principal Component Analysis. Rotation Method: Oblimin with Kaiser Normalization.

Matrix of the $\mathrm{Cor}(X_i, Y_j)$. This matrix is more natural to interpret.
![Page 77: Analyse factorielle exploratoire](https://reader036.vdocument.in/reader036/viewer/2022062600/586b7fea1a28ab06068bf318/html5/thumbnails/77.jpg)
77
Improved matrix of the $\mathrm{Cor}(X_i, Y_j)$

This matrix is even easier to interpret.

Structure Matrix (correlations with absolute value below .50 suppressed)

| | Component 1 | Component 2 | Component 3 | Component 4 |
|---|---|---|---|---|
| x8 | .937 | | | |
| x11 | .934 | | | |
| x5 | .906 | | | |
| x6 | .902 | | | |
| x12 | .886 | | .550 | |
| x10 | .858 | | | |
| x13 | .849 | | .613 | |
| x2 | .523 | | .505 | |
| x15 | .507 | .855 | | |
| x9 | | .853 | | |
| x1 | | .839 | | |
| x4 | | | .912 | |
| x7 | | | .874 | |
| x14 | .585 | | .665 | |
| x3 | | | | .935 |

Extraction Method: Principal Component Analysis. Rotation Method: Oblimin with Kaiser Normalization.