
Multiway Array Models for Dynamic Relational Data

Peter Hoff

Statistics, Biostatistics and the CSSS, University of Washington


Outline

Introduction and examples

Modeling mean structure
• reduced rank arrays via PARAFAC
• least-squares versus model-based PARAFAC
• ordered probit PARAFAC

Modeling covariance structure
• separable covariance structure
• separable covariance via the Tucker product
• international trade example


Array-valued data

y_{i,j,k} =

• jth measurement on ith subject under condition k (psychometrics)

• sample mean of variable i for group j in state k (cross-classified data)

• type-k relationship between i and j (multivariate relational data)

• time-k relationship between i and j (dynamic relational data)

[Figure: a three-way data array, with the cells y_{1,2,1}, . . . , y_{1,2,5} highlighted along the third mode.]
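As a concrete container, a dynamic relational dataset is just a 3-way array; a minimal numpy sketch, with dimensions borrowed from the cold war example that follows (the entries here are illustrative):

```python
import numpy as np

n_nodes, n_times = 66, 8                  # countries x countries x years
Y = np.full((n_nodes, n_nodes, n_times), np.nan)   # fill from data; self-relations often undefined

# Y[i, j, t]: time-t relation between nodes i and j.
Y[0, 1, 4] = 2.0                          # illustrative entry y_{1,2,5} (0-indexed)
```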


Cold war conflict network

Cold war cooperation and conflict

• 66 countries

• 8 years (1950, 1955, . . . , 1985)

• y_{i,j,t} = relation between i, j in year t

• also have data on gdp and polity

A 66 × 66 × 8 data array.

[Figure: the cooperation/conflict network among the 66 countries, with nodes labeled by country code (AFG, ALB, . . . , YUG).]


Longitudinal trade relations

Yearly change in log-trade for six commodity types

[Figure: time-series panels for eight countries — Germany, Italy, France, Spain; Thailand, Rep. of Korea, Malaysia, Indonesia.]


Mean and variance structure

Y = Θ + E

Θ describes the “main features” we hope to recover (the mean),

E describes deviations from main features (the variance).

Questions:

• How do we define and estimate the “main features” of an array?

• How can we summarize the deviations from the main features?

Data model:

• Θ represents main features of the data

• E represents “residual” features

• Goal is to compactly represent/summarize/describe the data

Probability model:

• Θ represents a fixed process or population parameter

• E represents measurement error or sample-to-sample variation

• Goal is to estimate Θ and describe our estimation uncertainty


Reduced rank models for mean structure

Y = Θ + E

Θ describes the “main features” we hope to recover,

E describes deviations from main features.

Matrix decomposition: If Θ is a rank-R matrix, then

θ_{i,j} = ⟨u_i, v_j⟩ = ∑_{r=1}^R u_{i,r} v_{j,r},    Θ = ∑_{r=1}^R u_r v_r^T = ∑_{r=1}^R u_r ∘ v_r

Array decomposition: If Θ is a rank-R array, then

θ_{i,j,k} = ⟨u_i, v_j, w_k⟩ = ∑_{r=1}^R u_{i,r} v_{j,r} w_{k,r},    Θ = ∑_{r=1}^R u_r ∘ v_r ∘ w_r

(PARAFAC: Harshman [1970], Kruskal [1976, 1977], Harshman and Lundy [1984], Kruskal [1989])
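In code, the rank-R array decomposition is a single outer-product sum; a minimal numpy sketch with illustrative dimensions (names are ours, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, m3, R = 10, 8, 6, 4
U = rng.standard_normal((m1, R))   # columns u_1, ..., u_R
V = rng.standard_normal((m2, R))   # columns v_1, ..., v_R
W = rng.standard_normal((m3, R))   # columns w_1, ..., w_R

# theta[i,j,k] = sum_r U[i,r] * V[j,r] * W[k,r]: a rank-R PARAFAC array.
Theta = np.einsum('ir,jr,kr->ijk', U, V, W)
```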


Some things to worry about

1. Computing the rank
• matrix: easy to do
• array: no known algorithm

2. Possible rank
• matrix: R_max = min(m1, m2)
• array: max(m1, m2, m3) ≤ R_max ≤ min(m1 m2, m1 m3, m2 m3)

3. Probable rank
• matrix: "almost all" matrices have full rank.
• array: a nonzero fraction (w.r.t. Lebesgue measure) have less than full rank.

4. Least squares approximation
• matrix: the SVD of Y provides the rank-R least-squares approximation to Θ.
• array: iterative "least squares" methods exist, but a solution may not (de Silva and Lim [2008]); a minimal ALS sketch follows this list.

5. Uniqueness
• matrix: the representation Θ = ⟨U, V⟩ = UV^T is not unique.
• array: the representation Θ = ⟨U, V, W⟩ is essentially unique.
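The iterative fitting in item 4 is typically alternating least squares (ALS): hold two factor matrices fixed and solve an ordinary least-squares problem for the third, cycling until convergence. A minimal numpy sketch of that standard recipe (function names and iteration count are ours, not from the talk):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: column r is kron(A[:, r], B[:, r])."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def parafac_als(Y, R, n_iter=500, seed=0):
    """Rank-R least-squares PARAFAC fit of a 3-way array by ALS.

    Each update regresses a mode-k unfolding of Y on the Khatri-Rao
    product of the other two factor matrices.
    """
    rng = np.random.default_rng(seed)
    m1, m2, m3 = Y.shape
    U = rng.standard_normal((m1, R))
    V = rng.standard_normal((m2, R))
    W = rng.standard_normal((m3, R))
    for _ in range(n_iter):
        U = Y.reshape(m1, -1) @ np.linalg.pinv(khatri_rao(V, W)).T
        V = np.moveaxis(Y, 1, 0).reshape(m2, -1) @ np.linalg.pinv(khatri_rao(U, W)).T
        W = np.moveaxis(Y, 2, 0).reshape(m3, -1) @ np.linalg.pinv(khatri_rao(U, V)).T
    Theta = np.einsum('ir,jr,kr->ijk', U, V, W)
    return Theta, (U, V, W)
```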


A model-based approach

For a K-way array Y,

Y = Θ + E

Θ = ∑_{r=1}^R u_r^{(1)} ∘ · · · ∘ u_r^{(K)} ≡ ⟨U^{(1)}, . . . , U^{(K)}⟩

u_1^{(k)}, . . . , u_{m_k}^{(k)} iid∼ multivariate normal(µ_k, Ψ_k),

with {µ_k, Ψ_k : k = 1, . . . , K} to be estimated.

Some motivation:

• shrinkage: Θ contains lots of parameters.

• hierarchical: the covariance among the columns of U^{(k)} is identifiable.

• estimation: p(Y | U^{(1)}, . . . , U^{(K)}) is multimodal; MCMC acts as a "stochastic search".

• adaptability: incorporate reduced rank arrays as a model component
  • multilinear predictor in a GLM
  • multilinear effects for regression parameters
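A sketch of simulating one draw from this hierarchical model for K = 3 (the hyperparameter values and error distribution below are illustrative assumptions, not estimates from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
R, dims = 3, (10, 8, 6)              # rank and mode sizes (illustrative)

# One factor matrix per mode; rows drawn iid from MVN(mu_k, Psi_k).
U = []
for m_k in dims:
    mu_k = np.zeros(R)               # hyperparameters; estimated in practice
    Psi_k = np.eye(R)
    U.append(rng.multivariate_normal(mu_k, Psi_k, size=m_k))

# Theta = sum_r u_r^(1) o u_r^(2) o u_r^(3), written as one einsum.
Theta = np.einsum('ir,jr,kr->ijk', *U)
Y = Theta + rng.standard_normal(dims)   # Y = Theta + E (illustrative iid noise)
```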


Simulation study

K = 3, R = 4, (m1, m2, m3) = (10, 8, 6)

1. Generate M, a random array of roughly full rank.

2. Set Θ = ALS_4(M), the rank-4 alternating least squares approximation to M.

3. Set Y = Θ + E, with {e_{i,j,k}} iid∼ normal(0, v(Θ)/4).

For each of 100 such simulated datasets, we obtain Θ̂_LS and Θ̂_HB, the least-squares and hierarchical Bayes estimates.

Questions: How well do Θ̂_LS and Θ̂_HB

• recover the "truth" Θ?

• represent Y?
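These steps translate directly to code; a sketch reusing the parafac_als function from the ALS sketch above (interpreting v(Θ) as the variance of the entries of Θ, which is our assumption, not stated on the slide):

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, m3, R = 10, 8, 6, 4

# 1. M: a random array, almost surely of (roughly) full rank.
M = rng.standard_normal((m1, m2, m3))

# 2. Theta = ALS_4(M): the rank-4 ALS approximation (parafac_als from above).
Theta, _ = parafac_als(M, R)

# 3. Y = Theta + E, entries of E ~ normal(0, v(Theta)/4);
#    v(Theta) is taken here as the variance of Theta's entries (assumption).
sd = np.sqrt(Theta.var() / 4)
Y = Theta + sd * rng.standard_normal(Theta.shape)
```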


Simulation study: known rank

[Figure: scatterplots of Bayesian MSE versus least-squares MSE (axes 0.05–0.40) and Bayesian RSS versus least-squares RSS (axes 0.12–0.20) over the 100 simulations, with separate symbols for the posterior mode and posterior mean.]

Simulation study: misspecified rank

[Figure: log RSS (top) and log MSE (bottom) as functions of the assumed rank, 1–8, for the least-squares fit (left) and the hierarchical Bayes fit (right).]

Longitudinal network example

• y_{i,j,t} ∈ {−5, −4, . . . , +1, +2}, the level of military conflict/cooperation

• x_{i,j,t,1} = log gdp_i + log gdp_j, the sum of the log gdps of the two countries;

• x_{i,j,t,2} = (log gdp_i) × (log gdp_j), the product of the log gdps;

• x_{i,j,t,3} = polity_i × polity_j, where polity_i ∈ {−1, 0, +1};

• x_{i,j,t,4} = (polity_i > 0) × (polity_j > 0).

Models:

Regression on the raw data scale:

y_{i,j,t} = β^T x_{i,j,t} + u_i^T Λ_t u_j + ε_{i,j,t}

Y = {y_{i,j,t} − β^T x_{i,j,t}} = U Λ_t U^T + E

Ordered probit regression:

y_{i,j,t} = f(z_{i,j,t}, c_{−5}, . . . , c_{+2}) = max{y : z_{i,j,t} > c_y}

z_{i,j,t} = β^T x_{i,j,t} + u_i^T Λ_t u_j + ε_{i,j,t}

Z = {z_{i,j,t} − β^T x_{i,j,t}} = U Λ_t U^T + E
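The ordered probit link f is just a threshold lookup; a sketch with illustrative cutoffs (the c_y values below are placeholders, not fitted values):

```python
import numpy as np

def ordered_probit_link(z, cutoffs, levels):
    """y = max{level : z > c_level}, the slide's f(z, c_{-5}, ..., c_{+2}).

    cutoffs[i] is the threshold for levels[i]; levels are sorted increasing
    and the lowest cutoff is -inf, so every z maps to some level.
    """
    z = np.asarray(z)
    # Count how many thresholds each z exceeds, then map the count to a level.
    idx = (z[..., None] > np.asarray(cutoffs)).sum(-1) - 1
    return np.asarray(levels)[idx]

levels = np.arange(-5, 3)                        # -5, ..., +2
cutoffs = np.r_[-np.inf, np.linspace(-3, 3, 7)]  # illustrative c_{-5}, ..., c_{+2}
print(ordered_probit_link([-4.0, 0.1, 9.9], cutoffs, levels))   # [-5 -1  2]
```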


An aside: scaling the response

[Figure: comparison of the least-squares fit (top row) and the ordered probit fit (bottom row) for assumed ranks R = 1 through R = 5; horizontal axes run from −1 to 1.]

Cold war conflict network

z_{i,j,t} = β^T x_{i,j,t} + u_i^T Λ_t u_j + ε_{i,j,t}

[Figure: posterior densities of the regression coefficients: gdp sum, gdp product, polity product, and jointly positive polity.]

Cold war conflict network

[Figure: estimated latent factors — countries plotted by country code in the (u_1, u_2) plane, and the time-varying weights v_1 and v_2 plotted against year, 1950–1985 (vertical axes 0.0–0.6).]

Modeling covariance structure

Y = Θ + E

Θ can be defined and estimated using reduced rank array representations

Can we compactly summarize deviations from Θ?


Covariance structure of multiple relational arrays

Yearly change in log exports (2000 dollars): Y = {y_{i,j,k,l}} ∈ ℝ^{30×30×6×10}

• i ∈ {1, . . . , 30} indexes exporting nation

• j ∈ {1, . . . , 30} indexes importing nation

• k ∈ {1, . . . , 6} indexes commodity

• l ∈ {1, . . . , 10} indexes year

"Replications" over time: Y = {Y_1, . . . , Y_10}

Y_t = Θ + E_t

• Θ ∈ ℝ^{30×30×6}, constant over time;

• E_t ∈ ℝ^{30×30×6}, changing over time.

How should the covariance among E = {E_1, . . . , E_10} be described?


Separable covariance structure for matrices

Y_i = Θ + E_i

If Y_i, E_i, Θ are m × n matrices, then the covariance is described by an (m × m) × (n × n) array:

Cov[E] = {cov[e_{j1,k1}, e_{j2,k2}]}

Usually the data are insufficient to estimate this covariance.

A parsimonious alternative is to assume a separable covariance structure:

Cov[E] = Σ1 ∘ Σ2

Cov[vec(E)] = Σ2 ⊗ Σ1

E[E E^T] = Σ1 × tr(Σ2)

E[E^T E] = Σ2 × tr(Σ1)

This is the covariance structure of the “matrix normal model”
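A quick numerical check of the separable structure: simulate matrix-normal errors E = A Z B^T and compare the Monte Carlo covariance of vec(E) to Σ2 ⊗ Σ1 (dimensions and sample size below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, N = 3, 4, 200_000
A = rng.standard_normal((m, m))        # Sigma1 = A A^T (row covariance)
B = rng.standard_normal((n, n))        # Sigma2 = B B^T (column covariance)

Z = rng.standard_normal((N, m, n))     # iid standard normal entries
E = A @ Z @ B.T                        # N matrix-normal draws (broadcast over N)

vecE = E.transpose(0, 2, 1).reshape(N, m * n)   # column-stacking vec(E) per draw
emp = vecE.T @ vecE / N                # Monte Carlo Cov[vec(E)]
theo = np.kron(B @ B.T, A @ A.T)       # Sigma2 (kron) Sigma1
print(np.abs(emp - theo).max())        # small, up to Monte Carlo error
```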


Separable covariance structure for arrays

Y_i = Θ + E_i

If Y_i, E_i, Θ are m × n × p arrays, then the covariance is described by an (m × m) × (n × n) × (p × p) array:

Cov[E] = {cov[e_{j1,k1,l1}, e_{j2,k2,l2}]}

This suggests an "array normal" model with the following covariance structure:

Cov[E] = Σ1 ∘ Σ2 ∘ Σ3

Cov[vec(E)] = Σ_K ⊗ · · · ⊗ Σ1

E[E_(k) E_(k)^T] = Σ_k × ∏_{j≠k} tr(Σ_j)


The Tucker product

Y = Θ + E

Decompose Θ using the Tucker decomposition (Tucker 1964, 1966):

θ_{i,j,k} = ∑_{r=1}^R ∑_{s=1}^S ∑_{t=1}^T z_{r,s,t} a_{i,r} b_{j,s} c_{k,t}

Θ = Z × {A, B, C}

• Z is the R × S × T core array

• A, B, C are m1 × R, m2 × S and m3 × T matrices

• R, S and T are the 1-rank, 2-rank and 3-rank of Θ

• “×” is array-matrix multiplication (De Lathauwer et al., 2000)
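The Tucker product itself is a single einsum; a minimal sketch with illustrative dimensions:

```python
import numpy as np

def tucker_product(Z, A, B, C):
    """Theta = Z x {A, B, C}: multiply the core Z (R x S x T) along each mode.

    theta[i,j,k] = sum_{r,s,t} Z[r,s,t] * A[i,r] * B[j,s] * C[k,t]
    """
    return np.einsum('rst,ir,js,kt->ijk', Z, A, B, C)

rng = np.random.default_rng(4)
R, S, T = 2, 3, 2
m1, m2, m3 = 5, 4, 6
Z = rng.standard_normal((R, S, T))
A, B, C = (rng.standard_normal(shape) for shape in [(m1, R), (m2, S), (m3, T)])
Theta = tucker_product(Z, A, B, C)     # an m1 x m2 x m3 array
```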


Separable covariance via Tucker products

Multivariate normal model:

z = {zj : j = 1, . . . , m}, with zj iid ∼ normal(0, 1)

y = μ + Az ∼ multivariate normal(μ, Σ = AAᵀ)

Matrix normal model:

Z = {z_{i,j} : i = 1, . . . , m1; j = 1, . . . , m2}, with z_{i,j} iid ∼ normal(0, 1)

Y = M + AZBᵀ ∼ matrix normal(M, Σ1 = AAᵀ, Σ2 = BBᵀ)

NOTE: AZBᵀ = Z × {A, B}

Array normal model:

Z = {z_{i,j,k} : i = 1, . . . , m1; j = 1, . . . , m2; k = 1, . . . , m3}, with z_{i,j,k} iid ∼ normal(0, 1)

Y = M + Z × {A, B, C} ∼ array normal(M, Σ1 = AAᵀ, Σ2 = BBᵀ, Σ3 = CCᵀ)
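A short simulation sketch (again mine, with arbitrary sizes) of the array normal construction: draw Z with iid standard normal entries, set Y = M + Z × {A, B, C}, and check that the empirical covariance of vec(Y) is approximately Σ3 ⊗ Σ2 ⊗ Σ1.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, m3 = 3, 4, 2
A, B, C = (rng.normal(size=(d, d)) for d in (m1, m2, m3))
M = rng.normal(size=(m1, m2, m3))

# nsim draws of Y = M + Z x {A, B, C}, with Z iid standard normal.
nsim = 100_000
Z = rng.normal(size=(nsim, m1, m2, m3))
Y = M + np.einsum('ia,jb,kc,nabc->nijk', A, B, C, Z, optimize=True)

# vec(Y) per draw (mode-1 index fastest); compare empirical and separable covariance.
V = Y.transpose(1, 2, 3, 0).reshape(m1 * m2 * m3, nsim, order='F').T
emp = np.cov(V, rowvar=False)
theo = np.kron(C @ C.T, np.kron(B @ B.T, A @ A.T))   # Sigma3 ⊗ Sigma2 ⊗ Sigma1
print(np.abs(emp - theo).max())                      # → 0 as nsim grows
```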


Introduction and examples Modeling mean structure Modeling covariance structure

International trade example

Yearly change in log exports (2000 dollars): Y = {y_{i,j,k,l}} ∈ ℝ^{30×30×6×10}

• i ∈ {1, . . . , 30} indexes the exporting nation

• j ∈ {1, . . . , 30} indexes the importing nation

• k ∈ {1, . . . , 6} indexes the commodity

• l ∈ {1, . . . , 10} indexes the year

Full “cell means” model:

yi,j,k,l = µi,j,k + ei,j,k,l

Let E = {e_{i,j,k,l}}:

• iid error model: E ∼ array normal(0, I, I, I, σ²I)

• vector normal error model: E ∼ array normal(0, I, I, Σ3, I)

• matrix normal error model: E ∼ array normal(0, I, I, Σ3, Σ4)

• array normal model: E ∼ array normal(0, Σ1, Σ2, Σ3, Σ4)
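The talk fits these models by MCMC; purely as an illustrative sketch of what separability buys computationally (all names here are mine), the function below evaluates the array normal log-likelihood by whitening one mode at a time with a Cholesky factor of each Σk, so the huge Kronecker covariance matrix is never formed. The nested error models above are just special cases with identity Σ's.

```python
import numpy as np
from scipy.linalg import solve_triangular

def array_normal_loglik(E, Sigmas):
    """Log density of vec(E) ~ N(0, Sigma_K ⊗ ... ⊗ Sigma_1),
    one Sigma_k per mode of E, without forming the Kronecker matrix."""
    N = E.size
    logdet, Z = 0.0, E
    for k, S in enumerate(Sigmas):
        mk = S.shape[0]
        L = np.linalg.cholesky(S)                    # S = L L^T
        logdet += (N / mk) * 2.0 * np.sum(np.log(np.diag(L)))
        # Whiten mode k: multiply along mode k by L^{-1}.
        shape = Z.shape
        Zk = np.moveaxis(Z, k, 0).reshape(mk, -1)    # mode-k matricization
        Zk = solve_triangular(L, Zk, lower=True)
        Z = np.moveaxis(Zk.reshape((mk,) + tuple(np.delete(shape, k))), 0, k)
    quad = np.sum(Z ** 2)                            # vec(E)' Cov^{-1} vec(E)
    return -0.5 * (N * np.log(2.0 * np.pi) + logdet + quad)

# E.g., with residuals E = Y - Y.mean(axis=3, keepdims=True) from the cell-means fit:
# iid errors:   array_normal_loglik(E, [np.eye(30), np.eye(30), np.eye(6), s2 * np.eye(10)])
# array normal: array_normal_loglik(E, [S1, S2, S3, S4])
```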


Introduction and examples Modeling mean structure Modeling covariance structure

International trade example

Model comparison:

reduced: array normal(0, I, I, Σ3, Σ4)

full: array normal(0, Σ1, Σ2, Σ3, Σ4)

[Figure: density plots of the test statistics t1(Y), t2(Y), and t3(Y), comparing the reduced and full covariance models.]

Introduction and examples Modeling mean structure Modeling covariance structure

International trade example

[Figure: graphical summaries of the fitted covariance structure, plotted for the 30 countries (AUS, AUT, BRA, CAN, CHN, HKG, CZE, DEN, FIN, FRN, DEU, GRC, IDN, IRE, ITA, JPN, MYS, MEX, NLD, NZL, NOR, KOR, SIN, SPN, SWE, SWI, THA, TUR, UKG, USA) and for the six commodity types (chemicals, crude materials, food, machinery, textiles, mfg goods).]

Introduction and examples Modeling mean structure Modeling covariance structure

Summary

Longitudinal network data can be described by multi-way arrays

Reduced-rank array representations can summarize main network features
• least-squares estimates stay close to the data; model-based estimates stay close to the parameter
• the model-based approach allows for scaling, regressors, . . .

Separable covariance can describe network covariance features
• separable structure is constructed with the Tucker product
• estimation is available via MCMC

References:

Hoff (2011), “Hierarchical multilinear models for multiway data”

Hoff (2010), “Separable covariance arrays via the Tucker product, with applications to multivariate relational data”

Available at http://www.stat.washington.edu/hoff/.
