Learning Stochastic Systems with Global PAC Bounds. Hugo Bazille (INRIA/IRISA), Blaise Genest (CNRS/IRISA), Cyrille Jégourel (SUTD, Singapore), Sun Jun (SMU, Singapore)


Page 1:

Learning Stochastic Systems with Global PAC Bounds

Hugo Bazille INRIA/IRISA,

Blaise Genest CNRS/IRISA,

Cyrille Jégourel SUTD, Singapore

Sun Jun SMU, Singapore

Page 2:

Formalism

Stochastic System

observations

reset

Learning Algorithm from AI

(converges towards the Markov System at ∞)

Stochastic Model (e.g. Markov Chain)

PAC bounds:

with probability 97%, error < 3%

+Information on the system:

- Set of states – known in this talk
- Support of transitions – depends
- Transition probabilities – never

Observation W: sequence of states observed after finite time

e.g.: s1 s2 s3 s1 s3 s5

Page 3:

Formalism

Stochastic System

observations

reset

Learning Algorithm from AI

Stochastic Model (e.g. Markov Chain)

+Learning algorithms:

Frequency estimator:

Estimated_Proba(s1, s2) = nb(s1, s2) / nb(s1)

β-Laplace smoothing:

Estimated_Proba(s1, s2) = (nb(s1, s2) + β) / (nb(s1) + k1·β)
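As a rough Python sketch of both estimators (the trace format, state set and function name are ours, not the talk's; we take k1 to be the number of candidate successor states of s1, here all states for simplicity):

```python
from collections import defaultdict

def learn_markov_chain(traces, states, beta=0.0):
    """Estimate transition probabilities from observed traces.
    beta = 0 gives the frequency estimator; beta > 0 gives
    beta-Laplace smoothing over the candidate successors of each state."""
    nb_pair = defaultdict(int)   # nb(s1, s2)
    nb_state = defaultdict(int)  # nb(s1)
    for trace in traces:
        for s1, s2 in zip(trace, trace[1:]):
            nb_pair[(s1, s2)] += 1
            nb_state[s1] += 1

    k = len(states)  # candidate successors per state in this sketch
    proba = {}
    for s1 in states:
        for s2 in states:
            den = nb_state[s1] + k * beta
            proba[(s1, s2)] = (nb_pair[(s1, s2)] + beta) / den if den > 0 else 0.0
    return proba

# Hypothetical usage on the example observation of page 2.
traces = [["s1", "s2", "s3", "s1", "s3", "s5"]]
states = ["s1", "s2", "s3", "s4", "s5"]
freq = learn_markov_chain(traces, states)              # frequency estimator
smooth = learn_markov_chain(traces, states, beta=0.5)  # beta-Laplace smoothing
```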

PAC bounds:

with probability 97%, error < 3%

Page 4:

Formalism

Stochastic System

observations

reset

Learning Algorithm from AI

Stochastic Model (e.g. Markov Chain)

+ Error analysis: in AI, bounds are given on transition probabilities: P(|Estimated_Proba(s,t) − Prob(s,t)| < 3%) > 97%, using the statistical "Okamoto bound" with enough observations of the transitions.
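For concreteness, a small Python sketch of the sample-size calculation this suggests, assuming the usual two-sided Okamoto/Hoeffding form P(|p̂ − p| ≥ ε) ≤ 2·exp(−2nε²); the function name is ours.

```python
import math

def okamoto_sample_size(eps, delta):
    """Observations of a transition needed so that
    P(|Estimated_Proba - Prob| >= eps) <= delta,
    via the two-sided bound 2 * exp(-2 * n * eps**2) <= delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# "with probability 97%, error < 3%": eps = 0.03, delta = 0.03
print(okamoto_sample_size(0.03, 0.03))  # roughly 2.3k observations
```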

PAC bounds:

with probability 97%, error < 3%

Page 5:

Goals

Understand learning algorithms from AI. E.g., where is β-Laplace smoothing useful? Which β to use?

Provide more meaningful bounds for the learnt model: global bounds on the behaviors of the model rather than on local transition probabilities.

How? Use logics (LTL, CTL, …) to specify global behavior. We want the probability of fulfilling a logical formula in the model and in the system to be close.
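One way to write this target guarantee (our formalization, reusing the talk's notation P_A, P_{A_W} and its 97% / 3% example):

```latex
\Pr_{W}\big(\, |P_{A}(\varphi) - P_{A_W}(\varphi)| \le \varepsilon \,\big) \;\ge\; 1 - \delta,
\qquad \text{e.g. } \varepsilon = 0.03,\ \delta = 0.03 .
```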

Page 6:

Local vs Global probabilities

Global property φ: reach state 3. From state 1: P_A(φ) = ½.

When learning A_W, small statistical errors are possible: local error e on each transition probability.

In A_W, from state 1: φ has probability (t − e) / 2t.

The global error depends on the conditioning t of the system, e.g.: for t = 2e, P_{A_W}(φ) = ¼.

Page 7:

Result 1: One fixed "Time Before Failure" objective

Probability of seeing a fault before seeing the initial state again

=> Frequency estimator, giving the MC A_W

Reset strategy: reset when we reach sF or s0 (example figure on slide).
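As a rough illustration (not the talk's construction of A_W itself): with this reset strategy every trace stops either at sF or back at s0, so the objective is a per-trace Bernoulli outcome and the frequency estimate is a simple ratio. The state names and trace format below are hypothetical.

```python
def estimate_fault_before_reset(traces, s0="s0", sF="sF"):
    """Frequency estimate of the probability of seeing the fault state sF
    before seeing the initial state s0 again, from traces collected with
    the reset strategy (each trace stops as soon as it hits sF or s0)."""
    if not traces:
        raise ValueError("need at least one trace")
    hits = sum(1 for trace in traces if trace[-1] == sF)
    return hits / len(traces)

# Hypothetical traces: each starts in s0 and stops at sF or at s0.
traces = [["s0", "s1", "sF"],
          ["s0", "s1", "s0"],
          ["s0", "s2", "sF"]]
print(estimate_fault_before_reset(traces))  # 2/3 ≈ 0.667
```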

Page 8:

Result 2: What about learning a Markov chain good for all properties?

=> Find A_W, ε such that uniformly, for all formulas φ, we have: |P_A(φ) − P_{A_W}(φ)| ≤ ε.

- All properties of LTL? Not possible (asks arbitrarily high precision for arbitrarily nested formulas). [Daca et al.'16] Possible for depth-k formulas, but O(exp(k)).

- All properties of PCTL? Not possible (PCTL cannot handle any statistical error).

- All properties of CTL? Possible, and not so complex! In particular, all reachability properties.

Page 9:

PCTL cannot handle statistical error

PCTL property φ: (s2 => Proba(X sF) ≥ ½). P_A(φ) = 1 in the original system A.

If we learn A_W from observations W, we can make a small statistical error e and obtain

P_{A_W}(φ) = 0: the transition from s2 to sF is learnt with probability ½ − e, just below the threshold ½.

Page 10:

Global PAC bounds for CTL. Error wrt Probability(Xφ | φUψ | Gφ), for all φ, ψ CTL formulae.

observations W = {"u s v s"}: reset when the same state is seen twice

We need to know the support of the transitions (not the case for the fixed time before failure objective).

Without it, we might learn a model missing a transition whose probability t in the system is very small.

=> We use Laplace smoothing to ensure the probabilities are all >0.
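Concretely, with the β-Laplace estimator from page 3, a supported transition that was never observed still receives a positive probability:

```latex
\widehat{P}(s_1, s_2) \;=\; \frac{0 + \beta}{nb(s_1) + k_1\,\beta} \;>\; 0
\qquad \text{whenever } \beta > 0 .
```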

Page 11:

Global PAC bounds for CTL. Error wrt Probability(Xφ | φUψ | Gφ), for all φ, ψ CTL formulae.

observations W = {"u s v s"}: reset when the same state is seen twice

… Next slide

Page 12:

Conditioning

R(2)=R(3)={1} and R(1)={}

P_1(Leave_{≤m} 1) = 1 − (1 − 2t)^m ≈ 2 m t

=> Cond(M) ≈ 2 m t
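A quick numerical sanity check of this first-order approximation (the values of t and m below are arbitrary, chosen only for illustration):

```python
# Probability of leaving state 1 within m steps vs. its
# first-order approximation 2*m*t, for small t.
t, m = 1e-4, 50
exact = 1 - (1 - 2 * t) ** m
approx = 2 * m * t
print(exact, approx)  # both close to 0.01; exact is slightly below approx
```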

Page 13:

Conditioning

Error wrt Probability(Xφ | φUψ | Gφ), for all φ, ψ CTL formulae.

…true thanks to Laplace smoothing

Page 14:

Experimental Results

Page 15:

Conclusion

Understand learning algorithms from AI:
- The frequentist estimator is enough for one fixed property.
- β-Laplace smoothing keeps supported transition probabilities > 0.
- Useful for providing bounds for CTL, and gives a rationale for fixing β.

Provide global bounds on the behaviors of the learnt model, for CTL.

Not possible for PCTL or unbounded LTL.

Page 16:
