
Page 1:

Bayesian Optimization with Experimental Constraints
Javad Azimi
Advisor: Dr. Xiaoli Fern
PhD Proposal Exam, April 2012

Page 2: Outline

• Introduction to Bayesian Optimization

• Completed Work
  – Constrained Bayesian Optimization
  – Batch Bayesian Optimization
  – Scheduling Methods for Bayesian Optimization

• Future Work
  – Hybrid Bayesian Optimization

• Timeline

Page 3: Bayesian Optimization

• We have a black-box function; we know nothing about its form or distribution

• We can sample the function, but each sample (experiment) is very expensive

• We want to find the maximizer (or minimizer) of the function

• Assumption:
  – Lipschitz continuity

Page 4: Big Picture

[Figure: the BO loop — current experiments → posterior model → select experiment(s) → run experiment(s) → repeat]

Page 5: Posterior Model (1): Regression Approaches

• Simulates the unknown function's distribution based on the prior

  – Deterministic (classical linear regression, …)
    • There is a single deterministic prediction for each point x in the input space

  – Stochastic (Bayesian regression, Gaussian Process, …)
    • There is a distribution over the prediction for each point x in the input space (e.g., a normal distribution)

  – Example
    • Deterministic: f(x1) = y1, f(x2) = y2
    • Stochastic: f(x1) ~ N(y1, 0.2), f(x2) ~ N(y2, 5)

Page 6: Posterior Model (2): Gaussian Process

• A Gaussian Process is used to build the posterior model
  – The prediction at any point is a normal random variable
  – Its variance is independent of the observations y
  – Its mean is a linear combination of the observations y

[Figure: GP posterior highlighting points with high output expectation and points with high output variance]
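To make these two properties concrete, here is a minimal sketch (not from the slides) of the GP posterior, assuming a zero-mean GP with a squared-exponential kernel and a small noise term; the kernel and its hyperparameters are assumptions:

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, length_scale=1.0, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-0.5 * d2 / length_scale**2)

    K_inv = np.linalg.inv(k(X_train, X_train) + noise * np.eye(len(X_train)))
    K_s = k(X_train, X_test)                              # cross-covariances
    mu = K_s.T @ K_inv @ y_train                          # mean: linear in the observations y
    var = 1.0 - np.sum(K_s.T * (K_inv @ K_s).T, axis=1)   # variance: does not depend on y
    return mu, var
```

The mean is a linear function of `y_train`, while the variance depends only on where the observations were taken, exactly as stated above.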

Page 7: Selection Criterion

• Goal: decide which point to sample next so as to reach the maximizer of the function faster

• Maximum Mean (MM)
  – Selects the point with the highest posterior mean
  – Purely exploitative

• Maximum Upper bound Interval (MUI)
  – Selects the point with the highest 95% upper confidence bound
  – Purely explorative

• Maximum Probability of Improvement (MPI)
  – Computes the probability that the output exceeds (1+m) times the best current observation, m > 0
  – Both explorative and exploitative

• Maximum Expected Improvement (MEI)
  – Similar to MPI but parameter free
  – Simply computes the expected amount of improvement from sampling at each point

[Figure: example selections made by MM, MUI, MPI, and MEI]
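The four criteria can be written directly in terms of the GP posterior. A minimal sketch (not the thesis code); the 1.96 multiplier for the 95% bound and the default margin m are assumed values:

```python
import numpy as np
from scipy.stats import norm

def selection_scores(mu, sigma, y_best, m=0.1):
    """MM, MUI, MPI, and MEI scores at candidate points, given the posterior
    mean mu and standard deviation sigma and the best observation y_best."""
    mm = mu                                          # Maximum Mean: exploit
    mui = mu + 1.96 * sigma                          # 95% upper confidence bound: explore
    z_pi = (mu - (1 + m) * y_best) / sigma
    mpi = norm.cdf(z_pi)                             # P[f(x) > (1 + m) * y_best]
    z_ei = (mu - y_best) / sigma
    mei = (mu - y_best) * norm.cdf(z_ei) + sigma * norm.pdf(z_ei)  # expected improvement
    return mm, mui, mpi, mei

# The next experiment under, e.g., MEI would be candidates[np.argmax(mei)].
```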

Page 8: Motivating Application: Fuel Cell

[Figure: how a microbial fuel cell (MFC) works — bacteria at the anode oxidize fuel (organic matter) to CO2, electrons flow to the cathode where O2 is reduced to H2O, and H+ crosses between the electrodes. Also shown: SEM image of bacteria on Ni-nanoparticle-enhanced carbon fibers.]

• The nano-structure of the anode significantly impacts electricity production.

• We want to optimize the anode nano-structure to maximize power output by selecting a set of experiments.

Page 9: Other Applications

• Financial investment
• Reinforcement learning
• Drug tests
• Destructive tests
• …

Page 10: Constrained Bayesian Optimization (AAAI 2010; journal version to be submitted)

Page 11: Problem Definition (1)

• BO assumes that we can request any specific experiment

• This is an unreasonable assumption in many applications
  – In the fuel-cell domain it takes many trials to create a nano-structure with specific requested properties
  – Such requests are costly to fulfill

Page 12: Problem Definition (2)

• It is less costly to fulfill a request that specifies ranges for the nano-structure properties

• E.g., run an experiment with Averaged Area in range r1 and Average Circularity in range r2

• We will call such requests "constrained experiments"

[Figure: the space of experiments with axes Averaged Area and Average Circularity, showing two constrained experiments as rectangles]

• Constrained Experiment 1: large ranges, low cost, high uncertainty about which experiment will actually be run

• Constrained Experiment 2: small ranges, high cost, low uncertainty about which experiment will actually be run


Page 13: Proposed Approach

• We introduce two different formulations

• Non-sequential
  – Select all experiments at the same time

• Sequential
  – Only one constrained experiment is selected at each iteration

• Two challenges:
  – How do we compute the heuristics for a constrained experiment?
  – How do we take experimental cost into account? (Cost has been ignored by most approaches in BO.)

Page 14: Non-Sequential

• All experiments must be chosen at the same time

• Objective function:
  – Select the subset of experiments (with total cost B) that jointly has the highest expected maximum, i.e., E[max(·)]

Page 15: Submodularity

• Submodularity simply means that adding an element to a smaller set provides more improvement than adding it to a larger set

• Example showing that max(·) is submodular:
  – S1 = {1, 2, 4}, S2 = {1, 2, 4, 8} (S1 is a subset of S2), g = max(·), x = 6
  – g(S1 ∪ {x}) − g(S1) = 2, while g(S2 ∪ {x}) − g(S2) = 0

• E[max(·)] over a set of jointly normal random variables is a submodular function

• Therefore a greedy algorithm provides a "constant" approximation bound

Page 16: Greedy Algorithm
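The slide shows the greedy algorithm only as a figure. Below is a minimal sketch of a budgeted greedy selection that maximizes a Monte Carlo estimate of E[max(·)] over jointly normal candidate outputs; the sample count and the way the budget is handled are assumptions rather than the paper's exact pseudocode:

```python
import numpy as np

def greedy_emax(mu, cov, costs, budget, n_samples=2000, seed=0):
    """Greedy budgeted selection maximizing a Monte Carlo estimate of E[max]."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n_samples)  # joint posterior draws
    chosen, spent = [], 0.0
    best = np.full(n_samples, -np.inf)          # per-sample max over the chosen set
    while True:
        affordable = [j for j in range(len(mu))
                      if j not in chosen and spent + costs[j] <= budget]
        if not affordable:
            return chosen
        # value of adding j = E[max of the chosen set plus j], estimated over the samples
        values = [np.mean(np.maximum(best, samples[:, j])) for j in affordable]
        j = affordable[int(np.argmax(values))]
        chosen.append(j)
        spent += costs[j]
        best = np.maximum(best, samples[:, j])
```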

Page 17: Sequential Policies

• Given the posterior p(y | x, D) and the distribution p_x(· | D) over which point inside the constrained region will actually be run, we can compute the posterior over the output of each constrained experiment, which has a closed-form solution

• Therefore we can compute the standard BO heuristics for constrained experiments
  – These heuristics also have closed-form solutions

[Figure: discretization of the input space into levels]
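As one concrete illustration (not the paper's closed-form derivation): if the region of a constrained experiment is discretized and the point the lab actually runs is taken to be uniform over those points, the experiment's expected improvement is simply the average of the point-wise EI values. In the sketch below, `gp_mu` and `gp_sigma` are hypothetical callables returning the posterior mean and standard deviation:

```python
import numpy as np
from scipy.stats import norm

def constrained_mei(region_points, gp_mu, gp_sigma, y_best):
    """EI of a constrained experiment whose realized input is uniform over
    the discretized points inside the requested region."""
    mu, sigma = gp_mu(region_points), gp_sigma(region_points)
    z = (mu - y_best) / sigma
    ei = (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)   # point-wise EI
    return float(np.mean(ei))   # expectation over which point the lab actually runs
```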

Page 18: Budgeted Constrained Experiments

• We are limited by a budget B

• Unfortunately, the heuristics will typically select the smallest and most costly constrained experiments, which is not a good use of the budget

• How can we take the cost of each constrained experiment into account when making decisions?
  – Cost Normalized policy (CN)
  – Constraint Minimum Cost policy (CMC)

[Figure: small, expensive constrained experiments have low uncertainty and better heuristic values; large, cheap ones have high uncertainty and lower heuristic values]

Page 19: Cost Normalized Policy

• Selects the constrained experiment achieving the highest expected improvement per unit cost

• We report results for this approach with the MEI policy only
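A minimal sketch of the selection rule, assuming the MEI value and cost of every candidate constrained experiment have already been computed:

```python
import numpy as np

def cost_normalized_choice(mei_values, costs):
    """Pick the constrained experiment with the largest expected improvement per unit cost."""
    return int(np.argmax(np.asarray(mei_values) / np.asarray(costs)))
```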


Page 20: Constraint Minimum Cost Policy (CMC)

• Motivation — select a constrained experiment that:
  1. Approximately maximizes the heuristic value
  2. Has expected improvement at least as great as spending the same amount of budget on random experiments

• Example:

[Figure: three candidate constrained experiments with costs equivalent to 4, 10, and 5 random experiments — the very expensive one is rejected because 10 random experiments would likely do better, the one with a poor heuristic value is rejected by the first condition, and the remaining one is selected]

Page 21: Results (1)

[Figure: CMC-MEI results on the Cosines, Rosenbrock, and real Fuel Cell benchmarks]

Page 22: Results (2)

[Figure: non-sequential (NS) results on the Cosines, Rosenbrock, and real Fuel Cell benchmarks]

Page 23: Batch Bayesian Optimization (NIPS 2010)

"Sometimes it is better to select a batch." (Javad Azimi)

Page 24: Motivation

• Traditional BO approaches request a single experiment at each iteration

• This is not time-efficient when running an experiment is very time-consuming and there are enough facilities to run up to k experiments concurrently

• We would like to improve performance per unit time by selecting and running k experiments in parallel

• A good batch approach can speed up the experimental procedure without degrading performance

Page 25: Main Idea

• We use Monte Carlo simulation to select a batch of k experiments that closely matches what a good sequential policy would select over k steps

Given a sequential policy and batch size k:

[Figure: n simulated trajectories of the sequential policy, each of length k — trajectory i consists of points xi1, xi2, …, xik; the procedure returns a batch B* = {x1, x2, …, xk} that matches them]

Page 26: Objective Function (1)

• Simulated matching:
  – Given n different trajectories of length k from a given sequential policy,
  – we want to select a batch of k experiments that best matches the behavior of the sequential policy

• This objective can be viewed as minimizing an upper bound on the expected performance difference between the sequential policy and the selected batch

• The objective is similar to weighted k-medoid clustering

Page 27: Supermodularity

• Example showing that min(·) is a supermodular function:
  – B1 = {1, 2, 4}, B2 = {1, 2, 4, -2} (B1 is a subset of B2), f = min(·), x = 0
  – f(B1) − f(B1 ∪ {x}) = 1, while f(B2) − f(B2 ∪ {x}) = 0

• Quiz: what is the difference between a submodular and a supermodular function?
  – If the inequality is reversed, we have a submodular function

• The proposed objective function is supermodular

• The greedy algorithm therefore provides an approximation bound

Page 28: Algorithm
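The algorithm appears only as a figure on this slide. Below is a minimal sketch of a greedy, weighted k-medoid-style matching over the pooled trajectory points; uniform weights and squared Euclidean distance are assumptions, not necessarily the exact choices in the paper:

```python
import numpy as np

def match_batch(trajectories, k, weights=None):
    """Greedily pick k batch points that best match simulated sequential trajectories.

    trajectories : array of shape (n, k, d) — n simulated length-k runs of the policy
    Returns the selected batch as an array of shape (k, d).
    """
    points = trajectories.reshape(-1, trajectories.shape[-1])     # pool all simulated points
    weights = np.ones(len(points)) if weights is None else weights
    # squared distances between every pooled point and every candidate center
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    chosen, best = [], np.full(len(points), np.inf)   # best: distance to nearest chosen center
    for _ in range(k):
        # total weighted matching cost if candidate j were added to the batch
        costs = [np.inf if j in chosen else float(weights @ np.minimum(best, d2[:, j]))
                 for j in range(len(points))]
        j = int(np.argmin(costs))
        chosen.append(j)
        best = np.minimum(best, d2[:, j])
    return points[chosen]
```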

Page 29: Results (5)

[Figure: results for the greedy batch-selection approach]

Page 30: Scheduling Methods for Bayesian Optimization (NIPS 2011, spotlight)

Page 31: Extended BO Model

Problem: schedule when to start new experiments and which ones to start, given stochastic experiment durations.

[Figure: experiments x1–xn assigned over time to Labs 1 through l within a time horizon h]

We consider the following:
• Concurrent experiments (up to l experiments at any time)
• Stochastic experiment durations (with known distribution p)
• An experiment budget (a total of n experiments)
• An experimental time horizon h

Page 32: Challenges

Objective 1: finish all n experiments within the time horizon h (favors maximizing concurrency)

Objective 2: maximize the information used in selecting each experiment (favors minimizing concurrency)

We present online and offline approaches that effectively trade off these two conflicting objectives

[Figure: example assignment of experiments x1–x7 to Labs 1–4 over time]

Page 33: Objective Function

• The cumulative prior experiments (CPE) of an execution E is measured as follows:

• Example: suppose n1 = 1, n2 = 5, n3 = 5, n4 = 2. Then CPE = (1·0) + (5·1) + (5·6) + (2·11) = 57
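The formula itself is an image on the slide and did not survive extraction; from the example above, CPE appears to take the following form (a reconstruction, where n_j is the number of experiments started at stage j):

$$\mathrm{CPE}(E) \;=\; \sum_{j} n_j \sum_{i < j} n_i$$

That is, each experiment is credited with the number of experiments already completed before it starts; the example gives 1·0 + 5·1 + 5·6 + 2·11 = 57.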

• We found a non-trivial correlation between CPE and regret


Page 34: Offline Scheduling

• Assign start times to all n experiments before the experimental process begins

• The experiment selection itself is still done online

• Two classes of schedules are presented:
  – Staged Schedules
  – Independent Labs

Page 35: Staged Schedules

• There are N stages, and each stage is represented as <ni, di> (ni experiments run for duration di)
  – CPE is calculated from the stage sizes ni, as above
  – We call a schedule uniform if |ni − nj| < 2 for all i, j

[Figure: a 4-stage schedule over horizon h with stage durations d1–d4 and stage sizes n1 = 4, n2 = 3, n3 = 4, n4 = 3, covering experiments x1–x14]

• Goal: find a p-safe uniform schedule with the maximum number of stages.

Page 36: Staged Schedules: Schedule

Page 37: Independent Lab (IL)

• Assigns mi experiments to each lab i such that the mi sum to n

• Experiments are distributed uniformly within each lab's schedule

• The start times of different labs are decoupled

• The experiments in each lab are given equal durations so as to maximize the probability of finishing within the horizon h

• Mainly designed for the policy-switching schedule

[Figure: an independent-lab schedule over horizon h — Lab 1 runs x11–x14, Lab 2 runs x21–x24, Lab 3 runs x31–x33, Lab 4 runs x41–x43]

Page 38: Online Schedules

• The p-safe guarantee is fairly pessimistic, so in practice we can decrease the degree of parallelization

• Online schedules select the start times of experiments online rather than offline

• More flexible than offline schedules

Page 39: Baseline Online Algorithms

• Online Fastest Completion Policy (OnFCP)
  – Finishes all n experiments as quickly as possible
  – Keeps all l labs busy as long as there are experiments left to run
  – Achieves the lowest possible CPE

• Online Minimum Eager Lab Policy (OnMEL)
  – OnFCP does not attempt to use the full time horizon
  – OnMEL instead uses only k labs, where k is the minimum number of labs required to finish the n experiments within the horizon with probability p
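A minimal sketch (not from the slides) of how the k used by OnMEL could be estimated by simulation; `sample_duration` is a hypothetical callable that draws one experiment duration, and the earliest-free-lab assignment is an assumption:

```python
import numpy as np

def min_labs(n, horizon, sample_duration, p=0.95, n_sims=1000, rng=None):
    """Smallest number of labs k that finishes n experiments within the horizon
    with probability at least p, estimated by simulation."""
    rng = rng or np.random.default_rng()
    for k in range(1, n + 1):
        successes = 0
        for _ in range(n_sims):
            finish = np.zeros(k)                  # next free time of each lab
            for _ in range(n):
                j = int(np.argmin(finish))        # start on the earliest-free lab
                finish[j] += sample_duration(rng)
            successes += finish.max() <= horizon
        if successes / n_sims >= p:
            return k
    return n
```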


Page 40: Policy Switching (PS)

• PS decides how many new experiments to start at each decision step

• Assumes a set of policies, or a policy generator, is given

• The goal is to define a new policy that performs as well as or better than the best given policy at any state s

• The i-th generated policy waits for i experiments to finish and then calls the offline IL algorithm to reschedule

• The policy that achieves the maximum CPE is returned

• The CPE of the switching policy will not be much worse than the best of the policies produced by our generator
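A minimal sketch of the switching rule, where `simulate_cpe` is a hypothetical helper that rolls a base policy out from the current state and returns the resulting CPE; the number of simulations is an assumed parameter:

```python
import numpy as np

def policy_switch(state, policies, simulate_cpe, n_sims=50, rng=None):
    """At each decision point, follow the base policy whose simulated CPE
    from the current state is highest."""
    rng = rng or np.random.default_rng()
    est = [np.mean([simulate_cpe(p, state, rng) for _ in range(n_sims)])
           for p in policies]
    return policies[int(np.argmax(est))]
```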


Page 41: Experimental Results

Setting: h = 4, 5, 6; durations drawn from a truncated normal distribution; n = 20 experiments and l = 10 labs.

[Table/Figure: best CPE and best performance in each setting]

Page 42: Future Work

Page 43: Traditional Approaches

• Sequential:
  – Only one experiment is selected at each iteration
  – Pros: performance is optimized
  – Cons: can be very costly when running one experiment takes a long time

• Batch:
  – k > 1 experiments are selected at each iteration
  – Pros: k-times speed-up compared to sequential approaches
  – Cons: cannot perform as well as sequential algorithms

Page 44: Batch Performance (Azimi et al., NIPS 2010)

[Figure: batch vs. sequential performance for batch sizes k = 5 and k = 10]

Page 45: Hybrid Batch

• Sometimes the points selected by a given sequential policy over a few consecutive steps are independent of each other

• The size of the batch can therefore change at each time step (hybrid batch size)

Page 46: First Idea (NIPS Workshop 2011)

• Based on a given prior (blue circles) and an objective function (MEI), x1 is selected

• To select the next experiment, x2, we need y1 = f(x1), which is not yet available

• The statistics of the samples inside the red circle (the neighborhood of x1) are expected to change after observing the actual y1

• We set y1 = M; the EI of the next step is then upper bounded

• If the next selected experiment lies outside the red circle, we claim it is independent of x1

[Figure: selected points x1, x2, x3, with the neighborhood of x1 marked by a red circle]
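A minimal sketch of this first idea, assuming a scikit-learn GP surrogate; the optimistic value M, the neighborhood radius (the "red circle"), and the helper `ei` are illustrative assumptions, not the thesis implementation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def ei(mu, sigma, y_best):
    """Expected improvement of each candidate over the best observation."""
    z = (mu - y_best) / np.maximum(sigma, 1e-12)
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def hybrid_step(X, y, candidates, M, radius):
    """Pick x1 by MEI, optimistically assume y1 = M, pick x2 under that
    assumption, and batch x2 with x1 only if it lies outside x1's neighborhood."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x1 = candidates[int(np.argmax(ei(mu, sigma, y.max())))]
    gp_opt = GaussianProcessRegressor(normalize_y=True).fit(
        np.vstack([X, x1]), np.append(y, M))      # pretend we observed y1 = M
    mu2, sigma2 = gp_opt.predict(candidates, return_std=True)
    x2 = candidates[int(np.argmax(ei(mu2, sigma2, max(y.max(), M))))]
    # claim independence (and batch the two points) only if x2 is outside the red circle
    return [x1, x2] if np.linalg.norm(x2 - x1) > radius else [x1]
```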

Page 47: Next

• Setting y1 = M is very pessimistic, so the resulting speed-up is small

• Can we select the next point based on some other estimate of y1 without degrading performance?

• How far are the experiments selected in a batch from those the sequential policy would actually have selected?

Page 48: Timeline

• Spring 2012: finish the hybrid batch approach

• Summer 2012: find a job and final defense (hopefully)

Page 49: Publications

Page 50: And …

• I would like to thank Dr. Xiaoli Fern and Dr. Alan Fern


Page 52: Results (1)

[Figure: results for the Random baseline]

Page 53: Results (2)

[Figure: results for the Sequential baseline]

Page 54: Results (3)

[Figure: results for the EMAX approach]

Page 55: Results (4)

[Figure: results for the K-means approach]

Page 56: Constrained BO: Results

[Figure: CMC-MUI results (with a Random baseline) on the Cosines, Rosenbrock, and real Fuel Cell benchmarks]

Page 57: Constrained BO: Results

[Figure: CN-MEI results on the Cosines, Rosenbrock, and real Fuel Cell benchmarks]

Page 58: Constrained BO: Results

[Figure: CMC-MPI(0.2) results on the Cosines, Rosenbrock, and real Fuel Cell benchmarks]

Page 59: PS Performance Bound

• Our policy generator produces a set of candidate base policies at each time step t and state s

• A state s consists of the currently running experiments (with their start times) and the completed experiments

• The policy-switching result at each step is defined relative to the base policy selected in the previous step

• The decision of the switching policy is estimated from N independent simulations of each base policy

• The simulated CPE of each policy is estimated with some error, which enters the performance bound