Experimental Algorithmics Reading Group, UBC, CS

Presented paper: "Fine-tuning of Algorithms Using Fractional Experimental Designs and Local Search" by Belarmino Adenso-Díaz (Barcelona) and Manuel Laguna (Colorado), OR Journal 2006

Presenter: Frank Hutter, 23 Aug 2006


TRANSCRIPT

Page 1: Experimental Algorithmics Reading Group, UBC, CS

Presented paper: "Fine-tuning of Algorithms Using Fractional Experimental Designs and Local Search" by Belarmino Adenso-Díaz (Barcelona) and Manuel Laguna (Colorado), OR Journal 2006

Presenter: Frank Hutter, 23 Aug 2006

Page 2: Motivation

• Anecdotal evidence that, of the total time for designing and testing a new (meta-)heuristic:
  10% is spent on development
  90% is spent on fine-tuning parameters
• (In my opinion, 90% is maybe a bit too high. If you ever see real statistics about this, please let me know!)

Page 3: Motivation (2)

• Barr et al. (1995) (we read this in Nov 2004): "The selection of parameter values that drive heuristics is itself a scientific endeavor and deserves more attention than it has received in the operations research literature. This is an area where the scientific method and statistical analysis could and should be employed."

Page 4: Motivation (3)

• Parameter tuning in the OR literature:
  1) "Parameter values have been established experimentally" (without stating the procedure)
  2) Parameters are simply given without explanation, often different for each problem class or even for each instance
  3) Parameter values previously determined to be effective are reused (simulated annealing, Guided Local Search for MPE)
  4) Sometimes the employed experimental design is stated

Page 5: Objective function to be minimized

• Runtime for solving a training set of decision problems
  They only do one run per instance and (wrongly?) refer to runs on different instances as replications
• Average deviation from the optimal solution, in optimization algorithms
• In general: some combination of speed and accuracy
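The "combination of speed and accuracy" can be made concrete as, e.g., a weighted sum over the training set. A minimal sketch; the function name and the weight w are my own choices, not something the paper prescribes:

```python
def tuning_objective(runtimes, deviations, w=0.5):
    """Score one parameter setting on a training set (lower is better):
    a weighted sum of mean runtime and mean deviation from the optimum.
    The trade-off weight w is a free design choice."""
    mean_runtime = sum(runtimes) / len(runtimes)
    mean_deviation = sum(deviations) / len(deviations)
    return w * mean_runtime + (1 - w) * mean_deviation
```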

Page 6: Design of experiments

• Includes:
  1) The set of treatments included in the study
  2) The set of experimental units included in the study
  3) The rules and procedures by which treatments are assigned to experimental units
  4) Analysis (measurements made on the experimental units after the treatments have been applied)

Page 7: Different designs

• Full factorial experimental design
  2^k factorial: 2 levels (critical values) per variable
  3^k factorial
• Fractional factorial experiment
  Orthogonal array with n=8 runs, k=5 factors, s=2 levels, and strength t=2
  An n x k array with entries 0 to s-1 and the property that in any t columns, each of the s^t possible level combinations appears equally often (the projections to lower dimensions are balanced)
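The balance property can be checked mechanically. A sketch, using a 2^(5-2) fractional factorial as a concrete orthogonal array with n=8, k=5, s=2, t=2 (an assumption for illustration; the paper's specific array is not reproduced here):

```python
from itertools import combinations, product

def fractional_factorial_2_5_2():
    """The 2^(5-2) fractional factorial with generators D = A xor B and
    E = A xor C: 8 runs over 5 two-level factors."""
    rows = []
    for a, b, c in product([0, 1], repeat=3):
        rows.append([a, b, c, a ^ b, a ^ c])
    return rows

def has_strength(array, s, t):
    """Check the defining property: in any t columns, each of the s^t
    level combinations appears equally often (n / s^t times)."""
    n, k = len(array), len(array[0])
    for cols in combinations(range(k), t):
        counts = {}
        for row in array:
            key = tuple(row[c] for c in cols)
            counts[key] = counts.get(key, 0) + 1
        if len(counts) != s**t or set(counts.values()) != {n // s**t}:
            return False
    return True
```

The same checker also shows why the design is only strength 2: in the three columns A, B, and A xor B, the third is determined by the first two, so only half of the triples ever occur.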

Page 8: Aside: Taguchi design of experiments

• Genichi Taguchi: robust parameter design
• Set controllable parameters to achieve maximum output with low variance

Page 9: Taguchi design applied here

• L9(3^4) is a design with nine runs, 4 variables with 3 values each, and strength 2 (for each pair of variables, each of the 9 value combinations occurs exactly once)
• Based on this, you can estimate the "optimal condition", even if it is not one of the 9 runs performed (how? separate topic)
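One standard way to estimate the "optimal condition" from the 9 runs is a main-effects (marginal-means) analysis: a common Taguchi-style procedure, not necessarily the authors' exact method, and valid only when interactions between factors are negligible:

```python
# The standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def predict_optimal_condition(array, responses):
    """For each factor, average the response over the runs at each of its
    levels and pick the level with the lowest mean (minimization).
    Thanks to strength 2, the other factors' contributions average out,
    so this recovers each factor's best level under an additive model."""
    k = len(array[0])
    best = []
    for f in range(k):
        level_means = []
        for lvl in range(3):
            vals = [r for row, r in zip(array, responses) if row[f] == lvl]
            level_means.append(sum(vals) / len(vals))
        best.append(level_means.index(min(level_means)))
    return best
```

Note that the predicted condition need not be one of the 9 runs actually performed, which is exactly the point made above.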

Page 10: The CALIBRA software

• Limited to 5 parameters
• Starts with a full factorial bi-level design (2^5 = 32 runs)
  25% and 75% "quantiles" of each parameter as levels
  Fixes the parameter with the least significant main effect to its best value
• From then on, does a "local search":
  Choose 3 levels around the last best setting
  Run an L9(3^4) Taguchi design
  Narrow down the levels around the best predicted solution
• When a local optimum is reached:
  Build a new starting point for the local search by combining previous local optima / previous worst solutions
  This is meant to trade off exploration and exploitation, but seems fairly ad hoc
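The narrowing step can be illustrated in one dimension. This is only a skeleton of the idea with invented names, not the actual CALIBRA implementation, which searches jointly over up to five parameters via the L9(3^4) design and adds restarts:

```python
def narrowing_search(f, lo, hi, iters=20):
    """Simplified one-parameter sketch of the 'local search': evaluate
    three levels spanning the current range, recentre on the best one,
    and halve the range. f(center) never increases because the current
    center is always among the candidate levels."""
    center = (lo + hi) / 2
    width = (hi - lo) / 2
    for _ in range(iters):
        levels = [max(lo, center - width), center, min(hi, center + width)]
        center = min(levels, key=f)   # best of the three trial levels
        width /= 2                    # narrow the levels around the incumbent
    return center
```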

Page 11: The CALIBRA software (2)

• Runs only on Windows

• Requires the algorithm to be tuned to be available as a .exe file
  Just write a .exe wrapper

Page 12: The CALIBRA software (3)

• The objective function can be based on multiple instances

• Deal with that inside the algorithm
  Well, in the wrapper

Page 13: The CALIBRA software - Live demo

• Let’s hope it works …

• They do some caching that’s not mentioned in the paper

Page 14: Backup in case it doesn't work

Page 15: Experimental analysis

• Pretty straightforward

• MAXEX is a parameter of major importance!

• They do a little bit better than the manually found parameter settings (or those found with Taguchi designs)

• For these domains, not too much promise in per-instance tuning (Table 5 compared to Table 2)

• Figure 9 vs 10 probably only shows that their performance metric means different things for different domains

Page 16: Points for improvement

• Objective function evaluation requires solving many instances (possibly many times)
  This takes a lot of time even if the results are abysmal
  One could stop an evaluation once it is (statistically) clear that its result won't be better than the best one we already have
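Even without statistics there is a deterministic version of this idea: per-instance costs are nonnegative, so once a configuration's running total exceeds the incumbent's total it can never recover. A sketch, with invented names:

```python
def evaluate_with_cutoff(costs, best_total):
    """Sum the per-instance costs of one parameter setting, but abandon
    the evaluation as soon as the running total reaches the incumbent's
    total -- since costs are nonnegative, the partial sum only grows.
    Returns (total or None if aborted, number of instances actually run)."""
    total = 0.0
    for i, c in enumerate(costs):
        total += c
        if total >= best_total:
            return None, i + 1   # provably cannot beat the incumbent
    return total, len(costs)
```

A statistical test (as suggested above) would cut off even earlier, at the price of occasionally discarding a configuration that would have won.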

• "CALIBRA should be more effective in situations when the interactions among parameters are negligible"
  But then you really don't need anything like this!
  Related work (DACE) builds a model of the whole response surface; I expect that to work better