Exact Exponential-Time Algorithms


Page 1: Exact Exponential-Time Algorithms

1

Exact Exponential-Time

Algorithms

Algorithms and Networks 2017/2018

Johan M. M. van Rooij

Hans L. Bodlaender

Page 2: Exact Exponential-Time Algorithms

2

Solution quality (columns) versus computation time (rows):

Polynomial time:
• Optimal: polynomial solution algorithms.
• Bound on quality: approximation algorithms.
• Good solution, no quality guarantee: construction heuristics.

Super-polynomial time and/or no guarantee:
• Optimal: exact algorithms (tree search, dynamic programming, integer linear programming, ...).
• Bound on quality: hybrid algorithms (column generation without complete branch-and-price).
• Good solution, no quality guarantee: meta-heuristics (local search, genetic algorithms).

Page 3: Exact Exponential-Time Algorithms

3

(The same solution-quality versus computation-time diagram as on the previous slide, now annotated with where this course, Algorithms and Networks, fits.)

Page 4: Exact Exponential-Time Algorithms

4

What to do if a problem is NP-complete?

1. Solve on special cases.

2. Heuristics and approximations.

3. Algorithms that are fast on average.

4. Good exponential-time algorithms.

5. …

Exact exponential-time algorithms is one of many options.

Practical applications exist.

If the instance is small, and a simple exponential-time algorithm exists, then why not use it?

Sometimes useful as a quick way to solve small subproblems.

If one needs the guarantee that an optimal solution is always found, there is little other choice.

Page 5: Exact Exponential-Time Algorithms

5

Exact exponential-time algorithms in this course

Four or Five Lectures

1. Introduction to exponential-time algorithms.

2. Overview of some techniques.

3. Inclusion/Exclusion.

4. Measure-and-conquer analysis.

5. Complexity theory of parameterised and exact algorithms (end of the course; exact algorithms part if time allows).

Many ideas from exact exponential-time algorithms are also applicable to or are related to parameterised algorithms.

Page 6: Exact Exponential-Time Algorithms

6

Today: lecture one.

Introduction to exponential-time algorithms.

Techniques in this lecture:

1. Clever enumeration.

2. Dynamic programming.

Problems:

Maximum/Maximal Independent Set.

Graph Colouring / k-Colouring.

Travelling Salesman Problem.

Page 7: Exact Exponential-Time Algorithms

7

INTRODUCTION

Exponential-time algorithms – Algorithms and Networks

Page 8: Exact Exponential-Time Algorithms

8

Good exponential-time algorithms

Algorithms with a running time of O( c^n · p(n) )

c a constant

p() a polynomial

Notation: O*(c^n)

Smaller c helps a lot!

O*(f(n)) hides polynomial factors, i.e.,

O*(f(n)) = O( p(n) · f(n) ) for some polynomial p.

Page 9: Exact Exponential-Time Algorithms

9

Good exponential-time algorithms improve upon the trivial algorithm

What would be a trivial algorithm for the following problem?

Maximum Independent Set:

Instance: graph G = (V,E), integer k.

Question: does G have an independent set of size ≥ k?

An independent set is a subset of the vertices such that no two vertices in it are adjacent.

What would we consider a trivial algorithm in general?

For problems in NP (e.g. NP-Complete problems), the answer involves the definition of P and NP:

... I give the answer after a short recap on P and NP.

Page 10: Exact Exponential-Time Algorithms

10

P and NP

A decision problem belongs to the class P if there is an algorithm solving the problem with a running time that is polynomial in the input size.

A decision problem belongs to the class NP if:

Any solution y leading to 'yes' can be encoded in polynomial space with respect to the size of the input x.

Checking whether a given solution leads to 'yes' can be done in polynomial time with respect to the size of x and y.

Slightly more formal:

Problem Q belongs to the class NP if there exists a polynomial-time two-argument algorithm A such that: for each instance i, i is a yes-instance of Q if and only if

there is a polynomial-size certificate c for which A(i,c) = true.

Page 11: Exact Exponential-Time Algorithms

11

Certificates

What are natural certificates for the following problems?

Maximum Independent Set:

Instance: graph G = (V,E), integer k.

Question: does G have an independent set of size ≥ k?

Travelling Salesman Problem:

Instance: n vertices (cities) with distance between every pair of vertices, integer k.

Question: is there a (simple) cycle that visits every city of total length ≤ k?

[Figure: an example TSP instance on four cities with edge lengths.]

Page 12: Exact Exponential-Time Algorithms

12

Good exponential-time algorithms improve upon the trivial algorithm

What is a trivial algorithm?

For problems in NP (e.g. NP-Complete problems), the answer lies in the definition of NP:

Choose a definition for the problem as part of NP, i.e., choose a way to encode certificates.

Trivial algorithm: brute-force enumerate all certificates and check whether one of them proves that we have a 'yes' instance.

Exponential part of the running time of the trivial algorithm corresponds to the number of possible certificates.

We consider as trivial those algorithms that enumerate natural certificates.

Sometimes one can use a smarter encoding than the natural encoding to make the certificates smaller.

Thus, formally, 'trivial' is only defined after fixing the encoding.

Page 13: Exact Exponential-Time Algorithms

13

Good exponential-time algorithms improve upon a trivial algorithm

There is a lot of choice in defining these certificates, so there can be multiple natural ‘trivial algorithms’.

Satisfiability: O*( 2^n ).

Independent set / vertex cover: O*( 2^n ).

TSP: O*( n! ) = O*( 2^(n log n) ) or O*( 2^m ).

Choices often depend on what parameter of ‘certificate size’ you use (nr of nodes, nr of edges, etc.)

The goal of exact exponential-time algorithms is to improve upon this brute force approach as much as possible.

In this lecture, we will do so using:

1. Clever enumeration.

2. Dynamic programming

Page 14: Exact Exponential-Time Algorithms

14

Time and space

Exact exponential-time algorithms are often compared on two properties:

Running time.

Space usage.

Exponential time is a resource we often have.

As long as the input is small and the algorithm is fast enough.

Just keep your computer running all night (or worse).

Exponential space can be a much bigger problem.

One can easily use all of your computer's memory.

Algorithms that require a lot of disk swapping are often too slow for practical use.

Page 15: Exact Exponential-Time Algorithms

15

CLEVER ENUMERATION

Exponential-time algorithms – Algorithms and Networks

Page 16: Exact Exponential-Time Algorithms

16

Enumeration for maximum independent set

Maximum Independent Set:

Instance: graph G = (V,E), integer k.

Question: does G have an independent set of size ¸ k?

Algorithm:

If |V| = 0, return 0.

Choose a vertex v ∈ V of minimum degree.

For v and each neighbour of v, i.e., for each u ∈ N[v]:

Solve the subproblem on G\N[u] (G without u and its neighbours).

Return 1 + the maximum size solution found.
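An illustrative Python sketch of this branching rule; the graph representation (a dict from vertex to its set of neighbours) and the function name are my own choices, not from the slides.

```python
def mis_size(adj):
    """Maximum independent set size; adj: dict vertex -> set of neighbours."""
    if not adj:
        return 0
    v = min(adj, key=lambda u: len(adj[u]))          # vertex of minimum degree
    best = 0
    for u in [v] + list(adj[v]):                     # u ranges over N[v]
        removed = {u} | adj[u]                       # N[u]: u and its neighbours
        sub = {w: adj[w] - removed for w in adj if w not in removed}
        best = max(best, 1 + mis_size(sub))          # branch: u goes into the solution
    return best

# Example: a path on three vertices has a maximum independent set of size 2.
# mis_size({1: {2}, 2: {1, 3}, 3: {2}}) == 2
```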

Page 17: Exact Exponential-Time Algorithms

17

Algorithm:

If |V| = 0, return 0.

Choose a vertex v ∈ V of minimum degree.

For v and each neighbour of v, i.e., for each u ∈ N[v]:

Solve the subproblem on G\N[u] (G without u and its neighbours).

Return 1 + the maximum size solution found.

Analysis of enumeration for maximum independent set

We generate d(v) + 1 sub-problems, each with at least d(v) + 1 vertices removed.

T(n) ≤ s · T( n - s ),

where T(n) is the number of leaves of the search tree on a graph with n vertices, and s = d(v) + 1.

One can show T(n) = O*( s^(n/s) ).

s^(n/s) is maximal over integers s at s = 3.

So: T(n) = O*( 3^(n/3) ) = O( 1.4423^n ).

This improves the trivial O*( 2^n ) algorithm.

Page 18: Exact Exponential-Time Algorithms

18

Maximum and maximal independent sets

Maximum independent set:

Maximum size over all independent sets in G.

Maximal independent set:

Maximal in the sense that we cannot add a vertex to the set while keeping it independent.

We will use a slight modification of the algorithm to enumerate all maximal independent sets.

Because each leaf of the search tree contains at most one maximal independent set, this also proves a bound on the number of maximal independent sets in a graph.

Corollary:

Any graph contains at most O(1.4423^n) maximal independent sets.

Page 19: Exact Exponential-Time Algorithms

19

The enumeration algorithm

Algorithm (for enumerating maximal independent sets):

1. If |V| = 0, return ∅.

2. Choose a vertex v ∈ V of minimum degree.

3. For v and each neighbour of v, i.e., for each u ∈ N[v]:

Solve the subproblem on G\N[u] (G without u and its neighbours), add u to the generated solutions.

4. Return all maximal independent sets found.

All maximal independent sets can be enumerated in O*( 3^(n/3) ) = O( 1.4423^n ) time.

Each leaf of the search tree contains at most one maximal independent set, so any graph has at most O*( 3^(n/3) ) = O( 1.4423^n ) maximal independent sets.

This bound is tight: a disjoint union of n/3 triangles has exactly 3^(n/3) maximal independent sets.

Page 20: Exact Exponential-Time Algorithms

20

Graph colouring

Graph Colouring

Given: Graph G=(V,E), integer k.

Question: can we colour the vertices with k colours, such that for all edges {v,w} in E, the colour of v differs from the colour of w?

k-Colouring

Given: Graph G=(V,E).

Question: can we colour the vertices with k colours, such that for all edges {v,w} in E, the colour of v differs from the colour of w?

[Figure: an example of a 3-coloured graph.]

Page 21: Exact Exponential-Time Algorithms

21

Graph colouring

1-Colouring is easy in O(n) time.

2-Colouring is easy in O(n+m) time.

k-Colouring is NP-complete for k ≥ 3.

Applications:

Scheduling

Frequency assignment

...

Practical problems are usually more complex variants of the standard graph colouring problem.

Page 22: Exact Exponential-Time Algorithms

22

3-Colouring

O*(3^n) is trivial; can we do this faster?

G is 3-colourable if and only if there is a set of vertices S with:

S is an independent set.

G[V-S] is 2-colourable.

Algorithm: enumerate all sets, and test these properties.

2^n tests of O(n+m) time each.

So depending on how we define certificates, 3-colouring is ‘trivial’ to solve in O*(3^n) or O*(2^n) time.
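An illustrative Python sketch of this O*(2^n) certificate test (same dict-of-sets graph representation as in the earlier sketch; function names are mine):

```python
from itertools import combinations

def two_colourable(adj, vertices):
    """Greedy 2-colouring of the subgraph induced by `vertices`."""
    colour = {}
    for start in vertices:
        if start in colour:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for w in adj[u] & vertices:
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    stack.append(w)
                elif colour[w] == colour[u]:
                    return False          # odd cycle found
    return True

def three_colourable(adj):
    """G is 3-colourable iff some independent set S leaves a 2-colourable rest."""
    vs = list(adj)
    for r in range(len(vs) + 1):
        for subset in combinations(vs, r):
            S = set(subset)
            if all(adj[u].isdisjoint(S) for u in S) and two_colourable(adj, set(vs) - S):
                return True
    return False
```

Replacing the loop over all subsets by a loop over maximal independent sets only (next slide) yields Lawler's O(1.4423^n) bound.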

Page 23: Exact Exponential-Time Algorithms

23

3-Colouring

Lawler, 1976: G is 3-colourable if and only if there is a set of vertices S with:

S is a maximal independent set.

G[V-S] is 2-colourable.

Enumerate all maximal independent sets in O*( 3^(n/3) ) = O( 1.4423^n ) time.

This gives an O( 1.4423^n )-time algorithm for 3-colouring.

Schiermeyer, 1994: O( 1.398^n ) time.

Beigel, Eppstein, 1995: O( 1.3446^n ) time.

Beigel, Eppstein, 2005: O( 1.3289^n ) time.

Page 24: Exact Exponential-Time Algorithms

24

4-Colouring in O*(2^n) time

Lawler, 1976: G is 4-colourable if and only if we can partition the vertices into two sets X and Y such that:

G[X] and G[Y] are both 2-colorable.

Enumerate all partitions in O*(2^n) time.

For each, check both halves in O(n+m) time.

k-Colouring is ‘trivial’ to solve in O*( ⌈k/2⌉^n ) time.

Enumerate all partitions into ⌈k/2⌉ disjoint sets.

Check that each part is 2-colourable (if k is odd, for all but one part, which must then be an independent set).

Page 25: Exact Exponential-Time Algorithms

25

Faster 4-colouring

Using 3-Colouring: Enumerate all maximal independent sets S.

For each, check 3-colourability of G[V-S].

1.4423^n · 1.3289^n = 1.9167^n.

Better: in a 4-colouring, some colour class has at least n/4 vertices. Enumerate all maximal independent sets S with at least n/4 vertices.

For each, check 3-colourability of G[V-S].

1.4423^n · 1.3289^(3n/4) = 1.7852^n.

Byskov, 2004: O(1.7504^n) time.

Fomin, Gaspers, Saurabh, 2007: O(1.7272^n) time.

Page 26: Exact Exponential-Time Algorithms

26

More Techniques for Exact

Exponential-Time Algorithms

Algorithms and Networks 2017/2018

Johan M. M. van Rooij

Hans L. Bodlaender

Page 27: Exact Exponential-Time Algorithms

27

Content of this lecture

Five Lectures

1. Introduction to exponential-time algorithms.

2. Overview of some techniques.

Dynamic Programming

Branching algorithms (divide and conquer).

Meet in the middle (a.k.a. Sort and Search).

Local search – may be dropped due to time restrictions.

3. Inclusion/Exclusion.

4. Measure-and-conquer analysis.

5. Complexity theory of parameterised and exact algorithms (end of the course).

Why these?

Useful also in other contexts, e.g., FPT algorithms (later on).

Page 28: Exact Exponential-Time Algorithms

28

DYNAMIC PROGRAMMING

Exponential-time algorithms – Algorithms and Networks

Page 29: Exact Exponential-Time Algorithms

29

Graph colouring with dynamic programming

Consider the graph colouring problem, not k-colouring. (The number of colours is no longer fixed).

Lawler, 1976: using DP for solving graph colouring, based on: χ(G) = 1 + min over maximal independent sets S in G of χ(G[V-S]).

Tabulate the chromatic number of G[W] for all subsets W ⊆ V, in increasing order of size,

using the formula above.

Total time: 2^n · 1.4423^n = 2.8846^n.

Total space: O*(2^n).

Page 30: Exact Exponential-Time Algorithms

30

Graph colouring

Lawler, 1976: O(2.4423^n) (improved analysis).

Eppstein, 2003: O(2.4151^n).

Byskov, 2004: O(2.4023^n).

All using O*(2^n) memory.

Improvements on the DP method:

Björklund, Husfeldt, 2005: O(2.3236^n).

Björklund, Husfeldt, 2006: inclusion/exclusion: O*(2^n).

The last algorithm will be in the next lecture!

Page 31: Exact Exponential-Time Algorithms

31

Held-Karp algorithm for Travelling Salesman Problem

Take one starting vertex s arbitrarily.

For a set of vertices S ⊆ V\{s} and a vertex v ∈ S, let:

B(S,v) = minimum length of a path, that:

Starts in s.

Visits s and all vertices in S (and no other vertices).

Ends in v.

Algorithm:

1. B({v},v) = d(s,v).

2. For j = 2 to |V|-1:

For all sets S with |S| = j, and all v ∈ S:

B(S,v) = min_{w ∈ S\{v}} { B(S\{v}, w) + d(w,v) }

3. Return min{ B(V\{s}, v) + d(v,s) : v ∈ V\{s} }.


Page 32: Exact Exponential-Time Algorithms

32

Held-Karp algorithm for TSP

Algorithm:

1. B({v},v) = d(s,v).

2. For j = 2 to |V|-1:

For all sets S with |S| = j, and all v ∈ S:

B(S,v) = min_{w ∈ S\{v}} { B(S\{v}, w) + d(w,v) }

3. Return min{ B(V\{s}, v) + d(v,s) : v ∈ V\{s} }.

This improves the trivial O*( n! ) algorithm.

Time: O( n^2 · 2^n )

Space: O( n · 2^n )
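An illustrative Python sketch of the Held-Karp DP; it assumes a symmetric distance matrix d indexed 0..n-1 with start city s = 0 and at least two cities (names are mine, not from the slides).

```python
from itertools import combinations

def held_karp(d):
    """Held-Karp DP for TSP; d[i][j] is the distance between cities i and j."""
    n = len(d)
    # B[(S, v)] = shortest path from 0 that visits exactly the cities in S and ends in v.
    B = {(frozenset([v]), v): d[0][v] for v in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for v in S:
                B[(S, v)] = min(B[(S - {v}, w)] + d[w][v] for w in S if w != v)
    full = frozenset(range(1, n))
    # Close the tour: best path through all other cities, plus the edge back to 0.
    return min(B[(full, v)] + d[v][0] for v in full)
```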


Page 33: Exact Exponential-Time Algorithms

33

BRANCHING ALGORITHMS(DIVIDE AND CONQUER)

Exponential-time algorithms – Algorithms and Networks

Page 34: Exact Exponential-Time Algorithms

34

Branching algorithm: divide and conquer

Branching Algorithm:

Algorithm that splits the current problem instance into multiple easier subinstances that are solved recursively.

Earlier example: maximum/maximal independent set algorithm.

Let T(n) be the number of subproblems generated on an instance of size n.

Analysis of a branching algorithm generating k subproblems, where subproblem i is di smaller than the original instance:

T( n ) ≤ T( n - d1 ) + T( n - d2 ) + … + T( n - dk ) + 1

T( 1 ) = 1

The solution of this recurrence is O*( a^n ), where a is the unique positive real solution to: x^n - x^(n-d1) - x^(n-d2) - … - x^(n-dk) = 0.

a is often denoted τ( d1, d2, …, dk ).
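τ( d1, …, dk ) can be computed numerically from the equivalent equation 1 = x^(-d1) + … + x^(-dk); an illustrative Python sketch using bisection (names are mine):

```python
def branching_number(*ds):
    """Root x > 1 of 1 = sum(x**-d for d in ds), i.e. tau(d1, ..., dk)."""
    lo, hi = 1.0, float(len(ds)) + 1.0    # tau is at most k + 1 for positive depths
    for _ in range(100):                  # bisection; sum(x**-d) is decreasing in x
        mid = (lo + hi) / 2
        if sum(mid ** -d for d in ds) > 1:
            lo = mid                      # still below the root
        else:
            hi = mid
    return hi

# Values from these slides:
# branching_number(3, 3, 3) ~ 1.4423, branching_number(1, 2, 3) ~ 1.8393,
# branching_number(1, 4) ~ 1.3803
```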

Page 35: Exact Exponential-Time Algorithms

35

Branching algorithm for k-SAT

k-SAT (k-Satisfiability):

Instance: set of clauses C (logical formula in CNF format) where each clause contains at most k literals.

Question: does there exist a satisfying assignment to the variables in C, i.e., a truth assignment such that each clause has at least one literal set to true?

Trivial algorithm: O*(2^n).

Page 36: Exact Exponential-Time Algorithms

36

Branching algorithm for k-SAT

Algorithm (for k-SAT formula F):

1. If F contains the empty clause: return false.

2. If F is the empty formula: return true.

3. Choose the smallest clause ( l1, l2, ..., lc ) from F.

4. Recursively solve:

1. F with l1 = true.

2. F with l1 = false, l2 = true.

3. ...

4. F with l1 = false, l2 = false, ..., lc-1 = false, lc = true.

5. Return true if and only if any recursive call returned true.
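An illustrative Python sketch of this branching algorithm, with a clause stored as a list of signed integers (a positive literal i means xi, a negative literal -i means ¬xi; names are mine):

```python
def simplify(clauses, literal):
    """Set `literal` to true: drop satisfied clauses, remove the negated literal."""
    return [[l for l in c if l != -literal] for c in clauses if literal not in c]

def sat(clauses):
    """Branching algorithm for CNF satisfiability."""
    if not clauses:
        return True                        # empty formula: satisfied
    if any(len(c) == 0 for c in clauses):
        return False                       # empty clause: contradiction
    clause = min(clauses, key=len)         # smallest clause (l1, ..., lc)
    fixed = []
    for lit in clause:
        # Branch: all previous literals of this clause false, current literal true.
        branch = clauses
        for f in fixed:
            branch = simplify(branch, -f)
        if sat(simplify(branch, lit)):
            return True
        fixed.append(lit)
    return False
```

For instance, sat([[1, 2], [-1, 2], [-2]]) returns False: the unit clause forces x2 = false, after which the first two clauses cannot both be satisfied.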

Page 37: Exact Exponential-Time Algorithms

37

Branching algorithm for k-SAT: analysis

Algorithm is correct since in each clause at least one literal must be set to true.

Recurrence relation describing the algorithm:

T( n ) ≤ T( n - 1 ) + T( n - 2 ) + … + T( n - c )

Worst case:

T( n ) ≤ T( n - 1 ) + T( n - 2 ) + … + T( n - k )

For different values of k, this leads to:

k = 3: τ( 1,2,3 ) < 1.8393, giving O( 1.8393^n ).

k = 4: τ( 1,2,3,4 ) < 1.9276, giving O( 1.9276^n ).

k = 5: τ( 1,2,3,4,5 ) < 1.9660, giving O( 1.9660^n ).

Page 38: Exact Exponential-Time Algorithms

38

Reduction rules for independent set

Often, branching algorithms use reduction rules.

Rules that simplify the instance without branching.

For Maximum Independent Set we can formulate the following rules for any v ∈ V:

1. Reduction rule 1: if v has degree 0, put v in the solution set and recurse on G-v.

2. Reduction rule 2: if v has degree 1, then put v in the solution set. Suppose v has neighbour w; recurse on G - {v,w}.

If v has degree 1, then there is always a maximum independent set containing v.

3. Reduction rule 3: if all vertices of G have degree at most two, solve the problem directly. (Easy in O(n) time.)

Page 39: Exact Exponential-Time Algorithms

39

A faster algorithm

New branching rule:

Take vertex v of maximum degree

Take best of two recursive steps:

• v not in solution: recurse on G – {v}

• v in solution: recurse on G – N[v]; add 1 to solution size.

Algorithm:

Exhaustively apply all reduction rules.

Apply branching rule.

Analysis: T(n) ≤ T(n - 1) + T(n - 4); as v has degree at least 3, we lose

at least 4 vertices in the second case.

O*( τ(1,4)^n ) = O( 1.3803^n ).

This is faster than the O( 1.4423^n ) enumeration algorithm.

Page 40: Exact Exponential-Time Algorithms

40

Maximum independent set:final remarks

More detailed analysis gives better bounds. Many possibilities for improvement. Many papers with huge case analyses exist.

For a long time the best known bound was O(1.1844^n) (Robson, 2001): an extensive, computer-generated case analysis, which includes memoisation (DP).

Recently improved (I think); there are often discussions on the correctness of such partially computer-generated algorithms.

The measure-and-conquer technique allows for a better analysis of branch-and-reduce algorithms (Fomin, Grandoni, Kratsch, 2005): a much simpler algorithm that is only slightly slower than

Robson's. See lecture 4 on exact exponential-time algorithms.

Page 41: Exact Exponential-Time Algorithms

41

MEET IN THE MIDDLE(OR SORT AND SEARCH)

Exponential-time algorithms – Algorithms and Networks

Page 42: Exact Exponential-Time Algorithms

42

Subset Sum

Subset sum problem:

Given: set of positive integers S = {a1,a2,..,an}, integer t.

Question: Is there a subset of S with total sum t?

Example:

S = {1,2,3,8,12,15,27}.

t = 34

Example 2:

S = {1,2,3,8,12,15,27}.

t = 33

Page 43: Exact Exponential-Time Algorithms

43

Sort and Search: algorithm

Algorithm:

1. We split S into two subsets S1 and S2: S1 = {a1, a2, …, a_(n/2)}, S2 = {a_(n/2)+1, …, a_n}.

2. For S1 and S2 we compute tables of size 2^(n/2) containing all sums that can be made using elements from S1 and from S2,

respectively.

3. Sort the two tables in O( 2^(n/2) log(2^(n/2)) ) = O*( 2^(n/2) ) time.

4. Start with the smallest value in S1 and the largest value in S2 and repeat these steps:

1. Move one value further in S1 if the sum is too small.

2. Move one value back in S2 if the sum is too large.

3. Output ‘Yes’ if the sum is exactly t.

4. Output ‘No’ if we finished searching S1 and S2.
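An illustrative Python sketch of this sort-and-search procedure (names are mine):

```python
def all_subset_sums(nums):
    """All sums over subsets of nums (2^len(nums) values, possibly with repeats)."""
    sums = [0]
    for a in nums:
        sums += [s + a for s in sums]
    return sums

def subset_sum(S, t):
    """Meet in the middle: is there a subset of S summing to t?"""
    half = len(S) // 2
    left = sorted(all_subset_sums(S[:half]))     # walk forwards (increasing)
    right = sorted(all_subset_sums(S[half:]))    # walk backwards (decreasing)
    i, j = 0, len(right) - 1
    while i < len(left) and j >= 0:
        total = left[i] + right[j]
        if total == t:
            return True
        if total < t:
            i += 1        # sum too small: move further in the left table
        else:
            j -= 1        # sum too large: move back in the right table
    return False
```

On the example from the earlier slide, subset_sum([1, 2, 3, 8, 12, 15, 27], 33) returns True (27 + 3 + 2 + 1), while target 34 gives False.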

Page 44: Exact Exponential-Time Algorithms

44

The knapsack problem

Knapsack

Given: Set S of items, each with integer value v and integer weight w, integers W and V.

Question: is there a subset of S of weight no more than W, with total value at least V?

We split S into two subsets S1 and S2:

S1 = {s1, s2, …, s_(n/2)}, S2 = {s_(n/2)+1, …, s_n}.

Page 45: Exact Exponential-Time Algorithms

45

Sort and Search: knapsack

Algorithm:

1. For S1 and S2 we compute tables of size 2^(n/2) containing all subsets of items that can be picked from S1 or S2.

Sort both tables by weight.

2. Remove all dominated subsets.

Dominated subsets: subsets for which there exists another subset with more (or equal) value and less (or equal) weight.

The tables for S1 and S2 are now sorted increasingly on both weight and value.

3. Start with the smallest entry in the table for S1 and the largest entry in the table for S2, and keep track of the most valuable combination found:

1. Update the best value if the combined weight of the subset from S1 and the subset from S2 is not too large and the combined value is higher than that of the current best solution.

2. Move one entry further in S1 if the combined weight is too small.

3. Move one entry back in S2 if the combined weight is too large.

Page 46: Exact Exponential-Time Algorithms

46

Sort and search space improvement

Both algorithms can be improved to use O*(2^(n/2)) time and O*(2^(n/4)) space (Schroeppel, Shamir, 1981).

Compare to O*(2^(n/2)) time and space.

Algorithm:

1. Split S into four subsets S1, S2, S3, S4 of equal size.

2. For each subset create a sorted table with all possible sums that we can make using elements from that set.

3. We list all possible sums of elements from S1 ∪ S2 in

increasing order, and all possible sums of elements from S3 ∪ S4 in decreasing order.

How? Next slides.

Combining values from the increasing list on S1 ∪ S2 and the decreasing list on S3 ∪ S4 is identical to the previous

algorithms.

Page 47: Exact Exponential-Time Algorithms

47

Sort and search space improvement: analysis

The algorithm uses O*(2^(n/4)) space:

Four tables of all values one can make using numbers in Si.

The algorithm uses O*(2^(n/2)) time:

If we can list all values from S1 ∪ S2 in increasing order within

this amount of time, without using extra space.

Similarly for all values from S3 ∪ S4 in decreasing order.

Page 48: Exact Exponential-Time Algorithms

48

Sort and search space improvement: listing the numbers

How to list all sums that can be formed from S1 ∪ S2 in increasing order?

1. Select the smallest element e from S1.

2. Make a priority queue containing e + s2 for every element s2 ∈ S2.

3. Whenever we need the next element we take the smallest element from the priority queue e + s2 and replace it by e’+s2

(where e’ is the next biggest element after e in S1).

4. Then, return the next smallest element of the priority queue.

Priority queue holds tuples (e,si) with priority e + si.

The priorities cause the numbers to be listed in increasing order.

The tuple representation allows (e, si) to be replaced by (e, si+1), causing all possible numbers from S1 ∪ S2 to be generated.

2^(n/2) updates, each in O( log(2^(n/4)) ) time, give O*(2^(n/2)) time.
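An illustrative Python sketch of this idea, in an equivalent formulation: here A and B stand for the two sorted tables of subset sums, and a min-heap holds one candidate per entry of A, so the space stays proportional to a single table while the sums come out in increasing order (names are mine).

```python
import heapq

def sums_in_order(A, B):
    """Yield all sums a + b (a in A, b in B) in increasing order, using O(|A|) extra space."""
    A, B = sorted(A), sorted(B)
    # One heap entry per element of A: (current sum, index in A, index in B).
    heap = [(a + B[0], i, 0) for i, a in enumerate(A)]
    heapq.heapify(heap)
    while heap:
        s, i, j = heapq.heappop(heap)
        yield s                                    # next sum in increasing order
        if j + 1 < len(B):
            heapq.heappush(heap, (A[i] + B[j + 1], i, j + 1))
```

With A and B the subset-sum tables of S1 and S2 (each of size 2^(n/4)), this lists the 2^(n/2) sums in increasing order; the decreasing list for S3 ∪ S4 is symmetric.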

Page 49: Exact Exponential-Time Algorithms

49

LOCAL SEARCH

Exponential-time algorithms – Algorithms and Networks

Page 50: Exact Exponential-Time Algorithms

50

Local search

Use exponential time to search a ‘local’ neighbourhood of possible solutions.

Consider 3-Satisfiability.

For two truth assignments a1 and a2 that assign true/false to the variables x1, ..., xn, we define the Hamming distance to be the number of variables on which a1 and a2 differ.

Lemma:

Given any truth assignment a and a 3-SAT formula F, we can decide in O*(3^d) time whether there is a truth assignment that satisfies F at Hamming distance at most d from a.
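The algorithm behind the lemma repeatedly picks a falsified clause and branches on which of its at most three variables to flip; an illustrative Python sketch (assignment a as a dict from variable to bool, clauses as lists of signed integers; names are mine).

```python
def sat_within(clauses, a, d):
    """Is there a satisfying assignment at Hamming distance <= d from a?"""
    falsified = next((c for c in clauses
                      if not any(a[abs(l)] == (l > 0) for l in c)), None)
    if falsified is None:
        return True                     # a itself satisfies the formula
    if d == 0:
        return False
    for lit in falsified:               # some literal of this clause must become true
        var = abs(lit)
        flipped = dict(a)
        flipped[var] = not a[var]
        if sat_within(clauses, flipped, d - 1):
            return True
    return False
```

With at most three branches per level and depth d, this takes O*(3^d) time.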

Page 51: Exact Exponential-Time Algorithms

52

A simple local search algorithm for 3-Satisfiability

Any truth assignment lies at Hamming distance at most n/2 from either the all-true or the all-false assignment.

Algorithm:

Apply the local search algorithm of the lemma to the Hamming balls of radius n/2 around the all-true and all-false assignments.

Running time: O*( 3^(n/2) ) = O( 1.7321^n ).

This already improves the O( 1.8393^n )-time branching algorithm we saw earlier.

Page 52: Exact Exponential-Time Algorithms

53

Randomised local search

We improve the previous local search algorithm for 3-SAT using randomisation.

Monte-Carlo Algorithm:

If it outputs ‘Yes’ it is correct.

If it outputs ‘No’ there is a probability that the algorithm is wrong.

Algorithm:

1. Choose uniformly at random a truth assignment a.

2. Search the Hamming ball of radius n/4 centred around a for a satisfying assignment.

3. If found: output ‘Yes’.

4. If not found, repeat from 1 (a fixed large number of times).

5. Output ‘No’.

Page 53: Exact Exponential-Time Algorithms

54

On the error probability

Lemma 1:

When performing repeated trials of an experiment with success probability p, the expected number of repetitions until the first success is 1/p.

Lemma 2 (Markov's inequality):

Let X be a non-negative random variable and let c > 0; then P( X ≥ c ) ≤ E[X] / c.

Corollary:

The repeated application of a Monte-Carlo algorithm with one-sided error and success probability p, either until successful or for at most 2/p tries, has an overall error probability of at most ½.

Proof: use Markov’s inequality with c = 2/p.

Page 54: Exact Exponential-Time Algorithms

57

Randomised local search: running time

Given that a satisfying assignment exists, we find it in a Hamming ball of radius n/4 with probability at least p = ( Σ_{i=0}^{n/4} C(n, i) ) / 2^n.

With 200/p repetitions:

Error probability at most (½)^100.

Running time: O*( 200 · (1/p) · 3^(n/4) ) = O*( 1.5^n ).

By Stirling's approximation, the sum over the binomials is roughly (256/27)^(n/4).

Check details yourself!


Page 55: Exact Exponential-Time Algorithms

58

Concluding remarks on local search for 3-SAT

The O*(1.5^n) algorithm can be derandomised.

Use a family of O(1/p) balls that together cover the entire search space (a covering code).

Using faster algorithms to search the Hamming ball, this can be improved to O(1.465^n) [Scheder].

Significantly faster algorithms can be obtained by:

Not brute-force searching the Hamming ball,

But, performing a random walk through the Hamming ball.

Schöning's algorithm: O(1.334^n).

Later improved to O(1.323^n) by Rolf.

Page 56: Exact Exponential-Time Algorithms

59

CONCLUSION

Exponential-time algorithms – Algorithms and Networks

Page 57: Exact Exponential-Time Algorithms

60

Conclusion

We discussed three or four other techniques in exact exponential-time algorithms:

1. Dynamic programming

Graph colouring, travelling salesman problem.

2. Branching algorithms

For k-satisfiability and maximum independent set.

3. Sort and Search (a.k.a. Split and List).

For subset sum and knapsack.

4. Local search.

For algorithms for 3-satisfiability.

Page 58: Exact Exponential-Time Algorithms

61

Questions?

Research on exact exponential-time algorithms is relatively recent.

Several interesting open problems.

Today we considered:

1. Clever enumeration.

2. Dynamic programming.

Page 59: Exact Exponential-Time Algorithms

62

Inclusion/Exclusion

Algorithms and Networks 2017/2018

Johan M. M. van Rooij

Hans L. Bodlaender

Page 60: Exact Exponential-Time Algorithms

63

Exact exponential-time algorithms in this course

Five Lectures

1. Introduction to exponential-time algorithms.

2. Overview of some techniques.

3. Inclusion/Exclusion.

4. Measure-and-conquer analysis.

5. Complexity theory of parameterised and exact algorithms (end of the course).

Many ideas from exact exponential-time algorithms are also applicable to or are related to parameterised algorithms.

Page 61: Exact Exponential-Time Algorithms

64

Question...?

How many integers exist in the range [1,2,...,10,000] with the property that they are:

Not divisible by 2.

Not divisible by 3.

Not divisible by 5.

So:

1,7,11,13,17,19,23,29,31,...

Page 62: Exact Exponential-Time Algorithms

65

Inclusion/exclusion and a Venn diagram

[Venn diagram of P2, P3 and P5 with the count of integers in each region.]

Px: integers divisible by x.

|P2| = 5,000

|P3| = 3,333

|P5| = 2,000

10,000 - |P2| - |P3| - |P5| = -333

We subtracted too many!

How many too many?

Page 63: Exact Exponential-Time Algorithms

66

Inclusion/exclusion and a Venn diagram

|P2 ∩ P3| = 1,666

|P2 ∩ P5| = 1,000

|P3 ∩ P5| = 666

Divisible by 6, 10, 15.

10,000 - |P2| - |P3| - |P5|

+ |P2 ∩ P3| + |P2 ∩ P5| + |P3 ∩ P5| = 2,999

We added too many!

How many?


Page 64: Exact Exponential-Time Algorithms

67

Inclusion/exclusion and a Venn diagram

How many integers exist in the range [1,2,...,10,000] with the property that they are:

Not divisible by 2.

Not divisible by 3.

Not divisible by 5.

|P2 ∩ P3 ∩ P5| = 333

10,000 - |P2| - |P3| - |P5| + |P2 ∩ P3| + |P2 ∩ P5|

+ |P3 ∩ P5| - |P2 ∩ P3 ∩ P5|

= 2,666


Page 65: Exact Exponential-Time Algorithms

68

What are we going to do in this lecture?

Apply the principle of inclusion/exclusion for faster algorithms.

1. The inclusion/exclusion formula.

2. Algorithmic applications:

Graph colouring.

Hamilton cycle.

Steiner tree (if time allows).

Page 66: Exact Exponential-Time Algorithms

69

INCLUSION/EXCLUSION FORMULA

Exponential-time algorithms – Algorithms and Networks

Page 67: Exact Exponential-Time Algorithms

70

The inclusion/exclusion formulain its general form

Let N be a collection of objects (anything).

Let 1,2, ...,n be a series of requirements on objects.

Finally, for a subset R ⊆ {1,2,...,n}, let N(R) be the number

of objects in N that satisfy none of the requirements in R.

N(∅) = |N|.

N({1,2,...,n}) counts the objects that satisfy no requirement at all.

Then, the number of objects X that satisfy all requirements is:

X = Σ_{R ⊆ {1,2,...,n}} (-1)^|R| N(R)

In the counting-integers example:

N = [1, 2, ..., 10,000].

Requirements: the integer is not divisible by x, for x = 2, 3, 5.
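A few lines of Python that evaluate exactly this formula for the counting-integers example; here N(R) is the number of integers in [1, 10,000] divisible by every prime in R (names are mine).

```python
from itertools import combinations
from math import prod

N, primes = 10_000, [2, 3, 5]
count = sum((-1) ** len(R) * (N // prod(R))        # (-1)^|R| * N(R)
            for r in range(len(primes) + 1)
            for R in combinations(primes, r))
print(count)   # 2666: integers in [1, 10,000] divisible by none of 2, 3, 5
```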

Page 68: Exact Exponential-Time Algorithms

71

Intuition behind the inclusion/exclusion formula

1. If we count all objects in N, we also count some wrong objects that do not satisfy all properties.

2. So, subtract from this number all objects that do not satisfy property 1. Also subtract from this number all objects that do not satisfy property 2. Etc.

3. However, we now subtract too many, as objects that do not satisfy two or more properties are subtracted twice.

4. So, add for all pairs of properties, the number of objects that satisfy both properties.

5. But, then what happens to objects that avoid 3 properties?

6. Continue, and note that the parity tells if we add or subtract…

7. This gives the formula of the previous slide.

Page 69: Exact Exponential-Time Algorithms

72

Proof of the inclusion/exclusion formula

The number of objects X that satisfy all requirements is:

X = Σ_{R ⊆ {1,2,...,n}} (-1)^|R| N(R)

Proof:

An object that satisfies all requirements is counted only for R = ∅.

An object that violates exactly the requirements in a non-empty set

P ⊆ {1,2,...,n} is counted once for every S ⊆ P, with sign (-1)^|S|.

These contributions cancel by the binomial theorem:

Σ_{S ⊆ P} (-1)^|S| = Σ_{i=0}^{|P|} C(|P|, i) (-1)^i = (1 - 1)^|P| = 0

Page 70: Exact Exponential-Time Algorithms

73

The inclusion/exclusion formula:alternative proof

(that I find more intuitive)

See the formula as a branching algorithm branching on a requirement:

required = optional – forbidden

‘Not divisible by x’ = ‘can be divisible by x’ – ‘divisible by x’

The full branching tree has 2^n leaves.

Each corresponding to a set R.

Requirements not in R are ‘optional’ and can be violated.

Requirements in R are ‘forbidden’ and must be violated.

X = Σ_{R ⊆ {1,2,...,n}} (-1)^|R| N(R)

Page 71: Exact Exponential-Time Algorithms

74

INCLUSION/EXCLUSION FOR GRAPH COLOURING

Exponential-time algorithms – Algorithms and Networks

Page 72: Exact Exponential-Time Algorithms

75

Graph colouring with inclusion/exclusion

Björklund and Husfeldt, and independently Koivisto, 2006.

An O*(2^n)-time algorithm for graph colouring.

Covering G with independent sets.

Objects to count:

Sequences (I1,…,Ik) where each Ii is an independent set.

Requirements:

For every vertex v we require that v is contained in at least one independent set Ii.

Let ck(G) be the number of ways we can cover all vertices in G with k independent sets, where the independent sets may be overlapping, or even the same.

Lemma: G is k-colourable if and only if ck(G) > 0.

Page 73: Exact Exponential-Time Algorithms

76

Expressing ck in s

ck(G) = Σ_{X ⊆ V} (-1)^|X| s(V \ X)^k

Let ck(G) be the number of ways we can cover all vertices in G with k independent sets, where the independent sets may be overlapping, or even the same.

For S ⊆ V, let s(S) be the number of independent sets in G[S].

Objects to count:

Sequences (I1,…,Ik) where each Ii is an independent set.

Requirements:

For every vertex v we require that v is contained in at least one independent set Ii.

X = Σ_{R ⊆ {1,2,...,n}} (-1)^|R| N(R)

Page 74: Exact Exponential-Time Algorithms

77

Counting independent sets

For S ⊆ V, let s(S) be the number of independent sets in G[S].

s(S) = s( S\{v} ) + s( S\N[v] ) for any v ∈ S.

s(∅) = 1.

We can compute all values s(S) in O*(2^n) time.

Use bottom-up dynamic programming and store all values.

The algorithm:

1. Compute all s(S) for all S µ V by dynamic programming.

2. Compute values ck(G) with the inclusion/exclusion formula.

3. Take the smallest k for which ck(G) > 0.

O*(2^n) time and space.
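An illustrative bitmask sketch of the complete algorithm in Python; vertices are 0..n-1 and adj[v] is a bitmask of v's neighbours (names are mine, not from the slides).

```python
def chromatic_number(adj):
    """Inclusion/exclusion graph colouring; adj[v] is a neighbour bitmask of vertex v."""
    n = len(adj)
    # s[S] = number of independent sets in G[S], computed bottom-up over bitmasks.
    s = [0] * (1 << n)
    s[0] = 1
    for S in range(1, 1 << n):
        v = S.bit_length() - 1                          # some vertex v in S
        s[S] = s[S & ~(1 << v)] + s[S & ~((1 << v) | adj[v])]   # v out / v in
    full = (1 << n) - 1
    for k in range(1, n + 1):
        # c_k(G) = sum over X subset of V of (-1)^|X| * s(V \ X)^k
        ck = sum((-1) ** bin(X).count("1") * s[full & ~X] ** k
                 for X in range(1 << n))
        if ck > 0:
            return k

# Example: a triangle needs 3 colours.
# chromatic_number([0b110, 0b101, 0b011]) == 3
```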

Page 75: Exact Exponential-Time Algorithms

78

A note on the algorithm

Polynomial space algorithm is also possible.

Then we compute s(X) from scratch each time it is needed.

This requires additional exponential time.

Running time depends on how fast we count independent sets.

With O(1.4423^n)-time enumeration, the total time is O(2.4423^n).

Gaspers and Lee, 2016: O(2.2356^n).

Lots of techniques:

Measure and conquer.

Potentials.

Compound measures.

Branching to separate the graph.

Out of scope for this lecture...

Page 76: Exact Exponential-Time Algorithms

79

INCLUSION/EXCLUSION FOR HAMILTON CYCLE

Exponential-time algorithms – Algorithms and Networks

Page 77: Exact Exponential-Time Algorithms

80

Space improvement for Hamiltonian cycle

Dynamic programming algorithm for Hamiltonian cycle or TSP uses:

O( n^2 · 2^n ) time.

O( n · 2^n ) space.

In practice, space becomes a problem before time does.

We now give an algorithm for Hamiltonian cycle that uses:

O( n^3 · 2^n ) time.

O(n) space.

This algorithm counts Hamiltonian cycles using the principle of inclusion/exclusion.

Page 78: Exact Exponential-Time Algorithms

81

Counting (non-)Hamiltonian cycles

Computing/counting tours is much easier if we do not care about which cities are visited.

Keeping track of which cities have been visited is what costs exponential space in the DP algorithm.

Define: Walks[ vi, k ] = the number of walks from v1 to vi that traverse exactly k edges.

We do not care whether vertices or edges are visited more than once (or not at all).

Using Dynamic Programming:

Walks[ vi, 0 ] = 1 if vi = v1

Walks[ vi, 0 ] = 0 otherwise

Walks[ vi, k ] = Σ_{vj ∈ N(vi)} Walks[ vj, k - 1 ]

Walks[ v1, n ] counts all walks of length n that return to v1.

This requires O(n^3) time and O(n) space.

Page 79: Exact Exponential-Time Algorithms

82

Apply the formula to every city: the inclusion/exclusion formula

Let CycleWalks( V' ) be the number of walks of length n from v1 to v1 using only vertices in V'. Then, by inclusion/exclusion,

Σ_{X ⊆ V\{v1}} (-1)^|X| CycleWalks( V \ X )

counts the closed walks of length n from v1 that visit every vertex, so it is non-zero if and only if G has a Hamiltonian cycle.

The formula requires counting cyclic walks through 2^(n-1) graphs:

O( n^3 · 2^n ) time.

O(n) space.
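An illustrative Python sketch of the whole decision algorithm for n ≥ 3: count closed walks of length n from v1 within each allowed vertex set, and combine the counts with inclusion/exclusion signs (vertex 0 plays the role of v1; names are mine).

```python
def count_closed_walks(adj, allowed, length, start=0):
    """Closed walks of the given length from `start`, using only vertices in `allowed`."""
    walks = [1 if v == start else 0 for v in range(len(adj))]
    for _ in range(length):                       # one DP step per edge of the walk
        walks = [sum(walks[u] for u in adj[v] if u in allowed) if v in allowed else 0
                 for v in range(len(adj))]
    return walks[start]

def has_hamiltonian_cycle(adj):
    """Inclusion/exclusion over the forbidden vertex subsets (vertex 0 always allowed)."""
    n = len(adj)
    others = list(range(1, n))
    total = 0
    for X in range(1 << (n - 1)):                 # X encodes the forbidden subset
        forbidden = {others[i] for i in range(n - 1) if X >> i & 1}
        allowed = set(range(n)) - forbidden
        total += (-1) ** len(forbidden) * count_closed_walks(adj, allowed, n)
    return total > 0

# Example: a triangle has a Hamiltonian cycle, a path does not.
# has_hamiltonian_cycle([[1, 2], [0, 2], [0, 1]]) is True
```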

Page 80: Exact Exponential-Time Algorithms

83

INCLUSION/EXCLUSION FOR STEINER TREE

Exponential-time algorithms – Algorithms and Networks

Page 81: Exact Exponential-Time Algorithms

84

The Steiner tree problem

Let G = (V,E) be an undirected graph, and let N ⊆ V be a

set of terminals.

A Steiner tree is a tree T = (V’,E’) in G connecting all terminals in N:

V' ⊆ V, E' ⊆ E, N ⊆ V'.

We use k=|N|.

Steiner tree problem:

Given: an undirected graph G = (V,E), a terminal set N ⊆ V,

and an integer t.

Question: is there a Steiner tree consisting of at most t edges in G?

Page 82: Exact Exponential-Time Algorithms

85

Some background on the algorithm

Algorithm invented by Jesper Nederlof.

Just after he finished his Master thesis supervised by Hans (and a little bit by me).

Master thesis on Inclusion/Exclusion algorithms.

An O*(2^k)-time and polynomial-space algorithm.

n is the number of vertices (disappears in the O*).

k is the number of terminals.

Page 83: Exact Exponential-Time Algorithms

86

Using the inclusion/exclusion formulafor Steiner tree (problematic version)

One possible approach:

Objects: trees in the graph G.

Requirements: contain every terminal.

Then we need to compute, 2^k times, the number of trees in a subgraph of G.

For each W ⊆ N, compute the number of trees in G[V\W].

However, counting trees is difficult:

Hard to keep track of which vertices are already in the tree.

Compare to Hamiltonian Cycle:

We want something that looks like a walk, so that we do not need to remember where we have been.

X = Σ_{W ⊆ {1,2,...,n}} (-1)^|W| N(W)

Page 84: Exact Exponential-Time Algorithms

87

Branching walks

Definition: a branching walk in G=(V,E) is a pair (T, φ):

An ordered tree T.

A mapping φ from the nodes of T to the vertices of G, such that for any edge {u,v} of the tree T we have {φ(u), φ(v)} ∈ E.

The length of a branching walk is the number of edges in T.

When r is the root of T, we say that the branching walk starts in φ(r) ∈ V.

For each node x of T, we say that the branching walk visits the vertex φ(x) ∈ V.

Some examples on the blackboard...

Page 85: Exact Exponential-Time Algorithms

88

Branching walks and steiner tree

Definition: a branching walk in G=(V,E) is a pair (T, φ):

An ordered tree T.

A mapping φ from the nodes of T to the vertices of G, such that for any edge {u,v} of the tree T we have {φ(u), φ(v)} ∈ E.

Lemma: let s ∈ N be a terminal. There exists a Steiner tree T

in G with at most c edges if and only if there exists a branching walk of length at most c that starts in s and visits all terminals in N.

Page 86: Exact Exponential-Time Algorithms

89

Using the inclusion/exclusion formula for Steiner tree

Approach:

Objects: branching walks of length c in the graph G,

starting from some fixed s ∈ N.

Requirements: contain every terminal in N\{s}.

We need to compute, 2^(k-1) times, the number of branching walks of length c in a subgraph of G.

For each W ⊆ N\{s}, compute branching walks from s in

G[V\W].

Next: how do we count branching walks?

Dynamic programming (similar to ordinary walks).

X = Σ_{W ⊆ {1,2,...,n}} (-1)^|W| N(W)

Page 87: Exact Exponential-Time Algorithms

90

Counting branching walks

Let BW(v,j) be the number of branching walks of length j starting in v in G[W].

BW(v,0) = 1 for any vertex v.

BW(v,j) = Σ_{u ∈ N(v) ∩ W} Σ_{j1 + j2 = j - 1} BW(u, j1) · BW(v, j2)

j2 = 0 covers the case where we do not branch / split up and walk to vertex u.

Otherwise, a subtree of size j1 is created from neighbour u, while a new tree of size j2 is added starting in v. This splits off one branch, and can be repeated to split off more branches.

We can compute BW(v,j) for j = 0,1,2,....,t.

All in polynomial time.
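An illustrative memoised Python sketch of exactly this recurrence; the vertex set W is passed as `allowed`, and the graph is a dict (or list) of neighbour lists (names are mine).

```python
from functools import lru_cache

def count_branching_walks(adj, allowed, start, length):
    """Number of branching walks of the given length starting in `start`,
    using only vertices of `allowed` (DP on the BW recurrence)."""
    allowed = frozenset(allowed)

    @lru_cache(maxsize=None)
    def bw(v, j):
        if j == 0:
            return 1
        # Split off a first branch into neighbour u of size j1; the remaining
        # j - 1 - j1 edges form another branching walk rooted at v.
        return sum(bw(u, j1) * bw(v, j - 1 - j1)
                   for u in adj[v] if u in allowed
                   for j1 in range(j))

    return bw(start, length)
```

The Steiner tree algorithm then calls this once for every W ⊆ N\{s} and combines the counts with the inclusion/exclusion signs.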

Page 88: Exact Exponential-Time Algorithms

91

Putting it all together

Algorithm:

Choose any s ∈ N.

For t = 1, 2, …

Use the inclusion/exclusion formula to count the number of branching walks from s of length t visiting all terminals N.

This results in counting, 2^(k-1) times, branching walks from s of length t in G[V\W].

If this number is non-zero: stop the algorithm and output that the smallest Steiner tree has size t.

X = Σ_{W ⊆ {1,2,...,n}} (-1)^|W| N(W)

Page 89: Exact Exponential-Time Algorithms

92

Conclusion

Today we discussed:

1. The inclusion/exclusion formula.

2. Algorithmic applications:

Graph colouring.

Hamilton cycle.

Steiner tree.

Many more applications.

Different fields of mathematics and of algorithms.

FPT algorithms.

Fast subset convolution.


Page 90: Exact Exponential-Time Algorithms

93

Measure-and-Conquer Analysis

Algorithms and Networks 2017/2018

Johan M. M. van Rooij

Hans L. Bodlaender

Page 91: Exact Exponential-Time Algorithms

94

Exact exponential-time algorithms in this course

Five Lectures

1. Introduction to exponential-time algorithms.

2. Overview of some techniques.

3. Inclusion/Exclusion.

4. Measure-and-conquer analysis.

5. Complexity theory of parameterised and exact algorithms (end of the course).

Many ideas from exact exponential-time algorithms are also applicable to or are related to parameterised algorithms.

Page 92: Exact Exponential-Time Algorithms

95

This lecture

‘Measure and conquer’ is a technique for better analyses of branching algorithms.

Contents of this lecture:

1. Dominating set and set cover.

2. A simple algorithm for set cover and for dominating set.

3. Introducing measures: a better analysis of the same algorithm.

4. Measure and conquer.

Page 93: Exact Exponential-Time Algorithms

96

SET COVER AND DOMINATING SET

Exponential-time algorithms – Algorithms and Networks

Page 94: Exact Exponential-Time Algorithms

97

Set cover

Given a collection of sets S over a universe U, a set cover is a subcollection C of S such that ∪_{X ∈ C} X = U.

Example:

U = {1,2,3,4,5,6,7}

S = { {1}, {1,2}, {3,4}, {5,6}, {1,5,6}, {2,4,7}, {1,6,7} }

Set Cover C = { {1,5,6}, {2,4,7}, {3,4} }

Set Cover (problem):

Given: a collection of sets S over a universe U, and an integer k.

Question: does there exists a subcollection of S of size at most k that is a set cover?

Page 95: Exact Exponential-Time Algorithms

98

Algorithms for set cover

For a set cover instance (S,U):

Let n be the number of sets: n = |S|.

Let m be the number of elements: m = |U|.

Let d be the dimension (d = n + m).

Algorithms:

Brute force: O*(2^n) time and polynomial space.

Dynamic programming: O*(2^m) time and space.

How?

Inclusion/exclusion: O*(2^m) time and polynomial space.

How?

Page 96: Exact Exponential-Time Algorithms

99

Dominating set

A dominating set is a subset D of the vertices V such that N[D]=V.

Dominating Set (problem):

Given: a graph G=(V,E) and an integer k.

Question: does there exist a dominating set of size at most k in G?

Page 97: Exact Exponential-Time Algorithms

100

Algorithms for dominating set

Dominating set is NP-complete.

Algorithms beating the trivial O*(2^n)-time algorithm using polynomial space:

2004: Fomin, Kratsch, Woeginger: O(1.9327^n).

2004: Grandoni*: O(1.9053^n).

2004: Randerath and Schiermeyer: O(1.8899^n).

2005: Fomin, Grandoni, Kratsch*: O(1.5263^n).

2009: van Rooij (PhD Thesis): O(1.4969^n).

2011: Iwata*: O(1.4864^n).

(results with a * are improved using exponential space)

Fomin, Grandoni and Kratsch introduced measure and conquer.

Their algorithm is (almost) the same as Grandoni’s.

Page 98: Exact Exponential-Time Algorithms

101

Dominating Set as a set cover problem

We can model an instance of dominating set as a set cover problem:

A set for every vertex (choose the vertex).

An element for every vertex (dominate the vertex).

Incidence graph of the set cover instance:

Bipartite graph with a vertex for every set S and every element e, and an edge if and only if e ∈ S.

A dominating set instance on n vertices becomes a set cover instance of dimension d = 2n.

[Figure: an example graph on vertices a-h and the incidence graph of the corresponding set cover instance.]

Page 99: Exact Exponential-Time Algorithms

102

A SIMPLE ALGORITHM

Exponential-time algorithms – Algorithms and Networks

Page 100: Exact Exponential-Time Algorithms

103

A branching algorithm for set cover

Algorithm (for set cover instances (S,U))

1. Exhaustively apply the following reduction rules to (S,U):

1. Subsets rule: if S contains two sets X1, X2 such that X1 ⊆ X2, then

remove X1 from S.

2. Unique elements rule: if U contains an element that occurs in only one set X in S, then add X to the solution and let U := U\X.

2. Take a set X in S of maximum size.

3. If |X| ≤ 2, solve the problem using maximum matching.

4. Branch into two subproblems:

1. X not in solution: remove X from S and recurse.

2. X in the solution: add X to the solution, let U := U\X and recurse.

Note: when we remove elements from U, we (implicitly) also remove these elements from all sets in S.
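An illustrative Python sketch of this branch-and-reduce scheme; for simplicity it keeps branching when |X| ≤ 2 instead of switching to the maximum-matching subroutine, so it is correct but does not achieve the stated bound (names are mine).

```python
def min_set_cover(sets, universe):
    """Minimum set cover by branch-and-reduce; sets: frozensets, universe: set."""
    universe = set(universe)
    sets = [s & universe for s in sets if s & universe]
    if not universe:
        return 0
    if any(not any(e in s for s in sets) for e in universe):
        return float("inf")                 # some element became uncoverable
    # Subsets rule: drop sets contained in another set (keep one copy of duplicates).
    sets = [s for i, s in enumerate(sets)
            if not any(s < t or (s == t and j < i)
                       for j, t in enumerate(sets) if j != i)]
    # Unique elements rule: a set with a private element must be taken.
    for e in universe:
        containing = [s for s in sets if e in s]
        if len(containing) == 1:
            X = containing[0]
            return 1 + min_set_cover([s - X for s in sets], universe - X)
    # Branching rule on a set X of maximum size: discard X, or take X.
    X = max(sets, key=len)
    rest = [s for s in sets if s is not X]
    return min(min_set_cover(rest, universe),
               1 + min_set_cover([s - X for s in rest], universe - X))

# Example instance from the earlier Set cover slide (optimum 3):
# S = [frozenset(x) for x in ([1], [1,2], [3,4], [5,6], [1,5,6], [2,4,7], [1,6,7])]
# min_set_cover(S, set(range(1, 8))) == 3
```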

Page 101: Exact Exponential-Time Algorithms

104

Analysis of running time

Reduction rules can be applied in polynomial time.

Branching rule generates subproblems:

Take X branch: remove 1 set, at least 3 elements.

Discard X branch: remove 1 set.

Let d be the dimension of (S,U): d = |S| + |U|.

Let T(d) be the number of subproblems generated by the algorithm starting with a problem of dimension d.

Worst case recurrence relation:

T( d ) ≤ T( d-1 ) + T( d-4 )

T( 1 ) = 1

Running time: O*( τ(1,4)^d ) = O( 1.3803^d ).

τ(1,4) is the unique positive real solution to: 1 - x^(-1) - x^(-4) = 0.

Page 102: Exact Exponential-Time Algorithms

105

What does this mean?

We have an O(1.3803^d)-time algorithm for set cover.

Not comparable to earlier results expressed in n or m.

What does this mean?

Why is this interesting?

Algorithm for dominating set!

Set cover instance obtained from dominating set has d = 2n.

Running time: O( 1.3803^(2n) ) = O( 1.9053^n ).


Page 103: Exact Exponential-Time Algorithms

106

INTRODUCING MEASURES

Exponential-time algorithms – Algorithms and Networks

Page 104: Exact Exponential-Time Algorithms

107

Reconsider the algorithm for a moment

Algorithm (for set cover instances (S,U))

1. Exhaustively apply the following reduction rules to (S,U):

1. Subsets rule: if S contains two sets X1, X2 such that X1 ⊆ X2, then

remove X1 from S.

2. Unique elements rule: if U contains an element that occurs in only one set X in S, then add X to the solution and let U := U\X.

2. Take a set X in S of maximum size.

3. If |X| ≤ 2, solve the problem using maximum matching.

4. Branch into two subproblems:

1. X not in solution: remove X from S and recurse.

2. X in the solution: add X to the solution, let U := U\X and recurse.

Note: when we remove elements from U, we (implicitly) also remove these elements from all sets in S.

Page 105: Exact Exponential-Time Algorithms

108

Some observations

Observation on sets of size 2:

These sets are ‘almost done’.

If they lose an element they are removed.

If all sets have size at most 2, then we solve the problem in polynomial time.

What would happen if we define the dimension (or some other measure of instance size) not to count these in full?

Note that a similar observation can be made on elements of frequency 1 or 2 (but we ignore this for the moment):

Frequency one elements cause reduction rules to fire (‘done’).

Frequency two elements are ‘almost done’.

Page 106: Exact Exponential-Time Algorithms

109

A different measure

What would happen if we define the dimension (or some other measure of instance size) not to count these in full?

Let us try this, using a measure μ.

Define: μ = (number of elements) + (number of sets of size at least 3) + ½ · (number of sets of size at most 2).

Observe that μ ≤ d.

Hence, any set cover algorithm running in O*(α^μ) also runs in O*(α^d).

This gives a dominating set algorithm running in O*(α^(2n)).

Next step:

Analyse the algorithm using the measure μ.

Page 107: Exact Exponential-Time Algorithms

110

Analysis using a different measure

Define μ as on the previous slide.

Reduction rules can still be applied in polynomial time.

For the branching rule, we consider two cases:

1. |S| ≥ 4.

Take branch: remove 1 large set and at least 4 elements; the measure decreases by at least 5.

Discard branch: remove 1 large set; the measure decreases by at least 1.

2. |S| = 3.

Take branch: remove 1 large set and at least 3 elements; the measure decreases by at least 4 + 1.5 (the 1.5 comes from removing the elements from other sets).

Discard branch: remove 1 large set; the measure decreases by at least 1.

Page 108: Exact Exponential-Time Algorithms

111

Different measure, different running time!

Let T(μ) be the number of subproblems generated by the algorithm starting with a problem of measure μ.

Recurrence relation:

T( μ ) ≤ T( μ - 5 ) + T( μ - 1 ), with τ( 5, 1 ) < 1.3248

T( μ ) ≤ T( μ - 5.5 ) + T( μ - 1 ), with τ( 5.5, 1 ) < 1.3035

T( 0 ) = 1

Clearly, the algorithm runs in O*( τ(5,1)^μ ) ≤ O( 1.3248^d ).

Improvement compared to the last analysis: O( 1.3803^d ).

For dominating set, O( 1.9053^n ) is improved to O( 1.7551^n ).

Big improvement without modifying the algorithm!

Page 109: Exact Exponential-Time Algorithms

112

LET US TRY THAT AGAIN

Exponential-time algorithms – Algorithms and Networks

Page 110: Exact Exponential-Time Algorithms

113

Let us try that again!

Why did we choose precisely this measure?

No real reason.

Maybe because it was the most simple thing to try.

Let us try something else!

With different weights for sets than for elements.

Page 111: Exact Exponential-Time Algorithms

114

A different measure

Let us try something else!

Use weight w(i) for the measure of a set of size i.

And weight v(i) for the measure of an element of frequency i.

Define: μ = Σ_{X ∈ S} w(|X|) + Σ_{e ∈ U} v(f(e)), where f(e) is the frequency of element e.

Note that sets of size less than 2 and elements of frequency 1 are directly removed by the reduction rules.

Page 112: Exact Exponential-Time Algorithms

115

A (partial) analysis on the blackboard

Consider several cases:

1. |S| ≥ 4, S does not contain a frequency-2 element.

a) |S| ≥ 5

b) |S| = 4

2. |S| ≥ 4, S contains a frequency-2 element.

a) |S| ≥ 5

b) |S| = 4

3. |S| = 3, with ei elements of frequency i, for:

all e2, e3, e>3 with values 0, 1, 2 and 3 such that their sum is 3.

We need a method to quickly compute values of the τ function.

E.g., Excel with the Solver add-in.

Page 113: Exact Exponential-Time Algorithms

119

Different measure, different running time!

Over all cases, the worst case is τ(6, 1) < 1.2852.

For set cover:

Original analysis: O(1.3803^d).

First measure: O(1.3248^d).

Second measure: O(1.2852^d).

For dominating set:

Original analysis: O(1.9053^n).

First measure: O(1.7551^n).

Second measure: O(1.6518^n).

Again improvement without modifying the algorithm!

Shall we try yet another measure?

Page 114: Exact Exponential-Time Algorithms

120

MEASURE AND CONQUER

Exponential-time algorithms – Algorithms and Networks

Page 115: Exact Exponential-Time Algorithms

121

Measure and Conquer

Measure and conquer is a more structural approach to trying different measures.

1. Define a measure using weight functions, e.g.: μ = Σ_{X ∈ S} w(|X|) + Σ_{e ∈ U} v(f(e)).

2. Define an analysis using this measure in which the weights remain variables.

3. Now we have a numerical optimisation problem:

Find the weights leading to the best running time.

Finite problem if we set v(i) = w(i) = 1 for i ≥ p, for some integer p.

Often, one also needs to define some constraints on the weights to prevent the number of cases in the analysis from exploding.

Page 116: Exact Exponential-Time Algorithms

122

An analysis using the weight functions

Define the measure μ using the weight functions w and v as before.

Cases:

All 3 ≤ |S| ≤ p+1, and all (e2, e3, e4, ..., e>p+1) with sum equal to |S|.

Recurrences:

T( μ ) ≤ T( μ - Δtake ) + T( μ - Δdiscard )

Where (with Δw(i) = w(i) - w(i-1) and Δv(i) = v(i) - v(i-1)):

Δtake = w(|S|) + Σ_{i=2}^{p+1} ei·v(i) + Δw(|S|) · Σ_{i=2}^{p+1} ei·(i - 1)

Δdiscard = w(|S|) + Σ_{i=2}^{p+1} ei·Δv(i) + [e2 > 0] · ( v(2) + Σ_{i=1}^{min(e2, |S|-1)} Δw(i + 1) ) + [e2 ≠ |S|] · w(2)

Page 117: Exact Exponential-Time Algorithms

123

An analysis using the weight functions

Using p = 6 (or larger) one can obtain T(μ) ≤ 1.2353^μ.

Using these weights:

Running time for dominating set: O( 1.2353^(2n) ) = O( 1.5259^n ).

w(2) = 0.377443    v(2) = 0.399418

w(3) = 0.754886    v(3) = 0.767579

w(4) = 0.909444    v(4) = 0.929850

w(5) = 0.976388    v(5) = 0.985614

(and w(i) = v(i) = 1 for i ≥ 6)

Page 118: Exact Exponential-Time Algorithms

124

On solving the numerical problems

One needs to optimise quasi-convex functions.

The set of weights with solution value α or smaller is convex.

This means that there are no local optima.

Gradient descent based hill climbing suffices.

For small instances one can use the Solver in Excel.

For large instances one needs to implement a smart gradient descent.

When formulating the problem slightly differently, the numerical optimisation can also be translated into a convex optimisation problem for which large-scale solvers exist.

Page 119: Exact Exponential-Time Algorithms

125

CONCLUSION

Exponential-time algorithms – Algorithms and Networks

Page 120: Exact Exponential-Time Algorithms

126

Conclusion

We have seen how to use measures to get better bounds on the running time of a branching algorithm.

Measure and conquer is a structural approach to this.

Several improvements possible to what we have seen:

The weights for sets and for elements need not both converge to one.

Several new reduction rules possible if one analyses the resulting worst-case recurrences better.

Add memoisation (never solve the same subproblem twice).

Switch to counting set covers/dominating sets and use two branching rules: one branching on sets and one branching on elements (based on inclusion/exclusion).

Page 121: Exact Exponential-Time Algorithms

127

Conclusion 2

We have seen how to use measures to get better bounds on the running time of a branching algorithm.

Measure and conquer is a structural approach to this.

Combination with other analysis techniques (outside the scope of this course) makes this approach even stronger.

Potentials (Iwata, and earlier Robson).

Compound measures (Wahlstrom).

Bottom-up using average degrees (Bourgeois et al., including me).

Separate, measure and conquer (Gaspers and Sorkin).