

Mathematical Programming 46 (1990) 85-103, North-Holland

A ONE-PHASE ALGORITHM FOR SEMI-INFINITE LINEAR PROGRAMMING

Hui HU Department of Operations Research, Stanford University, Stanford, CA 94305, USA

Received 28 September 1987; revised manuscript received 2 May 1988

We present an algorithm for solving a large class of semi-infinite linear programming problems. This algorithm has several advantages: it handles feasibility and optimality together; it has very weak restrictions on the constraints; it allows cuts that are not near the most violated cut; and it solves the primal and the dual problems simultaneously. We prove the convergence of this algorithm in two steps. First, we show that the algorithm can find an ε-optimal solution after finitely many iterations. Then, we use this result to show that it can find an optimal solution in the limit. We also estimate how good an ε-optimal solution is compared to an optimal solution and give an upper bound on the total number of iterations needed for finding an ε-optimal solution under some assumptions. This algorithm is generalized to solve a class of nonlinear semi-infinite programming problems. Applications to convex programming are discussed.

Key words: Semi-infinite linear programming, generalized linear programming, convex programming, linear programming, duality.

1. Introduction and preliminaries

Semi-infinite linear programming, which allows either infinitely many constraints or infinitely many variables, is a natural extension of linear programming. There are many practical as well as theoretical problems in which the constraints depend on time or space and thus can be formulated as semi-infinite linear programs (see, e.g., Gustafson and Kortanek (1973)). The primal problem of semi-infinite linear programming is defined as

(SIL) minimize c^T x

subject to a(u)^T x − b(u) ≥ 0 for all u ∈ U,

where c, x ∈ R^n, U ⊆ R^m is an infinite parameter set, a(u): U → R^n, and b(u): U → R^1. The dual of (SIL) is defined as: for all finite subsets {u^i : i ∈ Λ} of U,

(SID) maximize Σ_{i∈Λ} b(u^i) y_i

subject to Σ_{i∈Λ} a(u^i) y_i = c,

Λ is finite and {u^i : i ∈ Λ} ⊆ U,

y_i ≥ 0 for all i ∈ Λ.


We can always rewrite (SID) in the format of generalized linear programming (see Dantzig (1963, Chapter 22)) as follows:

(GLP) maximize y_0

subject to p_0 y_0 + p y = q,

p may be freely chosen from S,

y ≥ 0,

where S = conv{(a(u)^T, −b(u))^T : u ∈ U}, p_0 = (0, …, 0, 1), and q = (c^T, 0)^T. Indeed,

for any {u^i : i ∈ Λ} ⊆ U with finite Λ and any {y_i ≥ 0 : i ∈ Λ} satisfying Σ_{i∈Λ} a(u^i) y_i = c, let y_0 = Σ_{i∈Λ} b(u^i) y_i, y = Σ_{i∈Λ} y_i > 0 (assuming c ≠ 0), and p = Σ_{i∈Λ} (a(u^i)^T, −b(u^i))^T y_i / y. Then p ∈ S and p_0 y_0 + p y = q. Conversely, if p_0 y_0 + p y = q, where y ≥ 0 and p ∈ S, then there exist u^i ∈ U and λ_i ≥ 0 for i = 1, …, t (t ≤ n + 1) such that Σ_{i=1}^t λ_i = 1 and p = Σ_{i=1}^t (a(u^i)^T, −b(u^i))^T λ_i (see, e.g., Rockafellar (1972, p. 155)).

Let y_i = λ_i y ≥ 0 for i = 1, …, t. Then Σ_{i=1}^t a(u^i) y_i = c and y_0 = Σ_{i=1}^t b(u^i) y_i. Hence, (SID) and (GLP) are equivalent.
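As a quick numerical check of the (SID)-to-(GLP) direction of this equivalence, the sketch below uses invented data (n = 2; the vectors a(u^1), a(u^2), the scalars b(u^1), b(u^2), and the multipliers y_i are all hypothetical) and verifies p_0 y_0 + p y = q:

```python
# Hypothetical finite dual solution: n = 2, two parameter points u^1, u^2.
a = [[1.0, 0.0], [0.0, 1.0]]       # a(u^1), a(u^2)
b = [0.5, -1.0]                    # b(u^1), b(u^2)
ys = [2.0, 3.0]                    # y_1, y_2 >= 0
c = [sum(a[i][j] * ys[i] for i in range(2)) for j in range(2)]  # c = sum a(u^i) y_i

y0 = sum(b[i] * ys[i] for i in range(2))   # y_0 = sum b(u^i) y_i
y = sum(ys)                                # y = sum y_i > 0
# p = sum (a(u^i)^T, -b(u^i))^T y_i / y, a point of S
p = [sum(a[i][j] * ys[i] for i in range(2)) / y for j in range(2)]
p.append(sum(-b[i] * ys[i] for i in range(2)) / y)
p0 = [0.0, 0.0, 1.0]
q = c + [0.0]

lhs = [p0[j] * y0 + p[j] * y for j in range(3)]   # p0*y0 + p*y
assert all(abs(lhs[j] - q[j]) < 1e-12 for j in range(3))
```

The first n components reproduce Σ a(u^i) y_i = c, and in the last component p_0 y_0 cancels against p_{n+1} y = −y_0.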

The first algorithm for solving (GLP), and therefore (SID), was given by Dantzig and Wolfe (see Dantzig (1963, Chapters 22 and 24)). Starting with a nondegenerate basic feasible solution of (GLP), the Dantzig-Wolfe algorithm solves (GLP) by generating a p^k with the smallest negative reduced cost at each iteration k and then solving the new restricted master program. Although one can start this algorithm with any basic feasible solution, the nondegeneracy of the starting basic feasible solution is required by the convergence proof. How to find a starting nondegenerate basic feasible solution is still an open question.

Gustafson and Kortanek proposed an algorithm, the alternating procedure, for solving the primal semi-infinite linear program (SIL) (see Gustafson and Kortanek (1973)) under the following assumptions:

(A1) U is compact,
(A2) a(u) and b(u) are continuous,
(A3) the feasible region of (SIL) is nonempty,
(A4) the feasible region of (SIL) is contained in T, where T ⊆ R^n is a nonempty polytope.

The alternating procedure solves (SIL) as follows: let x^0 be an optimal solution of the linear program

minimize c^T x

subject to x ∈ T.

At iteration k, if a(u)^T x^k − b(u) ≥ 0 for all u ∈ U, then x^k is an optimal solution of (SIL) and the procedure is stopped. Otherwise, find a

u^{k+1} ∈ arg min{a(u)^T x^k − b(u) : u ∈ U}


and let x^{k+1} be an optimal solution of the linear program

minimize c^T x

subject to a(u^i)^T x − b(u^i) ≥ 0 for all i = 1, …, k + 1,

x ∈ T.

The convergence proof of the alternating procedure provided by Gustafson and Kortanek (1973) is incomplete. Under the above assumptions, they proved that c^T x* = v(SIL), where x* is a cluster point of the sequence x^k generated by the alternating procedure and v(SIL) stands for the optimal objective function value of program (SIL). But they did not prove the feasibility of x*.

There are many types of methods for solving semi-infinite programs (see, e.g., Hettich (1979)). A discretization-type method generates Ū, a finite subset of the infinite parameter set U, and finds an x̄ that minimizes f(x) over the finitely many constraints g(x, u) ≥ 0 for all u ∈ Ū. If the grid size d = max_{u∈U} min_{ū∈Ū} ‖u − ū‖ is sufficiently small, then it is likely that x̄ will be a good approximate optimal solution to (SIP). The discretization-type methods are limited to those U having nice structures and generally fail when U is unbounded. A semi-continuous-type method (or exchange method) generates an increasing sequence of finite subsets of U, {U^k : k = 1, 2, …} with U^k ⊆ U^{k+1} for all k, and finds x^k that minimizes f(x) over g(x, u) ≥ 0 for all u ∈ U^k, where {U^k : k = 1, 2, …} are generated in such a way that any cluster point of the sequence {x^k : k = 1, 2, …} is an optimal solution of (SIP). The Dantzig-Wolfe algorithm and the alternating procedure are semi-continuous-type methods. It is well known that discretization methods and semi-continuous methods are closely related to cutting plane methods (see, e.g., Hettich (1979)). A continuous-type method typically reduces the semi-infinite program to an ordinary convex program in the neighborhood of an optimal solution of the semi-infinite program. The drawback of continuous-type methods is that the assumptions are often restrictive and convergence is often only local. Some recent globally convergent methods for solving semi-infinite programs include: the exact penalty function method by Conn and Gould (1987), the projected Lagrangian method by Coope and Watson (1985), and the new primal algorithm by Anderson (1985).

Herein we present a one-phase algorithm for solving a large class of semi-infinite linear programs. This algorithm can be considered as an extension of the Dantzig-Wolfe algorithm and the alternating procedure with the following advantages:

1. It handles feasibility and optimality together and thus eliminates the extra work of finding a starting feasible solution that might be far away from an optimal solution.

2. It does not impose any restriction on U, a(u), and b(u), i.e., U could be any nonempty set in R^m, a(u) could be any vector in R^n, and b(u) could be any real number.

3. The cut it generates at each iteration need not be the most violated cut or near the most violated cut. The strength of the cuts can be controlled by a parameter.


4. It solves the primal program and the dual program simultaneously.

The rest of this paper is organized as follows. In Section 2 we present an algorithm for solving a class of semi-infinite programs and prove its convergence in two steps. First, we show that the algorithm can find an ε-optimal solution after finitely many iterations. Then, we use this result to show that it can find an optimal solution in the limit. In Section 3 we prove a duality theorem and show that this algorithm solves the primal and the dual programs simultaneously. In Section 4 we estimate how good an ε-optimal solution is compared to an optimal solution and give an upper bound on the total number of major iterations needed for finding an ε-optimal solution. In Section 5 we show that this algorithm can be generalized to solve a class of nonlinear semi-infinite programming problems. In Section 6 we show how to use this algorithm to solve convex programs.

2. An algorithm for semi-infinite linear programming

We first present a one-phase algorithm for solving the primal problem (SIL) of semi-infinite linear programming under the assumption that the feasible region is contained in a nonempty polytope T, where T = {x ∈ R^n : a(u)^T x ≤ b(u) for all u ∈ Ū}.

Algorithm 1
Step 1
Let k := 0;
let α be a constant such that 0 < α < 1;
let (LP(k)) be the linear program

minimize c^T x

subject to x ∈ T.

Step 2
If (LP(k)) is infeasible, then (SIL) is infeasible, stop.
Else, find an optimal solution x^k of (LP(k)); if a(u)^T x^k − b(u) ≥ 0 for all u ∈ U, then x^k is an optimal solution of (SIL), stop.

Step 3
Find a u^k ∈ U such that

either a(u^k)^T x^k − b(u^k) ≤ −α

or a(u^k)^T x^k − b(u^k) ≤ α · inf{a(u)^T x^k − b(u) : u ∈ U};

form (LP(k+1)) by adding a cut, a(u^k)^T x − b(u^k) ≥ 0, to (LP(k));
k := k + 1;
go to Step 2.
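As a concrete illustration of Steps 1-3, here is a minimal Python sketch on a toy instance of our own (not from the paper): U = [0, 2π], a(u) = (cos u, sin u), b(u) ≡ −1, so the constraints a(u)^T x − b(u) ≥ 0 describe the unit disk; c = (1, 0) and T = [−2, 2]^2. The restricted program (LP(k)) is solved here by brute-force vertex enumeration, which is adequate only for this two-variable demo (a real implementation would call an LP solver), and the separation subproblem is solved in closed form, corresponding to α close to one.

```python
import math

def solve_lp_2d(c, cons):
    """Solve min c.x s.t. a1*x1 + a2*x2 >= b for each (a1, a2, b) in cons,
    by enumerating vertices (intersections of pairs of constraint boundaries)."""
    best_val, best_x = None, None
    for i in range(len(cons)):
        for j in range(i + 1, len(cons)):
            (a11, a12, b1), (a21, a22, b2) = cons[i], cons[j]
            det = a11 * a22 - a12 * a21
            if abs(det) < 1e-12:
                continue  # parallel boundaries: no vertex
            x1 = (b1 * a22 - b2 * a12) / det   # Cramer's rule
            x2 = (a11 * b2 - a21 * b1) / det
            if all(a * x1 + d * x2 >= b - 1e-9 for (a, d, b) in cons):
                val = c[0] * x1 + c[1] * x2
                if best_val is None or val < best_val - 1e-12:
                    best_val, best_x = val, (x1, x2)
    return best_x  # None if (LP(k)) is infeasible

def algorithm1(eps=1e-6, max_iter=100):
    # T = [-2, 2]^2 written as four inequalities a.x >= b
    cons = [(1.0, 0.0, -2.0), (-1.0, 0.0, -2.0), (0.0, 1.0, -2.0), (0.0, -1.0, -2.0)]
    x = None
    for _ in range(max_iter):
        x = solve_lp_2d((1.0, 0.0), cons)          # Step 2: solve (LP(k))
        if x is None:
            return None                            # (SIL) infeasible
        # Separation: inf_u cos(u)*x1 + sin(u)*x2 + 1 = 1 - ||x||,
        # attained at the angle of -x (closed form for this toy instance).
        if 1.0 - math.hypot(x[0], x[1]) >= -eps:
            return x                               # eps-optimal, stop
        u = math.atan2(-x[1], -x[0])               # Step 3: pick u^k
        cons.append((math.cos(u), math.sin(u), -1.0))  # cut a(u^k).x >= b(u^k)
    return x

x = algorithm1()
```

On this instance each cut is a tangent line to the unit disk, and the iterates converge to the optimal solution (−1, 0) with optimal value −1.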


Comments on Algorithm 1

(a) We assume that there are algorithms that can find an approximate solution of the nonlinear program inf{a(u)^T x^k − b(u) : u ∈ U}. Namely, given any small positive number δ, these algorithms can find a ū ∈ U such that

a(ū)^T x^k − b(ū) < inf{a(u)^T x^k − b(u) : u ∈ U} + δ.

(b) In Step 2, to test whether a(u)^T x^k − b(u) ≥ 0 for all u ∈ U, one can proceed as follows: given a small positive number δ, find a ū ∈ U such that

a(ū)^T x^k − b(ū) < inf{a(u)^T x^k − b(u) : u ∈ U} + δ.

If a(ū)^T x^k − b(ū) ≥ δ, then inf{a(u)^T x^k − b(u) : u ∈ U} ≥ 0 and x^k is optimal. If a(ū)^T x^k − b(ū) < 0, then inf{a(u)^T x^k − b(u) : u ∈ U} < 0 and one can go to Step 3. Otherwise, 0 ≤ a(ū)^T x^k − b(ū) ≤ δ, which implies that inf{a(u)^T x^k − b(u) : u ∈ U} ≥ −δ, and therefore x^k is a δ-optimal solution (see Definition 1 below). If one wants an exact optimal solution, then one needs an algorithm that can find inf{a(u)^T x^k − b(u) : u ∈ U} in the last few iterations.

(c) In Step 3, either there is a u^k that satisfies a(u^k)^T x^k − b(u^k) ≤ −α, or else 0 > inf{a(u)^T x^k − b(u) : u ∈ U} ≥ −α > −∞. In the latter case,

α · inf{a(u)^T x^k − b(u) : u ∈ U} > inf{a(u)^T x^k − b(u) : u ∈ U} (since α < 1),

and hence there exists a u^k ∈ U such that

a(u^k)^T x^k − b(u^k) ≤ α · inf{a(u)^T x^k − b(u) : u ∈ U}. (1)

To find a u^k satisfying (1) by any algorithm that has the property stated in (a), a stopping rule is

a(u^k)^T x^k − b(u^k) < inf{a(u)^T x^k − b(u) : u ∈ U} + δ and
δ ≤ (α − 1)(a(u^k)^T x^k − b(u^k)).

Indeed, if this rule is satisfied, then

δ ≤ (α − 1)(a(u^k)^T x^k − b(u^k)) ≤ (α − 1) inf{a(u)^T x^k − b(u) : u ∈ U},

and therefore,

a(u^k)^T x^k − b(u^k) < inf{a(u)^T x^k − b(u) : u ∈ U} + δ
≤ α · inf{a(u)^T x^k − b(u) : u ∈ U}.
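This stopping rule can be wrapped in a small driver loop that halves δ until the rule certifies an α-cut. In the sketch below (the names are ours), sep_approx stands for any black-box routine with the property stated in (a): sep_approx(delta) returns a pair (u, v) with v = a(u)^T x^k − b(u) ≤ inf{a(u)^T x^k − b(u) : u ∈ U} + delta.

```python
def alpha_cut(sep_approx, alpha, delta0=1.0):
    """Return (u, v) with v = a(u)^T x^k - b(u) <= alpha * inf{a(u)^T x^k - b(u)},
    assuming that infimum is negative.  Implements the stopping rule of Comment (c)."""
    delta = delta0
    while True:
        u, v = sep_approx(delta)
        # rule: delta <= (alpha - 1) * v  (both sides are positive when v < 0)
        if v < 0 and delta <= (alpha - 1.0) * v:
            return u, v
        delta /= 2.0  # tighten the subproblem tolerance and retry

# Toy check: true infimum is -1; the oracle returns a value within delta/2 of it.
u, v = alpha_cut(lambda d: (None, -1.0 + d / 2.0), alpha=0.5)
```

In the toy check any accepted v must satisfy v ≤ α · (−1) = −0.5; the loop accepts at δ = 0.25 with v = −0.875.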

(d) The most violated cut a(u*)^T x − b(u*) ≥ 0, where a(u*)^T x^k − b(u*) = inf{a(u)^T x^k − b(u) : u ∈ U}, may not exist. Even if it does exist, it is in general very hard to find. The parameter α specified in Step 1 controls the strength of the cut generated in Step 3. When α is close to one, the cut is near the most violated cut (or could be the most violated cut if that cut exists). When α is sufficiently close to zero, the cut is very weak but the algorithm still converges.


It is easy to see that, if Algorithm 1 stops at a certain iteration k, then it either finds an optimal solution of (SIL) or detects the infeasibility of (SIL). Next, we are going to prove that if the algorithm does not stop finitely, then any cluster point of the sequence x^k, k = 1, 2, …, generated in Step 2 is an optimal solution of (SIL).

Definition 1 (ε-optimal solution). Let ε > 0. A vector x̄ is an ε-optimal solution of (SIL) if a(u)^T x̄ − b(u) ≥ −ε for all u ∈ U and c^T x̄ ≤ v(SIL), where v(SIL) denotes the optimal objective function value of (SIL).

Theorem 1. Suppose that {‖(a(u), b(u))‖_∞ : u ∈ U} is bounded by K > 0 and that the algorithm does not stop finitely. Let the sequence x^1, x^2, …, be generated by Step 2 of Algorithm 1. Then:

(a) (Finite ε-convergence) For any ε satisfying 0 < ε < α, there exists an integer N(ε) such that x^k is an ε-optimal solution of (SIL) for all k ≥ N(ε).

(b) (Convergence) Any cluster point of the sequence x^1, x^2, …, is an optimal solution of (SIL).

Proof. Let W = {s ∈ R^{n+1} : ‖s‖_∞ ≤ K} and G(x, y, z) = y^T x − z, where x, y ∈ R^n and z ∈ R^1. Then G(x, y, z) is uniformly continuous on T × W. If (a) does not hold, then for some 0 < ε' < α, N(ε') does not exist. Therefore, the set J = {k : x^k is not an ε'-optimal solution of (SIL)} contains infinitely many integers. Let η = αε' > 0. By the uniform continuity of G(x, y, z), there exists δ(η) > 0 such that

‖(x, y, z) − (x̄, ȳ, z̄)‖ < δ(η) implies that |G(x, y, z) − G(x̄, ȳ, z̄)| < η
for all (x, y, z) and (x̄, ȳ, z̄) ∈ T × W. (2)

Since {(a(u), b(u)) : u ∈ U} is bounded and J is infinite, we can find i, j ∈ J with i < j such that

‖(x^j, a(u^i), b(u^i)) − (x^j, a(u^j), b(u^j))‖ < δ(η). (3)

Because i < j, we have

G(x^j, a(u^i), b(u^i)) = a(u^i)^T x^j − b(u^i) ≥ 0. (4)

Because j ∈ J and c^T x^j ≤ v(SIL), there exists a ū ∈ U such that a(ū)^T x^j − b(ū) < −ε' and, thus, inf{a(u)^T x^j − b(u) : u ∈ U} < −ε'. Therefore, by Step 3 of the algorithm we have either

G(x^j, a(u^j), b(u^j)) = a(u^j)^T x^j − b(u^j) ≤ −α < −η (5a)

or

G(x^j, a(u^j), b(u^j)) = a(u^j)^T x^j − b(u^j) ≤ α · inf{a(u)^T x^j − b(u) : u ∈ U} < −αε' = −η. (5b)

However, (3), (4), (5a), and (5b) imply that ‖(x^j, a(u^i), b(u^i)) − (x^j, a(u^j), b(u^j))‖ < δ(η) while |G(x^j, a(u^i), b(u^i)) − G(x^j, a(u^j), b(u^j))| ≥ η, which contradicts (2). Consequently, (a) is proved.


It remains to prove (b). Since the sequence x^k, k = 1, 2, …, is bounded, it has at least one cluster point. Let x* be a cluster point of this sequence. From (a) we know that for any ε satisfying 0 < ε < α and k ≥ N(ε) we have a(u)^T x^k − b(u) ≥ −ε for all u ∈ U. Hence, for any ε satisfying 0 < ε < α, we have a(u)^T x* − b(u) ≥ −ε for all u ∈ U, which implies that x* is a feasible solution of (SIL). To complete the proof, we need only show that c^T x* = v(SIL). First, we know that c^T x* ≥ v(SIL) since x* is a feasible solution of (SIL). Secondly, we know that c^T x^k ≤ v(SIL) holds for all k since the feasible region of (LP(k)) contains that of (SIL). It follows that c^T x* = v(SIL). □

Remark 1. 1. Without loss of generality, we may always assume that {‖(a(u), b(u))‖_∞ : u ∈ U} is bounded, for we can pre-multiply (a(u), b(u)) by a positive number, e.g., ‖(a(u), b(u))‖^{-1}, and the feasible region remains the same.

2. We proved the convergence of Algorithm 1 without assuming the continuity of a(u) and b(u) and without assuming the compactness of U. We are able to do so because G(x, y, z) = y^T x − z is uniformly continuous on T × W.

In the rest of this section, we discuss how to solve a semi-infinite linear program that does not have the constraint x ∈ T.

Suppose we know that for ū^1, …, ū^t ∈ U the linear program

(LP(0)') minimize c^T x

subject to a(ū^i)^T x ≥ b(ū^i) for i = 1, …, t

has an optimal solution. Then we can start the algorithm with (LP(0)') as (LP(0)).

Since the feasible region of (LP(k)) contains that of (LP(k+1)), either (LP(k+1)) is infeasible or (LP(k+1)) has an optimal solution. Hence, Algorithm 1 can be carried out. If the algorithm stops finitely, then it correctly solves the problem. Otherwise, a sequence x^1, x^2, …, will be generated. It is easy to see that, if this sequence is bounded, then both (a) and (b) of Theorem 1 hold. If this sequence is not bounded but has a cluster point, then (b) of Theorem 1 still holds and (a) should be stated as:

(a)' For any 0 < ε < α, there exists an x^i in this sequence such that x^i is an ε-optimal solution of (SIL).

A sufficient condition that guarantees the boundedness of the sequence x^1, x^2, …, was given by Dantzig (see Dantzig (1963, pp. 476-477)) for the Dantzig-Wolfe algorithm. This condition works for our algorithm after some minor modifications.

Lemma 1. Let (DP(k)) be the dual program of (LP(k)) for all k. If (SIL) is feasible and a nondegenerate basic feasible solution exists for some (DP(k)), then the sequence x^k is bounded.

Proof. As in ordinary linear programming, if (SIL) is feasible, then v(SID) < +∞. Let B^k be the basis associated with a nondegenerate basic feasible solution of some


(DP(k)) and b^k be the vector of corresponding cost coefficients. Then, by nondegeneracy, we have (B^k)^{-1} c > 0. Let y^i = (x^i)^T B^k − (b^k)^T for all i = 1, 2, …. Since x^i is feasible for (LP(i)) and for all i ≥ k the feasible region of (LP(i)) is contained in that of (LP(k)), we know that y^i ≥ 0 for all i ≥ k. Hence, for all i ≥ k, we have

0 ≤ y^i (B^k)^{-1} c = (x^i)^T c − (b^k)^T (B^k)^{-1} c = v(DP(i)) − (b^k)^T (B^k)^{-1} c
≤ v(SID) − (b^k)^T (B^k)^{-1} c.

It follows that y^i is bounded and thus x^i is bounded because B^k is invertible. □

Other simple sufficient conditions for the boundedness of x^k or for the existence of a cluster point of x^k are:

Lemma 2. (1) If a(u^j) < 0 for some u^j generated by the algorithm and x^k ≥ 0 for all sufficiently large k, then the sequence x^k is bounded.

(2) If a(u^j) < 0 for some u^j generated by the algorithm and x^k ≥ 0 for infinitely many k, then the sequence x^k has a cluster point.

(3) The assumption, a(u^j) < 0 for some u^j generated by the algorithm, in (1) and (2) may be replaced by: there exist u^{j_1}, …, u^{j_l} generated by the algorithm such that a(u^{j_i}) ≤ 0 for i = 1, …, l and Σ_{i=1}^l a(u^{j_i}) < 0.

Proof. (1) For all x^k ≥ 0 and k ≥ j, we have, by the feasibility of x^k, a(u^j)^T x^k ≥ b(u^j). Therefore, x_i^k ≤ b(u^j)/a(u^j)_i for all i = 1, …, n, and thus (1) is proved. (2) and (3) can be proved similarly. □

3. A duality theorem

It is well known that, if a primal linear program has an optimal solution, then its dual program also has an optimal solution and the optimal objective function values of the primal program and the dual program are equal. This nice property does not hold for all semi-infinite linear programs (see, e.g., Duffin and Karlovitz (1965)). Herein we show that if Algorithm 1 converges for a primal semi-infinite linear

program, then there is no duality gap between the primal program and its dual program and Algorithm 1 solves the primal program and the dual program simul- taneously.

Let (DP(k)) be the dual program of (LP(k)) for all k, where (LP(k)) is generated by Algorithm 1, and let y^k be an optimal solution of (DP(k)). Since (LP(0)) has an optimal solution, we know that (DP(0)) is feasible and thus (DP(k)) is feasible for all k. If at a certain iteration k, (LP(k)) is infeasible, then by the duality theory of linear programming, we have v(DP(k)) = ∞. Since the feasible region of (DP(k)) is a subset of that of (SID), we know that in this case v(SID) = ∞. If at a certain iteration k, x^k is an optimal solution of (SIL), then by the duality theory of linear programming, we have v(SIL) = c^T x^k = v(LP(k)) = v(DP(k)). On the other hand,


it is easy to show that v(SID) ≤ v(SIL). Since y^k is a feasible solution of (SID) and the objective function value of (SID) at y^k is equal to v(SIL), it follows that y^k is an optimal solution of (SID). If Algorithm 1 generates an infinite sequence x^k, k = 1, 2, …, with a cluster point x*, then we have

c^T x^k = v(LP(k)) = v(DP(k)) ≤ v(SID) ≤ v(SIL). (6)

Since the feasible region of (LP(k)) contains that of (LP(k+1)), v(LP(k)) = c^T x^k is nondecreasing. Hence, we have lim_{k→∞} c^T x^k = c^T x*. By Theorem 1, c^T x* = v(SIL). It follows from (6) that c^T x* = v(SIL) = v(SID). Now suppose that the method used for solving (LP(k)) in Step 2 can solve (LP(k)) and (DP(k)) simultaneously (e.g., the simplex method) and y^k is an optimal solution of (DP(k)) generated by this method. Then y^k, k = 1, 2, …, is a sequence of feasible solutions of (SID) on which the objective function tends to v(SID). Thus, the following theorem is established.

Theorem 2. Suppose that Algorithm 1 converges.
(1) If (SIL) is infeasible, then v(SID) = ∞.
(2) If (SIL) has an optimal solution, then v(SIL) = v(SID).
(3) Algorithm 1 solves (SIL) and (SID) (in the asymptotic sense mentioned above) simultaneously. □

Corollary 1. Suppose that {(a(u), b(u)) : u ∈ U} is compact, the feasible region of (SIL) has an interior point, v(SID) is finite, and Algorithm 1 converges. Then v(SID) can be attained.

Proof. Let x̄ be an interior point of the feasible region of (SIL); then a(u)^T x̄ > b(u) for all u ∈ U. As {(a(u), b(u)) : u ∈ U} is compact, we know that there exists a positive number θ such that a(u)^T x̄ − b(u) ≥ θ > 0 for all u ∈ U. Let y^k be the optimal solution of (DP(k)) generated by Algorithm 1 and Λ_k be the index set such that y_i^k > 0 if and only if i ∈ Λ_k. If we solve (LP(k)) and (DP(k)) by the simplex method, then Λ_k contains no more than n elements. Let Λ_k = {i_1^k, …, i_{|Λ_k|}^k}, where |Λ_k| denotes the cardinal number of Λ_k, and let Y^k = (y_{i_1^k}^k, …, y_{i_{|Λ_k|}^k}^k, 0, …, 0) ∈ R^n; that is, the first |Λ_k| components of Y^k are y_{i_1^k}^k, …, y_{i_{|Λ_k|}^k}^k and the rest are zero in the case |Λ_k| < n.

We claim that {Y^k ∈ R^n : k = 1, 2, …} is bounded. Indeed,

θ Σ_{i∈Λ_k} y_i^k ≤ Σ_{i∈Λ_k} a(u^i)^T x̄ y_i^k − Σ_{i∈Λ_k} b(u^i) y_i^k = c^T x̄ − v(DP(k)) ≤ c^T x̄ − v(DP(1)).

Let

A^k = (a(u^{i_1^k}), …, a(u^{i_{|Λ_k|}^k}), 0, …, 0) ∈ R^{n×n}

and

b^k = (b(u^{i_1^k}), …, b(u^{i_{|Λ_k|}^k}), 0, …, 0)^T ∈ R^n


for all k = 1, 2, …. Then we have A^k Y^k = c and (b^k)^T Y^k = v(DP(k)) for all k = 1, 2, …. Taking subsequences, we may assume that

lim_{k→∞} A^k = (a(u^{j_1}), …, a(u^{j_r}), 0, …, 0) ≡ A*,

lim_{k→∞} b^k = (b(u^{j_1}), …, b(u^{j_r}), 0, …, 0)^T ≡ b*,

and

lim_{k→∞} Y^k = (Y_1*, …, Y_r*, 0, …, 0) ≡ Y*,

where r ≤ n. Then A* Y* = c, (b*)^T Y* = v(SID), and thus y_{j_1} = Y_1*, …, y_{j_r} = Y_r*, y_i = 0 for all i ∉ {j_1, …, j_r} is an optimal solution of (SID). □

A more general zero duality gap theorem was given by Karney (1983). Duality gaps of semi-infinite programs were also studied by Charnes, Cooper and Kortanek (1965), Duffin and Karlovitz (1965), and many others.

4. Estimation of upper bounds

We have shown that for any ε satisfying 0 < ε < α, the algorithm can find an ε-optimal solution of (SIL) after finitely many iterations (Theorem 1). Now we estimate how good an ε-optimal solution is compared to an optimal solution. In this section, we assume that U is compact, a(u) and b(u) are continuous on U, and the feasible region of (SIL) is n-dimensional. First, we state two lemmas that we are going to use in the following discussion.

Lemma 3A (Hoffman). Let A be an (m × n)-matrix and b an m-vector. If Ax ≥ b has a solution, then there exists a constant τ such that for any x ∈ R^n, there exists an x^0 satisfying

Ax^0 ≥ b and ‖x − x^0‖ ≤ τ‖(Ax − b)^−‖_∞,

where (Ax − b)^− stands for the negative part of the vector (Ax − b); see Hoffman (1952). □

Lemma 3B. If {(a(u), b(u)) : u ∈ U} is compact and F = {x ∈ R^n : a(u)^T x ≥ b(u), u ∈ U} is n-dimensional, then a given halfspace (a*)^T x ≥ b* (where a* ≠ 0) contains F if and only if there exist u^i ∈ U and λ_i > 0, i = 1, …, t (where t ≤ n), such that

λ_1 a(u^1) + ⋯ + λ_t a(u^t) = a*

and

λ_1 b(u^1) + ⋯ + λ_t b(u^t) ≥ b*. □

(See, e.g., Rockafellar (1972, p. 160).)


Definition 2. Let g: F → R^n. A vector x ∈ F is a stationary point of the pair (F, g) if x^T g(x) ≤ y^T g(x) for all y ∈ F.

Suppose that x^k is an ε-optimal solution of (SIL). Consider an auxiliary program

(P) min{‖x − x^k‖^2 : a(u)^T x − b(u) ≥ 0 for all u ∈ U}.

Let x* be the optimal solution of the auxiliary program (P). It is easy to see that x* is a stationary point of (F, ∇‖x − x^k‖^2), where F = {x ∈ R^n : a(u)^T x ≥ b(u), u ∈ U} and ∇‖x − x^k‖^2 = 2x − 2x^k is the gradient of ‖x − x^k‖^2. Therefore, the halfspace

2(x* − x^k)^T x ≥ 2(x* − x^k)^T x*

contains F. It follows from Lemma 3B that there exist u^i ∈ U and λ_i > 0, i = 1, …, t (where t ≤ n), such that

λ_1 a(u^1) + ⋯ + λ_t a(u^t) = 2(x* − x^k),

λ_1 b(u^1) + ⋯ + λ_t b(u^t) ≥ 2(x* − x^k)^T x*. (7a)

We claim that

a(u^i)^T x* = b(u^i) for i = 1, …, t. (7b)

If not, say, a(u^i)^T x* > b(u^i) for i = 1, …, r, and a(u^i)^T x* = b(u^i) for i = r + 1, …, t. Then we have

2(x* − x^k)^T x* = Σ_{i=1}^t λ_i a(u^i)^T x* > Σ_{i=1}^t λ_i b(u^i) ≥ 2(x* − x^k)^T x*,

which is a contradiction. Thus (7b) is proven. Since ‖x − x^k‖^2 is a convex function of x, (7a) and (7b) imply that x* is an optimal solution of the following program:

(P(k)) min{‖x − x^k‖^2 : A_k x ≥ b_k},

where A_k = (a(u^1), …, a(u^t))^T and b_k = (b(u^1), …, b(u^t))^T. Applying Hoffman's Lemma to the finite system of linear inequalities A_k x ≥ b_k, we know that there exist a constant τ and an x^0 satisfying

A_k x^0 ≥ b_k and ‖x^k − x^0‖ ≤ τ‖(A_k x^k − b_k)^−‖_∞ ≤ τε.

Because x* is an optimal solution of (P(k)) and x^0 is a feasible solution of (P(k)), we have

‖x^k − x*‖ ≤ ‖x^k − x^0‖,

and therefore,

|c^T x^k − c^T x*| ≤ τε‖c‖ and −τε‖c‖ + c^T x* ≤ c^T x^k ≤ v(SIL) ≤ c^T x*.

Thus, the following theorem is proved.


Theorem 3. The distance between an ε-optimal solution and the feasible region is no greater than τε, and the difference between the objective function value of an ε-optimal solution and the optimal objective function value is no greater than τε‖c‖. □

There is a simple way to estimate ε for the kth approximate solution x^k such that x^k is an ε-optimal solution. Suppose that

a(u^k)^T x^k − b(u^k) ≤ α · inf{a(u)^T x^k − b(u) : u ∈ U}

of Step 3 holds for x^k and u^k. Let

w_k = min_{i<k} {|(a(u^i)^T x^k − b(u^i)) − (a(u^k)^T x^k − b(u^k))|}.

Since a(u^i)^T x^k − b(u^i) ≥ 0 for all i < k, we have a(u^k)^T x^k − b(u^k) ≥ −w_k. This implies that inf{a(u)^T x^k − b(u) : u ∈ U} ≥ −w_k α^{-1}, i.e., x^k is a (w_k α^{-1})-optimal solution.
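In code (the names are ours), with the slacks a(u^i)^T x^k − b(u^i) recorded for the earlier cut points, the estimate w_k α^{-1} is a one-liner:

```python
def eps_estimate(prev_slacks, new_slack, alpha):
    """Estimate eps such that x^k is eps-optimal: eps = w_k / alpha, where
    w_k = min_{i<k} |slack_i - slack_k| and prev_slacks are all >= 0."""
    w_k = min(abs(s - new_slack) for s in prev_slacks)
    return w_k / alpha

# Example: slacks 0.5 and 0.2 at the old cut points, -0.1 at the new one, alpha = 0.5
e = eps_estimate([0.5, 0.2], -0.1, 0.5)
```

Here w_k = 0.3, so x^k is certified (0.3/0.5) = 0.6-optimal.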

Finally, we give an upper bound on the total number of iterations needed for finding an ε-optimal solution. We need the following assumptions:

(A5) a(u) and b(u) are Lipschitz continuous on U with Lipschitz constants L2 and L3 respectively, i.e., ‖a(u) − a(ū)‖ ≤ L2‖u − ū‖ and |b(u) − b(ū)| ≤ L3‖u − ū‖ for all u, ū ∈ U.

(A6) ‖x^k‖ ≤ K for all k = 1, 2, …, where x^k is generated by Step 2 of Algorithm 1.

(A7) Every cut, a(u^k)^T x − b(u^k) ≥ 0, generated by Step 3 of Algorithm 1 is the most violated cut.

Lemma 4. If assumptions (A5) and (A6) are satisfied and a(ū)^T x^k − b(ū) ≥ 0, then a(u)^T x^k − b(u) ≥ −ε for all u that satisfy ‖u − ū‖ ≤ ε(L2K + L3)^{-1}.

Proof.

|a(u)^T x^k − b(u) − (a(ū)^T x^k − b(ū))| ≤ ‖a(u) − a(ū)‖ ‖x^k‖ + |b(u) − b(ū)| ≤ (L2K + L3)‖u − ū‖ ≤ ε. □

Let B_m(u, r) denote the m-dimensional Euclidean ball with center u ∈ R^m and radius r. If x^k is not an ε-optimal solution, then a(u^k)^T x^k − b(u^k) < −ε (by (A7)). Since a(u^i)^T x^k − b(u^i) ≥ 0 for all i < k, we have, by Lemma 4,

u^k ∉ ⋃_{i=1}^{k−1} B_m(u^i, ε(L2K + L3)^{-1}). (8)

On the other hand, since U ⊆ R^m is compact, we can find a box Q ⊆ R^m containing U. Let β be the side length of Q and M be the smallest integer that is greater than √m β/(ε(L2K + L3)^{-1}), i.e.,

M = ⌊√m β/(ε(L2K + L3)^{-1})⌋ + 1.

Let us divide Q into M^m subboxes of the same m-content, Q^i, i = 1, …, M^m, satisfying int(Q^i) ∩ int(Q^j) = ∅ for all i ≠ j, where int(Q^i) denotes the interior of


Q^i. For every u^k generated by the algorithm, we can find a subbox Q^{i_k} containing u^k. Since the diameter of Q^{i_k} is √m β/M and √m β/M ≤ ε(L2K + L3)^{-1}, we have

B_m(u^k, ε(L2K + L3)^{-1}) ⊇ Q^{i_k}. (9)

From (8) and (9) we know that u^k ∈ ⋃_{i=1}^{M^m} Q^i and Q^{i_k} ∉ {Q^{i_1}, …, Q^{i_{k−1}}}. Consequently, the following theorem is established.

Theorem 4. If assumptions (A5), (A6), and (A7) are satisfied, then an upper bound on the number of major iterations needed for finding an ε-optimal solution is (⌊√m β/(ε(L2K + L3)^{-1})⌋ + 1)^m, where β is the side length of the box Q ⊇ U. □
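Plugging hypothetical constants into the bound of Theorem 4 (the function name and the sample values below are ours; r is the ball radius ε(L2K + L3)^{-1} from Lemma 4):

```python
import math

def iteration_bound(m, beta, L2, L3, K, eps):
    """Upper bound (floor(sqrt(m)*beta/r) + 1)^m on major iterations,
    with r = eps/(L2*K + L3) the covering radius from Lemma 4."""
    r = eps / (L2 * K + L3)
    M = math.floor(math.sqrt(m) * beta / r) + 1  # smallest integer > sqrt(m)*beta/r
    return M ** m

# Example: m = 2, unit box, L2 = L3 = K = 1, eps = 0.5 => r = 0.25, M = 6
n_iters = iteration_bound(m=2, beta=1.0, L2=1.0, L3=1.0, K=1.0, eps=0.5)
```

The bound is exponential in the parameter dimension m, as expected of a covering argument, but is independent of the variable dimension n.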

5. Nonlinear semi-infinite programming

We now show that Algorithm 1 can be generalized to solve the nonlinear semi-infinite program

(SIN) minimize f(x)

subject to g(x, u) ≥ 0 for all u ∈ U,

x ∈ S,

where U and S are nonempty compact sets, f(x) is continuous on S, and g(x, u) is continuous on S × U.

Definition 3 (ε-optimal solution). Let ε > 0. A vector x̄ is an ε-optimal solution of (SIN) if g(x̄, u) ≥ −ε for all u ∈ U and f(x̄) ≤ v(SIN), where v(SIN) denotes the optimal objective function value of (SIN).

Algorithm 2
Step 1
Let k := 0;
let α be a constant such that 0 < α < 1;
let (NP(k)) be the nonlinear program

minimize f(x)

subject to x ∈ S.

Step 2
If (NP(k)) is infeasible, then (SIN) is infeasible, stop.
Else, find an optimal solution x^k of (NP(k)); if g(x^k, u) ≥ 0 for all u ∈ U, then x^k is an optimal solution of (SIN), stop.


Step 3
Find a $u^k \in U$ such that $g(x^k, u^k) \le \alpha \cdot \min\{g(x^k, u)\colon u \in U\}$;
form (NP($k+1$)) by adding a cut, $g(x, u^k) \ge 0$, to (NP(k));
$k := k + 1$;
go to Step 2.
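The loop above can be sketched computationally. The following is an illustrative sketch, not the paper's implementation: the compact index set $U$ is replaced by a finite grid (so the separation step $\min_u g(x, u)$ is only grid-exact), $S$ is taken to be a box handled through `bounds`, and each (NP(k)) is solved with `scipy.optimize.minimize`. The helper name `algorithm2` and all problem data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def algorithm2(f, g, U_grid, x0, bounds, alpha=0.5, tol=1e-6, max_iter=100):
    """Cutting-plane sketch of Algorithm 2 (illustrative assumptions above)."""
    cuts = []                                  # the u^k defining cuts g(x, u^k) >= 0
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        # Step 2: solve (NP(k)) -- minimize f over S subject to the cuts so far.
        cons = [{'type': 'ineq', 'fun': (lambda x, u=u: g(x, u))} for u in cuts]
        res = minimize(f, x, bounds=bounds, constraints=cons)
        x = res.x
        # Step 3: separation -- find the most violated index u on the grid.
        vals = np.array([g(x, u) for u in U_grid])
        if vals.min() >= -tol:
            return x                           # g(x, u) >= 0 (up to tol) on the grid
        # Any u^k with g(x, u^k) <= alpha * min_u g(x, u) suffices; the
        # grid minimizer trivially satisfies this for any 0 < alpha < 1.
        cuts.append(U_grid[vals.argmin()])
    return x
```

For instance, minimizing $x^2$ over $S = [-2, 2]$ subject to $g(x, u) = x - u \ge 0$ for all $u \in [0, 1]$ (whose semi-infinite constraint collapses to $x \ge 1$) should return a point near $x = 1$.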

Notice that the convergence proof of Algorithm 1 is based on the uniform continuity of $G(x, y, z)$ on $T \times W$, and now $g(x, u)$ is uniformly continuous on $S \times U$. It is easy to see that the following theorem holds.

Theorem 5. Suppose that $U$ and $S$ are compact, $g(x, u)$ is continuous on $S \times U$, and the algorithm does not stop finitely. Let the sequence $x^1, x^2, \ldots$ be generated by Step 2 of Algorithm 2. Then:
(a) (Finite $\varepsilon$-convergence) For any $\varepsilon$ satisfying $0 < \varepsilon < \alpha$, there exists an integer $N(\varepsilon)$ such that $x^k$ is an $\varepsilon$-optimal solution of (SIN) for all $k \ge N(\varepsilon)$.
(b) (Convergence) Any cluster point of the sequence $x^1, x^2, \ldots$ is an optimal solution of (SIN). $\square$

Remark 2. If (NP(k)) is a hard nonlinear program, one might want to find an $\bar{x}$ such that $g(\bar{x}, u) \ge -\varepsilon$ for all $u \in U$ and $f(\bar{x}) \le v(\mathrm{SIN}) + \delta$, where $\delta > 0$. In this case, the optimal solution $x^k$ of (NP(k)) generated in Step 2 of Algorithm 2 may be replaced by any $\bar{x}^k$ that is feasible for (NP(k)) and that satisfies $f(\bar{x}^k) \le v(\mathrm{NP}(k)) + \delta$.

6. Applications to convex programming

The idea of solving an ordinary convex program by semi-infinite linear programming

comes from Dantzig (1963, Chapter 24). There he showed how to solve an ordinary convex program by generalized linear programming. His algorithm starts with a

feasible solution of the convex program that provides a nondegenerate basic feasible

solution for an initial restricted master program and then generates and solves a

sequence of linear programs. As we have seen in Section 1, a generalized linear

program can be considered as a semi-infinite linear program of the dual type. We

now show how to apply Algorithm 1 to a certain semi-infinite linear program so as

to obtain a feasible solution of the convex program. Using this feasible solution as a starting point, we then apply Algorithm 1 to another semi-infinite linear program

and obtain an optimal solution of the convex program. In particular, for a strongly

consistent convex program Algorithm 1 can find a feasible solution after a finite

number of iterations.

First, we show how to find a feasible solution of the convex program

(CP) minimize $f(x)$
subject to $g_i(x) \le 0$ for $i = 1, \ldots, m$,
$x \in T$,


where $T \subseteq \mathbb{R}^n$ is a polytope, and $f(x)$ and $g_i(x)$ for $i = 1, \ldots, m$ are real-valued convex functions on $\mathbb{R}^n$.

Let us construct a semi-infinite linear program whose optimal solutions can provide feasible solutions to (CP). Let $G(x) = (1, -g_1(x), \ldots, -g_m(x))^{\mathrm T}$ and $e^i$ be the $i$th unit vector in $\mathbb{R}^{m+1}$.

Consider the following dual pair of semi-infinite linear programs:

(SL) maximize $e^{1\mathrm T} y$
subject to $G(x)^{\mathrm T} y \le 0$ for all $x \in T$,
$0 \le y_i \le 1$ for $i = 2, \ldots, m+1$,

and, for all finite subsets $\{x^i \colon i \in \Delta\}$ of $T$,

(SD) minimize $s_1 + \cdots + s_m$
subject to $\sum_{i \in \Delta} G(x^i)\lambda_i - \sum_{i=1}^{m} e^{i+1}\mu_i + \sum_{i=1}^{m} e^{i+1}s_i = e^1$,
$\lambda_i \ge 0$ for all $i \in \Delta$,
$\mu_i, s_i \ge 0$ for $i = 1, \ldots, m$.

Lemma 5. Algorithm 1 converges for (SL) and (SD), and $v(\mathrm{SL}) = v(\mathrm{SD})$.

Proof. Let us pick an $x^0 \in T$ arbitrarily and start Algorithm 1 with the linear program

(LP(0)) maximize $e^{1\mathrm T} y$
subject to $G(x^0)^{\mathrm T} y \le 0$,
$0 \le y_i \le 1$ for $i = 2, \ldots, m+1$.

It is easy to see that (LP(0)) has an optimal solution. Then, Algorithm 1 generates a sequence of linear programs

(LP(k)) maximize $e^{1\mathrm T} y$
subject to $G(x^i)^{\mathrm T} y \le 0$ for $i = 0, 1, \ldots, k$,
$0 \le y_i \le 1$ for $i = 2, \ldots, m+1$,

where $x^i \in T$ for $i = 0, 1, \ldots, k$. The dual of (LP(k)) is

(DP(k)) minimize $s_1 + \cdots + s_m$
subject to $\sum_{i=0}^{k} G(x^i)\lambda_i - \sum_{i=1}^{m} e^{i+1}\mu_i + \sum_{i=1}^{m} e^{i+1}s_i = e^1$,
$\lambda_i \ge 0$ for $i = 0, 1, \ldots, k$,
$\mu_i, s_i \ge 0$ for $i = 1, \ldots, m$.


Let $y^k$ be the optimal solution of (LP(k)) and $(\lambda^k, \mu^k, s^k)$ be the optimal solution of (DP(k)) generated by Algorithm 1. To show that Algorithm 1 converges for (SL) and (SD), we only need to show that $y^k$, $k = 0, 1, 2, \ldots$, is bounded. Indeed, by the duality theory of linear programming, we have

$$y_1^k = e^{1\mathrm T} y^k = s_1^k + \cdots + s_m^k \ge 0.$$

On the other hand, since $y^k$ is a feasible solution of (LP(k)), we have

$$y_1^k \le g_1(x^0) y_2^k + \cdots + g_m(x^0) y_{m+1}^k \le \sum_{i=1}^{m} |g_i(x^0)|.$$

Hence, $y^k$, $k = 0, 1, 2, \ldots$, is bounded. Let $y^*$ be a cluster point of the sequence $y^k$. Then, by Theorems 1 and 2, we know that Algorithm 1 converges for (SL) and (SD) and $v(\mathrm{SL}) = v(\mathrm{SD})$. $\square$

Lemma 6. (CP) is feasible if and only if $v(\mathrm{SD}) = 0$.

Proof. If (CP) is feasible, i.e., there exists an $x^1 \in T$ satisfying $g_i(x^1) \le 0$ for $i = 1, \ldots, m$, let $\Delta = \{1\}$, $\lambda_1 = 1$, $\mu_i^1 = -g_i(x^1) \ge 0$ for $i = 1, \ldots, m$, and $s_i^1 = 0$ for $i = 1, \ldots, m$. Then, we have

$$G(x^1)\lambda_1 - \sum_{i=1}^{m} e^{i+1}\mu_i^1 + \sum_{i=1}^{m} e^{i+1}s_i^1 = e^1 \quad \text{and} \quad \sum_{i=1}^{m} s_i^1 = 0.$$

Since $v(\mathrm{SD}) \ge 0$ and we have a feasible solution of (SD) whose objective function value attains zero, we know that $v(\mathrm{SD}) = 0$. Conversely, if $v(\mathrm{SD}) = 0$, then by Lemma 5 we have

$$\lim_{k \to \infty} v(\mathrm{DP}(k)) = v(\mathrm{SD}) = v(\mathrm{SL}) = 0.$$

First, let us consider the case $v(\mathrm{DP}(k)) = 0$ for some $k$. Let $(\lambda^k, \mu^k, s^k)$ be the optimal solution of (DP(k)) generated by Algorithm 1 and $\bar{x}^k = \sum_{i=0}^{k} \lambda_i^k x^i$. By the feasibility of $\lambda^k$, we have $\lambda^k \ge 0$ and $\sum_{i=0}^{k} \lambda_i^k = 1$, and thus $\bar{x}^k \in T$. By the convexity of $g_j(x)$ and the feasibility of $(\lambda^k, \mu^k, s^k)$ (note that $v(\mathrm{DP}(k)) = 0$ forces $s^k = 0$), we have, for all $j = 1, \ldots, m$,

$$g_j(\bar{x}^k) = g_j\Bigl(\sum_{i=0}^{k} \lambda_i^k x^i\Bigr) \le \sum_{i=0}^{k} \lambda_i^k g_j(x^i) = -\mu_j^k \le 0.$$

Namely, $\bar{x}^k$ is a feasible solution of (CP). Now let us consider the case $v(\mathrm{DP}(k)) > 0$ for all $k$ and $\lim_{k \to \infty} v(\mathrm{DP}(k)) = 0$. Let $(\lambda^k, \mu^k, s^k)$ be the optimal solution of (DP(k)) generated by Algorithm 1, and $\bar{x}^k = \sum_{i=0}^{k} \lambda_i^k x^i \in T$ for all $k = 0, 1, \ldots$. Since $\sum_{j=1}^{m} s_j^k \le \sum_{j=1}^{m} s_j^0$ and $s_j^k \ge 0$ for all $k$, the sequence $s^k$ is bounded. Therefore, the sequence $\mu^k$, whose components satisfy $\mu_j^k = s_j^k - \sum_{i=0}^{k} \lambda_i^k g_j(x^i)$, is bounded. As $T$ is convex and compact, we know that the sequence $\bar{x}^k$ has a cluster point $x^*$. Without loss of generality, we assume that


$\bar{x}^k \to x^*$, $\mu^k \to \mu^*$, and $s^k \to s^*$. Then, by the convexity and continuity of $g_j(x)$ and the feasibility of $(\lambda^k, \mu^k, s^k)$, we have

$$g_j(x^*) = g_j\bigl(\lim_{k \to \infty} \bar{x}^k\bigr) = \lim_{k \to \infty} g_j(\bar{x}^k) \le \lim_{k \to \infty} \sum_{i=0}^{k} \lambda_i^k g_j(x^i) = \lim_{k \to \infty} (-\mu_j^k + s_j^k) \le 0.$$

Hence, $x^*$ is a feasible solution of (CP). $\square$
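To illustrate Lemmas 5 and 6 numerically, here is a toy sanity check (the data are made up, not from the paper): two convex constraints $g_1(x) = x - 1$ and $g_2(x) = -x$ on the polytope $T = [0, 2]$, with two cut points $x^0 = 2$ (infeasible for (CP)) and $x^1 = 0.5$ (feasible) already generated. Building (LP(k)) and (DP(k)) with `scipy.optimize.linprog` should give $v(\mathrm{LP}(k)) = v(\mathrm{DP}(k)) = 0$ and a feasible convex combination $\bar{x} = \sum_i \lambda_i x^i$, as Lemma 6 predicts.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (made up): g1(x) = x - 1, g2(x) = -x, points x^0 = 2.0, x^1 = 0.5.
g = [lambda x: x - 1.0, lambda x: -x]
pts = [2.0, 0.5]
m = len(g)
G = np.array([[1.0] + [-gj(x) for gj in g] for x in pts])   # rows = G(x^i)^T

# (LP(k)): maximize y_1  s.t.  G(x^i)^T y <= 0,  0 <= y_i <= 1 for i >= 2.
lp = linprog([-1.0] + [0.0] * m, A_ub=G, b_ub=np.zeros(len(pts)),
             bounds=[(None, None)] + [(0.0, 1.0)] * m)
v_lp = -lp.fun

# (DP(k)): variables (lambda, mu, s) >= 0; minimize s_1 + ... + s_m subject
# to  sum_i G(x^i) lam_i - sum_j e^{j+1} mu_j + sum_j e^{j+1} s_j = e^1.
E = np.vstack([np.zeros(m), np.eye(m)])        # columns e^2, ..., e^{m+1}
A_eq = np.hstack([G.T, -E, E])
c = np.concatenate([np.zeros(len(pts) + m), np.ones(m)])
dp = linprog(c, A_eq=A_eq, b_eq=np.eye(m + 1)[0],
             bounds=[(0.0, None)] * (len(pts) + 2 * m))
v_dp = dp.fun

# Lemma 6: v = 0, and x_bar = sum_i lam_i x^i is feasible for (CP).
lam = dp.x[:len(pts)]
x_bar = lam @ np.array(pts)
```

Here both optimal values vanish because $x^1 = 0.5$ is itself feasible, and every optimal $\lambda$ yields a feasible $\bar{x}$ in $[0.5, 1]$.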

We have seen how to find a feasible solution of (CP) by applying Algorithm 1 to (SL) and (SD). If (CP) has a feasible solution, then in the case $v(\mathrm{DP}(k)) = 0$ for some $k$, $\bar{x}^k$ is a feasible solution of (CP) and Algorithm 1 stops finitely; otherwise, $v(\mathrm{DP}(k)) \to 0$ as $k \to \infty$ and hence, for any $\varepsilon > 0$, there exists a number $N(\varepsilon) > 0$ such that $0 < v(\mathrm{DP}(k)) < \varepsilon$ for all $k \ge N(\varepsilon)$. This implies that for all $k \ge N(\varepsilon)$, we have

$$g_j(\bar{x}^k) \le \sum_{i=0}^{k} \lambda_i^k g_j(x^i) = -\mu_j^k + s_j^k \le \varepsilon.$$

That is, for any $\varepsilon > 0$ Algorithm 1 can find an $x$ that satisfies $g_j(x) \le \varepsilon$ for all $j = 1, \ldots, m$ after a finite number of iterations. If we know that there exists a feasible solution $x^0$ of (CP) and a small positive number $\delta$ such that $g_j(x^0) \le -\delta < 0$ for all $j = 1, \ldots, m$, then let $\hat{g}_j(x) = g_j(x) + \delta$. As we have just shown, for $\varepsilon = \delta/2 > 0$, Algorithm 1 can find an $\bar{x}$ that satisfies $\hat{g}_j(\bar{x}) \le \varepsilon$ after finitely many iterations. Since $\hat{g}_j(\bar{x}) = g_j(\bar{x}) + \delta$, we have $g_j(\bar{x}) \le -\delta/2 < 0$ for all $j = 1, \ldots, m$. Thus, the following lemma is established.

Lemma 7. For a strongly consistent (CP) (i.e., there exists an $x^0 \in T$ such that $g_j(x^0) < 0$ for all $j = 1, \ldots, m$), if $\delta$ is sufficiently small, then Algorithm 1 can find a feasible solution of (CP) after a finite number of iterations. $\square$

In the rest of this section, we show how to find an optimal solution of (CP). Let $\hat{G}(x) = (1, g_1(x), \ldots, g_m(x))^{\mathrm T}$ and $e^i$ be the $i$th unit vector in $\mathbb{R}^{m+1}$. Consider the dual pair of semi-infinite linear programs:

(CSL) maximize $e^{1\mathrm T} y$
subject to $\hat{G}(x)^{\mathrm T} y \le f(x)$ for all $x \in T$,
$y_i \le 0$ for $i = 2, \ldots, m+1$,

and, for all finite subsets $\{x^i \colon i \in \Delta\}$ of $T$,

(CSD) minimize $\sum_{i \in \Delta} f(x^i)\lambda_i$
subject to $\sum_{i \in \Delta} \hat{G}(x^i)\lambda_i + \sum_{i=1}^{m} e^{i+1}\mu_i = e^1$,
$\lambda_i \ge 0$ for all $i \in \Delta$,
$\mu_i \ge 0$ for $i = 1, \ldots, m$.


Lemma 8. If (CP) is strongly consistent, then Algorithm 1 converges for (CSL) and (CSD), and $v(\mathrm{CSL}) = v(\mathrm{CSD})$.

Proof. Find an $x^0$ satisfying $g_i(x^0) < 0$ for $i = 1, \ldots, m$ and start Algorithm 1 with the linear program

(LP(0)) maximize $e^{1\mathrm T} y$
subject to $\hat{G}(x^0)^{\mathrm T} y \le f(x^0)$,
$y_i \le 0$ for $i = 2, \ldots, m+1$.

It is easy to see that (LP(0)) has an optimal solution. Then, Algorithm 1 generates a sequence of linear programs

(LP(k)) maximize $e^{1\mathrm T} y$
subject to $\hat{G}(x^i)^{\mathrm T} y \le f(x^i)$ for $i = 0, 1, \ldots, k$,
$y_i \le 0$ for $i = 2, \ldots, m+1$,

where $x^i \in T$ for $i = 0, 1, \ldots, k$. The dual of (LP(k)) is

(DP(k)) minimize $\sum_{i=0}^{k} f(x^i)\lambda_i$
subject to $\sum_{i=0}^{k} \hat{G}(x^i)\lambda_i + \sum_{i=1}^{m} e^{i+1}\mu_i = e^1$,
$\lambda_i \ge 0$ for $i = 0, 1, \ldots, k$,
$\mu_i \ge 0$ for $i = 1, \ldots, m$.

Let $y^k$ be the optimal solution of (LP(k)) and $(\lambda^k, \mu^k)$ be the optimal solution of (DP(k)) generated by Algorithm 1. We want to show that the sequence $y^k$ is bounded. By the optimality of $y^k$ and the convexity of $f(x)$, we have

$$y_1^k = e^{1\mathrm T} y^k = \sum_{i=0}^{k} f(x^i)\lambda_i^k \le \sum_{i=0}^{k} |f(x^i)|\lambda_i^k \le \max_{x \in T} |f(x)|,$$

and

$$y_1^k = \sum_{i=0}^{k} f(x^i)\lambda_i^k \ge f\Bigl(\sum_{i=0}^{k} \lambda_i^k x^i\Bigr) \ge \min_{x \in T} f(x).$$

Thus, $y_1^k$ is bounded. By the feasibility of $y^k$, we have

$$y_1^k + g_1(x^0) y_2^k + \cdots + g_m(x^0) y_{m+1}^k \le f(x^0).$$

Since $g_i(x^0) < 0$ for all $i = 1, 2, \ldots, m$ and $y_i^k \le 0$ for all $i = 2, \ldots, m+1$, we have, for all $k = 0, 1, \ldots$,

$$y_{i+1}^k \ge (f(x^0) - y_1^k)/g_i(x^0) \quad \text{for } i = 1, \ldots, m.$$

Thus $y^k$ is bounded and Algorithm 1 converges for (CSL) and (CSD). $\square$


Remark 3. If $y^k$ is an optimal solution of (CSL), then $\bar{x}^k = \sum_{i=0}^{k} \lambda_i^k x^i$ is an optimal solution of (CP). If Algorithm 1 converges in the limit, then any cluster point of the sequence $\bar{x}^j = \sum_{i=0}^{j} \lambda_i^j x^i$ is an optimal solution of (CP) (see Dantzig (1963, Chapter 24)).
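As an illustration of Remark 3, here is a toy sketch (made-up data, not from the paper; the polytope $T$ is approximated by a finite grid, so the separation step is only grid-exact): minimize $f(x) = x^2$ subject to $g_1(x) = 0.5 - x \le 0$ on $T = [-1, 1]$, whose optimum is $x^* = 0.5$ with $f^* = 0.25$. The cutting loop solves (LP(k)) for (CSL) with `scipy.optimize.linprog` and recovers $\bar{x}^k$ from the dual multipliers $\lambda^k$ of the cuts.

```python
import numpy as np
from scipy.optimize import linprog

f = lambda x: x * x
g = [lambda x: 0.5 - x]           # g1(x) <= 0, i.e. x >= 0.5
m = len(g)
T = np.linspace(-1.0, 1.0, 201)   # grid standing in for the polytope T

pts = [1.0]                       # x^0 with g1(x^0) < 0 (strong consistency)
for _ in range(300):
    # (LP(k)): maximize y_1  s.t.  Ghat(x^i)^T y <= f(x^i),  y_i <= 0, i >= 2.
    A = np.array([[1.0] + [gj(x) for gj in g] for x in pts])
    res = linprog([-1.0] + [0.0] * m, A_ub=A,
                  b_ub=np.array([f(x) for x in pts]),
                  bounds=[(None, None)] + [(None, 0.0)] * m, method='highs')
    y = res.x
    # Separation: most violated x in the grid for Ghat(x)^T y <= f(x).
    viol = y[0] + sum(y[j + 1] * g[j](T) for j in range(m)) - f(T)
    if viol.max() <= 1e-6:
        break
    pts.append(float(T[viol.argmax()]))

# Remark 3: recover a primal solution from the dual multipliers of (LP(k)).
lam = -res.ineqlin.marginals      # optimal lambda^k of (DP(k)), lam >= 0
x_bar = lam @ np.array(pts)       # approximately the optimal x* = 0.5
```

The recovered value $-$`res.fun` approximates $v(\mathrm{CSL}) = v(\mathrm{CP}) = 0.25$, and $\bar{x}$ lands at $x^* = 0.5$ because the active cuts cluster symmetrically around it.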

Remark 4. Suppose that (CP) is not strongly consistent and that one can find $x^i \in T$, $i = 1, \ldots, m$, satisfying $g_j(x^i) \le 0$ for all $i = 1, \ldots, m$ and $j = 1, \ldots, m$, and $g_i(x^i) < 0$ for all $i = 1, \ldots, m$. Then, Algorithm 1 converges for (CSL) and (CSD) (see Lemma 2).

Acknowledgements

I wish to thank my dissertation advisor, Professor G.B. Dantzig, for his encouragement, guidance, and support; Professor A.J. Hoffman for his helpful suggestions; and the referees for their valuable comments.

References

E.J. Anderson, "A new primal algorithm for semi-infinite linear programming," in: E.J. Anderson and A.B. Philpott, eds., Infinite Programming (Springer-Verlag, Berlin, 1985) pp. 108-122.

A. Charnes, W.W. Cooper and K. Kortanek, "On representations of semi-infinite programs which have no duality gaps," Management Science 12 (1965) 113-121.

A.R. Conn and N.I.M. Gould, "An exact penalty function for semi-infinite programming," Mathematical Programming 37 (1987) 19-40.

I.D. Coope and G.A. Watson, "A projected Lagrangian algorithm for semi-infinite programming," Mathematical Programming 32 (1985) 337-356.

G.B. Dantzig, Linear Programming and Extensions (Princeton University Press, Princeton, New Jersey, 1963).

R.J. Duffin and L.A. Karlovitz, "An infinite linear program with a duality gap," Management Science 12 (1965) 122-134.

R. Fletcher, "A nonlinear programming problem in statistics (educational testing)," SIAM Journal on Scientific and Statistical Computing 2 (1981) 257-267.

S.-A. Gustafson and K.O. Kortanek, "Numerical treatment of a class of semi-infinite programming problems," Naval Research Logistics Quarterly 20 (1973) 477-504.

R. Hettich, "An implementation of a discretization method for semi-infinite programming," Mathematical Programming 34 (1986) 354-361.

R. Hettich, "A comparison of some numerical methods for semi-infinite programming," in: R. Hettich, ed., Semi-Infinite Programming (Springer-Verlag, New York, 1979) pp. 112-125.

A.J. Hoffman, "On approximate solutions of systems of linear inequalities," Journal of Research of the National Bureau of Standards 49 (1952) 263-265.

D.E. Karney, "A duality theorem for semi-infinite convex programs and their finite subprograms," Mathematical Programming 27 (1983) 75-82.

R.T. Rockafellar, Convex Analysis (Princeton University Press, Princeton, New Jersey, 1972).

G.A. Watson, "Lagrangian methods for semi-infinite programming problems," in: E.J. Anderson and A.B. Philpott, eds., Infinite Programming (Springer-Verlag, Berlin, 1985) pp. 90-107.