

    A MCMC/Bernstein Approach to Chance Constrained Programs

    Zinan Zhao and Mrinal Kumar

Abstract: This paper presents an extension of convex Bernstein approximations to non-affine and dependent chance constrained optimization problems. The Bernstein approximation technique transcribes probabilistic constraints into conservative convex deterministic constraints, relying heavily upon the evaluation of exponential moment generating functions. This is a computationally burdensome task for non-affine probabilistic constraints involving dependent random variables. In this paper, the theoretical framework of Bernstein approximations is combined with the practical benefits of Markov chain Monte Carlo (MCMC) integration for its use in a range of high dimensional applications. Numerical results for the combined Bernstein/MCMC approach are compared with scenario approximations.

I. INTRODUCTION AND PROBLEM STATEMENT

    Chance constrained programs arise in numerous appli-

    cations including portfolio optimization (Ref.[1]), water

management (Ref.[2]) and chemical process optimization

    (Refs.[3], [4]). We consider the following optimization prob-

    lem involving a deterministic cost function and a mixture of

    probabilistic and deterministic constraints:

min_{x∈X} J(x)

subject to  Prob{F(x,ξ) ≥ 0} ≤ ε,   G(x) ≤ 0,   H(x) = 0,   (1)

where x is the decision variable in a nonempty set X ⊆ R^n, J: R^n → R is a real-valued cost function, and ξ is a random variable with probability measure Z(ξ) supported on a set Ξ ⊆ R^d. Also, ε is the prescribed risk parameter, which should be no less than the violation rate of the joint probabilistic constraints represented by the vector function F = (f_1, ..., f_m): R^n × Ξ → R^m. The vector functions G = (g_1, ..., g_p): R^n → R^p and H = (h_1, ..., h_q): R^n → R^q represent deterministic constraints. Finally, Prob{·} denotes probability and ≤/≥ denote vector inequalities.

    Chance constrained programs were first introduced by

    Charnes et al. (Ref.[5]) and have been studied extensively in

the stochastic programming literature (Ref.[6]). Except for some special cases, e.g., the linear chance constraint Prob{Ax − ξ ≥ 0} ≥ 1 − ε, where A is a deterministic matrix and ξ is logarithmically concave distributed, they remained largely

    intractable and little progress had been made until recently.

This work was not supported by any organization.

Z. Zhao is a graduate student in the Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32608, [email protected]

M. Kumar is with the Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32608, [email protected]

    Recent developments can be grouped into two distinct cate-

    gories. The first category includes sampling-based methods

    that use Monte Carlo samples to approximate the probabilis-

    tic constraints and the final approximation is stochastic (i.e.

    a random variable). Essentially, the probabilistic constraints

    are enforced in a deterministic manner at a set of (stochastic)

    collocation points, which in turn are sampled from the

underlying random variable (ξ in Eq.1). Notable methods

    include sample average approximation (Ref.[7], [8]) and

    scenario approximation (Ref.[9], [10]). Ref.[10] shows that

    the sample size for scenario approximation should be at least

inversely proportional to the risk parameter ε.

In the second category, at least part of the solution is developed analytically. The probabilistic constraints are analytically transcribed into equivalent deterministic constraints,

    so that the resulting problem can be numerically solved

    using existing nonlinear programming (NLP) solvers. One

suite of methods, developed in the field of reliability-based design optimization, includes the first-order and second-order

    reliability methods (Ref.[11], [12]). These are based on the

    first and second order approximations of the constraining

    function F and can be inaccurate in highly nonlinear situ-

    ations. Another approach, called the Bernstein approxima-

    tion was proposed by Nemirovski et al. (Ref.[13]). Here,

    the exponential moment generating function is utilized for

transcription of the probabilistic constraints into conservative convex deterministic constraints. Care must be taken to avoid

    over-conservatism during the transcription or else severely

    suboptimal solutions may result.

    In order to ensure solvability of the transcribed opti-

    mization problem in the traditional Bernstein approximation

approach, it is assumed that the cost function J(·) and the search space X are both convex. In addition, there are

    three key assumptions about the nature of the probabilistic

constraints: (1) the function F(x,ξ) (assumed scalar here; the extension to vector functions is similar) is convex in x; (2) the function F(·,·) is affine in x and ξ; and (3) the random variable ξ (if multivariate) has independent components. Under

these assumptions, the approach was applied successfully to chance constraints of the form:

F(x,ξ) = f_0(x) + Σ_{j=1}^d f_j(x) ξ_j.   (2)
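As a simple illustration of this affine structure (a hypothetical instance, not one of the examples in Ref.[13] or this paper), a constraint of the form F(x,ξ) = (c + ξ)ᵀx − b with deterministic c and b fits Eq.2 directly:

```latex
% Hypothetical instance of the affine form of Eq.2:
%   F(x,\xi) = (c+\xi)^{\mathsf T} x - b,
% which matches Eq.2 with
\[
  f_0(x) = c^{\mathsf T} x - b, \qquad f_j(x) = x_j, \quad j = 1,\dots,d,
\]
% so that F is affine in both x and \xi, and each f_j is convex in x.
```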

    If the affinity and/or independence assumptions stated above

    are violated, the approach requires the evaluation of multi-

    dimensional integrals, which is computationally burdensome.

    Unfortunately, several real-life optimization problems do not

    satisfy the three assumptions on the chance constraints. To

the best of the authors' knowledge, the performance of


    Bernstein approximations in these problems has not been

    investigated. The current paper presents a numerical adap-

    tation of the Bernstein approximation approach for opti-

    mization problems with general probabilistic constraints not

    satisfying the above listed assumptions. Markov chain Monte

    Carlo (MCMC) numerical integration is utilized to solve

    high dimensional problems lacking affinity and convexity

of F(·,·) as well as independence of ξ. The transcribed nonconvex optimization problem is solved using off-the-shelf

    NLP solvers.

    The rest of the paper is organized as follows: In Section II,

    the scenario approach is reviewed and the proposed MCMC

    variant of the Bernstein approximation is developed. Section

    III discusses the need for multi-dimensional integration in the

Bernstein approximation. In Section IV, the MCMC-Bernstein

    approach is applied to several optimization problems with

    non-affine constraints involving dependent random variables,

    and the results are compared with the scenario approximation

technique. Section V discusses the relative merits of the two approaches, and concluding remarks are provided in Section VI

    along with directions for future research.

II. SCENARIO AND BERNSTEIN APPROXIMATION METHODS

    In this section, a brief review of the scenario and Bern-

    stein approximation techniques is provided. Under the three

    assumptions on probabilistic constraints discussed in Section

    I, Bernstein approximation is able to circumvent the need

    for multi-dimensional integration, which can be a significant

    roadblock in high dimensional applications.

    A. Scenario Approximation

    Consider the scalar case for the constraining function

F, with m = 1. Assuming that N samples are available for the random variable, i.e., {ξ^i}, i = 1, ..., N, drawn from Z(ξ), the scenario approximation enforces the probabilistic constraint of Eq.1

as a set of N deterministic constraints

F(x,ξ^i) ≤ 0,   i = 1, ..., N.   (3)

    Although Eq.3 is a set of deterministic constraints, the

    scenario approximation method itself is non-deterministic.

    This approach is over-conservative in the sense that each

    constraint listed in Eq.3 must be satisfied as a hard constraint.

    In other words, there is zero tolerance (i.e. zero violation)

    for the samples considered. At the same time, there is no

guarantee for any other realization of the random variable ξ. A question often faced in the scenario approach is determining the size of the sample (N). It was shown in Ref.[10] that

    N should be at least inversely proportional to the acceptable

risk parameter ε, so the required sample size can be extremely large. While this is unattractive, rapid advancements in our computational

    capabilities have rendered it less of a concern. The fact

    however remains that the scenario approximation is still

    rigorously valid only for the particular samples drawn and

    gross statistical estimates about the approximation cannot be

    inferred with ease.
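As a minimal illustration of Eq.3 (a toy problem with hypothetical data, solved with SciPy's SLSQP in place of a dedicated NLP solver), the scenario approximation simply enforces the constraint at every sampled scenario:

```python
import numpy as np
from scipy.optimize import minimize

# Toy chance constrained program (hypothetical): maximize x subject to
# Prob{ x*xi - 1 >= 0 } <= eps, with xi ~ N(1, 0.2^2).
rng = np.random.default_rng(0)
N = 200                                  # number of scenarios (Ref.[10]: N grows like 1/eps)
xi = rng.normal(1.0, 0.2, size=N)        # sampled scenarios {xi^i}

def J(x):
    return -x[0]                         # maximize x  <=>  minimize -x

# Scenario approximation (Eq.3): F(x, xi^i) = x*xi^i - 1 <= 0 enforced as
# N hard deterministic constraints (SciPy's "ineq" means fun(x) >= 0).
constraints = [{"type": "ineq", "fun": (lambda x, s=s: 1.0 - x[0] * s)} for s in xi]

res = minimize(J, x0=[0.5], constraints=constraints, method="SLSQP")
print("scenario-approximate optimum:", res.x[0])   # approximately 1/max(xi)
```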

    B. Bernstein Approximation

    In this semi-analytical approach, the idea is to determine a

    deterministic upper bound for the probabilistic constraint in

    Eq.1. Nemirovski (Ref.[13]) achieved this via the exponen-

    tial moment generating function, to construct the so-called

    Bernstein approximation of the probabilistic constraint

inf_{λ>0} { log{E[exp(λ^{-1}F(x,ξ))]} − log(α) } ≤ 0.   (4)

Note that the parameter α appearing in Eq.4 is different from the original risk parameter ε, and in general α > ε. This is on account of the fact that the approximation of Eq.4 is more conservative than the original constraint of Eq.1. If α = ε is used, suboptimal results will be obtained. However, there

    are no guidelines for the actual margin in this relaxation and

a numerical study is presented in Sec.IV. For λ > 0, the expectation term in Eq.4 represents the exponential moment generating function of F. Note that Prob{F(x,ξ) ≥ 0} = E[1_[0,+∞)(λ^{-1}F(x,ξ))] ≤ E[exp(λ^{-1}F(x,ξ))] for λ > 0.

    Thereby, Eq.4 is a conservative approximation of the original

probabilistic constraint in Eq.1. The benefit of using λ, therefore, is that it serves as an auxiliary variable minimizing

    the moment generating function and hence achieving the

    tightest upper bound. This minimization process can also

    be jointly carried out in the optimization routine by treating

[x,λ] as an augmented decision variable; hence the current paper uses a variant of Eq.4, given below:

B(x,λ) ≤ 0,   (5)

where B(x,λ) = log{E[exp(λ^{-1}F(x,ξ))]} − log(α).

    Note that in implementation, the convergence of the moment

    generating function is a concern, and depends on the nature

of the function F(·,·).
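For reference, the chain of inequalities behind Eqs.4 and 5 can be written out compactly (a restatement of the steps above):

```latex
% For any \lambda > 0 (Markov/Chernoff-type bound):
\[
  \mathrm{Prob}\{F(x,\xi) \ge 0\}
  \;=\; \mathbb{E}\!\left[\mathbf{1}_{[0,+\infty)}\!\big(\lambda^{-1}F(x,\xi)\big)\right]
  \;\le\; \mathbb{E}\!\left[\exp\!\big(\lambda^{-1}F(x,\xi)\big)\right].
\]
% Hence requiring the right-hand side to be at most \alpha, i.e.
\[
  B(x,\lambda) \;=\; \log \mathbb{E}\!\left[\exp\!\big(\lambda^{-1}F(x,\xi)\big)\right]
  - \log(\alpha) \;\le\; 0,
\]
% guarantees \mathrm{Prob}\{F(x,\xi)\ge 0\} \le \alpha; minimizing over \lambda > 0
% yields the tightest such bound, which is exactly Eq.4.
```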

    C. Numerical Integration for Moment Generating Function

    The exponential moment generating function appearing in

    Eq.5 involves a d-dimensional integral whose evaluation can

    be greatly simplified if the function F is assumed to be

affine (Eq.2) in ξ, which in turn is assumed to be a vector of independent random variables, i.e., Z(ξ) = ∏_{j=1}^d Z_j(ξ_j), where Z_j(ξ_j) is the marginal pdf of ξ_j with support Ξ_j. Then, the left hand side of Eq.5 becomes (Ref.[13])

λ^{-1}f_0(x) + log[∫_{Ξ_1} exp(λ^{-1}f_1(x)ξ_1) Z_1(ξ_1) dξ_1] + ... + log[∫_{Ξ_d} exp(λ^{-1}f_d(x)ξ_d) Z_d(ξ_d) dξ_d] − log(α),   (6)

whose evaluation involves only a set of d one-dimensional integrals. In many applications, the above assumptions are not

applicable, causing the Bernstein approximation approach

    to become computationally tedious. In the next section, we

    utilize Markov chain Monte Carlo integration to alleviate this

    problem.
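To make the independent/affine case concrete, the d one-dimensional integrals of Eq.6 can be evaluated by standard quadrature. The sketch below is a hypothetical instance (standard-normal marginals and arbitrary coefficients; none of these numbers come from the paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def bernstein_lhs_independent(f0, f, lam, alpha, marginals, supports):
    """Eq.6: lam^-1*f0 + sum_j log( int exp(lam^-1*f_j*xi_j) Z_j(xi_j) dxi_j ) - log(alpha),
    valid when F is affine in xi (Eq.2) and the components of xi are independent."""
    total = f0 / lam - np.log(alpha)
    for f_j, pdf_j, (lo, hi) in zip(f, marginals, supports):
        integral, _ = quad(lambda s, c=f_j: np.exp(c * s / lam) * pdf_j(s), lo, hi)
        total += np.log(integral)
    return total

# Hypothetical data: d = 3 independent standard-normal components.
f0, f, lam, alpha = -1.0, [0.2, 0.3, 0.1], 0.5, 0.05
marginals = [norm.pdf] * 3
supports = [(-8.0, 8.0)] * 3             # effectively the whole real line
print(bernstein_lhs_independent(f0, f, lam, alpha, marginals, supports))
```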

III. MARKOV CHAIN MONTE CARLO INTEGRATION

    Markov chain Monte Carlo (MCMC) is a suite of ran-

    domized algorithms used to generate discrete samples from

    nonstandard probability density functions. MCMC samples


    are equivalent in measure to the target pdf, partly due to

    which they have been highly successful in tackling several

    high dimensional numerical problems in engineering, includ-

ing numerical multiple-integration. Let {ξ^i}, i = 1, ..., N, drawn from Z(ξ), be an MCMC sample. Then the moment generating function

    appearing in Eq.5 can be approximated as:

E[exp(λ^{-1}F(x,ξ))] ≈ (1/N) Σ_{i=1}^N exp(λ^{-1}F(x,ξ^i)).   (7)

Substituting Eq.7 into Eq.5 gives a numerical evaluation of the Bernstein approximation B(x,λ). In addition, it can be seen that the gradients ∇_x B(x,λ) and ∇_λ B(x,λ) are also readily available. With these quantities, we can now give the algorithm for the

    MCMC/Bernstein approach.

    Algorithm 1 MCMC/Bernstein approach

1) Generate N MCMC samples {ξ^i}, i = 1, ..., N, following the distribution Z(ξ).

2) Initialize the decision variable x and the auxiliary variable λ. Let [x,λ] be an augmented decision vector.

3) DO gradient-based optimization (e.g., using SNOPT) until the stopping condition is reached:

a) Cost and constraints: evaluate B(x,λ) via integration using the MCMC samples {ξ^i}. Also compute J(x), G(x) and H(x);

b) Gradients: evaluate ∇_λ B(x,λ) and ∇_x B(x,λ) using MCMC integration. Also compute ∇_x J(x), ∇_x G(x) and ∇_x H(x);

c) Update the augmented decision vector [x,λ] per SNOPT rules.

4) END DO

5) The optimal augmented decision vector is [x*,λ*].
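As a concrete (and deliberately simplified) illustration of Algorithm 1, the sketch below uses a hypothetical non-affine constraint and a correlated Gaussian ξ, SciPy's SLSQP solver in place of SNOPT, and finite-difference gradients instead of the MCMC-integrated gradients of step 3(b); the correlated samples are also drawn directly rather than by MCMC, purely to keep the example short:

```python
import numpy as np
from scipy.optimize import minimize

def make_bernstein_constraint(F, xi_samples, alpha):
    """Eqs.5 and 7: B(x,lam) = log( (1/N) sum_i exp(F(x,xi^i)/lam) ) - log(alpha),
    returned in SciPy's 'fun(z) >= 0' convention, i.e. -B(x,lam) >= 0."""
    def neg_B(z):
        x, lam = z[:-1], z[-1]
        return -(np.log(np.mean(np.exp(F(x, xi_samples) / lam))) - np.log(alpha))
    return neg_B

# Hypothetical problem data (illustration only, not from the paper).
rng = np.random.default_rng(1)
xi = rng.multivariate_normal(np.zeros(2), [[1.0, 0.3], [0.3, 1.0]], size=20_000)

def F(x, xi):                       # non-affine in x and xi, with dependent xi
    return (x[0] + xi[:, 0])**2 + x[1] * xi[:, 1] - 4.0

def J(z):                           # deterministic cost in the decision variable x
    x = z[:-1]
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

alpha = 0.05
z0 = np.array([0.0, 0.0, 1.0])      # augmented decision vector [x, lam]
cons = [{"type": "ineq", "fun": make_bernstein_constraint(F, xi, alpha)},
        {"type": "ineq", "fun": lambda z: z[-1] - 1e-3}]   # keep lam > 0 (MGF divergence risk)
res = minimize(J, z0, constraints=cons, method="SLSQP")
print("x* =", res.x[:-1], " lambda* =", res.x[-1])
```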

    The accuracy of MCMC integration (Eq.7) depends on

    the quality of the samples, which can in turn impact the

    optimization results. The current paper uses a variant of the

    Metropolis-Hastings (MH) algorithm (Refs. [14], [15], [16])

    to generate samples. Adaptive-proposal tuning based on the

    history of acceptance probability is used to encourage better

    mixing in the chain (Refs.[17], [18]), along with particle

    thinning to reduce correlation among samples.

    Here, we study the quality of MH-sampling with both

    adaptive proposal tuning and thinning for a high-dimensional

pdf used in the numerical examples in Sec.IV. Consider a twenty-dimensional zero-mean Gaussian distribution with a covariance matrix containing ones along the diagonal and 0.15 in each off-diagonal location. In adaptive tuning, if more than 70% of the most recent Q samples generated were accepted, the covariance matrix of the proposal density is scaled up by a multiplicative factor; if less than 30% are accepted, it is scaled down by another factor. The two factors are listed as the up/down pair in Table I. The test is repeated one hundred times and in each run, one million samples are generated after burn-in. The results for Q = 10 are shown in Table I.

Scaling factors (up/down)   no adaptation   1.1/0.9   1.2/0.8   1.2/0.7
e_μ                         0.0701          0.0577    0.0611    0.0615
e_P (%)                     3.83            3.16      3.36      3.29

TABLE I
MH-SAMPLING OF A 20-D CORRELATED GAUSSIAN WITH ADAPTIVE SCALING

Note that Table I uses two quality metrics:

e_μ = ||μ̂ − μ||,   (8a)

e_P = (||P̂ − P|| / ||P||) × 100,   (8b)

where ||·|| is the 2-norm. The metric e_μ represents the norm of the error in the mean (μ̂ is the sample mean and μ the true, zero mean), and e_P is the

    relative error in the covariance matrix. Table I suggests that

up/down factors of 1.1/0.9 (i.e., 10% scaling) provide the greatest improvement over fixed-proposal sampling, and this setting is adopted

    in the current paper. Combined with a thinning strategy in

    which the particles are reduced in number by half, Fig.1

shows the convergence of the quality metrics e_μ and e_P. Setting a threshold of 2% error, convergence is seen for N ≥ 8 × 10^4 samples.
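A minimal sketch of the sampler described above follows. The 70%/30% acceptance thresholds, the up/down scaling, the thin-by-half strategy, and the 20-D correlated Gaussian target are taken from the text; the random-walk Gaussian proposal and the omission of burn-in are simplifying assumptions of this sketch:

```python
import numpy as np

def adaptive_mh(logpdf, x0, n_keep, q=10, up=1.1, down=0.9, thin=2, seed=0):
    """Random-walk Metropolis-Hastings with acceptance-based proposal scaling
    (scale up if >70% of the last q proposals were accepted, down if <30%)
    and thinning by a factor `thin` to reduce correlation among kept samples."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d, lp = len(x), logpdf(np.asarray(x0, dtype=float))
    scale, accepts, kept = 1.0, [], []
    for k in range(n_keep * thin):
        prop = x + scale * rng.standard_normal(d)        # random-walk proposal
        lp_prop = logpdf(prop)
        accepted = np.log(rng.uniform()) < lp_prop - lp
        if accepted:
            x, lp = prop, lp_prop
        accepts.append(accepted)
        if len(accepts) >= q:                            # adaptive proposal tuning
            rate = np.mean(accepts[-q:])
            if rate > 0.70:
                scale *= up
            elif rate < 0.30:
                scale *= down
        if k % thin == 0:                                # particle thinning (keep every 2nd)
            kept.append(x.copy())
    return np.array(kept)

# Target: 20-D zero-mean Gaussian, unit variances, 0.15 off-diagonal covariance.
d = 20
P = np.full((d, d), 0.15) + 0.85 * np.eye(d)
P_inv = np.linalg.inv(P)
logpdf = lambda x: -0.5 * x @ P_inv @ x                  # log-density up to a constant

samples = adaptive_mh(logpdf, np.zeros(d), n_keep=20_000)
e_mu = np.linalg.norm(samples.mean(axis=0))                                  # Eq.8a (mu = 0)
e_P = np.linalg.norm(np.cov(samples.T) - P, 2) / np.linalg.norm(P, 2) * 100  # Eq.8b
print(f"e_mu = {e_mu:.4f},  e_P = {e_P:.2f}%")
```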

Fig. 1. Error plots of simulated Gaussian samples: (a) norm of the error of the mean; (b) norm of the relative error of the covariance.

    IV. RESULTS

    In this section, the MCMC/Bernstein approach is ap-

    plied to chance constrained programs that require numerical


    evaluation of multi-dimensional integrals due to the non-

    affine nature of the constraints and/or dependence among

    random parameters contained therein, as is true for most

    real-life problems. The transcribed deterministic nonlinear

    optimization problems are solved using the NLP package

SNOPT. For the computed optimal decision vector, Monte

    Carlo tests are performed using 1 million realizations of the

random variable ξ to compute the constraint violation rate.

    A. Chance Program 1: Portfolio Optimization

    Consider a static-model portfolio optimization problem in

    which capital is allocated among d assets. Let xj be the

fraction invested in asset j, j = 1, ..., d, such that Σ_{j=1}^d x_j = 1. Excluding short sales, investments in all assets are non-negative, i.e., x_j ≥ 0 for all j. The objective is to maximize the ε-th quantile of the total return, denoted by t. The decision vector is x = {x_1, x_2, ..., x_d, t}. Letting r_j be the return on investment for asset j, the optimization problem can be

    formulated as (Ref.[13])

max J(x) = t, such that

Σ_{j=1}^d x_j = 1,   x_j ≥ 0, j = 1, ..., d,   and

Prob{ Σ_{j=1}^d r_j x_j < t } ≤ ε.   (9)

Let r_j = (1 + 0.06j) + (0.04j)ξ_j, j = 1, ..., d. Here, ξ = {ξ_1, ..., ξ_d} is a correlated Gaussian random variable with the distribution specified in Sec.III, as a result of which numerical integration is required for transcription. We use d = 20 and set the allowable risk parameter ε = 0.01 (i.e., a threshold of 1% violation rate). The Bernstein approximation

    of the probabilistic constraint using MCMC integration leads

    to the following inequality:

log{ (1/N) Σ_{i=1}^N exp(λ^{-1}(t − Σ_{j=1}^d r_j^i x_j)) } − log(α) ≤ 0,   (10)

    whererij= (1 +0.06j)+(0.04j)i

    j,i = 1, ,N, j= 1, ,d.As mentioned in Sec.II-B, the risk factor used in the

    Bernstein approximation can be allowed to be greater than

the originally specified risk, i.e., α ≥ ε, and we study five cases: α_1 = 0.01, α_2 = 0.02, ..., α_5 = 0.05. The simulation

    results are presented in Figure 2 as a function of the number

    of MCMC samples used for numerically evaluating the

    Bernstein approximation (x-axis). Fig.2(a) shows the optimal

total return (t) obtained for each case and Fig.2(b) the violation rate at the obtained optimal solution (t), computed using 10^6 MC realizations of ξ.
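For concreteness, the sketch below assembles Eqs.9 and 10 for this problem and solves the transcribed program with SciPy's SLSQP in place of SNOPT. It is a simplified illustration: the correlated Gaussian ξ is sampled directly rather than by the adaptive MH sampler of Sec.III, the Bernstein risk is fixed at one of the studied values (α = 0.02), and the final Monte Carlo check uses fewer realizations than the 10^6 used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Returns model of Sec.IV-A: r_j = (1 + 0.06 j) + (0.04 j) xi_j, j = 1..d,
# with xi a 20-D correlated Gaussian (unit variances, 0.15 off-diagonal).
d, alpha, N = 20, 0.02, 80_000
jj = np.arange(1, d + 1)
P = np.full((d, d), 0.15) + 0.85 * np.eye(d)
rng = np.random.default_rng(0)
xi = rng.multivariate_normal(np.zeros(d), P, size=N)     # stand-in for MCMC samples
r = (1 + 0.06 * jj) + (0.04 * jj) * xi                   # N x d matrix of sampled returns

def cost(z):                       # z = [x_1..x_d, t, lam]; maximize t
    return -z[d]

def bernstein(z):                  # Eq.10 in SciPy's 'fun >= 0' form: -B >= 0
    x, t, lam = z[:d], z[d], z[d + 1]
    m = np.mean(np.exp((t - r @ x) / lam))
    return -(np.log(m) - np.log(alpha))

cons = [{"type": "eq",   "fun": lambda z: np.sum(z[:d]) - 1.0},    # sum_j x_j = 1
        {"type": "ineq", "fun": bernstein},
        {"type": "ineq", "fun": lambda z: z[d + 1] - 1e-4}]        # lam > 0
bounds = [(0, 1)] * d + [(None, None), (1e-4, None)]               # x_j >= 0 (no short sales)
z0 = np.concatenate([np.full(d, 1.0 / d), [1.0, 0.1]])
res = minimize(cost, z0, bounds=bounds, constraints=cons, method="SLSQP")
x_opt, t_opt = res.x[:d], res.x[d]
print("optimal t =", t_opt)

# Monte Carlo check of the violation rate at the optimum (the paper uses 10^6 realizations).
xi_test = rng.multivariate_normal(np.zeros(d), P, size=200_000)
r_test = (1 + 0.06 * jj) + (0.04 * jj) * xi_test
print("violation rate:", np.mean(r_test @ x_opt < t_opt))
```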

    The following observations can be made: It was seen in

    Sec.III that the MCMC samples for the twenty dimensional

correlated Gaussian random variable converge for N > 8 × 10^4. In accordance with this result, the optimization results (the optimal value t and the corresponding violation rate at t) also show convergence for N > 8 × 10^4 (Fig.2). As the Bernstein risk parameter α is allowed to increase, the optimal return t improves, albeit with a greater associated

Fig. 2. Simulation results for portfolio optimization using Bernstein approximation: (a) optimal value (t); (b) violation rate at t.

violation rate. However, for N > 8 × 10^4, the computed violation rate is under the specified allowable risk (ε = 0.01) for all five cases. For α as high as 0.05, the computed violation rate is still under 1%, emphasizing the conservatism

    of the Bernstein approximation. It is therefore possible to

improve the quality of optimization by tuning α.

    In Fig.3, comparative results for the portfolio optimization

problem are shown using the scenario approach. Figs.3(a) and

    3(b) illustrate the maximum variations above and below the

mean observed in the optimal return (t) and violation rate, respectively, over ten runs of the algorithm. It is clear that

    there is significant variation even when a large number of

    constraining scenarios are included. Moreover, while one

    may talk about the mean optimal value (solid line in

Fig.3(a)), it does not make any sense to talk about the mean decision vector. Finally, as mentioned in Sec.II-A, it is

    difficult in the scenario approach to ascertain the level of

    risk associated with the obtained approximation, until an MC

    simulation is performed to numerically compute the violation

rate (at t), shown in Fig.3(b). It is clear that as the number of

    scenarios increases, the approximation becomes increasingly

    conservative (actual violation rate well below prescribed risk

    of 1%), at the cost of suboptimal results (note the decreasing

    total return). It is evident that the MCMC/Bernstein approach

    is better for this problem.


Fig. 3. Simulation results for portfolio optimization using scenario approximation: (a) optimal value (t); (b) violation rate at t.

    B. Chance Program 2: Flood Control

Described in Ref.[19], this problem involves the minimization of the construction cost of a five-reservoir flood control

    system designed for retaining flood water. The capacities

of individual reservoirs are denoted as x_i, i = 1, ..., 5 and the volume of incoming flood water as ξ_i, i = 1, ..., 5 (five inlets). The variable ξ = {ξ_1, ..., ξ_5} follows a correlated distribution. The optimization problem can be formulated as

    follows

min J(x) = 0.4x_1 + 0.5x_2 + 0.6x_3 + 1.2x_4 + 1.8x_5,   (11a)

s.t. Prob{ ξ_5 − x_5 > 0,
           ξ_4 + ξ_5 − x_4 − x_5 > 0,
           ξ_1 + ξ_4 + ξ_5 − x_1 − x_4 − x_5 > 0,
           ξ_2 + ξ_4 + ξ_5 − x_2 − x_4 − x_5 > 0,
           ξ_3 + ξ_4 + ξ_5 − x_3 − x_4 − x_5 > 0,
           ξ_1 + ξ_2 + ξ_4 + ξ_5 − x_1 − x_2 − x_4 − x_5 > 0,
           ξ_1 + ξ_3 + ξ_4 + ξ_5 − x_1 − x_3 − x_4 − x_5 > 0,
           ξ_2 + ξ_3 + ξ_4 + ξ_5 − x_2 − x_3 − x_4 − x_5 > 0,
           ξ_1 + ξ_2 + ξ_3 + ξ_4 + ξ_5 − x_1 − x_2 − x_3 − x_4 − x_5 > 0 } ≤ ε,   (11b)

0 ≤ x_i ≤ 1, i = 1, 2, 3,
0 ≤ x_4 ≤ 2,   0 ≤ x_5 ≤ 3.   (11c)

The function J(x) represents the cost of construction of the system. Eq.11b is a cascading set of probabilistic constraints

    capturing various water overflow situations. Note that this

    joint constraint begins by looking at the overflow probability

at a single reservoir (#5) and a single inlet (ξ_5), all the way down to the combined capacity of all reservoirs filling up

    through all inlets. Nemirovski et al. provided a few strategies

    in Ref.[13] to separate out joint chance constraints. Cur-

    rently, there exist no methods for dealing with the full joint

    constraint and no established guidelines for separating them.

Motivated by the finite subadditivity of the probability measure, P(∪_{i=1}^M A_i) ≤ Σ_{i=1}^M P(A_i), Ref.[13] suggests the assignment of an individual risk α_i to each constraint such that Σ_{i=1}^M α_i ≤ ε, assuming there are M joint constraints. The actual assignment

    of numerical values to the individual risks is an open problem

    (Ref.[13]). In Ref.[19], the supporting hyperplane method

was employed for two cases, ε = 0.1 and ε = 0.2, i.e., a risk of 10% and 20% respectively, using correlated Gaussian and

    Gamma distributions. The Bernstein approach of Ref.[13]

    cannot directly be applied here because of the correlated

nature of the random variable ξ. Using the MCMC-based Bernstein approximation, opti-

    mization results are shown in Fig.4 as a function of the

    number of MCMC samples. Four cases of risk assignment

    are tested, listed in Table II. In case 1, each constraint

is assigned a Bernstein risk of α_i = 0.2. Compounded by the fact that the Bernstein approximation is conservative to begin with, this simple risk assignment strategy results in excessively conservative estimates (note the computed

    joint violation rate in Table II). Motivated by the previously

    demonstrated fact that assignment of a high Bernstein risk

(α) does not violate the originally prescribed risk (ε), case 2 onwards ramps up the individual risk of each constraint as shown in Table II. Increasing the Bernstein risks (α_i) uniformly from 20% per constraint to 50% per constraint improves the

    optimality of the solution while maintaining the overall joint

    violation rate under 15% (see Fig.4).

Case #   α_i, i = 1, ..., 9   Violation Rate (%)   Optimal Value
1        0.2                  4.1                  7.8
2        0.3                  6.7                  7.3
3        0.4                  9.7                  6.8
4        0.5                  13.2                 6.4

TABLE II
RISK ASSIGNMENT (IN %) TO INDIVIDUAL CHANCE CONSTRAINTS: FLOOD CONTROL
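The per-constraint transcription with the risk assignments of Table II can be sketched as follows. This is an illustrative outline only: the inflow distribution, the direct sampling (standing in for the MCMC draws), and the use of one auxiliary λ per constraint with SciPy's SLSQP in place of SNOPT are assumptions of the sketch rather than the exact setup of the paper:

```python
import numpy as np
from scipy.optimize import minimize

# The nine overflow functions of Eq.11b: (sum of inflows) - (sum of capacities).
GROUPS = [[5], [4, 5], [1, 4, 5], [2, 4, 5], [3, 4, 5],
          [1, 2, 4, 5], [1, 3, 4, 5], [2, 3, 4, 5], [1, 2, 3, 4, 5]]

def overflow(group, x, xi):                   # xi: (N, 5) array of sampled inflows
    idx = [g - 1 for g in group]
    return xi[:, idx].sum(axis=1) - x[idx].sum()

def bernstein_con(k, xi, alpha_i):
    """Bernstein transcription (Eq.5) of the k-th constraint with individual risk
    alpha_i (Table II), in SciPy's 'fun >= 0' convention. z = [x_1..x_5, lam_1..lam_9]."""
    def g(z):
        x, lam = z[:5], z[5 + k]
        B = np.log(np.mean(np.exp(overflow(GROUPS[k], x, xi) / lam))) - np.log(alpha_i)
        return -B
    return g

# Placeholder inflow model: correlated Gaussian samples standing in for the MCMC draws.
rng = np.random.default_rng(0)
cov = np.full((5, 5), 0.15) + 0.85 * np.eye(5)
xi = 1.0 + rng.multivariate_normal(np.zeros(5), cov, size=50_000)

alpha_i = 0.3                                                      # case 2 of Table II
cons = [{"type": "ineq", "fun": bernstein_con(k, xi, alpha_i)} for k in range(9)]
cons += [{"type": "ineq", "fun": (lambda z, k=k: z[5 + k] - 1e-3)} for k in range(9)]
cost = lambda z: 0.4*z[0] + 0.5*z[1] + 0.6*z[2] + 1.2*z[3] + 1.8*z[4]      # Eq.11a
bounds = [(0, 1)]*3 + [(0, 2), (0, 3)] + [(1e-3, None)]*9                  # Eq.11c, lam > 0
z0 = np.concatenate([[0.5, 0.5, 0.5, 1.0, 1.5], np.full(9, 0.5)])
res = minimize(cost, z0, bounds=bounds, constraints=cons, method="SLSQP")
print("capacities:", np.round(res.x[:5], 3), " cost:", round(cost(res.x), 3))
```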

    In comparison, the scenario approach fails to produce

    a result for this optimization problem. Recall that in this

    method, probabilistic constraints are imposed as a set of hard

    deterministic constraints for a sample of scenarios generated

from the random variable (ξ in the above problem). In the

flood control problem, this technique becomes infeasible for the following reason: note that the maximum possible value of the sum Σ_{i=1}^5 x_i is 8 by virtue of the deterministic

    constraints on the individual reservoir capacities (Eq.11c).

However, upon sampling the random variable ξ, the value


Fig. 4. Simulation results for the flood control problem using Bernstein approximation: (a) optimal value; (b) violation rate.

of the sum Σ_{i=1}^5 ξ_i often exceeds 8 (= max Σ_{i=1}^5 x_i). For these scenarios, the ninth probabilistic constraint in Eq.11b becomes impossible to satisfy. Since each constraint is imposed in a deterministic manner, the feasible region for optimization shrinks down to the null set. For example, in 15% of the samples in a small set of 100 realizations of ξ, Σ_{i=1}^5 ξ_i > max Σ_{i=1}^5 x_i, which leads to a null search space.

    Note that as the number of scenarios considered becomes

    larger, the likelihood of creating impossible constraints only

    becomes greater, causing the method to fail.

    Note that in the current problem, it was easy to detect the

    incidence of impossible constraints due to the simple form

    of Eq.11b. In general, this may not be feasible and therefore

    a strategy of simply rejecting the scenarios that lead to

    impossible constraints cannot be employed.

    V. DISCUSSION

    The Bernstein method results in a conservative deter-

    ministic approximation of the probabilistic constraints. The

    current paper employs MCMC integration (a randomized

    technique) to extend the Bernstein approach to the case

    of non-affine probabilistic constraints involving correlated

    random variables. MCMC integration is a well established,

    successful numerical integration technique that generally

    scales well to high dimensional problems. We observed that

    convergence of optimization follows the same trend as the

    convergence of the MCMC integration. On the other hand,

    the scenario approximation is a stochastic technique that

    implements the probabilistic constraints in a deterministic

    manner for a sample of constraining scenarios. There is

    significant variation about the mean optimization results and

    the approximation becomes increasingly conservative as the

    number of constraining scenarios is increased.

    One must note however that in very high dimensional

    applications, MCMC sampling may require significant tun-

    ing. While numerous adaptive MCMC algorithms exist

    (Refs.[20], [21]), care is needed to ensure that enough

    samples are used for convergence of optimization. In con-

    trast, scenario approximation offers speed in implementation.

    While sampling is needed in the scenario approach as well

    (to generate constraining scenarios), the number of samples

    required is significantly less.

    Both the Bernstein and scenario approaches are conserva-

    tive and a tradeoff exists between optimality and violation

    rate. In the Bernstein method, it is straightforward to control

the violation rate by tuning the Bernstein risk α, and generally, α > ε (the actual prescribed risk). However, there are no guidelines for the margin in this relaxation and tuning must be carried out empirically. On the other hand, there

    is little control over the risk associated with the scenario

    method. Given the large variations in the optimal values

    as well as violation rates, one is typically forced to pick

    a relatively suboptimal solution.

Finally, neither the MCMC/Bernstein technique nor the scenario method is applicable to all types of problems. The

    Bernstein approach is based on the evaluation of a moment

    generating function, which may diverge if the function F is

    not integrable, or if during the execution of the algorithm,

the parameter λ becomes too small (Eq.7). It has already

been demonstrated that the scenario approach is not suitable for problems which involve complex probabilistic constraints

    which may lead to impossible deterministic constraints for

certain realizations of the random variable ξ.

    VI. CONCLUSIONS

    In this paper, the Bernstein approximation method was

    combined with MCMC numerical integration to extend it

    to optimization problems containing non-affine chance con-

    straints with correlated random variables. It was shown

    through numerical examples that convergence of the op-

    timization procedure follows the same trend as the con-

    vergence of numerical integration. Generally, the Bernstein

    approach provides greater flexibility and wider applicability

    than the popular scenario method, although at the cost of

    greater computational effort. Through the numerical ex-

    amples considered, it has been shown that it is relatively

    straightforward to control the constraint violation rate by

    tuning the so-called Bernstein risk parameter.

    REFERENCES

[1] P. Krokhmal, J. Palmquist, and S. Uryasev, "Portfolio optimization with conditional value-at-risk objective and constraints," Journal of Risk, vol. 4, pp. 43–68, 2002.


[2] J. Dupacova, A. Gaivoronski, Z. Kos, and T. Szantai, "Stochastic programming in water management: a case study and a comparison of solution techniques," European Journal of Operational Research, vol. 52, pp. 28–44, 1991.

[3] R. Henrion, P. Li, A. Moller, M. C. Steinbach, M. Wendt, and G. Wozny, "Stochastic optimization for operating chemical processes under uncertainty," in Online Optimization of Large Scale Systems, M. Grotschel, S. Krumke, and J. Rambau, Eds. Springer, 2001, pp. 457–478.

[4] R. Henrion and A. Moller, "Optimization of a continuous distillation process under random inflow rate," Computers and Mathematics with Applications, vol. 45, pp. 247–262, 2003.

[5] A. Charnes, W. W. Cooper, and G. H. Symonds, "Cost horizons and certainty equivalents: an approach to stochastic programming of heating oil," Management Science, vol. 4, pp. 235–263, 1958.

[6] A. Prekopa, Stochastic Programming. Dordrecht: Kluwer Academic Publishers, 1995.

[7] J. Luedtke and S. Ahmed, "A sample approximation approach for optimization with probabilistic constraints," SIAM Journal on Optimization, vol. 19, pp. 674–699, 2008.

[8] B. K. Pagnoncelli, S. Ahmed, and A. Shapiro, "Sample average approximation method for chance constrained programming: theory and applications," Journal of Optimization Theory and Applications, vol. 142, pp. 399–416, 2009.

[9] G. C. Calafiore and M. C. Campi, "Uncertain convex programs: randomized solutions and confidence levels," Mathematical Programming, vol. 102, pp. 25–46, 2005.

[10] G. C. Calafiore and M. C. Campi, "The scenario approach to robust control design," IEEE Transactions on Automatic Control, vol. 51, pp. 742–753, 2006.

[11] J. Tu, K. K. Choi, and Y. H. Park, "A new study on reliability-based design optimization," Journal of Mechanical Design, vol. 121, pp. 557–564, 1999.

[12] I. Lee, Y. Noh, and D. Yoo, "A novel second-order reliability method (SORM) using noncentral or generalized chi-squared distributions," Journal of Mechanical Design, vol. 134, 2012.

[13] A. Nemirovski and A. Shapiro, "Convex approximations of chance constrained programs," SIAM Journal on Optimization, vol. 17, pp. 969–996, 2006.

[14] N. Metropolis, A. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of state calculations by fast computing machines," Journal of Chemical Physics, vol. 21, pp. 1087–1092, 1953.

[15] W. K. Hastings, "Monte Carlo sampling methods using Markov chains and their applications," Biometrika, vol. 57, pp. 97–109, 1970.

[16] S. Chib and E. Greenberg, "Understanding the Metropolis-Hastings algorithm," American Statistician, vol. 49, no. 4, pp. 327–335, 1995.

[17] P. Muller, "A generic approach to posterior integration and Gibbs sampling," Purdue University, Tech. Rep., February 1991.

[18] L. Tierney, "Markov chains for exploring posterior distributions," Annals of Statistics, pp. 1701–1762, 1994.

[19] A. Prekopa, "Flood control reservoir system design using stochastic programming," Mathematical Programming Study, vol. 9, pp. 138–151, 1978.

[20] R. M. Neal, "The short-cut Metropolis method," Tech. Rep., 2005.

[21] H. Haario, E. Saksman, and J. Tamminen, "Componentwise adaptation for high dimensional MCMC," Computational Statistics, vol. 20, no. 2, pp. 265–273, 2005.