Computer-Aided Design 46 (2014) 101–109

Contents lists available at ScienceDirect

Computer-Aided Design

journal homepage: www.elsevier.com/locate/cad

Solving multivariate polynomial systems using hyperplane arithmetic and linear programming

Iddo Hanniel
Technion, Israel Institute of Technology, Haifa, Israel

Highlights

• A new scalable algorithm for solving systems of multivariate polynomials.
• The concept of hyperplane arithmetic, which is used in our algorithm.
• A benchmark of example systems that are scalable in the number of variables.
• Implementation and comparison with previous algorithms.

Article info

Keywords: Subdivision solver; Multivariate solver; Geometric constraints

Abstract

Solving polynomial systems of equations is an important problem in many fields such as computer-aided design, manufacturing and robotics. In recent years, subdivision-based solvers, which typically make use of the properties of the Bézier/B-spline representation, have proven successful in solving such systems of polynomial constraints. A major drawback in using subdivision solvers is their lack of scalability. When the given constraint is represented as a tensor product of its variables, it grows exponentially in size as a function of the number of variables. In this paper, we present a new method for solving systems of polynomial constraints, which scales nicely for systems with a large number of variables and relatively low degree. Such systems appear in many application domains. The method is based on the concept of bounding hyperplane arithmetic, which can be viewed as a generalization of interval arithmetic. We construct bounding hyperplanes, which are then passed to a linear programming solver in order to reduce the root domain. We have implemented our method and present experimental results. The method is compared to previous methods and its advantages are discussed.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Solving polynomial systems of equations is a crucial problem in many fields such as robotics [1], computer-aided design and manufacturing [2,3], and many others [4]. This problem, namely finding the roots of a set of multivariate polynomial equations, is a difficult one, and various approaches have been proposed for it. Symbolic approaches, such as Gröbner bases and similar elimination-based techniques [5], map the original system to a simpler one which preserves the solution set. Polynomial continuation methods (also known as homotopy methods [4]) start at roots of a simpler system and trace a continuous transformation of the roots to the desired solution. These methods handle the system in a purely algebraic manner, find all complex and real roots, and give general information about the solution set. Such methods are typically not well-suited if only real roots are required.

E-mail address: [email protected].

0010-4485/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.cad.2013.08.022

1.1. Subdivision methods

In recent years a family of solvers that focuses only on real roots in a given domain has been introduced. These methods are based on subdividing the domain and purging away subdomains that cannot contain a root. Thus, they are known as subdivision methods (sometimes such methods are referred to as exclusion or generalized bisection methods). Given n implicit algebraic equations in n variables,

F_i(x_1, x_2, ..., x_n) = 0,   i = 1, ..., n,   (1)

we seek all x = (x_1, x_2, ..., x_n) that simultaneously satisfy Eq. (1).

A typical frame of a subdivision algorithm for finding the roots of a polynomial system F(x_1, ..., x_n) = 0 over an n-dimensional domain box b within some predefined tolerance ϵ goes as follows:

Algorithm: root_isolation_in_box
Input: F(x_1, ..., x_n), Box b = [x_1^min, x_1^max] × ··· × [x_n^min, x_n^max]
Output: list⟨Box⟩ boxes.


(1) If (max_i(x_i^max − x_i^min) < ϵ) append b to output boxes and return.
(2) Evaluate a bounding interval on F in b.
(3) If the bound does not contain 0, return (there is no solution in b).
(4) Otherwise: split b into subdomains, b1, b2.
(5) root_isolation_in_box(F, b1, boxes).
(6) root_isolation_in_box(F, b2, boxes).
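The recursion above can be sketched as a short Python routine. This is an illustrative skeleton, not the paper's implementation; the `eval_bound` callback, which must return a conservative [lo, hi] interval for every F_i over the box, is a hypothetical interface standing in for whichever bounding scheme (interval arithmetic, Bézier convex hulls, etc.) is used.

```python
def root_isolation_in_box(F, box, boxes, eval_bound, eps=1e-3):
    """Generic subdivision root isolator.

    F          -- the system (opaque; passed through to eval_bound)
    box        -- list of (lo, hi) intervals, one per variable
    eval_bound -- hypothetical callback returning a conservative
                  (lo, hi) bound for each F_i over the box
    """
    # Step (1): box is small enough -- report it as a candidate root box.
    if max(hi - lo for lo, hi in box) < eps:
        boxes.append(box)
        return
    # Steps (2)-(3): exclusion test -- if some F_i cannot be zero, prune.
    for lo, hi in eval_bound(F, box):
        if lo > 0 or hi < 0:
            return
    # Step (4): split along the widest axis into two halves.
    k = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, hi = box[k]
    mid = 0.5 * (lo + hi)
    # Steps (5)-(6): recurse on both halves.
    for half in ((lo, mid), (mid, hi)):
        root_isolation_in_box(F, box[:k] + [half] + box[k + 1:], boxes,
                              eval_bound, eps)
```

With an interval bound for, say, F(x) = x² − 0.25 on [0, 1], the routine returns only tiny boxes around the root x = 0.5.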

There are various modifications and enhancements to this general framework. The no-solution test in step (3) can be enhanced with single-solution tests [6,7], which enable stopping the subdivision process earlier. Then, the algorithm can switch to faster numeric methods, such as the Newton–Raphson iteration, which converge to a single root. Another common modification performs a more sophisticated domain reduction [8,9] in step (4), which finds tighter subdomains that contain roots and therefore accelerates the convergence of the algorithm.

A common approach for subdivision solvers, popular for its simplicity and wide generality, is interval arithmetic [10–12]. In interval arithmetic a value x is represented by a bounding interval X = [x^min, x^max]. Let F_i(x_1, x_2, ..., x_n) be a scalar function in n unknowns, defined in a box b = [x_1^min, x_1^max] × ··· × [x_n^min, x_n^max]. An interval evaluation of F_i in b is an interval [F^min, F^max] such that F^min ≤ F_i ≤ F^max for any value of (x_1, ..., x_n) ∈ b. That is, an interval evaluation of a function for the box gives an interval that contains all possible values of the function evaluated on points in the box. Therefore, if the interval evaluation of F_i in b does not contain zero, then no root can exist in b. This makes it suitable for use in the root isolation algorithm described above. There are many implementations of interval arithmetic software packages [13,14]; in particular, the ALIAS library [15] implements interval methods for the determination of real roots of systems of equations and inequalities. The main drawback of interval arithmetic is that the bounds given by the interval evaluation are not tight, and with every arithmetic operation the looseness may accumulate. Thus, the interval evaluation may give bounds that are too loose to be useful.
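The looseness is easy to see in code. The sketch below (illustrative, not from the paper) implements a minimal interval type and evaluates F(x) = x·x − x over [0, 1]: the true range is [−0.25, 0], but because the two occurrences of x are bounded independently, the interval evaluation returns the much looser [−1, 1].

```python
class Interval:
    """Minimal interval arithmetic: enough to evaluate a polynomial on a box."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

x = Interval(0.0, 1.0)
# F(x) = x*x - x lies in [-0.25, 0] on [0, 1], but the interval evaluation
# treats the two occurrences of x as independent and returns [-1, 1]:
F = x * x - x
print(F.lo, F.hi)  # -1.0 1.0
```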

1.2. Bézier/B-spline subdivision methods

Subdivision methods that are based on the tensorial Bézier/B-spline representation [16,2,8,9] give tight bounds for an exclusion test, based on the convex hull property of the basis. They have been implemented in recent years and applied successfully to a wide variety of problem domains [2,3].

In B-spline/Bézier subdivision solvers (e.g., [2]), the F_i, i = 1, ..., n, from Eq. (1) are usually represented as B-spline or Bézier multivariate scalar functions, i.e.,

F_i = Σ_{i_1} ··· Σ_{i_n} P_{i_1,...,i_n} B_{i_1,k_{i_1}}(x_1) ··· B_{i_n,k_{i_n}}(x_n),   (2)

where B_{i_j,k_{i_j}} are the i_j'th k_{i_j}-degree Bézier/B-spline basis functions.

Patrikalakis and Sherbrooke [9] exploited the special properties of the Bézier representation for efficient reduction of the subdomain where roots can exist. In their Projected Polyhedron (PP) algorithm the points of the control polyhedron are projected onto two-dimensional planes and the convex hull of their projection is computed. The intersections of the convex hulls are then used to reduce the domain. To achieve more robustness, Maekawa and Patrikalakis [17,18,3] extended the PP algorithm to operate in rounded interval arithmetic. This resulted in the Interval Projected Polyhedron (IPP) algorithm. Mourrain and Pavone [8] proposed a modification of the IPP algorithm so that instead of using the convex hull of the projected control points, the upper and lower envelopes of the projections would be used as control polygons of two new Bézier forms. These Bézier forms still bound the original function from above and below, and therefore can be used as a tighter bound. They use a univariate root solver to find the roots of these Bézier forms, and use them to construct the bounding intervals. Mourrain and Pavone also suggest a preconditioning step that uses an orthogonalization approach, which makes the domain reduction more efficient.

A single-solution test for B-spline/Bézier based solvers was proposed in [19]. This termination criterion was based on computing the normal cones of the function using the Bézier or B-spline representation. The single solution test is then implemented using a dual hyperplane representation of the normal cones (see [6] for details).

1.3. Limitations of B-spline/Bézier subdivision methods

As noted above, B-spline/Bézier subdivision solvers have been used successfully in recent years. However, the usage of the tensor form has a scalability limitation [20,21], which makes it impractical for systems with a large number of variables. It can be seen from Eq. (2) that the B-spline/Bézier representation grows exponentially with the number of variables n. Given a multivariate polynomial in R^n (i.e., of dimension n, with n unknowns x_1, ..., x_n) and degree d, it is typically represented with O(n^d) coefficients using the standard monomial form. However, it will be represented with O((d+1)^n) coefficients using the tensorial B-spline/Bézier representation. Thus, the B-spline/Bézier representation grows exponentially with n, whereas the monomial representation only grows polynomially in n. Therefore, when the degree d is much smaller than n, the B-spline/Bézier representation is not efficient.

Furthermore, in many cases in practice, the actual monomial representation in standard form is sparse and consists of fewer coefficients. For example, representing the constant 1 in monomial form requires just one coefficient, compared to O((d+1)^n) in the dense Bernstein–Bézier representation. Similarly, representing a linear polynomial a_0 + a_1x_1 + a_2x_2 + ··· + a_nx_n requires n + 1 coefficients in the power basis and O((d+1)^n) in the Bernstein–Bézier basis. Due to its exponential growth and its dense representation, using B-spline/Bézier subdivision methods is especially problematic for many engineering problems that are characterized by (or can be transformed to) systems of high dimension (i.e., a large number of variables n) and relatively low degree d. For example, computing the forward kinematics of a parallel robot [1,8] can be transformed to a system of quadratic constraints, but the number of variables grows with each joint of the mechanism.
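The gap between the two representations can be made concrete. For total degree at most d in n variables, the exact number of monomials is C(n+d, d), which is O(n^d) for fixed d, versus (d+1)^n coefficients for the dense tensor-product form. The short script below (illustrative only) tabulates both counts for quadratic systems:

```python
from math import comb

def monomial_count(n, d):
    # Number of monomials of total degree <= d in n variables: C(n+d, d).
    return comb(n + d, d)

def tensor_bezier_count(n, d):
    # Dense tensor-product Bezier of degree d in each variable: (d+1)^n.
    return (d + 1) ** n

# For quadratics (d = 2), the tensor form explodes while the
# monomial form stays quadratic in n:
for n in (2, 5, 10, 20):
    print(n, monomial_count(n, 2), tensor_bezier_count(n, 2))
```

For n = 20, d = 2 this is 231 monomial coefficients versus 3^20 ≈ 3.5 · 10^9 tensor coefficients.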

Little work has addressed the explosion of the B-spline/Bézier form for high-dimensional polynomials. Elber and Grandine [20] represent multivariates as expression trees and compute bounds on the expressions using interval arithmetic, to overcome this problem. Their approach is natural for symbolic manipulations of free-form curves and surfaces. It is thus suited to handle problems arising from manipulations of splines with a large number of control points (see [20]). However, the bounds given by the interval arithmetic over the expression tree are not tight. Furthermore, the expression tree structure is not well suited for more advanced domain reduction algorithms such as the Projected Polyhedron algorithm. Fünfzig et al. [21] proposed a method based on linear programming (LP [22]) to address the high-dimensionality problem for quadratic polynomials. They use a linearization of the terms in the polynomials, representing each term of type X_iX_j as a separate variable of an LP problem. Tight bounds on these variables are constructed using Bernstein polyhedra (see [21] for details) and these inequalities are solved using an LP solver, resulting in a domain reduction. While their method is successful in handling relatively high-dimensional systems of quadratic multivariate polynomials, it is not easily extended to higher degrees. Furthermore, the LP problem constructed by this method is relatively large, since the number of variables depends on the number of terms in the problem, which is quadratic in the general case.

In this paper, we will present a new method for solving systems of multivariate polynomials, which scales nicely for systems with a large number of variables and relatively low degree. In Section 2,


we will introduce the basic building blocks of our method and describe our algorithm. In Section 3 we will show experimental results comparing our algorithm to previous algorithms from the literature. We conclude in Section 4, where we discuss our results and future directions of research.

2. Algorithm

The general concept of our algorithm is to bound the multivariate polynomials by n-dimensional hyperplanes and reduce the domain using linear programming. Our bounding scheme is based on the general bounding scheme presented in [6], which can be computed for any dimension and any degree of Bézier hypersurfaces. However, computing the bounding hyperplanes directly from the dense Bézier representation is infeasible for high-dimensional problems. Therefore, we present a bounding scheme that is based on a sparse representation of the hypersurfaces. This bounding scheme is inspired by interval arithmetic. However, instead of summing bounding intervals of sub-expressions, as is done in interval arithmetic, we sum bounding hyperplanes of sub-expressions. Thus, we call our bounding scheme hyperplane arithmetic.

In Section 2.1 we recall the general hyperplane bounding scheme presented in [6]. In Section 2.2 we propose a naive linear programming algorithm based on the bounding scheme presented in Section 2.1. This naive algorithm does not scale well and therefore is not feasible for high-dimensional problems. We then explain in Section 2.3 the concept of hyperplane arithmetic and its use in bounding multivariate polynomials in sparse representation. In Section 2.4 we summarize our algorithm and show how its basic building blocks are assembled together.

2.1. Bounding a multivariate polynomial with hyperplanes

Our method is based on the hyperplane bounding scheme presented in [6]; we therefore present it here for completeness. If we consider the functions F_i(x_1, x_2, ..., x_n) as hypersurfaces in R^{n+1}, we can use the Bézier/B-spline representation to bound the hypersurface with two parallel hyperplanes.

Denote by promotion the process of converting the scalar function F_i(x_1, x_2, ..., x_n) to its vector function counterpart F_i : R^n → R^{n+1}, F_i = (x_1, x_2, ..., x_n, F_i). The normal of the explicit scalar function F_i is given by the vector v_i = (∂F_i/∂x_1, ..., ∂F_i/∂x_n, −1). Thus, we can compute the normal of F_i at a given point by evaluating the partial derivatives at that point. The unit normal vector v̄_i is then v_i/∥v_i∥.

The two bounding hyperplanes are constructed as follows. Compute the unit normal, v̄_i = (v_{x_1}, ..., v_{x_{n+1}}) ∈ R^{n+1}, of F_i at the midpoint of the subdomain. Then, project all control points of F_i onto v̄_i (see Fig. 1(a) and (b) for an illustration of the one-dimensional case).

Denote by x^max ∈ R^{n+1} (resp. x^min ∈ R^{n+1}) the point on v̄_i that is the maximal (resp. minimal) projection of the control points onto v̄_i. Then, the (n+1)-dimensional parallel bounding hyperplanes of F_i are:

⟨x − x^min, v̄_i⟩ = 0,   ⟨x − x^max, v̄_i⟩ = 0,

where x ∈ R^{n+1} (see Fig. 1(c)).

Since we are only interested in bounding F_i = 0, we only need the intersection of these two (n+1)-dimensional hyperplanes with the x_{n+1} = 0 hyperplane. Eliminating the x_{n+1} coordinate, we remain with the n-dimensional hyperplanes bounding F_i = 0:

K_i^min : ⟨x, v̄_i⟩|_{x_{n+1}=0} = x_1 v^i_1 + x_2 v^i_2 + ··· + x_n v^i_n = b_i^min,

and

K_i^max : ⟨x, v̄_i⟩|_{x_{n+1}=0} = x_1 v^i_1 + x_2 v^i_2 + ··· + x_n v^i_n = b_i^max,

where b_i^min = ⟨x^min, v̄_i⟩ and b_i^max = ⟨x^max, v̄_i⟩ (see Fig. 1(c)).

Denote by K̄_i^min and K̄_i^max the half-spaces bounded by K_i^min and K_i^max, oriented so that F_i = 0 is on their positive side. F_i = 0 is thus bounded in the region K̄_i^min ∩ K̄_i^max.

In [6] the authors presented this method of bounding a multivariate polynomial and used it to check for subdomains that contain no solution. The authors suggested there that LP methods can be used for this check, but did not implement such a method in practice.

It should be noted that while this bounding scheme is general, it is not necessarily optimal. However, finding an optimal oriented bounding box of a set of points is hard even in R^3 [23]. Therefore, this general scheme, which can easily be implemented for any dimension and degree, is a good choice for our needs. Still, for special cases, tighter specialized bounds can be implemented.

For example, a tighter bound of the function F(x) = x² in a domain [x_min, x_max] can be computed using the convexity property of the function. The function graph is bounded from above by the line connecting its endpoints and from below by the parallel line tangent to the function. Thus, it is bounded by:

ax − ((x_min + x_max)/2)² ≤ F(x) ≤ ax − x_min·x_max,

where a = (x_min² − x_max²)/(x_min − x_max) = x_min + x_max. We use such a specialized scheme in our implementation (see also Section 3.2).
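This convexity bound is easy to verify numerically. The helper below is an illustrative sketch (the function name is ours, not the paper's): it returns the shared slope and the two intercepts, and checks the sandwich on a sample grid for [0, 1], where the bound reduces to x − 0.25 ≤ x² ≤ x.

```python
def square_bounds(xmin, xmax):
    """Convexity-based bounding lines for F(x) = x^2 on [xmin, xmax].

    Returns (a, b_lo, b_hi) such that a*x + b_lo <= x^2 <= a*x + b_hi.
    """
    a = xmin + xmax                      # common slope of both lines
    b_hi = -xmin * xmax                  # secant through the two endpoints
    b_lo = -((xmin + xmax) / 2.0) ** 2   # parallel tangent line, from below
    return a, b_lo, b_hi

a, b_lo, b_hi = square_bounds(0.0, 1.0)
# On [0, 1] this gives x - 0.25 <= x^2 <= x:
for i in range(101):
    x = i / 100.0
    assert a * x + b_lo <= x * x <= a * x + b_hi
```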

2.2. A naive linear programming algorithm

Based on the hyperplane bounding scheme from Section 2.1, we can construct an algorithm that uses linear programming for domain reduction. The main idea of the algorithm is to bound the functions F_i = 0 by a pair of bounding hyperplanes K_i^min and K_i^max for all 1 ≤ i ≤ n. Furthermore, the domain [x_1^min, x_1^max] × ··· × [x_i^min, x_i^max] × ··· × [x_n^min, x_n^max] under consideration is bounded by the 2n hyperplanes: x_1^min ≤ x_1 ≤ x_1^max, ..., x_n^min ≤ x_n ≤ x_n^max. Thus, we can give these 4n bounding hyperplanes as inequalities to a linear programming solver (e.g., the GLPK library [24], which we use in our implementations; see Section 3). Assigning goal functions of maximal and minimal x_i results in 2n linear programming problems, whose solution is a tighter n-dimensional box bounding the domain.
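The reduction step can be illustrated in two dimensions. The sketch below is not the paper's GLPK-based code: each "LP" is solved by brute-force enumeration of feasible constraint-intersection points, which is valid for 2D polygons. The constraints are the four box inequalities plus a hypothetical bounding-hyperplane pair 1 ≤ x + y ≤ 1.5.

```python
from itertools import combinations

def reduce_box_2d(halfplanes):
    """LP-style box reduction in 2D via feasible-vertex enumeration.

    halfplanes: list of (a, b, c) meaning a*x + b*y <= c.
    Returns ((xmin, xmax), (ymin, ymax)) for the feasible polygon,
    or None if it is empty (i.e., no root can lie in the box).
    """
    verts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(halfplanes, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundary lines never intersect
        x = (c1 * b2 - c2 * b1) / det   # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in halfplanes):
            verts.append((x, y))
    if not verts:
        return None
    xs, ys = zip(*verts)
    return (min(xs), max(xs)), (min(ys), max(ys))

# Box [0.6, 1]^2 plus the hyperplane pair 1 <= x + y <= 1.5:
hps = [(-1, 0, -0.6), (1, 0, 1), (0, -1, -0.6), (0, 1, 1),
       (-1, -1, -1), (1, 1, 1.5)]
print(reduce_box_2d(hps))  # box shrinks to [0.6, 0.9] x [0.6, 0.9]
```

An infeasible constraint set (e.g., the same hyperplane pair with the box [0, 0.4]²) returns None, which corresponds to pruning the box entirely.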

The algorithm then follows the general scheme described in Section 1.1, and the domain is reduced in Step (3) using the linear programming procedure outlined above.

The algorithm can be described as follows:

Algorithm: Naive_LP_root_isolation
Input: F(x_1, ..., x_n), Box b = [x_1^min, x_1^max] × ··· × [x_n^min, x_n^max]
Output: list⟨Box⟩ boxes

(1) If (max_i(x_i^max − x_i^min) < ϵ) append b to output boxes and return.
(2) Bound F in b and reduce the domain:
    (a) Bound F_i by two parallel hyperplanes, using the scheme from Section 2.1.
    (b) Solve 2n LP problems to compute new [x_i^min, x_i^max] values for each x_i.
    (c) If one of the LP problems is infeasible, return (there is no solution in b).
    (d) Subdivide each F_i to the new domain with the new [x_i^min, x_i^max] for each x_i.
(3) Split b into subdomains, b1, b2, along the maximal axis of b.
(4) Naive_LP_root_isolation(F, b1, boxes).
(5) Naive_LP_root_isolation(F, b2, boxes).

Note that the LP reduction in Step (2) may not be satisfactory. For example, if two roots are in the domain, then the reduction step cannot reduce the domain to be smaller than the distance between



Fig. 1. An illustration of bounding F_i for the simple case where n = 1. (a) Presents the promoted function F_i and its control points, along with the normal line v̄_i of F_i at the midpoint of the subdomain. In (b), the control points are projected onto the normal line. (c) Presents the two extreme bounding hyperplanes K_i^min and K_i^max (the thick gray lines), which are intersected with the x_{n+1} = 0 hyperplane to construct the bounding region, in thick black, of the implicit function F_i = 0. Source: Taken from [6].

the roots, since both roots should be inside the reduced domain. Still, if this happens, Step (3) will split the domain (just as is done in the basic root_isolation_in_box algorithm) until the two roots are separated.

While this algorithm enhances the basic root isolation scheme by LP-based domain reduction, it does not address the scalability problem. Namely, the procedure described in Section 2.1 requires going over all the control points of the hypersurface and projecting them onto the normal vector. Thus, as the dimension grows, the runtime complexity of the procedure grows exponentially and the procedure becomes infeasible. We have implemented this algorithm for reference (see Section 3) and denote it the naive LP algorithm. To make it feasible for high-dimensional systems, we need to compute a bound on the function with a procedure that does not grow exponentially. In Section 2.3 we will explain our approach to this problem.

2.3. Hyperplane arithmetic

Using the bounding scheme from Section 2.1 requires projecting all of the control points onto the normal of the hypersurface. Thus, the construction of the bounding hyperplanes is exponential in the number of variables (i.e., the dimension) n of the system. Therefore, we wish to compute bounding hyperplanes for high-dimensional systems using a sparser representation.

Let the polynomial expression be represented in its monomial form. Then:

F_i(x) = Σ_j c_j M_j(x),

where M_j(x) is the j'th monomial term of the polynomial and c_j is its coefficient. The monomial terms of the polynomial consist of the product:

M(x_1, x_2, ..., x_n) = x_1^{α_1} x_2^{α_2} ··· x_n^{α_n},

where the α_i are integer degrees and the sum of the α_i is less than or equal to the polynomial degree d. For an illustration, the polynomial F(x_1, x_2) = x_1² + x_2² has two monomial terms, namely x_1² and x_2². Note that in our context, of a relatively low degree compared to the dimension, the number of variables x_k actually participating in the monomial (i.e., with α_k ≠ 0) is relatively small, since it cannot exceed d. Furthermore, although the total number of monomial terms in a polynomial is O(n^d), in many cases in practice the polynomial is sparse and the actual number of monomial terms is much smaller (see examples in Section 3).
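For illustration, such a sparse polynomial can be stored as a map from exponent tuples to coefficients, so that only terms that actually occur take space. This representation is a hypothetical sketch, not the paper's implementation:

```python
# F(x1, x2) = x1^2 + x2^2 - 1, stored sparsely: three terms, not a
# dense degree-2 coefficient array.
F = {(2, 0): 1.0, (0, 2): 1.0, (0, 0): -1.0}

def degree(poly):
    """Total degree d: the maximal exponent sum over the stored monomials."""
    return max(sum(expts) for expts in poly)

def evaluate(poly, x):
    """Evaluate the sparse polynomial at the point x."""
    total = 0.0
    for expts, c in poly.items():
        term = c
        for xk, ak in zip(x, expts):
            term *= xk ** ak
        total += term
    return total

print(degree(F))              # 2
print(evaluate(F, (0.6, 0.8)))  # approximately 0.0: (0.6, 0.8) is a root
```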

In interval arithmetic, a sub-expression f_j is bounded by an interval [f_j^min, f_j^max], and the sum of sub-expressions f_i + f_j is then bounded by the interval [f_i^min + f_j^min, f_i^max + f_j^max]. A generalization of this idea is to bound a sub-expression f_j not by two values but by two functions [f_j^min(x), f_j^max(x)]. Thus, if f_j^min(x) ≤ f_j(x) ≤ f_j^max(x) for any point x in the domain, and similarly f_i^min(x) ≤ f_i(x) ≤ f_i^max(x), then we get:

f_i^min(x) + f_j^min(x) ≤ f_i(x) + f_j(x) ≤ f_i^max(x) + f_j^max(x).

In our context, the sub-expressions are the monomials M_j(x) of the polynomial and the bounding functions are linear functions, i.e., hyperplanes. Therefore, we call this generalization of interval arithmetic hyperplane arithmetic.¹

For an illustration, consider the polynomial F = x_1² + x_2² − 1 in the domain [0, 1] × [0, 1] (see also Fig. 2). We can bound the first monomial term from above by the hyperplane f_1^max = x_1 (since in the domain x_1² ≤ x_1) and from below by f_1^min = x_1 − 0.25 (since in the domain x_1² ≥ x_1 − 0.25). Similarly, for the second term the bounds will be f_2^max = x_2 and f_2^min = x_2 − 0.25. The total bound on f_1 + f_2 = x_1² + x_2² is therefore:

x_1 + x_2 − 0.5 ≤ x_1² + x_2² ≤ x_1 + x_2.

For the unit term (and also for linear terms) the bound is tight, and thus f_3^min = −1 = f_3^max. When we add f_3 to the expression we received (i.e., subtract 1 from the upper and lower bounds), we get the total bounds on F = x_1² + x_2² − 1 (see Fig. 2):

x_1 + x_2 − 1.5 ≤ x_1² + x_2² − 1 ≤ x_1 + x_2 − 1.
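These bounds are easy to sanity-check numerically. The loop below is an illustrative check, not part of the solver: it samples the unit square and verifies the sandwich at every grid point.

```python
# Sample the unit square and confirm the hyperplane-arithmetic bound
# x1 + x2 - 1.5 <= x1^2 + x2^2 - 1 <= x1 + x2 - 1.
steps = 50
for i in range(steps + 1):
    for j in range(steps + 1):
        x1, x2 = i / steps, j / steps
        F = x1 * x1 + x2 * x2 - 1.0
        assert x1 + x2 - 1.5 <= F <= x1 + x2 - 1.0
```

The lower bound is attained at (0.5, 0.5) and the upper bound at the corners, so both hyperplanes actually touch the surface.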

Bounding any monomial term by two hyperplanes can be done using the procedure described in Section 2.1. We store the different term types as representative multivariate hypersurfaces in free-form representation.² For example, for a degree-3 system we store the following types of hypersurfaces: one-variable hypersurfaces (x², x³), two-variable hypersurfaces (xy, x²y), and a three-variable hypersurface (xyz). When a monomial term of type x_1²x_2 is encountered, for example, we reduce the representative hypersurface x²y to the subdomain [x_1^min, x_1^max] × [x_2^min, x_2^max] and bound the x²y hypersurface in the new subdomain by two hyperplanes K^min(x_1, x_2) and K^max(x_1, x_2).

Once a monomial term M_j(x) is bounded by two hyperplanes K_j^min and K_j^max, we multiply it by its constant coefficient c_j. If the coefficient is positive, then we just multiply the hyperplane coefficients by c_j. For a negative c_j, however, we have to switch between the lower and upper hyperplanes, since if

K^min ≤ f ≤ K^max

and c_j < 0, then:

c_j K^max ≤ c_j f ≤ c_j K^min.

Thus, to bound the polynomial F_i(x) = Σ_j c_j M_j(x), the bounding procedure goes over all terms M_j, bounds them, and multiplies by their coefficient c_j. The upper and lower bounding hyperplanes K_ij^min and K_ij^max of each term are summed, resulting in two hyperplanes K_i^min and K_i^max that bound F_i.

¹ Note that the analogy to interval arithmetic is not full, as we do not support multiplication of two sub-expressions, only addition, subtraction and multiplication by a scalar.
² In our implementation we use IRIT [25] multivariates.
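The accumulation described above (scale each term's hyperplane pair by its coefficient, swapping the pair for negative coefficients, then sum) can be sketched with two small helpers. The (coefficients, constant) representation and the function names are illustrative assumptions, not the paper's code:

```python
def scale_bounds(c, lo, hi):
    """Multiply a (lower, upper) hyperplane pair by the scalar coefficient c.

    Each hyperplane is (coeffs, const), meaning sum(coeffs[k]*x_k) + const.
    For negative c the lower and upper hyperplanes switch roles.
    """
    scaled = lambda h: ([c * a for a in h[0]], c * h[1])
    return (scaled(hi), scaled(lo)) if c < 0 else (scaled(lo), scaled(hi))

def add_bounds(b1, b2):
    """Sum two (lower, upper) hyperplane pairs, as in hyperplane arithmetic."""
    add = lambda h, g: ([a + b for a, b in zip(h[0], g[0])], h[1] + g[1])
    return add(b1[0], b2[0]), add(b1[1], b2[1])

# Rebuild the bound on F = x1^2 + x2^2 - 1 over [0,1]^2 from the per-term
# bounds x_i - 0.25 <= x_i^2 <= x_i and the tight constant term -1:
t1 = (([1.0, 0.0], -0.25), ([1.0, 0.0], 0.0))
t2 = (([0.0, 1.0], -0.25), ([0.0, 1.0], 0.0))
t3 = (([0.0, 0.0], -1.0), ([0.0, 0.0], -1.0))
lo, hi = add_bounds(add_bounds(scale_bounds(1.0, *t1),
                               scale_bounds(1.0, *t2)), t3)
print(lo, hi)  # ([1.0, 1.0], -1.5) ([1.0, 1.0], -1.0)
```

This reproduces the pair of hyperplanes x1 + x2 − 1.5 and x1 + x2 − 1 derived in the worked example above.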



Fig. 2. An illustration of hyperplane arithmetic for bounding F = x² + y² − 1 in the domain [0, 1] × [0, 1]. (a) F = x² and its bounding hyperplanes: F^max = x and F^min = x − 0.25. (b) F = y² and its bounding hyperplanes: F^max = y and F^min = y − 0.25. (c) F = x² + y² − 1 and its bounding hyperplanes, which are the sum of the upper and lower bounding planes of (a) and (b) minus the unit 1: F^max = x + y − 1 and F^min = x + y − 1.5. The intersection curve with the xy-plane, and its bounding lines, are also depicted in bold.

The advantage of hyperplane arithmetic over interval arithmetic (for addition and subtraction) is that it gives tighter bounds, since the bound is a linear function and not a constant function. For linear functions, hyperplane arithmetic is actually exact. Furthermore, the linear bounds can then be used by an efficient LP method for domain reduction.

The advantage of hyperplane arithmetic over the naive method from Section 2.1 is that it does not require the full dense Bézier representation for its computation. The sparse monomial representation suffices to compute the bounding hyperplanes. In our implementation we use a sparse representation that stores only the monomial terms actually participating in the system (i.e., O(n^d) in the worst case, but fewer for sparse systems).

2.4. Algorithm overview

Using all the building blocks presented in the preceding sections, our algorithm is now straightforward. Basically, we implement the naive LP algorithm from Section 2.2, but use the hyperplane arithmetic from Section 2.3 to bound the function from above and below.

The algorithm can therefore be described as follows:

Algorithm: LP_root_isolation_using_HP_arithmetic
Input: F(x_1, ..., x_n), Box b = [x_1^min, x_1^max] × ··· × [x_n^min, x_n^max]
Output: list⟨Box⟩ boxes

(1) If (max_i(x_i^max − x_i^min) < ϵ) append b to output boxes and return.
(2) Bound F in b using hyperplane arithmetic and LP:
    (a) Bound F_i by two parallel hyperplanes, using hyperplane arithmetic on the bounds of its terms (Section 2.3).
    (b) Solve 2n LP problems to compute new [x_i^min, x_i^max] values for each x_i.
    (c) If one of the LP problems is infeasible, return (there is no solution in b).
    (d) Set b to be the new domain with the new [x_i^min, x_i^max] for each x_i.
(3) Split b into subdomains, b1, b2, along the maximal axis of b.
(4) LP_root_isolation_using_HP_arithmetic(F, b1, boxes).
(5) LP_root_isolation_using_HP_arithmetic(F, b2, boxes).

Note that unlike the dense Bézier/B-spline representation, the splitting in Steps (3) and (2.d) does not require a full subdivision process in a sparse representation. Only the box is updated, and the subdivision is applied to each term when bounding it in Step (2.a).

3. Experimental results

In this section we present several examples of solving polynomial systems and compare our method to implementations of some of the previous subdivision methods from the literature.

We have implemented the hyperplane arithmetic linear programming algorithm presented in this paper (Section 2.4) and denote it in the following sections by HPA_LP. Another method we have implemented for comparison is the naive linear programming algorithm described in Section 2.2, which we denote by Naive_LP. Furthermore, we have implemented a basic variant of the Bernstein polytope LP method described in [21], which we denote by Bernst_LP.

We also compare our method to subdivision methods implemented in the IRIT library [25]. IRIT has several flags that enable it to switch between subdivision methods, and we use two IRIT algorithm variants. The first is a basic subdivision that performs no reduction, which was the initial algorithm presented in [2]; we denote this method by IRIT_Naive. The second incorporates a reduction scheme that is a variant of the projected polyhedron method from [9,3]; we denote this method by IRIT_PP.

In all the implementations of the LP-based methods (HPA_LP, Naive_LP, Bernst_LP), we used the GNU Linear Programming Kit (GLPK 4.45 [24]) with the basic simplex method. We also made use of the IRIT library for basic geometric and multivariate operations in Bézier and B-spline bases. All tests were run on a Windows 7 system with an Intel Core i7 CPU (1.6 GHz) and 10 GB of installed memory.

The main goal of our tests is to see how our method scales in comparison with other methods. Therefore, to test our method, we concentrated on problems that can easily be scaled and whose solutions can easily be verified. In all the following examples/tests the subdivision was performed up to a tolerance of ϵ = 10^−3.

3.1. Intersecting hypercylinders

The first test benchmark we devised in order to compare the scaling of the different methods is the following system:

F_i(x_1, . . . , x_n) = ∑_{j≠i} x_j² − 1 = 0.  (3)

For n = 3, this system represents the intersection of three cylinders of radius 1 aligned along the main axes (see Fig. 3).

It is easy to verify, by subtracting every consecutive pair of equations in the system, that x_i² = x_j² for all i, j. Thus, the solution of this system is given by:

x_i = ±√(1/(n − 1)).
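The claimed root can be checked numerically. The sketch below (with hypothetical helper names) evaluates the residuals of Eq. (3) at the all-positive root for n = 3; each residual should vanish up to floating-point error.

```python
import math

def hypercylinder_F(x):
    # Residuals of Eq. (3): F_i = (sum over j != i of x_j^2) - 1.
    total = sum(xj * xj for xj in x)
    return [total - xi * xi - 1.0 for xi in x]

# The all-positive root x_i = sqrt(1/(n-1)) for n = 3:
n = 3
root = [math.sqrt(1.0 / (n - 1))] * n
residuals = hypercylinder_F(root)   # each ~0
```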

From the form of the solution it is clear that there are 2^n solutions (corresponding to the 2^n combinations of plus and minus) in the [−1, 1]^n domain. However, we were mostly interested in the



Fig. 3. The Hypercylinders system from Eq. (3) for n = 3 in the domain [−2, 2] × [−2, 2] × [−2, 2]. The solutions correspond to the intersection points between the red, green and blue cylinders. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Comparison of the IRIT_Naive, IRIT_PP, Naive_LP, Bernst_LP and HPA_LP for n = 3, . . . , 7 on the Hypercylinders input. See also Table 1.

comparison between the different reduction schemes. Therefore, we tested the different methods in the [0, 1]^n domain, where only a single root exists (the all-positive root).

Table 1 presents the running times (in milliseconds) for the different methods. Note that for the IRIT_Naive, IRIT_PP and Naive_LP methods, a preprocessing step is performed, which converts the standard polynomial form to the dense Bézier/B-spline representation. This preprocessing time should be added to the computation times for these methods to get their total computation times. In Figs. 4–11, the values are plotted without taking the preprocessing time into consideration.

Fig. 4 shows a graph comparison of the different methods for n = 3, . . . , 7. As can be seen, the naive Bézier representation methods IRIT_Naive and Naive_LP perform poorly on this input. Even though it uses a dense Bézier representation, the IRIT_PP method performs rather well. However, as can be seen in Fig. 5, its runtime deteriorates as n increases. Also, as noted above, the slow preprocessing step should be taken into account. The performance of the Bernst_LP method is exceptionally good on this input.

Fig. 5. Comparison of only the fastest methods from Fig. 4 for n = 3, . . . , 10. See also Table 1.

Fig. 6. The Hypercylinders system of degree 3 for n = 3 in the domain [0, 1] × [0, 1] × [0, 1]. The solution corresponds to the (single) intersection point (∛(1/2), ∛(1/2), ∛(1/2)) between the red, green and blue cylinders. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

3.2. Extension to degree-3

We extend the hypercylinder system from Section 3.1 to the cubic system:

F_i(x_1, . . . , x_n) = ∑_{j≠i} x_j³ − 1 = 0.  (4)

The solution of this system is given by:

x_i = ∛(1/(n − 1)).

Therefore, there is only a single real solution for this system. Fig. 6 shows the surfaces represented by this system for n = 3.

We tested the different methods in the [0, 1]^n domain. As noted in Section 1.2, the Bernst_LP method is not extended to higher degrees and therefore was not tested on this system. Table 2 presents the running times (in milliseconds) for the different methods.

As can be seen in Fig. 7, the naive Bézier representation methods IRIT_Naive and Naive_LP perform poorly on this input as well. While the IRIT_PP method performs better, its runtime still increases as n grows and it has a large overhead in the preprocessing step.

As mentioned in Section 2.1, the general hyperplane bounding scheme is not necessarily optimal. Therefore, we wanted to test the effect of tighter specialized hyperplane bounds.



Table 1
Computation time for the intersecting Hypercylinders system as a function of n = 3, . . . , 10. Times are given in milliseconds. The times in parentheses are the conversion (preprocess) times from standard polynomial form to the Bézier representation, which is required only for the IRIT_Naive, IRIT_PP and Naive_LP methods.

             3       4       5       6       7       8       9        10
Preprocess   (0.27)  (1.15)  (9.74)  (82.8)  (902)   (4004)  (52135)  (223409)
IRIT_Naive   6.43    74.63   1322    42759   –       –       –        –
IRIT_PP      2.08    4.1     10.96   75.04   427.6   1043    8444     20891
Naive_LP     12.29   69.74   720.5   6360    137130  –       –        –
Bernst_LP    18.34   21.08   26.23   38.46   79.06   95.25   137.1    138.9
HPA_LP       4.3     6.16    24.14   96.58   455.5   995.9   6058     13115

Table 2
Computation time for the degree-3 intersecting Hypercylinders system as a function of n = 3, . . . , 10. Times are given in milliseconds. The Bernst_LP method is not implemented for degree-3 systems.

               3      4      5      6       7        8         9      10
Preprocess     (0.6)  (7.7)  (124)  (1467)  (18336)  (269331)  (–)    (–)
IRIT_Naive     7.79   138    3429   267003  –        –         –      –
IRIT_PP        2.13   6.71   40.1   269     1644     11196     –      –
Naive_LP       18.78  87.4   1257   31730   759257   –         –      –
HPA_LP         3.34   12.2   32.66  127.9   522.8    2215      12380  52393
HPA_LP_no_opt  5.33   11.9   176.6  623.2   3234     18012     93438  480143

Fig. 7. Comparison of the IRIT_Naive, IRIT_PP, Naive_LP, and HPA_LP for n = 3, . . . , 7 on the degree-3 Hypercylinders input. See also Table 2.

Fig. 8. Comparison of computation with and without the specialized bounding method optimized for x³. See also Table 2.

For F(x) = x³ in a domain [x_min, x_max] such a tighter bound is given by:

ax + l ≤ F(x) ≤ ax + u,

where a is the slope between the endpoints:

a = (x_max³ − x_min³)/(x_max − x_min) = x_max² + x_min x_max + x_min².

For a totally positive domain (i.e., x_min > 0), u can be computed as the line passing through the point (x_min, x_min³):

u = x_min³ − a x_min,

Fig. 9. Comparison of the IRIT_PP, Naive_LP, Bernst_LP and HPA_LP for n = 1, . . . , 6 on the Broyden tridiagonal input. Note that between n = 5 and n = 6 Bernst_LP becomes faster than the Bézier representation methods IRIT_PP and Naive_LP. See also Table 3.

Fig. 10. Comparison of the sparse-representation methods Bernst_LP and HPA_LP for n = 3, . . . , 10 on the Broyden tridiagonal input. See also Table 3.

and l can be computed for the tangent line with slope a:

l = −(2a/3)√(a/3).

Similar bounds are derived for totally negative domains or domains containing zero.
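Under the stated assumption of a totally positive domain, the secant/tangent bounds above can be sketched and sanity-checked as follows (the function name is our own; by the convexity of x³ on x > 0, the secant through the endpoints lies above the curve and the parallel tangent lies below it):

```python
import math

def cubic_bounds(xmin, xmax):
    # Specialized bounds for F(x) = x^3 on a totally positive domain
    # [xmin, xmax], 0 < xmin < xmax:   a*x + l <= x^3 <= a*x + u.
    a = xmax**2 + xmin * xmax + xmin**2   # secant slope (x_max^3 - x_min^3)/(x_max - x_min)
    u = xmin**3 - a * xmin                # intercept of the secant line (upper bound)
    l = -(2.0 * a / 3.0) * math.sqrt(a / 3.0)   # tangent line with slope a (lower bound)
    return a, l, u
```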

To test the effect of these tighter bounds, we implemented a version of our algorithm without the optimized bounds described above. We denote this version HPA_LP_no_opt. Fig. 8 shows a comparison of the two versions. As can be seen, the effect of tighter hyperplane bounds is indeed significant.



Table 3
Computation time for the Broyden tridiagonal system as a function of n. Times are given in milliseconds.

             3       4       5       6        7        8       9       10
Preprocess   (0.32)  (1.17)  (9.07)  (80.59)  (528.6)
IRIT_Naive   7.6070  38.231  141.95  696.26   4569     –       –       –
IRIT_PP      6.0878  24.008  163.50  1917.7   21998    –       –       –
Naive_LP     19.970  93.846  340.81  1409.1   6588     –       –       –
Bernst_LP    102.60  181.57  466.94  929.89   2186     4644.7  9347.1  1763
HPA_LP       4.9676  9.3681  12.06   19.966   36.17    52.49   85.550  152.12

3.3. Broyden tridiagonal system

The Broyden tridiagonal system is a classic test function used in the optimization community [26,27]. The system is defined by:

(3 − 2x_1)x_1 − 2x_2 + 1 = 0
· · ·
(3 − 2x_i)x_i − x_{i−1} − 2x_{i+1} + 1 = 0,  i = 2, . . . , n − 1
· · ·
(3 − 2x_n)x_n − x_{n−1} + 1 = 0.

As can be seen, the system can easily be constructed for increasing n values. Furthermore, it has two real roots in the domain [−2, 2]^n [28]. While as an optimization problem it may not necessarily be required to compute all the roots, in our context we are interested in computing all real roots of the system within the domain.
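A minimal sketch of constructing the Broyden tridiagonal residuals for arbitrary n (the helper name is our own). The boundary equations fall out of the general form by treating the missing neighbors x_0 and x_{n+1} as zero:

```python
def broyden_F(x):
    # Broyden tridiagonal residuals for any n >= 2:
    #   (3 - 2x_i) x_i - x_{i-1} - 2 x_{i+1} + 1,
    # with the out-of-range neighbors taken as 0 at the two ends.
    n = len(x)
    F = []
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        F.append((3.0 - 2.0 * x[i]) * x[i] - left - 2.0 * right + 1.0)
    return F
```

For example, at the origin every residual equals the constant term, broyden_F([0]*n) = [1.0]*n, which makes the construction easy to spot-check for any n.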

Table 3 presents the running times (in milliseconds) for the different methods. As expected, while the methods using the Bézier representation (namely IRIT_PP, IRIT_Naive and Naive_LP) are relatively fast for small n values, they become slower as n increases. In particular, the Bernst_LP method becomes faster than the Naive_LP and IRIT_PP methods when n is greater than 5 (see Fig. 9). A somewhat surprising result is the fact that the IRIT_Naive method performs better than the IRIT_PP and Naive_LP methods (see Table 3). This is contrary to our expectations that the more sophisticated methods would outperform the naive ones. Another interesting result is the relatively poor performance of the sparse Bernst_LP method on this input. Even so, as expected, for a sufficiently large n (7 in our case) the Bernst_LP method outperforms all the dense Bézier representation methods, including IRIT_Naive.

As can be seen in Fig. 10, our HPA_LP algorithm works well on this benchmark. This might be related to the fact that the Broyden input contains many linear terms. Since for linear terms our bounds are exact, the total bounds on the polynomials are tight. In fact, it should be noted that for a totally linear system, the HPA_LP method returns the solution within one iteration of the LP reduction.

3.4. A dense example system

The previous examples have all been sparse systems. To test the behavior on a dense system we use the following system, taken from [29], where it was used in the context of network management algorithms. In our context, we use it as an example of a dense scalable system with verifiable roots.

The system is given by the equations:

∀l : F_l(x_1, . . . , x_l) = ∑_{i=1}^{l} (x_i² − x_i) − δ² (∑_{i=1}^{l} x_i)² = 0,  (5)

where δ is a given (user-defined) constant and x_i, i = 1, . . . , n are the variables of the system. As can be seen, this is a quadratic system containing all the x_i, x_i², and x_i x_j monomial term types. The F_l functions also have a geometric meaning. For δ² < 1/2 they can be viewed as l-dimensional ellipsoids (in (x_1, . . . , x_l)-space) with their main axis along the vector (1, 1, . . . , 1) and going through the origin.

Table 4
Computation time for the dense system with δ = 0.5, as a function of n = 2, . . . , 6. Times are given in milliseconds.

             2         3        4          5       6
Preprocess   (0.069)   (0.214)  (1.59495)  (8.66)  (70.73)
IRIT_Naive   2.2       3.6      37.4       383.4   7966
IRIT_PP      1.45      5.4      51.4       626     22693
Bernst_LP    28.7      115      712.3      4254    35195
HPA_LP       5.019     37.09    56.1       261.9   914

Fig. 11. Comparison of runs on the dense system for n = 2, . . . , 6. See also Table 4.

This system is triangular and has an analytic solution that can be computed recursively. The solution point (x_1, x_2, . . . , x_n) is given by the following recursive equation (see [29] for details):

x_l = (1 + 2δ² ∑_{i=1}^{l−1} x_i) / (1 − δ²)  or  x_l = 0,

with

x_1 = 1/(1 − δ²)  or  x_1 = 0.

The total number of solutions in [0, ∞)^n is therefore 2^n (in every step of the recursion one can choose 0 or the recursion formula, so the solutions are equivalent to n-digit binary numbers). In the strictly positive domain (0, ∞)^n the system has a single solution.
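The recursion and Eq. (5) can be cross-checked numerically. The sketch below (hypothetical helper names) computes the all-positive branch of the solution for δ = 0.5, n = 4 and evaluates the residuals of each F_l, which should all vanish:

```python
def dense_F(l, x, delta):
    # F_l from Eq. (5): sum_{i<=l} (x_i^2 - x_i) - delta^2 * (sum_{i<=l} x_i)^2.
    s = sum(x[:l])
    return sum(xi * xi - xi for xi in x[:l]) - delta**2 * s * s

def positive_solution(n, delta):
    # The all-positive branch of the recursive solution (see [29]):
    # x_l = (1 + 2 delta^2 * sum_{i<l} x_i) / (1 - delta^2), starting from
    # x_1 = 1 / (1 - delta^2)  (the empty sum makes the first step agree).
    d2 = delta**2
    x = []
    for _ in range(n):
        x.append((1.0 + 2.0 * d2 * sum(x)) / (1.0 - d2))
    return x

x = positive_solution(4, 0.5)                     # x[0] = 1/(1 - 0.25) = 4/3, ...
residuals = [dense_F(l, x, 0.5) for l in range(1, 5)]   # each ~0
```

Flipping any subset of steps to the x_l = 0 branch instead yields the other solutions in [0, ∞)^n, matching the 2^n count above.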

To compare the convergence of the different methods, we tested the system for δ = 0.5 and for n = 2, . . . , 6. The initial domain was set to [ϵ, 20]^n, where ϵ = 0.01. Table 4 summarizes the results.

Fig. 11 shows a graph comparison of the different methods for n = 2, . . . , 6. As can be seen in the figure, for this input our HPA_LP algorithm performs much better than the other algorithms. It is interesting to note that for this benchmark too the IRIT_Naive algorithm performs better than the IRIT_PP algorithm, and both perform better than the Bernst_LP algorithm.

4. Discussion and future work

In this paper, we have presented a new method for solving polynomial systems, which is based on a representation that scales nicely for systems with relatively low degree and many variables. The method is based on the concept we termed bounding hyperplane arithmetic, which can be viewed as a generalization of interval arithmetic. The advantage of this method compared to interval arithmetic methods is its tighter bounds. Compared to previous subdivision methods, which are based on dense Bézier/B-spline representations, the sparse representation used in our method enables better scaling of the problems as the number of variables grows. On the other hand, our bounding method is general enough to be



implemented for any degree. The experimental results show that our method compares favorably to previous methods from the literature.

Our algorithm can be improved and extended in several ways. Adding inequality constraints to the system can be done by bounding the inequality constraint with a hyperplane from only one side (above or below, depending on the inequality). The additional hyperplanes from the inequalities only augment the LP problem and therefore might be incorporated into the algorithm without modifying its flow. As shown in Section 3.2, the tightness of the bounds is an important factor in the performance of the algorithm. Thus, any specialized bound that produces tighter bounding hyperplanes compared to the general scheme from Section 2.1 will result in better reduction and improved performance. Such tighter bounds might be based on the piecewise linear upper and lower bounds termed slefes (subdividable linear efficient function enclosures), which were described in [30]. The computation of slefes will incur an overhead compared to using control points. However, specialized bounds that are based on slefes are expected to be tighter than the general bounding scheme, since the slefe points bound the polynomial more tightly than the control points [31]. Further research should also be conducted on subdivision stopping criteria that can work on our sparse representation.

Solving polynomial systems is a difficult and important problem. Different parameters affect the efficiency of the different methods. In this paper we concentrated on the number of variables of the system. Other parameters are the system degree and its denseness. The experimental results in Section 3 seem to indicate that no one solution method is superior for all inputs. This leads us to believe that further research should be conducted to compare different methods and discover specialized algorithms that can be expected to work better on specific types of input, as we have done for scalable systems. A practical algorithm might be to run combinations of the different reduction methods, or run them in parallel and choose the best reduction or an intersection of the reductions.

One research direction should be theoretical, trying to identify and analyze new parameters that affect the runtime of the algorithms. It seems that the geometry of the problem affects the running time (compare, for example, the results in Sections 3.1 and 3.3), but we currently have no measure to tell us if a given problem geometry is well suited for a given method. Such measures should be given further study. Another direction of study should be empirical, comparing implementations of different methods on different benchmarks. We hope that the scalable benchmarks presented in this paper are a first step in this direction, and that other benchmarks will follow.

Acknowledgments

The author wishes to thank Hagay Bamberger for helpful discussions and comments. This work was partly supported by the Loewengart Research Fund.

References

[1] Merlet J-P. Parallel robots. Solid mechanics and its applications. Kluwer; 2005.
[2] Elber G, Kim M-S. Geometric constraint solver using multivariate rational spline functions. In: SMA 2001: proceedings of the sixth ACM symposium on solid modeling and applications. ACM; 2001. p. 1–10.
[3] Patrikalakis N, Maekawa T. Shape interrogation for computer aided design and manufacturing. Mathematics and visualization. Springer; 2002.
[4] Sommese AJ, Wampler CW. The numerical solution of systems of polynomials arising in engineering and science. World Scientific; 2005.
[5] Cox D, Little J, O'Shea D. Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra. Undergraduate texts in mathematics, vol. 10. Springer; 2007.
[6] Hanniel I, Elber G. Subdivision termination criteria in subdivision multivariate solvers using dual hyperplanes representations. Computer-Aided Design 2007;39(5):369–78.
[7] Tapia RA. The Kantorovich theorem for Newton's method. The American Mathematical Monthly 1971;78(4):389–92.
[8] Mourrain B, Pavone JP. Subdivision methods for solving polynomial equations. Journal of Symbolic Computation 2009;44(3):292–306.
[9] Sherbrooke EC, Patrikalakis NM. Computation of the solutions of nonlinear polynomial systems. Computer Aided Geometric Design 1993;10(5):379–405.
[10] Merlet J-P. Interval analysis for certified numerical solution of problems in robotics. Applied Mathematics and Computer Science 2009;19(3):399–412.
[11] Moore RE, Kearfott RB, Cloud MJ. Introduction to interval analysis. SIAM; 2009.
[12] Neumaier A. Interval methods for systems of equations. Encyclopedia of mathematics and its applications. Cambridge University Press; 1990.
[13] The Boost interval arithmetic library. www.boost.org/doc/libs/.
[14] Rump S. INTLAB—INTerval LABoratory. In: Csendes T, editor. Developments in reliable computing. Dordrecht: Kluwer Academic Publishers; 1999. p. 77–104. www.ti3.tuhh.de/rump/.
[15] Merlet J-P. The ALIAS library: an algorithms library of interval analysis for equation systems. www-sop.inria.fr/coprin/logiciels/ALIAS/ALIAS.html.
[16] Lane J, Riesenfeld R. Bounds on a polynomial. BIT Numerical Mathematics 1981;21:112–7.
[17] Maekawa T, Patrikalakis NM. Computation of singularities and intersections of offsets of planar curves. Computer Aided Geometric Design 1993;10(5):407–29.
[18] Maekawa T, Patrikalakis NM. Interrogation of differential geometry properties for design and manufacture. The Visual Computer 1994;10(4):216–37.
[19] Hanniel I, Elber G. Subdivision termination criteria in subdivision multivariate solvers. In: GMP. 2006. p. 115–28.
[20] Elber G, Grandine TA. An efficient solution to systems of multivariate polynomials using expression trees. IEEE Transactions on Visualization and Computer Graphics 2009;15(4):596–604.
[21] Fünfzig C, Michelucci D, Foufou S. Nonlinear systems solver in floating-point arithmetic using LP reduction. In: Symposium on solid and physical modeling. 2009. p. 123–34.
[22] Noyes J, Weisstein EW. Linear programming. From MathWorld—a Wolfram web resource. http://mathworld.wolfram.com/LinearProgramming.html.
[23] Barequet G, Har-Peled S. Efficiently approximating the minimum-volume bounding box of a point set in three dimensions. In: Proc. 10th ACM–SIAM sympos. discrete algorithms. 2001. p. 38–91.
[24] GLPK: the GNU Linear Programming Kit. www.gnu.org/software/glpk/.
[25] Elber G. The IRIT 11 User Manual. 2013. http://www.cs.technion.ac.il/~irit.
[26] Moré JJ, Garbow BS, Hillstrom KE. Testing unconstrained optimization software. ACM Transactions on Mathematical Software 1981;7(1):17–41.
[27] Broyden C. A class of methods for solving nonlinear simultaneous equations. Mathematics of Computation 1965;19:577–93.
[28] The COPRIN examples page. www-sop.inria.fr/coprin/logiciels/ALIAS/Benches/benches.html.
[29] Tsidon E, Hanniel I, Keslassy I. Estimators also need shared values to grow together. In: INFOCOM. 2012. p. 1889–97.
[30] Peters J. Mid-structures linking curved and linear geometry. In: SIAM conference on geometric design and computing. 2003.
[31] Peters J, Wu X. On the optimality of piecewise linear max-norm enclosures based on slefes. In: Proceedings of the 2002 St Malo conference on curves and surfaces. 2003. p. 335–44.