Applied Mathematics and Computation 216 (2010) 227–235
Modification of Karmarkar’s projective scaling algorithm
R.K. Nayak a,*, M.P. Biswal b, S. Padhy c
a International Institute of Information Technology, Bhubaneswar, Orissa, India
b Indian Institute of Technology, Kharagpur, West Bengal, India
c Utkal University, Bhubaneswar, Orissa, India
Keywords: Interior point method; Karmarkar's method; Hooker's method; Preconditioned Conjugate Gradient method
doi:10.1016/j.amc.2010.01.043
* Corresponding author. E-mail addresses: [email protected] (R.K. Nayak), [email protected] (M.P. Biswal), [email protected] (S. Padhy).
Abstract
This paper presents a new technique for converting a standard linear programming problem into the homogeneous form required by Karmarkar's algorithm, based on the primal–dual method. The converted linear programming problem provides an initial basic feasible solution, a simplex structure, and a homogeneous constraint matrix. In addition to the transformation, Hooker's method of projected direction is incorporated into Karmarkar's algorithm, and the modified algorithm is presented. The modified algorithm converges faster with a suitable choice of step size.
© 2010 Elsevier Inc. All rights reserved.
1. Introduction
Since its introduction in 1984, Karmarkar's projective scaling algorithm [6] has become the most popular interior point method for solving linear programming problems. For large LP problems it is significantly faster than any simplex method. The standard form of an LP problem is given as:
min c^T x
subject to Ax = b,
x ≥ 0.  (1.0.1)
The most important requirement for implementing Karmarkar's algorithm is that the LP problem must have the special structure:
min c^T x
subject to Ax = 0,
e^T x = 1, x ≥ 0,  (1.0.2)
where A is an m × n matrix of full row rank, e^T = (1, 1, ..., 1) ∈ R^n, and c, x ∈ R^n.
We present a new technique for transforming LP problem (1.0.1) into (1.0.2). We employ Hooker's method [3] of projected direction, and the step size for moving to the next iterate in the transformed space is discussed. With this modification to Karmarkar's main algorithm, a new algorithm is presented, which we name the Karmarkar–Hooker (KH) algorithm. It is important to note that the implementation technique plays a major role in the efficiency of any method. In implementing the KH algorithm, the matrix inversion is handled by the Preconditioned Conjugate Gradient (PCG) algorithm, since the PCG algorithm terminates in a finite number of steps.
This paper is divided into five sections. In Section 2 the technique for converting a general LP problem into the standard form is presented. In Section 3 the KH algorithm is discussed in detail. In Section 4 the implementation issues are discussed. Finally, in Section 5 some numerical examples and concluding remarks are presented.
2. Transformation technique
In this section we present the transformation of LP problem (1.0.1) to (1.0.2). It is well known that if an LP problem has a solution, then its combined primal–dual problem must also have a solution. We therefore combine the primal and dual LP problems.
Consider the primal program
(P): min c^T x
subject to Ax = b,
x ≥ 0.  (2.0.3)
The dual of the primal program can be presented as
(D): max b^T y
subject to A^T y + s = c,
s ≥ 0, y free.  (2.0.4)
The combined primal–dual LP problem can be written as:
Ax = b,
A^T y + s = c,
c^T x − b^T y = 0,
x ≥ 0, s ≥ 0, y free.
The optimality conditions of (P) and (D) may be stated as the primal–dual (PD) combined problem:
find z such that Qz = h,  (2.0.5)
where

    Q = | A    0     0 |        | b |        | x |
        | 0    A^T   I |,   h = | c |,   z = | y |.
        | c^T  -b^T  0 |        | 0 |        | s |
For stability, we introduce a new column, the negative of the second block column of Q, to form a new constraint matrix Q̄. Thus problem (2.0.5) is presented as
(PD): find Z
such that Q̄Z = h,
Z ≥ 0,  (2.0.6)
where

    Q̄ = | A    0     0     0 |        | b |        | x  |
         | 0    A^T  -A^T   I |,   h = | c |,   Z = | y′ |.
         | c^T  -b^T  b^T   0 |        | 0 |        | y″ |
                                                    | s  |
The matrix Q̄ involves only the original data of the problem and has no artificial variable, and y = y′ − y″. To apply the KH algorithm we always need three things, namely an interior point feasible solution, a homogeneous matrix, and a simplex structure. To address the first requirement we introduce an artificial variable Z_{k+1} to create an initial interior point solution, so that the PD problem (2.0.6) can be presented in standard form as:
(PD′): min Z_{k+1}
subject to Q̄Z + (h − Q̄e)Z_{k+1} − h = 0,
Z_i ≥ 0, for i = 1, 2, ..., k + 1.  (2.0.7)
Let a = (e, 1) be an interior point feasible solution of the LP problem (2.0.7), where e^T = (1, 1, ..., 1) ∈ R^k. Then, relabelling the variables, we rewrite the LP problem (2.0.7) as
(PD″): min x_{n+1}
subject to Q̄x + (h − Q̄e)x_{n+1} − h = 0,
x_i ≥ 0, for i = 1, 2, ..., n + 1.  (2.0.8)
To obtain the simplex structure, we define the positive orthant P⁺ = {x ∈ R^{n+1} : x ≥ 0} and
D = {x ∈ R^{n+2} : Σ_{i=1}^{n+2} x_i = 1}.
Here D denotes the simplex structure. Consider the transformation T : x ↦ y given by
y_i = x_i / (Σ_{j=1}^{n+1} x_j + 1),  i = 1, ..., n + 1,
y_{n+2} = 1 / (Σ_{j=1}^{n+1} x_j + 1).
Now
Σ_{i=1}^{n+2} y_i = Σ_{i=1}^{n+1} y_i + y_{n+2} = (Σ_{i=1}^{n+1} x_i) / (Σ_{j=1}^{n+1} x_j + 1) + 1 / (Σ_{j=1}^{n+1} x_j + 1) = 1,
which implies y ∈ D. We now address the second requirement, i.e. to find a homogeneous constraint matrix. It is easy to see that T is bijective, and hence T^{-1} : D → P⁺ exists. Thus
y_i = x_i / (Σ_{j=1}^{n+1} x_j + 1) = x_i y_{n+2},  i = 1, 2, ..., n + 1,
which yields
x_i = y_i / y_{n+2} = T^{-1}(y_i),  i = 1, ..., n + 1.
Thus the constraint of (2.0.8) reduces to
Q̄y + (h − Q̄e)y_{n+1} − h y_{n+2} = 0,
which is the required constraint in homogeneous form. Thus the homogeneous matrix can be written as [Q̄, h − Q̄e, −h]. Finally, we find the modified objective function of the homogeneous LP problem. The objective function of (2.0.8) can be written as:
z = x_{n+1} = y_{n+1} / y_{n+2},
i.e.
z′ = z y_{n+2} = y_{n+1}.
Thus the row corresponding to the objective function is [0_{1×n}, 1, 0] ∈ R^{1×(n+2)}. Since T(a) = e/(n+2) ∈ D, T(a) is the centre of the simplex. Thus the transformed homogeneous LP problem can be stated as:
min y_{n+1}
subject to Q̄y + (h − Q̄e)y_{n+1} − h y_{n+2} = 0,
e^T y = 1, y ≥ 0.  (2.0.9)
If we apply the KH algorithm to (2.0.9) and substitute x_i = y_i / y_{n+2}, we obtain the solution of (2.0.8), and thus the solution of LP (2.0.3).
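The construction above can be sketched in code. The following Python/NumPy sketch is ours, not the authors' LPKHAL code; the function name is an assumption. It builds the combined matrix Q̄ of (2.0.6) and the homogenised constraint matrix [Q̄, h − Q̄e, −h] of (2.0.9) from the data (A, b, c) of (2.0.3):

```python
import numpy as np

def karmarkar_form(A, b, c):
    """Build the homogenised system of (2.0.9) from the data (A, b, c) of (2.0.3).

    Returns H = [Qbar, h - Qbar e, -h] and the objective row, so that the
    transformed problem is min y_{n+1} s.t. H y = 0, e^T y = 1, y >= 0.
    """
    m, n = A.shape
    # Qbar stacks primal feasibility, dual feasibility (with y = y' - y''),
    # and the zero-duality-gap row; see (2.0.6).
    Qbar = np.block([
        [A,                 np.zeros((m, m)),  np.zeros((m, m)),  np.zeros((m, n))],
        [np.zeros((n, n)),  A.T,              -A.T,               np.eye(n)],
        [c.reshape(1, -1), -b.reshape(1, -1),  b.reshape(1, -1),  np.zeros((1, n))],
    ])
    h = np.concatenate([b, c, [0.0]])
    k = Qbar.shape[1]                 # number of primal-dual variables
    e = np.ones(k)
    art = h - Qbar @ e                # artificial column: makes (e, 1) feasible
    H = np.column_stack([Qbar, art, -h])
    obj = np.zeros(k + 2)
    obj[k] = 1.0                      # minimise the artificial variable
    return H, obj
```

By construction, the vector (e, 1, 1) lies in the null space of H, which is exactly the interior starting point the transformation is designed to provide.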
3. Karmarkar–Hooker’s method
3.1. Karmarkar’s method
As discussed earlier, Karmarkar's algorithm requires a standard LP given in (1.0.2). A feasible solution vector x of (1.0.2) is an interior point solution if x_i > 0, i = 1, 2, ..., n, and e^T x = 1 represents a polytope,
D = {x ∈ R^n : Σ_{i=1}^n x_i = 1, x_i ≥ 0, i = 1, ..., n}.
Suppose D₀ = {x : Ax = 0}. The assumptions for the algorithm are as follows:

1. Ae = 0, so that x⁰ = e/n = (1/n, ..., 1/n)^T ∈ R^n is an initial interior point solution.
2. c^T x ≥ 0 for all x ∈ D ∩ D₀.

By applying the KH algorithm we find an x* such that either x* ∈ D ∩ D₀ with c^T x* = 0, or
min{c^T x : x ∈ D ∩ D₀} > 0.
3.2. Projective transformation on the simplex
Let x^k be an interior point of D, so that
Σ_{i=1}^n x_i^k = 1,  x_i^k > 0, i = 1, ..., n.
We define the diagonal matrix
D_k = diag(x_1^k, x_2^k, ..., x_n^k).
The projective transformation T′ from D to D is given by
y = T′(x) = D_k^{-1} x / (e^T D_k^{-1} x),  (3.2.1)
where x = (x_1, ..., x_n)^T. Since
D_k^{-1} x = (x_1/x_1^k, ..., x_n/x_n^k)^T,  (3.2.2)
and
e^T D_k^{-1} x = Σ_{j=1}^n x_j / x_j^k,
then
Σ_{i=1}^n y_i = (Σ_{i=1}^n x_i/x_i^k) / (Σ_{j=1}^n x_j/x_j^k) = 1, which implies e^T y = 1.
Since the transformation T′ is well defined and bijective, T′^{-1} : D → D exists. From (3.2.1) we obtain x = (e^T D_k^{-1} x)(D_k y), and using e^T x = 1,
x = D_k y / (e^T D_k y) = T′^{-1}(y).
Then LP problem (1.0.2) can be rewritten in the form
min c^T D_k y / (e^T D_k y)
subject to A D_k y = 0,
e^T y = 1, y ≥ 0,  (3.2.3)
which is a linear fractional programming problem, as the objective function is the ratio of two linear functions. We have
D_k^{-1} x^k = (x_1^k/x_1^k, ..., x_n^k/x_n^k)^T = (1, ..., 1)^T = e ∈ R^n,
and hence
e^T D_k^{-1} x^k = Σ_{j=1}^n x_j^k / x_j^k = n.
So y^k = T′(x^k) = e/n, which is the centre of the simplex D.
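In code, T′ and its inverse reduce to a componentwise scaling followed by a normalisation. A minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def projective_transform(xk, x):
    """T'(x) of (3.2.1): map x onto the unit simplex so that x^k goes to e/n."""
    w = x / xk                 # D_k^{-1} x, computed componentwise
    return w / w.sum()         # divide by e^T D_k^{-1} x

def inverse_transform(xk, y):
    """T'^{-1}(y): recover x with e^T x = 1 from a simplex point y."""
    w = xk * y                 # D_k y
    return w / w.sum()         # divide by e^T D_k y
```

As the derivation shows, `projective_transform(xk, xk)` returns the simplex centre e/n, and the two maps invert each other on the simplex.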
3.3. Projected direction C_P
Rosen's method [7,8] finds a usable feasible direction with the help of the projection operator that projects R^n onto the linear subspace parallel to the intersection of the hyperplanes corresponding to the active constraints at the current feasible point.
To find the direction we therefore follow Hooker's method, which projects D_k c orthogonally onto the hyperplanes defined by A D_k y = 0 and e^T y = 0. To determine the direction of steepest descent in the feasible polytope of the linear fractional program (3.2.3), we must find the orthogonal projection C_P of D_k c onto the subspace
S = {y : A D_k y = 0 and e^T y = 0}.
Then we define a vector d that solves the least squares problem
min_d ‖D_k c − d‖² = (D_k c − d)^T (D_k c − d)  (3.3.1a)
subject to
A D_k d = 0,  (3.3.1b)
e^T d = 0.  (3.3.1c)
We solve the above problem by omitting the constraint (3.3.1c) and defining the Lagrangian
L = (D_k c − d)^T (D_k c − d) + 2(u^k)^T (A D_k d),  (3.3.2)
where u^k is a vector of Lagrange multipliers. Now from (3.3.2),
∇_d L = 0
yields −2(D_k c − d) + 2 D_k A^T u^k = 0,
which implies d = D_k c − D_k A^T u^k.  (3.3.3)
Multiplying both sides of Eq. (3.3.3) by A D_k and using (3.3.1b), we get
A D_k² c = A D_k² A^T u^k,
which implies u^k = (A D_k² A^T)^{-1} A D_k² c.  (3.3.4)
Since C_P is the orthogonal projection of D_k c onto S, projecting d orthogonally onto the null space of e^T gives
C_P = P d = P(D_k c − D_k A^T u^k),
where
P = I − e(e^T e)^{-1} e^T = I − ee^T/n.
Thus the projected direction is
C_P = (I − ee^T/n)(D_k c − D_k A^T u^k),  (3.3.5)
where
u^k = (A D_k² A^T)^{-1}(A D_k² c) = M^{-1}(A D_k² c) and M = A D_k² A^T.
When C_P is computed on a digital computer, considerable time is needed to invert the matrix A D_k² A^T at each iteration. This time-consuming task is equivalent to solving a system of linear equations Mu = v. We use the Preconditioned Conjugate Gradient (PCG) algorithm to solve this system, which is discussed in Section 4. Since the PCG algorithm is applicable only to a symmetric positive definite matrix M, we verify that M is symmetric positive definite.
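Putting (3.3.3)–(3.3.5) together, the projected direction can be computed in a few lines. The following sketch is ours; for clarity it uses a dense direct solve for Mu = v rather than the PCG iteration of Section 4:

```python
import numpy as np

def projected_direction(A, c, xk):
    """C_P of (3.3.5): project D_k c onto {d : A D_k d = 0}, then onto
    the null space of e^T via P = I - e e^T / n."""
    n = len(xk)
    Dk = np.diag(xk)
    ADk = A @ Dk
    M = ADk @ ADk.T                            # M = A D_k^2 A^T
    uk = np.linalg.solve(M, ADk @ (Dk @ c))    # (3.3.4), by a direct solve here
    d = Dk @ c - Dk @ (A.T @ uk)               # (3.3.3)
    return d - d.mean() * np.ones(n)           # P d, since P v = v - mean(v) e
```

Note that applying P amounts to subtracting the mean, so e^T C_P = 0 holds by construction.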
Theorem 3.1. M = A D_k² A^T is a symmetric positive definite matrix.
Proof. Write A = (a_{ij})_{m×n}. Then M = A D_k² A^T has (p, q) entry
α_{pq} = Σ_{i=1}^n a_{pi} a_{qi} (x_i^k)²,  p, q = 1, ..., m,
with diagonal entries α_{pp} = Σ_{i=1}^n (a_{pi} x_i^k)². Since α_{pq} = α_{qp}, M is symmetric. Let M₁ be the row reduced echelon form of M. It can be verified that M₁ has no zero row; thus the row rank of M equals m (full rank), and M^{-1} exists. Moreover, since x_i^k > 0 and A has full row rank, for any nonzero w ∈ R^m we have w^T M w = ‖D_k A^T w‖² > 0.
Hence M is positive definite, which completes the proof of the theorem. □
Since C_P is a feasible direction of movement from y^k, the next point y^{k+1} = y^k − α_k C_P remains feasible for the LP program (3.2.3) for a suitable step size α_k.
3.4. Choice of step size
We are now in a position to move, in the transformed solution space, from the current interior solution y^k = e/n along the direction −C_P of steepest descent, but not so far as to leave the feasible region, i.e.
y^{k+1} = e/n − α_k C_P > 0.
If C_P ≤ 0, then α_k can be any positive number, provided y^{k+1} does not leave the permissible region, i.e. y^{k+1} > 0. If C_{P_i} > 0 for some i, where C_{P_i} is the ith component of C_P, then α_k (> 0) must satisfy
1/n > α_k C_{P_i}, which yields α_k < 1/(n C_{P_i}).
Therefore, we can choose 0 < α < 1 such that
α_k = min_i {α/(n C_{P_i}) : C_{P_i} > 0} = α / (n max_i {C_{P_i} : C_{P_i} > 0}).
Thus α_k is an appropriate step length for which y^{k+1} > 0. Since an overly large α does more harm than good, it is better in practice to choose α so that the current solution moves toward, but not onto, the nearest positive wall to form a new interior solution in the transformed space. Combining Karmarkar's and Hooker's algorithms, we propose the following iterative procedure for solving the LP problem.
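The step-size rule above can be written as a small helper function (ours; the default α = 0.25 is an arbitrary illustrative choice):

```python
import numpy as np

def step_size(Cp, n, alpha=0.25):
    """alpha_k of Section 3.4: a fraction alpha in (0, 1) of the largest
    step that keeps y^{k+1} = e/n - alpha_k * C_P strictly positive."""
    pos = Cp[Cp > 0]
    if pos.size == 0:            # C_P <= 0: any positive step stays feasible
        return alpha / n
    return alpha / (n * pos.max())
```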
KH-ALGORITHM:

Step 1: Initialization: Set k = 0 and let x⁰ = e/n be the initial solution.
Step 2: Computation of required matrices: D_k = diag(x_1^k, x_2^k, ..., x_n^k), A D_k, D_k c, A D_k² A^T, A D_k² c, and I − ee^T/n.
Step 3: Direction of steepest descent: Compute C_P = (I − ee^T/n)(D_k c − D_k A^T u^k).
Step 4: Computation of step size: α_k = α / (n max_i {C_{P_i} : C_{P_i} > 0}), where α ∈ (0, 1).
Step 5: Moving to a new point in the transformed space: y^{k+1} = e/n − α_k C_P.
Step 6: Writing the next solution: x^{k+1} = D_k y^{k+1} / (e^T D_k y^{k+1}).
Step 7: Optimality condition: If c^T x^{k+1} is not close to zero, set k := k + 1 and go to Step 2; otherwise, STOP.
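Steps 1–7 can be assembled into a compact iteration. The following NumPy sketch is ours, not the authors' LPKHAL implementation; it solves a problem already in Karmarkar form (1.0.2) with Ae = 0, and uses a dense solve in place of PCG for brevity:

```python
import numpy as np

def kh_solve(A, c, alpha=0.25, tol=1e-6, max_iter=500):
    """KH iteration for the Karmarkar-form LP (1.0.2):
    min c^T x  s.t.  Ax = 0, e^T x = 1, x >= 0, assuming Ae = 0."""
    n = A.shape[1]
    e = np.ones(n)
    x = e / n                                      # Step 1: simplex centre
    for _ in range(max_iter):
        Dk = np.diag(x)                            # Step 2
        ADk = A @ Dk
        M = ADk @ ADk.T                            # A D_k^2 A^T
        uk = np.linalg.solve(M, ADk @ (Dk @ c))
        d = Dk @ c - Dk @ (A.T @ uk)
        Cp = d - d.mean() * e                      # Step 3: projected direction
        pos = Cp[Cp > 0]
        ak = alpha / (n * pos.max()) if pos.size else alpha / n   # Step 4
        y = e / n - ak * Cp                        # Step 5
        x = Dk @ y / (e @ (Dk @ y))                # Step 6: back to x-space
        if c @ x < tol:                            # Step 7
            break
    return x
```

On the toy problem min x1 subject to x1 + x2 − 2x3 = 0, e^T x = 1, x ≥ 0 (optimal value 0), the iterates stay feasible and drive c^T x toward zero, as the theory predicts.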
3.5. Complexity of the algorithm
Let L be the number of bits required to represent the problem data in a computer; the number L is called the size of the problem. Karmarkar's method solves any LP problem in O(nL) iterations at a cost of O(n^{2.5}) arithmetic operations per iteration, where n is the number of variables. It is to be noted that the KH algorithm thus has an overall complexity of O(n^{3.5}L), and O(nL) iterations are enough to achieve it.
4. Implementation issues
The main issue in implementing the KH algorithm is that the inverse of a matrix M is required in each iteration. So the Preconditioned Conjugate Gradient (PCG) method is used to solve a system of linear equations of the form Mu = v, where M is a symmetric positive definite matrix, u is an unknown vector, and v is a known vector. Although the steepest descent method can also be used, in practice the PCG algorithm usually converges faster. Preconditioning is a technique for improving the condition number of the matrix M: let C be a symmetric positive definite matrix that approximates M but is easier to invert. Then solving the system Mu = v is equivalent to solving the system
C^{-1} M u = C^{-1} v.  (4.0.1)
If κ(C^{-1}M) ≪ κ(M), i.e. if the eigenvalues of C^{-1}M are better clustered than those of M, we can solve system (4.0.1) iteratively much faster than the original system.
The storage requirements for the PCG algorithm are quite low, amounting to a few vectors of length m. Since we have to solve the system Mu = v where M = A D² A^T, we may compute (A D² A^T)u as A(D²(A^T u)) and thus need only store the nonzero elements of A and the diagonal of D, rather than the matrix A D² A^T, which can be quite dense. The preconditioner should also be chosen to conserve storage. Since the accuracy requirements for the search direction in the beginning phase of the interior point algorithm are quite low, only a few conjugate gradient iterations are required.
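The matrix-free evaluation A(D²(Aᵀu)) described above can be expressed directly (a sketch, with d2 holding the diagonal of D²):

```python
import numpy as np

def apply_M(A, d2, u):
    """Compute (A D^2 A^T) u as A (D^2 (A^T u)), without ever forming the
    possibly dense m x m product; only A and the diagonal d2 need be stored."""
    return A @ (d2 * (A.T @ u))
```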
The crucial issue in the PCG algorithm is to find a preconditioner for each step of the interior point method. A good preconditioner may dramatically accelerate the convergence rate and yield substantial computational savings. Several preconditioners have been suggested by various researchers; we cite some of them. An easy choice is the Jacobi preconditioner [5], which consists in taking the diagonal of M for the matrix C, i.e.
C_{ij} = M_{ii} if i = j, and C_{ij} = 0 otherwise.
This is equivalent to scaling the quadratic form along the coordinate axes. (By comparison, the perfect preconditioner C = M scales the quadratic form along its eigenvector axes.) A diagonal matrix is trivial to invert, but it is often only a mediocre preconditioner; in one test the condition number improved from 3.5 to roughly 2.8 [9]. Of course, this improvement is much more beneficial for systems where m ≫ 2. The advantages of such a preconditioner are its ease of implementation and the low memory it needs. But other preconditioners can make the resolution of the linear system faster; this is the case for the SSOR preconditioner.
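A Jacobi-preconditioned CG iteration is short enough to sketch in full (ours; a standard PCG loop, not code from the paper):

```python
import numpy as np

def pcg_jacobi(M, v, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for M u = v with the Jacobi
    preconditioner C = diag(M); M must be symmetric positive definite."""
    c_inv = 1.0 / np.diag(M)        # applying C^{-1} is a componentwise scale
    u = np.zeros_like(v)
    r = v - M @ u                   # residual
    z = c_inv * r                   # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Mp = M @ p
        a = rz / (p @ Mp)           # step length along search direction p
        u += a * p
        r -= a * Mp
        if np.linalg.norm(r) < tol:
            break
        z = c_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # new conjugate search direction
        rz = rz_new
    return u
```

In exact arithmetic the loop terminates in at most m steps, which is the finite-termination property relied on in Section 1.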
SSOR (symmetric successive over-relaxation) preconditioner: The matrix M may be decomposed as
M = L + D + L^T,
where L is the strictly lower triangular part of M and D is the diagonal of M. The SSOR preconditioner consists in taking
C = (D/p + L) (p/(2 − p)) D^{-1} (D/p + L^T),
where p is a relaxation parameter. A necessary and sufficient condition for the preconditioned gradient method to converge is that the parameter p lie in the interval (0, 2). This technique is suggested by Soualem [11].
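The SSOR preconditioner can be assembled as follows (a dense sketch for illustration; in practice C is applied through triangular solves rather than formed explicitly):

```python
import numpy as np

def ssor_preconditioner(M, p=1.0):
    """SSOR preconditioner C = (D/p + L) (p/(2-p)) D^{-1} (D/p + L^T),
    with M = L + D + L^T, L strictly lower triangular; requires 0 < p < 2."""
    assert 0.0 < p < 2.0
    D = np.diag(np.diag(M))
    L = np.tril(M, k=-1)
    DL = D / p + L
    return (p / (2.0 - p)) * DL @ np.linalg.inv(D) @ DL.T
```

By construction C is symmetric, and it is positive definite whenever the diagonal of M is positive.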
We now present some other preconditioners that one can use in place of the Jacobi and SSOR preconditioners. A good preconditioner is the incomplete Cholesky factorization, originally proposed by Varga [13], which can be used when the density of the full Cholesky factors would be too great. Although many software packages use this preconditioner, it is sometimes unstable. Another preconditioner was suggested by Vaidya [12], and it was later shown by Chen and Toledo [2] that Vaidya's preconditioner has remarkable performance in practice. Benzi [1] has given a survey of preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods for general sparse matrices. There are also several codes available for solving the linear system. In Matlab, u = pcg(M, v, tol, maxit, C) solves C^{-1}Mu = C^{-1}v, where C is chosen so that the preconditioned matrix is well conditioned or has clustered eigenvalues. Since our objective is to solve the LP problem via an interior point method, one can use any of the preconditioners discussed above, and one can use the Matlab code to solve the linear system Mu = v for finding the step direction C_P in each iteration of the interior point algorithm. For any other code, one may choose any of the preconditioners and use the PCG algorithm stated above.
5. Numerical experiment and concluding remarks
We have tested many LP test problems with the software LPKHAL, a rudimentary code developed by us. With this code we aim to study the applicability of the KH algorithm and the efficiency of the new conversion technique to standard form discussed in Section 2. The KH algorithm requires a special type of LP problem, but the present numerical experiments include all types of LP problems, even those where b has negative components. The code can also identify infeasible and unbounded LP problems. We now present some test examples for the numerical experiment. The solutions obtained at different iterations of the KH algorithm are presented below.
Example 1
min 2x1 + 3.5x2 + 8x3 + 1.5x4 + 11x5 + x6
subject to
4x1 + 8x2 + 7x3 + 1.3x4 + 8x5 + 9.2x6 ≤ 10,
x1 + 5x2 + 9x3 + 0.1x4 + 7x5 + x6 ≥ 8,
15x1 + 11.7x2 + 0.4x3 + 22.6x4 + 17x6 ≥ 10,
90x1 + 120x2 + 106x3 + 97x4 + 130x5 + 180x6 ≥ 150,
x1, ..., x6 ≥ 0.
Solution:

Iter. no. | x1     | x2     | x3     | x4     | x5     | x6     | z = c^T x
1         | 0.7477 | 0.8129 | 1.2146 | 0.9079 | 0.5838 | 0.9079 | 22.74956
2         | 0.5558 | 0.7165 | 1.2140 | 0.7362 | 0.2947 | 0.7447 | 18.42149
3         | 0.3844 | 0.6855 | 1.0344 | 0.5345 | 0.1352 | 0.5294 | 14.26137
13        | 0.0076 | 0.8923 | 0.3932 | 0.0093 | 0.0022 | 0.0064 | 6.32861
14        | 0.0081 | 0.8871 | 0.3948 | 0.0095 | 0.0024 | 0.0067 | 6.32745
15        | 0.0042 | 0.9006 | 0.3878 | 0.0065 | 0.0013 | 0.0039 | 6.242904

It is to be noted that the optimal solution is achieved at iteration no. 15 with minimum objective value 6.242904. The optimal value is 6.2432 by LINGO-8.0 [10] and 54.9846 by LPKART-2.9 [4].
Example 2
max 8x1 + 3x2 + 8x3 + 6x4
subject to
4x1 + 3x2 + x3 + 3x4 ≤ 16,
4x1 − x2 + x3 ≤ 12,
x1 + 2x2 ≤ 8,
3x1 + x2 ≤ 10,
2x3 + 3x4 ≤ 9,
4x3 + x4 ≤ 12,
x1, ..., x4 ≥ 0.
Solution:

Iter. no. | x1     | x2     | x3     | x4     | c^T x
1         | 1.2577 | 1.0465 | 1.3883 | 0.8659 | 29.50307
2         | 1.5149 | 1.0090 | 1.7768 | 0.7701 | 33.98089
3         | 1.8195 | 0.8690 | 2.1898 | 0.7164 | 38.98001
4         | 2.1119 | 0.6405 | 2.5061 | 0.7406 | 43.30983
10        | 2.3457 | 0.1117 | 2.6967 | 1.1881 | 47.80366
12        | 2.3478 | 0.1053 | 2.6974 | 1.1933 | 47.83691
13        | 2.3481 | 0.1051 | 2.6978 | 1.1934 | 47.84245

It is to be noted that the optimal solution is achieved at iteration no. 13 with maximum objective value 47.84245. The optimal value is 47.9 by LINGO-8.0 [10] and 50.772316 by LPKART-2.9 [4].
Example 3 (Degenerate LP problem).
max 3x1 + x2 + 0x3
subject to
x1 + 2x2 + 0x3 ≤ 5,
x1 + x2 − x3 ≥ 2,
7x1 + 3x2 − 5x3 ≤ 20,
x1, ..., x3 ≥ 0.
Solution:

Iter. no. | x1     | x2     | x3     | c^T x
1         | 1.5259 | 0.9154 | 1.0913 | 5.49314
2         | 2.2806 | 0.7315 | 1.4357 | 7.57336
3         | 3.2693 | 0.4685 | 2.0531 | 10.27634
4         | 4.5984 | 0.1111 | 2.7939 | 13.90646
11        | 4.9907 | 0.0026 | 2.9938 | 14.97473
12        | 4.9913 | 0.0026 | 2.9931 | 14.97638
13        | 4.9917 | 0.0025 | 2.9931 | 14.97766

It is to be noted that the optimal solution is achieved at iteration no. 13 with maximum objective value 14.97766. The optimal solution is degenerate, and there exists an alternate solution that is nonbasic. The degenerate solution value is 15 by LINGO-8.0 [10]. Furthermore, it is to be noted that both experimental codes, LPKHAL and LPKART-2.9, are based on Karmarkar's method.
6. Conclusion
We have proposed a new algorithm for solving LP problems which combines ideas from the established Karmarkar's algorithm and Hooker's method of projected direction. The present algorithm converges faster with the new modification and a suitable choice of step size. We have also shown the effectiveness of the technique for transforming any LP problem into the form required by the KH algorithm.
From the executed computational experiments we verify that the proposed KH algorithm performs better than the classic algorithms. It also provides solutions to degenerate LP problems. However, unlike the simplex algorithm, it cannot find alternate optimal solutions of LP problems. We observe that the convergence depends on the choice of α discussed in Section 3.4. Moreover, we observe that, within the adopted tolerance, the solutions obtained for some of the computational experiments are identical to the solutions furnished by the libraries.
Acknowledgement
The authors are grateful to the anonymous referee for the useful and constructive comments that helped revise the paper.
References
[1] M. Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics 182 (2002) 418–477.
[2] D. Chen, S. Toledo, Vaidya's preconditioners: implementation and experimental study, Electronic Transactions on Numerical Analysis 16 (2003) 30–49.
[3] J.N. Hooker, Karmarkar's linear programming algorithm, Interfaces 16 (1986) 75–90.
[4] J. Heui, Lee Shi-Woo, Shim Hyo-Sun, Kim Kwang-Suk, Kim Ju-Mi, N.G.-Hwan, directed by Prof. Park Soondal, A C code solver of LP problems using Karmarkar's method, 1999.
[5] C.G.J. Jacobi, Über eine neue Auflösungsart der bei der Methode der kleinsten Quadrate vorkommenden linearen Gleichungen, Astronomische Nachrichten 22 (1845) 297.
[6] N. Karmarkar, A new polynomial time algorithm for linear programming, Combinatorica 4 (1984) 373–395.
[7] J.B. Rosen, The gradient projection method for nonlinear programming, Part I: Linear constraints, Journal of the Society for Industrial and Applied Mathematics 8 (1960) 181–217.
[8] J.B. Rosen, The gradient projection method for nonlinear programming, Part II: Nonlinear constraints, Journal of the Society for Industrial and Applied Mathematics 9 (1961) 514–532.
[9] J.R. Shewchuk, An introduction to the conjugate gradient method without the agonizing pain, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1994.
[10] L. Schrage, LINGO Release 8.0, LINDO Systems, Inc., 2003.
[11] N. Soualem, Preconditioned conjugate gradient method, http://www.math-linux.com, 2006.
[12] P.M. Vaidya, Solving linear equations with symmetric diagonally dominant matrices by constructing good preconditioners, unpublished manuscript; a talk based on the manuscript was presented at the IMA Workshop on Graph Theory and Sparse Matrix Computation, 1991.
[13] R.S. Varga, Factorizations and normalized iterative methods, in: R.E. Langer (Ed.), Boundary Problems in Differential Equations, Univ. of Wisconsin Press, Madison, 1960, p. 121.