-
Solving Nonlinear Equations
Carlos Hurtado
Department of Economics, University of Illinois at Urbana-Champaign
Nov 2nd, 2017
C. Hurtado (UIUC - Economics) Numerical Methods
-
On the Agenda
1 System of Nonlinear Equations
2 Nonlinear Solvers
3 In Practice
-
System of Nonlinear Equations
System of Nonlinear Equations
• A function f : R^n → R is defined as being nonlinear when it does not satisfy the superposition principle:

  f(x1 + · · · + xn) ≠ f(x1) + · · · + f(xn)

• Now that we know what the term nonlinear refers to, we can define a system of nonlinear equations.
• A system of nonlinear equations is a set of equations of the following form:

  f1(x1, x2, · · · , xn) = 0,
  f2(x1, x2, · · · , xn) = 0,
  ...
  fn(x1, x2, · · · , xn) = 0,

  where (x1, x2, · · · , xn) ∈ R^n and each fi is a nonlinear function, for i = 1, 2, · · · , n.
-
System of Nonlinear Equations
System of Nonlinear Equations
• Example of a nonlinear system:

  3x1 − cos(x2x3) − 1/2 = 0
  x1^2 − 81(x2 + 0.1)^2 + sin x3 + 1 = 0
  e^(−x1x2) + 20x3 + 10π/3 = 0

• The terms root or solution are frequently used to describe the final result of solving the system of equations.
• The root or solution is a point a = (a1, a2, · · · , an) ∈ R^n such that f1(a) = f2(a) = · · · = fn(a) = 0.
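The example system can be transcribed directly into Python as a residual function (a sketch; the fractions in the first and third equations are read as 1/2 and 10π/3 from the slide layout):

```python
import numpy as np

def f(x):
    """Residual of the example system; a root makes all three components zero."""
    x1, x2, x3 = x
    return np.array([
        3 * x1 - np.cos(x2 * x3) - 0.5,
        x1**2 - 81 * (x2 + 0.1)**2 + np.sin(x3) + 1,
        np.exp(-x1 * x2) + 20 * x3 + 10 * np.pi / 3,
    ])

# Evaluating at an arbitrary point gives a nonzero residual (not a root):
print(f(np.array([0.1, 0.1, -0.1])))
```

Writing the system as a single vector-valued function like this is also the form that numerical solvers expect later in the lecture.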
-
System of Nonlinear Equations
Fundamentals
• Simplest problem: root finding in one dimension:

  f(x) = 0 with x ∈ [a, b]

• More generally, solving a square system of nonlinear equations:

  f(x) = 0, that is, fi(x1, x2, · · · , xn) = 0 for i = 1, · · · , n

• In general there is no closed-form answer, so we need iterative methods.
• Iterative methods start from an initial guess x0 and apply an update function φ:

  xk+1 = φ(xk)
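The iteration xk+1 = φ(xk) can be sketched generically as follows (the function name `iterate` and the stopping rule are illustrative, not from the slides):

```python
def iterate(phi, x0, tol=1e-12, max_iter=100):
    """Apply x_{k+1} = phi(x_k) until successive iterates stop changing."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: phi(x) = (x + 2/x)/2 is the Newton update for f(x) = x^2 - 2,
# so the iteration converges to sqrt(2).
root = iterate(lambda x: 0.5 * (x + 2 / x), 1.0)
print(root)
```

Different choices of φ give different methods (Newton, secant, fixed-point schemes); the outer loop stays the same.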
-
System of Nonlinear Equations
Fundamentals
• A sequence of iterates xk that converges to α has order of convergence p > 0 if

  lim_{k→∞} |xk+1 − α| / |xk − α|^p = C,

  where the constant C is the convergence factor (with 0 < C < 1 required when p = 1).
• A method should at least converge linearly, that is, the error should be reduced by at least a constant factor every iteration; for example, the number of accurate digits increases by 1 every iteration.
• A good iterative method converges quadratically, that is, the number of accurate digits doubles every iteration!
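The digit-doubling claim can be checked numerically. Here Newton's method for f(x) = x^2 − 2 (update x ← (x + 2/x)/2, chosen as an illustration) shows p = 2: the ratios |e_{k+1}| / |e_k|^2 settle to a constant C.

```python
import math

alpha = math.sqrt(2)
x = 1.0
errors = []
for _ in range(5):
    x = 0.5 * (x + 2 / x)      # Newton step for f(x) = x^2 - 2
    errors.append(abs(x - alpha))

# For quadratic convergence, |e_{k+1}| / |e_k|^2 approaches a constant.
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(3)]
print(errors)   # errors shrink roughly as e -> C * e^2 each step
print(ratios)
```

Printing the errors shows the number of correct digits roughly doubling per iteration, exactly as the slide states.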
-
Nonlinear Solvers
Nonlinear Solvers
• A good initial guess is extremely important in nonlinear solvers!
• Assume we are looking for a unique root a ≤ α ≤ b starting with an initial guess a ≤ x0 ≤ b.
• A method has local convergence if it converges to a given root α for any initial guess that is sufficiently close to α (in the neighborhood of the root).
• A method has global convergence if it converges to the root for any initial guess.
• General rule: globally convergent methods are slower but safer.
• It is best to use a global method first to find a good initial guess close to α, and then switch to a faster local method.
-
Nonlinear Solvers
Nonlinear Solvers
• Let us start with f : R → R and an interval [a, b] such that f(a) and f(b) have opposite signs. (How do we find such an interval?)
• We want to solve f(x) = 0 (find the roots).
• We know how to solve this using Newton's method (at this point you should!).
• Consider the midpoint m = (a + b)/2.
• Check the subintervals [a, m] and [m, b] to see which one has endpoints with opposite signs.
• Repeat the algorithm on the subinterval whose endpoints have opposite signs.
• This is called the bisection method: global convergence, but slow.
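The steps above amount to a few lines of code (a minimal sketch; the function name `bisect` and the tolerance are illustrative):

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection method: requires f(a) and f(b) to have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:        # the root lies in [a, m]
            b, fb = m, fm
        else:                   # the root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)
print(root)
```

Each iteration halves the bracket, so the error shrinks by a factor of 2 per step: guaranteed (global on the bracket) but only linear convergence.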
-
Nonlinear Solvers
Nonlinear Solvers
• In practice, a robust but fast root-finding algorithm combines bisection with Newton's method.
• Specifically, a method like Newton's, which can easily take huge steps in the wrong direction and end up far from the current point, must be safeguarded by a method that ensures we do not leave the search interval and that the zero is not missed.
• Once xk is close to α, the safeguard is no longer triggered and quadratic or faster convergence is achieved.
• Newton's method requires first-order derivatives, so methods that require only function evaluations (or approximations of the derivatives) are often preferred.
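One way to realize this safeguarding is to accept a Newton step only when it stays inside the current bracket, and fall back to bisection otherwise (a sketch; the helper `safeguarded_newton` is hypothetical, not from the slides):

```python
def safeguarded_newton(f, df, a, b, tol=1e-12, max_iter=100):
    """Newton's method safeguarded by bisection on the bracket [a, b].
    Assumes f(a) and f(b) have opposite signs."""
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Shrink the bracket so it still contains the root.
        if f(a) * fx < 0:
            b = x
        else:
            a = x
        dfx = df(x)
        x_new = x - fx / dfx if dfx != 0 else float("inf")
        if not (a < x_new < b):      # safeguard: reject the Newton step
            x_new = 0.5 * (a + b)    # and bisect instead
        x = x_new
    return x

root = safeguarded_newton(lambda x: x**2 - 2, lambda x: 2 * x, 0.0, 2.0)
print(root)
```

Production solvers use the same idea; for example, SciPy's `brentq` implements Brent's method, which safeguards fast secant/inverse-quadratic steps with bisection and needs only a sign-changing bracket (no derivatives).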
-
Nonlinear Solvers
Nonlinear Solvers
• What happens if we want to solve a square system of nonlinear equations?

  f(x) = 0, that is, fi(x1, x2, · · · , xn) = 0 for i = 1, · · · , n

• We can use Newton's method with f : R^n → R^n.
• This requires solving a linear system at every iteration, which can become expensive when there are many variables.
• It also requires computing a whole matrix of derivatives (the Jacobian), which can be expensive or hard to do (differentiation by hand?).
• Newton's method converges fast if the Jacobian is well-conditioned; otherwise it can "blow up".
• Quasi-Newton methods are normally used to approximate the Jacobian.
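Multivariate Newton iterates by solving the linear system J(xk) Δx = −f(xk) at each step. A minimal sketch (the helper `newton_system` and the 2-variable example are illustrative, not from the slides):

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton's method for f : R^n -> R^n with Jacobian jac."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(jac(x), -fx)   # one linear solve per iteration
        x = x + dx
    return x

# Small example: x^2 + y^2 = 1 and x = y, with root (1/sqrt(2), 1/sqrt(2)).
f = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton_system(f, jac, [1.0, 0.0]))
```

Quasi-Newton variants replace `jac` with a cheap approximation that is updated from function values; in SciPy this is available, for example, via `scipy.optimize.root(method='broyden1')`.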
-
In Practice
In Practice
• It is much harder to construct general robust solvers in higher dimensions, and some problem-specific knowledge is required.
• In Python we can use the fsolve function from scipy.optimize.
• In many practical situations there is some continuity in the problem, so a previous solution can be used as an initial guess.
• For large problems, specialized sparse-matrix solvers need to be used.
• We can think of this as an optimization problem! (How?)
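As an illustration, fsolve applied to the example system from the beginning of the lecture (a sketch, assuming SciPy is installed; the initial guess is chosen arbitrarily, and the constant in the third equation is read as 10π/3 from the slides):

```python
import numpy as np
from scipy.optimize import fsolve

def f(x):
    x1, x2, x3 = x
    return [
        3 * x1 - np.cos(x2 * x3) - 0.5,
        x1**2 - 81 * (x2 + 0.1)**2 + np.sin(x3) + 1,
        np.exp(-x1 * x2) + 20 * x3 + 10 * np.pi / 3,
    ]

sol = fsolve(f, x0=[0.1, 0.1, -0.1])
print(sol)       # a root of the system
print(f(sol))    # residuals at the solution should be near zero
```

fsolve wraps MINPACK's hybrid Powell method, which combines Newton-like steps with trust-region safeguards, exactly the global-plus-local strategy discussed earlier.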