Efficient Root-Mean Calculation

BY BRIAN NEUNABER
Embedded Systems Design, February 2006

With Newton's help and a few improvements, real-time digital systems can find root-means efficiently.



Real-time digital systems often require the calculation of a root-mean, such as a root-mean square (RMS) level or average magnitude of a complex signal. While averaging can be efficiently implemented by most microprocessors, the square root may not be, especially with low-cost hardware. If the processor doesn't implement a fast square root function, it must be implemented in software; although this yields accurate results, it may not be efficient.

One common method for computing the square root is Newton's method, which iteratively converges on a solution using an initial estimate. Since we're computing the square root of a slowly varying average value, the previous root-mean value makes a good estimate. Furthermore, we can combine the iterative Newton's method with a first-order recursive averager, resulting in a super-efficient method for computing the root-mean of a signal.

In this article, I'll develop and present three efficient recursive algorithms for computing the root-mean, illustrating each method with signal flow diagrams and example code. To some degree, each of these methods trades hardware complexity for error. I'll compare the computational performance and error of each method and suggest suitable hardware for each implementation.

ROOT-MEAN

The root-mean is computed as the square root of the average over time of its input. This average may be recursive or non-recursive, and I'll briefly review the general case for both.

Non-recursive average

The non-recursive average, or moving average, is the weighted sum of N inputs: the current input and N-1 previous inputs. In digital filtering terminology, this is called a finite impulse response, or FIR filter:



$y(n) = \sum_{i=0}^{N-1} a_i\, x(n-i)$   (1)

The most common use of the moving average typically sets the weights such that $a_i = 1/N$. If we were to plot these weights versus time, we would see the "window" of the input signal that is averaged at a given point in time. This 1/N window is called a rectangular window because its shape is an N-by-1/N rectangle.

There is a trick for computing the 1/N average so that all N samples need not be weighted and summed with each output calculation. Since the weights don't change, you can simply add the newest weighted input and subtract the Nth weighted input from the previous sum:

$y(n) = y(n-1) + \frac{1}{N}\left[x(n) - x(n-N)\right]$   (2)

While this technique is computationally efficient, it requires storage and circular-buffer management of N samples.
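To make the trick concrete, here is a minimal C++ sketch of Equation 2 (the class and variable names are my own illustration, not from the article's listings):

#include <vector>

// Running-sum moving average per Equation 2: add the newest sample,
// subtract the oldest. Only two adds and one divide per output, but
// it requires O(N) storage and circular-buffer management.
class MovingAverage
{
public:
    MovingAverage(unsigned int n) : buf(n, 0.0), idx(0), sum(0.0), N(n) {}

    double Update(double x)
    {
        sum += x - buf[idx];    // add newest, subtract the Nth-oldest
        buf[idx] = x;           // overwrite the oldest sample
        idx = (idx + 1) % N;    // advance the circular index
        return sum / N;         // apply the 1/N weight once, at the output
    }

private:
    std::vector<double> buf;
    unsigned int idx;
    double sum;
    unsigned int N;
};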

Of course, many other window shapes are commonly used. Typically, these window shapes resemble, or are a variation of, a raised cosine between $-\pi/2$ and $\pi/2$. These windows weight the samples in the center more than the samples near the edges. Generally speaking, you should only use one of these windows when there is a specific need to, such as applying a specific filter to the signal. The disadvantage of these windows is that computational complexity and storage requirements increase with N.

Recursive average

The recursive average is the weighted sum of the input, N previous inputs, and M previous outputs:

$y(n) = \sum_{i=0}^{N} a_i\, x(n-i) + \sum_{m=1}^{M} b_m\, y(n-m)$   (3)

The simplest of these in terms of computational complexity and storage (while still being useful) is the first-order recursive average. In this case, the average is computed as the weighted sum of the current input and the previous output. The first-order recursive average also lends itself to an optimization when combined with the computation of the square root, which we'll discuss shortly.

In contrast to the non-recursive average, the first-order recursive average's window is a decaying exponential (Figure 1). Technically, the recursive average has an infinite window, since it never decays all the way to zero. In digital filtering terminology, this is known as an infinite impulse response, or IIR filter.

From Figure 1, we see that earlier samples are weighted more than later samples, allowing us to somewhat arbitrarily define an averaging time for the recursive average. For the first-order case, we define the averaging time as the time at which the impulse response has decayed to a factor of 1/e, or


Figure 1. Comparison of recursive and non-recursive averaging windows: the first-order recursive window y(t) is a decaying exponential that reaches 1/e at the averaging time t_a, while the 1/N non-recursive window is rectangular.

Figure 2. Mean-square computation using Newton's method for reciprocal square root (signal flow: first-order averaging with coefficient a, unit delays $z^{-1}$, and one Newton iteration producing $1/\sqrt{m(n)}$).


approximately 37%, of its initial value. An equivalent definition is the time at which the step response reaches 1-(1/e), or approximately 63%, of its final value. Other definitions are possible but will not be covered here. The weighting of the sum determines this averaging time; to ensure unity gain, the sum of the weights must equal one. As a consequence, only one coefficient needs to be specified to describe the averaging time.

Therefore, for first-order recursive averaging, we compute the mean level as:

$m(n) = (1-a)\,x(n) + a\,m(n-1) = x(n) + a\left[m(n-1) - x(n)\right]$   (4)

where x(n) is the input, m(n) is the mean value, and a is the averaging coefficient. The averaging coefficient is defined as:

$a = e^{-1/(f_S t)}$   (5)

where t is the averaging time, and $f_S$ is the sampling frequency. The root-mean may then be calculated by taking the square root of Equation 4:

$y(n) = \sqrt{m(n)} = \sqrt{x(n) + a\left[y^2(n-1) - x(n)\right]}$   (6)

where y(n) is the root-mean.
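As a quick worked example: at $f_S = 48\,\mathrm{kHz}$ with $t = 100\,\mathrm{ms}$ (the values used in the listings that follow), Equation 5 gives $a = e^{-1/4800} \approx 0.999792$, so the input weight is $1-a \approx 2.08\times10^{-4}$. The mean, and therefore the root-mean, changes very little from sample to sample, which is what makes the previous output such a good starting estimate for Newton's method.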

EFFICIENT COMPUTATION METHODS

Googling "fast square root" will get you a plethora of information and code snippets on implementing fast square-root algorithms. While these methods may work just fine, they don't take into account the application in which the square root is required. Oftentimes, you may not need exact precision to the last bit, or the algorithm itself can be manipulated to optimize the computation of the square root. I present a few basic approaches here.


Listing 1. C++ class that computes the root-mean using Newton's Method for the reciprocal square root

#include <cmath>   // exp()

static const double Fs = 48000.0;   // sample rate
static double AvgTime = 0.1;        // averaging time

class RecipRootMean
{
public:
    double Mean;
    double RecipRoot;   // current estimate of 1/sqrt(Mean)
    double AvgCoeff;

    RecipRootMean()
    {
        AvgCoeff = 1.0 - exp( -1.0 / (Fs * AvgTime) );
        Mean = 0.0;
        RecipRoot = 1.0e-10;   // 1 > initial RecipRoot > 0
    }
    ~RecipRootMean() {}

    double CalcRootMean(double x)
    {
        Mean += AvgCoeff * (x - Mean);
        RecipRoot *= 0.5 * ( 3.0 - (RecipRoot * RecipRoot * Mean) );
        return RecipRoot * Mean;
    }
};

Figure 3. Mean-square computation that combines averaging and iterative square root (signal flow: divide x/y, two (1-a)/2 coefficients into a sum, and a unit delay $z^{-1}$ feeding back y(n-1)).

Listing 2. C++ class that implements the floating-point version of Figure 3

#include <cmath>   // exp()

static const double Fs = 48000.0;   // sample rate
static double AvgTime = 0.1;        // averaging time

class NewtonRootMean
{
public:
    double RootMean;
    double AvgCoeff;

    NewtonRootMean()
    {
        RootMean = 1.0;   // > 0 or divide will fail
        AvgCoeff = 0.5 * ( 1.0 - exp( -1.0 / (Fs * AvgTime) ) );
    }
    ~NewtonRootMean() {}

    double CalcRootMean(double x)
    {
        RootMean += AvgCoeff * ( ( x / RootMean ) - RootMean );
        return RootMean;
    }
};
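For context, a hypothetical driver for this class might look like the following (the sample buffer and loop are illustrative only; note that for RMS the input to CalcRootMean() is the squared sample):

int main()
{
    NewtonRootMean rm;   // class from Listing 2
    double samples[4] = { 0.5, -0.5, 0.5, -0.5 };
    double level = 0.0;

    for (int i = 0; i < 4; ++i)
        level = rm.CalcRootMean( samples[i] * samples[i] );   // square for RMS

    // level now holds the running RMS estimate after four samples
    return 0;
}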



Only calculate it when you need it

Probably the simplest optimization is to only calculate the square root when you absolutely need it. Although this may seem obvious, it can be easily overlooked when computing the root-mean on every input sample. When you don't need an output value for every input sample, it makes more sense to compute the square root only when you read the output value.

One example of an application where this technique can be used is RMS metering of a signal. A meter value that is displayed visually may only require an update every 50 to 100ms, which may be far less often than the input signal is sampled. Keep in mind, however, that recursive averaging should still be computed at the Nyquist rate.
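As a minimal sketch of the idea (the class and names are mine, not the article's): run the recursive mean-square at the sample rate and defer the square root to the read:

#include <cmath>

// Hypothetical meter back-end: average every sample, take the root only on read.
class LazyRms
{
public:
    explicit LazyRms(double avgCoeff) : meanSquare(0.0), a(avgCoeff) {}

    void PushSample(double x)            // called at the sample rate (cheap)
    {
        meanSquare += a * (x * x - meanSquare);
    }

    double ReadLevel() const             // called at the display rate, every 50-100ms
    {
        return std::sqrt(meanSquare);    // the only square root in the system
    }

private:
    double meanSquare;
    double a;   // input weight, e.g. 1.0 - exp(-1.0/(Fs*AvgTime))
};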

Logarithms

Recall that:

$\log\left(\sqrt{x}\right) = \tfrac{1}{2}\log\left(x\right)$   (7)

If you'll be computing the logarithm of a square root, it's far less computationally expensive to simply halve the result instead. A common example of this optimization is the calculation of an RMS level in dB, which may be simplified as follows:

$\mathrm{RMS\ in\ dB} = 20\log_{10}\sqrt{\mathrm{mean}\left(x^2\right)} = 10\log_{10}\left[\mathrm{mean}\left(x^2\right)\right]$   (8)
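In code, that saves one square root per reading; a one-line sketch:

#include <cmath>

// 20*log10(sqrt(ms)) == 10*log10(ms), so skip the square root entirely.
double RmsInDb(double meanSquare)
{
    return 10.0 * std::log10(meanSquare);
}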

Newton's Method

Newton's Method (also called the Newton-Raphson Method) is a well-known iterative method for estimating the root of an equation.1 Newton's Method can be quite efficient when you have a reasonable estimate of the result. Furthermore, if accuracy to the last bit is not required, the number of iterations can be fixed to keep the algorithm deterministic.

We may approximate the root of f(x) by iteratively calculating:

$y(n) = y(n-1) - \frac{f\left[y(n-1)\right]}{f'\left[y(n-1)\right]}$   (9)

If we wish to find $y = \sqrt{m}$, then we need to find the root of the equation $f(y) = y^2 - m$. Substituting f(y) into Equation 9, we get:

$y(n) = y(n-1) - \frac{y^2(n-1) - m}{2\,y(n-1)}$   (10)

Rearranging Equation 10, we get:

$y(n) = y(n-1) + \frac{1}{2}\left[\frac{m(n)}{y(n-1)} - y(n-1)\right]$   (11)

where y(n) is the approximation of the square root of m(n).
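As a quick numerical check of Equation 11 with a constant input: to approximate $\sqrt{25}$ from the estimate $y(n-1) = 4$, one iteration gives $y(n) = 4 + \tfrac{1}{2}\left(\tfrac{25}{4} - 4\right) = 5.125$, and a second iteration gives $y \approx 5.0015$. With a good starting estimate, a single iteration per sample is already close.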

Equation 11 requires a divide operation, which may be inconvenient for some processors. As an alternative, we can calculate $1/\sqrt{m}$ and multiply the result by m to get $\sqrt{m}$. Again using Newton's Method, we find that we may iteratively calculate the reciprocal square root as:



$y_r(n) = \tfrac{1}{2}\, y_r(n-1)\left[3 - y_r^2(n-1)\, m(n)\right]$   (12)

and calculate the square root as:

$y(n) = y_r(n)\, m(n)$   (13)
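As a numerical check of Equations 12 and 13: for $m(n) = 25$ with a previous estimate $y_r(n-1) = 0.19$, Equation 12 gives $y_r(n) = \tfrac{1}{2}(0.19)\left[3 - (0.19)^2(25)\right] \approx 0.1993$, and Equation 13 yields $y(n) \approx 0.1993 \times 25 \approx 4.98$, already close to $\sqrt{25} = 5$, with no divide anywhere.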

Although Newton's Method for the reciprocal square root eliminates the divide operation, it can be problematic for fixed-point processors. Assuming that m(n) is a positive integer greater than 1, $y_r(n)$ will be a positive number less than one, beyond the range of representation for integer numbers. Implementation must be accomplished using floating-point or mixed integer/fractional number representation.

ROOT-MEAN USING NEWTON'S METHOD

A subtle difference between Equations 10 and 11 is that m becomes m(n), meaning that we're attempting to find the square root of a moving target. However, since m(n) is a mean value, or slowly varying, it can be viewed as nearly constant between iterations. Since y(n) will also be slowly varying, y(n-1) will be a good approximation to y(n) and require fewer iterations (one, we hope) to achieve a good estimate.

To calculate the root-mean, one may simply apply Newton's Method for calculating the square root to the mean value. As long as the averaging time is long compared to the sample period ($t \gg 1/f_S$), one iteration of the square root calculation should suffice for reasonable accuracy. This seems simple enough, but we can actually improve the computational efficiency, which will be discussed in one of the following sections.

Using reciprocal square root

Unlike the iterative square-root method, however, the iterative reciprocal square-root requires no divide. This implementation is best suited for floating-point processing, which can efficiently handle numbers both greater and less than one. We present this implementation as a signal flow diagram in Figure 2. The averaging coefficient, a, is defined by Equation 5, and $z^{-1}$ represents a unit sample delay.

A code listing for a C++ class that implements the computation in Figure 2 is presented in Listing 1. In this example class, initialization is performed in the class constructor, and each call to CalcRootMean() performs one iteration of averaging and square-root computation.

Using direct square root

Let's go back and take a closer look at Equation 11. Newton's method converges on the solution as quickly as possible without oscillating around it,


but if we slow this rate of convergence, the iterative equation will converge on the square root of the average of its inputs. Adding the averaging coefficient results in the following root-mean equation:

$y(n) = \left(\frac{1+a}{2}\right) y(n-1) + \left(\frac{1-a}{2}\right)\frac{x(n)}{y(n-1)}$   (14)

where a is defined by Equation 5. Now y(n) converges to the square root of the average of x(n). An equivalent signal-flow representation of Equation 14 is presented in Figure 3. Here, an additional y(n-1) term is summed so that only one averaging coefficient is required. Note that x(n) and y(n-1) must be greater than zero.

A code listing for a C++ class that implements the computation shown in Figure 3 is presented in Listing 2. As in the previous example, initialization is performed in the class constructor, and each call to CalcRootMean() performs one iteration of averaging/square-root computation.

With some care, Figure 3 may also be implemented in fixed-point arithmetic as shown in Listing 3. In this example, scaling is implemented to ensure valid results. When sufficient word size is present, x is scaled by nAvgCoeff prior to division to maximize the precision of the result.

Divide-free RMS using normalization

Now we'll look at the special case of computing an RMS value on fixed-point hardware that does not have a fast divide operation, which is typical for low-cost embedded processors. Although many of these processors can perform division, they do so one bit at a time, requiring at least one cycle for each bit of word length. Furthermore, care must be taken to ensure that the RMS calculation is implemented with sufficient numerical precision. With fixed-point hardware, the square of a value requires twice the number of bits to retain the original data's precision.

With this in mind, we manipulate Equation 14 into the following:

$y(n) = y(n-1) + \frac{a}{2\,y(n-1)}\left[x^2(n) - y^2(n-1)\right]$   (15)

Although the expression $x^2(n) - y^2(n-1)$ must be calculated with double precision, this implementation lends itself to a significant optimization. Note that $a/2y(n-1)$ acts like a level-dependent averaging coefficient. If a slight time-dependent variance in the averaging



Figure 4. RMS computation optimized to eliminate the divide operation (signal flow: squaring, a 3a/4 coefficient, a unit delay $z^{-1}$, and multiplication by $2^{-\mathrm{int}[\log_2(\cdot)]}$ in place of the divide).

Listing 3. C++ class that implements the fixed-point version of Figure 3

#include <cmath>   // exp(), floor(), pow()

static const double Fs = 48000.0;   // sample rate
static double AvgTime = 0.1;        // averaging time
static const unsigned int sknNumIntBits = 32;   // # bits in int
static const unsigned int sknPrecisionBits = sknNumIntBits / 2;
static const double skScaleFactor = pow(2.0, (double)sknPrecisionBits);
static const unsigned int sknRoundOffset = (unsigned int)floor( 0.5 * skScaleFactor );

class IntNewtonRootMean
{
public:
    unsigned int nRootMean;
    unsigned int nScaledRootMean;
    unsigned int nAvgCoeff;
    unsigned int nMaxVal;

    IntNewtonRootMean()
    {
        nRootMean = 1;   // > 0 or divide will fail
        nScaledRootMean = 0;
        double AvgCoeff = 0.5 * ( 1.0 - exp( -1.0 / (Fs * AvgTime) ) );
        nAvgCoeff = (unsigned int)floor( ( skScaleFactor * AvgCoeff ) + 0.5 );
        nMaxVal = (unsigned int)floor( ( skScaleFactor / AvgCoeff ) + 0.5 );
    }
    ~IntNewtonRootMean() {}

    unsigned int CalcRootMean(unsigned int x)
    {
        if ( x < nMaxVal )
        {
            // enough headroom: scale x by nAvgCoeff before the divide for precision
            nScaledRootMean += ( ( nAvgCoeff * x ) / nRootMean ) - ( nAvgCoeff * nRootMean );
        }
        else
        {
            nScaledRootMean += nAvgCoeff * ( ( x / nRootMean ) - nRootMean );
        }
        nRootMean = ( nScaledRootMean + sknRoundOffset ) >> sknPrecisionBits;
        return nRootMean;
    }
};


time can be tolerated, which is often the case, then $1/y(n-1)$ can be grossly approximated. On a floating-point processor, shifting the averaging coefficient to the left by the negative of the exponent approximates the divide operation. This process is commonly referred to as normalization. Some fixed-point DSPs can perform normalization by counting the leading bits of the accumulator and shifting the accumulator by that number of bits.2 In both cases, the averaging coefficient will be truncated to the nearest power of two, so the coefficient must be multiplied by 3/2 to round the result. This implementation is shown in Equation 16.

$y(n) = y(n-1) + \frac{3a}{4}\left[x^2(n) - y^2(n-1)\right] 2^{-\mathrm{int}\left[\log_2 y(n-1)\right]}$   (16)

where multiplication by $2^{-\mathrm{int}\left[\log_2 x\right]}$ is implemented as a left shift by the number of leading bits in x.

Figure 4 is the signal-flow diagram that represents Equation 16. Just as in Figure 3, x(n) and y(n-1) must be greater than zero.

A sample code listing that implements Figure 4 is shown in Listing 4. This assembly-language implementation is for the Freescale (formerly Motorola) DSP563xx 24-bit fixed-point processor.

Of course, this method can be implemented even without fast normalization. You can implement a loop to shift $x^2(n) - y^2(n-1)$ to the left for each leading bit in y(n-1). This will be slower but can be implemented with even the simplest of processors.
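Here is a rough, portable C++ sketch of that shift loop (my own illustration, not the article's DSP code; it assumes Q15 data in 32-bit words, and the parameter names and scaling choices are mine):

#include <cstdint>

// Divide-free update per Equation 16 without a normalize instruction.
// y and x are Q15 (0 < value < 1); coeff_q15 holds 3a/4 in Q15.
int32_t DivideFreeRmsStep(int32_t y, int32_t x, int32_t coeff_q15)
{
    // double-precision difference x^2 - y^2 (Q30)
    int64_t diff = (int64_t)x * x - (int64_t)y * y;

    // count leading bits of y below 0.5: each one doubles the correction,
    // approximating the division by y(n-1) with a power of two
    int shift = 0;
    int32_t t = y;
    while (t > 0 && t < 0x4000) { t <<= 1; ++shift; }

    diff <<= shift;                             // approximate diff / y(n-1)
    int64_t corr = (diff * coeff_q15) >> 30;    // apply 3a/4 and rescale to Q15
    return y + (int32_t)corr;
}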

HIGHER ORDER AVERAGING

Higher order recursive averaging may be accomplished by inserting additional averaging filters before the iterative square root. These filters may simply be one or more cascaded first-order recursive sections. First-order sections have the advantage of producing no overshoot in the step response. In addition, there is only one coefficient to adjust, and quantization effects (primarily of concern for fixed-point implementation) are far less than those of higher-order filters.

The implementer should be aware that cascading first-order sections changes the definition of averaging time. A simple but gross approximation that maintains the earlier definition of step response is to simply divide the averaging time of each first-order section by the total number of sections. However, it is the implementer's responsibility to verify that this approximation is suitable for the application.
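A minimal sketch of a cascade built from the pieces above (the wrapper class is my own; it reuses NewtonRootMean and the Fs/AvgTime constants from Listing 2, and divides the averaging time by three, counting the square-root stage itself as the third first-order section):

#include <cmath>

class CascadedRootMean
{
public:
    CascadedRootMean()
      : m1(0.0), m2(0.0),
        a(1.0 - exp( -1.0 / (Fs * (AvgTime / 3.0)) ))   // per-section averaging time
    {}

    double CalcRootMean(double x)
    {
        m1 += a * (x - m1);            // first-order section 1
        m2 += a * (m1 - m2);           // first-order section 2
        return rm.CalcRootMean(m2);    // averaging square-root stage (Listing 2)
    }

private:
    double m1, m2, a;
    NewtonRootMean rm;   // note: its internal AvgTime should be shortened likewise
};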


Figure 5. Comparison of RMS calculation methods (level vs. time in seconds, 0 to 1 s): true RMS, Newton's method RMS approximation, no-divide RMS approximation, and reciprocal square root method.

Listing 4. Freescale DSP563xx assembly implementation of divide-free RMS using normalization

RMS
; r4:   address of output bits 24-47 [y_msw(n)]
; r4+1: address of output bits 0-23  [y_lsw(n)]
; x0:   input [x(n)]

FS        equ 48000.0                   ;sampling rate in Hz
AVG_TIME  equ 0.1                       ;averaging time in seconds
AVG_COEFF equ @XPN(-1.0/(FS*AVG_TIME))  ;calculate avg_coeff

    move  #>AVG_COEFF,x1      ;load avg_coeff
    move  y:(r4)+,a           ;get y_msw(n-1)
    move  y:(r4),a0           ;get y_lsw(n-1)
    clb   a,b                 ;b = number of leading bits in y(n-1)
    mpy   x0,x0,a  a,x0       ;a = x(n)^2
                              ;x0 = y_msw(n-1)
    mac   -x0,x0,a x0,y1      ;a = x(n)^2 - y_msw(n-1)^2
                              ;y1 = y_msw(n-1)
    normf b1,a                ;normalize x(n)^2 - y_msw(n-1)^2 by y_msw(n-1)
    move  a,x0                ;x0 = [x(n)^2 - y_msw(n-1)^2]norm(y_msw(n-1))
    mpy   x1,x0,a  y:(r4),y0  ;a = AVG_COEFF * [x(n)^2 - y_msw(n-1)^2]norm(y_msw(n-1))
                              ;y0 = y_lsw(n-1)
    add   y,a                 ;a = y(n-1) + avg_coeff * [x(n)^2 - y_msw(n-1)^2]norm(y_msw(n-1))
    move  a0,y:(r4)-          ;save y_lsw(n)
    move  a,y:(r4)            ;save y_msw(n)
    rts



Second-order sections may also be used, if you want (for example) a Bessel-Thomson filter response. If second-order sections are used, it's best to choose an odd-order composite response, since the averaging square-root filter approximates the final first-order filter with Q=0.5. Care must be taken to minimize the overshoot of this averaging filter. Adjusting the averaging time of this filter in real time will prove more difficult, since there are a number of coefficients that must be adjusted in unison to ensure stability.

RESULTS

Three methods of calculating the RMS level are compared in Figure 5. The averaging time is set to 100ms, and the input is one second of 1/f noise with a 48kHz sampling frequency. The first trace is the true RMS value calculated using Equation 6. The second trace is the RMS calculation using Equation 14. The third trace is the no-divide calculation of Equation 16. The fourth trace is the RMS value using the reciprocal square-root method of Equation 13.

For the most part, the four traces line up nicely. All three approximations appear to converge at the same rate as the true RMS value. As expected, the largest deviation from the true RMS value is the approximation of Equation 16. This approximation will have the greatest error during large changes in the level of the input signal, although this error is temporary: the optimized approximation will converge upon the true RMS value when the level of the input signal is constant.

The errors between the three approximations and the true RMS value are shown in Figure 6. The error of the RMS approximation using Equation 14 slowly decreases until it is below 1e-7, which is sufficient for 24-bit accuracy. The optimized approximation of Equation 16 is substantially worse, at about 1e-4, but still good enough for many applications. The approximation that uses the reciprocal square root is "in the noise," at less than 1e-9. For highly critical floating-point applications, this is the efficient method of choice.

As you would expect, the errors discussed above will be worse with shorter averaging times and better with longer averaging times. Table 1 summarizes the approximate error vs. averaging time of these three methods, along with suitable hardware architecture requirements.

SUITABLE FOR AVERAGE READER

By combining recursive averaging with Newton's method for calculating the square root, you'll gain a very efficient method for computing the root-mean. Although the three methods I presented here are developed for different hardware and each, to some degree, trades off hardware capabilities for error, most of you should find one of these methods suitable for your application. ■

Brian Neunaber is currently digital systems architect of software and firmware at QSC Audio Products. He has designed real-time digital audio algorithms and systems for QSC and St. Louis Music and has an MSEE from Southern Illinois University. You may contact him at [email protected].

REFERENCES:
1. Zill, D. G. Calculus with Analytic Geometry, 2nd ed., PWS-Kent, Boston, 1988, pp. 170-176.
2. Motorola. DSP56300 Family Manual, Rev. 3, Motorola Literature Distribution, Denver, 2000.

Table 1. Approximate error vs. averaging time of RMS calculation methods

Calculation method                 t=1ms   t=10ms   t=100ms   t=1s     Preferred hardware
Reciprocal root (Figure 2)         1e-6    1e-8     1e-9      1e-10    floating-point
Combined root-mean (Figure 3)      3e-6    3e-7     1e-7      1e-7     hardware divide
Divide-free root-mean (Figure 4)   1e-4    1e-4     1e-4      3e-5     any

Figure 6. Error comparison of RMS calculation methods (error vs. time in seconds, log scale from 1e-10 to 0.1): Newton's method, no divide, and reciprocal root.