matematik

BINOMIAL THEOREM

In elementary algebra, the binomial theorem describes the algebraic expansion of powers of a binomial. According to the theorem, it is possible to expand the power (x + y)^n into a sum involving terms of the form ax^b y^c, where the exponents b and c are nonnegative integers with b + c = n, and the coefficient a of each term is a specific positive integer depending on n and b. When an exponent is zero, the corresponding power is usually omitted from the term. For example,

(x + y)^4 = x^4 + 4x^3 y + 6x^2 y^2 + 4xy^3 + y^4

The coefficient a in the term x^b y^c is known as the binomial coefficient C(n, b) or C(n, c) (the two have the same value). These coefficients for varying n and b can be arranged to form Pascal's triangle.

These numbers also arise in combinatorics, where C(n, b) gives the number of different combinations of b elements that can be chosen from an n-element set.

According to the theorem, it is possible to expand any power of x + y into a sum of the form

(x + y)^n = C(n, 0) x^n + C(n, 1) x^(n−1) y + C(n, 2) x^(n−2) y^2 + ... + C(n, n) y^n,

where C(n, k) denotes the corresponding binomial coefficient. Using summation notation, the formula above can be written

(x + y)^n = Σ_{k=0}^{n} C(n, k) x^(n−k) y^k

This formula is sometimes referred to as the Binomial Formula or the Binomial Identity.
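As a quick sanity check (not part of the original text), the summation formula can be verified numerically; this minimal Python sketch uses the standard library's `math.comb` for C(n, k):

```python
from math import comb

def binomial_expand(x, y, n):
    """Evaluate sum_{k=0}^{n} C(n, k) * x^(n-k) * y^k term by term."""
    return sum(comb(n, k) * x**(n - k) * y**k for k in range(n + 1))

# The term-by-term sum agrees with (x + y)^n computed directly.
for n in range(8):
    assert binomial_expand(3, 5, n) == (3 + 5)**n
```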

A variant of the binomial formula is obtained by substituting 1 for x and x for y, so that it involves only a single variable. In this form, the formula reads

(1 + x)^n = C(n, 0) + C(n, 1) x + C(n, 2) x^2 + ... + C(n, n) x^n

or equivalently

(1 + x)^n = Σ_{k=0}^{n} C(n, k) x^k


Examples

Pascal's triangle

The most basic example of the binomial theorem is the formula for the square of x + y:

(x + y)^2 = x^2 + 2xy + y^2

The binomial coefficients 1, 2, 1 appearing in this expansion correspond to the third row of Pascal's triangle. The coefficients of higher powers of x + y correspond to later rows of the triangle:

(x + y)^3 = x^3 + 3x^2 y + 3xy^2 + y^3
(x + y)^4 = x^4 + 4x^3 y + 6x^2 y^2 + 4xy^3 + y^4
(x + y)^5 = x^5 + 5x^4 y + 10x^3 y^2 + 10x^2 y^3 + 5xy^4 + y^5

The binomial theorem can be applied to the powers of any binomial. For example,

(x + 2)^3 = x^3 + 3x^2(2) + 3x(2)^2 + 2^3 = x^3 + 6x^2 + 12x + 8

For a binomial involving subtraction, the theorem can be applied as long as the opposite of the second term is used. This has the effect of changing the sign of every other term in the expansion:

(x − y)^3 = x^3 − 3x^2 y + 3xy^2 − y^3
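The coefficient patterns described above can be sketched in Python; `pascal_row` and `expansion_coeffs` are illustrative helper names, not from the text:

```python
from math import comb

def pascal_row(n):
    """Row n of Pascal's triangle: the coefficients of (x + y)^n."""
    return [comb(n, k) for k in range(n + 1)]

def expansion_coeffs(n, sign=1):
    """Coefficients of (x + sign*y)^n; with sign = -1 every other sign flips."""
    return [comb(n, k) * sign**k for k in range(n + 1)]

assert pascal_row(4) == [1, 4, 6, 4, 1]
assert expansion_coeffs(3, sign=-1) == [1, -3, 3, -1]   # (x - y)^3
```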


Geometrical explanation

For positive values of a and b, the binomial theorem with n = 2 is the geometrically evident fact that a square of side a + b can be cut into a square of side a, a square of side b, and two rectangles with sides a and b. With n = 3, the theorem states that a cube of side a + b can be cut into a cube of side a, a cube of side b, three a×a×b rectangular boxes, and three a×b×b rectangular boxes.

In calculus, this picture also gives a geometric proof of the derivative (x^n)' = nx^(n−1):[5] if one sets a = x and b = Δx, interpreting b as an infinitesimal change in a, then this picture shows the infinitesimal change in the volume of an n-dimensional hypercube, (x + Δx)^n, where the coefficient of the linear term (in Δx) is nx^(n−1), the area of the n faces, each of dimension n − 1.


Substituting this into the definition of the derivative via a difference quotient and taking limits means that the higher-order terms, (Δx)^2 and higher, become negligible, and yields the formula (x^n)' = nx^(n−1), interpreted as

"the infinitesimal change in volume of an n-cube as side length varies is the area of n of its (n − 1)-dimensional faces".

If one integrates this picture, which corresponds to applying the fundamental theorem of calculus, one obtains Cavalieri's quadrature formula, the integral ∫ x^n dx = x^(n+1)/(n + 1) + C; see the proof of Cavalieri's quadrature formula for details.[5]

The binomial coefficients

Main article: Binomial coefficient

The coefficients that appear in the binomial expansion are called binomial coefficients. These are usually written C(n, k), and pronounced "n choose k".

Formulas

The coefficient of x^(n−k) y^k is given by the formula

C(n, k) = n! / (k! (n − k)!),

which is defined in terms of the factorial function n!. Equivalently, this formula can be written

C(n, k) = n(n − 1)(n − 2) ... (n − k + 1) / (k(k − 1)(k − 2) ... 1)

with k factors in both the numerator and denominator of the fraction. Note that, although this formula involves a fraction, the binomial coefficient C(n, k) is actually an integer.
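Both forms of the formula can be checked against each other in a few lines of Python (the function names here are ours, for illustration):

```python
from math import factorial, prod

def binom_factorial(n, k):
    """C(n, k) = n! / (k! (n - k)!)."""
    return factorial(n) // (factorial(k) * factorial(n - k))

def binom_product(n, k):
    """k factors in numerator and denominator: n(n-1)...(n-k+1) / k!."""
    return prod(n - i for i in range(k)) // factorial(k)

# The two forms agree, and the result is always an integer.
for n in range(12):
    for k in range(n + 1):
        assert binom_factorial(n, k) == binom_product(n, k)
```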


Combinatorial interpretation

The binomial coefficient C(n, k) can be interpreted as the number of ways to choose k elements from an n-element set. This is related to binomials for the following reason: if we write (x + y)^n as a product

(x + y)(x + y)(x + y) ... (x + y),

then, according to the distributive law, there will be one term in the expansion for each choice of either x or y from each of the binomials of the product. For example, there will only be one term x^n, corresponding to choosing x from each binomial. However, there will be several terms of the form x^(n−2) y^2, one for each way of choosing exactly two binomials to contribute a y. Therefore, after combining like terms, the coefficient of x^(n−2) y^2 will be equal to the number of ways to choose exactly 2 elements from an n-element set.
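This counting argument is easy to confirm directly with the standard library's `itertools` (a small sketch, not from the original):

```python
from itertools import combinations
from math import comb

n, k = 6, 2
subsets = list(combinations(range(n), k))
# The coefficient of x^(n-2) y^2 in (x + y)^6 equals the number of 2-element subsets.
assert len(subsets) == comb(n, k) == 15
```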

Proofs

Combinatorial proof

Example

The coefficient of xy^2 in

(x + y)^3 = (x + y)(x + y)(x + y)

equals C(3, 2) = 3 because there are three x,y strings of length 3 with exactly two y's, namely

xyy, yxy, yyx,

corresponding to the three 2-element subsets of {1, 2, 3}, namely

{2, 3}, {1, 3}, {1, 2},

where each subset specifies the positions of the y in a corresponding string.

General case

Expanding (x + y)^n yields the sum of the 2^n products of the form e_1 e_2 ... e_n where each e_i is x or y. Rearranging factors shows that each product equals x^(n−k) y^k for some k between 0 and n. For a given k, the following are proved equal in succession:


- the number of copies of x^(n−k) y^k in the expansion
- the number of n-character x,y strings having y in exactly k positions
- the number of k-element subsets of {1, 2, ..., n}
- C(n, k) (this is either by definition, or by a short combinatorial argument if one is defining C(n, k) as n! / (k! (n − k)!)).

This proves the binomial theorem.

Inductive proof

Induction yields another proof of the binomial theorem (1). When n = 0, both sides equal 1, since x^0 = 1 for all x and C(0, 0) = 1. Now suppose that (1) holds for a given n; we will prove it for n + 1. For j, k ≥ 0, let [ƒ(x, y)]_{j,k} denote the coefficient of x^j y^k in the polynomial ƒ(x, y). By the inductive hypothesis, (x + y)^n is a polynomial in x and y such that [(x + y)^n]_{j,k} is C(n, k) if j + k = n, and 0 otherwise. The identity

(x + y)^(n+1) = x(x + y)^n + y(x + y)^n

shows that (x + y)^(n+1) also is a polynomial in x and y, and

[(x + y)^(n+1)]_{j,k} = [(x + y)^n]_{j−1,k} + [(x + y)^n]_{j,k−1}

If j + k = n + 1, then (j − 1) + k = n and j + (k − 1) = n, so the right hand side is

C(n, k) + C(n, k − 1) = C(n + 1, k)

by Pascal's identity. On the other hand, if j + k ≠ n + 1, then (j − 1) + k ≠ n and j + (k − 1) ≠ n, so we get 0 + 0 = 0. Thus

(x + y)^(n+1) = Σ_{k=0}^{n+1} C(n + 1, k) x^(n+1−k) y^k,

which is the inductive hypothesis with n + 1 substituted for n and so completes the inductive step.


Generalizations

Newton's generalized binomial theorem

Main article: Binomial series

Around 1665, Isaac Newton generalized the formula to allow real exponents other than nonnegative integers, and in fact it can be generalized further, to complex exponents. In this generalization, the finite sum is replaced by an infinite series. In order to do this one needs to give meaning to binomial coefficients with an arbitrary upper index, which cannot be done using the above formula with factorials; however, factoring out (n − k)! from numerator and denominator in that formula, and replacing n by r, which now stands for an arbitrary number, one can define

C(r, k) = r(r − 1)(r − 2) ... (r − k + 1) / k!,

where the numerator is the Pochhammer symbol, here standing for a falling factorial. Then, if x and y are real numbers with |x| > |y|,[6] and r is any complex number, one has

(x + y)^r = Σ_{k=0}^{∞} C(r, k) x^(r−k) y^k    (2)

When r is a nonnegative integer, the binomial coefficients for k > r are zero, so (2) specializes to (1), and there are at most r + 1 nonzero terms. For other values of r, the series (2) has infinitely many nonzero terms, at least if x and y are nonzero.
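A numerical sketch of the generalized series (function names are ours): with r = 1/2, x = 1, and y = 0.1, the partial sums converge to sqrt(1.1), in line with the |x| > |y| condition.

```python
from math import factorial

def gen_binom(r, k):
    """Generalized binomial coefficient r(r-1)...(r-k+1) / k! for arbitrary real r."""
    num = 1.0
    for i in range(k):
        num *= (r - i)
    return num / factorial(k)

def binom_series(x, y, r, terms):
    """Partial sum of (x + y)^r = sum_k C(r, k) x^(r-k) y^k (valid for |x| > |y|)."""
    return sum(gen_binom(r, k) * x**(r - k) * y**k for k in range(terms))

# With r = 1/2, x = 1, y = 0.1, the series converges to sqrt(1.1).
assert abs(binom_series(1.0, 0.1, 0.5, 30) - 1.1**0.5) < 1e-12
```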

This is important when one is working with infinite series and would like to represent them in terms of generalized hypergeometric functions.

Taking r = −s leads to a particularly handy but non-obvious formula:

1 / (1 − x)^s = Σ_{k=0}^{∞} C(s + k − 1, k) x^k

Further specializing to s = 1 yields the geometric series formula 1 / (1 − x) = 1 + x + x^2 + x^3 + ...

Generalizations

Formula (2) can be generalized to the case where x and y are complex numbers. For this version, one should assume |x| > |y|[6] and define the powers of x + y and x using a holomorphic branch of log defined on an open disk of radius |x| centered at x.


Formula (2) is valid also for elements x and y of a Banach algebra as long as xy = yx, x is invertible, and ||y/x|| < 1.

The multinomial theorem

Main article: Multinomial theorem

The binomial theorem can be generalized to include powers of sums with more than two terms. The general version is

(x_1 + x_2 + ... + x_m)^n = Σ C(n; k_1, k_2, ..., k_m) x_1^(k_1) x_2^(k_2) ... x_m^(k_m),

where the summation is taken over all sequences of nonnegative integer indices k_1 through k_m such that the sum of all k_i is n. (For each term in the expansion, the exponents must add up to n.)

The coefficients C(n; k_1, ..., k_m) are known as multinomial coefficients, and can be computed by the formula

C(n; k_1, k_2, ..., k_m) = n! / (k_1! k_2! ... k_m!)

Combinatorially, the multinomial coefficient C(n; k_1, ..., k_m) counts the number of different ways to partition an n-element set into disjoint subsets of sizes k_1, ..., k_m.
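The counting interpretation can be checked in Python; `multinomial` is an illustrative helper name, not from the text:

```python
from math import factorial
from itertools import permutations

def multinomial(ks):
    """n! / (k_1! k_2! ... k_m!) with n = k_1 + ... + k_m."""
    denom = 1
    for k in ks:
        denom *= factorial(k)
    return factorial(sum(ks)) // denom

# C(4; 2, 1, 1) = 12, the number of distinct arrangements of the letters "aabc".
assert multinomial([2, 1, 1]) == 12
assert len(set(permutations("aabc"))) == 12
```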

Multiple angle identities

For the complex numbers the binomial theorem can be combined with De Moivre's formula to yield multiple-angle formulas for the sine and cosine. According to De Moivre's formula,

cos(nx) + i sin(nx) = (cos x + i sin x)^n

Using the binomial theorem, the expression on the right can be expanded, and then the real and imaginary parts can be taken to yield formulas for cos(nx) and sin(nx). For example, since

(cos x + i sin x)^2 = cos^2 x − sin^2 x + 2i cos x sin x,

De Moivre's formula tells us that

cos(2x) = cos^2 x − sin^2 x  and  sin(2x) = 2 cos x sin x,

which are the usual double-angle identities. Similarly, since

(cos x + i sin x)^3 = cos^3 x − 3 cos x sin^2 x + i(3 cos^2 x sin x − sin^3 x),


De Moivre's formula yields

cos(3x) = cos^3 x − 3 cos x sin^2 x  and  sin(3x) = 3 cos^2 x sin x − sin^3 x

In general,

cos(nx) = Σ_{k even} (−1)^(k/2) C(n, k) cos^(n−k) x sin^k x

and

sin(nx) = Σ_{k odd} (−1)^((k−1)/2) C(n, k) cos^(n−k) x sin^k x
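These identities are easy to spot-check numerically (a small sketch using the standard `math` module):

```python
import math

# Spot-check the double- and triple-angle identities from De Moivre's formula.
for x in [0.3, 1.1, 2.5]:
    c, s = math.cos(x), math.sin(x)
    assert math.isclose(math.cos(2 * x), c * c - s * s, abs_tol=1e-12)
    assert math.isclose(math.sin(2 * x), 2 * c * s, abs_tol=1e-12)
    assert math.isclose(math.cos(3 * x), c**3 - 3 * c * s * s, abs_tol=1e-12)
    assert math.isclose(math.sin(3 * x), 3 * c * c * s - s**3, abs_tol=1e-12)
```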

Series for e

The number e is often defined by the formula

e = lim_{n→∞} (1 + 1/n)^n

Applying the binomial theorem to this expression yields the usual infinite series for e. In particular:

(1 + 1/n)^n = 1 + C(n, 1)(1/n) + C(n, 2)(1/n)^2 + ... + C(n, n)(1/n)^n

The kth term of this sum is

C(n, k)(1/n)^k = (1/k!) · n(n − 1)(n − 2) ... (n − k + 1) / n^k

As n → ∞, the rational expression on the right approaches one, and therefore

lim_{n→∞} C(n, k)(1/n)^k = 1/k!

This indicates that e can be written as a series:

e = Σ_{k=0}^{∞} 1/k! = 1/0! + 1/1! + 1/2! + 1/3! + ...
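The rapid convergence of this series is visible numerically (a minimal sketch):

```python
from math import factorial, e

# Partial sums of 1/0! + 1/1! + 1/2! + ... converge very quickly to e.
partial = sum(1 / factorial(k) for k in range(18))
assert abs(partial - e) < 1e-14
```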


Common Products and Factors

Any power of a binomial can be obtained from the Binomial Theorem.

Binomial Theorem

For any value of n, whether positive, negative, integer or non-integer, the value of the nth power of a binomial is given by:

(a + x)^n = a^n + n a^(n−1) x + [n(n − 1)/2!] a^(n−2) x^2 + [n(n − 1)(n − 2)/3!] a^(n−3) x^3 + ...

Binomial Expansion

For any power n, the binomial (a + x)^n can be expanded term by term as above.


This is particularly useful when x is very much less than a, so that the first few terms provide a good approximation of the value of the expression. For a positive integer n there will always be n + 1 terms, and the general (k + 1)th term is:

C(n, k) a^(n−k) x^k
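The value of truncating the series when x is much smaller than a can be sketched as follows (`binom_approx` is our illustrative name, not from the text):

```python
def binom_approx(a, x, n, terms=2):
    """First few terms of (a + x)^n: a^n + n a^(n-1) x + [n(n-1)/2!] a^(n-2) x^2 + ..."""
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff * a**(n - k) * x**k
        coeff *= (n - k) / (k + 1)   # update C(n, k) -> C(n, k+1)
    return total

# With x = 0.01 very much less than a = 10, two terms already come very close.
exact = (10 + 0.01)**3
assert abs(binom_approx(10, 0.01, 3, terms=2) - exact) < 0.01
```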

How do you square a binomial?

Let's use (a + b) as a general binomial, and square it:

(a + b)^2 = a^2 + 2ab + b^2

Next let's show that this pattern will work for all types of binomials, for instance:

(x + 5)^2 = x^2 + 10x + 25
(x^2 + 3)^2 = x^4 + 6x^2 + 9
(x − 4)^2 = x^2 − 8x + 16


There are a few things to notice about the pattern:

If there is a constant or coefficient in either term, it is squared along with the variables.

The powers of the variable in the first term of the binomial descend in an orderly fashion.

2nd degree, 1st degree, 0 degree or 4th degree, 2nd degree, 0 degree

The powers of the variable in the second term ascend in an orderly fashion.

0 degree, 1st degree, 2nd degree

The sign of the 2nd term is negative in the 3rd example, as it should be.

The sum of the exponents for every term in the expansion is 2.

There are 3 terms in the 2nd power expansion.

What if we cube a binomial?

(a + b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3
(a − b)^3 = a^3 − 3a^2 b + 3ab^2 − b^3


There are a few things to notice about the pattern:

If there is a constant or coefficient in either term, it is raised to the appropriate power along with the variables.

The powers of the variable in the first term of the binomial descend in an orderly fashion.

3rd degree, 2nd degree, 1st degree, 0 degree

The powers of the variable in the second term ascend in an orderly fashion.

0 degree, 1st degree, 2nd degree, 3rd degree

The signs of the 2nd and 4th term are appropriately negative in the 2nd example.

The sum of the exponents in each term of the expansion is 3.

There are 4 terms in the 3rd degree expansion.

Summarizing: What patterns do we need to do any binomial expansion?

The powers of the first term (the "a" term) descend in consecutive order, starting with the power of the expansion and ending with the zero power. Note that we raise the entire term to that power, then one lower, etc.

The powers of the second term (the “b” term) ascend in consecutive integer order, starting with zero power and ending with the power of the expansion.


The sum of the exponents (before simplifying them) of each term is the same as the power of the expansion.

You will always have one more term than the number of the expansion.

When the second term is negative, the signs of the terms will alternate: positive, then negative, etc. The pattern of the coefficients follows Pascal's Triangle:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1

i. Expand

Start with the first term

Note there is no need to show the coefficient, because it is 1. The 2nd term will be

Note that the exponents add up to 5. The 3rd term will be

The 4th term will be

The 5th term will be

The last term will be


Again there is no need to show the coefficient, because it is 1. Therefore

ii. Expand

The degree is 5 so we will have six terms altogether. The coefficients needed to complete the expansion are the 1 5 10 10 5 1 row of Pascal’s Triangle.

Start with the first term

As usual, there is no need to show the coefficient, because it is 1. The 2nd term will be

Note that the exponents add up to 5. The 3rd term will be

The 4th term will be

The 5th term will be

The last term will be

All together we get:

POWER SERIES

In mathematics, a power series (in one variable) is an infinite series of the form

Σ_{n=0}^{∞} a_n (x − c)^n = a_0 + a_1 (x − c) + a_2 (x − c)^2 + a_3 (x − c)^3 + ...,

where a_n represents the coefficient of the nth term, c is a constant, and x varies around c (for this reason one sometimes speaks of the series as being centered at c). This series usually arises as the Taylor series of some known function; the Taylor series article contains many examples.

In many situations c is equal to zero, for instance when considering a Maclaurin series. In such cases, the power series takes the simpler form

Σ_{n=0}^{∞} a_n x^n = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...

These power series arise primarily in analysis, but also occur in combinatorics (under the name of generating functions) and in electrical engineering (under the name of the Z-transform). The familiar decimal notation for real numbers can also be viewed as an example of a power series, with integer coefficients, but with the argument x fixed at 1⁄10. In number theory, the concept of p-adic numbers is also closely related to that of a power series.

MACLAURIN SERIES

A Maclaurin series is a Taylor series expansion of a function f(x) about 0:

f(x) = f(0) + f'(0) x + f''(0) x^2/2! + f'''(0) x^3/3! + ... = Σ_{n=0}^{∞} f^(n)(0) x^n / n!    (1)

Maclaurin series are named after the Scottish mathematician Colin Maclaurin.

The Maclaurin series of a function f up to order n may be found in Mathematica using Series[f, {x, 0, n}]. The nth term of a Maclaurin series of a function f can be computed using SeriesCoefficient[f, {x, 0, n}] and is given by

a_n = f^(n)(0) / n!    (2)

Maclaurin series are a type of series expansion in which all terms are nonnegative integer powers of the variable. Other more general types of series include the Laurent series and the Puiseux series.
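As a concrete numerical sketch (ours, not from the text), the Maclaurin partial sums of sin x converge quickly near 0:

```python
from math import factorial, sin

def maclaurin_sin(x, terms=10):
    """Maclaurin series of sin: x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2 * k + 1) / factorial(2 * k + 1) for k in range(terms))

assert abs(maclaurin_sin(1.0) - sin(1.0)) < 1e-12
```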

Maclaurin series for common functions include, for example:

e^x = 1 + x + x^2/2! + x^3/3! + ...
sin x = x − x^3/3! + x^5/5! − ...
cos x = 1 − x^2/2! + x^4/4! − ...
1/(1 − x) = 1 + x + x^2 + x^3 + ...
ln(1 + x) = x − x^2/2 + x^3/3 − ...
(1 + x)^r = 1 + rx + r(r − 1) x^2/2! + ...

VECTORS

Vectors are usually denoted in lowercase boldface, as a, or lowercase italic boldface, as a. (Uppercase letters are typically used to represent matrices.) Other conventions include an arrow written above the symbol, as in →a, especially in handwriting. Alternately, some use a tilde (~) or a wavy underline drawn beneath the symbol, which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B (see figure), it can also be denoted as →AB or AB.

Vectors are usually shown in graphs or other diagrams as arrows (directed line segments), as illustrated in the figure. Here the point A is called the origin, tail, base, or initial point; point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction.

In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (n-tuple). These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components (or scalar projections) of the vector on the axes of the coordinate system.

As an example in two dimensions (see figure), the vector from the origin O = (0,0) to the point A = (2,3) is simply written as

a = (2, 3)


A vector in the Cartesian plane, showing the position of a point A with coordinates (2,3).


The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation →OA is usually not deemed necessary and very rarely used.

In three-dimensional Euclidean space (or R^3), vectors are identified with triples of scalar components:

a = (a_1, a_2, a_3),

also written

a = (a_x, a_y, a_z)

These numbers are often arranged into a column vector or row vector, particularly when dealing with matrices, as follows:

a = [a_1, a_2, a_3]^T (column vector)  or  a = [a_1 a_2 a_3] (row vector)


Another way to represent a vector in n dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them:

e_1 = (1, 0, 0),  e_2 = (0, 1, 0),  e_3 = (0, 0, 1)

These have the intuitive interpretation as vectors of unit length pointing up the x, y, and z axis of a Cartesian coordinate system, respectively, and they are sometimes referred to as versors of those axes. In terms of these, any vector a in R^3 can be expressed in the form:

a = a_1 e_1 + a_2 e_2 + a_3 e_3

Scalar multiplication

Scalar multiplication of a vector by a factor of 3 stretches the vector out.

The scalar multiplications 2a and −a of a vector a


A vector may also be multiplied, or re-scaled, by a real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is

r a = (r a_1, r a_2, r a_3)

The length or magnitude or norm of the vector a is denoted by ||a|| or, less commonly, |a|, which is not to be confused with the absolute value (a scalar "norm").

The length of the vector a can be computed with the Euclidean norm

||a|| = sqrt(a_1^2 + a_2^2 + a_3^2),

which is a consequence of the Pythagorean theorem, since the basis vectors e_1, e_2, e_3 are orthogonal unit vectors.

This happens to be equal to the square root of the dot product, discussed below, of the vector with itself:

||a|| = sqrt(a · a)

Unit vector

The normalization of a vector a into a unit vector â

Main article: Unit vector

A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â.

To normalize a vector a = [a_1, a_2, a_3], scale the vector by the reciprocal of its length ||a||. That is:

â = a / ||a|| = [a_1/||a||, a_2/||a||, a_3/||a||]
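Normalization is a one-liner once the norm is in hand (a minimal sketch; the function names are ours):

```python
import math

def norm(a):
    """Euclidean norm ||a|| = sqrt(a_1^2 + a_2^2 + a_3^2)."""
    return math.sqrt(sum(c * c for c in a))

def normalize(a):
    """Scale a by 1/||a|| to obtain the unit vector a-hat."""
    n = norm(a)
    return [c / n for c in a]

a = [3.0, 4.0, 0.0]
assert norm(a) == 5.0
assert abs(norm(normalize(a)) - 1.0) < 1e-12
```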

Null vector

Main article: Null vector


The null vector (or zero vector) is the vector with length zero. Written out in coordinates, the vector is (0, 0, 0), and it is commonly denoted 0 (in boldface) or simply 0. Unlike any other vector, it does not have a direction, and cannot be normalized (that is, there is no unit vector which is a multiple of the null vector). The sum of the null vector with any vector a is a (that is, 0 + a = a).

Dot product

Main article: Dot product

The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b and is defined as:

a ∙ b = ||a|| ||b|| cos θ,

where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means that a and b are drawn with a common start point and then the length of a is multiplied with the length of that component of b that points in the same direction as a.

The dot product can also be defined as the sum of the products of the components of each vector:

a ∙ b = a_1 b_1 + a_2 b_2 + a_3 b_3
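The two definitions can be reconciled numerically (a sketch; the function names are ours):

```python
import math

def dot(a, b):
    """Component form: a . b = a_1 b_1 + a_2 b_2 + a_3 b_3."""
    return sum(x * y for x, y in zip(a, b))

a, b = [1.0, 2.0, 3.0], [4.0, -5.0, 6.0]
assert dot(a, b) == 12.0

# Recover theta from the geometric definition a . b = ||a|| ||b|| cos(theta).
na, nb = math.sqrt(dot(a, a)), math.sqrt(dot(b, b))
theta = math.acos(dot(a, b) / (na * nb))
assert math.isclose(dot(a, b), na * nb * math.cos(theta), abs_tol=1e-9)
```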

Cross product

Main article: Cross product

The cross product (also called the vector product or outer product) is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as

a × b = ||a|| ||b|| sin(θ) n,

where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (−n).


An illustration of the cross product

The cross product a × b is defined so that a, b, and a × b also becomes a right-handed system (but note that a and b are not necessarily orthogonal). This is the right-hand rule.

The length of a × b can be interpreted as the area of the parallelogram having a and b as sides.

The cross product can be written in components as

a × b = (a_2 b_3 − a_3 b_2, a_3 b_1 − a_1 b_3, a_1 b_2 − a_2 b_1)

For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below).
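The component formula, the perpendicularity, and the parallelogram-area interpretation can all be checked in a few lines (a sketch; the function names are ours):

```python
import math

def cross(a, b):
    """a x b = (a_2 b_3 - a_3 b_2, a_3 b_1 - a_1 b_3, a_1 b_2 - a_2 b_1)."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = [2.0, 0.0, 0.0], [0.0, 3.0, 0.0]
c = cross(a, b)
assert c == [0.0, 0.0, 6.0]                    # e1 x e2 points along e3 (right-handed)
assert dot(c, a) == 0.0 and dot(c, b) == 0.0   # perpendicular to both factors
assert math.sqrt(dot(c, c)) == 6.0             # area of the 2-by-3 parallelogram
```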

PARTIAL FRACTIONS

Example 1

Consider the function f(x) = 1/(x^2 + 2x − 3). Here, the denominator splits into two distinct linear factors:

q(x) = x2 + 2x − 3 = (x + 3)(x − 1)

so we have the partial fraction decomposition

1/(x^2 + 2x − 3) = A/(x + 3) + B/(x − 1)

Multiplying through by x^2 + 2x − 3, we have the polynomial identity

1 = A(x − 1) + B(x + 3)

Substituting x = −3 into this equation gives A = −1/4, and substituting x = 1 gives B = 1/4, so that

1/(x^2 + 2x − 3) = (−1/4)/(x + 3) + (1/4)/(x − 1)
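The decomposition is easy to verify numerically at a few sample points (a quick sketch):

```python
# Check 1/(x^2 + 2x - 3) = (-1/4)/(x + 3) + (1/4)/(x - 1) away from the poles.
for x in [0.5, 2.0, -1.0, 10.0]:
    lhs = 1 / (x**2 + 2 * x - 3)
    rhs = (-0.25) / (x + 3) + 0.25 / (x - 1)
    assert abs(lhs - rhs) < 1e-12
```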

Example 2

Consider f(x) = (x^3 + 16)/(x^3 − 4x^2 + 8x). After long division, we have

f(x) = 1 + (4x^2 − 8x + 16)/(x^3 − 4x^2 + 8x) = 1 + (4x^2 − 8x + 16)/(x(x^2 − 4x + 8))

Since (−4)^2 − 4(8) = −16 < 0, x^2 − 4x + 8 is irreducible, and so

(4x^2 − 8x + 16)/(x(x^2 − 4x + 8)) = A/x + (Bx + C)/(x^2 − 4x + 8)

Multiplying through by x^3 − 4x^2 + 8x, we have the polynomial identity

4x^2 − 8x + 16 = A(x^2 − 4x + 8) + (Bx + C)x


Taking x = 0, we see that 16 = 8A, so A = 2. Comparing the x^2 coefficients, we see that 4 = A + B = 2 + B, so B = 2. Comparing linear coefficients, we see that −8 = −4A + C = −8 + C, so C = 0. Altogether,

f(x) = 1 + 2/x + 2x/(x^2 − 4x + 8)

Example 3

After long division and factoring, we have

f(x) = (x^5 − 2x^4 + 5x^3 − 5x^2 + 6x − 1)/((x − 1)^3 (x^2 + 1)^2)

The partial fraction decomposition takes the form

f(x) = A/(x − 1) + B/(x − 1)^2 + C/(x − 1)^3 + (Dx + E)/(x^2 + 1) + (Fx + G)/(x^2 + 1)^2

Multiplying through by (x − 1)^3 (x^2 + 1)^2 we have the polynomial identity

x^5 − 2x^4 + 5x^3 − 5x^2 + 6x − 1 = A(x − 1)^2(x^2 + 1)^2 + B(x − 1)(x^2 + 1)^2 + C(x^2 + 1)^2 + (Dx + E)(x − 1)^3(x^2 + 1) + (Fx + G)(x − 1)^3

Taking x = 1 gives 4 = 4C, so C = 1. Similarly, taking x = i gives 2 + 2i = (Fi + G)(2 + 2i), so Fi + G = 1, so F = 0 and G = 1 by equating real and imaginary parts. We now have the identity

x^5 − 2x^4 + 5x^3 − 5x^2 + 6x − 1 = A(x − 1)^2(x^2 + 1)^2 + B(x − 1)(x^2 + 1)^2 + (x^2 + 1)^2 + (Dx + E)(x − 1)^3(x^2 + 1) + (x − 1)^3

Taking constant terms gives E = A − B + 1, taking leading coefficients gives A = −D, and taking x-coefficients gives B = 3 − D − 3E. Putting all of this together, E = A − B + 1 = −D − (3 − D − 3E) + 1 = 3E − 2, so E = 1 and A = B = −D. Now,

x^5 − 2x^4 + 5x^3 − 5x^2 + 6x − 1 = A(x − 1)^2(x^2 + 1)^2 + A(x − 1)(x^2 + 1)^2 + (x^2 + 1)^2 + (−Ax + 1)(x − 1)^3(x^2 + 1) + (x − 1)^3

Taking x = −1 gives −20 = −8A − 20, so A = B = D = 0. The partial fraction decomposition of ƒ(x) is thus

f(x) = 1/(x − 1)^3 + 1/(x^2 + 1) + 1/(x^2 + 1)^2


LAPLACE TRANSFORMS

In control system design it is necessary to analyse the performance and stability of a proposed system before it is built or implemented. Many analysis techniques use transformed variables to facilitate mathematical treatment of the problem. In the analysis of continuous-time dynamical systems, this generally involves the use of Laplace Transforms.

Applying Laplace Transforms is analogous to using logarithms to simplify certain types of mathematical operations. By taking logarithms, numbers are transformed into powers of 10 or e (natural logarithms). As a result of the transformation, multiplications and divisions are replaced by additions and subtractions respectively. Similarly, the application of Laplace Transforms to the analysis of systems which can be described by linear, ordinary time differential equations overcomes some of the complexities encountered in the time-domain solution of such equations.

Laplace Transforms are used to convert time domain relationships to a set of equations expressed in terms of the Laplace operator 's'. Thereafter, the solution of the original problem is effected by simple algebraic manipulations in the 's' or Laplace domain rather than the time domain.

The Laplace Transform of a time function f(t) is arrived at by multiplying f(t) by e^(−st) and integrating from 0 to infinity:

F(s) = ∫_0^∞ f(t) e^(−st) dt

f(t) must be a given function which is defined for all positive values of t. s is a complex variable defined by s = σ + jω, where j = sqrt(−1).

A table of Laplace transforms is available to transform time-domain functions to the s domain. The necessary operations are carried out and the Laplace transforms obtained in terms of s are then inverted from the s domain to the time (t) domain. This transformation from the s to the t domain is called the inverse transform.

The contour integral which defines the inverse Laplace Transform is shown below for reference only, for in practice this integral is seldom used: table lookups are generally all that is required for the inverse transform procedure.

f(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} F(s) e^(st) ds
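The defining integral can also be approximated directly, which is a handy way to sanity-check table entries (a rough numerical sketch, ours: trapezoid rule, truncated at a finite upper limit):

```python
import math

def laplace_numeric(f, s, t_max=30.0, steps=100000):
    """Approximate F(s) = integral_0^inf f(t) e^(-s t) dt with the trapezoid rule."""
    dt = t_max / steps
    total = 0.5 * (f(0.0) + f(t_max) * math.exp(-s * t_max))
    for i in range(1, steps):
        t = i * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

# Table entry: L{e^(-a t)} = 1/(s + a). With a = 1 and s = 2 the integral is 1/3.
a = 1.0
F = laplace_numeric(lambda t: math.exp(-a * t), s=2.0)
assert abs(F - 1.0 / (2.0 + a)) < 1e-6
```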


Laplace Transform Operations

Operation                 f(t)                            F(s)
Linearity                 x_1 f_1(t) + x_2 f_2(t)         x_1 F_1(s) + x_2 F_2(s)
Constant multiplication   a·f(t)                          a·F(s)
Complex shift theorem     e^(at) f(t)                     F(s − a)
Real shift theorem        f(t − T)                        e^(−Ts) F(s)   (T ≥ 0)
Scaling theorem           f(t/a)                          a F(as)
First derivative          f'(t)                           s F(s) − f(0+)
2nd derivative            f''(t)                          s^2 F(s) − s f(0+) − f'(0+)
3rd derivative            f'''(t)                         s^3 F(s) − s^2 f(0+) − s f'(0+) − f''(0+)
4th, 5th, ... derivative  follow the principles established above
First integral            ∫_0^t f(τ) dτ                   (1/s) F(s)
Convolution integral      ∫_0^t f_1(τ) f_2(t − τ) dτ      F_1(s) F_2(s)

Table showing a selection of Laplace Transforms

No   Time Function f(t)                         Laplace Transform F(s)
1    δ(t), unit impulse                         1
2    δ(t − T), delayed impulse                  e^(−Ts)
3    t, unit ramp                               1/s^2
4    t^n                                        n!/s^(n+1)
5    e^(−at)                                    1/(s + a)
6    e^(at)                                     1/(s − a)
7    (1/a)(1 − e^(−at))                         1/(s(s + a))
8    (t^n/n!) e^(−at)                           1/(s + a)^(n+1),  n = 1, 2, 3, ...
9    sin ωt                                     ω/(s^2 + ω^2)
10   cos ωt                                     s/(s^2 + ω^2)
13   (1/ω^2)(1 − cos ωt)                        1/(s(s^2 + ω^2))
14   (1/a^2)(at − 1 + e^(−at))                  1/(s^2(s + a))
17   u(t) or 1, unit step                       1/s
18   u(t − T), delayed step                     (1/s) e^(−Ts)
19   u(t) − u(t − T), rectangular pulse         (1/s)(1 − e^(−Ts))
21   e^(−at) cos ωt                             (s + a)/((s + a)^2 + ω^2)
22   (1/a^2)(1 − e^(−at) − at e^(−at))          1/(s(s + a)^2)
24   (1/ω) e^(−at) sin ωt                       1/((s + a)^2 + ω^2)

Derivation of table values

Examples of how the table values have been derived are provided below.

From entry 5 above, the transform for a unit step, i.e. f(t) = 1, is easily obtained by setting a = 0 (so that e^(−at) = e^0 = 1), giving F(s) = 1/s.

From entry 6 above, the transforms for cos ωt and sin ωt are obtained by setting a = jω: since e^(jωt) = cos ωt + j sin ωt transforms to 1/(s − jω) = (s + jω)/(s^2 + ω^2), taking real and imaginary parts gives s/(s^2 + ω^2) and ω/(s^2 + ω^2) respectively.

Page 31: matEMATIK

Laplace transform example

An example of using Laplace transforms is provided below.

The manipulation of the Laplace transform equation into a form that enables a convenient inverse transform often involves the use of partial fractions.

An example application including partial fraction expansion is as follows.

Page 32: matEMATIK

Partial Fraction Expansion process using the Heaviside cover-up method

The Laplace operations generally result in a ratio

G(s) = N(s)/D(s)

This must be proper, in that the order of the denominator D(s) must be higher than that of the numerator N(s). If the function is not proper, then the numerator N(s) must be divided by the denominator using long division.

The next step is to factor D(s):

D(s) = (s − a_1)(s − a_2) ... (s − a_n),

where a_1, a_2, etc. are the roots of D(s).

G(s) is then rewritten in partial fraction form:

G(s) = A_1/(s − a_1) + A_2/(s − a_2) + ... + A_n/(s − a_n)

Page 33: matEMATIK

To obtain A_1, simply multiply both sides of the equation by (s − a_1) and let s = a_1. This results in all terms on the RHS becoming zero apart from A_1:

A_1 = (s − a_1) G(s) | s = a_1

The LHS is multiplied by (s − a_1), thus cancelling out (s − a_1) in the denominator, and all instances of s are then replaced by a_1.

Note: If one of the factors in the denominator is simply s, then this is equivalent to (s − a_x) with a_x = 0.
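For simple poles, the cover-up rule amounts to evaluating N(s) over the product of the remaining factors; a minimal sketch (function names are ours):

```python
def coverup_residues(num, poles):
    """A_i = N(a_i) / prod_{j != i} (a_i - a_j) for G(s) = N(s) / prod_i (s - a_i)."""
    residues = []
    for i, a in enumerate(poles):
        denom = 1.0
        for j, b in enumerate(poles):
            if j != i:
                denom *= (a - b)
        residues.append(num(a) / denom)
    return residues

# G(s) = 1/(s(s + 1)) = 1/s - 1/(s + 1): residues 1 and -1.
assert coverup_residues(lambda s: 1.0, [0.0, -1.0]) == [1.0, -1.0]
```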

Repeated Roots

When the denominator has repeated roots, the breakdown into partial fractions is treated differently, as shown below:

G(s) = b_0/(s − b)^r + b_1/(s − b)^(r−1) + ... + b_(r−1)/(s − b)

The factor b_0 is obtained in exactly the same way as above:

b_0 = (s − b)^r G(s) | s = b

The factor b_1 is obtained by first differentiating (s − b)^r G(s) with respect to s and then substituting s = b as before. This will generally involve the quotient rule d(u/v) = (v du − u dv)/v^2.

The factors b_2 to b_(r−1) are obtained by progressive differentiation of (s − b)^r G(s) and dividing the result by the factorial of the order of differentiation (for d^2/ds^2 divide by 2!, for d^3/ds^3 divide by 3! = 6):

b_k = (1/k!) d^k/ds^k [(s − b)^r G(s)] | s = b

Complex Roots
