
The Finite Element Method Defined

The Finite Element Method (FEM) is a weighted residual method that uses compactly-supported basis functions.

Brief Comparison with Other Methods

Finite Difference (FD) Method:

FD approximates an operator (e.g., the derivative) and solves a problem on a set of points (the grid)

Finite Element (FE) Method:

FE uses exact operators but approximates the solution using basis functions. Also, FE solves the problem on the interiors of grid cells (and optionally on the gridpoints as well).

Brief Comparison with Other Methods

Spectral Methods:

Spectral methods use global basis functions to approximate a solution across the entire domain.

Finite Element (FE) Method:

FE methods use compact basis functions to approximate a solution on individual elements.


Overview of the Finite Element Method

Strong form → Weak form → Galerkin approximation → Matrix form

Sample Problem

Axial deformation of a bar subjected to a uniform load (1-D Poisson equation)

$$EA\,\frac{d^2u}{dx^2} + p = 0, \qquad 0 \le x \le L$$

$$u(0) = 0, \qquad EA\,\frac{du}{dx}\bigg|_{x=L} = 0$$

u = axial displacement

p = applied axial load per unit length (uniform)

E = Young’s modulus = 1

A = cross-sectional area = 1

Strong Form

The set of governing PDEs, with boundary conditions, is called the “strong form” of the problem.

Hence, our strong form is (Poisson equation in 1-D):

$$\frac{d^2u}{dx^2} + p = 0, \qquad 0 \le x \le L$$

$$u(0) = 0, \qquad \frac{du}{dx}\bigg|_{x=L} = 0$$
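As a quick sanity check, here is a minimal SymPy sketch (assuming E = A = 1, as above) that solves this strong form symbolically; the Dirichlet and Neumann conditions are imposed through dsolve's ics argument.

```python
import sympy as sp

x, p, L = sp.symbols("x p L", positive=True)
u = sp.Function("u")

# Strong form with E = A = 1:  u'' + p = 0 on [0, L]
ode = sp.Eq(u(x).diff(x, 2) + p, 0)

# Boundary conditions: u(0) = 0 (Dirichlet) and u'(L) = 0 (Neumann)
sol = sp.dsolve(ode, u(x), ics={u(0): 0, u(x).diff(x).subs(x, L): 0})
print(sol)   # u(x) = L*p*x - p*x**2/2
```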

We now reformulate the problem into the weak form.

The weak form is a variational statement of the problem in which we integrate against a test function. The choice of test function is up to us.

This has the effect of relaxing the problem; instead of finding an exact solution everywhere, we are finding a solution that satisfies the strong form on average over the domain.

Weak Form

Strong form:
$$\frac{d^2u}{dx^2} + p = 0, \qquad 0 \le x \le L$$

Residual: $R = \dfrac{d^2u}{dx^2} + p = 0$

Weak form: multiply the residual by a function $v$ and integrate over the domain:
$$\int_0^L \left(\frac{d^2u}{dx^2} + p\right) v\, dx = 0$$

v is our test function

We will choose the test function later.

Why is it “weak”?

It is a weaker statement of the problem.

A solution of the strong form will also satisfy the weak form, but not vice versa.

Analogous to “weak” and “strong” convergence:

Strong convergence: $\displaystyle \lim_{n\to\infty} f_n(x) = f(x)$

Weak convergence: $\displaystyle \lim_{n\to\infty} \int f_n(x)\, v(x)\, dx = \int f(x)\, v(x)\, dx$ for every test function $v$

Weak Form

Choosing the test function:

We can choose any v we want, so let's choose v such that it satisfies homogeneous boundary conditions wherever the actual solution satisfies Dirichlet boundary conditions. We’ll see why this helps us, and later will do it with more mathematical rigor.

So in our example, since u(0) = 0, we let v(0) = 0.

Returning to the weak form:
$$\int_0^L \left(\frac{d^2u}{dx^2} + p\right) v\, dx = 0 \quad\Longrightarrow\quad \int_0^L \frac{d^2u}{dx^2}\, v\, dx = -\int_0^L p\, v\, dx$$

Weak Form

Integrate the LHS by parts:
$$\int_0^L \frac{d^2u}{dx^2}\, v\, dx = \Big[v\,\frac{du}{dx}\Big]_0^L - \int_0^L \frac{du}{dx}\frac{dv}{dx}\, dx = v(L)\,\frac{du}{dx}\bigg|_{x=L} - v(0)\,\frac{du}{dx}\bigg|_{x=0} - \int_0^L \frac{du}{dx}\frac{dv}{dx}\, dx$$

Weak Form

Recall the boundary conditions on u and v:
$$u(0) = 0, \qquad \frac{du}{dx}\bigg|_{x=L} = 0, \qquad v(0) = 0$$

The boundary terms therefore vanish, and the weak form becomes:
$$\int_0^L \frac{du}{dx}\frac{dv}{dx}\, dx = \int_0^L p\, v\, dx$$

Hence, the weak form satisfies the Neumann boundary condition automatically!
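As a quick check, a SymPy sketch (using the exact solution u = pLx − px²/2 that appears at the end of these slides) confirms that this weak statement holds for test functions with v(0) = 0; three sample choices of v are tried below.

```python
import sympy as sp

x, p, L = sp.symbols("x p L", positive=True)
u = p*L*x - p*x**2/2                      # exact solution (derived later in the slides)

# Any sufficiently smooth v with v(0) = 0 is admissible; try a few sample choices
for v in (x, x**2, sp.sin(sp.pi*x/(2*L))):
    lhs = sp.integrate(sp.diff(u, x) * sp.diff(v, x), (x, 0, L))
    rhs = sp.integrate(p * v, (x, 0, L))
    print(sp.simplify(lhs - rhs))         # prints 0 for each admissible v
```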

Weak Form

Variational statement: Find $u \in H_0^1$ such that $B(u, v) = F(v)$ for all $v \in H_0^1$, where
$$B(u, v) = \int_0^L \frac{du}{dx}\frac{dv}{dx}\, dx \quad \text{(a bilinear functional)}, \qquad F(v) = \int_0^L p\, v\, dx \quad \text{(a linear functional)}$$

Why is it “variational”?

u and v are functions from an infinite-dimensional function space H.

We still haven’t done the “finite element method” yet; we have just restated the problem in the weak formulation.

So what makes it “finite elements”?

Solving the problem locally on elements

Finite-dimensional approximation to an infinite-dimensional space → Galerkin’s Method

Galerkin’s Method

Choose a finite basis $\{\varphi_j(x)\}_{j=1}^{N}$. Then
$$u(x) \approx u_h(x) = \sum_{j=1}^{N} c_j\,\varphi_j(x), \qquad v(x) = \sum_{i=1}^{N} b_i\,\varphi_i(x),$$
where the $c_j$ are unknowns to solve for and the $b_i$ are arbitrarily chosen.

Insert these into our weak form:
$$\int_0^L \frac{d}{dx}\Big(\sum_{j=1}^{N} c_j\varphi_j\Big)\,\frac{d}{dx}\Big(\sum_{i=1}^{N} b_i\varphi_i\Big)\, dx = \int_0^L p\,\Big(\sum_{i=1}^{N} b_i\varphi_i\Big)\, dx$$

Galerkin’s Method

Rearranging:
$$\sum_{i=1}^{N} b_i \left[\,\sum_{j=1}^{N} c_j \int_0^L \frac{d\varphi_j}{dx}\frac{d\varphi_i}{dx}\, dx \;-\; \int_0^L p\,\varphi_i\, dx\right] = 0$$

Cancelling the arbitrary $b_i$:
$$\sum_{j=1}^{N} c_j \int_0^L \frac{d\varphi_j}{dx}\frac{d\varphi_i}{dx}\, dx = \int_0^L p\,\varphi_i\, dx, \qquad i = 1, \dots, N$$

Galerkin’s Method

We now have a matrix problem $Kc = F$, where $c$ is a vector of unknowns and
$$K_{ij} = \int_0^L \frac{d\varphi_i}{dx}\frac{d\varphi_j}{dx}\, dx, \qquad F_i = \int_0^L p\,\varphi_i\, dx.$$

We can already see that $K$ will be symmetric, since we can interchange $i$ and $j$ without effect.
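These definitions translate directly into code. Here is a short SymPy sketch; the helper name galerkin_system and the simple polynomial basis are illustrative choices, not part of the slides.

```python
import sympy as sp

x, p, L = sp.symbols("x p L", positive=True)

def galerkin_system(phis):
    """K_ij = integral of phi_i' phi_j' over [0, L]; F_i = integral of p*phi_i."""
    n = len(phis)
    K = sp.zeros(n, n)
    F = sp.zeros(n, 1)
    for i in range(n):
        F[i] = sp.integrate(p * phis[i], (x, 0, L))
        for j in range(n):
            K[i, j] = sp.integrate(sp.diff(phis[i], x) * sp.diff(phis[j], x), (x, 0, L))
    return K, F

# Illustrative basis satisfying phi(0) = 0: two monomials
K, F = galerkin_system([x, x**2])
c = K.solve(F)                    # solve K c = F
print(K)                          # Matrix([[L, L**2], [L**2, 4*L**3/3]])
print(sp.simplify(c))             # Matrix([[L*p], [-p/2]])  ->  u_h = p*L*x - p*x**2/2
```

For this particular basis the exact solution happens to lie in the span of {x, x²}, so Galerkin reproduces it exactly; with the compact hat functions chosen below, we instead obtain a piecewise-linear approximation.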

Galerkin’s Method

So what have we done so far?

1) Reformulated the problem in the weak form.

2) Chosen a finite-dimensional approximation to the solution.

Recall the weak form written in terms of the residual:
$$\int_0^L R\, v\, dx = \sum_{i=1}^{N} b_i \int_0^L R\, \varphi_i\, dx = 0, \qquad R = \frac{d^2u_h}{dx^2} + p$$

This is an $L^2$ inner product. Therefore, the residual is orthogonal to our space of basis functions (the “orthogonality condition”).

Orthogonality Condition

$$\int_0^L R\,\varphi_i\, dx = 0, \qquad i = 1, \dots, N$$

The residual is orthogonal to our space of basis functions:

[Figure: the true solution u in the space H and its approximation uh in the subspace Hh spanned by the φi.]

Therefore, given some space of approximate functions Hh, we are finding the uh that is closest (as measured by the L² inner product) to the actual solution u.

Discretization and Basis Functions

Let’s continue with our sample problem. Now we discretize our domain. For this example, we will discretize x=[0, L] into 2 “elements”.

[Figure: the domain [0, L] split into two elements, Ω1 = [0, h] and Ω2 = [h, 2h = L].]

In 1-D, elements are segments. In 2-D, they are triangles, quadrilaterals, etc. In 3-D, they are solids, such as tetrahedra. We will solve the Galerkin problem on each element.

Discretization and Basis Functions

For a set of basis functions, we can choose anything. For simplicity here, we choose piecewise linear “hat functions”.

Our solution will be a linear combination of these functions.

[Figure: hat functions φ1, φ2, φ3 centered on the nodes x1 = 0, x2 = L/2, x3 = L.]

Basis functions satisfy $\varphi_i(x_j) = \delta_{ij}$: our solution will be interpolatory. Also, they satisfy the partition of unity, $\sum_i \varphi_i(x) = 1$.

Discretization and Basis Functions

To save time, we can throw out φ1 a priori: since in this example u(0) = 0, we know that the coefficient c1 must be 0.

[Figure: the remaining hat functions φ2 and φ3 on the nodes x1 = 0, x2 = L/2, x3 = L.]

Basis Functions

$$\varphi_2(x) = \begin{cases} \dfrac{2x}{L}, & 0 \le x \le \dfrac{L}{2} \\[4pt] \dfrac{2(L - x)}{L}, & \dfrac{L}{2} \le x \le L \\[4pt] 0, & \text{otherwise} \end{cases} \qquad\qquad \varphi_3(x) = \begin{cases} \dfrac{2x}{L} - 1, & \dfrac{L}{2} \le x \le L \\[4pt] 0, & \text{otherwise} \end{cases}$$

[Figure: φ2 and φ3 plotted over the nodes x1 = 0, x2 = L/2, x3 = L.]
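In code, these hat functions might look like the following sketch (φ1 is kept here only to illustrate the interpolatory and partition-of-unity properties; L = 1 is an assumed value).

```python
import numpy as np

L = 1.0  # assumed bar length for this sketch

def phi1(x):  # hat centered at x1 = 0 (discarded in the solve because u(0) = 0)
    return np.where(x <= L/2, 1 - 2*x/L, 0.0)

def phi2(x):  # hat centered at x2 = L/2
    return np.where(x <= L/2, 2*x/L, 2*(L - x)/L)

def phi3(x):  # hat centered at x3 = L
    return np.where(x >= L/2, 2*x/L - 1, 0.0)

nodes = np.array([0.0, L/2, L])
for k, phi in enumerate((phi1, phi2, phi3), start=1):
    print(f"phi{k}(nodes) =", phi(nodes))    # 1 at its own node, 0 at the others

xs = np.linspace(0.0, L, 5)
print(phi1(xs) + phi2(xs) + phi3(xs))        # partition of unity: all entries are 1
```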

Matrix Formulation

Given our matrix problem $Kc = F$, we can insert the $\varphi_i$ chosen on the previous slide and arrive at a linear algebra problem. Differentiating the basis functions and then evaluating the integrals, we have:
$$K = \frac{1}{L}\begin{bmatrix} 4 & -2 \\ -2 & 2 \end{bmatrix}, \qquad F = pL\begin{bmatrix} 1/2 \\ 1/4 \end{bmatrix}, \qquad Kc = F$$

Notice $K$ is symmetric, as expected.

In a computer code, differentiating the basis functions can be done in advance, since the basis functions are known, and the integration would be performed numerically by quadrature. It is standard in FEM to use Gaussian quadrature, since it is exact for polynomials.
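A sketch of that workflow, assembling K and F element by element with a two-point Gauss rule (exact here, since the integrands are low-order polynomials on each element); L = p = 1 and the reference-element shape functions are assumed choices for this illustration.

```python
import numpy as np

L, p, n_elem = 1.0, 1.0, 2                  # assumed values; two elements as in the example
h = L / n_elem

# Two-point Gauss rule on the reference element [-1, 1] (exact for cubics)
gauss_pts = np.array([-1.0, 1.0]) / np.sqrt(3.0)
gauss_wts = np.array([1.0, 1.0])

# Local linear shape functions on [-1, 1] and their x-derivatives (d/dx = (2/h) d/dxi)
N_local = lambda xi: np.array([0.5 * (1.0 - xi), 0.5 * (1.0 + xi)])
dN_dx = np.array([-0.5, 0.5]) * (2.0 / h)

K = np.zeros((n_elem + 1, n_elem + 1))
F = np.zeros(n_elem + 1)
for e in range(n_elem):                     # loop over elements, accumulate contributions
    dofs = [e, e + 1]
    for xi, w in zip(gauss_pts, gauss_wts):
        jac = h / 2.0                       # dx = (h/2) dxi
        K[np.ix_(dofs, dofs)] += w * jac * np.outer(dN_dx, dN_dx)
        F[dofs] += w * jac * p * N_local(xi)

# Enforce u(0) = 0 by dropping the row/column for the discarded phi_1
K, F = K[1:, 1:], F[1:]
print(K)    # [[ 4. -2.] [-2.  2.]]  = (1/L) [[4, -2], [-2, 2]]
print(F)    # [0.5  0.25]            = p L [1/2, 1/4]
```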

Solution

Solving the matrix problem from the previous slide (e.g., by Gaussian elimination), we obtain our coefficients:
$$c_2 = \frac{3pL^2}{8}, \qquad c_3 = \frac{pL^2}{2},$$
which, when multiplied by the basis functions, give our final numerical solution:
$$u_h(x) = \begin{cases} \dfrac{3pLx}{4}, & 0 \le x \le \dfrac{L}{2} \\[4pt] \dfrac{pL}{4}(L + x), & \dfrac{L}{2} \le x \le L \end{cases}$$

The exact analytical solution for this problem is:
$$u(x) = pLx - \frac{px^2}{2}$$

Solution

[Plot: exact and approximate solutions u(x) versus x on [0, 1] (p = L = 1); the “Exact” and “Approx” curves coincide at the nodes.]

Notice the numerical solution is “interpolatory”, or nodally exact.
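A short check of this nodal exactness (a sketch, with L = p = 1 assumed): solve the 2×2 system from the Matrix Formulation slide and compare the coefficients with the exact solution evaluated at the nodes.

```python
import numpy as np

L, p = 1.0, 1.0                                # assumed values (E = A = 1 as in the slides)
K = (1.0 / L) * np.array([[4.0, -2.0], [-2.0, 2.0]])
F = p * L * np.array([0.5, 0.25])

c = np.linalg.solve(K, F)                      # [3pL^2/8, pL^2/2]

exact = lambda x: p * L * x - p * x**2 / 2.0   # exact solution from the previous slide
nodes = np.array([L / 2.0, L])
print(c)                                       # [0.375 0.5]
print(exact(nodes))                            # [0.375 0.5]  -> agrees with c at the nodes
```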

Concluding Remarks

• Because the basis functions are compact, the matrix K is typically tridiagonal or otherwise sparse, which allows for fast solvers that take advantage of the structure (regular Gaussian elimination is O(N³), where N is the number of unknowns!); a tridiagonal solver is sketched below. Memory requirements are also reduced.

• Continuity between elements is not required: the “Discontinuous Galerkin” method.
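As a minimal illustration of exploiting that structure, here is a sketch of the Thomas algorithm, which solves a tridiagonal system in O(N) operations; the routine name is an illustrative choice, and it is demonstrated on the 2×2 system from this example (L = p = 1).

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(N) operations (Thomas algorithm).
    In row i, lower[i] multiplies x[i-1] and upper[i] multiplies x[i+1]."""
    n = len(diag)
    d = np.array(diag, dtype=float)
    b = np.array(rhs, dtype=float)
    # Forward elimination
    for i in range(1, n):
        w = lower[i] / d[i - 1]
        d[i] -= w * upper[i - 1]
        b[i] -= w * b[i - 1]
    # Back substitution
    x = np.empty(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x

# The 2x2 system from this example: K = [[4, -2], [-2, 2]], F = [1/2, 1/4]
lower = np.array([0.0, -2.0])   # sub-diagonal (first entry unused)
diag  = np.array([4.0,  2.0])
upper = np.array([-2.0, 0.0])   # super-diagonal (last entry unused)
print(thomas_solve(lower, diag, upper, np.array([0.5, 0.25])))   # [0.375 0.5]
```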