Method of Weighted Residuals on the Base of Neuronet's Approximations for Computer Simulation of Hydrodynamics Problems

A.V. Kretinin, Yu. A. Bulygin, and M.I. Kirpichev
Voronezh State Technical University, Russia
e-mail: [email protected]

Manuscript received August 30, 2008
Abstract— An algorithm for the numerical solution of the hydrodynamics equations is developed, in which the solution is represented by the method of weighted residuals using a single global neural net approximation over the whole flow area. The algorithm is tested on the solution of the two-dimensional Navier-Stokes equations.

I. INTRODUCTION

The numerical solution of fluid dynamics problems can be approached using the method of weighted residuals (MWR), which originates from the assumption that the solution of the flow equations admits an analytical representation. The type of trial (form) functions determines the specific version of MWR, such as the subdomain, collocation, least-squares and Galerkin methods [1]. As a rule, realization of the MWR algorithm reduces to a variational problem in which the total residual of the hydrodynamics equations is minimized by selecting the parameters of a trial solution. The accuracy of an MWR solution is determined by the approximating properties of the trial functions and by the degree to which they can reproduce the continuum solution of the original partial differential equations of fluid flow.
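As a minimal illustration of MWR, consider least-squares collocation for the model equation $y' + y = 0$, $y(0) = 1$, with a quadratic trial function; the model equation, trial function, and number of collocation points below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Trial function y(x) = 1 + a1*x + a2*x**2 automatically satisfies y(0) = 1.
# Residual of y' + y = 0:  R(x) = 1 + a1*(1 + x) + a2*(2*x + x**2), linear in a.
xs = np.linspace(0.0, 1.0, 20)              # collocation points
A = np.column_stack([1.0 + xs, 2.0 * xs + xs**2])
b = -np.ones_like(xs)
a = np.linalg.lstsq(A, b, rcond=None)[0]    # minimize the summed squared residual

y = lambda x: 1.0 + a[0] * x + a[1] * x**2
err = np.max(np.abs(y(xs) - np.exp(-xs)))   # compare with the exact solution e^{-x}
```

The same parameter-fitting viewpoint carries over unchanged when the trial function is a neural network instead of a polynomial.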

Training a neural network means changing its inner parameters so that the network outputs gradually approach the required values. Training is a movement along an error surface whose argument is the set of inner parameters of the network. The network output is a surface defined continuously over the whole real space of inputs. Artificial neural networks (ANN) are a powerful means of approximation. The idea of applying the ANN methodology to the solution of the equations of mathematical physics is not new and is presented in a number of publications, which are reviewed in the reference book [2]. It is also noted there that the application of ANNs to the simulation of hydrodynamics problems is very limited: among the 2342 publications surveyed in [2], only one [3] deals with hydrodynamics. The computational algorithm described in this article uses a neural network structure to obtain a continuous solution over the computational area corresponding to the hydrodynamics equations, in contrast to the standard procedure of training a neural network on a given number of reference values at discrete points.

II. NEURAL NETWORKS COMPUTATIONAL ARCHITECTURE

To investigate the approximation capabilities of ANNs, a perceptron with a single hidden layer (SLP) has been chosen as the basic model; it performs a nonlinear transformation of the input space into the output space according to the formula:

$$y(\mathbf{x},\mathbf{w}) = \sum_{i=1}^{q} v_i\, f_\sigma\!\left(\sum_{j=1}^{n} w_{ij} x_j + b_i\right) + b_0 , \quad (1)$$

where $\mathbf{x} \in R^n$ is the network input vector with components $x_j$; $q$ is the number of neurons in the single hidden layer; $\mathbf{w} \in R^s$ is the vector of all weights and thresholds of the network; $w_{ij}$ is the weight, entering the model nonlinearly, between the $j$-th input and the $i$-th hidden neuron; $v_i$ is the output-layer weight corresponding to the $i$-th hidden neuron; $b_i, b_0$ are the thresholds of the hidden-layer neurons and of the output neuron; $f_\sigma$ is the activation function (in our case the logistic sigmoid). An ANN of this structure already possesses the universal approximation property, i.e. it can approximate an arbitrary continuous function with any given accuracy. The main stage in applying ANNs to practical problems is training the network model: the iterative adjustment of the network weights on the basis of the learning set (sample) $\{\mathbf{x}_i, y_i\}$, $\mathbf{x}_i \in R^n$, $i = 1,\dots,k$, in order to minimize the network error, the quality functional

$$J(\mathbf{w}) = \sum_{i=1}^{k} Q\bigl(\varepsilon_f(\mathbf{w},i)\bigr) , \quad (2)$$

where $\mathbf{w}$ is the ANN weight vector and $Q\bigl(\varepsilon_f(\mathbf{w},i)\bigr) = \varepsilon_f^2(\mathbf{w},i)$ is the ANN quality criterion for the $i$-th training example;



$\varepsilon_f(\mathbf{w},i) = y(\mathbf{x}_i,\mathbf{w}) - y_i$ is the error on the $i$-th example. For training, stochastic approximation algorithms based on back-propagation of the error, or numerical methods for optimizing differentiable functions, may be used.
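Formula (1) can be transcribed directly into code; the NumPy sketch below uses assumed shapes and random weights purely for illustration:

```python
import numpy as np

def slp(x, W, b, v, b0):
    """Single-hidden-layer perceptron, formula (1):
    y = sum_i v_i * sigma(sum_j W[i, j] * x_j + b_i) + b0."""
    z = W @ x + b                      # hidden-layer pre-activations, shape (q,)
    h = 1.0 / (1.0 + np.exp(-z))      # logistic sigmoid activation f_sigma
    return v @ h + b0                 # scalar network output

rng = np.random.default_rng(0)
q, n = 5, 2                            # q hidden neurons, n inputs (assumed sizes)
W, b = rng.normal(size=(q, n)), rng.normal(size=q)
v, b0 = rng.normal(size=q), 0.0
y = slp(np.array([0.3, 0.7]), W, b, v, b0)   # one scalar output for one input point
```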

Suppose some equation with exact solution $y(x)$ is given:

$$L(y) = 0 . \quad (3)$$

For an arbitrary $\mathbf{x}_s$ in the learning sample, the target value $y_s$ in (3) is not known numerically. Substituting the approximate solution (1) into (3) gives $L(y) = R$, where $R$ is the equation residual. $R$ is a continuous function, $R = f(\mathbf{w},\mathbf{x})$, of the inner parameters of the SLP. Thus, training the ANN on this output functional consists in determining the inner parameters of the trial solution (1) so that equation (3) is satisfied, and it is realized through a corresponding modification of the training quality functional (2).

Usually, the total squared error at the network outputs is taken as the objective function in neural net training, its argument being the difference between the network output for the $s$-th example and the target value known a priori. This approach is generally applied to problems of statistical data transformation: recovering an a priori unknown function value (the network output) from its argument (the network input). Simulation problems, in contrast, are connected with the mathematical representation of physical laws and their reduction to a form applicable in practice, which usually requires a numerical description of the process being modeled. Under such conditions it becomes necessary to exclude from the objective function any computation result known a priori and to pass to a functional formulation. The objective function for simulating a known law is then defined only by the input data and the law itself:

$$E = \frac{1}{2}\sum_{s}\bigl(y_s - f(\mathbf{x}_s)\bigr)^2 . \quad (4)$$
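A sketch of training against a functional of the form (4) for the model equation $L(y) = y' + y = 0$ with $y(0) = 1$; the model equation, the boundary-condition penalty, and the crude numeric-gradient descent are illustrative assumptions (the paper forms the gradient analytically):

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 20)          # training (collocation) points
q = 4                                    # hidden neurons

def unpack(p):
    return p[:q], p[q:2*q], p[2*q:3*q], p[3*q]

def net(p, x):
    w, b, v, b0 = unpack(p)
    s = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))   # sigma(w_i x + b_i)
    return s @ v + b0

def dnet_dx(p, x):
    w, b, v, b0 = unpack(p)
    s = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))
    return (s * (1.0 - s)) @ (v * w)     # analytic derivative: sigma' = s(1 - s)

def E(p):
    # Residual of L(y) = y' + y = 0, plus a penalty enforcing y(0) = 1.
    R = dnet_dx(p, xs) + net(p, xs)
    return 0.5 * np.sum(R**2) + 10.0 * (net(p, np.array([0.0]))[0] - 1.0)**2

p = rng.normal(scale=0.5, size=3*q + 1)
E0, lr = E(p), 0.1
for _ in range(200):                     # descent with numeric gradient and backtracking
    g = np.array([(E(p + 1e-6*e) - E(p - 1e-6*e)) / 2e-6 for e in np.eye(p.size)])
    p_new = p - lr * g
    if E(p_new) < E(p):
        p, lr = p_new, lr * 1.1
    else:
        lr *= 0.5
```

The objective contains no a priori output values: only the input points and the simulated law enter it, exactly as the functional formulation requires.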

III. MWR FORMULATION FOR THE SOLUTION OF NAVIER-STOKES EQUATIONS

The computational capabilities of the developed algorithm can be illustrated by the example of solving the Navier-Stokes equations describing two-dimensional isothermal flows of a viscous incompressible fluid [1]:

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0 ; \quad (5)$$

$$u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + \frac{\partial p}{\partial x} - \frac{1}{\mathrm{Re}}\left\{\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right\} = 0 ; \quad (6)$$

$$u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + \frac{\partial p}{\partial y} - \frac{1}{\mathrm{Re}}\left\{\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right\} = 0 . \quad (7)$$

Here u, v are the velocity components and Re is the Reynolds number.

The hydrodynamic equations are written in dimensionless form; i.e. they involve the reduced quantities $u_r^* = u_r/u_\infty$, $u_\theta^* = u_\theta/u_\infty$, $r^* = r/D_\Gamma$, $p^* = p/(\rho u_\infty^2)$. For $u_\infty$ and $D_\Gamma$ one can choose any characteristic velocity and linear dimension of the flow area, for example the fluid velocity at the channel inlet and the channel width h.
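For example, with assumed water-like inlet data (the numerical values are mine, not from the paper), the reduction to dimensionless form looks as follows:

```python
# Hypothetical dimensional inlet data (assumed values, not from the paper).
rho, nu = 1000.0, 1.0e-6         # water: density [kg/m^3], kinematic viscosity [m^2/s]
u_inf, h = 0.01, 0.01            # inlet velocity [m/s], channel width [m] (D_Gamma = h)

Re = u_inf * h / nu              # Reynolds number built from the reference scales
u_star = lambda u: u / u_inf     # reduced velocity u* = u / u_inf
p_star = lambda p: p / (rho * u_inf**2)   # reduced pressure p* = p / (rho * u_inf^2)
```

These particular scales give Re = 100, the value used in the computations below.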

Let a rectangular region [a,b]×[c,d] be given in the XY plane, and on it a rectangular analytical grid specified by the Cartesian product of two one-dimensional grids $\{x_k\}$, $k = 1,\dots,n$, and $\{y_l\}$, $l = 1,\dots,m$.

We seek neural net functions $u, v, p = f_{NET}(\mathbf{w}, x, y)$ as the solution of system (5)-(7) that minimizes the total squared residual over the set of nodes of the computational grid. We represent the trial solutions u, v, p of system (5)-(7) in the form (1):

$$u(\mathbf{w},x,y) = \sum_{i=1}^{q} v_i\, f_\sigma(w_{1i}x + w_{2i}y + b_i) + b_u ; \quad (8)$$

$$v(\mathbf{w},x,y) = \sum_{i=q+1}^{2q} v_i\, f_\sigma(w_{1i}x + w_{2i}y + b_i) + b_v ; \quad (9)$$

$$p(\mathbf{w},x,y) = \sum_{i=2q+1}^{3q} v_i\, f_\sigma(w_{1i}x + w_{2i}y + b_i) + b_p . \quad (10)$$

Here, as before, $\mathbf{w}$ is the vector of all weights and thresholds of the net. The number q of neurons in the trial solution is taken the same for each decision variable; it is the parameter on which the approximation capabilities of the neural net trial solution depend. The computational algorithm should achieve the required level of solution accuracy at the minimum value of q.
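The trial solutions (8)-(10) can be sketched as three SLP blocks of q neurons each, drawing on disjoint portions of the parameter set (the concrete parameter layout below is an illustrative assumption):

```python
import numpy as np

q = 8                                     # hidden neurons per decision variable

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))      # logistic sigmoid f_sigma

def trial(params, x, y):
    """Trial solutions (8)-(10): u, v, p are SLPs of the same size q,
    each drawing on its own block of weights and thresholds."""
    out = []
    for w1, w2, b, v_out, b_out in params:          # blocks for u, v, p in turn
        out.append(v_out @ sigma(w1 * x + w2 * y + b) + b_out)
    return out                            # [u, v, p] at the point (x, y)

rng = np.random.default_rng(2)
params = [(rng.normal(size=q), rng.normal(size=q), rng.normal(size=q),
           rng.normal(size=q), 0.0) for _ in range(3)]
u, v, p = trial(params, 0.5, 0.5)
```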

Let us denote the residuals of equations (5)-(7) by $R_1$, $R_2$ and $R_3$, respectively. Then, to realize MWR for tuning the parameters of the trial solution, three objective functions must be minimized: $R_1^2;\; R_2^2;\; R_3^2 \to \min$.

In the simplest case, a single solution of this multi-criterion minimization problem can be obtained by replacing the three criteria with one, in the form of a convolution, for example $R^2 = R_1^2 + R_2^2 + R_3^2 \to \min$. Representing the trial solution as the continuous functions (8)-(10) makes it possible to obtain analytically the first and second derivatives entering equations (5)-(7). Knowing these, one can form analytic expressions for the residual functions $R_s(\mathbf{w},x,y)$ and, further, for the antigradient components of the total residual at the $s$-th reference point with respect to the ANN inner parameters, $\partial R/\partial v_j$, $\partial R/\partial w_{ij}$ and $\partial R/\partial b_j$, which are later used in the minimization algorithm following the antigradient direction [4].
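Because the logistic sigmoid satisfies $f_\sigma' = f_\sigma(1 - f_\sigma)$ and $f_\sigma'' = f_\sigma(1 - f_\sigma)(1 - 2f_\sigma)$, the derivatives entering the residuals are available in closed form. A sketch for $\partial u/\partial x$ and $\partial^2 u/\partial x^2$ of the trial solution (8), checked against finite differences (the weights are random, for illustration only):

```python
import numpy as np

q = 6
rng = np.random.default_rng(3)
w1, w2, b = rng.normal(size=q), rng.normal(size=q), rng.normal(size=q)
v, bu = rng.normal(size=q), 0.1

def u(x, y):
    s = 1.0 / (1.0 + np.exp(-(w1 * x + w2 * y + b)))
    return v @ s + bu

def du_dx(x, y):
    s = 1.0 / (1.0 + np.exp(-(w1 * x + w2 * y + b)))
    return (v * w1) @ (s * (1.0 - s))                        # sigma' = s(1 - s)

def d2u_dx2(x, y):
    s = 1.0 / (1.0 + np.exp(-(w1 * x + w2 * y + b)))
    return (v * w1**2) @ (s * (1.0 - s) * (1.0 - 2.0 * s))   # sigma'' = s(1-s)(1-2s)

x0, y0, h = 0.3, 0.7, 1e-5
fd1 = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)              # central differences
fd2 = (u(x0 + h, y0) - 2 * u(x0, y0) + u(x0 - h, y0)) / h**2
```

The analytic expressions agree with the finite-difference estimates, which is what permits exact antigradient components instead of numerical differentiation.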

Other versions of the multi-criterion search are based on different methods of generating a set of solutions satisfying the Pareto conditions. The choice of a candidate solution from the Pareto-optimal population must be based on an analysis of the hydrodynamic process and is similar to the identification procedure for a mathematical model. In any case, the multi-criterion optimization reduces to solving a collection of single-criterion problems, from which the set of possible solutions is formed. At the same time, the particularities of some computational approaches in fluid dynamics allow iterative algorithms in which, at each step, the solution for only one physical quantity is generated. The computational procedure described below is analogous to the MAC method [1]; in it, the applicability of MWR on the basis of neural net trial functions is investigated.

Let us consider the flow of an incompressible fluid in a channel with a turning point (Fig. 1).

The boundary conditions are stated as follows: on the solid walls u = v = 0; on the inflow boundary u = 0, v = 1; on the outflow boundary $\partial u/\partial x = \partial v/\partial x = 0$. There are no boundary conditions for the pressure, except at one reference point where p = 0 is specified (in absolute values, p = p0) and relative to which the pressure gradients $\partial p/\partial x$ and $\partial p/\partial y$ entering the momentum equations are reckoned.

To solve the flow equations by the predictor method, an initial velocity distribution satisfying the continuity equation must be specified in the computational area. For this purpose the velocity potential $\varphi(x,y)$ is introduced, with $u = \partial\varphi/\partial x$ and $v = \partial\varphi/\partial y$.

Solving the Laplace equation then yields a velocity distribution that can be called the vortex-free component of the sought quantity. The final velocity and pressure distributions are obtained by solving the momentum equations according to the following algorithm.
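A sketch of this initialization step: the Laplace equation for $\varphi$ is solved by Jacobi iteration, and the velocities follow by differentiation. The boundary data, taken from the harmonic function $\varphi = x^2 - y^2$, and the grid-based solver are illustrative assumptions; the paper's channel geometry and neural net representation differ:

```python
import numpy as np

# Solve the Laplace equation for the velocity potential phi on the unit square
# by Jacobi iteration; boundary values come from the harmonic test field
# phi = x^2 - y^2 (an assumed test case, not the paper's geometry).
n = 21
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = X**2 - Y**2                    # exact harmonic field, used as boundary data
ph = phi.copy()
ph[1:-1, 1:-1] = 0.0                 # zero the interior, keep the boundary
for _ in range(2000):                # Jacobi sweeps on the 5-point stencil
    ph[1:-1, 1:-1] = 0.25 * (ph[2:, 1:-1] + ph[:-2, 1:-1]
                             + ph[1:-1, 2:] + ph[1:-1, :-2])
h = x[1] - x[0]
u = (ph[2:, 1:-1] - ph[:-2, 1:-1]) / (2 * h)   # u = d(phi)/dx in the interior
v = (ph[1:-1, 2:] - ph[1:-1, :-2]) / (2 * h)   # v = d(phi)/dy in the interior
err = np.max(np.abs(ph - phi))       # Jacobi solution vs the exact harmonic field
```

By construction this (u, v) field is a discrete gradient of a harmonic function, so it satisfies the continuity equation and is suitable as the starting distribution.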

The velocity distribution on the next time layer is generated according to the formula

$$\mathbf{u}^{n+1} = \mathbf{F}^n - \Delta t\,\nabla p^{n+1},$$

where the pressure distribution at each iteration step is obtained from the solution of the Poisson equation

$$\nabla^2 p^{n+1} = \frac{1}{\Delta t}\,\nabla\cdot\mathbf{F}^n = f_{NET}(x,y)$$

with the implementation of neural net trial functions. The vector $\mathbf{F} \equiv (F,G)^T$ entering this algorithm can be defined from the momentum equations [1]; alternatively, to find the pressure one can use the Poisson equation in the form

$$\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2} = 2\left(\frac{\partial u}{\partial x}\frac{\partial v}{\partial y} - \frac{\partial v}{\partial x}\frac{\partial u}{\partial y}\right).$$

Thus, the pressure obtained from the Poisson equation ensures fulfillment of the continuity equation at time level $n+1$. Once $p^{n+1}$ is found, substituting it into the formula $\mathbf{u}^{n+1} = \mathbf{F}^n - \Delta t\,\nabla p^{n+1}$ yields $u^{n+1}$ and $v^{n+1}$. The iteration process continues until the velocity distribution stops changing (becomes steady).
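The projection step $\mathbf{u}^{n+1} = \mathbf{F}^n - \Delta t\,\nabla p^{n+1}$ with its Poisson solve can be sketched on a periodic unit square with a spectral Poisson solver. The periodic setting, the chosen $\mathbf{F}$, and the FFT solver are my assumptions; the paper instead uses neural net trial functions on the channel geometry:

```python
import numpy as np

# One projection step: solve lap(p) = div(F)/dt, then u = F - dt * grad(p).
n, dt = 32, 0.1
xs = np.arange(n) / n                          # periodic unit square, spacing 1/n
X, Y = np.meshgrid(xs, xs, indexing="ij")
Fx = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # arbitrary field F with
Fy = np.sin(2 * np.pi * Y)                           # nonzero divergence

k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                 # avoid dividing the mean mode by zero

def div(ax, ay):
    # spectral divergence d(ax)/dx + d(ay)/dy
    return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(ax)
                                + 1j * KY * np.fft.fft2(ay)))

rhs_hat = np.fft.fft2(div(Fx, Fy) / dt)        # lap(p) = div(F)/dt in Fourier space
p_hat = -rhs_hat / K2                          # invert the Laplacian: -K2 * p_hat = rhs_hat
p_hat[0, 0] = 0.0                              # fix the pressure reference level
px = np.real(np.fft.ifft2(1j * KX * p_hat))    # dp/dx
py = np.real(np.fft.ifft2(1j * KY * p_hat))    # dp/dy

u = Fx - dt * px                               # projected velocity u^{n+1}
v = Fy - dt * py
d_after = np.max(np.abs(div(u, v)))            # divergence after the projection
```

After the step the divergence of (u, v) vanishes to round-off, which is exactly the property that drives the continuity equation to be satisfied at level n+1.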

We now show some results of solving system (5)-(7) for Re = 100. Fig. 2 presents the steady velocity distribution in the computational area (a fragment of the flow area with the flow turning point is shown). The total residual at ~10000 computational points did not exceed R = 0.1, which confirms the efficiency of the algorithm for solving the system of nonlinear Navier-Stokes differential equations.

The analysis shows that applying a numerical scheme based on net trial solutions leads to several important advantages over traditional computational methods of hydrodynamics. The most evident is the free choice of computation points, arbitrarily located in the flow field, which serve to adjust the net trial solution. The implementation possibilities of this numerical algorithm are limited only by the accuracy of the global net approximation of the analyzed functional continuum.

Fig. 1. Computational area (the figure indicates the boundary conditions and the Laplace equation $\partial^2\varphi/\partial x^2 + \partial^2\varphi/\partial y^2 = 0$ for the velocity potential).

Fig. 2. Net velocity distribution


IV. CONCLUSION

The presented results reveal one possible application of the apparatus of artificial neural nets: the numerical solution of equations, including differential equations, in a manner similar to the method of weighted residuals but with a specific net approximation. A minor modification of the standard neural network training algorithm makes it possible to use computational points placed arbitrarily in the computational field for fitting the continuous solution.

REFERENCES

[1] C.A.J. Fletcher, Computational Techniques for Fluid Dynamics, in 2 volumes. Moscow: Mir, 1991.
[2] A.I. Galushkin, Neuromathematics. Moscow: IPRJR, 2002.
[3] C.J. Richardson and D.J. Barlow, "Neural network computer simulation of medical aerosols", J. Pharm. and Pharmacol., 1996, vol. 48, no. 6, pp. 581-591.
[4] A.V. Kretinin, Yu.A. Bulygin and S.G. Valyuhov, "Intelligent Algorithm for Forecasting of Optimum Neurons Quantity in Perceptron with One Hidden Layer", Proceedings of the 2008 International Joint Conference on Neural Networks, Hong Kong, pp. 907-912.
[5] A.V. Kretinin, "The weighted residuals method based on neuronet approximations for simulation of hydrodynamics problems", Siberian J. Num. Math., 2006, vol. 9, no. 1, pp. 23-35.