Journal of Nuclear Science and Technology, ISSN: 0022-3131 (Print), 1881-1248 (Online). Journal homepage: https://www.tandfonline.com/loi/tnst20

Package Flow Models by Neural Network Representation for Understanding the Dynamic Behavior of Nuclear Reactor Systems
Hiroshi MATSUOKA & Misako ISHIGURO

To cite this article: Hiroshi MATSUOKA & Misako ISHIGURO (1996) Package Flow Models by Neural Network Representation for Understanding the Dynamic Behavior of Nuclear Reactor Systems, Journal of Nuclear Science and Technology, 33:9, 675-685, DOI: 10.1080/18811248.1996.9731982. To link to this article: https://doi.org/10.1080/18811248.1996.9731982. Published online: 15 Mar 2012.



Journal of NUCLEAR SCIENCE and TECHNOLOGY, Vol. 33, No. 9, p. 675-685 (September 1996)

Package Flow Models by Neural Network Representation for Understanding the Dynamic Behavior of Nuclear Reactor Systems

Hiroshi MATSUOKA†, Science and Technology Agency*

Misako ISHIGURO††, Faculty of Engineering, Ibaraki University**

(Received February 6, 1996; Revised May 10, 1996)

“Package Flow Model” (PFM) is a simple simulation model for intuitive understanding of various types of system dynamics. In the previous papers, the PFM was proposed and its application to the dynamic analysis of nuclear reactor systems was presented. In the present paper, the same model and the same application are considered, but a new representation method of the PFMs by a neural network is introduced, so that the dynamic simulation of a reactor subsystem can be performed through the calculation of the corresponding neural network. Furthermore, quasi-optimum parameter values of each PFM are easily obtained by applying an appropriate learning algorithm to get the weight-values of the neural network.

Some case studies show that the learning process and the obtained optimum values can give us new useful information for approximate understanding of the dynamic behavior of the actual processes in the system.

KEYWORDS: reactor system simulation, package flow model, reactor dynamics, reactor kinetics, artificial neuron, neural network, learning algorithm, automatic optimization

I. INTRODUCTION

“Package Flow Model” (PFM) is a simple simulation model for intuitive understanding of various types of system dynamics. In the previous papers(1)(2), the PFM was proposed and its application to the dynamic analysis of nuclear reactor systems was presented. Here, the same model and the same applications are considered, but a new representation method of PFMs is proposed, which is effective for the approximate understanding of the dynamic behavior of various subsystems.

As to simulation models, the conventional models, such as the model of the RELAP5 code, consist of many nodes and junctions representing a reactor system, and in addition sometimes involve complicated equations with various types of experimental parameters. For these reasons, if there is some discrepancy between simulated results and experimental ones, it is difficult to intuitively find the nodes and junctions which should be modified or the parameters which should be adjusted for better agreement. These are disadvantages of the conventional simulation models.

In the previous paper(1), a reactor system was modelled by a small number of PFMs, e.g. seven PFMs for the simplest case. Furthermore, each PFM was described

* Kasumigaseki, Chiyoda-ku, Tokyo 100.
** Nakanarusawa-cho, Hitachi-shi 316.
† Doctoral student of Ibaraki University.
†† Corresponding author, Tel. +81-294-38-5194, Fax +81-294-37-1569, E-mail: ishiguro@hit.ipc.ibaraki.ac.jp

by a simple visual mechanism which is governed by only two kinds of parameters: the dropping density function and the imaginary flow velocity. Accordingly, the PFMs essentially have features suitable for intuitive understanding and easy adjustment in comparison with the conventional models.

In this paper, a new representation method of the PFM is introduced, in which each PFM is represented by a single linear artificial neuron. A reactor subsystem described by the PFMs is then represented by a neural network. By applying an appropriate learning algorithm to the neural network, the optimum weights of the neural network, which can be converted to the parameters of each PFM, are obtained. This facilitates the adjustment of the dropping density function, since the optimum weights can be automatically obtained through the learning algorithm. Furthermore, intuitive understanding of the dynamic behavior of the subsystems is easily attained, because the physical interpretation of the finally learned optimum parameter values becomes clear through the simple visual mechanism of PFMs. This is our motivation for introducing the PFMs by neural network representation.

II. PACKAGE FLOW MODELS BY NEURAL NETWORK REPRESENTATION

1. Description of Package Flow Models(1)

In the PFM, each subsystem is regarded as a flow system of some entity particles such as energy, mass, and so on. The dynamic processes in the subsystem can be put in correspondence with the visualized PFM mechanism.


The single step of the PFM mechanism consists of the following three actions, illustrated in Fig. 1(a):

- Collecting of entity particles: The entity particles are collected at a given rate q(τ) (“Collecting rate”) during the time period that ends at a time τ.
- Packing of entity particles into packages: The entity particles collected during the previous action are equally divided and packed into a constant number (“N”) of packages.
- Dropping of the packages into the imaginary flow: The N packages are dropped into the imaginary flow in a tube according to the given distribution w(X) (“Dropping density function”), where the symbol X is the dropping position of packages in the tube as shown in Fig. 1(a), i.e. −L ≤ X ≤ 0.

The imaginary flow is a movement of an imaginary carrier of packages and may be considered sometimes as the time flow and other times as the fluid flow, etc., depending on the modeling of the PFM. In addition, we assume that the velocity of the carrier relative to the tube, V(τ) (“Imaginary flow velocity”), is invariant everywhere in the tube during a time period but may vary with τ. The above three actions are repeated in every time period while the imaginary flow advances a distance ΔX, where ΔX must be small enough that both q(τ) and V(τ) are almost constant during the time advance. The total quantity of the collected entity particles during the period is given by q(τ)·ΔX/V(τ), since the collecting time


interval is [τ − ΔX/V(τ), τ]. Thus the quantity of entity particles in each package which was dropped at the time τ is given by q(τ)·ΔX/V(τ)·(1/N). Note that the time length of a step is not the same if the flow velocity varies with τ.

Figure 1(b) shows the dropped packages. They drift with the imaginary flow toward the outlet of the tube and leave from it. Considering the positions of these packages at the present time (“t”), we call the packages remaining in the tube “residual packages”, and the packages leaving from the outlet of the tube “flow-out packages”.

In many applications of PFMs, the basic state-parameters of subsystems can be put in correspondence with the flow-out rate of the entity particles (“Flow-out rate”) or the residual amount of entity particles in the tube (“Residual amount”). Thus, these state-parameters can be evaluated by summing up the quantity of entity particles in all the flow-out packages or in all the residual packages.
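The three-action mechanism can also be simulated directly by tracking individual packages, which is useful for checking the weighted-sum formulas derived later against a brute-force computation. The following sketch is an illustrative reconstruction, not code from the paper; the function names and the uniform dropping density used in the usage note are our own assumptions.

```python
def pfm_step_simulation(q, V, w_sample, N=100, dX=0.01, steps=400):
    """Direct simulation of the PFM mechanism: every step, entity
    particles collected at rate q(tau) are packed into N packages and
    dropped at positions drawn from the dropping density w(X); all
    packages then drift a distance dX toward the outlet at X = 0."""
    packages = []                    # (position X, particle quantity) pairs
    flow_out_rate, residual = [], []
    tau = 0.0
    for _ in range(steps):
        dt = dX / V(tau)             # time for the flow to advance dX
        tau += dt
        quantity = q(tau) * dt / N   # particles packed into each package
        packages += [(w_sample(), quantity) for _ in range(N)]
        packages = [(x + dX, c) for x, c in packages]      # drift
        out = sum(c for x, c in packages if x >= 0.0)      # flow-out packages
        packages = [(x, c) for x, c in packages if x < 0.0]
        flow_out_rate.append(out / dt)
        residual.append(sum(c for _, c in packages))
    return flow_out_rate, residual
```

With a constant collecting rate and velocity and a uniform w(X) on [−1, 0], the flow-out rate settles at q and the residual amount at roughly q times the mean transit time (about 0.5 here).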

[NOMENCLATURE]

X: Position of the package in the tube
ΔX: Small space interval
q(τ): Collecting rate of entity particles at τ
V(τ): Velocity of imaginary flow at τ
N: Number of packages dropped during the time period ΔX/V(τ)
w(X): Dropping density function
t: Present time
a(j): Step number when the package j was dropped
τ(a): Time when the a-th step ends
N_a: Number of packages dropped at τ(a) that have flowed out
N′_a: Number of packages dropped at τ(a) but still remaining in the tube at t
s: Present step number; t = τ(s)
k: Step number behind s; k = s − a
r: r = L/ΔX − 1
L: Length of the tube located along the minus X-coordinate (see Fig. 1(a))
F(X): Dropping cumulative function
W_k: Integrated value of w(X) on [−(k+1)ΔX, −kΔX]
W′_k: Integrated value of F(X) on [−(k+1)ΔX, −kΔX]

2. Simulation Mode of PFMs by Neural Network Representation

The positions of the drifting packages are depicted in Fig. 2(a), from which the quantitative relations among the collecting rate, the flow-out rate, and the residual amount can be intuitively derived.

(1) Calculation of Flow-out Rate

The flow-out packages are obtained by summing up the packages collected in the space ΔX at the outlet position, as shown in Fig. 2(a). The distribution is illustrated in Fig. 2(b).

Fig. 1 Assumptions for Package Flow Model (PFM)


Fig. 2 Distribution of flow-out packages and residual packages

(Flow-out rate at the present time t)
= (Total quantity of entity particles in all the flow-out packages)/(ΔX/V(t))
≈ Σ_{k=0}^{r} N_{s−k} · q(τ(s−k))/V(τ(s−k)) · (1/N) · V(t),  (1)

where a = s − k. Here, the symbol a(j) stands for the step number when the flow-out package j was dropped. If we assume that the package distribution keeps almost the same shape during the drifting in the tube (see Fig. 2(a)), the distribution function of the flow-out packages depicted in Fig. 2(b) is approximately given by N·ΔX·w(X), because it should be proportional to w(X) and its integral from −L to 0 should be N·ΔX. Here ∫_{−L}^{0} w(X)dX = 1 by definition. Accordingly, the term N_{s−k} in Eq.(1) is given as follows:

N_{s−k} = {∫_{−(k+1)ΔX}^{−kΔX} N·ΔX·w(X)dX}/ΔX = N ∫_{−(k+1)ΔX}^{−kΔX} w(X)dX  (k = 0, 1, ..., r).  (2)

From Eqs.(1) and (2), the equation which expresses the flow-out rate is obtained by using the dropping density function w(X):

(Flow-out rate at the present time t) ≈ Σ_{k=0}^{r} W_k · q(τ(s−k))/V(τ(s−k)) · V(t),

where W_k = ∫_{−(k+1)ΔX}^{−kΔX} w(X)dX  (k = 0, 1, ..., r).  (3)
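Equation (3) reduces the flow-out rate to a weighted sum over the past values of q/V. A minimal sketch of the discretization follows (our own illustration; the midpoint-rule quadrature and all names are assumptions, not from the paper):

```python
def discretize_w(w, L, dX):
    """W_k of Eq.(3): integral of w(X) over the cell [-(k+1)dX, -k dX],
    approximated here by the midpoint rule."""
    r = int(L / dX) - 1
    return [w(-(k + 0.5) * dX) * dX for k in range(r + 1)]

def flow_out_rate(W, q_hist, V_hist, V_now):
    """Eq.(3): V(t) * sum_k W_k * q(tau(s-k)) / V(tau(s-k)),
    with q_hist[k] and V_hist[k] the values k steps in the past."""
    return V_now * sum(Wk * qk / Vk for Wk, qk, Vk in zip(W, q_hist, V_hist))
```

For a normalized w(X) the weights sum to one, so a history of constant q and V returns the collecting rate itself.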

(2) Calculation of Residual Amount

As illustrated in Fig. 2(c), the residual amount can be calculated as follows:

(Residual amount at the present time t)
= (Total quantity of entity particles in all the residual packages)
= Σ_{all residual packages j′} q(τ(a(j′)))·ΔX/V(τ(a(j′)))·(1/N)
= Σ_{a=s−r}^{s} N′_a · q(τ(a))/V(τ(a)) · (1/N) · ΔX
= Σ_{k=0}^{r} N′_{s−k} · q(τ(s−k))/V(τ(s−k)) · (1/N) · ΔX.  (4)

The distribution function of the residual packages shown in Fig. 2(c) is approximately given by N·F(X), because it should be proportional to F(X) and its value at X = 0 is to be N. Here F(X) is the cumulative function of the dropping density function (abbreviated as “Dropping cumulative function”), defined as

F(X) = ∫_{−L}^{X} w(X′)dX′,  (5)

where F(0) = 1 by the definition of w(X′).

Accordingly, the term N′_{s−k} in Eq.(4) is given as follows:

N′_{s−k} ≈ {∫_{−(k+1)ΔX}^{−kΔX} N·F(X)dX}/ΔX = (N/ΔX) ∫_{−(k+1)ΔX}^{−kΔX} F(X)dX  (k = 0, 1, ..., r).  (6)

From Eqs.(4) and (6), by using the dropping cumulative function F(X), the equation of the residual amount is given by

(Residual amount at the present time t) ≈ Σ_{k=0}^{r} W′_k · q(τ(s−k))/V(τ(s−k)),


where W′_k = ∫_{−(k+1)ΔX}^{−kΔX} F(X)dX  (k = 0, 1, ..., r).  (7)

Note that Eq.(6) is also an approximation, similar to Eq.(2), under the assumption that the package distribution keeps almost the same shape during the drifting. This assumption, however, does not say that all the drifting packages must have the same velocity. For example, the packages dropped at the same time can exchange their positions with one another during the drifting in the tube, while this movement of packages does not change the shape of the distribution.

(3) Neural Network Representation

As shown so far, if the past collecting rates and flow velocities are given together with the dropping density function, the approximate transients of the flow-out rate and the residual amount can be calculated by using Eqs.(3), (5) and (7).

Apparently, Eqs.(3) and (7) have the same form as the output of a single linear artificial neuron. Therefore, if the values q(τ(s−k))/V(τ(s−k)) are regarded as the input data of a neuron, the value of W_k or W′_k becomes the weight-value of the k-th input of the neuron (k = 0, 1, ..., r). This implies that the calculation process of a PFM can be represented by a single neuron, as shown in Fig. 3(a). Here, the expression Σ × V(t) shows the operation in which Σ a_i W_i multiplied by V(t) makes the output of the neuron for the input data a_i. Accordingly, the calculation process of a PFMs network for any subsystem can be represented by a neural network. Thus, the simulated results are obtained through the calculation of the neural network. We call this process “a simulation mode of PFMs by neural network representation”.
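Equation (7) gives the residual amount as a second weighted sum over the same inputs, with weights W′_k built from the cumulative function F(X). A sketch of how W′_k can be obtained from already-discretized W_k follows (an illustration under a midpoint-rule assumption, not the paper's code):

```python
def residual_weights(W, dX):
    """W'_k of Eq.(7): integral of the cumulative F(X) over the k-th cell.
    F at the midpoint of cell k is the mass dropped deeper in the tube
    (cells j > k) plus half of the mass of cell k itself."""
    return [dX * (sum(W[k + 1:]) + 0.5 * W[k]) for k in range(len(W))]

def residual_amount(Wp, q_hist, V_hist):
    """Eq.(7): sum_k W'_k * q(tau(s-k)) / V(tau(s-k))."""
    return sum(Wpk * qk / Vk for Wpk, qk, Vk in zip(Wp, q_hist, V_hist))
```

For a uniform density on a tube of length 1 fed at constant q = V = 1, the residual amount evaluates to 0.5, i.e. the collecting rate times the mean transit time.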

3. Learning Mode of PFMs by Neural Network Representation

A significant merit of the neural network is that a learning function is applicable when a PFMs network is represented by the corresponding neural network. In many cases, full or partial information on the desired flow-out rate or desired residual amount will be given as supervisor knowledge. Each weight-value of the neural network is adjusted step by step to produce the desired output through the learning process. The shape of the dropping density function can be improved through this process, because a change of the weight-values corresponds to a change of the function. As a result, additional useful information is obtainable on the behavior of actual systems. We call this process “a learning mode of PFMs by neural network representation”. The principle of the learning algorithm will be explained in Chap. III.

Fig. 3 Calculation of flow-out rate and residual amount by a neuron: (a) general case; (b) in the case of V(τ(s−k)) ≈ V(t)

4. Simplification for the Cases of Quasi-Invariant Imaginary Flow

If the imaginary flow velocity V(τ) changes slowly enough to satisfy the condition V(τ(s−k)) ≈ V(t), Eqs.(3) and (7) are rewritten as follows:

(Flow-out rate at the present time t) ≈ Σ_{k=0}^{r} W_k · q(τ(s−k)),
where W_k = ∫_{−(k+1)ΔX}^{−kΔX} w(X)dX  (k = 0, 1, ..., r),  (8)

(Residual amount at the present time t) ≈ Σ_{k=0}^{r} W′_k · q(τ(s−k)) · (1/V(t)),
where W′_k = ∫_{−(k+1)ΔX}^{−kΔX} F(X)dX  (k = 0, 1, ..., r).  (9)

In this case, q(τ(s−k)) can be regarded as the input data of the neuron, as shown in Fig. 3(b).

5. The Application Procedure of PFMs by Neural Network Representation

As explained so far, the application procedure of PFMs by neural network representation is summarized in Fig. 4. In the simulation mode, complicated behavior in the actual system which is difficult to understand by human intuition is replaced with a simple visual mechanism described by a PFMs network, and the results are obtained by the calculation of the corresponding neural network. In the learning mode, the shape of the dropping density functions is adjusted through the automatic optimization process of weight-values, and the optimized shape may be useful for further understanding of the actual behavior of various time delays in the subsystem. In some cases, the learning process itself, i.e. the improvement process of the dropping density functions and its effect on the simulated results, can give us even more useful information on the dynamic behavior of the subsystem.

Fig. 4 Application procedure of PFMs by a neural network representation (simulation mode: complicated processes are replaced by a simple visual PFMs network and calculated by the corresponding neural network; learning mode: automatic optimization of weight-values by the learning algorithm yields improved dropping density functions and hints on the actual behavior of the system)

III. PRINCIPLE OF LEARNING ALGORITHM

1. Definition of Error Function

The learning algorithm applied here is a method to gradually change the weight-values so that the error function is decreased in the steepest-descent direction. A simple case of a single linear neuron with (r+1) input data is considered. The supervisor data D(τ(σ_c)) at a time τ(σ_c) is not always given as the direct output of the neuron but may be given as the value after a certain function f is operated. On the other hand, the supervisor data D(τ(σ_c)) itself may contain an error ν. The error function at the weight vector W = (W₀, W₁, ..., W_r) can be defined as

E(W) = (1/p) Σ_{c=1}^{p} {(S_c(W) − d_c)/δ_c}²,  (10)

where W_m denotes the weight-value of the m-th input and p is the number of times to be compared. The symbol S_c(W) is the simulated value at the time τ(σ_c), given by

S_c(W) = a_{c0}W₀ + a_{c1}W₁ + ... + a_{cr}W_r = A_c · W,  c = 1, 2, ..., p,  (11)

where A_c = (a_{c0}, a_{c1}, ..., a_{cr})ᵀ and

a_{cm} = q(τ(σ_c − m))/V(τ(σ_c − m)) · V(τ(σ_c)),  m = 0, 1, 2, ..., r.  (12)

If the function f is strictly monotone and df/dS ≠ 0, then there exists an inverse function f⁻¹. Accordingly,

d_c = f⁻¹(D(τ(σ_c))),  (13)
δ_c = ν/(df/dS)_{S=d_c}.  (14)

Here the objective of learning is that the final output, after f is operated, is to be adjusted to have a mean square error less than or equal to ν²; then the condition E(W) ≤ 1 results from the following:

Mean square error
= (1/p) Σ_{c=1}^{p} {f(S_c(W)) − D(τ(σ_c))}²
= (1/p) Σ_{c=1}^{p} {f(S_c(W)) − f(d_c)}²  (from Eq.(13))
≈ (1/p) Σ_{c=1}^{p} {(df/dS)_{S=S_c(W)} · (S_c(W) − d_c)}²  (from linear approximation)
= (1/p) Σ_{c=1}^{p} (ν/δ_c)² (S_c(W) − d_c)²  (from Eq.(14))
= ν² · (1/p) Σ_{c=1}^{p} {(S_c(W) − d_c)/δ_c}²
= ν² E(W) ≤ ν².

2. Procedure to Obtain the Solution

From the description so far, the following three conditions must be satisfied for the weight-vectors:

W · I = 1, where I is the vector (1, 1, ..., 1)ᵀ,  (15)
W_m ≥ 0, m = 0, 1, 2, ..., r,  (16)
E(W) ≤ 1.  (17)

Note that the condition W_m ≤ 1 is already included in Eqs.(15) and (16).

The procedure to obtain one of the solution vectors is as follows:

STEP 1: Assume an initial vector W(0) satisfying the conditions (15) and (16).
STEP 2: Calculate the value of the error function E(W(i)). If E(W(i)) ≤ 1, then the vector W(i) is the solution. Otherwise, make a small modification to the vector W(i) (see Sec. III-3) to obtain a smaller E(W) while keeping the conditions (15) and (16).
STEP 3: Repeat STEP 2 for the new vector W(i+1).

If E(W) cannot become equal to or less than 1, there is no solution vector.

3. Modification of the Weight-vector

First, consider a small movement of W in the hyperplane W·I = 1. It is represented as follows:

W(i+1) = (W(i) + ε)/(1 + ε̄), where ε̄ = ε·I, 0 < |ε̄| ≪ 1,  (18)

where ε is an (r+1)-dimensional small vector. Apparently, W(i+1) keeps the condition (15). Second, in order to keep the condition W_m(i+1) ≥ 0, ε_m must be selected so that W_m(i) + ε_m ≥ 0. At last, the modification vector ε is determined so that the vector W approaches the condition (17). The change of the simulated value and of the error function is calculated as

S_c(W(i+1)) − S_c(W(i)) = W(i+1)·A_c − W(i)·A_c
≈ (1 − ε̄)(W(i) + ε)·A_c − W(i)·A_c   (1/(1+ε̄) ≈ 1 − ε̄ is assumed)
≈ ε·A_c − ε̄·S_c(W(i))
= ε·(A_c − S_c(W(i))·I),  (19)

E(W(i+1)) − E(W(i)) ≈ (2/p) Σ_{c=1}^{p} {(S_c(W(i)) − d_c)/δ_c} · ε·(A_c − S_c(W(i))·I)/δ_c.  (20)


Combining the above three considerations, the iterative process is composed as below, where β is a small positive learning coefficient:

STEP 1: ε′_m = −β Σ_{c=1}^{p} {(S_c(W(i)) − d_c)/δ_c} · {(a_cm − S_c(W(i)))/δ_c}.  (21)
STEP 2: If W_m(i) + ε′_m ≥ 0, then ε_m = ε′_m; otherwise ε_m = −W_m(i).
STEP 3: ε̄ = ε₀ + ε₁ + ... + ε_r.
STEP 4: W_m(i+1) = (W_m(i) + ε_m)/(1 + ε̄), m = 0, 1, 2, ..., r.  (22)
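The four steps above can be sketched as follows (an illustrative reconstruction; the value of β, the synthetic supervisor data in the usage note, and the choice of f as the identity, so that d_c and δ_c are used directly, are our assumptions):

```python
def learn_step(W, A, d, delta, beta=0.005):
    """One iteration of STEPs 1-4: steepest-descent candidate (Eq.(21)),
    clipping to keep W_m >= 0, and renormalization onto W . I = 1 (Eq.(22))."""
    p, n = len(A), len(W)
    S = [sum(A[c][m] * W[m] for m in range(n)) for c in range(p)]  # S_c = A_c . W
    eps_p = [-beta * sum((S[c] - d[c]) / delta[c]
                         * (A[c][m] - S[c]) / delta[c] for c in range(p))
             for m in range(n)]                                    # STEP 1
    eps = [e if W[m] + e >= 0.0 else -W[m]
           for m, e in enumerate(eps_p)]                           # STEP 2
    eps_bar = sum(eps)                                             # STEP 3
    return [(W[m] + eps[m]) / (1.0 + eps_bar) for m in range(n)]   # STEP 4

def error_function(W, A, d, delta):
    """E(W) of Eq.(10)."""
    S = [sum(am * wm for am, wm in zip(ac, W)) for ac in A]
    return sum(((s - dc) / delc) ** 2
               for s, dc, delc in zip(S, d, delta)) / len(A)
```

Starting from the uniform vector, with synthetic supervisor data generated by a hidden weight vector, the iteration keeps W on the simplex while driving E(W) down.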

4. Convergence of the Solution

If E(W(i)) > 1 and there exists at least one solution vector satisfying the conditions (15)-(17), assign ε = μW(#), where μ is a small positive number and W(#) is one of the solution vectors. The condition W_m(i+1) ≥ 0 is always satisfied, since

W_m(i+1) = (W_m(i) + μW_m(#))/(1 + μW(#)·I) = (W_m(i) + μW_m(#))/(1 + μ) ≥ 0,

because W(#)·I = 1, W_m(i) ≥ 0, W_m(#) ≥ 0, and μ > 0. With this choice the change of the error function satisfies

E(W(i+1)) − E(W(i)) ≤ 0,  (23)

where e_c(W) was defined by {S_c(W) − d_c}/δ_c, and note that Σ_{c=1}^{p} (e_c(W(i)))² > Σ_{c=1}^{p} (e_c(W(#)))² because E(W(i)) > 1 ≥ E(W(#)).

As a result, E(W(i+1)) < E(W(i)) was proved. It was found that, if E(W) > 1 and a solution vector exists, a direction in which the value of the error function decreases always exists. If W went to a local minimum, such a direction would not exist, which means that there is no local minimum before reaching the solutions. Accordingly, the weight-vector will converge to the solution by using a learning algorithm that finds a direction in which the error function decreases.

Now consider a set in the (r+1)-dimensional space as defined below:

U(e) = {W | E(W) ≤ e, 0 ≤ W_m ≤ 1 (m = 0, 1, 2, ..., r), W·I = 1}.  (24)

In particular, U(1) represents the set of solution vectors. The three conditions in Eq.(24) represent a hyperellipsoid, a hypercube, and a hyperplane. All three of these bodies are convex, and therefore their common set U(1), if it exists, is convex and unique. If the set U(1) has many members and at least one solution is found by the described procedure, the other members will be located near this solution. The set U(1) may sometimes have no members.

IV. APPLICATION OF PFMs GIVEN BY NEURAL NETWORK REPRESENTATION TO PWR SUBSYSTEMS

1. Neural Network Representation for PWR Subsystems

Some examples of PFMs by the neural network representation are shown, which are applied to the subsystems of a PWR. In a previous paper(1), an example of a PFMs network for a PWR was presented as shown in Fig. 5, where β denotes the fraction of delayed neutrons and K_eff represents the effective multiplication factor. The notation |×| shows a function to change the unit, and ⊕ shows the summation of two PFMs.

The same PFMs network will be used in this study. As depicted in the figure, the energy generation and transfer processes in the PWR are divided into four stages. The first stage is nuclear fission, which can be simulated by two PFMs. The second stage is heat transfer in the fuel rods, which can be simulated by a single PFM. The third stage is heat transfer in the reactor core, and the fourth stage is heat transfer in the coolant loop. The last two stages can be simulated by two PFMs each. By replacing each PFM in the PFMs network with a neuron according to the method shown in Fig. 3, the neural network representation for each subsystem, i.e. each stage of the PWR, is obtained.

Fig. 5 An example of PFMs network for PWR (stages: nuclear fission, heat transfer in the fuel, heat transfer in the reactor core, and heat transfer in the coolant loop; solid arrows show the flow of entity, dashed arrows the flow of information)

2. Case Study of the Simulation Mode of PFMs

Figure 6(a) shows the neural network representation for the stage of nuclear fission, which is obtained by using the method depicted in Fig. 3(b). Now, this example is studied in detail.

(1) Simulation of Prompt Neutrons

The first neuron deals with the time delay of prompt neutrons, where the delay time is set equal to the prompt neutron lifetime l. Only one input is required, since the neuron output at a time t is regarded as determined by the collecting rate at the time (t − l) alone. In this case, the imaginary flow is just the time flow, so the velocity of the imaginary flow is constant. Let V_t be the time velocity; the dropping position of the packages is determined by the delay time l from the expression X = −V_t·l. Hence the dropping density function of the prompt neutrons is given as follows:

w(X) = δ(X + V_t·l),  (25)

where δ(X) stands for a delta function. According to Eq.(8), the discrete values of W_k are given by

W_k = ∫_{−(k+1)ΔX}^{−kΔX} w(X)dX = ∫_{−(k+1)ΔX}^{−kΔX} δ(X + V_t·l)dX
= 1 if −(k+1)ΔX ≤ −V_t·l ≤ −kΔX, and 0 otherwise.  (26)

Fig. 6 Simulation mode of neuron for the stage of nuclear fission: (a) neurons for the stage of nuclear fission (prompt and delayed neutrons); (b) step response of nuclear fission without feedback (time axis 0.001 to 100 s)

Thus,

(Flow-out rate of the neuron for prompt neutrons at the present time t)
= q_p(t) ≈ Σ_{k=0}^{r} W_k·q(τ(s−k)) = W_f·q(τ(s−f)) = q(τ(s−f)) = q(t − l),  (27)

where f is the index satisfying −(f+1)ΔX ≤ −V_t·l ≤ −fΔX.

This is the output value of the neuron for prompt neutrons, which coincides with the previous explanation.

In our numerical study on the prompt neutrons, a value of 0.0001 s is assumed for l and 0.0001 s for the time interval ΔX/V_t.

(2) Simulation of Delayed Neutrons

The second neuron deals with the time delay of delayed neutrons. In this case, the neuron has many inputs, because various delay times must be considered. The output of the neuron is determined by the many collecting rates in the past. If all the precursors of delayed neutrons are assumed to have only one decay constant λ, the cumulative distribution function of the delay time (t − τ) becomes the decay curve exp{−λ(t − τ)}. This can be rewritten as exp(λX/V_t),


where the variable (t − τ) is changed to X, since the dropping position of the packages at a delay time (t − τ) is given by X = −V_t(t − τ). Accordingly, the dropping density function w(X) is derived by differentiation as follows:

w(X) = d{exp(λX/V_t)}/dX = (λ/V_t)·exp(λX/V_t).  (28)

Now, the six groups of delayed neutrons are considered in our attempt. The cumulative distribution function of the delay time is given by

Σ_{g=1}^{6} (β_g/β)·exp{−λ_g(t − τ)},

where
β_g: fraction of the g-th group of delayed neutrons (g = 1, ..., 6)
β: fraction of all the delayed neutrons (β = Σ_{g=1}^{6} β_g)
λ_g: decay constant of the g-th group of delayed neutrons (g = 1, ..., 6).

Accordingly, the dropping density function w(X) is given by

w(X) = Σ_{g=1}^{6} (β_g/β)·(λ_g/V_t)·exp(λ_g·X/V_t),

and, integrating over each cell as in Eq.(8),

W_k = Σ_{g=1}^{6} (β_g/β)·{exp(−λ_g·k·Δτ) − exp(−λ_g·(k+1)·Δτ)}  (k = 0, 1, ..., r),  (29)

where Δτ is defined by Δτ = ΔX/V_t. The Δτ is a constant, since the imaginary flow velocity is invariant for this case. Now the following equation is obtained.

(Flow-out rate of the neuron for delayed neutrons at the present time t)
= q_d(t) ≈ Σ_{k=0}^{r} W_k·q(τ(s−k)).  (30)

This is the output value of the neuron for delayed neutrons.

In our numerical study on the delayed neutrons, the time τ is taken back 70 s into the past for the input data q(τ), and the time interval Δτ is assumed to be 0.2 s. Accordingly, the number of inputs of the neuron for delayed neutrons is given by (L/V_t)/Δτ = 70/0.2 = 350.
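Integrating the six-group density of Eq.(29) over each cell gives the weights as differences of exponentials. A sketch with illustrative six-group constants follows (the relative fractions and decay constants below are commonly quoted thermal-fission values, not taken from the paper):

```python
import math

# Illustrative six-group constants (commonly quoted thermal-fission
# values, NOT taken from the paper): relative yields beta_g/beta and
# decay constants lambda_g in 1/s.
REL_YIELD = [0.033, 0.219, 0.196, 0.395, 0.115, 0.042]
LAMBDA = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]

def delayed_weights(d_tau=0.2, n_inputs=350):
    """W_k of Eq.(29): integrating each group's exponential density over
    the k-th cell gives a difference of exponentials of the discrete
    delay times k*d_tau and (k+1)*d_tau."""
    return [sum(b * (math.exp(-lam * k * d_tau)
                     - math.exp(-lam * (k + 1) * d_tau))
                for b, lam in zip(REL_YIELD, LAMBDA))
            for k in range(n_inputs)]
```

With Δτ = 0.2 s and 350 inputs (70 s of history, as in the text), the weights are positive, strictly decreasing, and sum to just under one; the small deficit is the precursor fraction that has not yet decayed after 70 s.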

(3) Simulation of the Stage of Nuclear Fission

The value of q(t) can be calculated from the values of q_p(t) and q_d(t) by the equation q(t) = K_eff·{(1−β)·q_p(t) + β·q_d(t)}. Then, the transient behavior of the neutron density or fission rate can be simulated by using Eqs.(27) and (30) under a given K_eff.

Here, a simple comparison of results is conducted between the PFM method using the neural network representation and the conventional analytical method(3) using the one-point approximation. Several transients without reactivity feedback are numerically studied for the neutron density in the reactor core, when a step-change reactivity is inserted after a long steady state. Figure 6(b) shows the comparison, and fairly good agreement is found.
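The closed loop of Eqs.(27) and (30) with the K_eff relation can be iterated directly. Below is a qualitative sketch of the step response (our own simplifications: one effective delayed-neutron group and the prompt delay collapsed onto the coarse Δτ grid, so this only mimics the trend of Fig. 6(b), not the paper's fine-grid results):

```python
import math

def fission_step_response(k_eff=1.001, beta=0.0065, lam=0.08,
                          d_tau=0.2, t_past=70.0, t_end=60.0):
    """Step response of the fission stage without feedback:
    q = K_eff * {(1-beta) q_p + beta q_d}, with the delayed neuron of
    Eq.(30) using a one-group density (Eq.(28)) and the prompt delay
    approximated by one coarse step of the d_tau grid."""
    n = int(t_past / d_tau)
    # one-group W_k of Eq.(29): difference of exponentials
    W = [math.exp(-lam * k * d_tau) - math.exp(-lam * (k + 1) * d_tau)
         for k in range(n)]
    hist = [1.0] * n          # long steady state at q = 1 before the step
    out = []
    for _ in range(int(t_end / d_tau)):
        q_p = hist[0]                                   # prompt: newest past value
        q_d = sum(Wk * qk for Wk, qk in zip(W, hist))   # delayed neuron output
        q = k_eff * ((1.0 - beta) * q_p + beta * q_d)
        hist = [q] + hist[:-1]
        out.append(q)
    return out
```

For K_eff slightly above one, the fission rate rises monotonically on a delayed-neutron time scale, qualitatively like the step responses of Fig. 6(b).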

3. Case Studies of the Learning Mode of PFMs

(1) Learning for the Stage of Heat Transfer in the Fuel Rods

The next example is from the stage of heat transfer in the fuel rods. It was presumed, in this analysis, that the heat energy flows from the center of the fuel rods to their surface. This flow could be analyzed by a PFM(1). Figure 7 shows the neuron for this stage. Figure 8(a) shows the improving process of the dropping density function through the learning. The input data to the neuron are the transient data of the neutron density in the reactor core, i.e. the fission rate, which are drawn in Fig. 8(b). Here, the output of the neuron, giving the transient fuel heat flux, is calculated. The learning process of the neuron improves the output by using the result of a transient behavior of the nuclear ship “MUTSU” reactor system as the supervisor data. The data are obtained by the Nuclear Ship Engineering Simulation System NESSY(4).

Figure 8(b) shows the calculated result of the fuel heat flux by the PFMs method. The simulated curves can nearly reach the desired output very quickly after

Fig. 7 A neuron for the stage of heat transfer in the fuel (inputs: the given fission rate q(t − kΔτ); output: the fuel heat flux)

VOL. 33, NO. 9, SEPTEMBER 1996


684 H. MATSUOKA and M. ISHIGURO

Fig. 8 Learning mode of neuron for the stage of heat transfer in the fuel rods ((a) weight-value distribution after the 1st and 2nd learnings; (b) fuel heat flux: given fission rate, desired output, and simulated curves)

a few learning iterations. As shown in Fig. 8(b), the improved dropping density functions give results close to the desired output. The main factor causing the time delay of the energy transfer from the fissions to the coolant is concentrated within a ten-second delay, and the peak of the time-delay distribution lies at about several seconds. This is useful information for understanding the actual behavior of the subsystem and for obtaining hints about the physical process:

- The key process of energy transfer from the fissions results in time delays of around several seconds.

- No peak is found at zero delay time. Although prompt energy-transfer processes to the coolant, such as γ-heating, may occur at the same time as the fissions, almost all the energy released by the fissions is first absorbed in the fuel pellets. The pellets are separated from the coolant by the cladding tube and by the gap between the pellets and the cladding, which require a non-zero delay time of heat transfer.

(2) Learning for the Stage of Heat Transfer in the Reactor Core and the Coolant Loop

This stage can be described by a combination of four PFMs. Accordingly, the corresponding neural network consists of four neurons, as shown in Fig. 9. In this study, the "MUTSU" reactor system is considered again, and the core-outlet enthalpy is calculated as the summation of the outputs of the first two neurons. The input data of the first neuron are the transient data of fuel heat flux drawn in Fig. 10(a). The input data of the last neuron are the transient data of the steam generator output, which are calculated separately by NESSY(4). The initial weight distributions are assumed as shown in

Fig. 9 Neural network for the stage of heat transfer in the reactor core and the coolant loop (figure: given fuel heat flux and steam generator output feed four neurons with initial weight distributions; the first two outputs sum to the core-outlet enthalpy)

Fig. 9.

Figure 10(a) shows the calculated core-outlet enthalpy, which has three peaks before learning. The second and third peaks seem to be caused by the first peak through the circulation of the coolant around the loop, because the interval between the plotted peaks is nearly equal to the coolant circulation time. The first peak is caused by a single pulse-like heat generation from the fuel rods. Accordingly, the initial transient of the first peak depends only on neuron A. Thus, the weights of neuron A are adjusted so that the simulated curve fits the initial part of the desired curve. The learned result is drawn in Fig. 10(b), which shows fairly good agreement over the first ten seconds. From this it is seen that the key process determining the gradient of the initial heat-up of the coolant is contained in the energy-transfer processes from the fuel to the core outlet.

Next, we change the weights of neuron B. Its initial weight distribution was assumed to be a δ-function, i.e. heat diffusion during the circulation of the coolant through the loop was not considered. After the learning of neuron B, the magnitude of the diffusion effect in the coolant loop can be estimated from the difference between the initial simulated curve and the improved curve, because adjusting the weights of neuron B amounts to changing the heat-diffusion effect there.
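Neuron-wise learning on a chain can be sketched as follows, echoing the order of operations in the text: neuron B starts as a δ-function (a pure one-step delay), neuron A is fitted first against the desired curve, and B is adjusted afterwards. The chain, signals, and LMS-style update below are toy assumptions, not the NESSY "MUTSU" data or the paper's algorithm.

```python
import math

def convolve(w, x, t):
    """Linear PFM neuron: weighted sum over past inputs."""
    return sum(wi * x[t - i - 1] for i, wi in enumerate(w))

def fit(w, inp, target, t_lo, t_hi, rate=0.05, sweeps=300):
    """Adjust only this neuron's weights on the window [t_lo, t_hi)."""
    for _ in range(sweeps):
        for t in range(t_lo, t_hi):
            err = convolve(w, inp, t) - target[t]
            for i in range(len(w)):
                w[i] -= rate * err * inp[t - i - 1]
    return w

N = 80
x = [math.sin(0.4 * t) + 1.0 for t in range(N)]          # given input
wa_true, wb_true = [0.5, 0.3, 0.2], [0.6, 0.3, 0.1]      # assumed "truth"
ya_true = [convolve(wa_true, x, t) if t >= 3 else 0.0 for t in range(N)]
d = [convolve(wb_true, ya_true, t) if t >= 6 else 0.0 for t in range(N)]

# Step 1: with B = delta (one-step delay), the chain output is A's output
# delayed by one step, so A can be fitted directly against d shifted by one.
wa = fit([1.0, 0.0, 0.0], x,
         [d[t + 1] if t + 1 < N else 0.0 for t in range(N)], 6, N - 1)
# Step 2: freeze A, recompute its output, and adjust only neuron B.
ya = [convolve(wa, x, t) if t >= 3 else 0.0 for t in range(N)]
wb = fit([1.0, 0.0, 0.0], ya, d, 6, N)
yb = [convolve(wb, ya, t) if t >= 6 else 0.0 for t in range(N)]
```

The point of the sketch is the order of operations: only one neuron's weights are touched at a time, so any change in the simulated curve can be attributed to that neuron alone.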

JOURNAL OF NUCLEAR SCIENCE AND TECHNOLOGY


Package Flow Models by Neural Network Representation 685

Fig. 10 Learning mode of neural network for the stage of heat transfer in the reactor core and the coolant loop ((a) initial calculation: given heat flux, desired output, and core-outlet enthalpy before learning; (b) learning of neuron B: core-outlet enthalpy after the learning of neuron A and after the 2nd learning)

V. CONCLUSIONS

From the several case studies of the Package Flow Models by neural network representation, the method is shown to be effective in improving the intuitive understandability and the ease of adjustment in the dynamic analysis of PWR subsystems. The features of our method are summarized below.

1. Simulation Mode of Package Flow Model

The dynamic behavior of subsystems described by PFMs can be simulated by calculating the corresponding neural network. Accordingly, the calculation can be performed by repeating the same kind of simple operation, i.e. weighted summation.

2. Learning Mode of Package Flow Model

(1) PFMs by neural network representation can realize more precise simulation after learning. Furthermore, the learning process can be performed automatically by an algorithm appropriate for the network.

(2) The learning algorithm proposed here is effective, since the simulated curves quickly approach the desired output within several learning iterations.

(3) The learned optimum weight-value distribution, which becomes the improved dropping density function, gives more precise information on the distribution of the delay time of entity flow in the subsystems. This information is sometimes useful for considering the physical mechanism of the actual system, because delay time is one of the basic parameters characterizing the dynamic behavior of various physical processes.

In addition, through the learning process, it can be seen how much a given change of the dropping density function affects the simulated results. This facilitates evaluating the impact on the simulated results of actual deviations or measurement errors in the physical process of the subsystem.

(4) Neuron-wise learning, or more generally partial learning of the corresponding neural network, sometimes helps us gain a deeper understanding of the dynamic behavior of the subsystem, because the information about "which neuron influences which part of the simulated curve" enables us to systematically find the inadequate understanding of the physical process that may cause a discrepancy between the simulated and the desired results. In addition, the magnitude of the effect can be known intuitively by comparing the initial simulated curve with the improved curve after the learning.

(5) As shown in Chap. III, the supervisor data can be given at a point where a kind of non-linear function is applied to the output of the linear neuron. This implies that non-linear neurons can be used, and then the present method can be applied to non-linear systems. In addition, the general form of the error function proposed in Eq. (10) enables us to take into consideration the magnitude of the uncertainty included in the supervisor data, by giving the goal of learning as Eg1. These are the main features of our method. Furthermore, as to the learning algorithm, various other methods developed in the area of neural networks(6) may be applied, as long as the system is represented by a PFM network.

-REFERENCES-

(1) Matsuoka, H.: Nucl. Technol., 94, 228-241 (1991).
(2) Matsuoka, H., Ishiguro, M.: J. Nucl. Sci. Technol., 33[1], 26-33 (1996).
(3) Springer, T.E.: LA-3802, (1964).
(4) Kusunoki, T., et al.: JAERI-M 93-223, (1993).
(5) Matsuoka, H., Ishiguro, M.: Concept of package flow models by neural network representation for the simulation of nuclear reactor system dynamics, Proc. 3rd Workshop on Supersimulators for Nuclear Power Plants, 65-74 (1995).
(6) Baldi, P.F., Hornik, K.: IEEE Trans. Neural Networks, 6[4], 837-858 (1995).
