
  • 2010 International Conference on Computer Application and System Modeling (ICCASM 2010)

    Design and Realization of Optimization System of Urea Production Process Based on BP Neural Network

    Zhang Yu The Third Department

    Southwest Institute of Technical Physics Chengdu, China

    [email protected]

    Abstract-Optimization of the urea production process is important for urea production in our country, and it is widely needed: improving quality, raising output and reducing cost all require optimization of the production process. In this paper we therefore treat the urea production process as an unknown nonlinear function and obtain, by function approximation, a BP neural network model that can be used for optimization. The cyclic variable method is then used to optimize this model, and we demonstrate the validity of the approach. Finally, an optimization system for the urea production process is designed and realized on the basis of these theories using software engineering methods. The system performs a large amount of calculation and is applied to offline optimization problems.

    Key Words-urea production process; optimization; BP neural network; cyclic variable method

    I. INTRODUCTION

    Urea is mainly applied to agricultural production as a fertilizer. At present, urea accounts for more than one third of world nitrogen fertilizer production, and China's urea output is one third of the world's; this amount is still increasing. China's urea industry was developed mainly around the circulating water process, which is shown in Figure 1 below:

    Figure 1. Urea production process (flow diagram; stages include wastewater treatment and product expulsion)

    The urea production process is very complex, with many random factors and influences, which makes it difficult to determine a precise mathematical model. As a nonlinear representation, a neural network can approximate any continuous function with arbitrary precision for information processing and pattern recognition, and has therefore attracted increasing attention.

    Yu Liang Computer School

    Sichuan University Chengdu, China

    In neural network modeling, the urea production process can be regarded as a black box that directly reflects the relation between the input and output of the system while bypassing questions of detail.

    II. NEURAL NETWORK METHOD FOR FUNCTION APPROXIMATION OF THE UREA PRODUCTION PROCESS

    Traditional identification methods are very difficult to apply to general nonlinear systems, but neural networks provide a powerful tool. The essential idea is to select a suitable neural network model to approximate the actual system. Establishing a neural network depends mainly on two factors: the network topology and the network learning rule; together they are the major determinants of the network.

    A. Specific improved algorithm

    This paper uses an improved Back Propagation (BP) algorithm for modeling, and the network structure is of type 8 × 8 × 1, i.e. a three-layer structure (input layer, hidden layer and output layer). The input layer has 8 nodes, the output layer has 1 node, and the number of hidden nodes is determined in practical application. The topological structure is shown in Figure 2 below:

    Figure 2. BP neural network topology structure (input layer, hidden layer, output layer; output y)

    The specific improved algorithm of BP neural network is shown below:

    978-1-4244-7237-6/10/$26.00 ©2010 IEEE    V10-368


    (1) Create network

    ① The eight input neurons are: temperature on the top of the synthetic tower, temperature on the bottom of the synthetic tower, the proportion between NH3 and CO2, the proportion between H2O and CO2, NH3 concentration in the exit, CO2 concentration in the exit, carbamate concentration in the exit, and CO2 concentration in liquid dimethylamine; they are represented by x1–x8 in the network. The single output neuron is the CO2 conversion rate, represented by y in the network. The m hidden neurons are represented by h1–hm. The actual output of original sample p is represented by yd(p).

    ② The activation function of the hidden layer is f(x) = sigmoid(x), and the activation function of the output layer is g(x) = purelin(x).

    ③ The weights from the input layer to the hidden layer are iw1ij, the weights from the hidden layer to the output layer are iw2j, the thresholds of the hidden layer are b1j, and the threshold of the output layer is b2.

    ④ We then set the learning rate μ, the momentum factor α, the allowable error E, and the upper limit of iterations N.

    (2) Train network

    The BP neural network is activated by applying the inputs x1(p), x2(p), …, x8(p) and the expected output yd(p); the weights and thresholds are then adjusted in the backward direction.

    ① Calculate the actual outputs of the hidden neurons:

    hj(p) = f[ Σ(i=1..8) xi(p) × iw1ij(p) − b1j ]

    ② Calculate the actual output of the output neuron:

    y(p) = g[ Σ(j=1..m) hj(p) × iw2j(p) − b2 ]

    ③ Calculate the error slope of the output neuron:

    δ(p) = y(p) × [1 − y(p)] × e(p),  where e(p) = yd(p) − y(p)

    ④ Calculate the corrections of the weights of the output layer:

    Δiw2j(p) = μ × hj(p) × δ(p)

    ⑤ Update the weights of the output layer:

    iw2j(p + 1) = iw2j(p) + α × Δiw2j(p)

    ⑥ Calculate the error slopes of the hidden neurons:

    δj(p) = hj(p) × [1 − hj(p)] × δ(p) × iw2j(p)

    ⑦ Calculate the corrections of the weights of the hidden layer:

    Δiw1ij(p) = μ × xi(p) × δj(p)

    ⑧ Update the weights of the hidden layer:

    iw1ij(p + 1) = iw1ij(p) + α × Δiw1ij(p)

    (3) Update learning style

    ① Calculate the network error:

    E(p) = Σ(i=1..P) [yd(i) − y(i)]²

    ② Judge whether the network has converged. If E(p) ≤ E, learning has succeeded: we export the network parameters and the algorithm ends. Otherwise the algorithm continues to the next step.

    ③ Update the learning rate:

    If E(p) < E(p − 1), μ(p) = 1.05 × μ(p − 1)

    If E(p) > 1.04 × E(p − 1), μ(p) = 0.7 × μ(p − 1)

    ④ Judge whether the upper limit of iterations is reached. If so, learning has failed: we export the network parameters and the algorithm ends. Otherwise, return to step (2) and start the learning process again.
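    As an illustration, the training loop above can be sketched in Python. This is a reconstruction under the notation of this section, not the authors' code; because the output activation is purelin, the output error slope is taken here as the error e(p) itself rather than the sigmoid form.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_improved_bp(X, yd, n_hidden=8, mu=0.1, alpha=0.95,
                      err_limit=0.001, n_iter=100, seed=0):
    """Train an 8 x n_hidden x 1 network: sigmoid hidden layer,
    linear (purelin) output, momentum on the weight steps, and the
    adaptive learning-rate rule from step (3) of the text."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    iw1 = rng.uniform(-0.5, 0.5, (n_in, n_hidden))  # input -> hidden weights
    b1 = rng.uniform(-0.5, 0.5, n_hidden)           # hidden thresholds
    iw2 = rng.uniform(-0.5, 0.5, n_hidden)          # hidden -> output weights
    b2 = rng.uniform(-0.5, 0.5)                     # output threshold
    d_iw1 = np.zeros_like(iw1)
    d_iw2 = np.zeros_like(iw2)
    prev_err = np.inf
    err = np.inf
    for _ in range(n_iter):
        for x, y_d in zip(X, yd):
            h = sigmoid(x @ iw1 - b1)       # hidden outputs h_j(p)
            y = h @ iw2 - b2                # linear output y(p)
            delta = y_d - y                 # error slope for a purelin output
            delta_h = h * (1.0 - h) * delta * iw2   # hidden error slopes
            # momentum: new step = mu * gradient + alpha * previous step
            d_iw2 = mu * h * delta + alpha * d_iw2
            d_iw1 = mu * np.outer(x, delta_h) + alpha * d_iw1
            iw2 += d_iw2
            iw1 += d_iw1
            b2 -= mu * delta
            b1 -= mu * delta_h
        err = np.sum((yd - (sigmoid(X @ iw1 - b1) @ iw2 - b2)) ** 2)
        if err <= err_limit:                # converged: learning succeeded
            break
        if err < prev_err:                  # adaptive learning rate, step (3)
            mu *= 1.05
        elif err > 1.04 * prev_err:
            mu *= 0.7
        prev_err = err
    return iw1, b1, iw2, b2, err
```

    With the settings used in section B below (μ = 0.1, α = 0.95, E = 0.001, N = 100), the call is simply train_improved_bp(X, yd).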

    B. Compare the improved algorithm and the standard algorithm

    In order to verify that the improved BP algorithm performs better than the standard BP algorithm, we trained networks on 400 groups of production data with both algorithms, five times each, and compared the training results. We set the network structure as 8 × 8 × 1, the learning rate μ = 0.1, the momentum factor α = 0.95, the allowable error E = 0.001, and the upper limit of iterations N = 100. The comparison results of network training are shown in Table 1 below:

    TABLE I. THE COMPARISON RESULTS OF NETWORK TRAINING

    Training algorithm       Training error (runs 1-5)                                Output absolute error of the optimal network
    Improved BP algorithm    0.038036   0.044992   0.045852   0.046915   0.044315    0.0571
    Standard BP algorithm    0.204823   0.1287     0.201024   0.211278   0.196266    0.194962

    The comparison of the two algorithms shows that both the training error and the output absolute error of the improved BP algorithm are much smaller than those of the standard BP algorithm.

    III. APPLICATION EXAMPLES AND ANALYSIS OF THE OPTIMIZATION SYSTEM OF THE UREA PRODUCTION PROCESS

    This paper selects 498 groups of production data from a nitrogenous fertilizer factory in 2008 as application samples for the system.

    A. Examples of application

    First, the original sample data are divided into two parts: four fifths of the samples are randomly selected as the training collection, and the rest form the verification collection.
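    A minimal sketch of this four-fifths random split (illustrative; split_samples is a hypothetical helper, not part of the paper's system):

```python
import random

def split_samples(samples, train_fraction=0.8, seed=42):
    """Randomly take four fifths of the samples as the training
    collection; the rest form the verification collection."""
    shuffled = samples[:]                  # copy, keep the original order
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# e.g. the 498 sample groups used in this section
train, verify = split_samples(list(range(498)))   # 398 / 100 split
```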

    Next, function approximation is executed using the training collection data as discrete points. Based on experience, we set the learning rate μ = 1, the momentum factor α = 0.95, the allowable error E = 0.001, the upper limit of iterations N = 1000, the number of parallel runs m = 20, and the number of hidden neurons as 8. After the network model is created and trained, the network errors of the 20 parallel training runs are shown in Table 2 below:

    TABLE II. THE NETWORK ERRORS OF 20 TIMES PARALLEL TRAINING

    Run   Training error     Run   Training error
    1     0.037457           11    0.044517
    2     0.044581           12    0.046298
    3     0.050142           13    0.044808
    4     0.044062           14    0.044012
    5     0.045251           15    0.044959
    6     0.044478           16    0.044855
    7     0.044390           17    0.046530
    8     0.048066           18    0.044355
    9     0.044214           19    0.044665
    10    0.045163           20    0.044524

    From the results of the 20 parallel runs, the minimum network error, MSEmin = 0.037457, is obtained. The network model proves ideal, with correlation coefficient R = 0.9989 and mean absolute error MAE = 0.0529.
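    The two figures of merit, R and MAE, are standard quantities; computed over the verification data they can be obtained as follows (the function name is illustrative):

```python
import math

def correlation_and_mae(y_true, y_pred):
    """Pearson correlation coefficient R and mean absolute error MAE."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mean_t) ** 2 for t in y_true)
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    r = cov / math.sqrt(var_t * var_p)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return r, mae
```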

    After the model is created, the maximum number of iterations is set to m = 1000. Then, according to the actual production conditions, the interval accuracies of the 8 input attributes are set as 1, 1, 0.1, 0.1, 0.1, 0.1, 0.1 and 0.1. The optimization intervals obtained from the optimization calculation are shown in Table 3 below:

    TABLE III. THE OPTIMIZATION INTERVALS

    Attribute (unit)                                      Lower limit   Upper limit
    Temperature on the top of synthetic tower (°C)        154.3970      155.2059
    Temperature on the bottom of synthetic tower (°C)     168.1734      169.0000
    Proportion between NH3 and CO2 (%)                    3.8200        4.7700
    Proportion between H2O and CO2 (%)                    1.1500        1.9700
    NH3 concentration in the exit (%)                     25.6000       26.3618
    CO2 concentration in the exit (%)                     10.0400       10.7564
    Carbamate concentration in the exit (%)               28.7981       29.5465
    CO2 concentration in liquid dimethylamine (%)         26.1212       26.7500
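    The optimization calculation itself uses the cyclic variable method. A minimal sketch under stated assumptions (not the authors' implementation): here f stands for the trained network model predicting CO2 conversion, bounds for the attribute ranges, and steps for the interval accuracies 1, 1, 0.1, ... given above.

```python
def cyclic_variable_search(f, bounds, steps, max_iter=1000):
    """Cyclic variable method: optimize one input attribute at a
    time on its grid (given by the interval accuracy) while holding
    the others fixed; cycle until a full pass brings no improvement."""
    x = [(lo + hi) / 2.0 for lo, hi in bounds]   # start at interval midpoints
    best = f(x)
    for _ in range(max_iter):
        improved = False
        for i, ((lo, hi), step) in enumerate(zip(bounds, steps)):
            v = lo
            while v <= hi + 1e-9:                # scan attribute i's grid
                trial = x[:]
                trial[i] = v
                val = f(trial)
                if val > best:
                    best, x, improved = val, trial, True
                v += step
        if not improved:
            break
    return x, best
```

    A call with max_iter=1000 mirrors the m = 1000 iteration limit set above.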

    B. The optimization results analysis

    Optimization data are created from the optimization ranges in the last section, and these data are then fed into the created model to simulate and obtain predictions. We then determine whether the ratio of samples reaching the expectation to total samples has improved greatly compared with the original data; this is the criterion for judging whether the optimization effect is good. The specific solution is as follows:

    1) Create 5 groups of simulation samples; each group contains 500 samples.

    2) According to the requirements of optimized production, the optimal class sample value is set to 62.0. The optimal class rates in the calculation results are shown in Table 4 below:

    TABLE IV. THE OPTIMAL CLASS RATES OF 5 GROUPS OF SIMULATION DATA

    Group   Minimum value   Maximum value   Optimal class rate
    1       61.64           64.69           99.4%
    2       61.35           64.69           99.8%
    3       61.36           64.69           99.0%
    4       60.46           64.69           99.4%
    5       60.87           64.69           99.0%

    The optimal class rate of the third group, 99.0%, is the minimum.

    3) The number of original samples is 498, and the number of samples that reach the optimal class sample value is 91, so the optimal class rate of the original samples is 18.3%.

    4) From steps 2) and 3), the ratio of optimal class rates is calculated as 99.0% / 18.3% = 5.41.

    5) Because the ratio of optimal class rates calculated in step 4) is much greater than 1, the optimization effect is good.
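    The arithmetic of steps 2) to 5) can be reproduced directly (all values are taken from the text; optimal_class_rate is an illustrative helper):

```python
def optimal_class_rate(values, threshold=62.0):
    """Fraction of samples whose CO2 conversion reaches the
    optimal class sample value (62.0 in the text)."""
    return sum(v >= threshold for v in values) / len(values)

# Original samples: 91 of 498 reach the optimal class (step 3).
original_rate = round(91 / 498, 3)          # 0.183, i.e. 18.3 %
# Worst of the five simulated groups (Table 4, step 2).
simulated_rate = 0.990
ratio = simulated_rate / original_rate      # about 5.41, much greater than 1
```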

    IV. CONCLUSION

    This paper takes the urea production process as its research object. From the study of the urea production system, a BP neural network model is established. On this basis, the optimization calculations of



    the eight attributes that influence CO2 conversion are completed. Finally, the model is proved to be effective by simulation.

