
JOURNAL OF MODERN OPTICS, 1998, VOL. 45, NO. 11, 2301-2313

Restoring images using a feed-forward neural network and the gradient adaptive learning rate algorithm

ADEL Y. E. ATTA and HANI M. HAARB

Azhar University, Faculty of Engineering, Cairo, Egypt

(Received 24 January 1997; revision received 4 March 1998)

Abstract. The problem of restoring high-quality images from degraded imaging systems without any prior knowledge of the degradation model or the noise statistics is considered. A novel training algorithm, the gradient adaptive learning rate (GALR), is employed to train a feed-forward neural network that serves as the nonlinear restoration model. A linearized model of the neurons' sigmoidal activation function is utilized to speed up the convergence of the algorithm. Restoration is accomplished by exploiting the generalization capabilities of the network. Ideal data sets representing the flat and edge regions that occur in real images are proposed for training the network, while real images are used only during operation of the system. Computer simulation examples are given to illustrate the significance of this method, and a comparison with one of the conventional restoration methods is presented. Simulation results indicate that the proposed method performs better than competing methods.

1. Introduction

Restoration of images degraded by the physical imaging system, image digitization and the image display system is a basic task in computer vision and imaging systems. Potential degradations include diffraction in the optical system, sensor nonlinearities, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur and geometric distortions. Noise disturbances may be caused by electronic imaging sensors or film granularity [1]. Basically, the procedure is to model the image degradation in order to obtain a restored image. Various techniques, such as the singular value decomposition (SVD) pseudoinverse filter, the Kalman filter, the Wiener filter, the minimum mean square error (MMSE) filter and many other model-based approaches, have been proposed for image restoration [2]. These assume that the degradation model and the noise statistics are known. In real situations, neither assumption need hold. Some work has been done using adaptive techniques that make no assumptions about the degradation model, but the noise statistics are still assumed known [3].

It has been hypothesized that the brain, with its highly parallel computational architecture, might offer a solution through the use of neural networks to restore degraded images effectively without assuming knowledge of the degradation model and noise statistics. To the best of our knowledge, little work has been done on restoring images using neural networks. Most remarkable is the work of Zhou and Chellappa [4], who used a Hopfield network for restoration. This method again assumes that the blurring model is known.


Sivakumar and Desai [5] used a multilayer perceptron network trained with the back-propagation (BKP) algorithm. They used a multilevel sigmoidal function to cover the grey-scale range of the images and assumed no knowledge of the blurring function or the noise statistics, but all their reported simulation results were based on binary images of very limited size. Greenhill and Davies [6] used neural networks to remove noise. Other image-processing fields, such as enhancement, coding and remote sensing, have also attracted interest in the use of neural networks [7-14].

In this paper, we consider the problem of restoring real images without any knowledge of the degradation model and the statistics of noise. We propose a restoration system that uses a feed-forward neural network trained with a robust training algorithm. A linearized model of the sigmoidal activation function is employed for all neurons in the hidden and output layers. A training methodology has been set up for designing a training data set based on the ideal grey values that would be present in flat and edge regions of real images, so that the network is trained on both flat and edge characteristics; real images are used only during testing and operation of the system.

The organization of this paper is as follows. The next section describes the gradient adaptive learning rate (GALR) algorithm, which is the foundation of the neural system. Section 3 presents the structure of the neural restoration system, the linearized neuron characteristic and the training-data-set methodology. Section 4 gives simulation results for the training algorithm and the restoration network, with a comparison to one of the conventional methods used for image restoration. Section 5 contains the conclusion.

2. Gradient Adaptive Learning Rate (GALR) Training Algorithm

The GALR algorithm is based on the simple adaptive weight update rule commonly used in the back-propagation algorithm [15, 16], which can be expressed in the following form:

$$ w_{\mathrm{new}} = w_{\mathrm{old}} - \eta \, \frac{\partial E}{\partial w}, \qquad (1) $$

where $w_{\mathrm{new}}$ is the updated weight connecting a source neuron to a target neuron, $w_{\mathrm{old}}$ is the old weight, $\eta$ is the learning rate and $\partial E/\partial w$ is the gradient of the criterion error function $E$ with respect to the weight $w$ in the network. In this case, equation (1) can be written in iterated form as

$$ w_{ij}(n+1) = w_{ij}(n) - \eta \, \frac{\partial E}{\partial w_{ij}(n)}. \qquad (2) $$

Consider a multilayer feed-forward network with an arbitrary number of layers, $M$ neurons at the output layer, and batch-mode training over $L$ samples. The error function $E$ to be minimized is then the familiar sum-squared error over all the samples:

$$ E = 0.5 \sum_{k=1}^{L} \sum_{i=1}^{M} \left[ T_i(k) - Y_i(k) \right]^2, \qquad (3) $$

where $T_i(k)$ and $Y_i(k)$ are the $i$th network target and actual outputs, respectively, for sample $k$.


The principle is that the learning rate is made adaptive, as in equation (1), such that [17]

$$ \eta(n+1) = \eta(n) - \alpha \, \frac{\partial E(n)}{\partial \eta(n)}, \qquad (4) $$

where $\alpha$ is a constant and $\partial E(n)/\partial \eta(n)$ is the gradient of the error function with respect to the learning rate $\eta$.

So, to achieve optimal convergence, the updating of different weights $w_{ij}$ should use different learning rates $\eta_{ij}$. Therefore, equations (2) and (4) become

$$ w_{ij}(n+1) = w_{ij}(n) - \eta_{ij}(n) \, \frac{\partial E(n)}{\partial w_{ij}(n)}, \qquad (5) $$

$$ \eta_{ij}(n+1) = \eta_{ij}(n) - \alpha \, \frac{\partial E(n)}{\partial \eta_{ij}(n)}. \qquad (6) $$

To compute $\partial E(n)/\partial \eta_{ij}(n)$, we may use the chain rule as follows:

$$ \frac{\partial E(n)}{\partial \eta_{ij}(n)} = \frac{\partial E(n)}{\partial w_{ij}(n)} \, \frac{\partial w_{ij}(n)}{\partial \eta_{ij}(n)}. \qquad (7) $$

The first term on the right of equation (7) is known from equation (5). The second term may be computed by taking the first derivative of equation (5) with respect to $\eta_{ij}(n)$, assuming that $\eta$ changes slowly. We can then replace the update rule (2) by the following updating rules:

$$ w_{ij}(n+1) = w_{ij}(n) - \eta_{ij}(n) \, \frac{\partial E(n)}{\partial w_{ij}(n)}, \qquad (10\,a) $$

$$ \eta_{ij}(n+1) = \eta_{ij}(n) - \alpha \, \frac{\partial E(n)}{\partial w_{ij}(n)} \, \frac{\partial w_{ij}(n)}{\partial \eta_{ij}(n)}, \qquad (10\,b) $$

$$ \frac{\partial w_{ij}(n+1)}{\partial \eta_{ij}(n+1)} = \frac{\partial w_{ij}(n)}{\partial \eta_{ij}(n)} - \frac{\partial E(n)}{\partial w_{ij}(n)}. \qquad (10\,c) $$

The constant $\alpha$ in equation (10 b) plays an important role in controlling the performance of the algorithm. The larger $\alpha$ is, the slower the convergence will be; when $\alpha$ is too small, instability occurs. Experiments have shown that choosing $\alpha \in [0.04, 0.1]$ gives good results.

Compared with the BKP algorithm, the GALR algorithm clearly increases the computational complexity. The dependence of each synapse on three parameters, the weight $w_{ij}$, the learning rate $\eta_{ij}$ and the weight gradient $\partial E(n)/\partial w_{ij}(n)$, results in more operations per cycle, but the advantage is that the number of learning cycles is reduced; hence the total number of operations is reduced and much faster convergence is achieved.


The algorithm is as follows:

(1) Initialize: a, w and 7. (2) Apply samples and compute E , aE/aw and aw/aq. ( 3 ) If E < 6(6 = was used in simulations) stop; otherwise go to step (4). (4) Update the parameters associated with each synapse as in equations (1 0 a ) ,

( 5 ) Go to step (2). ( l o b ) and (1Oc).
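To make the update rules concrete, here is a minimal sketch of this loop in Python/NumPy. The paper contains no code, so `galr_train`, `loss_fn` and `grad_fn`, the batch-gradient interface and the default constants are all our own illustrative choices, written against equations (10 a)-(10 c) as reconstructed above.

```python
import numpy as np

def galr_train(loss_fn, grad_fn, w, alpha=0.05, eta0=0.01,
               delta=1e-3, max_cycles=10_000):
    """Batch GALR training: one adaptive learning rate per weight,
    following equations (10 a)-(10 c).  loss_fn(w) returns E over the
    batch; grad_fn(w) returns dE/dw (same shape as w)."""
    eta = np.full_like(w, eta0)          # eta_ij(0)
    dw_deta = np.zeros_like(w)           # running estimate of dw_ij/deta_ij
    for _ in range(max_cycles):
        if loss_fn(w) < delta:           # step (3): stopping test
            break
        g = grad_fn(w)                   # dE/dw over the whole batch
        w = w - eta * g                  # (10 a): weight update
        eta = eta - alpha * g * dw_deta  # (10 b): learning-rate update
        dw_deta = dw_deta - g            # (10 c): update dw/deta estimate
    return w, eta
```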

3. The Neural Restoration System

3.1. System description

During the training phase, instead of applying the entire image at the input of the system, which would require a very large network and a lengthy training cycle, a small m × m window generator has been used (m = 3 in the computer simulations).

The neural restoration system consists of a feed-forward neural network with two layers. Neurons in any layer are connected only to neurons in the next layer. Figure 1 (a) shows a sketch of the neural system during training. It consists of an ideal image generator which generates ideal grey values in the range 0-255 representing possible flat and edge regions that would appear in real grey images. The output of the generator is fed to a degradation model.

The degradation model, detailed in figure 1 (b), consists of a blurring function and zero-mean additive random noise. The blurring function is convolved with the generator output, and the resulting blurred pixel is added to the random noise to produce the distorted pixel. The output of the degradation model is applied to the neural network, whose single output is the actual output of the system. The desired output of the system, the centre pixel of the ideal image generator, is compared with the actual output, and the error between the two is the criterion used by the training algorithm to update the network's weights. The neural network thereby learns the inverse characteristics of the degradation model and, with it, the restoration capability.
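As a concrete illustration of figure 1 (b), a sketch of the degradation step in Python/NumPy follows; the `degrade` helper and the use of scipy.signal are our assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(ideal, psf, noise_var, rng=None):
    """Convolve the ideal image with the blurring PSF and add
    zero-mean random noise, as in the model of figure 1 (b)."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve2d(ideal, psf, mode='same', boundary='symm')
    return blurred + rng.normal(0.0, np.sqrt(noise_var), ideal.shape)

psf = np.ones((3, 3)) / 9.0   # 3 x 3 uniform blurring function
```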

Figure 1. (a) Architecture of the restoration system during training. (b) Details of the degradation model.


Figure 2. The modified and conventional sigmoidal activation functions.

Two assumptions are made. First, the degradation model and the noise statistics are assumed to be the same for the ideal data during training and for the real images during operation. Second, the degradation undergone by the images is local, that is, each pixel is affected only by its neighbours; thus a degraded pixel can be restored from its neighbouring pixels alone.

During operation, real degraded images are applied at the input of the trained network. The network acts as a moving window that scans the entire degraded image sequentially, similar to the operation of the spatial-domain filters commonly used in image filtering [18].
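A rough sketch of this scanning operation is given below, assuming a trained callable `net` that maps one flattened m × m window (scaled to [0, 1]) to one restored pixel; all the names here are illustrative, not the authors' code.

```python
import numpy as np

def restore(degraded, net, m=3):
    """Slide an m x m window over the degraded image and let the
    trained network predict each restored centre pixel."""
    r = m // 2
    padded = np.pad(degraded.astype(float), r, mode='edge')  # borders
    out = np.empty(degraded.shape, dtype=float)
    for i in range(degraded.shape[0]):
        for j in range(degraded.shape[1]):
            window = padded[i:i + m, j:j + m] / 255.0  # scale input
            out[i, j] = 255.0 * net(window.ravel())    # restored pixel
    return out
```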

3.2. Piecewise linearization of the neuron's sigmoidal activation function

The neuron transfer function utilized in this research is a stretched sigmoidal function approximated in a piecewise-linear fashion. Initially, we failed to obtain a convergent network for the restoration problem with the conventional sigmoidal activation function, so we adopted the modified function of [19] to implement the network.

Figure 2 shows the modified sigmoidal function. With this function, the output of a hidden-layer neuron is simply a scaled weighted sum of the input-layer neurons, so it can be compared with the response of spatial-domain filters. The modified transfer function significantly reduced the convergence time during training and the computational time during operation.
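The paper does not reproduce the exact breakpoints of the piecewise-linear approximation from [19]; the sketch below only illustrates the general idea of a sigmoid replaced by a clipped linear segment, with an arbitrary slope.

```python
import numpy as np

def linearized_sigmoid(x, slope=0.25):
    """Piecewise-linear stand-in for the sigmoid: linear around the
    origin (so a neuron's output is a scaled weighted sum of its
    inputs) and saturating at 0 and 1.  The slope is illustrative."""
    return np.clip(0.5 + slope * x, 0.0, 1.0)
```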

3.3. Training methodology

The GALR algorithm described in section 2 has been utilized to train the restoration network. The network weights were initialized according to [20, 21]. Experimental simulations showed better convergence with this scheme than with the familiar method of initializing the weights randomly in the range $[-r, +r]$.

To train the network (the restoration model), a set of 1280 grey-level sample vectors with values ranging from 0 to 255 was chosen. Among this training set, 256 samples with equal grey values, from [0 0 ... 0] up to [255 255 ... 255], were used to train the network on possible flat areas that may be found in real images. The other 1024 samples were chosen to train the network on possible edge regions. The network input is scaled from the grey-level range to the range [0, 1].
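The 256 flat vectors are fully specified by the text, but the paper does not list the 1024 edge patterns; the step edges generated below (random grey-level pairs with vertical or horizontal cuts) are therefore our own assumption. A sketch:

```python
import numpy as np

def make_training_set(m=3, n_edges=1024, seed=0):
    """Build the 1280 grey-level sample windows: 256 flat windows
    plus n_edges edge windows (edge-pattern choice is illustrative)."""
    rng = np.random.default_rng(seed)
    flats = [np.full((m, m), g, dtype=float) for g in range(256)]
    edges = []
    for _ in range(n_edges):
        a, b = rng.integers(0, 256, size=2)    # two grey levels
        w = np.full((m, m), float(a))
        cut = int(rng.integers(1, m))          # edge position
        if rng.random() < 0.5:
            w[:, cut:] = float(b)              # vertical step edge
        else:
            w[cut:, :] = float(b)              # horizontal step edge
        edges.append(w)
    return np.stack(flats + edges) / 255.0     # scale to [0, 1]
```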


For proper generalization, we have noticed that the network has to be trained on at least 10% of the total number of possible image neighbourhood states. This empirical result was observed in our computer simulations on binary images. For the grey-image case, it was not feasible to prepare 10% of the $q^{m^2}$ possible image neighbourhood states, where $q$ is the number of grey levels (imagine the number of training samples needed when $q = 256$ and $m = 3$).

4. Simulation Results

4.1. Performance of the gradient adaptive learning rate algorithm

The algorithm described in section 2 was tested on a sine-function approximation problem. To provide a basis for comparison, the BKP algorithm was applied to the same problem. The configuration of the network was 1-10-1, that is, one input neuron, 10 hidden neurons and one output neuron. The sigmoidal activation function was used for the hidden-layer neurons, while a linear function was used for the output neuron. The network was trained with both algorithms to approximate the function

$$ y = 0.5 + 0.2 \sin(2 \pi x). $$

The training set consists of 50 input-output pairs. Batch training was used until the sum of squared errors fell below a preset threshold. The training curves shown in figure 3 are an average over five different sets of initial weights, randomized according to [21].
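For reference, the 50 training pairs for this benchmark can be generated as follows (a trivial sketch; the uniform sampling grid is our assumption):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)             # 50 input samples
y = 0.5 + 0.2 * np.sin(2.0 * np.pi * x)   # target outputs
```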

It is observed that the error fluctuates on its way down, and that the number of learning steps is greatly reduced when using the GALR algorithm compared with the BKP algorithm. It should be noted that figure 3 provides only limited information, since the two algorithms do not perform the same number of floating-point operations (FLOPs) per training cycle. Table 1 summarizes the results, showing the number of FLOPs per cycle and over all cycles.

Note that the BKP algorithm takes more than eight times as many FLOPs as the GALR algorithm over all cycles, while the GALR algorithm takes about three times as many FLOPs as the BKP algorithm per cycle. The comparison in figure 3 thus indicates much more pronounced acceleration for the proposed algorithm than for the BKP algorithm.

Figure 3. Training session of the sine approximation network using the GALR and BKP algorithms.


Table 1. Number of FLOPs required for convergence.

                    GALR       BKP
  Single cycle       435       127
  Overall cycles   52 200   422 540

4.2. System performance for grey-level images

We have developed three restoration networks corresponding to three types of degradation: blurring, additive noise, and both. The network configurations were 9-17-1, 9-19-1 and 9-22-1, respectively. We used the cascade-correlation algorithm [22-24] to determine the minimum number of hidden neurons for each network. The GALR algorithm was used to update the weights of each network, with the weights initialized according to [20], and each network was trained until a preset error threshold was reached.

We have carried out three sets of experiments on the test images shown in figure 4. Figure 4 (a) is the original image; its size is 256 × 512 with 256 grey levels and variance 625. Figure 4 (b) is the image of figure 4 (a) degraded by a 3 × 3 uniform blurring function, that is, $H(k, l) = 1/9$ for $|k|, |l| \le 1$. Figure 4 (c) is the image of figure 4 (a) degraded by additive Gaussian noise with zero mean and variance 400. Figure 4 (d) is the image of figure 4 (a) degraded by both the blurring function used in figure 4 (b) and the additive noise used in figure 4 (c).

The first set of experiments deals with removal of the blurring function from the image of figure 4 (b); the result is shown in figure 5 (a). Comparing figures 4 (b) and 5 (a), we can see that a significant amount of blurring has been removed and that the reconstructed image looks comparable with the original image in figure 4 (a). Furthermore, the proposed method retains most of the image details. Some artefacts appear on the left-hand side of the reconstructed image, and some small edge regions on the right-hand side are missing; we believe this is due to the limited generalization capabilities of the neural network.

The second set deals with removal of the additive noise from the image in figure 4 (c); the result is shown in figure 5 (b). Comparing figures 4 (c) and 5 (b) in the same way, we can see that the restoration scheme has removed the noise while preserving most of the image details. Some artefacts also appear in the reconstructed image.

The third set of experiments deals with removal of both the blurring function and the additive noise from the image of figure 4 (d); the result is shown in figure 5 (c). Comparing figures 4 (d) and 5 (c), we can see that the neural network is capable of deblurring and removing noise from the degraded image, and the reconstructed image looks comparable with the original. Some edges are missing, especially in the top right-hand part of the reconstructed image, for the same reason as in figure 5 (a).


Quantitative evaluation of the performance of the proposed scheme is given in table 2, which lists the rms error (RMSE) for the degraded and reconstructed images in the three cases above.

Table 2. RMSE for degraded and reconstructed images.

  Degradation type     Degraded   Reconstructed
  Blurring function      33.4          6.02
  Gaussian noise         19.94        82.37
  Both                   32.85        20.47

As observed from table 2, the RMSE values for the reconstructed images have been reduced significantly by the restoration networks, except in the noisy case.
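The RMSE figures in table 2 follow the usual definition; for completeness, a minimal sketch:

```python
import numpy as np

def rmse(reference, test):
    """Root-mean-square error between the original image and a
    degraded or reconstructed image."""
    diff = reference.astype(float) - test.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```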


Figure 4. The test images: (a) original image; (b) image degraded by a 3 × 3 blur function; (c) image degraded by Gaussian noise with zero mean and variance 625; (d) image degraded by a blurring function and Gaussian noise.

For comparison purposes, figure 6 shows the output of the MMSE method. In implementing this technique, we used the same blurring function and noise variance on the image in figure 4 (d).

It should be noted that some distortion is noticeable in figure 6. Some artefacts appear on the right-hand side of the image. Moreover, some edges are blurred.


This indicates visually that the neural network method gives better output than the MMSE method, although the RMSE value for the MMSE method is smaller than that for the neural method.

Figure 5. The results of restoration of the degraded images of figure 4: (a) restored image degraded by the blurring function; (b) restored image degraded by Gaussian noise; (c) restored image degraded by the blurring function and noise.

Figure 6. Restoration of a noisy blurred image using the MMSE method (RMSE, 23.8).


5. Conclusion

A neural restoration technique for real images using a feed-forward network and a robust training algorithm has been developed. The technique does not require any prior knowledge of the degradation model and noise statistics. It gives satisfactory performance when restoring real images that have been degraded by exactly the same degradation model as was used during the network's training.


Better performance can be achieved when the generalization capabilities of the neural system are improved by increasing the training-sample domain and the network size. Comparison with the MMSE method has shown that the proposed approach gives better visual output. The technique is currently being extended to include nonlinear degradation and other types of noise, impulsive and multiplicative. Moreover, other restoration techniques for additive noise and blurring will be compared with our approach.

References

[1] Savakis, A. E., and Trussell, H. J., 1994, IEEE Trans. Image Processing, 2, 141.
[2] Joseph, E., and Pavlidis, T., 1994, IEEE Trans. Image Processing, 2, 223.
[3] Kim, S. P., and Su, W. Y., 1993, IEEE Trans. Image Processing, 2, 534.
[4] Zhou, Y. T., and Chellappa, R., 1988, IEEE Trans. Acoust., Speech, Signal Processing, 36, 1141.
[5] Sivakumar, K., and Desai, U. B., 1993, IEEE Trans. Signal Processing, 41, 2018.
[6] Greenhill, D., and Davies, E. R., 1994, Pattern Recognition in Practice IV, edited by E. S. Gelsema and L. N. Kanal (Amsterdam: Elsevier).
[7] Muneyasu, M., Tsujii, S., and Hinamoto, T., 1994, Proceedings of the 1994 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 5 (New York: IEEE), pp. 57-60.
[8] Hungenahally, S., 1995, Proceedings of the 1995 IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, Vol. 5 (New York: IEEE), pp. 4626-4631.
[9] Solaiman, B., and Maillard, E. P., 1995, Proceedings of the 1995 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 5 (New York: IEEE), pp. 3447-3450.
[10] Corral, J. A., Guerrero, M., and Zuria, P. J., 1994, Proceedings of the 1994 IEEE International Conference on Neural Networks and IEEE World Congress on Computational Intelligence, Vol. 6 (New York: IEEE), pp. 4113-4118.
[11] Xu, M., and Kuh, A., 1995, Proceedings of the 1995 IEEE Symposium on Circuits and Systems, Vol. 3 (New York: IEEE), pp. 1632-1635.
[12] Amar, F., Dawson, M. S., Fung, A. K., and Chen, K. S., 1995, Proceedings of the 1995 IEEE International Geoscience and Remote Sensing Symposium: Quantitative Remote Sensing for Science and Applications, Vol. 1 (New York: IEEE), pp. 694-696.
[13] Ahmed, F., Gustafson, S. C., and Karim, M. A., 1995, Proceedings of the IEEE 1995 National Aerospace and Electronics Conference, Vol. 2, pp. 588-592.
[14] Forcia, V. L., Nirchio, G. Pasquariello, and Speranza, A., 1995, Proceedings of the 1995 IEEE International Geoscience and Remote Sensing Symposium: Quantitative Remote Sensing for Science and Applications, Vol. 2 (New York: IEEE), pp. 945-947.
[15] Buntine, W. L., and Weigend, A. S., 1994, IEEE Trans. Neural Networks, 5, 480.
[16] Hagan, M. T., and Menhaj, M. B., 1994, IEEE Trans. Neural Networks, 5, 989.
[17] Davila, C. E., 1994, IEEE Trans. Signal Processing, 42, 268.
[18] Chen, S., and Arce, G. R., 1993, IEEE Trans. Signal Processing, 41, 1021.
[19] Shirvaikar, M. V., and Trivedi, M. M., 1995, IEEE Trans. Neural Networks, 6, 252.
[20] Drago, G. P., and Ridella, S., 1992, IEEE Trans. Neural Networks, 3, 627.
[21] Weymaere, N., and Martens, J. P., 1994, IEEE Trans. Neural Networks, 5, 738.


[22] Fahlman, S. E., and Lebiere, C., 1990, Advances in Neural Information Processing Systems 2, edited by D. S. Touretzky (Los Altos, California: Morgan Kaufmann), pp. 524-532.
[23] Hoehfeld, M., and Fahlman, S. E., 1992, IEEE Trans. Neural Networks, 3, 602.
[24] Parlos, A. G., Fernandez, B., Atiya, A. F., Muthusami, J., and Tsai, W. K., 1994, IEEE Trans. Neural Networks, 5, 493.
