
    Complex-valued Bidirectional Auto-Associative Memory

    Yozo Suzuki and Masaki Kobayashi

    Abstract

Complex-valued Hopfield Associative Memory (CHAM) can store multi-valued patterns. However, CHAM stores not only the given training patterns but also many spurious patterns, such as their rotated patterns, at the same time. These rotated and other spurious patterns reduce the noise robustness of CHAM. In the present work, we propose Complex-valued Bidirectional Auto-Associative Memory (CBAAM) as a model of auto-associative memory with improved noise robustness. CBAAM consists of two layers. Although the structure of CBAAM is that of a Bidirectional Associative Memory (BAM), CBAAM works as an auto-associative memory, because one layer is a visible layer and the other is an invisible layer. The visible layer consists of complex-valued neurons and can process multi-valued patterns. The invisible layer consists of real-valued neurons and can reduce spurious memories such as rotated patterns. Thus, CBAAM has strong noise robustness. In computer simulations, we show that the noise robustness of CBAAM greatly exceeds that of CHAM. In particular, we find that CBAAM maintains high noise robustness independent of the resolution factor.

    I. INTRODUCTION

In recent years, artificial neural networks have been studied for flexible information processing. Associative memory is one subject of study in this field. Hopfield [1], [2] proposed Hopfield Associative Memory (HAM) as a model of auto-associative memory. HAM has some problems; one of them is that HAM cannot deal with multi-valued patterns.

Complex-valued Hopfield Associative Memory (CHAM) was proposed as an advanced model of HAM by Aizenberg et al. [3], Noest [4], [5] and Jankowski et al. [6]. Unlike HAM, CHAM can deal with multi-valued patterns. Thus, CHAM is often applied to storing gray-scale images (Aoki and Kosugi [7], Aoki et al. [8], Muezzinoglu et al. [9]). Some researchers have proposed advanced learning algorithms for CHAM in order to improve its storage capacity and noise robustness. Aoki et al. [8] and Lee [10] proposed the projection rule for CHAM. Lee [11] and Kobayashi et al. [12] proposed gradient descent learning algorithms. Muezzinoglu et al. [9] and Kobayashi [13] proposed learning algorithms based on solving systems of linear inequalities.

CHAM stores not only training patterns but also their rotated patterns. This is referred to as rotation invariance (Zemel et al. [14]). The rotated patterns are typical spurious patterns, and K − 1 rotated patterns exist for each training pattern in the case of K-level quantization. Mixture patterns are another type of typical spurious pattern [15]; they are combinations of stable rotated patterns. Hence, CHAM contains an enormous number of spurious patterns, and these rotated patterns reduce the noise robustness of CHAM. Avoiding stable rotated patterns is a promising way to improve the noise robustness [16]-[23].

Yozo Suzuki and Masaki Kobayashi are with the Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11, Takeda, Kofu, Yamanashi 400-8511, Japan (email: [email protected])

Kosko [24], [25] proposed Bidirectional Associative Memory (BAM). A BAM consists of two layers and realizes mutual association. Moreover, a BAM realizes high parallelism, because the neurons in the same layer are independent. BAM has been extended to Complex-valued BAM (CBAM) to process multi-valued patterns [26].

In this paper, we propose Complex-valued Bidirectional Auto-Associative Memory (CBAAM). A CBAAM is an auto-associative memory model whose structure is that of a BAM. Our proposed model consists of a visible layer and an invisible layer. The visible layer consists of complex-valued neurons and can process multi-valued patterns. The invisible layer consists of real-valued neurons and can reduce spurious memories such as rotated patterns. Therefore, our proposed model can process multi-valued patterns and has high noise robustness. In computer simulations, we show that the noise robustness of CBAAM greatly exceeds that of CHAM. In particular, we find that CBAAM maintains high noise robustness independent of the resolution factor.

The rest of the present paper is organized as follows: Sections II-IV briefly describe HAM, CHAM and CBAM; in Section V, we describe our proposed model, CBAAM; Section VI provides computer simulations; in Section VII, we discuss the computer simulation results; finally, we conclude in Section VIII.

II. HOPFIELD ASSOCIATIVE MEMORY

In this section, we briefly describe Hopfield Associative Memory (HAM). First, we define the neuron of HAM. The neuron takes one of two values, 1 or −1. Let a real number S and a function f(·) be the input value and the activation function, respectively. The state x of a neuron is defined as follows:

x = f(S),  (1)

f(S) = 1 (S ≥ 0); −1 (S < 0).  (2)

Next, let a real number w_ji be the connection weight from the neuron i to the neuron j. The connection weight needs to satisfy

w_ji = w_ij  (3)

for i ≠ j. This requirement ensures that HAM reaches a stable state.


    Fig. 1. Hopfield Associative Memory (number of neurons is 4)


    Fig. 2. Recall process of HAM. HAM recalls a training pattern.

Let x_i be the state of the neuron i. Then the weighted sum input I_j to the neuron j is given as follows:

I_j = Σ_{i≠j} w_ji x_i.  (4)

Finally, we describe the recall process of HAM. Since all neurons are connected to each other, it is hard to update multiple neurons simultaneously. So we have to update the neurons iteratively. The procedure of recall is given by the following steps.

1) An input pattern is given to HAM.
2) Update all neurons iteratively.
3) If HAM is unchanged, the recall process is completed. Otherwise go to 2).

The recall process is illustrated in Fig. 2. Suppose that, for a training pattern x, a pattern which is x with noise is given to HAM. All neurons are updated iteratively, and HAM is updated until it becomes stable. Finally, we obtain the training pattern x. In this way, the noise in the initially given pattern is removed.
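As an illustration only (the paper gives no code, and the Hebbian weights below are our own choice, since HAM's learning rule is not specified in this section), the neuron of Eqs. (1)-(2), the weighted sum of Eq. (4) and the recall steps can be sketched in Python/NumPy as follows:

    import numpy as np

    def hebbian_weights(patterns):
        # patterns: P x N array of +/-1 training patterns (illustrative rule).
        W = patterns.T @ patterns
        np.fill_diagonal(W, 0)       # no self-connections
        return W                     # symmetric, so w_ji = w_ij as in Eq. (3)

    def f(S):
        # activation function of Eq. (2): 1 if S >= 0, otherwise -1
        return np.where(S >= 0, 1, -1)

    def ham_recall(W, x, max_sweeps=100):
        x = x.copy()
        for _ in range(max_sweeps):
            x_new = x.copy()
            for j in range(len(x)):
                x_new[j] = f(W[j] @ x_new)   # weighted sum input I_j, Eq. (4)
            if np.array_equal(x_new, x):     # step 3): HAM is unchanged
                return x_new
            x = x_new
        return x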

III. COMPLEX-VALUED HOPFIELD ASSOCIATIVE MEMORY

    A. Complex-valued neurons

In this section, we describe Complex-valued Hopfield Associative Memory (CHAM), which is a complex-valued extension of HAM. The input and output signals of complex-valued neurons are complex numbers, and the state of a complex-valued neuron is K-valued on the complex unit circle.


    Fig. 3. States of neuron (K = 4)

K is the resolution factor and an integer greater than two. It divides the complex unit circle into K sectors. Let a real number θ_K and complex numbers s_k (k = 0, …, K−1) be as follows:

θ_K = π / K,  (5)

s_k = exp(√−1 (2k+1) θ_K).  (6)

The states of complex-valued neurons belong to the set {s_k}. Figure 3 shows the correspondence between the state number and the complex value in the case of K = 4.

A complex-valued neuron receives the weighted sum input from all the other neurons. Then it selects a new state for the weighted sum input according to the activation function. In the present work, we use the following activation function f(·):

f(x) = s_0      (0 ≤ arg(x) < 2θ_K)
       s_1      (2θ_K ≤ arg(x) < 4θ_K)
       s_2      (4θ_K ≤ arg(x) < 6θ_K)
       ...
       s_{K−1}  (2(K−1)θ_K ≤ arg(x) < 2Kθ_K),  (7)

where arg(x) is the argument of the complex number x. Therefore, f(x) maximizes Re(s̄_k x), where Re(x) and x̄ are the real part and the complex conjugate of x, respectively.
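To make Eqs. (5)-(7) concrete, the following Python/NumPy sketch (our own illustrative code, not taken from the paper) quantizes a weighted sum input into one of the K states:

    import numpy as np

    def csign(x, K):
        # Activation function of Eq. (7): find the sector containing arg(x)
        # and return that sector's representative state s_k of Eq. (6).
        theta_K = np.pi / K                        # Eq. (5)
        k = int(np.angle(x) % (2 * np.pi) // (2 * theta_K))
        k = min(k, K - 1)                          # guard the arg(x) = 2*pi boundary
        return np.exp(1j * (2 * k + 1) * theta_K)  # Eq. (6)

For example, with K = 4, csign(1 + 0.1j, 4) returns exp(√−1 π/4), that is, the state s_0.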

    B. Complex-valued Hopfield Associative Memory (CHAM)

Let a complex number w_ji be the connection weight from the neuron i to the neuron j. Then the connection weight w_ji needs to satisfy the following requirement:

w_ji = w̄_ij.  (8)

This requirement ensures that CHAM reaches a stable state.



    Fig. 4. Rotated patterns of a training pattern

Let x_i be the state of the neuron i. The weighted sum input I_j that the neuron j receives from all the other neurons is defined as follows:

I_j = Σ_i w_ji x_i.  (9)

We describe two typical learning algorithms for CHAM: the complex-valued Hebbian learning and the generalized inverse matrix learning. We denote the p-th training pattern vector by x^p = (x^p_1, x^p_2, …, x^p_N)^T (p = 1, 2, …, P), where P and N are the numbers of training patterns and neurons, respectively. The superscript T denotes the transpose. The complex-valued Hebbian learning is the simplest learning algorithm, but its storage capacity and noise robustness are extremely low. The connection weight w_ji is given by w_ji = Σ_p x^p_j x̄^p_i. The complex-valued Hebbian learning always satisfies the requirement (8). Next, we describe the generalized inverse matrix learning, which is an advanced learning algorithm with high storage capacity and noise robustness. We denote the connection weight matrix by W, whose (i, j) component is w_ij. Consider the N × P training matrix X = (x^1, x^2, …, x^P). Then the connection weight matrix W is given as follows:

W = X(X*X)^{-1}X*,  (10)

where the superscript * denotes the adjoint (conjugate transpose) matrix. The matrix (X*X)^{-1}X* is called the generalized inverse matrix of X.
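As an illustrative sketch (the function name is ours), Eq. (10) can be computed with NumPy as follows:

    import numpy as np

    def projection_weights(X):
        # X: N x P complex matrix whose columns are the training patterns x^p.
        # Returns W = X (X* X)^(-1) X* of Eq. (10), where * is the adjoint.
        Xh = X.conj().T
        return X @ np.linalg.inv(Xh @ X) @ Xh

Each stored pattern is then a fixed point of the update, since W x^p = x^p.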

    C. Rotated patterns in CHAM

Rotated patterns are closely related to the noise robustness of CHAM. For a training pattern x = (x_1, x_2, …, x_N), the patterns s_k x = (s_k x_1, s_k x_2, …, s_k x_N) (k = 1, 2, …, K−1) are referred to as its rotated patterns. Therefore, the rotated patterns are obtained by rotating the states of all neurons by 2kθ_K. In the case of K = 4 and N = 4, the training pattern of Fig. 4 has the three rotated patterns shown in Fig. 4.


    Fig. 5. Bidirectional Associative Memory

Suppose that a training pattern x is stable. Then the following equation holds for each j:

f(Σ_{i≠j} w_ji x_i) = x_j.  (11)

For a rotated pattern s_k x, the following equation holds:

f(Σ_{i≠j} w_ji s_k x_i) = s_k f(Σ_{i≠j} w_ji x_i) = s_k x_j.  (12)

This implies that the rotated patterns s_k x are also stable. Therefore, a training pattern has K−1 stable rotated patterns. When K is large, a training pattern x and its rotated pattern s_1 x are close to each other. This prevents CHAM from recalling the correct training patterns.

IV. COMPLEX-VALUED BIDIRECTIONAL ASSOCIATIVE MEMORY

    A. Structure

BAM consists of two layers, X-Layer and Y-Layer. There are connections between X-Layer and Y-Layer, but no connections within the same layer. Figure 5 shows the structure of BAM. Unlike in HAM, the two layers of BAM are independent. Thus, BAM has concurrency in the process of calculating the weighted sum inputs to the other layer.

If all the neurons are complex-valued neurons, the BAM is referred to as Complex-valued BAM (CBAM). In the rest of this section, we consider only CBAM. Let the j-th neurons of X-Layer and Y-Layer be x_j and y_j, respectively. We denote the state vectors of X-Layer and Y-Layer as follows:

x = (x_1, x_2, …, x_M)^T,  (13)
y = (y_1, y_2, …, y_N)^T,  (14)

where M and N are the numbers of neurons in X-Layer and Y-Layer, respectively. Let w^{YX}_{ji} and w^{XY}_{ij} be the connection weight from the neuron i of X-Layer to the neuron j of Y-Layer and the one from the neuron j of Y-Layer to the neuron i of X-Layer, respectively. CBAM requires w^{XY}_{ij} = w̄^{YX}_{ji} to ensure convergence.


    B. Learning Algorithm

Suppose that the training pattern pairs are given by (x^1, y^1), (x^2, y^2), …, (x^P, y^P), where P is the number of training pattern pairs. We define the training pattern matrices as follows:

X = (x^1, x^2, …, x^P),  (15)
Y = (y^1, y^2, …, y^P).  (16)

We denote the connection weight matrices from X-Layer to Y-Layer and from Y-Layer to X-Layer by W^{YX} and W^{XY}, respectively. The (i, j) components of W^{YX} and W^{XY} are w^{YX}_{ij} and w^{XY}_{ij}. Then, the requirement for CBAM to ensure convergence is W^{XY} = (W^{YX})*.

The complex-valued Hebbian learning rule is given by W^{YX} = YX*, i.e., w^{YX}_{ji} = Σ_p y^p_j x̄^p_i. The storage capacity and noise robustness of the complex-valued Hebbian learning rule are extremely low. Yano and Osana [27], [28] proposed the generalized inverse matrix learning for CBAM. Although this learning algorithm does not satisfy the requirement W^{XY} = (W^{YX})*, it works effectively. The generalized inverse matrix learning for CBAM is given as follows:

W^{YX} = Y(X*X)^{-1}X*,  (17)
W^{XY} = X(Y*Y)^{-1}Y*.  (18)

Then we can easily obtain the following equations:

W^{YX} x^p = y^p,  (19)
W^{XY} y^p = x^p.  (20)

Therefore, the training patterns are stable.
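As a sketch under our own naming (the paper gives no code), Eqs. (17)-(20) read in NumPy as follows:

    import numpy as np

    def cbam_projection_weights(X, Y):
        # X: M x P and Y: N x P complex matrices of training pattern pairs.
        Xh, Yh = X.conj().T, Y.conj().T
        W_YX = Y @ np.linalg.inv(Xh @ X) @ Xh   # Eq. (17)
        W_XY = X @ np.linalg.inv(Yh @ Y) @ Yh   # Eq. (18)
        return W_YX, W_XY

With these matrices, W_YX X = Y and W_XY Y = X, which is Eqs. (19) and (20) written for all pairs at once, so every training pair is a fixed point of the bidirectional update.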

C. Recall Process

Given a training pattern with noise to X-Layer, BAM removes the noise and recalls the corresponding pattern in Y-Layer. The procedure of recall is given by the following steps.

1) An input pattern is given to X-Layer.
2) Update Y-Layer.
3) Update X-Layer.
4) If X-Layer is unchanged, the recall process is completed. Otherwise go to 2).

The recall process is illustrated in Fig. 6. Suppose that, for a training pattern pair (x, y), a pattern which is x with noise is given to X-Layer. The initial state of Y-Layer is arbitrary. First, Y-Layer is updated. Subsequently, X-Layer is updated. X-Layer and Y-Layer are updated by turns until both layers become stable. Finally, we obtain the training pattern pair (x, y). In this way, the noise in the initially given pattern is removed.
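A minimal sketch of this alternating recall (our own illustrative code; csign is the activation sketch from Section III and numpy is imported as np):

    def cbam_recall(W_YX, W_XY, x_init, K, max_steps=100):
        # Steps 1)-4): update Y-Layer, then X-Layer, until X-Layer is unchanged.
        x = x_init.copy()
        for _ in range(max_steps):
            y = np.array([csign(s, K) for s in W_YX @ x])      # step 2)
            x_new = np.array([csign(s, K) for s in W_XY @ y])  # step 3)
            if np.allclose(x_new, x):                          # step 4)
                return x_new, y
            x = x_new
        return x, y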

V. COMPLEX-VALUED BIDIRECTIONAL AUTO-ASSOCIATIVE MEMORY

    A. Structure

We describe our proposed model, Complex-valued Bidirectional Auto-Associative Memory (CBAAM).


    Fig. 6. Recall process of BAM. BAM recalls a training pattern pair.


    Fig. 7. Complex-valued Bidirectional Auto-Associative Memory

Although the structure of CBAAM is that of a BAM, it works as an auto-associative memory. CBAAM uses X-Layer and Y-Layer as the visible layer and the invisible layer, respectively. We give an initial pattern to the visible layer and obtain the final pattern from the visible layer. Then, we can expect to obtain a training pattern without noise. We show CBAAM in Fig. 7. The visible layer consists of complex-valued neurons, so CBAAM can process multi-state patterns. The invisible layer consists of real-valued neurons. Therefore, CBAAM can avoid storing rotated patterns and is expected to improve the noise robustness. The connection weights are complex numbers. We can regard the real-valued neurons of the invisible layer as complex-valued neurons in the case of K = 2; the real-valued neurons then ignore the imaginary parts of their input signals.

    B. Learning Algorithm

Since CBAAM is an auto-associative memory, the training patterns are not pattern pairs. Suppose that the training patterns are given by x^1, x^2, …, x^P. We randomly generate the patterns of the invisible layer corresponding to the training patterns. We denote the generated patterns by y^1, y^2, …, y^P. Then we obtain the training pattern pairs (x^1, y^1), (x^2, y^2), …, (x^P, y^P) for CBAAM. Therefore, the training pattern matrices are as follows:

X = (x^1, x^2, …, x^P),  (21)
Y = (y^1, y^2, …, y^P).  (22)

We can use the complex-valued Hebbian learning rule for CBAAM. However, we have to compare CHAM and CBAAM.



Fig. 8. Recall process of CBAAM. CBAAM recalls a training pattern from the visible layer, ignoring the pattern in the invisible layer.

The storage capacity and noise robustness of the complex-valued Hebbian learning rule are extremely low in both cases. Therefore, the complex-valued Hebbian learning rule is not adequate for the comparison. Thus, we adopt the generalized inverse matrix learning, even though it does not ensure convergence theoretically. By the generalized inverse matrix learning, we obtain the following connection weight matrices:

W^{YX} = Y(X*X)^{-1}X*,  (23)
W^{XY} = X(Y^T Y)^{-1}Y^T.  (24)
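A sketch of this learning step (the ±1 choice for the randomly generated invisible-layer patterns is our reading of "real-valued neurons"; names are ours):

    import numpy as np

    def cbaam_train(X, n_hidden, rng=np.random.default_rng(0)):
        # X: M x P complex matrix of training patterns (columns).
        P = X.shape[1]
        Y = rng.choice([-1.0, 1.0], size=(n_hidden, P))   # random invisible-layer patterns
        Xh = X.conj().T
        W_YX = Y @ np.linalg.inv(Xh @ X) @ Xh              # Eq. (23)
        W_XY = X @ np.linalg.inv(Y.T @ Y) @ Y.T            # Eq. (24)
        return W_YX, W_XY, Y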

    C. Recall Process

Given a training pattern with noise to the visible layer, CBAAM removes the noise and recovers the original training pattern in the visible layer. The procedure of recall is given by the following steps.

1) An input pattern is given to the visible layer.
2) Update the invisible layer.
3) Update the visible layer.
4) If the visible layer is unchanged, the recall process is completed. Otherwise go to 2).

The recall process is illustrated in Fig. 8. Suppose that, for a training pattern x, a pattern which is x with noise is given to the visible layer. The initial state of the invisible layer is arbitrary. First, the invisible layer is updated. Subsequently, the visible layer is updated. The two layers are updated by turns until both become stable. Finally, we obtain the training pattern x in the visible layer.
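A sketch of this recall loop (illustrative; csign is the activation sketch from Section III and numpy is imported as np):

    def cbaam_recall(W_YX, W_XY, x_init, K, max_steps=100):
        # Invisible layer: real-valued neurons that ignore the imaginary part
        # of their weighted sum input and take the state sign(Re(input)).
        x = x_init.copy()
        for _ in range(max_steps):
            y = np.where((W_YX @ x).real >= 0, 1.0, -1.0)        # update invisible layer
            x_new = np.array([csign(s, K) for s in W_XY @ y])    # update visible layer
            if np.allclose(x_new, x):                            # visible layer unchanged
                return x_new
            x = x_new
        return x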

    D. Rotated patterns

We describe why the rotated patterns are not stable. Suppose that x is a training pattern and y is the corresponding pattern in the invisible layer. Then, the relations W^{YX}x = y and W^{XY}y = x hold. Moreover, suppose that the state of the visible layer is a rotated pattern e^{√−1 θ}x of x. Then, the invisible layer receives the following weighted sum input I^Y:

I^Y = W^{YX}(e^{√−1 θ}x)  (25)
    = e^{√−1 θ} W^{YX}x  (26)
    = e^{√−1 θ} y.  (27)

Since the invisible layer ignores the imaginary part, it receives (cos θ)y. If −π/2 < θ < π/2, the invisible layer recalls the pattern y. Thus, the visible layer recalls x. We find that CBAAM recalls the training pattern from the rotated pattern.
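This argument can be checked numerically with the sketches above (illustrative only; the sizes mirror the simulation setup of the next section):

    import numpy as np

    rng = np.random.default_rng(1)
    M, K, P = 100, 10, 10
    theta_K = np.pi / K
    states = np.exp(1j * (2 * np.arange(K) + 1) * theta_K)   # the K states s_k
    X = rng.choice(states, size=(M, P))                       # random training patterns
    W_YX, W_XY, _ = cbaam_train(X, n_hidden=100, rng=rng)
    rotated = np.exp(1j * 2 * theta_K) * X[:, 0]              # rotate all states by 2*theta_K
    recalled = cbaam_recall(W_YX, W_XY, rotated, K)
    print(np.allclose(recalled, X[:, 0]))                     # expected: True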

VI. COMPUTER SIMULATIONS

In this section, we confirm by computer simulation that the noise robustness of CBAAM exceeds that of CHAM. The simulations were carried out under the conditions M = N = 100, K = 10, 20 and 30, and P = 10, 30 and 50. The number of neurons of the invisible layer was thus 100. We added noise by the following procedure.

    1) L neurons were randomly selected.

2) The states of the selected L neurons were replaced with

    randomly generated states.

L is referred to as the noise level. A trial was regarded as successful if the model (CHAM or CBAAM) restored the original pattern.
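A sketch of this noise procedure (illustrative, drawing the replacement states from the set of Eq. (6)):

    import numpy as np

    def add_noise(x, L, K, rng=np.random.default_rng(2)):
        # Replace the states of L randomly selected neurons with random states.
        theta_K = np.pi / K
        states = np.exp(1j * (2 * np.arange(K) + 1) * theta_K)
        noisy = x.copy()
        idx = rng.choice(len(x), size=L, replace=False)
        noisy[idx] = rng.choice(states, size=L)
        return noisy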

In each condition, 100 training data sets were generated at random. For each training data set and each noise level L, 100 trials were carried out by the following procedure.

1) A training pattern was selected at random and given to the CHAM and the CBAAM.
2) Noise was added and the CHAM and the CBAAM performed recall.

Figure 9 shows the simulation results. The horizontal axis shows the noise level and the vertical axis shows the success rate. Here, success means that all of the noise is removed from the training pattern with L noisy neurons and the correct training pattern is restored. The success rate is obtained by counting how many of the 100 trials restored the training pattern with L noisy neurons. In the simulation results, we find that the noise robustness of CBAAM exceeded that of CHAM in every case examined.

    VII. DISCUSSION

We discuss the computer simulation results. Although CBAAM with the generalized inverse matrix learning does not always reach a stable state theoretically, all trials in our computer simulations converged. The noise robustness of both CHAM and CBAAM decreased as the number of training patterns P increased. As the resolution factor K increased, the noise robustness of CHAM decreased while that of CBAAM was unchanged. This implies that CHAM has many spurious patterns around the training patterns but CBAAM does not. Therefore, CBAAM exceeds CHAM especially when the resolution factor K is large.

Some problems remain to be overcome. The generalized inverse matrix learning for CBAAM does not satisfy the requirement w^{XY}_{ij} = w̄^{YX}_{ji}, so it is necessary to develop learning algorithms that satisfy this requirement. In addition, the patterns of the invisible layer were randomly generated; it is also necessary to generate suitable patterns for the invisible layer.

    VIII. CONCLUSION

In this paper, we proposed CBAAM to improve the noise robustness of complex-valued auto-associative memories. CBAAM consists of a complex-valued visible layer and a real-valued invisible layer. The complex-valued visible layer enables CBAAM to process multi-state data.


[Figure 9 consists of nine panels, one for each combination of K = 10, 20, 30 and P = 10, 30, 50. Each panel plots the success rate (0-100%) on the vertical axis against the noise level (0-80) on the horizontal axis, with one curve for CHAM and one for CBAAM.]

Fig. 9. Results of computer simulations: the horizontal axis and the vertical axis indicate the noise level and the success rate, respectively.

The improvement in noise robustness is due to the real-valued invisible layer. Stable rotated patterns deteriorate noise robustness, and the invisible layer can make the rotated patterns unstable. By the computer simulations, we found that the noise robustness of CBAAM is much better than that of CHAM. In addition, the noise robustness of CBAAM was determined by the number of training patterns, whereas the noise robustness of CHAM also decreased as the resolution factor increased. Especially in the case of a large resolution factor, CBAAM is effective for improving noise robustness.

Moreover, unlike CHAM, CBAAM has high concurrency owing to the independence of the layers. If the number of neurons in each layer increases, CBAAM can calculate the weighted sum inputs simultaneously and is expected to process more quickly than CHAM.

In the future, we have to develop a new learning algorithm that solves the following problems.

1) A new learning algorithm has to satisfy w^{XY}_{ij} = w̄^{YX}_{ji} in order to ensure that CBAAM always reaches a stable state.
2) A new learning algorithm has to provide suitable patterns for the invisible layer.

    REFERENCES

[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554-2558, 1982.
[2] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088-3092, 1984.
[3] I. N. Aizenberg, N. N. Aizenberg and J. Vandewalle, Multi-valued and universal binary neurons - theory, learning and application, Kluwer Academic Publishers, Boston, 2000.
[4] A. J. Noest, "Phasor neural networks," Neural Information Processing Systems, ed. D. Z. Anderson, pp. 584-591, AIP, New York, 1988.
[5] A. J. Noest, "Discrete-state phasor neural networks," Physical Review A, vol. 38, no. 4, pp. 2196-2199, 1988.
[6] S. Jankowski, A. Lozowski and J. M. Zurada, "Complex-valued multistate neural associative memory," IEEE Transactions on Neural Networks, vol. 7, no. 6, pp. 1491-1496, 1996.
[7] H. Aoki and Y. Kosugi, "An image storage system using complex-valued associative memory," Proceedings of the International Conference on Pattern Recognition, vol. 2, pp. 626-629, 2000.
[8] H. Aoki, M. R. Azimi-Sadjadi and Y. Kosugi, "Image association using a complex-valued associative memory model," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E83-A, pp. 1824-1832, 2000.
[9] M. K. Muezzinoglu, C. S. Guzelis and J. M. Zurada, "A new design method for the complex-valued multistate Hopfield associative memory," IEEE Transactions on Neural Networks, vol. 14, no. 4, pp. 891-899, 2003.
[10] D. L. Lee, "Improvements of complex-valued Hopfield associative memory by using generalized projection rules," IEEE Transactions on Neural Networks, vol. 17, no. 5, pp. 1341-1347, 2006.
[11] D. L. Lee, "Improving the capacity of complex-valued neural networks with a modified gradient descent learning rule," IEEE Transactions on Neural Networks, vol. 12, no. 2, pp. 439-443, 2001.
[12] M. Kobayashi, H. Yamada and M. Kitahara, "Noise robust gradient descent learning for complex-valued associative memory," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E94-A, no. 8, pp. 1756-1759, 2011.
[13] M. Kobayashi, "Pseudo-relaxation learning algorithm for complex-valued associative memory," International Journal of Neural Systems, vol. 18, no. 2, pp. 147-156, 2008.
[14] R. S. Zemel, C. K. I. Williams and M. C. Mozer, "Lending direction to neural networks," Neural Networks, vol. 8, no. 4, pp. 503-512, 1995.
[15] J. Hertz, A. Krogh and R. G. Palmer, Introduction to the theory of neural computation, Santa Fe Institute Series, vol. 1, Perseus Books, USA, 1991.
[16] M. Kitahara, M. Kobayashi and M. Hattori, "Chaotic rotor associative memory," Proceedings of International Symposium on Nonlinear Theory and its Applications, pp. 399-402, 2009.
[17] M. Kitahara and M. Kobayashi, "Fundamental abilities of rotor associative memory," Proceedings of 9th IEEE/ACIS International Conference on Computer and Information Science, pp. 497-502, 2010.
[18] M. Kitahara and M. Kobayashi, "Gradient descent learning for rotor associative memory," IEEJ Transactions on Electronics, Information and Systems, vol. 131, no. 1, pp. 116-121, 2011 (in Japanese).
[19] M. Kitahara, M. Kobayashi and M. Hattori, "Reducing spurious states by rotor associative memory," IEEJ Transactions on Electronics, Information and Systems, vol. 131, no. 1, pp. 109-115, 2011 (in Japanese).
[20] M. Kitahara and M. Kobayashi, "Complex-valued associative memory with strong thresholds," Proceedings of International Symposium on Nonlinear Theory and its Applications, pp. 362-365, 2011.
[21] M. Kitahara and M. Kobayashi, "Projection rules for complex-valued associative memory with large constant terms," Nonlinear Theory and Its Applications, vol. 3, no. 3, pp. 426-435, 2012.
[22] Y. Suzuki, M. Kitahara and M. Kobayashi, "Dynamic complex-valued associative memory with strong bias terms," Proceedings of International Conference on Neural Information Processing, pp. 509-518, 2011.
[23] Y. Suzuki, M. Kitahara and M. Kobayashi, "Rotor associative memory with a periodic activation function," Proceedings of IEEE World Congress on Computational Intelligence, pp. 720-727, 2012.
[24] B. Kosko, "Adaptive bidirectional associative memories," Applied Optics, vol. 26, no. 23, pp. 4947-4960, 1987.
[25] B. Kosko, "Bidirectional associative memories," IEEE Transactions on Systems, Man and Cybernetics, vol. 18, no. 1, pp. 49-60, 1988.
[26] D. L. Lee, "A multivalued bidirectional associative memory operating on a complex domain," Neural Networks, vol. 11, no. 9, pp. 1623-1635, 1998.
[27] Y. Yano and Y. Osana, "Chaotic complex-valued bidirectional associative memory," Proceedings of IEEE and INNS International Joint Conference on Neural Networks, pp. 3444-3449, 2009.
[28] Y. Yano and Y. Osana, "Chaotic complex-valued bidirectional associative memory - one-to-many association ability," Proceedings of International Symposium on Nonlinear Theory and its Applications, pp. 1285-1292, 2009.