Chaotic Neurodynamics for Autonomous Agents
Derek Harter, Member, and Robert Kozma, Senior Member

Division of Computer Science, University of Memphis, TN, USA

Abstract— Mesoscopic level neurodynamics study the collective dynamical behavior of neural populations. Such models are becoming increasingly important in understanding large-scale brain processes. Brains exhibit aperiodic oscillations with a much richer dynamical behavior than fixed-point and limit-cycle approximations allow. Here we present a discretized model inspired by Freeman's K-set mesoscopic level population model. We show that this version is capable of replicating the important principles of aperiodic/chaotic neurodynamics while being fast enough for use in real-time autonomous agent applications. This simplification of the K model provides many advantages, not only in efficiency but also in simplicity and in the ease with which its dynamical properties can be analyzed. We study the discrete version using a multi-layer, highly recurrent model of the neural architecture of perceptual brain areas. We use this architecture to develop example action selection mechanisms in an autonomous agent.

Index Terms— neurodynamics, chaos, dynamic memory, autonomous agent

I. INTRODUCTION

A. Connectionist Models of Spatio-Temporal Neural Dynamics

Recent biologically inspired control architectures for adaptive agents utilize complex spatial and temporal dynamics to model cognition. Clark [1] categorizes such biologically inspired architectures as third generation connectionist models. Third generation connectionist models are characterized by increasingly complex temporal and spatial dynamics. More complex temporal dynamics are due, in part, to the use of feedback and recurrent connections in the models. Complex spatial dynamics are seen in the variety of connectionist architectures produced, usually meant to capture some aspect of the architecture of biological brains. Such simulations are no longer strictly three layered, with input, hidden and output layers, but have many layers connected with specialized and complex relations. Examples of third generation connectionist models include the DARWIN series produced by Edelman's research associates [2] and the Distributed Adaptive Control (DAC) models of Verschure and Pfeifer [3], [4].

Neural networks with recurrent connections are widely used in the literature. Such architectures have the potential of producing complex behavior, including chaos. However, the operating range of these systems has been predominantly selected in the fixed-point regime; see e.g., [5], [6]. This research has contributed to the explosive growth of neural networks with powerful generalization capabilities. More recently, chaotic models of neural processing have been introduced by a number of researchers. Biologically plausible dynamical models of neural systems have been developed for example in [7], [8], [9], [10], [11]. Chaotic models have also been established in the field of computational neural networks [12], [13], [14], [15], [16], [17]. These works emphasized chaos control, which meant the suppression of chaos in the models [18], [19].

Some researchers in dynamical cognition and neurodynamics have discussed the role that aperiodic, chaotic-like dynamics may play in adaptive behavior [20], [21], [22], [23], [24]. Chaotic dynamics have been observed in the formation of perceptual states of the olfactory sense in rabbits [20]. Mathematical theories of the nonconvergent neurodynamics of perception and decision making have been proposed based on the principles of olfactory neurodynamics [25], [26]. Other researchers have analyzed activity patterns of primate and human cortex and reported on the dynamics of large-scale neural organization [27], [28], [29]. Hardware implementation of the proposed dynamical principles has been demonstrated in VLSI circuitry [30].

Skarda and Freeman [20] have speculated that chaos may play a fundamental role in the formation of perceptual meanings. Chaos provides the right blend of stability and flexibility needed by the system, with swift and robust transitions from one cognitive state to another using first-order phase transitions. According to Skarda and Freeman, the normal background activity of neural systems is a chaotic state. In the perceptual systems, input from the sensors perturbs the neuronal ensembles from the chaotic background. The result is that the system transitions into a new attractor that represents the meaning of the sensory input, given the context of the state of the organism and its environment. But the normal chaotic background state is not like noise. Noise cannot be easily stopped and started, whereas chaos can essentially switch immediately from one attractor to another. This type of dynamics may be a key property in the flexible production of behavior in biological organisms. Based on these neurophysiological findings, Freeman [31], [32], [20] has developed a model of the chaotic dynamics observed in the cortical olfactory system, called the K-sets. K-sets have been used successfully for dynamic memory designs and for robust classification and pattern recognition [21], [33], [24], [34].

Principe and colleagues have developed a discrete implementation of Freeman's K model using gamma processing elements followed by a nonlinearity. This approach has proved very efficient in transforming the K model to a discrete formalism that allows obtaining a solution without the need for Runge-Kutta integration. Based on this approach, efficient and accurate solutions have been obtained both on digital computers and in VLSI hardware [30], [35]. Discrete models of dynamical systems are widely used in the literature, and they provide an alternative to continuous time systems by solving the discretized equations by recursive iterations; see, e.g., [36], [37].

In the present work we introduce an alternative discrete approach for solving Freeman's K models, called the KA model. In the KA model we introduce a second-order time difference equation to describe the dynamics of the basic processing elements, called KA-0. From these we build higher-level discrete KA-I, KA-II, and KA-III models. We solve the difference equation directly, without the need for Runge-Kutta integration.

B. Introduction to K Sets

The K-set dynamics are designed to model the dynamics of the mean field (i.e., average) amplitude of a neural population. A nonlinear, second-order, ordinary differential equation was developed to model the dynamics of such a population. The parameters for this equation were derived by experimentation and observation of isolated neural populations of animals prepared through brain slicing techniques and chemical inhibition. The isolated populations were subjected to various levels of stimulation, and the resulting impulse response curves were replicated by the K-set equations.

The basic ODE of a neural population in the K-model is:

\alpha\beta \frac{d^2 a_i(t)}{dt^2} + (\alpha + \beta) \frac{d a_i(t)}{dt} + a_i(t) = net_i(t) \qquad (1)

In this equation a_i(t) is the activity level (mean field amplitude) of the ith neural population. α and β are time constants (derived from observing the responses of biological population dynamics to various amounts of stimulation). The left side of the equation expresses the intrinsic dynamics of the K unit (which captures a neural population's characteristic responses).

On the right side of the equation are factors that allow for external network input to the population, net_i(t). Stimulation between populations is governed by a nonlinear transfer function. The nonlinear transfer function used in the K-models is an asymmetric sigmoid that was again derived through measurements of the stimulation between biological neural populations:

net_i(t) = \sum_j w_{ij} o_j(t) \qquad (2)

o_j(t) = \varepsilon \left\{ 1 - \exp\!\left[ -\frac{e^{a_j(t)} - 1}{\varepsilon} \right] \right\} \qquad (3)

where ε is a parameter that indicates the level of arousal in the population (high values indicate a more aroused, motivated state), and a_j(t) is the activation of the jth population connected to the target unit. The asymmetry is an important property of the transfer function, as it means that excitatory input causes a destabilization of the dynamics of networks. This destabilization is essential in the collapse of aperiodic attractors observed in biological perceptual systems.
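As a concrete illustration of Eqs. (1)-(3), the response of a single K0 population to a brief stimulus can be obtained with an off-the-shelf Runge-Kutta solver. The sketch below is not the authors' code; the α and β values and the stimulus pulse are illustrative assumptions.

```python
# Sketch of integrating the K0 population ODE (Eq. 1) with SciPy's Runge-Kutta solver.
# The alpha/beta time constants and the stimulus pulse below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 0.22, 0.72        # assumed time constants (not taken from the paper)

def net_input(t):
    """External stimulation: a 0.3-amplitude pulse applied for the first 10 ms."""
    return 0.3 if t < 10.0 else 0.0

def k0_rhs(t, y):
    """Rewrite the 2nd-order ODE as two 1st-order equations: y = [a, da/dt]."""
    a, da = y
    d2a = (net_input(t) - a - (alpha + beta) * da) / (alpha * beta)
    return [da, d2a]

sol = solve_ivp(k0_rhs, (0.0, 100.0), [0.0, 0.0], max_step=0.1)   # RK45 by default
print("final activity:", sol.y[0, -1])   # decays back toward the baseline of 0
```

Converting the second-order equation to a first-order system in this way is what allows a standard solver such as RK45 to be applied.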

These equations model the dynamic behavior of the activity of isolated neural populations. In Freeman's K-model, these are the basic units that are connected together to form larger cooperating components. Two excitatory or two inhibitory units together form a K-I set. A K-I excitatory set paired with a K-I inhibitory set forms a K-II set of four units (see Figure 2). Freeman and associates used these neural population units to construct a model of the olfactory system that replicates the dynamics observed in EEG recordings. Three or more groups of K-II units connected together form a K-III unit. The K-III forms a multi-layer, highly recurrent neural population model of biological perceptual systems. The K-III model was originally used to replicate the chaotic dynamics observed in the olfactory bulb of rabbits and rats.

According to this view, the dynamics of the brain, as modeled by the K-III, is characterized by a high-dimensional chaotic attractor with multiple wings. The wings can be considered as memory traces formed by learning through the animal's life history. In the absence of sensory stimuli, the system is in a high-dimensional itinerant search mode, visiting various wings. In response to a given stimulus, the dynamics of the system is constrained to oscillations in one of the wings, which is identified with the stimulus. Once the input is removed, the system switches back to the high-dimensional, itinerant basal mode [38]. These results from the study and development of the K-models have led to the establishment of a dynamical theory of perception [39].

Recently, a new class of chaotic behavior, called chaotic itinerancy, has been introduced [40], [26], [38], which is related to the dynamical behavior of K-sets. Chaotic itinerancy is observed in high-dimensional dynamical systems with trajectories evolving through successions of "attractor ruins", with each attractor being destroyed as soon as it is reached, so that the system continuously remains unstable, as in a search mode. Results from the K-III model indicate that the complex, intermittent spatio-temporal oscillations in the K-III are possible manifestations of Tsuda's attractor ruins and chaotic itinerancy in a biologically plausible neural network model [24], [41].

C. Motivation of KA Modeling

The K sets are an attempt to model the aperiodic dynamics observed in cortical sensory systems, and to begin to explain how such dynamics contribute to the recognition and learning of sensory patterns in biological brains. Recent work on aperiodic dynamics in cortical systems [42], [43], [21], [44], [45], [46], [38] has begun to move beyond sensory systems to look at how such dynamics may also help us better understand the production of intelligent behavior in biological agents.

The motivation behind the KA model is to develop a simplification of the original K-sets that is still capable of performing the essential dynamics, but is simpler and faster and therefore more suitable for use in large-scale simulations of more complete autonomous agent architectures. The KA is to be used in developing autonomous agents that take advantage of aperiodic dynamics for perception, memory and action.

The introduction of the KA model has many possible advantages. The KA simplification uses a discrete difference equation to replicate the original K-set dynamics. The discrete difference equations are more mathematically tractable and analyzable. Besides mathematical analyzability, the KA units are much more efficient. We will show that our KA-based method performs much faster than approximation techniques which solve the ODE equations using, e.g., the Runge-Kutta method. This gain in efficiency allows correspondingly bigger and more complex models to be built, and greatly expands the types of problems that can be investigated using these highly recurrent neural models. The KA simplification offers units that we believe are at a very useful level of abstraction. The neural population model is more detailed and biologically plausible than standard ANN and even simpler cellular automata models of neurodynamics. The simplification closely replicates the dynamics of the original K-sets while being simpler and more efficient.

First we describe a version of the K-set model that we have developed for use in the creation of adaptive agent control architectures, referred to as KA-sets (K-sets for adaptive agents). We then present simulations using the KA-sets to model some of the important principles of chaotic neurodynamics. Finally we demonstrate the ability of the KA model to generate deterministic chaos and show how the KA units may be used to learn simple behaviors in an autonomous agent.

II. KA MODEL

A. Description

The purpose of the model presented here is to provide elementary units capable of the complex mesoscopic dynamics observed in the brains of biological organisms. These units model the dynamics of populations of neurons, rather than a single neuron. The modeled units presented here are also designed to be computationally efficient, so that they may be used to build real-time control architectures for autonomous agents.

At its heart the KA model uses a discrete time difference equation to replicate the dynamics of the original second-order ordinary differential equations of the K-sets. A unit in the KA model simulates the dynamics of a neuronal population. Each KA unit simulates an activity level, which represents an average population current density. The basic form of the difference equation can be given simply as shown in Eq. (4), which states that the current at time step t is a function of the current in the two previous time steps, as well as the external influence from the net input of units connected to the simulated unit.

a_i(t) = F\!\left( a_i(t-1),\, a_i(t-2),\, net_i(t-1) \right) \qquad (4)

The evolution equation of the KA unit can be described by three components that are combined to compute the simulated current at time t from the current and the rate of change of the current at times t-1 and t-2. These three influences on the simulated current are 1) a tendency to decay back to the baseline steady state, dec_i(t); 2) a tendency to maintain the momentum of the current in a particular direction, mom_i(t); and 3) the influences of external excitation or inhibition as input to the unit, net_i(t).

When isolated neural populations are externally stimulated away from their baseline steady state, once the external stimulation is removed the population experiences an exponential decay back to the baseline. In the KA model the tendency to return to the baseline steady state is modeled by a decay term. The resting, or baseline, state of the current in these models is defined as an activity level of 0. The effect of decay is described as:

dec_i(t) = -a_i(t) \times \alpha \qquad (5)

Here α is a parameter that indicates the rate of decay. Since the difference is proportional to the current, the effect is to cause the decay to be rapid when the activity of the unit is far from the baseline, while the rate of decay slows down as the activity approaches the steady state.

Neural populations exhibit a certain amount of momentum in the dynamics of their activity over time. In essence, once a population's current begins to move in a certain direction (positive or negative) it tends to keep moving in that direction even for some time after any influence pushing it has been removed. In the original K-set models, this was observed when stimulating an isolated brain-slice population. After stimulation ceased the population rapidly returned to its resting level. However, in the process of decaying back to the baseline it would undershoot and actually go below the baseline steady state for some time before returning to equilibrium. This slight oscillation in the neural populations is what necessitates the use of the second-order term of the differential equations, as only second-order equations are capable of capturing such oscillatory behavior. The momentum term is needed in the KA difference equation in order to capture this dynamic behavior of the population. Similarly, the momentum term is also second order, as it relies on two previous time steps in order to calculate its influence.

To simulate the momentum of a unit's activity, we need to use a function of the previous two time steps. This is necessary so that we can simulate a momentum based on the rate of change of the activity of the unit, as well as for the other reasons mentioned above. We first define the rate of change of the activity at time t, r_i(t). This is the difference of the activity of the unit at time t from the activity at the previous time step t-1. The rate of change at time t is thus:

r_i(t) = a_i(t) - a_i(t-1) \qquad (6)

With the rate at time t defined, we can describe the momentum as shown below:

mom_i(t) = r_i(t) \times \beta \qquad (7)

where β is a parameter that controls how much of an influence the momentum has on the dynamics of the model. β can be thought of as a percentage which indicates what portion of the momentum at the present time step should continue into the next time step.

The effect of the net input at time t is the same as in the K-model and is shown in Equation 8. This is the standard summation of the activity of the input units through a transfer function, multiplied by the connection strength. The output or transfer function o_j(t) of a KA unit is a function of the activity of the unit. The KA model uses the same asymmetric sigmoid transfer function and summation mechanism as the original K-sets. The transfer function is shown in Equation 9.

TABLE I
KA MODEL VARIABLES

Variable    Description
a_i(t)      Simulated activity of the ith population at time t
dec_i(t)    Difference at time t due to decay to baseline
mom_i(t)    Difference at time t due to momentum
r_i(t)      Rate of change of the activity at time t
o_i(t)      Transfer function of the activity of the ith unit at time t
net_i(t)    Difference at time t due to external net input

TABLE II
KA MODEL PARAMETERS

Parameter   Description                        Default
α           Rate of decay to baseline          0.1505
β           Rate of momentum                   0.0985
ε           Transfer function arousal level    5.0

The ε parameter is a scaling factor that indicates the level of arousal of the KA unit. Arousal in biological organisms is a function of history and experience, and can vary with things like surprise and familiarity with the current situation.

net_i(t) = \sum_j w_{ij} o_j(t) \qquad (8)

o_j(t) = \varepsilon \left\{ 1 - \exp\!\left[ -\frac{e^{a_j(t)} - 1}{\varepsilon} \right] \right\} \qquad (9)

The sum of the influences in Equations 5, 7 and 8 represents the total influence that will be applied to the activity of the unit in the next time step.

a_i(t) = a_i(t-1) + dec_i(t-1) + mom_i(t-1) + net_i(t-1) \qquad (10)

In other words, the activity of a KA unit is a function of the decay and momentum terms along with the influence from the net input of external units. We sum the values from these three influences and add them to the previous activity of the unit to determine the new activity of a KA unit. In Table I we summarize the variables used in the KA model. Table II provides a summary of the KA parameters and their values used in the experiments described in this paper. The decay and momentum rates were determined experimentally by fitting the dynamics of a single KA unit to those of the original K unit under various conditions of stimulation and inhibition. The determination of these time constants will be discussed next.
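To make the update rule of Eqs. (5)-(10) concrete, the following minimal sketch (not the authors' implementation) performs one KA update step over a small vector of units using the default parameters of Table II; the weight matrix and initial activities are illustrative assumptions.

```python
# Sketch of one KA update step (Eqs. 5-10) for a small network of units.
# Parameter values follow Table II; the weights and initial activities are illustrative.
import numpy as np

ALPHA, BETA, EPSILON = 0.1505, 0.0985, 5.0   # decay, momentum, arousal (Table II)

def transfer(a):
    """Asymmetric sigmoid of Eq. (9)."""
    return EPSILON * (1.0 - np.exp(-(np.exp(a) - 1.0) / EPSILON))

def ka_step(a_prev, a_prev2, W):
    """Return a(t) given a(t-1), a(t-2) and the connection matrix W (Eq. 10)."""
    dec = -a_prev * ALPHA                      # Eq. (5): decay toward the baseline of 0
    mom = (a_prev - a_prev2) * BETA            # Eqs. (6)-(7): momentum of the recent change
    net = W @ transfer(a_prev)                 # Eq. (8): weighted, nonlinear net input
    return a_prev + dec + mom + net

# Example: four units with small random coupling, simulated for 100 steps.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))
a_prev2, a_prev = np.zeros(4), 0.1 * rng.standard_normal(4)
for _ in range(100):
    a_prev2, a_prev = a_prev, ka_step(a_prev, a_prev2, W)
print(a_prev)
```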

B. Determination of Momentum and Decay Time Constants

We use an empirical method to determine the parameters of the KA model that allow it to closely approximate the original K model dynamics. Keep in mind, however, that the α and β parameters of the two models represent different time constants, and as such will be set to different values in the two models. We take as our target the dynamics of a K model unit, and subject it to varying intensities of external stimulation and inhibition, for varying lengths of time. We then find the decay, momentum and other parameters that allow the KA responses to best approximate the original K model, using a least-squares fit to measure the difference.

Therefore we subjected a K unit to levels of stimulation ranging from -0.49 to 0.5 in 0.01 increments (intensity = [-0.49:0.01:0.5]). We also varied the time each stimulation level was applied to the K unit from 1 to 50 ms in 1 ms increments (time duration = [1:1:50]). We ran the simulation of the K unit for 500 ms in Matlab 6.5 using Runge-Kutta to solve the ODE, and captured its response to the 5000 different combinations of intensity and time duration. These 5000 time series represented the target dynamics we tuned the KA model to replicate.

With the 5000 samples of the K unit dynamics, we then exhaustively searched the decay (α) and momentum (β) parameter space of the KA model to find a combination that allows a KA0 unit to replicate the dynamics of these 5000 samples of the K0 unit. We applied the same 5000 combinations of intensity and time duration of stimulation to a KA unit for the various α and β values. Through a systematic search we could reduce the difference in the dynamics to an arbitrarily small amount. We used a hill-climbing algorithm in order to zero in on the values of the parameters that provided very good approximations of the original dynamics. We found that a decay rate of α = 0.1505 and a momentum of β = 0.0985 produced a good fit of the KA to the K model.

The parameter space defined by the momentum and decay parameters ends up forming a smooth function in the KA model, with only one global minimum. This makes it easy to find the appropriate parameters to fit the KA single unit dynamics to the original K0 unit. For example, in Figure 1 we show a part of the decay and momentum parameter space of the KA model. Here we plot decay along the X axis with values ranging from 0.1 to 0.2, and momentum is plotted on the Y axis from 0 to 0.5. Color is used to indicate the error in the fit at each point in the α/β parameter space. We can see visually that the space is smooth, and there is a global minimum in the error in the area of α = 0.15, β = 0.1. The global minimum depicted in this figure is the place where the momentum and decay parameters of the KA model yielded the closest results to the dynamics of a K0 unit.

Table II summarizes the parameters of the KA model discovered by the parameter fitting process and used for the simulations and experiments described in the rest of this paper. The arousal level parameter ε is only significant when we have networks of units connected together; it does not affect the dynamics of a single unit in isolation. Since we are using the same asymmetric sigmoidal transfer function in both the K and KA models, we have used a standard arousal level of 5.0 in the experiments described next. Future work is needed to explore the uses of the arousal level in models of cognition, and its possible relation to more global and slow-changing dynamics such as neuro-chemical processes that affect the dynamics of populations in brains.
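The fitting procedure described above can be sketched as a simple parameter search. The toy example below is not the authors' fitting code: it searches the (α, β) space of a single KA0 unit against a synthetic damped-oscillation target that stands in for the K0 impulse responses; the target waveform, stimulus, grid ranges and error measure are illustrative assumptions.

```python
# Sketch of fitting the KA decay (alpha) and momentum (beta) parameters by
# minimizing the summed squared error against a target response. The target
# here is a synthetic damped oscillation used as a stand-in for the K0 data.
import numpy as np

def ka_response(alpha, beta, stim, n_steps=200):
    """Single KA0 unit driven by an external input sequence `stim`."""
    a = np.zeros(n_steps)
    for t in range(2, n_steps):
        dec = -a[t-1] * alpha
        mom = (a[t-1] - a[t-2]) * beta
        a[t] = a[t-1] + dec + mom + stim[t-1]
    return a

stim = np.zeros(200)
stim[2:12] = 0.3                                      # a brief stimulation pulse
t = np.arange(200)
target = 0.5 * np.exp(-0.03 * t) * np.sin(0.25 * t)   # illustrative target response

best = (None, np.inf)
for alpha in np.arange(0.05, 0.30, 0.005):            # coarse grid search over alpha/beta
    for beta in np.arange(0.0, 0.30, 0.005):
        err = np.sum((ka_response(alpha, beta, stim) - target) ** 2)
        if err < best[1]:
            best = ((alpha, beta), err)
print("best (alpha, beta):", best[0], "sse:", best[1])
```

In practice the grid search could be refined with the hill-climbing step described above once the neighborhood of the minimum is located.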

C. Learning Mechanisms

In this section we discuss in more detail the learning mechanisms used in the KA multi-layer recurrent neurodynamical models.

Fig. 1. A portion of the α/β parameter space of the KA model. Decay (α), plotted along the X axis, varies here from 0.1 to 0.2. Momentum (β), plotted along the Y axis, varies from 0.0 to 0.5. Intensity indicates the calculated error (using the sum squared difference) between the dynamics of the KA0 and K0 unit over the 5000 sample time series. A global minimum is present in the area of α = 0.15, β = 0.1.

In the simulations with autonomous agent architectures we will be using two types of unsupervised learning: Hebbian synaptic weight modification and habituation of unreinforced stimuli.

1) Hebbian Mechanisms in the KA Population Model: The basic idea behind Hebbian mechanisms is that when the activities of two connected neural units co-occur, they have some statistical relationship to one another. We can exploit this relationship by increasing the likelihood that, in the future, if one of the units is active the other becomes active. This can be done by increasing the strength of the weight between the units. In other words, units that tend to fire together should have the weights between them strengthened so that they are more likely to fire together in the future. The converse of this rule is also true: if the units do not tend to fire together, then the strength of any connection between them should weaken over time. This simple mechanism defines a type of competitive process among the links between neural units.

Hebbian learning is a simple concept, but it is very powerful in shaping the weight space of a neural model to process stimuli. Hebbian mechanisms allow the models to capture statistical regularities in the stimulation patterns that occur in the environment of the organism.

In the simplest formal definition of the Hebbian learning mechanism we consider a pre-synaptic node A and a post-synaptic node B connected by a link with weight w_BA. The activity or firing rates of the nodes are represented by the values a_A and a_B respectively. For simple models where the activity of the units a_i is a measure of the mean firing rate of a neuron, we can correlate the activity between the units to determine the difference we wish to apply to the weight as [47]:

\Delta w_{BA} = \varepsilon\, a_A a_B \qquad (11)

Here the proposed change to the weight, Δw_BA, is simply a function of the product of the activities of the pre- and post-synaptic nodes times a learning rate parameter ε.

In mean field models where the resting or normal level of the unit is not necessarily 0, we cannot simply use the activity level of the unit. Instead we must look at the firing rate of the unit over some time period. We can determine whether a unit is more or less active by comparing its current firing rate to what its normal or average firing rate usually is. The slightly more complex Hebbian rule thus becomes:

\Delta w_{BA} = \varepsilon\, (a_A - \bar{a}_A)(a_B - \bar{a}_B) \qquad (12)

Here ā_A and ā_B represent the average firing rates of the pre- and post-synaptic nodes respectively. In these firing rate models, notice that the current firing rate can be lower than the average, which can lead to negative, or decreasing, weight changes. This may or may not be what is wanted, depending on the type of model being experimented with. For example, it may or may not make sense to strengthen the weight between two units when they both have less than average activity at the same time.

The K family of models, including the KA model, are neural population models, not models of single neurons. Therefore the concept of the firing rate of a node is not relevant. The activity level in KA units represents an average current density for the population. However, unlike the simple case, this average population current can change rapidly, since the units are oscillatory in nature, which makes a simple Hebbian equation inadequate for our use. We instead need to develop a concept of the activity of a unit over some time window. In the KA models we use the root mean square to calculate the activity over a time window:

rms(i, a, b) = \sqrt{ \frac{1}{b - a} \sum_{t=a}^{b} a_i(t)^2 } \qquad (13)

This states that the root mean square intensity of unit i over the time interval a to b is given by taking the sum of the squares of the activity over the time interval, dividing it by the length of the interval, and taking the square root. The root mean square is a better measure of the activity of a unit over a time interval than simply taking the average of the unit's activity. The root mean square is invariant with respect to the average activity level, which makes comparing the rms of two units more plausible.

Given the definition of the rms to calculate the activity of a unit over an interval, we can define the Hebbian equation for the KA model. Normal Hebbian rules compare the activity of a unit to its average activity. We instead compare the average activity of a unit over an interval to the average activity of some subset population of units. This is necessary as determining a base or average activity is not a straightforward proposition in the K family of neural population models. We therefore determine how the activity of a unit is varying by comparing it to the current average activity of a population.

The Hebbian equation used by the KA experiments is given by:

\Delta w_{BA} = \varepsilon \times \left( rms(A, a, b) - rms(sea, a, b) \right) \times \left( rms(B, a, b) - rms(sea, a, b) \right) \qquad (14)

where rms(sea, a, b) is a spatial ensemble average of some population of units that the units A and B belong to. The learning rate parameter, ε, is determined experimentally for each simulation by tuning it to give optimum performance as defined by the simulation. In a similar manner, the time window used is also determined experimentally for each problem. The normal time window is taken from the current time to some time in the past, which can vary from 50 to 250 time steps.
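A minimal sketch of this rms-based Hebbian update is given below; it is not the authors' code, and the window length, learning rate and activity traces are illustrative assumptions.

```python
# Sketch of the rms-based Hebbian update of Eqs. (13)-(14) for one plastic link.
# Window length, learning rate and the recorded activity traces are illustrative.
import numpy as np

def rms(trace, a, b):
    """Root mean square of an activity trace over the window [a, b) (Eq. 13)."""
    window = np.asarray(trace[a:b])
    return np.sqrt(np.mean(window ** 2))

def hebbian_delta(trace_A, trace_B, trace_sea, a, b, eps=0.01):
    """Weight change for the link A -> B relative to the spatial ensemble average (Eq. 14)."""
    base = rms(trace_sea, a, b)
    return eps * (rms(trace_A, a, b) - base) * (rms(trace_B, a, b) - base)

# Example: 250-step activity traces for units A and B and their layer's ensemble average.
t = np.arange(250)
trace_A = 0.4 * np.sin(0.3 * t)            # a strongly active unit
trace_B = 0.35 * np.sin(0.3 * t + 0.5)     # another strongly active unit
trace_sea = 0.2 * np.sin(0.3 * t + 1.0)    # weaker spatial ensemble average
print(hebbian_delta(trace_A, trace_B, trace_sea, a=0, b=250))   # positive: strengthen w_BA
```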

2) Habituation in the KA Model Experiments: The second learning mechanism used in the following experiments is habituation. Habituation is defined as a diminished response to sensory stimuli that are not reinforced. Sensory signals that are repeatedly encountered but never co-occur with appetitive or aversive signals become diminished in the organism. This phenomenon is very familiar to people as, for example, we quickly "tune out" background noise such as an air-conditioner in our environment. Habituation is therefore a type of cumulative, rule-based process, whereby unreinforced stimuli are iteratively tuned out and ignored by sensory systems.

In the KA models, Hebbian learning only occurs when reinforcement signals are generated in the organism. In the following experiments, reinforcement signals are usually hard-coded in the agent such that when it bumps into objects, pain signals are generated which signal opportunities for Hebbian modification. When a reinforcement signal is not currently being produced by the organism, habituation of stimuli will be performed.

Habituation of stimuli is performed in KA by lessening the strength of connections to neural units that are more active than an average population activity during times of non-reinforcement. The basic weight modification for habituation is defined as:

\Delta w_{BA} = -\eta \left| rms(B, a, b) - rms(sea, a, b) \right| \qquad (15)

Here the habituation change to the weight of a link from A to B, Δw_BA, is a function of how far unit B's activity is above or below a spatial ensemble average of some subpopulation of units, times a habituation decay constant η. Again, η is determined experimentally for each simulation by tuning it for optimum performance. For cases where we only want to habituate nodes whose activity is higher than the average (not those that are lower), we can use Δw_BA = 0 if the rms of B is lower than the spatial ensemble average.
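Continuing the sketch above, a hypothetical habituation step following Eq. (15) could look like this; the decay constant and the one-sided variant flag are illustrative assumptions.

```python
# Sketch of the habituation update of Eq. (15); eta and the one-sided option are illustrative.
import numpy as np

def rms(trace, a, b):
    return np.sqrt(np.mean(np.asarray(trace[a:b]) ** 2))

def habituation_delta(trace_B, trace_sea, a, b, eta=0.005, only_above_average=True):
    """Weaken links into unit B in proportion to its deviation from the ensemble average."""
    diff = rms(trace_B, a, b) - rms(trace_sea, a, b)
    if only_above_average and diff < 0.0:
        return 0.0                      # optionally skip units below the ensemble average
    return -eta * abs(diff)

t = np.arange(250)
print(habituation_delta(0.5 * np.sin(0.3 * t), 0.2 * np.sin(0.3 * t), 0, 250))  # negative
```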

Hebbian modification and habituation are usually performed on all plastic connections in the simulation. That is to say, some connections in a simulation are not variable, and therefore do not learn and change in response to environmental experiences. The internal links within a KA-II are examples of non-plastic connections in simulations using the KA model. Plastic connections are usually those links between units within a layer of, for example, a KA-III.

Fig. 2. The KA hierarchy. The KA-I is a combination of two excitatory or two inhibitory units connected with mutual feedback. The KA-II is a combination of a KA-Ie and a KA-Ii, connected with various weights between them. The KA-II level allows for both positive and negative feedback, which can create oscillatory behavior. The KA-III level is a collection of three (or more) KA-II connected with various feedforward and feedback connections. When the three layers of the KA-III are nonhomogeneous, the resulting dynamics of the KA-III system is chaotic.

III. KA MODEL CHARACTERISTICS

A. Oscillatory Dynamics and KA-II Sets

Freeman [21] postulates ten building blocks of neurodynamics that help to explain how neural populations create the chaotic dynamics of intentionality. The first three principles deal with the formation of non-zero steady-state and oscillatory dynamics through various types of feedback in neural networks with excitatory and inhibitory connections. Figure 2 shows a particular configuration, called the KA-II set, with two excitatory and two inhibitory units connected together. Such a configuration provides excitatory-excitatory, inhibitory-inhibitory and excitatory-inhibitory feedback simultaneously. This is one of the simplest configurations with all possible connections between excitatory and inhibitory units, and it will be used as the basic model of a mixed excitatory-inhibitory population in this paper. In Figure 3 we compare the behavior of an original K-II with a KA-II. In this comparison all the connections between units are set to the same values in the K and KA models. We can see that the four units in the K and KA models maintain similar activity levels. Moreover, each model reaches an approximate steady state after a transient time of around 10 ms.

The KA-II configuration, for the vast majority of parameter settings, produces damped or sustained oscillatory behavior. Though some regimes of chaotic behavior may exist in the simple KA-II configuration (see, e.g., [48]), we restrict ourselves to working with oscillatory KA-II sets. With such a configuration the KA model is capable of producing oscillatory behavior of varying frequencies depending on the values of the ten internal weights. Table III gives the parameters and some major properties of three different KA-II sets.

[Figure 3: two panels of simulated current versus time (msec), one for the KA-II model simulation and one for the original K-II model simulation, each with ee=1.1, ei=0.5, ie=1.0, ii=1.8 and traces for units e1, i1, e2, i2.]

Fig. 3. Comparison of K-II and KA-II (all internal parameters equal).

TABLE III
KA-II UNITS USED IN KA-III WEIGHT SCALING SIMULATION

Group   wee    wei    wie    wii    mx      σx     f0
1       0.94   1.41   0.80   1.33   -0.25   0.14   31
2       1.05   1.40   0.44   0.05   -0.12   0.30   27
3       1.29   1.27   0.65   1.19   -0.08   0.25   25

w_ee, w_ei, w_ie, and w_ii are the connection weights, m_x is the mean and σ_x is the standard deviation of the simulated current over a given 10 second window (excluding initial transients). The frequency (f_0) is determined based on the main peak of the power spectrum of the simulated current.

Although there are 10 weights, we reduce this to 4 parameters by setting the weights between like pair types to be the same. For example, the weights between excitatory units (from E1 to E2 and from E2 to E1, correspondingly) are set to be equal and are shown by the value w_ee in the table. Similarly for the 2 inhibitory-inhibitory (w_ii), 3 excitatory-inhibitory (w_ei) and 3 inhibitory-excitatory (w_ie) weights.

Figure 4A shows a time series of the first excitatory unit from the first KA-II group in Table III. In Figure 4B we display a state space representation for the same series with a time delay, t vs. t+5. We see a stable limit cycle oscillation with frequency 31 Hz after the initial transients die out. The three groups shown in the table are naturally oscillatory, that is to say that they oscillate without external stimulation, as shown in the figure. The mean (m_x) and standard deviation (σ_x) shown in Table III are measures of the behavior of the time series after initial transients have been discarded (1000 in this case). The dominant frequency (f_0) is the frequency at which the KA-II groups oscillate (in simulated cycles per second). The three parameter groups selected in Table III are the results of an extensive parameter search aimed at identifying KA-II sets with strong limit cycle oscillations at various differing frequencies.


Fig. 4. An example of the oscillatory behavior that can be generated by a KA-II configuration. In A) we show a time series of the first excitatory unit from a KA-II configuration. In B) we display a delayed state space plot of the same KA-II at t vs. t+5.

In this approach we generated 500 KA-II groups at random, with different w_ee, w_ei, w_ie and w_ii parameters uniformly distributed in the range [-2.0, 2.0]. From these candidates we selected three such that they 1) showed sustained oscillations and 2) oscillated at different and incommensurate characteristic frequencies.
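This random search could be sketched as follows. The example is not the authors' code: the KA-II wiring used here (a fully connected, sign-constrained four-unit group), the oscillation test and the selection thresholds are illustrative assumptions.

```python
# Sketch of randomly sampling KA-II parameter sets and keeping those that
# sustain oscillations; the four-unit wiring and selection thresholds are illustrative.
import numpy as np

ALPHA, BETA, EPSILON = 0.1505, 0.0985, 5.0

def transfer(a):
    # clip the argument to avoid floating-point overflow in exp for large activities
    return EPSILON * (1.0 - np.exp(-(np.exp(np.clip(a, -50.0, 50.0)) - 1.0) / EPSILON))

def ka2_weights(wee, wei, wie, wii):
    """Units ordered e1, e2, i1, i2; inhibitory output enters with a negative sign."""
    return np.array([[0,   wee, -wie, -wie],
                     [wee, 0,   -wie, -wie],
                     [wei, wei,  0,   -wii],
                     [wei, wei, -wii,  0  ]])

def simulate(W, n_steps=2000):
    a_prev2, a_prev = np.zeros(4), np.full(4, 0.1)
    trace = []
    for _ in range(n_steps):
        a = a_prev - ALPHA * a_prev + BETA * (a_prev - a_prev2) + W @ transfer(a_prev)
        a_prev2, a_prev = a_prev, a
        trace.append(a[0])                        # record the first excitatory unit
    return np.array(trace)

rng = np.random.default_rng(0)
oscillators = []
for _ in range(500):
    params = rng.uniform(-2.0, 2.0, size=4)       # wee, wei, wie, wii
    x = simulate(ka2_weights(*params))[1000:]     # discard initial transients
    if np.isfinite(x).all() and x.std() > 0.05:   # crude test for sustained oscillation
        freq_bin = np.abs(np.fft.rfft(x - x.mean())).argmax()   # dominant FFT bin
        oscillators.append((params, x.std(), freq_bin))
print(len(oscillators), "candidate oscillatory KA-II sets found")
```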

In the next section, we link the three KA-II sets into a network and show that under certain conditions, the incommensurate frequencies compete with each other but none of them wins. As a result, a complex aperiodic oscillation emerges.

B. Chaotic Dynamics in KA-III Sets

Freeman's [21] fourth building block of neurodynamics concerns the formation of chaotic background activity:

    The genesis of chaos as background activity by combined negative and positive feedback among three or more mixed excitatory-inhibitory populations.

We demonstrate the production of deterministic chaos by the KA model using the mixed excitatory-inhibitory KA-II populations described in the previous section (Table III). The KA-III set, shown in Figure 2, right, is an example of a configuration of three KA-II groups connected together in order to produce chaotic dynamics.


Fig. 5. An example of a chaotic time series generated by a KA-III configuration. We show the time series of the E1 unit of layer 1 (top), layer 2 (middle) and layer 3 (bottom).


Fig. 6. A state space plot of the activity of the E1 unit of group 1. We plot the activity of the unit at time t vs. the activity at time t + 12.

In these simulations, the excitatory units from higher layers have projections to deeper layers. Recurrent back-projections are also present from lower layers back up to higher layers. These back-projections may have delays associated with them, which reflects the delayed nature of these back-projections in biological neural tissue.

Figures 5 and 6 display a time series and a state space representation from a simulation generated using this KA-III configuration. A calculation of the first Lyapunov exponent of the time series using Wolf's method [49] shows a strictly positive exponent of around 0.1 for this series, indicating strong chaotic behavior.

We now demonstrate the effects of changing the weights between the groups on the calculated Lyapunov exponent. In this simulation, the projection weights between layers were varied from 0% to 100% of their original connection strengths, in 5% increments.

Fig. 7. Effects of scaling the excitatory weights between the KA-II layers of the KA-III on the calculated Lyapunov exponent. The intergroup excitatory weights are scaled from 0.0 to 1.0 in 0.05 increments. We show the average calculated Lyapunov exponent for 10 experiments at each scaling factor, along with an indication of the variation (error bars). Above the figure are examples of time series and state spaces generated by the KA-III at weight scaling factors of 0.0, 0.6 and 1.0, from left to right respectively.

Ten simulated time series were generated for each weight setting, and the Lyapunov exponent was calculated on the resulting time series. Figure 7, bottom, plots the effects of scaling the projection weights on the Lyapunov exponent for this KA-III. When the projection weights are reduced to 0% of their original value, the KA-II layers become isolated and no longer affect one another. In this case we observe the damped oscillatory behavior of the KA-II in layer 1 (Figure 7, top left, where we show both the time series and a state space plot of the delayed activity of the unit against itself). At a 100% scaling factor we show the dynamics of the KA-III where the measured Lyapunov exponent is close to 0.06 (Figure 7, top right). In general, as the projection weights between layers are increased, the behavior of the KA-III becomes incrementally more chaotic. Even very small projecting weights between layers are enough to push the damped oscillatory dynamics of a KA-II into a sustained quasi-periodic orbit. Some initial conditions at some scaling factors, however, produce stronger chaotic interactions. For example, at a scaling factor of 0.6 we show one example with a measured Lyapunov exponent of 0.15 (Figure 7, top middle).
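The weight-scaling experiment can be sketched in code. The example below is not the authors' implementation: it wires three KA-II groups (using the Table III parameter sets with an assumed four-unit wiring) into a KA-III-like network, scales random inter-group projections, and estimates the largest Lyapunov exponent with a simple two-trajectory (Benettin-style) divergence method rather than the Wolf algorithm used in the paper.

```python
# Sketch of the weight-scaling sweep with a two-trajectory Lyapunov estimate.
# The inter-group wiring, projection strengths and estimator are illustrative
# and differ from the Wolf-method analysis reported in the paper.
import numpy as np

ALPHA, BETA, EPSILON = 0.1505, 0.0985, 5.0
GROUPS = [(0.94, 1.41, 0.80, 1.33),   # KA-II parameter sets from Table III
          (1.05, 1.40, 0.44, 0.05),
          (1.29, 1.27, 0.65, 1.19)]

def transfer(a):
    return EPSILON * (1.0 - np.exp(-(np.exp(np.clip(a, -50, 50)) - 1.0) / EPSILON))

def step(a1, a2, W):
    """One KA update: return (a(t), a(t-1)) given a(t-1)=a1 and a(t-2)=a2 (Eq. 10)."""
    return a1 - ALPHA * a1 + BETA * (a1 - a2) + W @ transfer(a1), a1

def build_weights(scale, rng):
    """Block-diagonal KA-II groups plus scaled random inter-group projections."""
    W = np.zeros((12, 12))
    for g, (wee, wei, wie, wii) in enumerate(GROUPS):
        W[4*g:4*g+4, 4*g:4*g+4] = [[0,   wee, -wie, -wie],
                                   [wee, 0,   -wie, -wie],
                                   [wei, wei,  0,   -wii],
                                   [wei, wei, -wii,  0  ]]
    inter = 0.2 * rng.random((12, 12))
    for g in range(3):
        inter[4*g:4*g+4, 4*g:4*g+4] = 0.0         # keep intra-group weights fixed
    return W + scale * inter

def lyapunov(W, n_steps=4000, d0=1e-8, seed=1):
    rng = np.random.default_rng(seed)
    a1 = 0.1 * rng.standard_normal(12); a2 = np.zeros(12)
    b1 = a1 + d0 * rng.standard_normal(12); b2 = a2.copy()
    total = 0.0
    for _ in range(n_steps):
        a1, a2 = step(a1, a2, W)
        b1, b2 = step(b1, b2, W)
        d = np.linalg.norm(np.concatenate([b1 - a1, b2 - a2]))
        total += np.log(d / d0)
        b1 = a1 + (b1 - a1) * (d0 / d)            # renormalize the perturbed trajectory
        b2 = a2 + (b2 - a2) * (d0 / d)
    return total / n_steps                        # exponent per discrete step

rng = np.random.default_rng(0)
for scale in np.arange(0.0, 1.01, 0.25):
    print(f"scale {scale:.2f}: lambda_1 ~ {lyapunov(build_weights(scale, rng)):+.4f}")
```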

C. Comparison of Power Spectra of KA Models and Rat EEG Signals

In the original K model, the purpose of the K-III set was to model the chaotic dynamics observed in rat and rabbit olfactory systems [32], [45], [50]. The K-III set was not only capable of producing time series similar to those observed in the olfactory systems under varying conditions of stimulation and arousal, but also of replicating major power spectrum characteristics of these time series.

The power spectrum is a measure of the power of a particular signal (or time series, as for example that obtained from an EEG recording of a biological brain) at varying frequencies.

[Figure 8: power spectra (power vs. frequency in Hz) of the rat olfactory bulb EEG and of the KA-III model (Group 3).]

Fig. 8. The power spectrum of a rat olfactory bulb EEG is simulated with the KA-III model. The calculated "1/f" slope of the EEG and model is approximately -2.0. Rat OB data from [51].

The typical power spectrum of a rat EEG (see Figure 8, top) shows a central peak in the 30-40 Hz range, and a 1/f^α form of the slope. The measured slope of the power spectrum varies around -2.0. 1/f^α type power spectra are abundant in nature and are characteristic of critical states, between order and randomness, at which chaotic processes operate. The atypical part of the experimental EEG spectrum is the central peak, indicating stronger oscillatory behavior in the γ frequencies. This central peak in the 30-60 Hz range is known as the γ frequency band, and is associated with cognitive processes in biological brains.

In Figure 8 we show an example of the KA-III model's ability to replicate these types of dynamics. In particular, the power spectrum analysis (Figure 8, bottom) shows the typical "1/f" power spectrum with a slope of around -2 and a central frequency peak, similar to that produced from the EEG recordings of a rat olfactory bulb.
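The power spectrum and its log-log slope can be estimated with standard FFT tools. The sketch below is illustrative rather than the authors' analysis: it uses integrated white noise as a stand-in signal, since that process has a known 1/f^2 spectrum (slope near -2), and fits the slope by least squares over an assumed frequency band.

```python
# Sketch of estimating a power spectrum and its log-log ("1/f") slope with NumPy.
# The signal here is integrated white noise, a stand-in with a known slope near -2;
# in practice the same code would be applied to a simulated KA-III or EEG series.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                                   # assumed sampling rate (Hz)
x = np.cumsum(rng.standard_normal(2**14))     # brown noise: power ~ 1/f^2

spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

band = (freqs >= 1.0) & (freqs <= 100.0)      # fit the slope over 1-100 Hz
slope, intercept = np.polyfit(np.log10(freqs[band]), np.log10(spectrum[band]), 1)
print(f"fitted log-log slope: {slope:.2f}")   # expected to be close to -2
```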

D. Comparison of KA to K-Sets and FFNNs

In Table IV we compare the features and equations of the K and KA models. The KA model is based on a discrete, second-order difference equation, as opposed to the continuous, second-order ordinary differential equation of the original K-sets. Both the K and KA models use time constants as parameters (α and β) in order to tune the dynamics of the models to those observed from real neural populations. It should be noted, however, that these time constant parameters are different between the two models and will take on different values in order to achieve the same dynamics. Both of the models use the same net input and asymmetric sigmoidal transfer function to describe the influence of activation passed between the population units. The final item of this table shows the total time needed to run a simulation of a K/KA-III that contained a total of 513 units and over 10,000 connections. The simulation was of 10 seconds of activity in the neural model, and both versions were coded and executed using Matlab 6.5 on a 1.0 GHz Pentium class computer. The KA implementation used the discrete equations described in the previous section. The original K unit implementation uses the Matlab Runge-Kutta method for approximating the solution to the coupled ODEs. The KA model executes the simulation in just under 10 seconds, while the K model takes over three times as long to run the same simulation. We will discuss more results of this type comparing the efficiency of the two models in coming sections. The simulations described in the next sections use an implementation ported to C++, which is faster still than the Matlab implementation.

The generic form of the difference equation used in standard FFNN models can be stated as:

a_i(t) = F\!\left( a_i(t-1),\, net_i(t-1) \right) \qquad (16)

Here the activation of a unit i in a FFNN model is a function of the activation of the unit in the previous time step along with the net influence of input from other units in the previous time step. In the vast majority of ANN models, however, the influence of the unit's own activity at the previous time step is ignored, and thus the normal usage simplifies to:

a_i(t) = net_i(t-1) \qquad (17)

In other words, the activity of a unit in the next time step depends solely on the net input to the unit from externally connected units. This is a reasonable simplification in strictly feed-forward networks, since there is only a single time step being simulated. On the introduction of the input to the first layer, the activity simply flows forward in one direction in the network. However, this simplification becomes less useful in the realm of recurrently connected networks, where the dynamics of a unit over time can be simulated, and such dynamics may affect the performance of the network. Most research in recurrent ANNs still uses only the simplified equation. This means that even in recurrent ANN research, the dynamics of the units depend solely on the activity of connected units in the previous time step. The units do not have nor use any intrinsic dynamics of their own.

The KA model is a simplification of the K-sets. One of the purposes of both models is to capture the dynamics of an isolated neural population in response to external stimulation. As such, both the K and KA models have intrinsic dynamics associated with a neuronal unit, such that in the absence of external stimulation they will continue to modify their activity levels as a function of the passage of time. This can of course be seen most clearly in the KA difference equations, which include terms that depend on the previous activity of a unit in determining the activity at the next time step, which differentiates these models from the vast majority of ANN modeling.

TABLE IV
COMPARISON OF FEATURES AND EQUATIONS OF KA AND K-SET NEURAL POPULATION MODELS

KA                                                          | K
Discrete                                                    | Continuous
2nd-order difference equation                               | 2nd-order ordinary differential equation
a_i(t) = a_i(t-1) + dec_i(t-1) + mom_i(t-1) + net_i(t-1)    | αβ d²a_i(t)/dt² + (α+β) da_i(t)/dt + a_i(t) = net_i(t)
dec_i(t) = -a_i(t) × α                                      |
mom_i(t) = r_i(t) × β                                       |
net_i(t) = Σ_j w_ij o_j(t)                                  | net_i(t) = Σ_j w_ij o_j(t)
o_j(t) = ε{1 - exp[-(e^(a_j(t)) - 1)/ε]}                    | o_j(t) = ε{1 - exp[-(e^(a_j(t)) - 1)/ε]}
9 sec.                                                      | 32 sec.

Further, both the K and KA models depend on a second-order term in order to correctly replicate the dynamics of neural populations. The second-order term is necessary in the K-model ODE in order to capture the damped oscillations of neural populations. Similarly, in the KA model, the momentum term depends on two previous time steps of the activation of the unit in order to capture this type of behavior.

Another difference between the K and KA models on one side, and ANN models on the other, is, of course, the form of the transfer function. In all cases, the nonlinearity of the transfer function is an important feature in capturing the nonlinear nature of neural functioning. ANN research uses many different transfer functions, though the most popular is the standard sigmoidal transfer function used in models using real activation values:

o_i(t) = \frac{1}{1 + e^{-a_i(t)}} \qquad (18)

However, the K and KA models use a particular asymmetric sigmoidal transfer function (Equation 9) that has a firmer basis in biological networks. The asymmetric transfer function used was derived by Freeman and associates by studying the nonlinear passing of activation between biological neural populations [50]. The asymmetry is an important property of the transfer function, as it means that excitatory input causes a destabilization of the dynamics of networks. This destabilization is essential in the collapse of aperiodic attractors observed in biological perceptual systems.
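The difference between the two nonlinearities is easy to see numerically. The short sketch below (illustrative only) evaluates the asymmetric K/KA transfer function of Eq. (9) and the standard logistic sigmoid of Eq. (18) over a small set of activity values; the sampled range is an arbitrary choice.

```python
# Sketch comparing the asymmetric K/KA transfer function (Eq. 9) with the
# standard logistic sigmoid (Eq. 18); the sampled activity range is arbitrary.
import numpy as np

EPSILON = 5.0                                     # arousal level from Table II

def asymmetric_sigmoid(a):
    return EPSILON * (1.0 - np.exp(-(np.exp(a) - 1.0) / EPSILON))

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

for a in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(f"a={a:+.1f}  asymmetric={asymmetric_sigmoid(a):+.3f}  logistic={logistic(a):.3f}")
# The asymmetric curve saturates near -1.1 for negative input but near +5 for
# positive input, so excitation has a much larger effect than inhibition.
```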

In Table V we summarize the comparison of the KA and feed-forward neural network (FFNN) models. Both use discrete difference equations to describe the activity of units and its changes over time. The vast majority of research in standard FFNNs uses a discrete equation that simply depends on the activity of connected units at a previous time step to determine the activity of the unit in the current time step. The KA model (and the original K-sets) models the intrinsic dynamics of isolated neural populations, and a second-order term is needed to capture these dynamics. In the discrete KA model, two previous time steps are needed in order to describe the momentum of a neural population. Both FFNN and KA models use nonlinear transfer functions. The form of the transfer function in the K and KA models is an asymmetric sigmoid that has a firmer basis in biological observations. The asymmetry is important in the K family of models as it allows for the destabilization of populations of units in response to inputs [21]. The biological models of the K family of equations are always multi-layered, highly recurrent models that capture the architecture of brain regions. A final difference between the KA and FFNN models is the learning rule. Backpropagation is the main type of learning mechanism used in standard FFNN research. The KA and K models use Hebbian learning, habituation and homeostasis to adjust the weight space in simulations [24]. These learning mechanisms have a firmer basis in biology and have been directly observed as processes in brains. The learning mechanisms used by the KA model will be discussed more thoroughly in later sections when we describe simulations using the KA model to control autonomous agents.
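
A minimal sketch of the two unsupervised weight updates named above is given below; the specific rule forms and rates are assumptions for illustration, and homeostasis is omitted.

def hebbian(w, o_pre, o_post, eta=0.05):
    # Strengthen a connection when the pre- and post-synaptic units are co-active.
    return w + eta * o_pre * o_post

def habituate(w, decay=0.001):
    # Gradually weaken a connection in the absence of significant stimuli.
    return (1.0 - decay) * w

w = 0.1
w = hebbian(w, o_pre=0.8, o_post=0.9)   # co-activation strengthens the weight
w = habituate(w)                        # otherwise the weight slowly decays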

IV. KA CONTROL OF AUTONOMOUS AGENT

The continuous K sets have been shown to be good models of olfactory cortical dynamics. They can replicate the complex dynamics and power spectra of biological cortical EEG recordings. The K sets can learn using unsupervised methods, such as Hebbian modification and habituation, to replicate some of the behavior of rabbits when learning new olfactory sensory stimuli. The K sets have also been extended to more abstract domains to demonstrate their use in standard pattern recognition tasks [45], [23], [24].

We are currently extending the KA model not only to perform perceptual tasks, but also to model the complete behavior of an organism, from perception to action and the steps needed in between [42]. One of the purposes of producing the KA model was to provide a simplified and efficient system that is still capable of producing the types of dynamics deemed important to biological organisms in producing general intelligent behavior. The KA model is a discrete version of the original K sets and is used in experiments with autonomous agents to replicate and explain the dynamics of cortical systems in organizing and producing behavior. Because of the efficiency gains made possible by the discrete simplification, much larger neuronal models may be explored in the context of building control mechanisms for autonomous agents. In this section we describe some simple examples of how KA units can be used to produce behavior in autonomous agents. We show a simple example of learning with the KA units and compare the results to other dynamical neural architectures.


TABLE V
COMPARISON OF FEATURES AND EQUATIONS OF KA AND FFNN MODELS

KA:
  Discrete difference equation: a_i(t) = F(a_i(t-1), a_i(t-2), net_i(t-1))
  net_i(t) = Σ_j w_ij o_j(t)
  o_j(t) = ε{1 - exp[-(e^{a_j(t)} - 1)/ε]}
  Multi-layer, highly recurrent
  Learning: Hebbian, habituation

FFNN:
  Discrete difference equation: a_i(t) = F(net_i(t-1))
  net_i(t) = Σ_j w_ij o_j(t)
  o_j(t) = 1 / (1 + e^{-a_j(t)})
  Multi-layer, feed-forward
  Learning: Backpropagation

A. Learning Object Avoidance Behavior

In this experiment we use a Khepera robotic agent in a virtual environment. The task we choose is similar to that explored in the original Distributed Adaptive Control models of Verschure, Krose and Pfeifer [52]. Figure 9 illustrates the morphology of the Khepera robot and the internal architecture used to perform the experiment. The Khepera robot is a simple robot that contains 8 infra-red distance sensors (labeled DS1-8 in the figure). In this task, the simulated Khepera robot is originally endowed with a set of basic reflexive behaviors that allow it to wander around in its environment, bumping into obstacles and turning away from them. For example, if the robot bumps into an object on the left side of its body, it turns to the right until it is no longer bumping the obstacle and then attempts to continue forward. We used a virtual simulation of a physical Khepera robot to perform these experiments [53]. The Khepera robot is equipped with two independent motors attached to wheels that allow the robot to move forward, move backward and turn. We use only the 6 forward-facing distance sensors in this experiment.

In Figure 9 we show the architecture used to perform the experiment. KA-0 units are used in the Reflex, Sensory and Motor areas to build the architecture. A set of three reflexive behaviors is hardwired to perform appropriate actions that allow the robot to wander in the environment. The Left Obs and Right Obs reflexes are connected to the three sensors on the left and right sides of the robot, respectively. If any of the three connected sensors is at its maximum value (indicating that the sensor is touching an obstacle), then the Left Obs or Right Obs unit is stimulated appropriately. The No Obs unit is similarly connected to the four forward-facing distance sensors, and it is only stimulated when all four sensors are below maximum, indicating that the robot is not bumping into an obstacle in front of its body.
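
The reflex stimulation just described can be summarized with the following sketch; the sensor normalization and the index groupings are assumptions about the Khepera layout, not details taken from the paper.

SENSOR_MAX = 1.0  # assumed normalized reading meaning the sensor touches an obstacle

def reflex_stimulation(ds):
    # ds: the six forward-facing distance sensor readings, ordered left to right.
    left_obs = 1.0 if any(s >= SENSOR_MAX for s in ds[:3]) else 0.0    # left-side contact
    right_obs = 1.0 if any(s >= SENSOR_MAX for s in ds[3:]) else 0.0   # right-side contact
    no_obs = 1.0 if all(s < SENSOR_MAX for s in ds[1:5]) else 0.0      # assumed middle four front sensors clear
    return left_obs, right_obs, no_obs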

The Left Obs and Right Obs behaviors respond to the robot bumping into an obstacle on the left or right side of its body, respectively. They are hardwired to the Turn Left and Turn Right motor behaviors. For example, Left Obs, which detects the presence of an obstacle on the left, is wired to stimulate the Turn Right behavior in order to turn away from the detected obstacle. In a similar manner, the No Obs reflex, which detects the condition of no obstacle currently impeding the robot, is hardwired to the Move Forward behavior, which causes the robot to move in a forward direction. The Turn Left and Turn Right motor behaviors are wired, as would be expected, to the Left Motor and Right Motor units to produce appropriate left-turn and right-turn behavior.

Fig. 9. (Bottom Left) The morphology of the Khepera agent, with 8 infra-red distance sensors positioned around the body and 2 motors for movement. Above is a graph of the response of the distance sensors (dashed line, labeled DS) and the inverse distance sensors (solid line, labeled DI) to an obstacle. (Center) The internal architecture of the Khepera agent. Reflexes are hardcoded such that the agent moves around and bumps into obstacles in the environment. When the agent bumps into an obstacle, it triggers motor units to turn away from the obstacle and continue in a new direction. Units in the Sensor area gradually learn to trigger avoidance behaviors to avoid objects at a distance before running into them.

The values of the Left Motor and Right Motor units are read out at discrete intervals to set the speeds of the robot's left and right wheels. The Turn Left and Turn Right behaviors are connected together with mutually inhibitory connections in order to avoid a conflict situation, which can result in an impasse when both turn behaviors are equally stimulated.
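
A sketch of this motor read-out is given below; the mutual-inhibition strength and the speed gain are illustrative assumptions.

def motor_readout(turn_left, turn_right, move_fwd, inhibition=0.8, gain=10.0):
    # Mutually inhibitory coupling between the two turn behaviors avoids an
    # impasse when both are stimulated at once.
    tl = max(0.0, turn_left - inhibition * turn_right)
    tr = max(0.0, turn_right - inhibition * turn_left)
    # Forward drive plus differential turning: turning left means the right
    # wheel runs faster than the left wheel.
    left_wheel = gain * (move_fwd - tl + tr)
    right_wheel = gain * (move_fwd + tl - tr)
    return left_wheel, right_wheel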

In this experiment, the goal of the agent is to associate long-range distance sensory information with its behaviors, so that it learns to trigger avoidance behaviors at a distance, before it actually bumps into an obstacle. Therefore, in the robot's behavior architecture we also have a set of units that are connected to the long-range infra-red distance sensors (labeled 'Sensory' in Figure 9). The distance sensors can sense obstacles at a distance from the robot. Six KA-0 units are connected to the normal output of the distance sensors (DS1-6 connected to S1-6), while six other KA-0 units are connected to the inverse of the corresponding distance sensors (DI1-6 connected to S7-12).


The inverse of a distance sensor is maximally active when no obstacle is detected, and minimally active when the sensor is right next to an obstacle. Initially the 12 sensory KA-0 units are fully connected to each other with small random weights (not shown in the figure). The 12 units are also fully connected to each of the 3 basic motor behaviors (Turn Left, Turn Right and Move Fwd), again with small random weights.

We use Hebbian learning and habituation on the connections between the 'Sensory' units and on those from the 'Sensory' to the 'Motor' units. Since these connections are initially random, they typically do not affect the behavior of the robot in the beginning. The reflexes cause the robot to move around in the environment. Later on, the robot may bump into something on its left. This causes some of the Motor behaviors to be performed, such as turning right. Since the Sensory units that are connected to sensors on the left side of the body have become stimulated while approaching the obstacle, they remain highly active when the right-turn behavior is activated. This allows the connection between the Sensory units that detect obstacles on the left and the right-turn behavior to be strengthened by Hebbian modification, because of their co-occurring excitation. Similar strengthening happens between units that sense the absence of obstacles on the right and the right-turn behavior as well. Hebbian modification is only performed in response to collisions, and therefore collisions act as a type of pain valence signal. Habituation is performed at other times, which lessens extraneous responses between the long-range sensors and motor behaviors in the absence of important stimuli. Gradually the links between the long-range sensors and the motor units become strong enough to activate behavior when an object is sensed at a distance, before the robot actually bumps into it. The robot has therefore learned a type of object avoidance behavior through the coupling of the activity of its sensors with its motor behaviors.
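
The collision-gated schedule described above can be sketched as follows, with Hebbian strengthening applied only when a collision occurs and habituation applied otherwise; the rule forms and rates are again illustrative assumptions.

import numpy as np

def update_sensory_motor_weights(W, sensory_out, motor_out, collided,
                                 eta=0.05, decay=0.001):
    # W: (n_motor, n_sensory) weights from the Sensory units to the Motor behaviors.
    if collided:
        # Pain-valence event: strengthen co-occurring sensory and motor activity.
        return W + eta * np.outer(motor_out, sensory_out)
    # No important stimulus: habituate, pruning extraneous responses.
    return (1.0 - decay) * W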

B. Results

In Figure 10 we show the results of learning object avoidance using the architecture and methods described above. In this figure we display the average performance of the robot over 50 independently conducted simulations. We plot both the results with only reflexive behavior (No Learning) and with the Sensory unit connections being modified through Hebbian modification and habituation (Learning). Along the X axis we show the time (in seconds) that the simulation has been running. We plot the total number of times that the robot has bumped into an object in the environment. In the No Learning condition, the robot continues to move and bump into obstacles in the environment. In the Learning condition, the robot quickly begins to avoid objects, and eventually learns to move through the environment without bumping into anything at all. These results are comparable, in terms of performance and learning rate, to those obtained by the original DAC architecture [52].

As shown, the KA-0 units using unsupervised learning methods can learn to avoid obstacles at a distance. This simple example also shows that KA units can be used to build and control the behavior of autonomous agents.

Fig. 10. Results of the Khepera simulation. As time goes by, the robot learns to bump into things less and less. This figure represents the cumulative results of 50 simulations. Time (in seconds) is plotted along the X axis, and the average cumulative bumps is plotted along the Y axis. We show the results without learning (only reflexive behavior) and with learning turned on.

As another example, consider the simple dynamical neural Schmitt trigger [54]. Hulse and Pasemann have shown that a simple architecture of 2 units is capable of producing object avoidance and exploration behavior in a Khepera robot. In their paper they used a genetic algorithm to learn appropriate weights to solve the avoidance and exploration task. Their architecture contains two input units, which receive the average activation from the three left and the three right distance sensors respectively, and two motor units. The motor units are connected with mutually inhibitory connections, similar to how our Turn Left and Turn Right motor units are mutually inhibitory. We use the weight settings they evolved and described in [54] to compare their performance to that of our KA-0 units on this similarly learned task.
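
For comparison, the structural skeleton of such a two-motor-unit controller is sketched below. Only the connectivity named in the text (averaged left/right sensor inputs and mutually inhibitory motor units) is taken from the description; the transfer function and all weight values are placeholders, not the evolved weights reported in [54].

import numpy as np

def two_unit_controller_step(m_prev, ds, w_in=1.0, w_inhib=-2.0):
    # m_prev: previous activations of the two motor units (left, right).
    # ds: the six distance sensor readings, ordered left to right.
    inputs = np.array([np.mean(ds[:3]), np.mean(ds[3:])])    # averaged left / right sensors
    inhib = w_inhib * np.array([m_prev[1], m_prev[0]])       # mutual inhibition between motor units
    return np.tanh(w_in * inputs + inhib)                    # placeholder transfer function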

Figure 11 compares typical paths generated in an environment by the KA architecture described previously and by the architecture using the dynamical Hulse-Pasemann Schmitt trigger (HPST). We use the KA units after they have adequately learned obstacle avoidance, at which point we freeze the weights, similar to the evolved weights learned for the HPST. The path of an HPST run is shown on the left, while the results of the KA units' behavior are shown on the right. In general, the KA exhibits performance comparable to the HPST in this environment. For example, we ran 10 simulations each of the HPST and KA architectures. Each trial simulated 60 minutes of activity by the Khepera robot. These results are summarized in Table VI, where we show the distances and standard deviations obtained over the 10 trials for each architecture in this first experiment. The results indicate that the KA traveled a somewhat shorter distance over the same time.

The main goal of this section is to demonstrate that the KA can perform at the same level as, and in some cases better than, alternative control algorithms such as the HPST. This is a proof of principle of the feasibility of the K-based control approach.


Fig. 11. A comparison of typical paths created by the Hulse-Pasemann neural Schmitt trigger (Left) and the KA units (Right).

TABLE VI
RESULTS OF KA AND HULSE-PASEMANN SCHMITT TRIGGER KHEPERA SIMULATIONS

                Experiment 1              Experiment 2
  Arch      dist        std         dist       std-d     time       std-t
  KA        246.11 m    1.08        3.62 m     0.04      48.51 s    0.66
  HPST      250.52 m    0.27        3.68 m     0.08      51.13 s    1.31

In Figure 12 we study how much time it takes to move from the top of a long corridor to the bottom end. It is seen that the trajectory produced by the KA is smoother, while HPST control gives trajectories with sharp corners. In order to analyze this behavior, an additional experiment has been designed with 10 trials of trajectories. We display the results of 10 trials for the HPST architecture (Left) and the KA architecture (Right), starting at the same location, with the orientation varied over the 10 trials. The results for the second experiment are summarized in Table VI. By both measures, in this environment, the KA is more efficient, because it travels to the end in less time and over less distance. This is mainly a result of the form of the path taken by the KA architecture. The KA units trigger the turning behavior more smoothly, and at a greater distance from the obstacles, resulting in smooth, curved turns.

It is not claimed, however, that the KA architecture developed here is in any way superior to the HPST for the given simple task. Other performance criteria, such as area explored and covered or mean times to revisit areas, may give different results. But, given appropriate evaluation functions in the case of the HPST architecture, and value signals for the KA architecture, these differing tasks could be learned equally well by either approach.

The dynamics used in this experiment by the KA units are relatively simple. We use a homogeneous collection of KA-0 units. The various recurrent connections in the 'Sensory' and 'Motor' areas do produce KA-I and KA-II level behaviors. The real power of the K and KA family of models comes when we exploit chaotic dynamics to form perceptual categories and produce complex learned behaviors.

Fig. 12. Paths created by 10 trials of the Hulse-Pasemann neural Schmitt trigger (Left) and the KA architecture (Right). We study how much time it takes to get to the end of the corridor and how long a distance the agent travels during this traversal.

We have begun work along these lines of using such chaotic dynamics in autonomous agents; this research is in progress [55], [56], [57].

V. DISCUSSION

The above task serves to demonstrate that the KA units can effectively be connected together to form the control mechanism for an autonomous agent. The performance of the KA units is comparable to that achieved by Hulse and Pasemann with their HPST for the object avoidance task [54]. The learning of object avoidance by the KA is also comparable with Verschure, Krose and Pfeifer's results in their original distributed adaptive control experiments [3], [4]. The dynamics of the KA units can be shaped by Hebbian modification and habituation to reliably associate the conditioned stimuli from the long-range sensors with the unconditioned and instinctual motor responses to turn away from collisions. This type of learning is an example of classical conditioning, using an unsupervised learning mechanism to associate stimuli with instinctual behaviors.

We have not yet, in this simulation, shown how a full implementation of an aperiodic KA-III might be used to form a control mechanism for an autonomous agent. We believe that mechanisms based on the formation and dissolution of an aperiodic attractor landscape have great potential for improving the cognitive abilities of autonomous agents.


Demonstrating this with the KA-III remains our ultimate goal for future work. The performance of the KA units for control in this simulation is by no means meant to be an example of what we believe is ultimately achievable by the application of aperiodic dynamics to the control problem. Much simpler architectures are known to exist that effectively solve the obstacle avoidance problem in complex environments. For example, Hulse and Pasemann [54] show effectively how one can evolve the connection weights between two recurrently connected units to perform obstacle avoidance. The recurrent nature of the connections is also important in our models, as these connections form the basis for generating the oscillatory and aperiodic dynamics. The ultimate goal of developing the KA model, however, is to explore biologically motivated architectures of complete intentional systems using autonomous agents.

In this paper we have demonstrated the basic ability of the KA model to replicate the important dynamics of the original K sets developed by Freeman et al. [32], [45]. The KA model is a discretized simplification of the cortical dynamics first developed to model the sensory systems of biological brains. We are now beginning to extend the original K sets not only to model cortical sensory dynamics, but also to explain the production and selection of behavior in complete autonomous systems. Towards that end, we are using the KA sets to build more complicated architectures that capture pieces of the important areas believed to contribute to basic intentional behavior [46].

In our view of cognition and the production of intelligent behavior, aperiodic dynamics plays an important role in the process. Chaotic dynamics provides many advantages to a system that needs to balance stability and flexibility in the actions it produces. Aperiodic dynamics have been observed in the sensory cortices of biological brains, and have been speculated to be useful in the sensory recognition process. The K-III and KA-III are capable of replicating the types of dynamics observed in these cortical regions.

But perceptual systems alone, though very important, are not the only components necessary for the production of intelligent behavior. In [42] we have speculated on the essential pieces necessary for the production of general intelligent behavior. Besides sensory and motor systems, organisms need at least a basic memory system (provided by the hippocampus) and a motivational system. There is biological and experimental evidence [21], [33], [58] that the same types of dynamics observed in the perceptual system, and modeled by the original K-III, may also be the essential building blocks used in these other three areas. The K-IV architecture is a model of a complete intentional system, comprising sensory, motor, memory and motivational systems. Each individual system is modeled by some form of K-III, and these K-III components together form a complete agent.

We have taken steps towards modeling the complete K-IV. In this paper we presented an example of using KA units to form the perceptual and motor systems. We are currently working on KA-III models for the simulation of hippocampal functions such as place cell formation and cognitive map building [42], [58], [59], [55], [56], [57].

These steps are essential to a better understanding of how observed cortical dynamics participate in the production of intentional behavior in biological brains.

VI. CONCLUSION

In this work we have developed a discrete-time model of neural dynamics in neural networks with excitatory and inhibitory connections. We have built a hierarchy of KA models, starting from the KA-I and KA-II units with fixed-point and limit-cycle dynamics, up to the KA-III model with complex aperiodic dynamics. We have demonstrated the feasibility of generating chaotic oscillations in KA-III and compared the dynamics of the KA model to the original K sets. The developed KA units can be used to build an adaptive autonomous system that explores an environment and generates behavioral strategies in order to solve a given task. The K and KA series of models represent steps toward a better understanding of how the aperiodic dynamics observed in the cortical systems of biological brains play a part in the production of intelligent behavior.

ACKNOWLEDGMENT

This work was supported by NASA Intelligent Systems Research Grant NCC-2-1244 and by the National Science Foundation Grant NSF-EIA-0130352.

Derek Harter (Member) is an Assistant Professor of Computer Science and Information Systems at Texas A&M University-Commerce. He received his Ph.D. in 2004 from the University of Memphis for research involving neurodynamical models and their applications to autonomous agents. His research interests are in AI, cognitive science and the study of complex systems.

Robert Kozma (Senior Member '98) holds a Ph.D. in applied physics from Delft University of Technology, The Netherlands (1992). Presently he is Professor of Computer Science, Department of Mathematical Sciences, The University of Memphis, where he directs the Computational Neurodynamics Laboratory. He has published 3 books, over 50 journal articles, and 100+ papers in conference proceedings. His research interests include autonomous adaptive brain systems, mathematical and computational modeling of spatio-temporal neurodynamics, and the emergence of intelligent behavior in biological and computational systems. Dr. Kozma serves on the Board of Governors of the International Neural Network Society (INNS), chairs the Special Interest Group on NeuroDynamics, and is a member of the Neural Networks Technical Committee of the IEEE Computational Intelligence Society. He has been Program Chair of, and acted as a Program Committee member for, a number of international conferences on neural networks, fuzzy systems, and computational intelligence.


REFERENCES

[1] A. Clark, Mindware: An Introduction to the Philosophy of Cognitive Science. Oxford, NY: Oxford University Press, 2001.
[2] G. M. Edelman and G. Tononi, A Universe of Consciousness: How Matter Becomes Imagination. New York, NY: Basic Books, 2000.
[3] R. Pfeifer and C. Scheier, Understanding Intelligence. Cambridge, MA: The MIT Press, 1998.
[4] P. F. M. J. Verschure and P. Althaus, “A real-world rational agent: Unifying old and new AI,” Cognitive Science, vol. 27, no. 4, pp. 561–590, 2003.
[5] S. I. Amari, “Neural theory of association and concept formation,” Biological Cybernetics, vol. 26, pp. 175–185, 1977.
[6] J. J. Hopfield, “Neuronal networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Science, vol. 81, pp. 3058–3092, 1982.
[7] A. Babloyantz and A. Destexhe, “Low-dimensional chaos in an instance of epilepsy,” Proceedings of the National Academy of Science, vol. 81, pp. 3513–3517, 1986.
[8] I. Tsuda, “Can stochastic renewal maps be a model for cerebral cortex,” Physica D, vol. 75, pp. 165–178, 1994.
[9] X. Wu and H. Liljenstrom, “Regulating the nonlinear dynamics of olfactory cortex,” Network: Computation in Neural Systems, vol. 5, pp. 47–60, 1994.
[10] I. Aradi, G. Barna, and P. Erdi, “Chaos and learning in the olfactory bulb,” International Journal of Intelligent Systems, vol. 1091, pp. 89–117, 1995.
[11] M. A. Sanches-Montanes, P. Konig, and P. Verschure, “Learning sensory maps with real-world stimuli in real time using a biophysically realistic learning rule,” IEEE Transactions on Neural Networks, vol. 13, pp. 619–632, 2002.
[12] K. Aihara, T. Takabe, and M. Toyoda, “Chaotic neural networks,” Physics Letters A, vol. 144, pp. 333–340, 1990.
[13] Y. V. Andreyev, A. S. Dmitriev, and D. A. Kuminov, “1D maps, chaos, and neural networks for information processing,” International Journal of Bifurcation and Chaos, vol. 6, pp. 627–646, 1996.
[14] L. P. Wang, “Oscillatory and chaotic dynamics in neural networks under varying operating conditions,” IEEE Transactions on Neural Networks, vol. 796, pp. 1382–1388, 1996.
[15] R. M. Borisyuk and G. N. Borisyuk, “Information coding on the basis of synchronization of neuronal activity,” Biosystems, vol. 40, pp. 3–10, 1997.
[16] M. Nakagawa, “Chaos associative memory with a periodic activation function,” Journal of the Physical Society of Japan, vol. 67, pp. 2281–2293, 1998.
[17] H. Nakano and T. Saito, “Grouping synchronization in a pulse-coupled network of chaotic spiking oscillators,” IEEE Transactions on Neural Networks, vol. 15, pp. 1018–1026, 2004.
[18] T. L. Carroll and L. M. Pecora, “Stochastic resonance and chaos,” Physical Review Letters, vol. 70, pp. 576–579, 1993.
[19] W. L. Ditto, S. N. Rauseo, and M. L. Spano, “Experimental control of chaos,” Physical Review Letters, vol. 26, pp. 3211–3214, 1990.
[20] C. A. Skarda and W. J. Freeman, “How brains make chaos in order to make sense of the world,” Behavioral and Brain Sciences, vol. 10, pp. 161–195, 1987.
[21] W. J. Freeman, How Brains Make Up Their Minds. London: Weidenfeld & Nicolson, 1999.
[22] W. J. Freeman, R. Kozma, and P. J. Werbos, “Biocomplexity: Adaptive behavior in complex stochastic dynamical systems,” BioSystems, vol. 59, pp. 109–123, 2000.
[23] R. Kozma and W. J. Freeman, “Encoding and recall of noisy data as chaotic spatio-temporal memory patterns in the style of the brains,” in Proceedings of the IEEE/INNS/ENNS International Joint Conference on Neural Networks (IJCNN’00), Como, Italy, July 2000, pp. 5033–5038.
[24] ——, “Chaotic resonance - methods and applications for robust classification of noisy and variable patterns,” International Journal of Bifurcation and Chaos, vol. 11, no. 6, pp. 1607–1629, 2001.
[25] H. Liljenstrom, “Global effects of fluctuations in neural information processing,” International Journal of Neural Systems, vol. 4, pp. 497–505, 1996.
[26] I. Tsuda and A. Yamaguchi, “Singular-continuous nowhere differentiable attractors in neural systems,” Neural Networks, vol. 11, pp. 927–937, 1998.
[27] S. L. Bressler and J. A. S. Kelso, “Cortical coordination dynamics and cognition,” Trends in Cognitive Sciences, vol. 5, no. 1, pp. 26–36, 2001.
[28] H. L. Liang, M. Z. Ding, and S. L. Bressler, “Detection of cognitive state transitions by stability changes in event-related cortical field potentials,” Neurocomputing, vol. 38, pp. 1423–1428, 2001.
[29] O. Manette and M. Maier, “Temporal processing in primate motor control: relation between cortical and EMG activity,” IEEE Transactions on Neural Networks, vol. 15, pp. 1260–1267, 2004.
[30] J. C. Principe, V. G. Tavares, and J. G. Harris, “Design and implementation of a biologically realistic olfactory cortex in analog VLSI,” Proceedings of the IEEE, vol. 89, pp. 1030–1051, 2001.
[31] W. J. Freeman, Mass Action in the Nervous System. New York, NY: Academic Press, 1975.
[32] ——, “Simulation of chaotic EEG patterns with a dynamic model of the olfactory system,” Biological Cybernetics, vol. 56, pp. 139–150, 1987.
[33] W. J. Freeman and R. Kozma, “Local-global interactions and the role of mesoscopic (intermediate-range) elements in brain dynamics,” Behavioral and Brain Sciences, vol. 23, no. 3, p. 401, 2000.
[34] R. Gutierrez-Osuna and A. Gutierrez-Galvez, “Habituation in the KIII olfactory model with chemical sensor arrays,” IEEE Transactions on Neural Networks, vol. 14, pp. 1565–1568, 2003.
[35] D. Xu and J. Principe, “Dynamical analysis of neural oscillators in an olfactory cortex model,” IEEE Transactions on Neural Networks, vol. 23, pp. 46–55, 2000.
[36] C. M. Marcus and R. M. Westerveld, “Dynamics of iterated-map neural networks,” Physical Review A, vol. 40, pp. 501–504, 1989.
[37] L. P. Wang, “On the dynamics of discrete-time, continuous-state Hopfield neural networks,” IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 45, no. 6, pp. 747–749, 1998.
[38] I. Tsuda, “Towards an interpretation of dynamic neural activity in terms of chaotic dynamical systems,” Behavioral and Brain Sciences, vol. 24, no. 4, pp. 793–847, 2001.
[39] W. J. Freeman, “The physiology of perception,” Scientific American, vol. 264, no. 2, pp. 78–85, 1991.
[40] K. Kaneko and I. Tsuda, “Constructive complexity and artificial reality: An introduction,” Physica D, vol. 75, pp. 1–10, 1994.
[41] R. Kozma, “On the constructive role of noise in stabilizing itinerant trajectories of chaotic dynamical systems,” Chaos, vol. 1193, pp. 1078–1090, 2003.
[42] R. Kozma, W. J. Freeman, and P. Erdi, “The KIV model - nonlinear spatio-temporal dynamics of the primordial vertebrate forebrain,” Neurocomputing, vol. 52-54, pp. 819–826, 2003.
[43] D. Harter, R. Kozma, and S. P. Franklin, “Ontogenetic development of skills, strategies and goals for autonomously behaving systems,” in Proceedings of the Fifth International Conference on Cognitive and Neural Systems (CNS 2001), Boston, MA, May 2001, p. 18.
[44] W. J. Freeman, “Olfactory system: Odorant detection and classification,” in Building Blocks for Intelligent Systems: Brain Components as Elements of Intelligent Function, D. Amit and G. Parisi, Eds. Academic Press, 1997, vol. 3, ch. 1, pp. 1–1.
[45] K. Shimoide, M. C. Greenspon, and W. J. Freeman, “Modeling of chaotic dynamics in the olfactory system and application to pattern recognition,” in Neural Systems Analysis and Modeling, F. H. Eeckman, Ed. Boston: Kluwer, 1993, pp. 365–372.
[46] W. J. Freeman, “The neurodynamics of intentionality in animal brains may provide a basis for constructing devices that are capable of intelligent behavior,” in NIST Workshop on Metrics for Intelligence: Development of Criteria for Machine Intelligence, National Institute of Standards and Technology (NIST), Gaithersburg, MD, 2000.
[47] W. Gerstner and W. M. Kistler, Spiking Neuron Models. Cambridge: Cambridge University Press, 2002.
[48] F. Pasemann, “Complex dynamics and the structure of small neural networks,” Network: Computation in Neural Systems, vol. 13, pp. 5–35, 2002.
[49] A. Wolf, J. B. Swift, H. L. Swinney, and J. A. Vastano, “Determining Lyapunov exponents from a time series,” Physica D, vol. 16, pp. 285–317, 1985.
[50] W. J. Freeman and K. Shimoide, “New approaches to nonlinear concepts in neural information processing: Parameter optimization in a large-scale, biologically plausible cortical network,” in An Introduction to Neural and Electronic Networks, Zornetzer, Ed. Academic Press, 1994, ch. 7, pp. 119–137.
[51] L. Kay, K. Shimoide, and W. J. Freeman, “Comparison of EEG time series from rat olfactory system with model composed of nonlinear coupled oscillators,” International Journal of Bifurcation and Chaos, vol. 5, no. 3, pp. 849–858, 1995.
[52] P. F. M. J. Verschure, B. Krose, and R. Pfeifer, “Distributed adaptive control: The self-organization of behavior,” Robotics and Autonomous Systems, vol. 9, pp. 181–196, 1992.


[53] O. Michel, “Webots v4.0 3-d physics based mobile robot simulator,”www.cyberbotics.com, 2003.

[54] M. Hulse and F. Pasemann, “Dynamical neural Schmitt trigger for robot control,” Lecture Notes in Computer Science, ICANN 2002, vol. 2415, pp. 783–788, 2002.
[55] D. Harter and R. Kozma, “Navigation and cognitive map formation using aperiodic neurodynamics,” in From Animals to Animats 8: The Eighth International Conference on the Simulation of Adaptive Behavior (SAB’04), Los Angeles, CA, July 2004, pp. 450–455.
[56] ——, “Aperiodic dynamics and the self-organization of cognitive maps in autonomous agents,” in Proceedings of the 17th International Florida Artificial Intelligence Research Society Conference (FLAIRS), Miami Beach, FL, May 2004, pp. 424–429.
[57] ——, “Aperiodic dynamics for appetitive/aversive behavior in autonomous agents,” in Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA), New Orleans, LA, April 2004, pp. 2147–2152.
[58] R. Kozma and W. J. Freeman, “Basic principles of the KIV model and its application to the navigation problem,” Journal of Integrative Neuroscience, vol. 2, no. 1, pp. 125–145, 2003.
[59] R. Kozma and P. Ankaraju, “Learning spatial navigation using chaotic neural network model,” in Proceedings of the IJCNN 2003 International Joint Conference on Neural Networks, Portland, OR, July 2003, pp. 1476–1479.