
REPORT DOCUMENTATION PAGE (AD-A264 831)

2. REPORT DATE: March 1993

3. REPORT TYPE AND DATES COVERED: Final Technical Report, October 1, 1989-December 31, 1992

4. TITLE AND SUBTITLE: Development of Neural Network Architectures for Self-Organizing Pattern Recognition and Robotics

5. FUNDING NUMBERS: AFOSR-90-0083; 61102F 2313

6. AUTHOR(S): Gail A. Carpenter and Stephen Grossberg

7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Boston University, Center for Adaptive Systems, 111 Cummington Street, Boston, MA 02215

9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): AFOSR/NL, 110 Duncan Avenue, Suite B115 (Dr. John Tangney)

12a. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution unlimited

13. ABSTRACT (Maximum 200 words): During the DARPA ANNT Program contract, new neural network architectures were developed to carry out autonomous real-time preprocessing, segmentation, recognition, timing, and control of both spatial and temporal inputs. These architectures contribute to: (1) preprocessing of visual form and motion signals; (2) preprocessing of acoustic signals; (3) adaptive pattern recognition and categorization in an unsupervised learning context; (4) adaptive pattern recognition and prediction in a supervised learning context; (5) processing of temporal patterns using working memory networks, with applications to 3-D object recognition; (6) adaptive timing for scheduling; (7) adaptive sensory-motor control using head-centered spatial representations of 3-D target position.

93-10652

15. NUMBER OF PAGES: 26

17-19. SECURITY CLASSIFICATION OF REPORT / THIS PAGE / ABSTRACT: (U) / (U) / (U)

20. LIMITATION OF ABSTRACT: UNLIMITED


DARPA FINAL TECHNICAL REPORT

October 1, 1989-December 31, 1992

DEVELOPMENT OF NEURAL NETWORK ARCHITECTURES FOR
SELF-ORGANIZING PATTERN RECOGNITION AND ROBOTICS

Principal Investigators:
Gail A. Carpenter and Stephen Grossberg

Boston University
Center for Adaptive Systems
and
Department of Cognitive and Neural Systems
111 Cummington Street
Boston, MA 02215

(617) 353-7857 (phone)
(617) 353-7755 (fax)

Contract AFOSR-90-0083
Artificial Neural Network Technology (ANNT) Program:

Theory and Modeling



DARPA Final Technical Report

Development of Neural Network Architectures
for Self-Organizing Pattern Recognition and Robotics

Principal Investigators:
Gail A. Carpenter and Stephen Grossberg

Center for Adaptive Systems
and
Department of Cognitive and Neural Systems
Boston University

TABLE OF CONTENTS

Publications

Project Summary

Project Descriptions

1. Preprocessing of Visual Form and Motion Signals
   A. Why do parallel cortical systems exist for the perception of static form and moving form?
   B. Cortical dynamics of visual motion perception: Short-range and long-range apparent motion
   C. Neural dynamics of global motion segmentation: Direction fields, apertures, and resonant grouping
   D. Brightness perception, illusory contours, and corticogeniculate feedback
   E. A what-and-where neural network for invariant image preprocessing
   F. Figure-ground separation of connected scenic figures: Boundaries, filling-in, and opponent processing
   G. Cortical dynamics of 3-D vision and figure-ground pop-out
   H. Synchronized oscillations for binding spatially distributed feature codes into coherent spatial patterns
   I. Cortical dynamics of feature binding and reset
   J. Processing of synthetic aperture radar (SAR) images by the BCS/FCS system

2. Preprocessing of Acoustic Signals
   A. A neural network model of pitch detection and representation
   B. Evaluation of speaker normalization methods for vowel recognition using fuzzy ARTMAP and K-NN

3. Adaptive Pattern Recognition and Categorization: Unsupervised Learning
   A. Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system
   B. ART 2-A: An adaptive resonance algorithm for rapid category learning and recognition
   C. A neural theory of visual search
   D. Normal and amnesic learning, recognition, and memory by a neural model of cortico-hippocampal interactions

4. Adaptive Pattern Recognition and Prediction: Supervised Learning
   A. ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network
   B. Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps
   C. Fusion ARTMAP: A neural network architecture for multi-channel data fusion and classification
   D. Comparative performance measures of fuzzy ARTMAP, learning vector quantization, and back propagation for handwritten character recognition
   E. Rule extraction by fuzzy ARTMAP
   F. Medical database analysis and survival prediction by neural networks
   G. Fuzzy ARTMAP, slow learning, and probability estimation
   H. Probabilistic neural networks and calibration of supervised learning systems
   I. Construction of neural network expert systems using switching theory

5. Temporal Patterns, Working Memory, and 3-D Object Recognition
   A. Working memory networks for learning temporal order with application to 3-D visual object recognition
   B. Working memories for storage and recall of arbitrary temporal sequences
   C. ART-EMAP: Learning and prediction with spatial and temporal evidence accumulation

6. Adaptive Timing
   A. A neural network model of adaptively timed reinforcement learning and hippocampal dynamics

7. Adaptive Control
   A. Neural representations for sensory-motor control: Head-centered 3-D target positions from opponent eye commands
   B. Emergence of tri-phasic muscle activation from the nonlinear interactions of central and spinal neural network circuits
   C. Cerebellar learning in an opponent motor controller for adaptive load compensation and synergy formation
   D. Optimal control of machine set-up scheduling
   E. Vector associative maps: Unsupervised real-time error-based learning and control of movement trajectories
   F. A neural pattern generator that exhibits bimanual coordination and human and quadruped gait transitions
   G. VITEWRITE: A neural network model for handwriting production


PUBLICATIONS PARTIALLY SUPPORTED BY DARPA CONTRACT AFOSR-90-0083

CENTER FOR ADAPTIVE SYSTEMS
AND
DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS
BOSTON UNIVERSITY

OCTOBER 1, 1989-DECEMBER 31, 1992

BOOKS

1. Carpenter, G.A. and Grossberg, S. (Eds.) (1991). Pattern recognition by self-organizing neural networks. Cambridge, MA: MIT Press.

2. Commons, M., Grossberg, S., and Staddon, J.E.R. (Eds.) (1991). Neural network models of conditioning and action. Hillsdale, NJ: Erlbaum Associates.

3. Carpenter, G.A. and Grossberg, S. (Eds.) (1992). Neural networks for vision and image processing. Cambridge, MA: MIT Press.

ARTICLES

1. Asfour, Y.R., Carpenter, G.A., Grossberg, S., and Lesher, G.W. (1993). Fusion ARTMAP: A neural network architecture for multi-channel data fusion and classification. Technical Report CAS/CNS-TR-93-006, Boston University. Submitted for publication.

2. Bradski, G. (1992). Dynamic programming for optimal control of set-up scheduling with neural network modifications. In Proceedings of the international joint conference on neural networks. Technical Report CAS/CNS-TR-92-002, Boston University.

3. Bradski, G., Carpenter, G.A., and Grossberg, S. (1991). Working memory networks for learning multiple groupings of temporally ordered events: Applications to 3-D visual object recognition. In Proceedings of the international joint conference on neural networks, Seattle, I, 723-728. Piscataway, NJ: IEEE Service Center.

4. Bradski, G., Carpenter, G.A., and Grossberg, S. (1992). Working memory networks for learning temporal order with application to 3-D visual object recognition. Neural Computation, 4, 270-286.

5. Bradski, G., Carpenter, G.A., and Grossberg, S. (1992). Working memories for storage and recall of arbitrary temporal sequences. In Proceedings of the international joint conference on neural networks, Baltimore, II, 57-62. Piscataway, NJ: IEEE Service Center. Technical Report CAS/CNS-TR-92-003, Boston University.

6. Bradski, G., Carpenter, G.A., and Grossberg, S. (1992). STORE working memory networks for storage and recall of arbitrary temporal sequences. Submitted for publication.

7. Bullock, D., Contreras-Vidal, J.L., and Grossberg, S. (1993). Cerebellar learning in an opponent motor controller for adaptive load compensation and synergy formation. Technical Report CAS/CNS-TR-93-009, Boston University. Submitted for publication.

8. Bullock, D. and Grossberg, S. (1992). Emergence of tri-phasic muscle activation from the nonlinear interactions of central and spinal neural circuits. Human Movement Science, 11, 157-167.

9. Bullock, D., Grossberg, S., and Mannes, C. (1993). A neural network model for cursive script production. Submitted for publication.


10. Bullock, D., Grossberg, S., and Mannes, C. (1993). The VITEWRITE model of handwriting production. Technical Report CAS/CNS-TR-93-011, Boston University. Submitted for publication.

11. Burke, H.B., Rosen, D.B., Goodman, P.H., and Harrell, F.E. (1993). Learning to predict breast cancer survival and disease stage with missing and censored attributes. Submitted for publication.

12. Carpenter, G.A. and Govindarajan, K. (1993). Evaluation of speaker normalization methods for vowel recognition using fuzzy ARTMAP and K-NN. Technical Report CAS/CNS-TR-93-013, Boston University. Submitted for publication.

13. Carpenter, G.A. and Grossberg, S. (1990). Adaptive resonance theory: Neural network architectures for self-organizing pattern recognition. In R. Eckmiller, G. Hartmann, and G. Hauske (Eds.), Parallel processing in neural systems and computers. Amsterdam: Elsevier, pp. 383-389.

14. Carpenter, G.A. and Grossberg, S. (1990). ART 3: Self-organization of distributed pattern recognition codes in neural network hierarchies. In Proceedings of the international conference on neural networks (Paris). Dordrecht: Kluwer Academic Publishers.

15. Carpenter, G.A. and Grossberg, S. (1991). Distributed hypothesis testing, attention shifts, and transmitter dynamics during the self-organization of brain recognition codes. In H.G. Schuster and W. Singer (Eds.), Nonlinear dynamics and neuronal networks. New York: Springer-Verlag, pp. 305-334.

16. Carpenter, G.A. and Grossberg, S. (1991). Attention, resonance, and transmitter dynamics in models of self-organizing cortical networks for recognition learning. In A.V. Holden and V.I. Kryukov (Eds.), Neurocomputers and attention, I: Neurobiology, synchronization, and chaos. Manchester: Manchester University Press, pp. 201-222.

17. Carpenter, G.A. and Grossberg, S. (1992). Self-organizing cortical networks for distributed hypothesis testing and recognition learning. In J.G. Taylor and C.L.T. Mannion (Eds.), Theory and applications of neural networks. London: Springer-Verlag, pp. 3-27.

18. Carpenter, G.A. and Grossberg, S. (1992). Adaptive resonance theory: Self-organizing neural network architectures for pattern recognition and hypothesis testing. In Encyclopedia of artificial intelligence, Second edition. New York: Wiley and Sons, pp. 13-21.

19. Carpenter, G.A. and Grossberg, S. (1992). Fuzzy ARTMAP: Supervised learning, recognition, and prediction by a self-organizing neural network. IEEE Communications Magazine, 30, 38-49.

20. Carpenter, G.A. and Grossberg, S. (1993). Normal and amnesic learning, recognition, and memory by a neural model of cortico-hippocampal interactions. Trends in Neurosciences, in press.

21. Carpenter, G.A. and Grossberg, S. (1993). Fuzzy ARTMAP: A synthesis of neural networks and fuzzy logic for supervised categorization and nonstationary prediction. In R.R. Yager and L.A. Zadeh (Eds.), Fuzzy sets, neural networks, and soft computing, in press.

22. Carpenter, G.A., Grossberg, S., and Iizuka, K. (1992). Comparative performance measures of fuzzy ARTMAP, learning vector quantization, and back propagation for handwritten character recognition. In Proceedings of the international joint conference on neural networks, Baltimore, I, 794-799. Piscataway, NJ: IEEE Service Center. Technical Report CAS/CNS-TR-92-005, Boston University.

23. Carpenter, G.A., Grossberg, S., and Lesher, G.W. (1992). A what-and-where neural network for invariant image processing. In Proceedings of the international joint conference on neural networks, Baltimore. Piscataway, NJ: IEEE Service Center. Technical Report, Boston University.

24. Carpenter, G.A., Grossberg, S., Markuzon, N., Reynolds, J.H., and Rosen, D.B. (1992). Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks, 3, 698-713.

25. Carpenter, G.A., Grossberg, S., Markuzon, N., Reynolds, J.H., and Rosen, D.B. (1992). Supervised learning by adaptive resonance neural networks. In G. Scarpetta (Ed.), Structure: From physics to general systems. Festschrift volume in honor of the 70th birthday of Professor Eduardo Caianiello, in press.

26. Carpenter, G.A., Grossberg, S., Markuzon, N., Reynolds, J.H., and Rosen, D.B. (1992). Attentive supervised learning and recognition by an adaptive resonance system. In G.A. Carpenter and S. Grossberg (Eds.), Neural networks for vision and image processing. Cambridge, MA: MIT Press, pp. 305-384.

27. Carpenter, G.A., Grossberg, S., Markuzon, N., Reynolds, J.H., and Rosen, D.B. (1992). Fuzzy ARTMAP: An adaptive resonance architecture for incremental learning of analog maps. In Proceedings of the international joint conference on neural networks, Baltimore, III, 309-314. Piscataway, NJ: IEEE Service Center.

28. Carpenter, G.A., Grossberg, S., and Reynolds, J.H. (1991). ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network. Neural Networks, 4, 565-588.

29. Carpenter, G.A., Grossberg, S., and Reynolds, J.H. (1991). ARTMAP: A self-organizing neural network architecture for fast supervised learning and pattern recognition. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas (Eds.), Artificial neural networks, Volume 1. Amsterdam: Elsevier, pp. 31-36.

30. Carpenter, G.A., Grossberg, S., and Reynolds, J.H. (1991). A neural network architecture for fast on-line supervised learning and pattern recognition. In H. Wechsler (Ed.), Neural networks for perception, Volume 1: Human and machine perception. New York: Academic Press, pp. 248-264.

31. Carpenter, G.A., Grossberg, S., and Reynolds, J.H. (1991). A self-organizing ARTMAP neural architecture for supervised learning and pattern recognition. In R. Mammone and Y.Y. Zeevi (Eds.), Neural networks: Theory and applications. New York: Academic Press, pp. 43-80.

32. Carpenter, G.A., Grossberg, S., and Reynolds, J.H. (1991). ARTMAP: A self-organizing neural network architecture for fast supervised learning and pattern recognition. In Proceedings of the international joint conference on neural networks, Seattle, I, 863-868. Piscataway, NJ: IEEE Service Center.

33. Carpenter, G.A., Grossberg, S., and Reynolds, J.H. (1993). Fuzzy ARTMAP, slow learning, and probability estimation. Technical Report CAS/CNS-TR-93-014, Boston University. Submitted for publication.

34. Carpenter, G.A., Grossberg, S., and Rosen, D.B. (1991). ART 2-A: An adaptive resonance algorithm for rapid category learning and recognition. Neural Networks, 4, 493-504.

35. Carpenter, G.A., Grossberg, S., and Rosen, D.B. (1991). ART 2-A: An adaptive resonance algorithm for rapid category learning and recognition. In Proceedings of the international joint conference on neural networks, Seattle, II. Piscataway, NJ: IEEE Service Center.

36. Carpenter, G.A., Grossberg, S., and Rosen, D.B. (1991). Fuzzy ART: An adaptive resonance algorithm for rapid, stable classification of analog patterns. In Proceedings of the international joint conference on neural networks, Seattle, II, 411-416. Piscataway, NJ: IEEE Service Center.


37. Carpenter, G.A., Grossberg, S., and Rosen, D.B. (1991). Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system. Neural Networks, 4, 759-771.

38. Carpenter, G.A. and Ross, W.D. (1993). ART-EMAP: A neural network architecture for learning and prediction by evidence accumulation. Technical Report CAS/CNS-TR-93-015, Boston University. Submitted for publication.

39. Carpenter, G.A. and Tan, A.-H. (1993). Rule extraction, fuzzy ARTMAP, and medical databases. Technical Report CAS/CNS-TR-93-016, Boston University. Submitted for publication.

40. Cohen, M.A., Grossberg, S., and Pribe, C. (1992). A neural pattern generator that exhibits frequency-dependent in-phase and anti-phase oscillations and quadruped gait transitions. Submitted for publication.

41. Cohen, M.A., Grossberg, S., and Pribe, C. (1993). A neural pattern generator that exhibits arousal-dependent human gait transitions. Technical Report CAS/CNS-TR-93-017, Boston University. Submitted for publication.

42. Cohen, M.A., Grossberg, S., and Pribe, C. (1993). Frequency-dependent phase transitions in the coordination of human bimanual tasks. Technical Report CAS/CNS-TR-93-018, Boston University. Submitted for publication.

43. Cohen, M.A., Grossberg, S., and Pribe, C. (1993). Quadruped gait transitions from a neural pattern generator with arousal modulated interactions. Technical Report CAS/CNS-TR-93-019, Boston University. Submitted for publication.

44. Cohen, M.A., Grossberg, S., and Wyse, L. (1992). A neural network for synthesizing the pitch of an acoustic source. In Proceedings of the international joint conference on neural networks, Baltimore, IV, 649-654. Piscataway, NJ: IEEE Service Center. Technical Report CAS/CNS-TR-92-009, Boston University.

45. Cohen, M.A., Grossberg, S., and Wyse, L. (1992). A neural network model of pitch detection and representation. Submitted for publication.

46. Cruthirds, D., Gove, A., Grossberg, S., Mingolla, E., Nowak, N., and Williamson, J. (1992). Processing of synthetic aperture radar images by the Boundary Contour System and Feature Contour System. In Proceedings of the international joint conference on neural networks, Baltimore, IV, 414-417. Piscataway, NJ: IEEE Service Center. Technical Report CAS/CNS-TR-92-010, Boston University.

47. Francis, G., Grossberg, S., and Mingolla, E. (1993). Cortical dynamics of feature binding and reset: Control of visual persistence. Vision Research, in press.

48. Francis, G., Grossberg, S., and Mingolla, E. (1993). Dynamic reset of persistent visual segmentations by neural networks. Submitted for publication.

49. Gaudiano, P. and Grossberg, S. (1990). A self-regulating generator of sample-and-hold random training vectors. In M. Caudill (Ed.), Proceedings of the international joint conference on neural networks, January, II, 213-216. Hillsdale, NJ: Erlbaum Associates.

50. Gaudiano, P. and Grossberg, S. (1991). Vector associative maps: Unsupervised real-time error-based learning and control of movement trajectories. Neural Networks, 4, 147-183.

51. Gaudiano, P. and Grossberg, S. (1992). Self-organization of spatial representations and arm trajectory controllers by vector associative maps energized by cyclic random generators. In A. Babloyantz (Ed.), Self-organization, emerging properties and learning. London: Plenum Press.

52. Gaudiano, P. and Grossberg, S. (1992). Adaptive vector integration to endpoint: Self-organizing neural circuits for control of planned movement trajectories. Human Movement Science, 11.


53. Goodman, P.H., Kaburlasos, V.G., Egbert, D., Carpenter, G.A., Grossberg, S., Reynolds, J.H., Rosen, D.B., and Hartz, A.J. (1992). Fuzzy ARTMAP neural network compared to linear discriminant analysis prediction of the length of hospital stay in patients with pneumonia. In Proceedings of the IEEE international conference on systems, man, and cybernetics, I, 748-753. New York: IEEE Press.

54. Gove, A., Grossberg, S., and Mingolla, E. (1993). Brightness perception, illusory contours, and corticogeniculate feedback. Technical Report CAS/CNS-TR-93-021, Boston University. Submitted for publication.

55. Greve, D., Grossberg, S., Guenther, F., and Bullock, D. (1992). Neural representations for sensory-motor control, I: Head-centered 3-D target positions from opponent eye commands. Acta Psychologica, in press.

56. Grossberg, S. (1990). Self-organizing neural architectures for vision, learning, and robotics. In Proceedings of the international conference on neural networks (Paris). Dordrecht: Kluwer Academic Publishers.

57. Grossberg, S. (1990). Neural FACADES: Visual representations of static and moving form-and-color-and-depth. Mind and Language (special issue on Understanding Vision), 5, 411-456.

58. Grossberg, S. (1991). Why do parallel cortical systems exist for the perception of static form and moving form? Perception and Psychophysics, 49, 117-141.

59. Grossberg, S. (1991). The symmetric organization of parallel cortical systems for form and motion perception. In H. Wechsler (Ed.), Neural networks for perception, Volume 1: Human and machine perception. New York: Academic Press, pp. 64-103.

60. Grossberg, S. (1992). Cortical dynamics of 3-D vision and figure-ground pop-out. In P.L. Hjorth (Ed.), Proceedings of the conference on brain and mind. Copenhagen, Denmark: Munksgaard Export and Subscription Service.

61. Grossberg, S. (1992). Why do parallel cortical systems exist for the perception of static form and moving form? In G.A. Carpenter and S. Grossberg (Eds.), Neural networks for vision and image processing. Cambridge, MA: MIT Press, pp. 241-292.

62. Grossberg, S. (1993). A solution of the figure-ground problem for biological vision. Neural Networks, in press.

63. Grossberg, S. (1993). Self-organizing neural networks for stable control of autonomous behavior in a changing world. In J.G. Taylor (Ed.), Mathematical studies of neural networks. Amsterdam: Elsevier Science Publishers, in press.

64. Grossberg, S. and Merrill, J.W.L. (1992). A neural network model of adaptively timed reinforcement learning and hippocampal dynamics. Cognitive Brain Research, 1, 3-38.

65. Grossberg, S. and Mingolla, E. (1992). Neural dynamics of visual motion perception: Local detection and global grouping. In G.A. Carpenter and S. Grossberg (Eds.), Neural networks for vision and image processing. Cambridge, MA: MIT Press, pp. 293-342.

66. Grossberg, S. and Mingolla, E. (1993). Neural dynamics of motion perception: Direction fields, apertures, and resonant grouping. Perception and Psychophysics, in press.

67. Grossberg, S., Mingolla, E., and Ross, W.D. (1993). A neural theory of visual search: Recursive attention to segmentations and surfaces. Technical Report CAS/CNS-TR-93-023, Boston University. Submitted for publication.

68. Grossberg, S. and Rudd, M.E. (1991). A neural theory of visual motion perception. In B. Blum (Ed.), Channels in the visual nervous system: Neurophysiology, psychophysics, and models. Tel Aviv: Freund Publishing House, Ltd.


69. Grossberg, S. and Rudd, M.E. (1992). Cortical dynamics of visual motion perception: Short-range and long-range apparent motion. Psychological Review, 99, 78-121.

70. Grossberg, S. and Somers, D. (1991). Synchronized oscillations during cooperative feature linking in a cortical model of visual perception. Neural Networks, 4, 453-466.

71. Grossberg, S. and Somers, D. (1991). Synchronized oscillations during cooperative feature linking in visual cortex. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas (Eds.), Artificial neural networks, Volume 1. Amsterdam: Elsevier.

72. Grossberg, S. and Somers, D. (1991). Synchronized oscillations during cooperative feature linking in visual cortex. In Proceedings of the international joint conference on neural networks, Seattle, II. Piscataway, NJ: IEEE Service Center.

73. Grossberg, S. and Somers, D. (1992). Synchronized oscillations for binding spatially distributed feature codes into coherent spatial patterns. In G.A. Carpenter and S. Grossberg (Eds.), Neural networks for vision and image processing. Cambridge, MA: MIT Press, pp. 352-406.

74. Grossberg, S. and Wyse, L. (1991). Invariant recognition of cluttered scenes by a self-organizing ART architecture: Figure-ground separation. Neural Networks, 4.

75. Grossberg, S. and Wyse, L. (1991). Invariant recognition of cluttered scenes by a self-organizing ART architecture: Figure-ground separation. In Proceedings of the international joint conference on neural networks, Seattle, I, 633-638. Piscataway, NJ: IEEE Service Center.

76. Grossberg, S. and Wyse, L. (1992). A neural network architecture for figure-ground separation of connected scenic figures. In R.B. Pinter and B. Nabet (Eds.), Nonlinear vision: Determination of neural receptive fields, function, and networks. Boca Raton, FL: CRC Press.

77. Grossberg, S. and Wyse, L. (1992). Figure-ground separation of connected scenic figures: Boundaries, filling-in, and opponent processing. In G.A. Carpenter and S. Grossberg (Eds.), Neural networks for vision and image processing. Cambridge, MA: MIT Press, pp. 161-194.

78. Jaskolski, J. (1992). Construction of neural network classification expert systems using switching theory algorithms. In Proceedings of the international joint conference on neural networks. Technical Report CAS/CNS-TR-92-011, Boston University.

79. Rosen, D.B., Goodman, P.H., and Burke, H.B. (1993). Enhanced probabilistic neural networks and calibration of supervised learning systems. Submitted for publication.


Development of Neural Network Architectures
for Self-Organizing Pattern Recognition and Robotics

Contract AFOSR-90-0083

Principal Investigators:
Gail A. Carpenter and Stephen Grossberg

Center for Adaptive Systems
and
Department of Cognitive and Neural Systems
Boston University

SUMMARY

During the DARPA ANNT Program contract, new neural network architectures were developed to carry out autonomous real-time preprocessing, segmentation, recognition, timing, and control of both spatial and temporal inputs. This brief summary is keyed to the more extensive project descriptions below, which cite articles from the preceding list of publications.

(1) Preprocessing of visual form and motion signals: Parallel cortical streams for the processing of static visual forms and moving visual forms are derived from a principle called FM Symmetry. A solution of the global motion segmentation problem for biological vision is also outlined, as well as an analysis of 3-D vision and figure-ground pop-out (FACADE Theory). A feedforward What-and-Where filter models parallel visual systems that generate an input representation (what it is) that is invariant to position, size, and orientation, without discarding this information (where it is). Synchronized oscillations in a model of visual cortex are capable of rapidly binding spatially distributed feature detectors into a globally coherent segmentation. Another system separates scenic figures from each other and from a cluttered background. The vision models have been applied to the analysis of brightness perception, illusory contours, feature binding, and reset. The models have also been applied to the processing of synthetic aperture radar (SAR) images.

(2) Preprocessing of acoustic signals: A neural network model of the pitch of an acoustic source generates a representation of pitch as a spatial pattern that emerges via a type of neural harmonic sieve. Neural networks are also used to evaluate speaker normalization methods for vowel recognition.

(3) Adaptive pattern recognition and categorization: Unsupervised learning: An algorithmic learning system called ART 2-A achieves a two to three order of magnitude speed-up over the ART 2 recognition system. Another new analog adaptive recognition model (fuzzy ART) incorporates computations from fuzzy set theory into the binary ART 1 model. When used as part of a larger architecture for supervised learning, fuzzy ART enables the user to interpret vectors of learned adaptive weights as if-then rules, thus defining a self-organizing expert system. ART systems are also central to a neural theory of visual search and attentive segmentation, and to an analysis of normal and amnesic learning.

(4) Adaptive pattern recognition and prediction: Supervised learning: The ARTMAP and fuzzy ARTMAP architectures carry out incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary vectors. A Minimax Learning Rule conjointly minimizes predictive error and maximizes code compression, thereby optimally shaping recognition categories to the statistics of the input environment. Benchmark studies affirm ARTMAP's performance compared with alternative models from machine learning, genetic algorithms, and neural networks, in application domains such as large database analysis, medical prediction, rule extraction, and probability estimation. Fusion ARTMAP is a multi-channel ARTMAP network for multi-channel data fusion and classification. Another system constructs neural network expert systems using VLSI switching theory algorithms.

(5) Temporal patterns, working memory, and 3-D object recognition: Working memory neural networks, called Sustained Temporal Order Recurrent (STORE) models, encode the invariant temporal order of sequential events, with repeated or non-repeated items, in a manner that is stable under incremental learning conditions. Another system, ART-EMAP, uses spatial and temporal evidence accumulation to improve ARTMAP performance in noisy input environments. Both STORE and ART-EMAP systems are being applied to 3-D object recognition problems.

(6) Adaptive timing: A neural network circuit models adaptively timed recognition and reinforcement learning. The model is closely linked to circuits in the hippocampus.

(7) Adaptive control: An unsupervised error-based learning system called a Vector Associative Map (VAM) learns 3-D spatial representations and self-calibrating trajectory controllers in robotics applications. A model of sensory-motor control shows how outflow eye movement commands can be transformed by two stages of opponent processing into a head-centered spatial representation of 3-D target position. Opponent processing is also a key element in an analysis of arm movement data. Analysis of sensory-motor systems frames a model of cerebellar learning. A model of motor oscillations explains bimanual coordination and human and quadruped gait transitions. Another system models handwriting production, including cursive script. Related model properties are used in an application to optimal control of machine set-up scheduling.

These and related projects, including model development, analysis, simulation, and comparisons with behavioral and neural data, are described below.

The contract has provided partial summer salary for the two Principal Investigators and support for four Research Assistants, all of whom are PhD students in the Boston University Department of Cognitive and Neural Systems.


1. PREPROCESSING OF VISUAL FORM AND MOTION SIGNALS

(1A) Why do parallel cortical systems exist for the perception of static form and moving form?

This project analyses computational principles that clarify why parallel cortical systems V1 → V2, V1 → MT, and V1 → V2 → MT exist for the perceptual processing of static visual forms and moving visual forms. A symmetry principle, called FM Symmetry, is predicted to govern the development of these parallel cortical systems and to compute all possible ways of symmetrically gating sustained cells with transient cells and organizing these sustained-transient cells into opponent pairs of on-cells and off-cells whose output signals are insensitive to direction-of-contrast. This symmetric organization explains how the static form system (Static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast and insensitive to direction-of-motion, whereas the motion form system (Motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast but sensitive to direction-of-motion. FM Symmetry clarifies why the geometries of static and motion form perception differ: for example, why the opposite orientation of vertical is horizontal (90°), but the opposite direction of up is down (180°). Opposite orientations and directions are embedded in gated dipoles, opponent processes that are capable of antagonistic rebound. Negative afterimages, such as the MacKay and waterfall illusions, are hereby explained, as are aftereffects of long-range apparent motion. These antagonistic rebounds help to control a dynamic balance between complementary perceptual states of resonance and reset. Resonance cooperatively links features into emergent boundary segmentations via positive feedback in a CC Loop, and reset terminates a resonance when the image changes, thereby preventing massive smearing of percepts. These complementary preattentive states of resonance and reset are related to analogous states that govern attentive feature integration, learning, and memory search in Adaptive Resonance Theory. The mechanism used in the V1 → MT system to generate a wave of apparent motion between discrete flashes may also be used in other cortical systems to generate spatial shifts of attention. The theory suggests how the V1 → V2 → MT stream helps to compute moving-form-in-depth and how long-range apparent motion of illusory contours occurs. These results collectively argue against vision theories that espouse independent processing modules. Instead, specialized subsystems interact to overcome computational uncertainties and complementary deficiencies, to cooperatively bind features into context-sensitive resonances, and to realize symmetry principles that are predicted to govern the development of visual cortex. [56-59, 61]
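The gated dipole opponent circuit mentioned above can be illustrated with a small simulation. The sketch below assumes standard habituating-transmitter dynamics (dz/dt = a(1 - z) - b·s·z) and hypothetical parameter values; it is illustrative, not the report's own equations. At the offset of the phasic input, the depleted on-channel gate undershoots and the fresher off-channel transiently wins, producing the antagonistic rebound that explains negative afterimages.

import numpy as np

def gated_dipole(J, I=1.0, a=0.05, b=0.5, dt=0.1):
    """Opponent on/off channels with habituating transmitter gates."""
    z_on, z_off = 1.0, 1.0                 # transmitter levels, fully rested
    on_out, off_out = [], []
    for j in J:
        s_on, s_off = I + j, I             # tonic arousal I plus phasic input J
        z_on += dt * (a * (1.0 - z_on) - b * s_on * z_on)
        z_off += dt * (a * (1.0 - z_off) - b * s_off * z_off)
        g_on, g_off = s_on * z_on, s_off * z_off
        on_out.append(max(g_on - g_off, 0.0))   # rectified opponent outputs
        off_out.append(max(g_off - g_on, 0.0))
    return np.array(on_out), np.array(off_out)

J = np.concatenate([np.ones(300), np.zeros(300)])  # input on, then off
on, off = gated_dipole(J)
print("peak ON response during input:", on[:300].max())
print("peak OFF rebound after offset:", off[300:].max())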

(1B) Cortical dynamics of visual motion perception: Short-range and long-range apparent motion

The theory of biological motion perception is also used to explain classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include: beta motion; split motion; gamma motion and reverse-contrast gamma motion; delta motion; visual inertia; the transition from group motion to element motion in response to a Ternus display as the interstimulus interval (ISI) decreases; group motion in response to a reverse-contrast Ternus display even at short ISIs; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, ISI, and motion threshold known as Korte's Laws; dependence of motion strength on stimulus orientation and spatial frequency; short-range and long-range form-color interactions; and binocular interactions of flashes to different eyes. [68-69]


(1C) Neural dynamics of global motion segmentation: Direction fields, apertures, and resonant grouping

This project has developed a neural network model of global motion segmentation by visual cortex. Called the Motion Boundary Contour System (Motion BCS), the model describes how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture and induced motion. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 → V2 and V1 → MT are made. The Motion BCS can compute motion directions that are synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles (direction disambiguation, induced motion) for end stopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions. [65-66]

(1D) Brightness perception, illusory contours, and corticogeniculate feedback

A neural network model of early visual processing offers an explanation of brightness effects often associated with illusory contours. Top-down feedback from the model's analog of visual cortical complex cells to model lateral geniculate nucleus (LGN) cells is used to enhance contrast at line ends and other areas of boundary discontinuity. The result is an increase in perceived brightness outside a dark line end, akin to what Kennedy (1979) termed "brightness buttons" in his analysis of visual illusions. When several lines form a suitable configuration, as in an Ehrenstein pattern, the perceptual effect of enhanced brightness can be quite strong. Model simulations show the generation of brightness buttons. With the LGN model circuitry embedded in a larger model of preattentive vision, simulations using complex inputs show the interaction of the brightness buttons with real and illusory contours. [54]

(1E) A what-and-where neural network for invariant image preprocessing

The What-and-Where filter is a feedforward neural network for invariant image preprocessing that represents the position, orientation, and size of an image figure (where it is) in a multiplexed spatial map. This map is used to generate an invariant representation of the figure that is insensitive to position, orientation, and size for purposes of pattern recognition (what it is). A multiscale array of oriented filters, followed by competition between orientations and scales, is used to define the Where filter. [23]
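A minimal sketch of the Where stage is given below, under assumptions not stated in the report: the oriented filters are odd-symmetric pairs of offset Gaussians, and the competition between positions, orientations, and scales is a simple global winner-take-all. The winning (position, orientation, scale) triple is the kind of multiplexed estimate that could then drive a compensating shift, rotation, and zoom to produce the invariant What representation.

import numpy as np
from scipy.ndimage import convolve

def oriented_kernel(theta, scale, size=21):
    """Odd-symmetric oriented kernel: two offset Gaussian lobes along theta."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    u = x * np.cos(theta) + y * np.sin(theta)     # coordinate along theta
    v = -x * np.sin(theta) + y * np.cos(theta)    # coordinate across theta
    g = lambda c: np.exp(-((u - c) ** 2 + v ** 2) / (2.0 * scale ** 2))
    k = g(scale) - g(-scale)
    return k / np.abs(k).sum()

def where_filter(image, n_orient=8, scales=(2.0, 4.0, 8.0)):
    """Winner-take-all over positions, orientations, and scales."""
    best_val, best_cfg = -np.inf, None
    for theta in np.arange(n_orient) * np.pi / n_orient:
        for s in scales:
            resp = np.abs(convolve(image, oriented_kernel(theta, s)))
            idx = np.unravel_index(int(np.argmax(resp)), resp.shape)
            if resp[idx] > best_val:
                best_val, best_cfg = resp[idx], (idx, theta, s)
    return best_cfg

img = np.zeros((64, 64))
img[30:34, 10:50] = 1.0                           # a horizontal bar
pos, theta, scale = where_filter(img)
print(pos, np.degrees(theta), scale)              # edge position, ~90 degrees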

(1F) Figure-ground separation of connected scenic figures: Boundaries, filling-in, and opponent processing

A neural network model performs automatic parallel separation of connected scenic figures from one another and from their backgrounds. The model is part of a self-organizing architecture for invariant pattern recognition in a cluttered environment. The figure-ground separation process iterates operations from a Feature Contour System (FCS) and a Boundary Contour System (BCS). The FCS discounts the illuminant and fills in surface properties, such as brightness and color, within the compartments defined by BCS boundaries. A key insight is to use filling-in for figure-ground separation, with boundaries defining the regions in which filling-in occurs. The BCS is modeled by a multistage filter network, called the CORT-X 2 filter, that combines oriented receptive fields with multiple competitive and cooperative interactions to detect, regularize, and complete boundaries in up to 50% analog noise. This filter combines the complementary properties of large receptive fields and small receptive fields, and of on-cells and off-cells, to generate sharper, more accurate, and less noisy boundaries. Double opponent interactions of on-cells and off-cells facilitate separation of figures with incomplete CORT-X boundaries. The resulting FBF network can rapidly separate figures that humans cannot separate in visual search tasks. [74-77]

(1G) Cortical dynamics of 3-D vision and figure-ground pop-out

A neural model, called FACADE Theory (Form-And-Color-And-DEpth), suggests how the parvocellular processing streams from LGN to V4 can give rise to 3-D visual percepts in which figures pop out from their backgrounds. The model suggests how cortical mechanisms such as boundary completion, texture segregation, and surface filling-in are used for this purpose. It clarifies how monocularly viewed parts of an image may fill in the appropriate surface depth from contiguous binocularly viewed parts during DaVinci stereopsis. Other explanations include how a 2-D image may generate a 3-D percept; how spatially sparse disparity cues can generate continuous surface representations at different perceived depths; and how representations of occluded regions can be completed and recognized without usually being seen. The model has also been used to analyse data about such varied phenomena as illusory contours, shape-from-texture, visual persistence, and synchronous cortical oscillations. It suggests how the brain uses computationally complementary blob and interblob processing streams wherein a hierarchical resolution of uncertainty generates context-sensitive visual representations that overcome several different sources of local ambiguity in visual data. Cortical circuits of simple, complex, hypercomplex, and bipole cells are simulated. Recent psychophysical and neurobiological data relevant to model explanations and predictions are summarized. [60, 62]

(1H) Synchronized oscillations for binding spatially distributed feature codes into coherent spatial patterns

Neural network models are described for binding out-of-phase activations of spatially distributed cells into synchronized oscillations within a single processing cycle. These results suggest how the brain may overcome the temporal jitter inherent in multi-level processing of spatially distributed data. Coherent synchronous patterns of spatially distributed features are formed to represent and learn about external objects and events. Temporal jitter thus does not typically cause scenic parts to be combined into the wrong visual objects. During preattentive vision, such patterns may represent emergent boundary segmentations, including illusory contours. During attentive visual object recognition, such patterns may occur during an attentive resonant state that triggers new learning. Differences between preattentive and attentive oscillations are predicted, and compared with neurophysiological data concerning rapid synchronization of cell activations in visual cortex. [70-73]
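The cited models [70-73] achieve synchronization with cooperatively coupled neural oscillators; the sketch below substitutes a generic Kuramoto-style phase model as a stand-in illustration of the same qualitative point: mutual coupling pulls units with jittered phases into synchrony within a few cycles, while uncoupled units stay incoherent.

import numpy as np

def order_parameter(n=20, K=2.0, steps=2000, dt=0.01, seed=0):
    """Simulate n coupled phase oscillators; return final coherence r."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)   # jittered initial phases
    omega = rng.normal(10.0, 0.1, n)           # similar intrinsic frequencies
    for _ in range(steps):
        # mean-field coupling: each unit is pulled toward the others' phases
        pull = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta += dt * (omega + K * pull)
    return np.abs(np.exp(1j * theta).mean())   # r = 1 is perfect synchrony

print("coherence with coupling:   ", order_parameter(K=2.0))
print("coherence without coupling:", order_parameter(K=0.0))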

(1I) Cortical dynamics of feature binding and reset

Many properties of visual persistence are hypothesized to be caused by positive feedback in the visual cortical circuits that are responsible for the binding or segmentation of visual features into coherent visual forms, with the degree of persistence limited by circuits that reset these segmentations at stimulus offset. A model of the cortical local circuitry responsible for such feature binding and reset quantitatively simulates psychophysical data on the increase of persistence with the spatial separation of stimuli; the inverse relation of persistence to flash luminance and duration; the persistence of illusory contours, with maximal persistence at an intermediate stimulus duration; and the dependence of persistence on pre-adapted stimulus orientation. Data concerning cortical cell responses to illusory and real contours are also analysed, as are alternative models of feature binding and persistence properties. [47-48]

(1J) Processing of synthetic aperture radar (SAR) images by the BCS/FCS system

An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision has been applied to two large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone. [46]
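The shunting center-surround operation named above has a standard steady-state form, used here as a hedged sketch (parameter values and kernel widths are assumptions, not the report's): with on-center input C and off-surround input S, the membrane equation dx/dt = -Ax + (B - x)C - (D + x)S settles at x = (BC - DS)/(A + C + S), which compresses dynamic range divisively while enhancing local contrast.

import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(image, A=1.0, B=1.0, D=1.0,
                             sigma_center=1.0, sigma_surround=5.0):
    """Steady-state shunting on-center off-surround network."""
    C = gaussian_filter(image, sigma_center)    # on-center input
    S = gaussian_filter(image, sigma_surround)  # off-surround input
    return (B * C - D * S) / (A + C + S)        # output bounded in (-D, B)

# Usage on a placeholder array standing in for a SAR range image:
sar_like = np.random.default_rng(1).gamma(2.0, 50.0, size=(128, 128))
enhanced = shunting_center_surround(sar_like)
print(enhanced.min(), enhanced.max())           # compressed dynamic range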



"2. PREPROCESSING OF ACOUSTIC SIGNALS

(2A) A neural network model of pitch detection and representation

A neural network model capable of generating a spatial representation of the pitch of an acoustic source has been developed. The model, called the Spatial Pitch Network, uses a "harmonic sieve" mechanism whereby the strength of activation of a given pitch depends upon a weighted sum of narrow regions around the harmonics of the nominal pitch value. A key feature of the model is that higher harmonics contribute less to a pitch than lower ones. Suitably chosen harmonic weighting functions enable computer simulation of pitch perception data involving mistuned components, shifted harmonics, and various types of continuous spectra including rippled noise. It is shown how the weighting functions produce the dominance region and how they lead to octave shifts of pitch in response to ambiguous stimuli. No explicit attentional window is needed to limit pitch choices by the model. A method is described for relating the deterministic strength-of-activation pitch choice to statistical human performance and for comparing the network model with Goldstein's statistical Optimum Processor Theory. [44-45]
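A minimal sketch of a harmonic-sieve pitch strength computation follows; the sieve width and the 1/k harmonic weighting are hypothetical stand-ins for the model's actual weighting functions. Each candidate pitch accumulates spectral amplitude falling in narrow slots around its harmonics, with higher harmonics weighted less.

import numpy as np

def pitch_strength(freqs, amps, f0, n_harmonics=10, rel_width=0.03):
    """Weighted sum of component amplitudes near the harmonics of f0."""
    strength = 0.0
    for k in range(1, n_harmonics + 1):
        h = k * f0
        in_slot = np.abs(freqs - h) < rel_width * h   # narrow sieve slot
        strength += (1.0 / k) * amps[in_slot].sum()   # weaker high harmonics
    return strength

# A 200 Hz complex tone whose 3rd harmonic is mistuned to 618 Hz:
freqs = np.array([200.0, 400.0, 618.0, 800.0, 1000.0])
amps = np.ones_like(freqs)
candidates = np.arange(100.0, 400.0, 1.0)
s = [pitch_strength(freqs, amps, f0) for f0 in candidates]
print(candidates[int(np.argmax(s))])   # ~201 Hz: pitch pulled by the mistuning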

(2B) Evaluation of speaker normalization methods for vowel recognition using fuzzy ARTMAP and K-NN

Fuzzy ARTMAP and K-Nearest Neighbor (K-NN) categorizers are used to evaluate intrinsic and extrinsic speaker normalization methods. Each classifier is trained on preprocessed, or normalized, vowel tokens from about 30% of the speakers of the Peterson-Barney database, then tested on data from the remaining speakers. Intrinsic normalization methods include one nonscaled, four psychophysical scales (bark, bark with end-correction, mel, ERB), and three log scales, each tested on four different combinations of the fundamental (F0) and the formants (F1, F2, F3). For each scale and frequency combination, four extrinsic speaker adaptation schemes are tested: centroid subtraction across all frequencies (CS), centroid subtraction for each frequency (CSi), linear scale (LS), and linear transformation (LT). A total of 32 intrinsic and 128 extrinsic methods are thus compared. Fuzzy ARTMAP and K-NN show similar trends, with K-NN performing somewhat better and fuzzy ARTMAP requiring about 1/10 as much memory. The optimal intrinsic normalization method is bark scale, or bark with end-correction, using the differences between all frequencies (Diff All). The order of performance for the extrinsic methods is LT, CSi, LS, and CS, with fuzzy ARTMAP performing best using bark scale with Diff All, and K-NN choosing psychophysical measures for all except CSi. [12]
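Two of the extrinsic schemes named above can be sketched directly. The assumption that each token is a row vector of frequency values (F0, F1, F2, F3), and the example values, are illustrative rather than taken from the Peterson-Barney database.

import numpy as np

def normalize_cs(tokens):
    """CS: subtract one grand centroid across all of a speaker's frequencies."""
    return tokens - tokens.mean()

def normalize_csi(tokens):
    """CSi: subtract a separate centroid for each frequency channel."""
    return tokens - tokens.mean(axis=0)

# rows = vowel tokens of one speaker; columns = (F0, F1, F2, F3) in Hz
tokens = np.array([[120.0, 730.0, 1090.0, 2440.0],
                   [118.0, 270.0, 2290.0, 3010.0]])
print(normalize_cs(tokens))
print(normalize_csi(tokens))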



3. ADAPTIVE PATTERN RECOGNITION AND CATEGORIZATION: UNSUPERVISED LEARNING

(3A) Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system

A fuzzy Adaptive Resonance Theory (ART) model capable of rapid stable learning of recognition categories in response to arbitrary sequences of analog or binary input patterns has been developed. Fuzzy ART incorporates computations from fuzzy set theory into the ART 1 neural network, which learns to categorize only binary input patterns. The generalization to learning both analog and binary input patterns is achieved by replacing appearances of the intersection operator (∩) in ART 1 by the MIN operator (∧) of fuzzy set theory. The MIN operator reduces to the intersection operator in the binary case. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy set theory play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Learning stops when the input space is covered by boxes. With fast learning and a finite input set of arbitrary size and composition, learning stabilizes after just one presentation of each input pattern. A fast-commit slow-recode option combines fast learning with a forgetting rule that buffers system memory against noise. Using this option, rare events can be rapidly learned, yet previously learned memories are not rapidly erased in response to statistically unreliable input fluctuations. [36-37]
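The fuzzy ART operations described above (complement coding, MIN-based category choice, vigilance matching, and fast learning) fit in a short sketch. Parameter names follow the cited papers, but this compact implementation is illustrative rather than the report's own code.

import numpy as np

def complement_code(a):
    """Normalize an input in [0,1]^d by complement coding: I = (a, 1 - a)."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []                              # one weight vector per category

    def train(self, a):
        I = complement_code(a)
        # Choice function T_j = |I ^ w_j| / (alpha + |w_j|), ^ = fuzzy MIN
        T = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(T)[::-1]:            # search categories by T_j
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:                # vigilance passed: resonance
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        self.w.append(I.copy())                  # all reset: commit new category
        return len(self.w) - 1

net = FuzzyART()
for a in [[0.10, 0.20], [0.15, 0.25], [0.80, 0.90], [0.85, 0.80]]:
    print(net.train(a))                          # two clusters -> categories 0, 1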

(3B) ART 2-A: An adaptive resonance algorithm for rapid category learning and recognition

ART 2-A is an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation. In particular, researchers using ART 2 for applications in the DARPA ANNT Program have used ART 2-A for their projects. [34-35]
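The fast-commit slow-recode behavior can be sketched as below. This simplification omits the contrast-enhancement threshold of the published algorithm and uses assumed parameter values; it illustrates fast commitment (a new unit copies the input) and slow recoding (a committed unit moves a fraction beta toward the input).

import numpy as np

def art2a_train(inputs, rho=0.9, beta=0.1):
    """Unit-normalized inputs; dot-product choice; cosine vigilance match."""
    cats, labels = [], []
    for x in inputs:
        I = x / np.linalg.norm(x)
        if cats:
            T = np.array([w @ I for w in cats])
            J = int(np.argmax(T))
            if T[J] >= rho:                  # resonance: slow recode
                w = (1.0 - beta) * cats[J] + beta * I
                cats[J] = w / np.linalg.norm(w)
                labels.append(J)
                continue
        cats.append(I.copy())                # fast commitment of a new node
        labels.append(len(cats) - 1)
    return labels

print(art2a_train(np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0]])))  # [0, 0, 1]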

(3C) A neural theory of visual search

A neural theory is proposed in which visual search is accomplished by perceptual grouping and segregation, which occurs simultaneously across the visual field, and object recognition, which is restricted to a selected region of the field. The theory offers an alternative hypothesis to recently developed variations on Feature Integration Theory (Treisman and Sato, 1991) and the Guided Search Model (Wolfe, Cave, and Franzel, 1989). A neural architecture and search algorithm is specified that quantitatively explains a wide range of psychophysical search data (Cohen and Ivry, 1991; Mordkoff, Yantis, and Egeth, 1990; Treisman and Sato, 1991; Wolfe, Cave, and Franzel, 1989). [67]


(3D) Normal and amnesic learning, recognition, and memory by a neural model of cortico-hippocampal interactions

The processes by which humans and other primates learn to recognize objects have been the subject of many models. Processes such as learning, categorization, attention, memory search, expectation, and novelty detection work together at different stages to realize object recognition. The structure and function of one such model class (Adaptive Resonance Theory, ART) are related to known neurological learning and memory processes, such as how inferotemporal cortex can recognize both specialized and abstract information, and how medial temporal amnesia may be caused by lesions in the hippocampal formation. The model also suggests how hippocampal and inferotemporal processing may be linked during recognition learning. [20]


4. ADAPTIVE PATTERN RECOGNITION AND PREDICTION: SUPERVISED LEARNING

(4A) ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network

A neural network architecture, called ARTMAP, autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a(p)} of input patterns, and ARTb receives a stream {b(p)} of input patterns, where b(p) is the correct prediction given a(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a(p) are presented without b(p), and their predictions at ARTb are compared with b(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized. [28-32]
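
The match tracking computation can be sketched in Python as follows. The helper below is a simplified reading of one training step on a complement-coded input I, with MIN-based matching; the names and default values are ours rather than the report's.

    import numpy as np

    def match_tracking_step(I, W, preds, target, rho_base=0.0, eps=1e-6):
        """One ARTMAP-style training step (sketch).
        W      -- list of ART_a category weight vectors
        preds  -- preds[j] is the class currently predicted by category j
        target -- the correct class supplied by the ART_b side
        On a predictive error, vigilance is raised just above the match of
        the offending category, forcing a deeper memory search."""
        rho = rho_base
        M = I.sum()
        order = np.argsort([np.minimum(I, w).sum() / (1e-3 + w.sum())
                            for w in W])[::-1]
        for j in order:
            match = np.minimum(I, W[j]).sum() / M
            if match < rho:
                continue                        # fails vigilance, skip
            if preds[j] == target:              # correct prediction: learn
                W[j] = np.minimum(I, W[j])
                return j, rho
            rho = match + eps                   # match tracking: minimal vigilance raise
        W.append(I.copy())                      # no category worked: commit a new one
        preds.append(target)
        return len(W) - 1, rho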

(4B) Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps

Fuzzy ARTMAP extends the capabilities of ARTMAP to carry out incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors. A normalization procedure called complement coding leads to a symmetric theory in which the AND operator (∧) and the OR operator (∨) of fuzzy logic play complementary roles. Improved prediction is achieved by training the system several times using different orderings of the input set. This voting strategy can also be used to assign confidence estimates to competing predictions given small, noisy, or incomplete training sets. Four classes of simulations illustrate fuzzy ARTMAP performance as compared to benchmark back propagation and genetic algorithm systems. These simulations include (i) finding points inside vs. outside a circle; (ii) learning to tell two spirals apart; (iii) incremental approximation of a piecewise continuous function; and (iv) a letter recognition database. [19, 21, 24-27]
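
The voting strategy itself is simple to state in code. The sketch below trains one copy of an order-dependent learner per input ordering and reports the majority prediction together with the vote fraction as a confidence estimate. TinySupervisedART is a deliberately simplified stand-in for fuzzy ARTMAP (integer class labels assumed), not the published algorithm.

    import numpy as np

    class TinySupervisedART:
        """Stand-in, order-dependent prototype learner (NOT full fuzzy ARTMAP)."""
        def __init__(self, rho=0.8):
            self.rho, self.W, self.y = rho, [], []
        def fit(self, X, y):
            for a, t in zip(X, y):
                I = np.concatenate([a, 1 - a])          # complement coding
                placed = False
                order = np.argsort([np.minimum(I, w).sum() for w in self.W])[::-1]
                for j in order:
                    match = np.minimum(I, self.W[j]).sum() / I.sum()
                    if match >= self.rho and self.y[j] == t:
                        self.W[j] = np.minimum(I, self.W[j])   # refine matching prototype
                        placed = True
                        break
                if not placed:
                    self.W.append(I.copy())             # commit a new prototype
                    self.y.append(t)
            return self
        def predict(self, X):
            labels = []
            for a in X:
                I = np.concatenate([a, 1 - a])
                scores = [np.minimum(I, w).sum() for w in self.W]
                labels.append(self.y[int(np.argmax(scores))])
            return np.array(labels)

    def vote(X_train, y_train, X_test, n_voters=5, seed=0):
        """Train one learner per input ordering; the majority gives the
        prediction and the vote fraction gives a confidence estimate."""
        rng = np.random.default_rng(seed)
        preds = []
        for _ in range(n_voters):
            idx = rng.permutation(len(X_train))         # a different ordering per voter
            preds.append(TinySupervisedART()
                         .fit(X_train[idx], y_train[idx])
                         .predict(X_test))
        preds = np.stack(preds)
        majority = np.array([np.bincount(col).argmax() for col in preds.T])
        confidence = np.array([np.mean(col == m)
                               for col, m in zip(preds.T, majority)])
        return majority, confidence

    X = np.random.default_rng(2).random((60, 2))
    y = (X[:, 0] > X[:, 1]).astype(int)
    labels, conf = vote(X[:40], y[:40], X[40:])
    print(labels[:5], conf[:5])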


(4C) Fusion ARTMAP: A neural network architecture for multi-channel data fusion and classification

Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Single-channel fusion ARTMAP is functionally equivalent to fuzzy ART during unsupervised learning and to fuzzy ARTMAP during supervised learning. The network has a symmetric organization such that each channel can be dynamically configured to serve either as a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking hereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network. Fusion ARTMAP's multi-channel coding is illustrated by simulations of the Quadruped Mammal database. [1]

(4D) Comparative performance measures of fuzzy ARTMAP, learned vector quantization, and back propagation for handwritten character recognition

A simulation study compares the performance of fuzzy ARTMAP with that of learned vector quantization and back propagation on a handwritten character recognition task. Training with fuzzy ARTMAP to a fixed criterion uses many fewer epochs. Voting with fuzzy ARTMAP yields the highest recognition rates. [22]

(4E) Rule extraction by fuzzy ARTMAP

Knowledge, in the form of fuzzy rules, can be derived from a self-organizing supervised learning neural network called fuzzy ARTMAP. Rule extraction proceeds in two stages: pruning removes those recognition nodes whose confidence index falls below a selected threshold, and a quantization of continuous learned weights allows the final system state to be translated into a usable set of rules. Simulations on a medical prediction problem, the Pima Indian Diabetes (PID) database, illustrate the method. In the simulations, pruned networks about one-third the size of the original actually show improved performance. Quantization yields comprehensible rules with only slight degradation in test set prediction performance. [39]
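
The two-stage procedure can be sketched directly. In the Python fragment below the confidence index is assumed to be supplied (in the report it is derived from node usage and accuracy), and weights in [0, 1] are quantized to three verbal levels; all names and thresholds are illustrative.

    import numpy as np

    def extract_rules(W, conf, n_levels=3, threshold=0.5, feature_names=None):
        """Two-stage rule extraction sketch: prune low-confidence category
        nodes, then quantize each surviving weight to a few verbal levels.
        W    -- array (n_nodes, n_features) of learned weights in [0, 1]
        conf -- confidence index per node, in [0, 1]"""
        levels = ["low", "medium", "high"][:n_levels]
        rules = []
        for w, c in zip(W, conf):
            if c < threshold:
                continue                                   # stage 1: prune this node
            q = np.minimum((w * n_levels).astype(int), n_levels - 1)   # stage 2: quantize
            names = feature_names or [f"x{i}" for i in range(len(w))]
            rules.append(" AND ".join(f"{n} is {levels[k]}"
                                      for n, k in zip(names, q)))
        return rules

    W = np.array([[0.9, 0.2], [0.1, 0.5], [0.4, 0.95]])
    conf = np.array([0.8, 0.3, 0.7])
    for r in extract_rules(W, conf, feature_names=["glucose", "age"]):
        print("IF", r)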

(4F) Medical database analysis and survival prediction by neural networks

Fuzzy ARTMAP has been used for analysis of medical databases, with comparative studies of other neural networks and statistical methods. Survival prediction networks have been derived from large data sets for breast cancer, cardiac bypass surgery (CABG), and pneumonia patients. Ongoing studies focus on problems of missing data and rule identification. [11, 73]

(4G) Fuzzy ARTMAP, slow learning, and probability estimation

A nonparametric probability estimation procedure uses the fuzzy ARTMAP neural network. Because the procedure does not make a priori assumptions about underlying probability distributions, it yields accurate estimates on a wide variety of prediction tasks. Fuzzy ARTMAP is used to perform probability estimation in two different modes. In a "slow-learning" mode, input-output associations change slowly, with the strength of each association computing a conditional probability estimate. In "max-nodes" mode, a fixed number of categories are coded during an initial fast learning interval, and weights are then tuned by slow learning. Simulations illustrate system performance on tasks in which various numbers of clusters in the set of input vectors are mapped to a given class. [33]
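
Why slow learning yields probability estimates can be seen in a few lines: a small learning rate makes each category-to-class association an exponential average of the class labels it observes, which converges toward the conditional class probabilities. A minimal sketch (rates and counts are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    n_classes, eta = 2, 0.05
    w = np.full(n_classes, 1.0 / n_classes)   # association strengths from one category node

    # Suppose inputs reaching this category node belong to class 1 with
    # probability 0.7; slow updates make w track the conditional probabilities.
    for _ in range(5000):
        k = rng.choice(n_classes, p=[0.3, 0.7])
        target = np.eye(n_classes)[k]
        w += eta * (target - w)               # slow learning: exponential average

    print(w)   # ~ [0.3, 0.7]: each weight estimates P(class | category)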


(4H) Probabilistic neural networks and calibration of supervised learning systems

Probabilistic, or general regression, neural networks have been developed for the calibration of supervised learning systems. A training process learns receptive field width parameters and calibrates predictions to reflect binary outcome probabilities. [79]
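
A minimal sketch of the general regression idea follows, with a fixed Gaussian receptive field width standing in for the learned width parameters mentioned above; the data and parameter values are illustrative.

    import numpy as np

    def grnn_probability(x, X_train, y_train, sigma=0.3):
        """General regression (Nadaraya-Watson) estimate of a binary outcome
        probability: a kernel-weighted average of training outcomes.
        sigma is the receptive field width, fixed here by assumption."""
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian receptive fields
        return float(k @ y_train / k.sum())       # estimate of P(y = 1 | x)

    rng = np.random.default_rng(0)
    X = rng.random((200, 2))
    y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0.5).astype(float)
    print(grnn_probability(np.array([0.8, 0.5]), X, y))   # high-probability region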

(4I) Construction of neural network expert systems using switching theory

This project introduces a new family of neural network architectures (NEXsT) that use switching theory to construct and train minimal neural network classification expert systems. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in a neural network expert system can be recast into the problem of minimizing the number of digital gates and the number of connections between digital gates in a Very Large Scale Integrated (VLSI) circuit. Algorithms for minimizing the number of gates and the number of connections between gates in VLSI circuits are used, with some modification, to generate minimal neural network classification expert systems. The minimal set of rules that the neural network generates to perform a task is readily extractable from the network's weights and topology. Analysis and simulations on several databases illustrate the system's performance. [78]
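
The recasting can be demonstrated with any off-the-shelf Boolean minimizer. The fragment below uses sympy's Quine-McCluskey-based SOPform (our choice of tool, not the report's) to reduce a small truth table to a minimal sum-of-products, i.e., a minimal rule set.

    from sympy import symbols
    from sympy.logic import SOPform

    # Treat each candidate antecedent as a Boolean variable and the desired
    # classifications as a truth table; gate-count minimization then returns
    # a minimal sum-of-products, i.e., a minimal rule set.
    a, b, c = symbols("a b c")
    minterms = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]   # rows classified 1
    rules = SOPform([a, b, c], minterms)
    print(rules)   # (a & b) | (a & c) | (b & c): three 2-antecedent rules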


5. TEMPORAL PATTERNS, WORKING MEMORY, AND 3-D OBJECT RECOGNITION

(5A) Working memory networks for learning temporal order with application to 3-D visual object recognition

Working memory neural networks, called Sustained Temporal Order REcurrent (STORE) models, encode the invariant temporal order of sequential events in short-term memory (STM). Inputs to the networks may be presented with widely differing growth rates, amplitudes, durations, and interstimulus intervals without altering the stored STM representation. The STORE temporal order code is designed to enable groupings of the stored events to be stably learned and remembered in real time, even as new events perturb the system. Such invariance and stability properties are needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on a model of Seibert and Waxman, which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor → ART 2 → STORE Model → ART 2 → Outstar Network. [3-4]
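
The invariance property can be conveyed by a deliberately simplified sketch: activities change only at item onsets, so the stored pattern depends on serial order alone, not on presentation timing. The published STORE models use shunting competitive dynamics; the toy rule below (distinct items assumed) keeps only the order-invariance idea.

    import numpy as np

    def store_gradient(sequence, n_items, gamma=0.8):
        """Toy STORE-style working memory gradient.
        Updates occur only at item onsets, so the stored pattern is invariant
        to presentation rates and interstimulus intervals; gamma < 1 yields a
        primacy gradient (earlier items stronger), gamma > 1 a recency gradient."""
        x = np.zeros(n_items)
        strength = 1.0
        for item in sequence:          # items arrive in temporal order
            x[item] = strength
            strength *= gamma          # each later item stored at scaled strength
        return x / x.sum()             # normalized STM pattern encodes relative order

    print(store_gradient([2, 0, 3], 4))   # same pattern at any presentation speed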

(5B) Working memories for storage and recall of arbitrary temporal sequences

An extension of the STORE model encodes a working memory capable of storing and recalling arbitrary temporal sequences of events, including repeated items. The memory encodes the invariant temporal order of sequential events that may be presented at widely differing speeds, durations, and interstimulus intervals. This temporal order code is designed to enable all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. [5-6]

(5C) ART-EMAP: Learning and prediction with spatial and temporal evidence accumulation

ART-EMAP is a neural architecture that uses spatial and temporal evidence accumulation to extend the capabilities of fuzzy ARTMAP. ART-EMAP combines supervised and unsupervised learning and a medium-term memory process to accomplish stable pattern category recognition in a noisy input environment. The ART-EMAP system features (i) distributed pattern registration at a view category field; (ii) a decision criterion for mapping between view and object categories which can delay categorization of ambiguous objects and trigger an evidence accumulation process when faced with a low confidence prediction; (iii) a process that accumulates evidence at a medium-term memory (MTM) field; and (iv) an unsupervised learning algorithm to fine-tune performance after a limited initial period of supervised network training. ART-EMAP dynamics are illustrated with a benchmark simulation example. Applications include 3-D object recognition from a series of ambiguous 2-D views. [38]
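
The temporal evidence accumulation step can be sketched as follows; the distributed view predictions, the MTM update (simple summation), and the decision threshold are illustrative assumptions rather than the published equations.

    import numpy as np

    def accumulate_and_decide(view_predictions, threshold=0.45):
        """Sketch of ART-EMAP-style temporal evidence accumulation.
        view_predictions -- rows of distributed object-class evidence, one per
        2-D view (assumed already produced by the view-category stage).
        Evidence is summed in a medium-term memory (MTM) field; a decision
        is made only once the leading class clearly dominates."""
        mtm = np.zeros(view_predictions.shape[1])
        for t, p in enumerate(view_predictions, start=1):
            mtm += p                                   # accumulate evidence in MTM
            belief = mtm / mtm.sum()
            if belief.max() >= threshold:              # confident enough to predict
                return int(belief.argmax()), t
        return int(mtm.argmax()), len(view_predictions)   # forced choice at the end

    views = np.array([[0.40, 0.35, 0.25],   # ambiguous single views...
                      [0.50, 0.30, 0.20],
                      [0.55, 0.25, 0.20]])
    print(accumulate_and_decide(views))     # ...but accumulated evidence favors class 0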


6. ADAPTIVE TIMING

(6A) A neural network model of adaptively timed reinforcement learning and hippocampal dynamics

A new neural network models adaptively timed reinforcement learning. The adaptive timing circuit is suggested to exist in the hippocampus, and to involve convergence of dentate granule cells on CA3 pyramidal cells, and NMDA receptors. This circuit forms part of a model neural system for the coordinated control of recognition learning, reinforcement learning, and motor learning, whose properties clarify how an animal can learn to acquire a delayed reward. Behavioral and neural data are summarized in support of each processing stage of the system. The relevant anatomical sites are in thalamus, neocortex, hippocampus, hypothalamus, amygdala, and cerebellum. Cerebellar influences on motor learning are distinguished from hippocampal influences on adaptive timing of reinforcement learning. The model simulates how damage to the hippocampal formation disrupts adaptive timing, eliminates attentional blocking, and causes symptoms of medial temporal amnesia. Properties of learned expectations, attentional focusing, memory search, and orienting reactions to novel events are used to analyze the blocking and amnesia data. The model also suggests how normal acquisition of subcortical emotional conditioning can occur after cortical ablation, even though extinction of emotional conditioning is retarded by cortical ablation. The model simulates how increasing the duration of an unconditioned stimulus increases the amplitude of emotional conditioning, but does not change adaptive timing; and how an increase in the intensity of a conditioned stimulus "speeds up the clock," but an increase in the intensity of an unconditioned stimulus does not. Computer simulations of the model fit parametric conditioning data, including a Weber law property and an inverted U property. Both primary and secondary adaptively timed conditioning are simulated, as are data concerning conditioning using multiple interstimulus intervals (ISIs), gradually or abruptly changing ISIs, partial reinforcement, and multiple stimuli that lead to time-averaging of responses. Neurobiologically testable predictions are made to facilitate further tests of the model. [64]
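
A toy illustration in the spirit of the model's spectral timing component (not the report's circuit): a spectrum of sites peaking at different delays is sampled when reward arrives, and the weighted population response then peaks near the learned delay, with a temporal spread that grows with that delay, a Weber-law-like property.

    import numpy as np

    def spectral_response(t, taus):
        """Gamma-like activations that peak at different delays tau_i
        (normalized to peak value 1 at t = tau_i)."""
        t = np.atleast_1d(t)[:, None]
        return (t / taus) * np.exp(1.0 - t / taus)

    # A spectrum of cell sites spanning short to long delays.
    taus = np.linspace(0.1, 2.0, 40)
    z = np.zeros_like(taus)                  # adaptive gates, initially silent

    # Reinforcement repeatedly arrives 0.9 s after the conditioned stimulus:
    # sites whose activity is high at that moment gain strength.
    for _ in range(20):
        z += 0.2 * spectral_response(0.9, taus)[0]

    t = np.linspace(0.0, 3.0, 301)
    R = spectral_response(t, taus) @ z       # adaptively timed population output
    print(t[R.argmax()])                     # ~0.9: response peaks near the learned delay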


7. ADAPTIVE CONTROL

(7A) Neural representations for sensory-motor control: Head-centered 3-D target positions from opponent eye commands

This project describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and spherical horizontal and vertical angles with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also enables a solution to be developed of the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space. [5]
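
For concreteness, the standard binocular geometry behind such a representation can be written out. The sketch below recovers a cyclopean direction (the opponent sum, halved) and a vergence angle (the opponent difference); the sign conventions and the interocular distance are our assumptions, not the report's.

    import numpy as np

    def head_centered_target(theta_L, theta_R, interocular=0.06):
        """Recover a direction/depth estimate from the two eyes' horizontal
        rotation angles (radians, positive = leftward).
        Opponent combinations of the eye commands give a cyclopean angle
        (the mean) and a vergence angle (the difference); vergence shrinks
        with target distance."""
        version = 0.5 * (theta_L + theta_R)            # cyclopean gaze direction
        vergence = theta_L - theta_R                   # opponent difference
        distance = interocular / (2.0 * np.tan(vergence / 2.0))
        return version, vergence, distance

    # A target 1 m straight ahead: each eye rotates inward by ~1.7 degrees.
    print(head_centered_target(np.deg2rad(1.718), np.deg2rad(-1.718)))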

(7B) Emergence of tri-phasic muscle activation from the nonlinear interactions of central and spinal neural network circuits

The origin of the tri-phasic burst pattern, observed in the EMGs of opponent muscles during rapid self-terminated movements, has been controversial. Computer simulations show that the pattern emerges from interactions between a central neural trajectory controller (VITE circuit) and a peripheral neuromuscular force controller (FLETE circuit). Both neural models have been derived from simple functional constraints that have led to principled explanations of a wide variety of behavioral and neurobiological data, including the generation of tri-phasic bursts.

(7C) Cerebellar learning in an opponent motor controller for adaptive load compensation and synergy formation

A minimal neural network model of the cerebellum is embedded within a sensory-neuromuscular control system that mimics known anatomy and physiology. Within this setting, cerebellar learning promotes load compensation while also allowing both coactivation and reciprocal inhibition of sets of antagonistic muscles. In particular, we show how synaptic long-term depression guided by feedback from muscle stretch receptors can lead to trans-cerebellar gain changes that are load-compensating. It is argued that the same processes help to adaptively discover multi-joint synergies. Simulations of rapid single-joint rotations under load illustrate design feasibility and stability. [71]

(7D) Optimal control of machine set-up scheduling

An optimal control solution to the problem of machine set-up scheduling is demonstrated. The model is based on dynamic programming average cost per stage value iteration, as set forth by Caramanis et al. for the 2-D case. The difficulty with the optimal approach lies in the explosive computational growth of the resulting solution. A method of reducing the computational complexity is developed using ideas from biology and neural networks. A real-time controller is described that uses a linear-log representation of state space, with neural networks employed to fit cost surfaces. [2]
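
For reference, the underlying average-cost-per-stage dynamic programming step is shown below as standard relative value iteration on a toy two-state set-up problem. The report's contribution, a linear-log state representation with neural networks fitting the resulting cost surfaces, is not reproduced here.

    import numpy as np

    def relative_value_iteration(P, c, n_iter=500):
        """Average-cost-per-stage dynamic programming (relative value iteration).
        P -- P[a][s, s'] transition probabilities under action a
        c -- c[a][s] immediate cost of action a in state s
        Returns the optimal average cost and a greedy set-up policy."""
        n = P[0].shape[0]
        h = np.zeros(n)
        for _ in range(n_iter):
            Q = np.stack([c[a] + P[a] @ h for a in range(len(P))])
            h_new = Q.min(axis=0)
            g = h_new[0]            # reference state pins down the average cost
            h = h_new - g
        return g, Q.argmin(axis=0)

    # Toy 2-state, 2-action instance (keep set-up vs. switch set-up).
    P = [np.array([[0.9, 0.1], [0.4, 0.6]]),    # action 0
         np.array([[0.2, 0.8], [0.1, 0.9]])]    # action 1
    c = [np.array([1.0, 4.0]), np.array([3.0, 2.0])]
    print(relative_value_iteration(P, c))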

(7E) Vector associative maps: Unsupervised real-time error-based learning and control of movement trajectories

This project has led to the development of neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Endogenously generated arm movements lead to adaptive tuning of arm control parameters; these movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target, the Present Position Command (PPC) encodes the present hand-arm configuration, and a Difference Vector (DV) continuously computes the difference between them. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing total sensory-cognitive and cognitive-motor autonomous systems. [49-52]
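
The VITE kinematics referenced above reduce to two coupled rates: the difference vector DV tracks TPC - PPC, and PPC integrates the GO-gated, rectified DV until DV reaches zero. A minimal Euler sketch follows; the constant GO signal and all parameter values are our illustrative choices (time-growing GO signals yield the bell-shaped velocity profiles discussed in the VITE literature).

    import numpy as np

    def vite(tpc, ppc0, go, dt=0.01, steps=400, alpha=25.0):
        """Minimal VITE trajectory generator.
        The difference vector DV tracks TPC - PPC; the present position
        command PPC integrates GO * [DV]+, so movement stops when DV = 0."""
        ppc = ppc0.astype(float).copy()
        dv = np.zeros_like(ppc)
        path = [ppc.copy()]
        for k in range(steps):
            t = k * dt
            dv += dt * alpha * ((tpc - ppc) - dv)     # DV relaxes toward TPC - PPC
            ppc += dt * go(t) * np.maximum(dv, 0.0)   # outflow: GO-gated, rectified
            path.append(ppc.copy())
        return np.array(path)

    # A scalar GO signal; increasing it rescales movement speed.
    traj = vite(tpc=np.array([1.0, 0.5]), ppc0=np.zeros(2), go=lambda t: 4.0)
    print(traj[-1])   # ~ [1.0, 0.5]: PPC has integrated to the target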

(7F) A neural pattern generator that exhibits bimanual coordination and human and quadruped gait transitions

A neural pattern generator is based upon a nonlinear cooperative-competitive feedback neural network. The system can generate two standard human gaits: the walk and the run. A scalar arousal or GO signal causes a bifurcation from one gait to the next. Although these two gaits are qualitatively different, they both have the same limb order and may exhibit oscillation frequencies that overlap. The model simulates the walk and the run with qualitatively different waveform shapes. The fraction of the cycle during which activity is above threshold distinguishes the two gaits, much as the duty cycles of the feet are longer in the walk than in the run. The two-channel version of the model simulates data from human bimanual coordination tasks in which anti-phase oscillations at low frequencies spontaneously switch to in-phase oscillations at high frequencies, in-phase oscillations can be performed at both low and high frequencies, phase fluctuations occur at the anti-phase to in-phase transition, and a "seagull effect" of larger errors occurs at intermediate phases. In a four-channel pattern generator, both the frequency and the relative phase of oscillations are controlled by scalar arousal. The generator is used to simulate quadruped gaits; in particular, rapid transitions are simulated in the order walk, trot, pace, and gallop. Precise switching control is achieved by using an arousal-dependent modulation of the inhibitory interactions. This modulation generates a different functional connectivity in a single network at different arousal levels. [40-43]
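
The anti-phase to in-phase switch can be illustrated compactly with the well-known Haken-Kelso-Bunz relative-phase equation, used here purely as a stand-in for the report's cooperative-competitive network, which it is not.

    import numpy as np

    # Haken-Kelso-Bunz relative-phase dynamics:
    #   d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi).
    # The ratio b/a falls as cycling frequency rises; below b/a = 1/4 the
    # anti-phase state (phi = pi) loses stability and the system switches
    # to in-phase motion (phi = 0), as in the bimanual data described above.
    def settle(phi0, a, b, dt=0.01, steps=5000):
        phi = phi0
        for _ in range(steps):
            phi += dt * (-a * np.sin(phi) - 2 * b * np.sin(2 * phi))
        return phi % (2 * np.pi)

    print(settle(np.pi - 0.1, a=1.0, b=1.0))   # slow cycling: stays near anti-phase (pi)
    print(settle(np.pi - 0.1, a=1.0, b=0.1))   # fast cycling: switches to in-phase (0)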

(7G) VITEWRITE: A neural network model for handwriting production

A neural network model called VITEWRITE is shown to generate handwritten script. The model consists of a sequential controller, or motor program, that interacts with a trajectory generator to move a hand with redundant degrees of freedom. The trajectory generator is the Vector Integration to Endpoint (VITE) model for synchronous variable-speed control of multijoint movements. VITE properties enable a simple control strategy to generate complex handwritten script if the hand model contains redundant degrees of freedom. The proposed controller launches transient directional commands to independent hand synergies at times when the hand begins to move, or when a velocity peak in a given synergy is achieved. The VITE model translates these temporally disjoint synergy commands into smooth curvilinear trajectories among temporally overlapping synergetic movements. The separate "score" of onset times used in most prior models is hereby replaced by an activity-released "motor program" that uses few memory resources, enables each synergy to exhibit a unimodal velocity profile during any stroke, generates letters that are invariant under speed and size rescaling, and enables effortless connection of letter shapes into words. Speed and size rescaling are achieved by scalar GO and GRO signals that express computationally simple volitional commands. Psychophysical data concerning hand movements, such as the isochrony principle, asymmetric velocity profiles, and the two-thirds power law relating movement curvature and velocity, arise as emergent properties of model interactions.
