ICSLP 90 PROCEEDINGS
1990 INTERNATIONAL CONFERENCE
ON SPOKEN LANGUAGE PROCESSING
NOVEMBER 18-22, 1990, INTERNATIONAL CONFERENCE CENTER
KOBE, JAPAN
Sponsored by
THE ACOUSTICAL SOCIETY OF JAPAN, THE ACOUSTICAL SOCIETY OF AMERICA,
EUROPEAN SPEECH COMMUNICATION ASSOCIATION,
THE INSTITUTE OF ELECTRONICS, INFORMATION
AND COMMUNICATION ENGINEERS, and
THE TOKYO SECTION OF IEEE
Volume 2 of 2
16.4 A Comparison of Two Methods to Transcribe Speech into Phonemes: A Rule-Based Method vs. Back-Propagation 673 Kari Torkkola and Mikko Kokkonen, Laboratory of Information and Computer Science, Helsinki University of Technology, TKK-F, Rakentajanaukio 2C, SF-02150 Espoo, Finland
16.5 Phoneme Recognition by Pairwise Discriminant TDNNs 677 Jun-ichi Takami and Shigeki Sagayama, ATR Interpreting Telephony Research Laboratories, Seika-cho, Souraku-gun, Kyoto, 619-02 Japan
16.6 Speaker Independent Speech Recognition Based on Neural Networks of Each Category with Embedded Eigenvectors 681 Yasuyuki Masai, Hiroshi Matsu'ura and Tsuneo Nitta, Information & Communication Systems Laboratory, Toshiba Corporation, 70, Yanagi-cho, Saiwai-ku, Kawasaki, 210 Japan
16.7 Speech Recognition Using Sub-Phoneme Recognition Neural Network 685 Kiyoaki Aikawa and Alexander H. Waibel, NTT Human Interface Laboratories, Nippon Telegraph and Telephone Corporation, 3-9-11 Midoricho, Musashino-shi, Tokyo, 180 Japan
16.8 Speech Recognition Based on the Integration of FSVQ and Neural Network 689 Li-Qun Xu, Tie-Cheng Yu and G.D. Tattersall, Institute of Acoustics, Academia Sinica, Beijing 100080, China
16.9 Fast Text-to-Speech Learning 693 Samir I. Sayegh, Physics Department, Purdue University, Ft. Wayne, IN 46805-1499, U.S.A.
WEDNESDAY MORNING Nov. 21
Session 17: Continuous Speech Recognition
Time: 08:30 to 12:00, November 21, 1990
Place: Hall B (501)
CHAIRPERSONS:
Jean-Pierre Tubach, Ecole Nationale Superieure des Telecommunications; Takao Watanabe, C&C Information Technology Research Laboratories, NEC Corporation
17.1 Experiments with a Speaker-Independent Continuous Speech Recognition System on the TIMIT Database 697 Yunxin Zhao and Hisashi Wakita, Speech Technology Laboratory, Panasonic Technologies Inc., 3888 State St. Suite 202, Santa Barbara, CA 93105, U.S.A.
17.2 Continuous Speech Recognition with Vowel-Context-Independent Hidden-Markov-Models for Demisyllables 701 Walter Weigel, Lehrstuhl fur Datenverarbeitung, Technische Universitat Munchen, Franz-Joseph-Str. 38, D-8000 Munchen 40, Germany (F.R.G.)
17.3 Description of Acoustic Variations by Tree-Based Phone Modeling 705 Satoru Hayamizu, Kai-Fu Lee and Hsiao-Wuen Hon, Machine Understanding Division, Electrotechnical Laboratory, 1-1-4, Umezono, Tsukuba, Ibaraki, 305 Japan
17.4 A Tree-Trellis Based Fast Search for Finding the N Best Sentence Hypotheses in Continuous Speech Recognition 709 Frank K. Soong and Eng-Fong Huang, Speech Research Department, AT&T Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ 07974, U.S.A.
17.5 Modeling Vocabularies for a Connected Speech Recognizer 713 F. Gabrieli, A. Dimundo, A. Rizzi, G. Colangeli and A. Stagni, Tecnopolis C.S.A.T.A., Str. Prov. per Casamassima Km.3, 70010 Valenzano (Bari), Italy
17.6 Japanese Phonetic Typewriter Using HMM Phone Units and Syllable Trigrams 717 Takeshi Kawabata, Toshiyuki Hanazawa, Katsunobu Itoh and Kiyohiro Shikano, NTT Basic Research Laboratories, Nippon Telegraph and Telephone Corporation, 3-9-11, Midori-cho, Musashino-shi, Tokyo, 180 Japan
17.7 A Large Vocabulary Continuous Speech Recognition System with High Prediction Capability 721 Minoru Shigenaga, Yoshihiro Sekiguchi, Toshihiko Hanagata, Takehiro Yamaguchi and Ryota Masuda, Faculty of Computer Science, Chukyo University, Kaizu-cho, Toyota-city, 470-03, Japan
17.8 Evaluation of a Speech Understanding System - SUSKIT-2 725 Yutaka Kobayashi and Yasuhisa Niimi, Department of Electronics and Information Science, Kyoto Institute of Technology, Matsugasaki, Sakyo-ku, Kyoto, 606 Japan
17.9 Spoken Language System Integration and Development 729 Patti Price, Victor Abrash, Doug Appelt, John Bear, Jared Bernstein, Bridget Bly, John Butzberger, Michael Cohen, Eric Jackson, Robert Moore, Doug Moran, Hy Murveit and Mitchel Weintraub, SRI International, EK-168, Menlo Park, CA 94025, U.S.A.
WEDNESDAY MORNING Nov. 21
Session 18: Modeling of First and Second Language Acquisition
Time: 08:30 to 12:00, November 21, 1990
Place: Hall C (504, 505)
CHAIRPERSONS:
Paula Menyuk, Center for Applied Research in Language, Boston University; Morio Kohno, Kobe City University of Foreign Studies
18.1 Relationship between Speech Perception and Production in Language Acquisition 733 Paula Menyuk, Boston University, Boston, MA 02215, U.S.A.
18.2 A Psycholinguistic Model of First and Second Language Learning 1385 Tatiana Slama-Cazacu, University of Bucharest, Romania
18.3 Relations between Thought and Language in Infancy 737 Andrew N. Meltzoff and Alison Gopnik, Department of Psychology, University of Washington, Seattle, Washington 98195, U.S.A.
18.4 The Role of Rhythm in the First and Second Language Acquisition 741 Morio Kohno, Kobe City University of Foreign Studies, 9-1 Gakuen-higashi-machi, Nishi-ku, Kobe, 651-21 Japan
18.5 Towards a New Theory of the Development of Speech Perception 745 Patricia K. Kuhl, Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington 98195, U.S.A.
18.6 Audition and Speech Perception in the Chimpanzee 749 Shozo Kojima, Primate Research Institute, Kyoto University, Kanrin, Inuyama, Aichi, 484 Japan
18.7 Prosodic and Phonetic Patterning of Disyllables Produced by Japanese versus French Infants 753 P. A. Halle and B. de Boysson-Bardies, L.P.E., MSH, CNRS, 54 Bd Raspail, 75006 Paris, France
18.8 Perception and Production of Syllable-Initial English /r/ and /l/ by Native Speakers of Japanese 757 Reiko A. Yamada and Yoh'ichi Tohkura, ATR Auditory & Visual Perception Research Laboratories, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan
18.9 The Perception of Inter-Stress-Intervals in Japanese Speakers of English 761 Michiko Mochizuki-Sudo and Shigeru Kiritani, Juntendo University, 1-1, Hiraga-gakuendai, Inba, Inba-gun, Chiba, Japan
WEDNESDAY MORNING Nov. 21
Session 19: Synthesis of Spoken Language
Time: 08:30 to 12:00, November 21, 1990
Place: Hall D (Reception Hall) Poster
CHAIRPERSONS:
Rolf Carlson, Department of Speech Communication and Music Acoustics, KTH; Keikichi Hirose, Faculty of Engineering, The University of Tokyo
19.1 Inductive Learning of Grapheme-to-Phoneme Rules 765 Bert van Coile, Laboratory of Electronics and Metrology, University of Gent, St. Pietersnieuwstraat 41, B-9000 Gent, Belgium
19.2 A Support Environment Based on Rule Interpreter for Synthesis by Rule 769 Yoichi Yamashita, Hiroyuki Fujiwara, Yasuo Nomura, Nobuyoshi Kaiki and Riichiro Mizoguchi, I.S.I.R., Osaka University, 8-1 Mihogaoka, Ibaraki-shi, Osaka, 567 Japan
19.3 Speech Synthesis Using Demisyllables for Korean: A Preliminary System 773 Jung-Chul Lee, Yong-Ju Lee, Hee-Il Hahn, Eung-Bae Kim, Chang-Joo Kim and Kyung-Tae Kim, Signal Processing Section, Electronics and Telecom. Research Institute, P.O.Box 8, Daeduk Science Town, Daejun, 305-606, Korea
19.4 The Rules in a Korean Text-to-Speech System 777 Seung-Kwon Ahn and Koeng-Mo Sung, Goldstar Central Research Laboratory, 16, Woomyeon-Dong, Seocho-Gu, Seoul 137-140, Korea
19.5 Mandarin Speech Synthesis by the Unit of Coarticulatory Demi-Syllable 781 Chi-Shi Liu, Wern-Jun Wang, Shiow-Min Yu and Hsiao-Chuan Wang, Telecommunication Laboratories, Ministry of Communications, P.O.Box 71, Chung-Li, Taiwan
19.6 A Study on Various Prosody Styles in Japanese Speech Synthesizable with the Text-to-Speech System 785 Ryunen Teranishi, Kyushu Institute of Design, 4-9-1 Shiobaru, Minami-ku, Fukuoka-shi, 815 Japan
19.7 Japanese Text-to-Speech Conversion System 789 Hiroki Kamanaka, Takashi Yazu, Keiichi Chihara and Makoto Morito, Human Interface Laboratory, OKI Electric Ind. Co., Ltd., 550-5 Higashi-asakawa-cho, Hachioji-shi, Tokyo, 193 Japan
19.8 Neural Network Based Concatenation Method of Synthesis Units for Synthesis by Rule 793 Yasushi Ishikawa and Kunio Nakajima, Information Systems and Electronics Development Laboratory, MITSUBISHI Electric Corporation, 5-1-1 Ofuna, Kamakura-shi, Kanagawa, 247 Japan
19.9 Improvement of the Synthetic Speech Quality of the Formant-Type Speech Synthesizer and Its Subjective Evaluation 797 Norio Higuchi, Hisashi Kawai, Tohru Shimizu and Seiichi Yamamoto, KDD R&D Laboratories, 2-1-15, Ohara, Kamifukuoka-shi, Saitama, 356 Japan
19.10 A Parametric Model of Speech Signals: Application to High Quality Speech Synthesis by Spectral and Prosodic Modifications 801 Thierry Galas and Xavier Rodet, Laforia, Unite Associee au CNRS N°1095, Paris VI, 75252 Paris Cedex 05, France
19.11 The Improved Source Model for High-Quality Synthetic Speech Sound 805 Tomoki Hamagami and Shinichiro Hashimoto, SECOM Intelligent Systems Laboratory, SECOM Co., Ltd., 6-11-23 Shimorenjyaku, Mitaka, Tokyo, 181 Japan
19.12 A New Japanese Text-to-Speech Synthesizer Based on COC Synthesis Method 809 Kazuo Hakoda, Shin-ya Nakajima, Tomohisa Hirokawa and Hideyuki Mizuno, NTT Human Interface Laboratories, Nippon Telegraph and Telephone Corporation, 1-2356 Take, Yokosuka-shi, Kanagawa, 238-03 Japan
19.13 A Parallel Multialgorithmic Approach for an Accurate and Fast English Text to Speech Transcriber 813 G.M. Asher, K.M. Curtis, J. Andrews and J. Burniston, Department of Electrical and Electronic Engineering, University of Nottingham, Nottingham, NG7 2RD, U.K.
19.14 A Highly Programmable Formant Speech Synthesiser Utilising Parallel Processors 817 K.M. Curtis, G.M. Asher, S.E. Pack and J. Andrews, Department of Electrical and Electronic Engineering, University of Nottingham, Nottingham, NG7 2RD, U.K.
19.15 Enhancement of Human-Computer Interaction through the Synthesis of Nonverbal Expressions 821 Kris Maeda, Yasuki Yamashita and Yoichi Takebayashi, Research & Development Center, Toshiba Corporation, 1, Komukai, Toshiba-cho, Saiwai-ku, Kawasaki, 210 Japan
19.16 Duration, Pitch and Diphones in the CSTR TTS System 825 W.N. Campbell, S.D. Isard, A.I.C. Monaghan and J. Verhoeven, Centre for Speech Technology Research, 80 South Bridge, Edinburgh EH1 1HN, U.K.
19.17 A Chinese Fundamental Frequency Synthesizer Based on a Statistical Model 829 Sin-Horng Chen, Su-Min Lee and Saga Chang, Department of Communication Engineering and Center for Telecommunication Research, National Chiao Tung University, Hsinchu, Taiwan 30039, Taiwan
19.18 A Contribution to the Synthesis of Italian Intonation 833 Cinzia Avesani, Scuola Normale Superiore, Piazza dei Cavalieri 7, 56100 Pisa, Italy
19.19 Pause Rule for Japanese Text-to-Speech Conversion Using Pause Insertion Probability 837 Kazuhiko Iwata, Yukio Mitome and Takao Watanabe, C&C Information Technology Research Laboratories, NEC Corporation, 4-1-1 Miyazaki, Miyamae-ku, Kawasaki, 213 Japan
19.20 Analysis and Modeling of Tonal Features in Polysyllabic Words and Sentences of the Standard Chinese 841 Hiroya Fujisaki, Keikichi Hirose, Pierre Halle and Haitao Lei, Department of Electronic Engineering, Faculty of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113 Japan
19.21 Voice Response Unit Embedded in Factory Automation Systems 845 Akira Yamamura, Hiroharu Kunizawa, Noboru Ueji, Hiroshi Itoyama and Osamu Kakusho, AI Laboratory, IS Center, Matsushita Electric Works, Ltd., 1048 Kadoma, Osaka, 571 Japan
19.22 TETOS - A Text-to-Speech System for German 849 Klaus Wothke, Heidelberg Scientific Center, IBM Germany, Tiergartenstr. 15, D-6900 Heidelberg, Germany (F.R.G.)
19.23 A Written Text Processing Expert System for Text to Phoneme Conversion 853 Michel Divay, Institut Universitaire de Technologie, Universite de Rennes, BP 150, 22302 Lannion, France
19.24 Trial Production of a Module for Speech Synthesis by Rule 857 Mikio Yamaguchi, Information and Electronics Laboratories, Sumitomo Electric Industries, Ltd., 1-1-3 Shimaya, Konohana-ku, Osaka, 554 Japan
WEDNESDAY AFTERNOON Nov. 21
Session 20: Application of Speech Recognition/Synthesis Technologies
Time: 13:30 to 17:00, November 21, 1990
Place: Hall A (502)
CHAIRPERSONS:
Bathsheba J. Malsheen, Centigram Communications Corporation; Tsuneo Nitta, Information and Communication Systems Laboratory, Toshiba Corporation
20.1 Integration of Speech Recognition, Text-to-Speech Synthesis, and Talker Verification into a Hands-Free Audio/Image Teleconferencing System (HuMaNet) 861 D.A. Berkley and J.L. Flanagan, Information Principles Research Laboratory, AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974, U.S.A.
20.2 Bellcore Efforts in Applying Speech Technology to Telephone Network Services 865 G. Velius, C. Kamm, M.J. Altom, T.C. Feustel, M.J. Macchi and M.F. Spiegel, Network Systems and Services Research Laboratory, Bellcore, 445 South St., Morristown, NJ 07962-1910, U.S.A.
20.3 Extension Number Guidance System 869 Fumihiro Yato, Kazuki Katagisi and Norio Higuchi, Artificial Intelligence Laboratory, KDD R&D Laboratories, 2-1-15 Ohara, Kamifukuoka-shi, Saitama, 356 Japan
20.4 Japanese Text-to-Speech Equipment: Current Applications and Trends 873 Hirokazu Sato, NTT Human Interface Laboratories, Nippon Telegraph and Telephone Corporation, 1-2356, Take, Yokosuka-shi, Kanagawa, 238-03 Japan
20.5 The Synthesis of Dialectal Variation in English and Spanish 877 Mariscela Amador-Hernandez and Bathsheba J. Malsheen, Centigram Communications Corporation, 4415 Fortran Ct., San Jose, CA 95143, U.S.A.
20.6 A Japanese Text-to-Speech System for Electronic Mail 881 Hiroyoshi Saito, Motoshi Kurihara, Ken-ichiro Kobayashi, Yoshiyuki Hara and Naritoshi Saito, Information & Communication Systems Laboratory, Toshiba Corporation, 70 Yanagi-cho, Saiwai-ku, Kawasaki, 210 Japan
20.7 Issues Concerning Voice Input Applications 885 Tsuneo Nitta and Nobuo Sugi, Information & Communication Systems Laboratory, Toshiba Corporation, 70, Yanagi-cho, Saiwai-ku, Kawasaki, 210 Japan
20.8 A Prototype for a Speech-to-Text Transcription System 889 Toshiaki Tsuboi and Noboru Sugamura, NTT Human Interface Laboratories, Nippon Telegraph and Telephone Corporation, 1-2356, Take, Yokosuka, Kanagawa, 238-03 Japan
20.9 A Noise Robust Speech Recognition System 893 Masahiro Hamada, Yumi Takizawa and Takeshi Norimatsu, Central Research Laboratories, Matsushita Electric Ind. Co., Ltd., 3-15 Yagumo-Nakamachi, Moriguchi, Osaka, 570 Japan
21.5 Semantic Weights Derived from Syntax-Directed Understanding in DTW-Based Spoken Language Processing 913 S. Bornerand, F. Neel and G. Sabah, LIMSI-CNRS, B.P. 133, 91403 Orsay Cedex, France
21.6 Massively Parallel Spoken Language Processing Using a Parallel Associative Processor IXM2 917 Hiroaki Kitano, Tetsuya Higuchi and Masaru Tomita, Center for Machine Translation, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
21.7 Integration of Speech Recognition and Language Processing in Spoken Language Translation System (SL-TRANS) 921 Tsuyoshi Morimoto, Kiyohiro Shikano, Hitoshi Iida and Akira Kurematsu, ATR Interpreting Telephony Research Laboratories, Seika-cho, Souraku-gun, Kyoto, 619-02 Japan
21.8 Design Principle of Language Model for Speech Recognition 925 Toshiya Sakano and Tsuyoshi Morimoto, ATR Interpreting Telephony Research Laboratories, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan
21.9 Sentence Speech Recognition Using Semantic Dependency Analysis 929 Shoichi Matsunaga and Shigeki Sagayama, NTT Human Interface Laboratories, Nippon Telegraph and Telephone Corporation, 3-9-11, Midoricho, Musashino-shi, Tokyo, 180 Japan
WEDNESDAY AFTERNOON Nov. 21
Session 21: Language Modeling
Time: 13:30 to 17:00, November 21, 1990
Place: Hall B (501)
CHAIRPERSONS:
Renato De Mori, School of Computer Science, McGill University; Akira Kurematsu, ATR Interpreting Telephony Research Laboratories
21.1 Computation of Probabilities for Island-Driven Parsers 897 A. Corazza, R. De Mori, R. Gretter and G. Satta, Istituto per la Ricerca Scientifica e Tecnologica, 38050 Povo di Trento, Italy
21.2 A Unified Probabilistic Score Function for Integrating Speech and Language Information in Spoken Language Processing 901 Keh-Yih Su, Tung-Hui Chiang and Yi-Chung Lin, Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
21.3 Continuous Speech Recognition Using Two-Level LR Parsing 905 Kenji Kita, Toshiyuki Takezawa, Junko Hosaka, Terumasa Ehara and Tsuyoshi Morimoto, ATR Interpreting Telephony Research Laboratories, Seika-cho, Souraku-gun, Kyoto, 619-02 Japan
21.4 Gap-Filling LR Parsing for Noisy Speech Input: Towards Interactive Speech Recognition 909 Hiroaki Saito, Center for Machine Translation, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
WEDNESDAY AFTERNOON Nov. 21
Session 22: Phonetics and Phonology
Time: 13:30 to 17:00, November 21, 1990
Place: Hall C (504, 505)
CHAIRPERSONS:
Mario Rossi, Institut de Phonetique, Universite de Provence; Kazuo Nakata, Department of Applied Physics, Tokyo University of Agriculture and Technology
22.1 Distinctive, Redundant, Predictable, Necessary, Sufficient: Accounting for English /bdg/-/ptk/ 933 Leigh Lisker, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
22.2 An Information Theoretic Approach to the Study of Phoneme Collocational Constraints 937 Rob Kassel and Victor W. Zue, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
22.3 Real-Time Effects of Some Intrasyllabic Collocational Constraints in English 941 Bruce L. Derwing and Terrance M. Nearey, Department of Linguistics, Faculty of Arts, University of Alberta, Edmonton, 4-32 Assiniboia Hall, T6G 2E7, Canada
22.4 Acoustic-Phonetic Features in the Framework of Neural-Network Multi-Lingual Label Alignment 945 Paul Dalsgaard and William Barry, Institute of Electronic Systems, Speech Technology Centre, Aalborg University, 7 Fredrik Bajers Vej, DK-9220 Aalborg, Denmark
22.5 Preliminary Study of Vowel Coarticulation in British English 949 James L. Hieronymus, Centre for Speech Technology Research, Edinburgh University, 80 South Bridge, Edinburgh EH1 1HN, Scotland, U.K.
22.6 Effects of Context, Stress, and Speech Style on American Vowels 953 Caroline B. Huang, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, MIT 36-511, 77 Massachusetts Ave., Cambridge, MA 02139, U.S.A.
22.7 Phonetic Study and Recognition of Standard Arabic Emphatic Consonants 957 M. Djoudi, H. Aouizerat and J.P. Haton, Campus Scientifique, CRIN-INRIA Lorraine, B.P. 239, 54506 Vandoeuvre-les-Nancy Cedex, France
22.8 Articulatory and Acoustic Properties of Different Allophones of /l/ in American English, Catalan and Italian 961 Daniel Recasens and Edda Farnetani, Departament de Filologia Catalana, Universitat Autonoma de Barcelona, Bellaterra, Barcelona, Spain
22.9 In Search of a Method to Improve the Prosodic Features of English Spoken by Japanese 965 Hiroshi Suzuki, Ghen Ohyama and Shigeru Kiritani, Faculty of Arts, The University of Tokyo, Meguro, Tokyo, 153 Japan
WEDNESDAY AFTERNOON Nov. 21
Session 23: Assessment/Human Factors, Database and Neural Networks
Time: 13:30 to 17:00, November 21, 1990
Place: Hall D (Reception Hall) Poster
CHAIRPERSONS:
Louis C. W. Pols, Institute of Phonetic Sciences, University of Amsterdam; Hisao Kuwabara, Department of Electronics and Information Sciences, The Nishi-Tokyo University
23.1 A Note on Loud and Lombard Speech 969 Z.S. Bond and Thomas J. Moore, Department of Linguistics, Ohio University, Athens, OH 45701, U.S.A.
23.2 A Weighted Intelligibility Measure for Speech Assessment 973 Ute Jekosch, Lehrstuhl fur allgemeine Elektrotechnik und Akustik, Ruhr-Universitat Bochum, Universitatsstr. 150, D-4630 Bochum, Germany (F.R.G.)
23.3 Improvements in Binaural Articulation Score by Simulated Localization Using Head-Related Transfer Functions 977 Shinji Hayashi, NTT Human Interface Laboratories, Nippon Telegraph and Telephone Corporation, 3-9-11 Midoricho, Musashino-shi, Tokyo, 180 Japan
23.4 Evaluating Synthesizer Performance: Is Segmental Intelligibility Enough? 981 Kim Silverman, Sara Basson and Suzi Levas, Artificial Intelligence Laboratory, NYNEX Science and Technology, 500 Westchester Avenue, White Plains, NY 10604, U.S.A.
23.5 Media Conversion into Language and Voice for Intelligent Communication 985 Fumio Maehara, Masamichi Nakagawa, Kunio Nobori, Toshiyuki Maeda, Tsutomu Mori and Makoto Fujimoto, Central Research Laboratories, Matsushita Electric Industrial Co., Ltd., 3-15 Yagumonakamachi, Moriguchi-shi, Osaka, 570 Japan
23.6 Segmental Intelligibility of Synthetic and Natural Speech in Real and Nonsense Words 989 Rolf Carlson, Bjorn Granstrom and Lennart Nord, Department of Speech Communication and Music Acoustics, Royal Institute of Technology, Box 70014, S-10044 Stockholm, Sweden
23.7 The HKU-USTC Speech Corpus 993 Chorkin Chan and Ren-Hua Wang, Department of Computer Science, University of Hong Kong, Pokfulam Road, Hong Kong
23.8 Automatic Alignment of Phonemic Labels with Continuous Speech 997 Torbjorn Svendsen and Knut Kvale, Department of Electrical Engineering & Computer Science, The Norwegian Institute of Technology, N-7034 Trondheim, Norway
23.9 TELS: A Speech Time-Expansion Labelling System 1001 D. Tuffelli and Hai D. Wang, Institut de la Communication Parlee, INPG/ENSERG, Universite Stendhal, CNRS n°368, 46, Av. Felix Viallet, 38031 Grenoble Cedex, France
23.10 A Speech Labeling System Based on Knowledge Processing 1005 Kazuhiro Arai, Yoichi Yamashita, Tadahiro Kitahashi and Riichiro Mizoguchi, I.S.I.R., Osaka University, 8-1, Mihogaoka, Ibaraki-shi, Osaka, 567 Japan
23.11 Development and Experimental Use of PHONWORK, a New Phonetic Workbench 1009 Hans G. Tillmann, Maximilian Hadersbeck, Hans Georg Piroth and Barbara Eisen, Institut fur Phonetik und Sprachliche Kommunikation der Universitat Munchen, Schellingstr. 3, D-8000 Munich 40, Germany (F.R.G.)
23.12 A Speech Recognition Research Environment Based on Large-Scale Word and Concept Dictionaries 1013 Hiroyuki Chimoto, Hideaki Shinchi, Hideki Hashimoto and Shinya Amano, Japan Electronic Dictionary Research Institute, Ltd., Toshiba-cho, Saiwai-ku, Kawasaki, 210 Japan
23.13 Are Laboratory Databases Appropriate for Training and Testing Telephone Speech Recognizers? 1017 Benjamin Chigier and Judith Spitz, Artificial Intelligence Speech Technology Group, NYNEX Science and Technology, 500 Westchester Avenue, White Plains, NY 10604, U.S.A.
23.14 Standardisation of Speech Input Assessment within the SAM ESPRIT Project 1021 Sven W. Danielsen, Research & Development Department, Jydsk Telefon, Sletvej 30, DK-8310 Aarhus-Tranbjerg, Denmark
23.15 Multilingual Speech Data Base for Evaluating Quality of Digitized Speech 1025 Hiroshi Irii, Kenzo Itoh and Nobuhiko Kitawaki, NTT Telecommunication Networks Laboratories, Nippon Telegraph and Telephone Corporation, 3-9-11 Midori-cho, Musashino-shi, Tokyo, 180 Japan
23.16 The Optimal Gain Sequence for Fastest Learning in Connectionist Vector Quantiser Design 1029 Lizhong Wu and Frank Fallside, Engineering Department, Cambridge University, Trumpington Street, Cambridge CB2 1PZ, U.K.
23.17 A Comparison of Preprocessors for the Cambridge Recurrent Error Propagation Network Speech Recognition System 1033 Tony Robinson, John Holdsworth, Roy Patterson and Frank Fallside, Engineering Department, Cambridge University, Trumpington Street, Cambridge CB2 1PZ, U.K.
23.18 A Recurrent Neural Network for Word Identification from Phoneme Sequences 1037 R.B. Allen, C. Kamm and S.B. James, Bellcore, 445 South St., Morristown, NJ 07962-1910, U.S.A.
23.19 Improved Broad Phonetic Classification and Segmentation with a Neural Network and a New Auditory Model 1041 L. Depuydt, J.P. Martens, L. van Immerseel and N. Weymaere, Laboratory for Electronics and Metrology, University of Gent, Sint Pietersnieuwstraat 41, B-9000 Gent, Belgium
23.20 Formant Extraction Model by Neural Networks and Auditory Model Based on Signal Processing Theory 1045 Kazuaki Obara and Hideyuki Takagi, Central Research Laboratories, Matsushita Electric Industrial Co., Ltd., 3-15, Yagumo-Nakamachi, Moriguchi, Osaka, 570 Japan
23.21 /b,d,g/ Recognition with Elliptic Discrimination Neural Units 1049 Noboru Kanedera and Tetsuo Funada, Ishikawa National College of Technology, Ishikawa, 929-03 Japan
23.22 A Comparative Study of Acoustic Representations of Speech for Vowel Classification Using Multi-Layer Perceptrons 1053 Helen M. Meng and Victor W. Zue, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
23.23 Extended Elman's Recurrent Neural Network for Syllable Recognition 1057 Yong Duk Cho, Ki Chul Kim, Hyun Soo Yoon, Seung Ryoul Maeng and Jung Wan Cho, Department of Computer Science, KAIST, P.O.Box 150, Chongryang, Seoul 130-650, Korea
23.24 Detection and Classification of Phonemes Using Context-Independent Error Back-Propagation 1061 Hong C. Leung, James R. Glass, Michael S. Phillips and Victor W. Zue, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
23.25 A New Method of Consonant Detection and Classification Using Neural Networks 1065 Shigeru Chiba and Kiyoshi Asai, Speech Processing Section, Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba, Ibaraki, 305 Japan
23.26 An Artificial Neural Network for the Burst Point Detection 1069 Shigeyoshi Kitazawa and Masahiro Serizawa, Department of Computer Science, Faculty of Engineering, Shizuoka University, Hamamatsu, 432 Japan
23.27 The Use of Discriminant Neural Networks in the Integration of Acoustic Cues for Voicing into a Continuous-Word Recognition System 1073 Claude Lefebvre and Dariusz A. Zwierzynski, Speech Research Centre, National Research Council of Canada, Building U-61, Montreal Road, Ottawa, Ontario, K1A 0R6, Canada
23.28 A Neural Network for Speaker-Independent Isolated Word Recognition 1077 Kouichi Yamaguchi, Kenji Sakamoto, Toshio Akabane and Yoshiji Fujimoto, Central Research Laboratories, SHARP Corporation, 2613-1 Ichinomoto-cho, Tenri-shi, Nara, 632 Japan
THURSDAY MORNING Nov. 22
Session 24: Speech I/O Assessment and Database I
Time: 08:30 to 12:00, November 22, 1990
Place: Hall A (502)
CHAIRPERSONS:
Roger Moore, Speech Research Unit, Royal Signals and Radar Establishment; Shuichi Itahashi, Institute of Information Sciences and Electronics, University of Tsukuba
24.1 Recent Speech Database Projects in Japan 1081 Shuichi Itahashi, Institute of Information Sciences and Electronics, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305 Japan
24.2 Construction of a Large Korean Speech Database and Its Management System in ETRI 1085 Joon-Hyuk Choi and Kyung-Tae Kim, Signal Processing Section, Electronics & Telecom. Research Institute, P.O.Box 8, Daeduk Science Town, Daejon, 305-606 Korea
24.3 Speech Corpora and Performance Assessment in the DARPA SLS Program David S. Pallett, National Institute of Standards and Technology, Room A216 Technology Bldg., Gaithersburg, MD 20899, U.S.A.
24.4 A Large-Scale Japanese Speech Database 1089 Y. Sagisaka, K. Takeda, M. Abe, S. Katagiri, T. Umeda and H. Kuwabara, ATR Interpreting Telephony Research Laboratories, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan
24.5 ATR Dialogue Database 1093 Terumasa Ehara, Kentaro Ogura and Tsuyoshi Morimoto, ATR Interpreting Telephony Research Laboratories, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan
24.6 Design Considerations and Text Selection for BREF, a Large French Read-Speech Corpus 1097 Jean-Luc Gauvain, Lori F. Lamel and Maxine Eskenazi, LIMSI-CNRS, BP 133, 91403 Orsay Cedex, France
24.7 The ETL Speech Database for Speech Analysis and Recognition Research 1101 Kazuyo Tanaka, Satoru Hayamizu and Kozo Ohta, Machine Understanding Division, Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba-shi, Ibaraki, 305 Japan
24.8 Collection and Analysis of Spontaneous and Read Corpora for Spoken Language System Development 1105 Michal Soclof and Victor W. Zue, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
24.9 A Distributed Speech Database with an Automatic Acquisition System of Speech Information 1109 Shozo Makino, Toshihiko Shirokaze and Ken'iti Kido, Research Center for Applied Information Sciences, Tohoku University, 2-1-1 Katahira, Sendai, 980 Japan
THURSDAY MORNING Nov. 22
Session 25: Speech Recognition in Noisy Environments
Time: 08:30 to 12:00, November 22, 1990
Place: Hall B (501)
CHAIRPERSONS:
Biing-Hwang Juang, Speech Research Department, AT&T Bell Laboratories; Hiroshi Matsumoto, Faculty of Engineering, Shinshu University
25.1 Recent Developments in Speech Recognition under Adverse Conditions 1113 B.H. Juang, AT&T Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ 07974, U.S.A.
25.2 Features for Noise-Robust Speaker-Independent Word Recognition 1117 Brian A. Hanson and Ted H. Applebaum, Speech Technology Laboratory, Division of Panasonic Technologies, Inc., 3888 State St., Santa Barbara, CA 93105, U.S.A.
25.3 Acoustical Pre-Processing for Robust Spoken Language Systems 1121 Alejandro Acero and Richard M. Stern, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
25.4 Lombard Effect Compensation for Robust Automatic Speech Recognition in Noise 1125 John H.L. Hansen and Oscar N. Bria, Department of Electrical Engineering, Duke University, Durham, NC 27707, U.S.A.
25.5 Speaker-Independent Word Recognition in Noisy Environments Using Dynamic and Averaged Spectral Features Based on a Two-Dimensional Mel-Cepstrum 1129 Tadashi Kitamura, Etsuro Hayahara and Yasuhiko Simazaki, Department of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, 466 Japan
25.6 Problems of Speech Recognition in Mobile Environments 1133 A. Noll, Aspect GmbH (i.G.), Gutenbergring 38, D-2000 Norderstedt, Hamburg, Germany (F.R.G.)
25.7 HMM Modeling for Voice-Activated Mobile-Radio System 1137 L. Fissore, P. Laface, M. Codogno and G. Venuti, CSELT, Via G. Reiss Romoli 274, 10148 Torino, Italy
25.8 A Speech Recognition Method for Noise Environments Using Dual Inputs 1141 Yoshio Nakadai and Noboru Sugamura, NTT Human Interface Laboratories, Nippon Telegraph and Telephone Corporation, 1-2356 Take, Yokosuka-shi, Kanagawa, 238-03 Japan
25.9 Noise Robustness in Speaker Independent Speech Recognition 1145 Shuji Morii, Toshiyuki Morii, Masakatsu Hoshimi, Shoji Hiraoka, Taisuke Watanabe and Katsuyuki Niyada, Matsushita Research Institute Tokyo Inc., 3-10-1, Higashimita, Tama-ku, Kawasaki, Japan
25.10 Maximum Likelihood Estimation of Speech Waveform under Nonstationary Noise Environments 1149 Kaoru Gyoutoku and Hidefumi Kobatake, Faculty of Technology, Tokyo University of Agriculture and Technology, 2-24-16, Nakamachi, Koganei, Tokyo, 184 Japan
THURSDAY MORNING Nov. 22
Session 26: Foreign Language Teaching
Time: 08:30 to 12:00, November 22, 1990
Place: Hall C (504, 505)
CHAIRPERSONS:
Hyun Bok Lee, Department of Linguistics, Seoul National University; Kazue Yoshida, Foreign Language Department, Fukuoka University of Education
26.1 Electropalatography in Phonetic Research andin Speech Training 7 753William J. Hardcastle, Speech ResearchLaboratory, University of Reading, P.O.Box 218,Reading RG6 2AA, U.K.
26.2 Teaching Spoken Language: A Genre-BasedApproach 757Michael Rost, Temple University KyowaNakanoshima Bldg. 1-7-4 Nishi-Tenma, Kita-ku,Osaka, 580 Japan
26.3 Interaction between Native and NonnativeSpeakers in Team Teaching 7 767Kazue Yoshida, Foreign Language Department,Fukuoka Univ. of Education, 729, Akama,Munakata, Fukuoka, 811-41 Japan
26.4 Contrastive Phonetics of English, French andModern Greek in Language Teaching andInterpreting 7 765Ekaterini Nikolarea, Department of ComparativeLiterature, University of Alberta, 347 Arts Building,Edmonton, Alberta CANADA T6G 2E6, Canada
26.5 English Speech Training Using VoiceConversion 7 769Keiko Nagano and Kazunori Ozawa, C&CInformation Technology Research Laboratories,NEC Corporation, 4-1-1 Miyazaki, Miyamae-ku,Kawasaki, 213 Japan
26.6 Contrastive Analysis of American English andJapanese Pronunciation 7 7 73Namie Saeki, Doshisha Women's Junior College,Tanabe-cho, Tuzuki-gun, Kyoto, 610-03, Japan
26.7 Oral Communicative Approaches in SpokenLanguage Processing 7 777Massoud Rahimpour, Department of English,Faculty of Persian Lit. & Foreign Languages,Tabriz University, Tabriz, Iran
26.8 Teaching English Pronunciation to JapaneseUniversity Students: The Voiceless Fricative/s/ Sound 7 787Hisako Murakawa, International Budo University,Katsuura, Chiba, 299-52 Japan
26.9 Automatic Evaluation and Training in English Pronunciation 1185
Jared Bernstein, Michael Cohen, Hy Murveit, Dimitry Rtischev and Mitchel Weintraub, Speech Research Program, SRI International, Menlo Park, CA 94025, U.S.A.
THURSDAY MORNING Nov. 22
Session 27: Continuous Speech Recognition and Speaker Recognition
Time: 08:30 to 12:00, November 22, 1990
Place: Hall D (Reception Hall) Poster
CHAIRPERSONS:
Francis Kubala, BBN Systems and Technologies Corporation;
Yasuhisa Niimi, Department of Electronics and Information Science, Kyoto Institute of Technology
27.1 Vocabulary Independent Phrase Recognition with a Linear Phonetic Context Model 1189
Yoshiharu Abe and Kunio Nakajima, Information Systems and Electronics Development Laboratory, Mitsubishi Electric Corp., 5-1-1 Ofuna, Kamakura, Kanagawa, 247 Japan
27.2 Phoneme Probability Representation of Continuous Speech 1193
Y. Ariki and M.A. Jack, Ryukoku University, Seta, Otsu, Shiga, 520-21 Japan
27.3 Duration Constraints for the Speech Input Interface in the MULTIWORKS Project 1197
Haiyan Ye and Jean Caelen, Institut de la Communication Parlée, UA 368, ENSERG/INPG-Université Stendhal, 46, Av. F. Viallet, F-38031 Grenoble Cedex, France
27.4 A Chinese Continuous Speech Recognition System Using the State Transition Models both of Phonemes and Words 1201
Zhi-Ping Hu and Satoshi Imai, Research Laboratory of Precision Machinery and Electronics, Tokyo Institute of Technology, Nagatsuta, Midori-ku, Yokohama, 227 Japan
27.5 A New Training Method for Multi-Phone Speech Units for Use in a Hidden Markov Model Speech Recognition System 1205
Jade Goldstein, Akio Amano, Hideki Murayama, Mariko Izawa and Akira Ichikawa, University of California, Santa Barbara, CA 93106, U.S.A.
27.6 Prediction for Phoneme/Syllable/Word-Category and Identification of Language Using HMM 1209
Yoshio Ueda and Seiichi Nakagawa, Department of Information & Computer Sciences, Toyohashi University of Technology, Tempaku-cho, Toyohashi, 441 Japan
27.7 Performance Evaluation in Speech Recognition System Using Transition Probability between Linguistic Units 1213
Takashi Otsuki, Shozo Makino, Toshio Sone and Ken'iti Kido, Research Center for Applied Information Sciences, Tohoku University, 2-1-1 Katahira, Aoba, Sendai, 980 Japan
27.8 Sentence Recognition Method Using Word Cooccurrence Probability and Its Evaluation 1217
Isao Murase and Seiichi Nakagawa, Department of Information & Computer Sciences, Toyohashi University of Technology, Tempaku-cho, Toyohashi, 441 Japan
27.9 A Knowledge-Based Understanding System for the Chinese Spoken Language 1221
Yanghai Lu and Beiqian Dai, Department of Electronics Engineering, University of Science and Technology of China, Hefei, Anhui, 230026, China
27.10 Conversational Speech Understanding Based on Cooperative Problem Solving 1225
Akio Komatsu, Eiji Oohira and Akira Ichikawa, Central Research Laboratory, Hitachi, Ltd., 1-280 Higashi-Koigakubo, Kokubunji, Tokyo, 185 Japan
27.11 A One-Pass Search Algorithm for Continuous Speech Recognition Directed by Context-Free Phrase Structure Grammar 1229
Michio Okada, NTT Basic Research Laboratories, Nippon Telegraph and Telephone Corporation, 3-9-11 Midori-cho, Musashino-shi, Tokyo, 180 Japan
27.12 A Blackboard Architecture for a Word Hypothesizer and a Chart Parser Interaction in an ASR System 1233
Andrea Di Carlo and Rino Falcone, Fondazione Ugo Bordoni, Via B. Castiglione, 59-00142 Rome, Italy
27.13 Heuristic Search Problems in a Natural Language Task Oriented Spoken Man-Machine Dialogue System 1237
P. Mousel, J.M. Pierrel and A. Roussanaly, CRP-CU, 162a, Avenue de la Faïencerie, L-1511 Luxembourg, G.D. Luxembourg
27.14 The Making of a Speech-to-Speech Translation System: Some Findings from the ΦDMDIALOG Project 1241
Hiroaki Kitano, Center for Machine Translation, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
27.15 Using High Level Knowledge Sources as a Means of Recovering Ill-Formed Japanese Sentences Distorted by Ambient Noise 1245
K.H. Loken-Kim, Yasuhiro Nara and Shinta Kimura, Software Laboratory, Fujitsu Laboratories Ltd., 1015 Kamikodanaka, Nakahara-ku, Kawasaki, 211 Japan
27.16 Tools for Designing Dialogues in Speech Understanding Interfaces 1249
Anders Baekgaard and Paul Dalsgaard, Speech Technology Centre, Aalborg University, 7 Fredrik Bajers Vej, DK-9220 Aalborg, Denmark
27.17 A Method for Expressing Associative Relations Using Fuzzy Concepts - Aiming at Advanced Speech Recognition - 1253
Osamu Takizawa and Masuzo Yanagida, Communications Research Laboratory, Kansai Advanced Research Center, Ministry of Posts and Telecommunications, 588-2, Iwaoka, Iwaoka-cho, Nishi-ku, Kobe, 674 Japan
27.18 Bilingual Speech Interface for a Bidirectional Machine Translation System 1257
Jean-Pierre Tubach, Raymond Descout and Pierre Isabelle, Signal Department, TELECOM Paris, CNRS, URA 820, France
27.19 Optimum Spectral Peak Track Interpretation in Terms of Formants 1261
Yves Laprie, CRIN-INRIA, BP 329, 54506 Vandœuvre-lès-Nancy, France
27.20 A Speech Understanding System 1265
Thierry Spriet, Groupe Intelligence Artificielle, Parc Scientifique et Technologique de Luminy, CNRS URA 816, 136, avenue de Luminy, Case 901, 13288 Marseille Cedex 9, France
27.21 Speaker Identification Based on Multipulse Excitation and LPC Vocal-Tract Model 1269
Seiichiro Hangai and Kazuhiro Miyauchi, Department of Electrical Engineering, Science University of Tokyo, 1-3 Kagurazaka, Shinjuku-ku, Tokyo, 162 Japan
27.22 A Neural Network Based Speaker Verification System 1273
I-Chang Jou, Su-Ling Lee, Min-Tau Lin, Chih-Yuan Tseng, Shih-Shien Yu and Yuh-Juain Tsay, Telecommunication Laboratories, Ministry of Communications, P.O. Box 71, Chung-Li, Taiwan
27.23 Speaker Recognition Using Static and Dynamic Cepstral Feature by a Learning Neural Network 1277
Hujun Yin and Tong Zhou, Department of Electrical Engineering, Tongji University, 1239 Sipin Rd., Shanghai 200092, China
THURSDAY AFTERNOON Nov. 22
Session 28: Speech I/O Assessment and Database II
Time: 13:30 to 15:00, November 22, 1990
Place: Hall A (502)
CHAIRPERSONS:
Victor W. Zue, Laboratory for Computer Science, Massachusetts Institute of Technology;
Takayuki Nakajima, Electrotechnical Laboratory
28.1 A National Database of Spoken Language: Concept, Design, and Implementation 1281
J.B. Millar, P. Dermody, J.M. Harrington and J. Vonwiller, Computer Sciences Laboratory, Research School of Physical Sciences, Australian National University, GPO Box 4, ACT 2610, Australia
28.2 The Italian National Database for Speech Recognition 1285
Giuseppe Castagneri and Kyriaki Vagges, CSELT, Via G. Reiss Romoli 274, 10148 Torino, Italy
28.3 How Useful Are Speech Databases for Rule Synthesis Development and Assessment? 1289
Louis C. W. Pols, Institute of Phonetic Sciences, University of Amsterdam, Herengracht 338, 1016 CG Amsterdam, Netherlands
28.4 EUR-ACCOR: A Multi-Lingual Articulatory and Acoustic Database 1293
W. J. Hardcastle and A. Marchal, Speech Research Laboratory, Department of Linguistic Science, University of Reading, P.O. Box 218, Reading RG6 2AA, U.K.
THURSDAY AFTERNOON Nov. 22
Session 29: Dialogue Modeling and Processing
Time: 13:30 to 15:00, November 22, 1990
Place: Hall B (501)
CHAIRPERSONS:
Philip R. Cohen, Artificial Intelligence Center, SRI International;
Katsuhiko Shirai, Department of Electrical Engineering, Waseda University
29.1 Conversational Turn-Taking Model Using Petri Net 1297
Naotoshi Osaka, NTT Basic Research Laboratories, Nippon Telegraph and Telephone Corporation, 3-9-11 Midori-cho, Musashino-shi, Tokyo, 180 Japan
29.2 Dialog Management System MASCOTS in Speech Understanding System 1301
Tetsuya Yamamoto, Yoshikazu Ohta, Yoichi Yamashita and Riichiro Mizoguchi, Faculty of Engineering, Kansai University, 3-3-35 Yamate-cho, Suita, Osaka, 564 Japan
29.3 Spoken Language in Interpreted Telephone Dialogues 1305
Sharon L. Oviatt, Philip R. Cohen and Ann M. Podlozny, Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, California 94025, U.S.A.
29.4 Linguistic Knowledge for Spoken Dialogue Processing 1309
Tsuyoshi Morimoto and Toshiyuki Takezawa, ATR Interpreting Telephony Research Laboratories, Seika-cho, Souraku-gun, Kyoto, 619-02 Japan
29.5 SPICOS II - A Speech Understanding Dialogue System 1313
Harald Höge, ZFE IS KOM 3, Siemens AG, Otto-Hahn-Ring 6, 8000 München 83, Germany (F.R.G.)
29.6 Recent Progress on the MIT VOYAGER Spoken Language System 1317
Victor W. Zue, James R. Glass, Dave Goddeau, David Goodine, Hong C. Leung, Michael K. McCandless, Michael S. Phillips, Joseph Polifroni, Stephanie Seneff and Dave Whitney, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
THURSDAY AFTERNOON Nov. 22
Session 30: Language Acquisition
Time: 13:30 to 15:00, November 22, 1990
Place: Hall C (504, 505)
CHAIRPERSONS:
Tatiana Slama-Cazacu, Department of Linguistics, University of Bucharest;
Ichiro Miura, Department of English, Kyoto University of Education
30.1 The Source-Filter Model of Speech Production Applied to Early Speech Development 1321
Florien J. Koopmans-van Beinum, Institute of Phonetic Sciences, University of Amsterdam, Herengracht 338, 1016 CG Amsterdam, Netherlands
30.2 The Acquisition of Japanese Long Consonants, Syllabic Nasals, and Long Vowels 1325
Ichiro Miura, Department of English, Kyoto University of Education, Fukakusa Fujinomori-cho 1, Fushimi-ku, Kyoto, 612 Japan
30.3 Infants' Vocalization Observed in Verbal Communication: Acoustic Analysis 1329
Yoko Shimura, Satoshi Imaizumi, Kozue Saito, Tamiko Ichijima, Jan Gauffin, Pierre Halle and Itsuro Yamanouchi, Saitama University, 255 Shimo-okubo, Urawa, Saitama, 338 Japan
30.4 Perception of Mora Sounds in Japanese by Non-Native Speakers of Japanese 1333
Yukie Masuko and Shigeru Kiritani, Faculty of Foreign Languages, Tokyo University of Foreign Studies, 51-21, Nishigahara 4-chome, Kita-ku, Tokyo, 114 Japan
THURSDAY AFTERNOON Nov. 22
Session 31: Neural Networks for Speech Processing
Time: 13:30 to 15:00, November 22, 1990
Place: Hall D (Reception Hall) Poster
CHAIRPERSONS:
Hong C. Leung, Laboratory for Computer Science, Massachusetts Institute of Technology;
Masuzo Yanagida, Kansai Advanced Research Laboratory, Communication Research Laboratory
31.1 Continuous Speech Recognition on the Resource Management Database Using Connectionist Probability Estimation 1337
N. Morgan, C. Wooters, H. Bourlard and M. Cohen, International Computer Science Institute, 1947 Center Street, Suite 600, Berkeley, CA 94704, U.S.A.
31.2 Neural Predictive Hidden Markov Model 1341
Eiichi Tsuboka, Yoshihiro Takada and Hisashi Wakita, Central Research Laboratories, Matsushita Electric Industrial Co., Ltd., 3-15, Yagumo-Nakamachi, Moriguchi, Osaka, 570 Japan
31.3 On the Robustness of HMM and ANN Speech Recognition Algorithms 1345
Yasuhiro Minami, Toshiyuki Hanazawa, Hitoshi Iwamida, Eric McDermott, Kiyohiro Shikano, Shigeru Katagiri and Masao Nakagawa, Faculty of Science and Technology, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama-shi, Kanagawa, 223 Japan
31.4 The TDNN-LR Large-Vocabulary and Continuous Speech Recognition System 1349
Hidefumi Sawai, ATR Interpreting Telephony Research Laboratories, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan
31.5 Rule-Driven Neural Networks for Acoustico-Phonetic Decoding 1353
Remy Bulot, Henri Meloni and Pascal Nocera, Département d'Informatique, Luminy Science Faculty, G.I.A., Case 901, 163 av. de Luminy, 13288 Marseille Cedex 9, France
31.6 Knowledge-Based Segmentation and Feature Maps for Speech Recognition 1357
Franck Poirier, Département Signal, Télécom Paris, 46 rue Barrault, 75013 Paris Cedex 13, France
31.7 Speaker-Independent English Alphabet Recognition: Experiments with the E-set 1361
Mark Fanty and Ronald A. Cole, Department of Computer Science and Engineering, The Oregon Graduate Institute of Science and Technology, 19600 NW Von Neumann Drive, Beaverton, OR 97006, U.S.A.
31.8 Neural Network Based Segmentation of Continuous Speech 1365
Pinaki Poddar and P.V.S. Rao, Computer Systems & Communications Group, Tata Institute of Fundamental Research, Bombay 400 005, India
31.9 A Normalization of Coarticulation of Connected Vowels Using Neural Network 1369
Tomio Takara and Motonori Tamaki, Department of Electronics & Information Engineering, University of the Ryukyus, 1 Senbaru, Nishihara, Okinawa, 903-01 Japan
31.10 Lip-Reading of Japanese Vowels Using Neural Networks 1373
Tomio Watanabe and Masaki Kohda, Department of Electrical & Information Engineering, Faculty of Engineering, Yamagata University, Yonezawa, Yamagata, 992 Japan
31.11 Application of the Compositional Representation to Lexical Access Using Neural Networks 1377
H. Lucke and F. Fallside, Engineering Department, Cambridge University, Trumpington Street, Cambridge, U.K.
31.12 A Multi-Layer Perceptron Classifier for Robust Chinese Frication/Vocalic Detection
Jianing Wei, Adrian Fourcin and Andrew Faulkner, Department of Phonetics and Linguistics, University College London, Wolfson House, 4 Stephenson Way, London NW1 2HE, U.K.