ICSLP 94
1994 International Conference on Spoken Language Processing
September 18-22, 1994
Pacific Convention Plaza Yokohama (PACIFICO)
Yokohama, Japan



TABLE OF CONTENTS

MONDAY MORNING Sep. 19
Session 1: Integration of Speech and Natural Language Processing

Time: 10:00 to 12:15, September 19, 1994
Place: Room A
Chairpersons:

Yoichi Takebayashi, Research & Development Center, Toshiba Corporation, Japan
Sheryl R. Young, School of Computer Science, Carnegie Mellon University, U.S.A.

1.1 An Efficient Predictive LR Parser Using Pause Information for Continuously Spoken Sentence Recognition 1
Toshiyuki Takezawa and Tsuyoshi Morimoto, ATR Interpreting Telecommunications Research Laboratories, 2-2, Hikari-dai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan

1.2 Integrating TDNN-based Diphone Recognition with Table-driven Morphology Parsing for Understanding of Spoken Korean 5
Kyunghee Kim, Geunbae Lee, Jong-Hyeok Lee and Hong Jeong, Dept. of Computer Science, POSTECH, P.O. Box 125, Pohang, 790-600, Korea

1.3 Implementation Issues and Parsing Speed Evaluation of HMM-LR Parser 9
Frank O. Wallerstein, Akio Amano and Nobuo Hataoka, Central Research Laboratory, Hitachi Ltd., 1-280, Higashi-koigakubo, Kokubunji, 185 Japan

1.4 One-Pass Continuous Speech Recognition Directed by Generalized LR Parsing 13
Kenji Kita, Yoneo Yano and Tsuyoshi Morimoto, Faculty of Engineering, Tokushima University, Tokushima, 770 Japan

1.5 A Continuous Speech Recognition System Integrating Additional Acoustic Knowledge Sources in a Data-driven Beam Search Algorithm 17
B. Plannerer, T. Einsele, M. Beham and G. Ruske, Lehrstuhl für Datenverarbeitung, Technische Universität München, Arcisstraße 21, 80290 München, Germany

1.6 A Context-Free Grammar Compiler for Speech Understanding Systems 21
Michael K. Brown, AT&T Bell Laboratories, Murray Hill, NJ 07974, U.S.A.

1.7 Probabilistic Constraint for Integrated Speech and Language Processing 25
Katashi Nagao, Koiti Hasida and Takashi Miyata, Sony Computer Science Laboratory Inc., 3-14-13, Higashi-Gotanda, Shinagawa-ku, Tokyo 141 Japan

1.8 A Non-Linear Architecture for Speech and Natural Language Processing 29
William Edmondson and Jon Iles, School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, U.K.

MONDAY MORNING Sep. 19
Session 2: Articulatory Motion

Time: 10:00 to 12:15, September 19, 1994
Place: Room B
Chairpersons:

Yuki Kakita, Dept. of Electronics, Kanazawa Institute of Technology, Japan
Maureen Stone, Dept. of Electrical and Computer Engineering, Dept. of Cognitive Science, The Johns Hopkins University, U.S.A.

2.1 Manifestations of Contrastive Emphasis in Jaw Movement in Dialogue 33
Donna Erickson, Kevin Lenzo and Masashi Sawada, Dept. of Speech and Hearing Science, The Ohio State University, Columbus, OH 43210-1002, U.S.A.

2.2 Jaw Targets for Strident Fricatives 37
Sook-hyang Lee and Mary E. Beckman, Dept. of Linguistics, Ohio State University, 222 Oxley Hall, 1712 Neil Avenue, Columbus, OH 43210-1298, U.S.A.

2.3 Jaw Motions in Speech are Controlled in (at Least) Three Degrees of Freedom 41
David J. Ostry and Eric Vatikiotis-Bateson, Dept. of Psychology, McGill University, 1205 Dr. Penfield Avenue, Montreal, Quebec H3A 1B1, Canada

2.4 Extracting Articulator Movement Parameters from a Videodisc-Based Cineradiographic Database 45
Mark K. Tiede and Eric Vatikiotis-Bateson, ATR Human Information Processing Research Laboratories, 2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-03 Japan

2.5 Tongue-palate Interactions in Consonants vs. Vowels 49
Maureen Stone and Andrew Lundberg, Dept. of Electrical and Computer Engineering, Dept. of Cognitive Science, Dept. of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, U.S.A.

2.6 Kinematic Analysis of Vowel Production in German 53
Philip Hoole, Christine Mooshammer and Hans G. Tillmann, Institut für Phonetik und Sprachliche Kommunikation, Munich University, Schellingstrasse 3, D-80799 Munich, Germany

2.7 Spread of CV and V-to-V Coarticulation in British English: Implications for the Intelligibility of Synthetic Speech 57
Sarah Hawkins and Andrew Slater, Dept. of Linguistics, University of Cambridge, Sidgwick Avenue, Cambridge CB3 9DA, U.K.

2.8 Mechanisms of Vowel Devoicing in Japanese 61
Mariko Kondo, Dept. of Linguistics, University of Edinburgh, 1F, Adam Ferguson Bldg., 40 George Square, Edinburgh, EH8 9LL, U.K.

MONDAY MORNING Sep. 19
Session 3: Cognitive Models for Spoken Language Processing

Time: 10:00 to 12:15, September 19, 1994
Place: Room C
Chairpersons:

William Marslen-Wilson, Birkbeck College, U.K.
Kazuhiko Kakehi, Nagoya University, Japan

3.1 The Abstractness and Specificity of Lexical Representations in Memory: Implications for Models of Spoken Word Recognition
Paul Luce, Department of Psychology, State University of New York, Buffalo, NY 14260, U.S.A.

3.2 The Development of Word Recognition 65
Peter W. Jusczyk, State University of New York, Buffalo, NY 14260, U.S.A.

3.3 Speech Perception as a Cognitive Process: the Role of Abstractness and Interface
William Marslen-Wilson, Dept. of Psychology, Birkbeck College, Malet Street, London, WC1E 7M4, U.K.

3.4 Competition and Segmentation in Spoken Word Recognition 71
Dennis Norris, James McQueen and Anne Cutler, MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, U.K.

MONDAY AFTERNOON Sep. 19
Session 4: Semantic Interpretation of Spoken Messages

Time: 13:45 to 17:00, September 19, 1994
Place: Room A
Chairpersons:

Renato De Mori, School of Computer Science, McGill University, Canada
Kazuhiko Ozeki, Dept. of Computer Science and Information Mathematics, The University of Electro-Communications, Japan

4.1 Recent Results in Automatic Learning Rules for Semantic Interpretation 75
Roland Kuhn and Renato De Mori, School of Computer Science, McGill University, 3480 University Street, Montreal, Quebec, H3A 2A7, Canada

4.2 Semantic Associations, Acoustic Metrics and Adaptive Language Acquisition 79
Allen L. Gorin, AT&T Bell Laboratories, 600 Mountain Avenue, P.O. Box 636, Murray Hill, NJ 07974-0636, U.S.A.

4.3 Extracting Information in Spontaneous Speech 83
Wayne Ward, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, U.S.A.

4.4 Coping with Aboutness Complexity in Information Extraction from Spoken Dialogues 87
Megumi Kameyama and Isao Arima, SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025, U.S.A.

4.5 An Example-Based Approach to Semantic Information Extraction from Japanese Spontaneous Speech 91
Otoya Shirotsuka and Ken'ya Murakami, Laboratory for Information Technology, NTT Data Communications Systems Corporation, Kowa Kawasaki Nishiguchi Bldg., 66-2, Horikawa-cho, Saiwai-ku, Kawasaki, 210 Japan

4.6 A Semantic Interpretation Based on Detecting Concepts for Spontaneous Speech Understanding 95
Akito Nagai, Yasushi Ishikawa and Kunio Nakajima, Computer & Information Systems Laboratory, Mitsubishi Electric Corporation, 5-1-1, Ofuna, Kamakura, 247 Japan

4.7 Discourse Structure for Spontaneous Spoken Interactions: Multi-Speaker vs. Human-Computer Dialogs 2227
Sheryl R. Young, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3890, U.S.A.

4.8 Cooperative Distributed Processing for Understanding Dialogue Utterances 99
Akira Shimazu, Kiyoshi Kogure and Mikio Nakano, NTT Basic Research Laboratories, 3-1, Morinosato-Wakamiya, Atsugi, 243-01 Japan

4.9 Incremental Elaboration in Generating and Interpreting Spontaneous Speech 103
Michio Okada, Satoshi Kurihara and Ryohei Nakatsu, Information Science Research Laboratory, NTT Basic Research Laboratories, 3-1, Morinosato-Wakamiya, Atsugi, 243-01 Japan

4.10 Semantic Analysis in a Robust Spoken Dialog System 107
W. Eckert and H. Niemann, Friedrich-Alexander-Universität Erlangen-Nürnberg, Lehrstuhl für Mustererkennung (Informatik 5), Martensstraße 3, 91058 Erlangen, Germany

4.11 A User-Initiated Dialogue Model and Its Implementation for Spontaneous Human-Computer Interaction 111
Hiroshi Kanazawa, Shigenobu Seto, Hideki Hashimoto, Hideaki Shinichi and Yoichi Takebayashi, Research and Development Center, Toshiba Corporation, Komukai Toshiba-cho, Saiwai-ku, Kawasaki, 210 Japan

MONDAY AFTERNOON Sep. 19
Session 5: Prosody

Time: 13:45 to 17:00, September 19, 1994
Place: Room B
Chairpersons:

John J. Ohala, University of California at Berkeley, U.S.A.
Hirokazu Satoh, NTT Advanced Technology Corp., Japan

5.1 Analysis of Voice Fundamental Frequency Contours of German Utterances Using a Quantitative Model 2231
Hansjörg Mixdorff and Hiroya Fujisaki, Dept. of Applied Electronics, Science University of Tokyo, 2641 Yamazaki, Noda, 278 Japan

5.2 Automatic Labeling of Phrase Accents in German 115
Andreas Kießling, Ralf Kompe, Anton Batliner, Heinrich Niemann and Elmar Nöth, Friedrich-Alexander-Universität Erlangen-Nürnberg, Lehrstuhl für Mustererkennung (Informatik 5), Martensstr. 3, D-91058 Erlangen, Germany

5.3 Testing of Word-Prosody Problem and the Theory of Synharmonism in Kazakh
Zhoumagaly Abouv, Dept. of Phonetics, International Society of Kazakh Language, Ulitsa Furmanova, Dom 229, Kvar. 23, 480099, Kazakhstan

5.4 Intonational Structure of Kumamoto Japanese: A Perceptual Validation 119
Kikuo Maekawa, The National Language Research Institute, 3-9-14, Nishiga'oka, Kita-ku, Tokyo, 115 Japan

5.5 Evaluation of Prosodic Transcription Labelling Reliability in the ToBI Framework 123
John F. Pitrelli, Mary E. Beckman and Julia Hirschberg, NYNEX Science & Technology, Inc., 500 Westchester Avenue, White Plains, NY 10604, U.S.A.; Ohio State University; AT&T Bell Laboratories

5.6 A Computational Model of Prosody Perception 127
Neil P. McAngus Todd and Guy J. Brown, Dept. of Music and Dept. of Computer Science, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield, S1 4DP, U.K.

5.7 Inter-Speaker Interaction in Speech Rhythm: Some Durational Properties of Sentences and Intersentence Intervals 131
Kuniko Kakita, Dept. of Liberal Arts and Sciences, Faculty of Engineering, Toyama Prefectural University, Kosugi-machi, Imizu-gun, Toyama, 939-03 Japan

5.8 The Final Lengthening Phenomenon in Swedish - A Consequence of Default Sentence Accent? 135
Bertil Lyberg and Barbro Ekholm, Telia Research AB, S-136 80 Haninge, Sweden

5.9 Concurrent Effects of Focal Stress, Postvocalic Voicing and Distinctive Vowel Length on Syllable-Internal Timing in Norwegian 139
Dawn M. Behne and Bente Moxness, Dept. of Phonetics & Linguistics, Umeå University, S-901 87 Umeå, Sweden

5.10 Prosodic Pattern of Utterance Units in Japanese Spoken Dialogs 143
Kazuyuki Takagi and Shuichi Itahashi, Institute of Information Sciences & Electronics, University of Tsukuba, 1-1-1, Tenno-dai, Tsukuba, 305 Japan

5.11 Some Prosodical Characteristics in Spontaneous Spoken Dialogue 147
Akira Ichikawa and Shinji Sato, Dept. of Information & Computer Sciences, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba, 263 Japan

MONDAY AFTERNOON Sep. 19
Session 6: Towards Natural Sounding Synthetic Speech - Articulatory and Source Modeling -

Time: 13:45 to 17:00, September 19, 1994
Place: Room C
Chairpersons:

Juergen Schroeter, AT&T Bell Laboratories, U.S.A.
Shigeru Kiritani, Research Institute of Logopedics & Phoniatrics, Faculty of Medicine, University of Tokyo, Japan

6.1 Wrestling the Two-mass Model to Conform with Real Glottal Wave Forms 151
Inger Karlsson and Johan Liljencrants, Dept. of Speech Communication and Music Acoustics, KTH, Box 70014, S-100 44, Stockholm, Sweden

6.2 Automatic Estimation of Voice Source Parameters 155
Helmer Strik and Louis Boves, Dept. of Language and Speech, University of Nijmegen, P.O. Box 9103, 6500 HD, Nijmegen, The Netherlands

6.3 Simultaneous Estimation of Vocal Tract and Voice Source Parameters with Application to Speech Synthesis 159
Wen Ding, Hideki Kasuya and Shuichi Adachi, Faculty of Engineering, Utsunomiya University, 2753, Ishii-machi, Utsunomiya, 321 Japan

6.4 Frication and Aspiration Noise Sources: Contribution of Experimental Data to Articulatory Synthesis 163
Pierre Badin, C. H. Shadle, Y. Pham Thi Ngoc, J. N. Carter, W. S. C. Chiu, C. Scully and K. Stromberg, Institut de la Communication Parlée, Université Stendhal, 46 Avenue Félix Viallet, F-38031 Grenoble Cedex 01, France

6.5 Vocal Tract Model and 3-dimensional Effect of Articulation 167
Nobuhiro Miki, Pierre Badin, Pham Thi Ngoc Y. and Yoshihiko Ogawa, Dept. of Electronic Engineering, Hokkaido University, Nishi-8, Kita-13, Kita-ku, Sapporo, 060 Japan

6.6 3-D FEM Analysis of Sound Propagation in the Nasal and Paranasal Cavities 171
Hisayoshi Suzuki, Jianwu Dang, Takayoshi Nakai, Akira Ishida and Hiroshi Sakakibara, Dept. of Electronics, Faculty of Engineering, Shizuoka University, 3-5-1, Johoku, Hamamatsu, 432 Japan

6.7 A Physiological Model of Speech Production and the Implication of Tongue-Larynx Interaction 175
Kiyoshi Honda, H. Hirai and J. Dang, ATR Human Information Processing Research Laboratories, 2-2, Hikari-dai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan

6.8 A Dynamical Articulatory Model Using Potential Task Representation 179
Masaaki Honda and Tokihiko Kaburagi, Information Science Research Laboratory, NTT Basic Research Laboratories, 3-1, Morinosato-Wakamiya, Atsugi, 243-01 Japan

6.9 Control of a Klatt Synthesizer by Articulatory Parameters 183
Kenneth N. Stevens, Corine A. Bickley and David R. Williams, Research Laboratory of Electronics, Massachusetts Institute of Technology, Rm 36-517, Cambridge, MA 02139, U.S.A.

MONDAY AFTERNOON Sep. 19
Session 7: Statistical Methods for Speech Recognition

Time: 13:45 to 17:00, September 19, 1994
Place: Room D (Poster)
Chairpersons:

Seiichi Nakagawa, Toyohashi University of Technology, Japan
Gerhard Rigoll, Faculty of Electrical Engineering, University of Duisburg, Germany

7.1 Speech Recognition Using HMM with Decreased Intra-group Variation in the Temporal Structure 187
Nobuaki Minematsu and Keikichi Hirose, Dept. of Electronic Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113 Japan

7.2 Spoken Word Recognition Using Phoneme Duration Information Estimated from Speaking Rate of Input Speech 191
Yukihiro Osaka, Shozo Makino and Toshio Sone, Graduate School of Information Sciences, Tohoku University, SKK Building, 2-1-1 Katahira, Aoba-ku, Sendai, 980 Japan

7.3 State Duration Constraint Using Syllable Duration for Speech Recognition 195
Yumi Wakita and Eiichi Tsuboka, Central Research Laboratories, Matsushita Electric Industrial Co., Ltd., 3-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-02 Japan

7.4 Statistical Modeling and Recognition of Rhythm in Speech 199
Satoru Hayamizu and Kazuyo Tanaka, Electrotechnical Laboratory, 1-1-4, Umezono, Tsukuba, 305 Japan

7.5 Recognition of Chinese Tones in Monosyllabic and Disyllabic Speech Using HMM 203
Xinhui Hu and Keikichi Hirose, Dept. of Electronic Engineering, Faculty of Engineering, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, 113 Japan

7.6 Chinese Speech Understanding and Spelling-Word Translation Based on the Statistics of Corpus 207
Jun Wu, Zuoying Wang, Jiasong Sun and Jin Guo, Dept. of Electronic Engineering, Tsinghua University, Beijing, 100084, China

7.7 State-CodeBook Based Quasi Continuous Density Hidden Markov Model with Applications to Recognition of Chinese Syllables 211
Ren-Hua Wang and Hui Jiang, Speech Communication Lab, University of Science and Technology of China, P.O. Box 4, Hefei, Anhui, 230027, China

7.8 Estimating Linear Discriminant Parameters for Continuous Density Hidden Markov Models 215
Eluned S. Parris and Michael J. Carey, Ensigma Ltd., Turing House, Station Road, Chepstow, Gwent NP6 5PB, U.K.

7.9 Discriminative State-Weighting in Hidden Markov Models 219
F. Wolfertstetter and G. Ruske, Lehrstuhl für Datenverarbeitung, Technische Universität München, Arcisstrasse 21, D-80333 München, Germany

7.10 Speech Recognition Using Tree-Structured Probability Density Function 223
Takao Watanabe, Koichi Shinoda, Keizaburo Takagi and Eiko Yamada, Information Technology Research Laboratories, NEC Corporation, 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, 216 Japan

7.11 Prediction of Word Confusabilities for Speech Recognition 227
David B. Roe and Michael D. Riley, AT&T Bell Laboratories, 600 Mountain Avenue, P.O. Box 636, Murray Hill, NJ 07974, U.S.A.

7.12 A Comparison Study of Output Probability Functions in HMMs Through Spoken Digit Recognition 231
Li Zhao, Hideyuki Suzuki and Seiichi Nakagawa, Dept. of Information and Computer Sciences, Toyohashi University of Technology, Tenpaku-cho, Toyohashi, 441 Japan

7.13 Connected Spoken Word Recognition Using a Many-State Markov Model 235
Tomio Takara, Naoto Matayoshi and Kazuya Higa, Dept. of Information Engineering, College of Engineering, University of the Ryukyus, 1 Senbaru, Nishihara-cho, Okinawa, 903-01 Japan

7.14 Global Optimisation of HMM Input Transformations 239
Finn Tore Johansen, Norwegian Institute of Technology, Norwegian Telecom Research, P.O. Box 83, N-2007 Kjeller, Norway

7.15 Nonstationary-state Hidden Markov Model with State-dependent Time Warping: Application to Speech Recognition 243
Don X. Sun and Li Deng, Dept. of Applied Math. and Stats., SUNY at Stony Brook, NY 11794-3600, U.S.A.

7.16 Automatic Word Recognition Based on Second-Order Hidden Markov Models 247
Jean-Francois Mari and Jean-Paul Haton, CRIN/CNRS & INRIA-Lorraine, BP 239, 54506 Vandoeuvre-les-Nancy, France

7.17 On the Application of Multiple Transition Branch Hidden Markov Models to Chinese Digit Recognition 251
Xixian Chen, Yinong Li, Xiaoming Ma and Lie Zhang, Beijing University of Posts and Telecommunications, Campus Box 103, Beijing, 100088, China

7.18 Parallel Model Combination on a Noise Corrupted Resource Management Task 255
M. J. F. Gales and S. J. Young, Dept. of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, U.K.

7.19 Robust Signal Preprocessing for HMM Speech Recognition in Adverse Conditions 259
Jean-Baptiste Puel and Régine André-Obrecht, IRIT-URA CNRS 1399, Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse Cedex, France

7.20 A Study on Viterbi Best-First Search for Isolated Word Recognition Using Duration-Controlled HMM 263
Masaharu Katoh and Masaki Kohda, Faculty of Engineering, Yamagata University, Yonezawa, 992 Japan

7.21 An HMM Duration Control Algorithm with a Low Computational Cost 267
Satoshi Takahashi, Yasuhiro Minami and Kiyohiro Shikano, NTT Human Interface Laboratories, 3-9-11 Midori-cho, Musashino, 180 Japan

7.22 Fast Log-Likelihood Computation for Mixture Densities in a High-Dimensional Feature Space 271
Peter Beyerlein, Philips GmbH Forschungslaboratorium Aachen, P.O. Box 1980, D-52021 Aachen, Germany

7.23 Time Synchronous Heuristic Search in a Stochastic Segment Based Recognizer 275
Nick Cremelie and Jean-Pierre Martens, Electronics and Information Systems Dept., University of Gent, Sint-Pietersnieuwstraat 41, B-9000 Gent, Belgium

7.24 Applying Speech Verification to a Large Data Base of German to Obtain a Statistical Survey about Rules of Pronunciation 279
Maria-Barbara Wesenick and Florian Schiel, Institut für Phonetik und Sprachliche Kommunikation, Universität München, München, Germany

7.25 Structure of Allophonic Models and Reliable Estimation of the Contextual Parameters 283
D. Jouvet, K. Bartkova and A. Stouff, France Telecom, CNET Lannion, LAA/TSS/RCP, Route de Trégastel, 22300 Lannion, France

7.26 A Probabilistic Framework for Word Recognition Using Phonetic Features 287
Christoph Windheuser, Frédéric Bimbot and Patrick Haffner, Dept. Signal, Télécom Paris, CNRS URA 820, 46 Rue Barrault, 75634 Paris Cedex 13, France

7.27 Nonlinear Time Alignment in Stochastic Trajectory Models for Speech Recognition 291
Mohamed Afify, Yifan Gong and Jean-Paul Haton, CRIN-CNRS & INRIA Lorraine, BP 239, 54506 Vandoeuvre-les-Nancy, France

7.28 Connected Digit Recognition Using Connectionist Probability Estimators and Mixture-Gaussian Densities 295
David M. Lubensky, Ayman O. Asadi and Jayant M. Naik, Speech Recognition and Language Understanding Laboratory, NYNEX Science and Technology, Inc., 500 Westchester Avenue, White Plains, NY 10604, U.S.A.

7.29 A Trellis-Based Implementation of Minimum Error Rate Training 299
Kazuya Takeda, Tetsunori Murakami, Shingo Kuroiwa and Seiichi Yamamoto, KDD R&D Laboratories, 2-1-15, Ohara, Kami-Fukuoka, 356 Japan

7.30 Concatenated Training of Subword HMMs Using Detected Labels 303
Jie Yi, Media Laboratory, Oki Electric Industry Co., Ltd., 550-5, Higashi-asakawa-cho, Hachioji, 193 Japan

7.31 An Initial Study on Speaker Adaptation for Mandarin Syllable Recognition with Minimum Error Discriminative Training 307
Chih-Heng Lin, Pao-Chung Chang and Chien-Hsing Wu, Telecommunication Laboratories, Ministry of Communications, Taiwan, R.O.C.

MONDAY AFTERNOON Sep. 19
Session 8: Phonetics & Phonology I

Time: 13:45 to 17:00, September 19, 1994
Place: Room E (Poster)
Chairpersons:

William J. Hardcastle, Dept. of Speech and Language Sciences, Queen Margaret College, U.K.
Masatake Dantsuji, Dept. of English Literature, Kansai University, Japan

8.1 Phonetic Underspecification in Schwa 311
Yuko Kondo, Dept. of Linguistics, University of Edinburgh, Adam Ferguson Building, George Square, Edinburgh, EH8 9LL, U.K.

8.2 Some Remarks on the Compound Accent Rule in Japanese 315
Shin-ichi Tanaka and Haruo Kubozono, Dept. of Japanese, Osaka University of Foreign Studies, Minoo, Osaka, 562 Japan

8.3 Modification of Acoustic Features in Russian Connected Speech 319
R. K. Potapova, Moscow State Linguistic University, Ostozhenka 38, 119837, Moscow, Russia

8.4 A Prosodic Analysis of Three Sentence Types with "WH" Words in Korean 323
Sun-Ah Jun and Mira Oh, Linguistics Dept., UCLA, 405 Hilgard Avenue, Los Angeles, CA 90024-1543, U.S.A.

8.5 Distinguishing the Voiceless Fricatives F and TH in English: A Study of Relevant Acoustic Properties 327
Kazue Hata, Heather Moran and Steve Pearson, Speech Technology Laboratory, Panasonic Technologies, Inc., 3888 State Street, Santa Barbara, CA 93105, U.S.A.

8.6 Correlation Analysis Between Speech Power and Pitch Frequency for Twenty Spoken Languages 331
Kenzo Itoh, NTT Human Interface Laboratories, 1-2356, Take, Yokosuka, 238-03 Japan

8.7 On Gestural Reduction and Gestural Overlap in Korean and English /PK/ Clusters 335
Jongho Jun, Linguistics Dept., UCLA, 405 Hilgard Avenue, Los Angeles, CA 90024-1543, U.S.A.

8.8 Intonation Contours and the Prominence of F0 Peaks 339
Carlos Gussenhoven and Toni Rietveld, Dept. of English and Dept. of Language and Speech, University of Nijmegen, NL-6526 HT Nijmegen, The Netherlands

8.9 Phonation Types Analysis in Standard Chinese 343
Agnès Belotel-Grenié and Michel Grenié, Université de Nice-Sophia Antipolis, URA-CNRS 1235 Langues, Langage & Cognition, 1361, Route des Lucioles, F-06560 Valbonne, France

8.10 Accent Phrase Segmentation by Finding N-Best Sequences of Pitch Pattern Templates 347
Mitsuru Nakai and Hiroshi Shimodaira, Dept. of Information Eng., Faculty of Eng., Tohoku University, Sendai, 980 Japan

8.11 Sound Similarity Judgments and Segment Prominence: A Cross-Linguistic Study 351
Bruce L. Derwing and Terrance M. Nearey, Dept. of Linguistics, University of Alberta, Edmonton, AB T6G 2E7, Canada

8.12 Analysis and Synthesis of Accent and Intonation in Standard Spanish 355
Hiroya Fujisaki, Sumio Ohno, Kei-ichi Nakamura, Miguelina Guirao and Jorge Gurlekian, Dept. of Applied Electronics, Science University of Tokyo, 2641, Yamazaki, Noda, 278 Japan

8.13 Italian Clusters in Continuous Speech 359
E. Farnetani and M. Busa, Centro di Fonetica CNR, Padova, Italy

8.14 Rhythmic Constraints in Durational Control 363
Cynthia Grover and Jacques Terken, Institute for Perception Research, Postbus 513, 5600 MB Eindhoven, The Netherlands

8.15 Further Evidence for Bi-Moraic Foot in Japanese 367
Kazutaka Kurisu, Graduate School of Dokkyo University, 1-1, Gakuen-cho, Soka, 340 Japan

8.16 A Model for Generating Self-Repairs 371
Yuji Sagawa, Masahiro Ito, Noboru Ohnishi and Noboru Sugie, Dept. of Information Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-01 Japan

8.17 A Study on Interjections in Korean Spoken Language
Kee-Ho Kim, Joo-Kyeong Lee, Minsuck Song and Tae-Yeob Jang, Dept. of English, Korea University, Anam-dong, Seoul 136-701, Korea

8.18 Accent Identification with a View to Assisting Recognition (Work in Progress) 375
Chris Cleirigh and Julie Vonwiller, Speech Technology Research Group, Dept. of Electrical Engineering, The University of Sydney, P.O. Box 74, Wentworth Bldg., NSW 2006, Australia

8.19 Phonetic, Phonological and Morpho-Syntactic and Semantic Functions of Segmental Duration in Spoken Telugu: Acoustic Evidence 379
K. Nagamma Reddy, Department of Linguistics, Osmania University, Hyderabad, 500 007, India

8.20 Timing Strategies within the Paragraph 383
Zita McRobbie-Utasi, Dept. of Linguistics, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada

8.21 The Effect of the Following Vowel on the Frequency Normalization in the Perception of Voiceless Stop Consonants 387
Sotaro Sekimoto, Faculty of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113 Japan

8.22 Intonational Variations and the Structure of Discourse
Li-Chiung Yang, Dept. of Linguistics, Georgetown University

8.23 Withdrawn 391

8.24 Features of Prominent Particles in Japanese Discourse - Frequency, Functions and Acoustic Features - 395
Toshiko Muranaka and Noriyo Hara, Faculty of Integrated Arts and Sciences, Tokushima University, 1-1, Minami-Josanjima-cho, Tokushima, 770 Japan

8.25 Vowel Quality Assessment Based on Analysis of Distinctive Features 399
Shuping Ran, Bruce Millar and Iain Macleod, Computer Sciences Laboratory, Research School of Information Sciences & Engineering, Australian National University, ACT 0200, Canberra, Australia

8.26 Differences in the Fluctuation of Attention During the Listening of Natural and Synthetic Passages 403
Cristina Delogu, Stella Conte and Ciro Sementina, Fondazione Ugo Bordoni, via B. Castiglione 59, 00142 Rome, Italy

8.27 Production and Perception of Words with Identical Segmental Structure but Different Number of Syllables 407
Barbara Heuft and Thomas Portele, Institut für Kommunikationsforschung und Phonetik, Universität Bonn, Poppelsdorfer Allee 47, 53115 Bonn, Germany

8.28 Generation of Pronunciations from Orthographies Using Transformation-Based Error-Driven Learning 411
Caroline B. Huang, Mark A. Son-Bell and David M. Baggett, Signi Corp, 1318 Beacon St., Brookline, MA, U.S.A.

8.29 Characteristics of Mispronunciation and Hesitation in Japanese Tongue Twister 415
Hidenori Usuki, Jouji Suzuki and Tetsuya Shimamura, Dept. of Information and Computer Sciences, Saitama University, 255 Shimo-okubo, Urawa, 338 Japan

8.30 A Duration Study of Speech Vowels Produced in Noise 419
Jean-Claude Junqua, Speech Technology Laboratory, Panasonic Technologies, Inc., 3888 State Street, Santa Barbara, CA 93105, U.S.A.

8.31 PROTRAN: A Prosody Transplantation Tool for Text-to-Speech Applications 423
B. Van Coile, L. Van Tichelen, A. Vorstermans, J. W. Jang and M. Staessen, Lernout & Hauspie Speech Products, Sint-Krispijnstraat 7, B-8900 Ieper, Belgium

8.32 Complementary Phonology, a Theoretical Frame for Labelling an Acoustic Data Base of Dialogues 427
Klaus J. Kohler, Institut für Phonetik und Digitale Sprachverarbeitung, Christian-Albrechts-Universität zu Kiel, D-24098 Kiel, Germany

TUESDAY MORNING Sep. 20
Session 9: Adaptation and Training Techniques for Speech Recognition

Time: 09:00 to 12:15, September 20, 1994
Place: Room A
Chairpersons:

Lawrence R. Rabiner, AT&T Bell Laboratories, U.S.A.
Kiyohiro Shikano, Nara Advanced Institute of Science and Technology, Japan

9.1 An Unsupervised Speaker Adaptation Method for Continuous Parameter HMM by Maximum A Posteriori Probability Estimation 431
Yutaka Tsurumi and Seiichi Nakagawa, Dept. of Information and Computer Science, Toyohashi University of Technology, 1-1, Tenpaku-cho, Toyohashi, 441 Japan

9.2 Unsupervised Speaker Adaptation for Speech Recognition Using Demi-Syllable HMM 435
Koichi Shinoda and Takao Watanabe, Information Technology Research Laboratories, NEC Corporation, 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, 216 Japan

9.3 Minimum Error Rate Training of Inter-Word Context Dependent Acoustic Model Units in Speech Recognition 439
W. Chou, C.-H. Lee and B.-H. Juang, AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974, U.S.A.

9.4 Incremental Speaker Adaptation Using Phonetically Balanced Training Sentences for Mandarin Syllable Recognition Based on Segmental Probability Models 443
Jia-lin Shen, Hsin-min Wang, Ren-yuan Lyu and Lin-shan Lee, Dept. of Electrical Engineering, Rm. 520, National Taiwan University, Taipei, Taiwan

9.5 Incremental Training of a Speech Recognizer for Voice Dialling-by-Name 447
L. Fissore, G. Micca and F. Ravera, CSELT - Centro Studi e Laboratori Telecomunicazioni, Via G. Reiss Romoli 274, 10148 Torino, Italy

9.6 Speaker Adaptation of Continuous Density HMMs Using Multivariate Linear Regression 451
C. J. Leggetter and P. C. Woodland, Engineering Dept., Cambridge University, Trumpington Street, Cambridge CB2 1PZ, U.K.

9.7 Speaker Adaptation Based on Transfer Vectors of Multiple Reference Speakers 455
K. Ohkura, H. Ohnishi and M. Iida, Hypermedia Research Center, Sanyo Electric Co., Ltd., 1-18-13 Hashiridani, Hirakata, Osaka 573 Japan

9.8 Experiments with a New Algorithm for Fast Speaker Adaptation 459
Nikko Ström, Department of Speech Communication and Music Acoustics, KTH, Box 70014, S-10044, Stockholm, Sweden

9.9 A Study of Applying Adaptive Learning to a Multi-module System 463
Tung-Hui Chiang, Yi-Chung Lin and Keh-Yih Su, Dept. of Electrical Engineering, National Tsing Hua University, Hsinchu, 300, Taiwan

9.10 Speaker Adaptation Based on Fuzzy Vector Quantization 467
Jun'ichi Nakahashi and Eiichi Tsuboka, Central Research Laboratories, Matsushita Electric Industrial Co., Ltd., 3-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02 Japan

9.11 A Study on the Simulated Annealing of Self Organized Map Algorithm for Korean Phoneme Recognition 471
Myung-Kwang Kang, Seong-Kwon Lee and Soon-Hyob Kim, Dept. of Computer Engineering, Kwang-Woon University, 447-1, Wolgye-Dong, Nowon-ku, Seoul, 139-701, Korea

9.12 Discriminative Training of Garbage Model for Non-Vocabulary Utterance Rejection 475
Celinda De la Torre and Alejandro Acero, Speech Technology Group, Telefónica Investigación y Desarrollo, Emilio Vargas 6, 28043 Madrid, Spain

TUESDAY MORNING Sep. 20
Session 10: Phonetics & Phonology II

Time: 09:00 to 12:15, September 20, 1994
Place: Room B
Chairpersons:

Osamu Fujimura, Dept. of Speech & Hearing Science, The Ohio State University, U.S.A.
Takashi Otake, Dokkyo University, Japan

10.1 Distribution of Devoiced High Vowels in Korean 479
Sun-Ah Jun, Linguistics, University of California, Los Angeles, Los Angeles, CA 90024-1543, U.S.A.

10.2 CV as a Phonological Unit in Korean 483
Yeo Bom Yoon, Dept. of Linguistics, University of Alberta, Edmonton, AB T6G 2E7, Canada

10.3 Experiments on the Syllable in Hindi 487
Manjari Ohala, San Jose State University, 1149 Hillview Road, Berkeley, CA 94708, U.S.A.