Analysis and Synthesis of Speech



7. References

[1] Bailey, B. J., Johnson, J. T., and Newlands, S. D., Head and Neck Surgery: Otolaryngology, Lippincott Williams & Wilkins, Philadelphia, PA, USA, 4th Ed., 1779-1780, 2006.

[2] American Cancer Society, Cancer Facts and Figures 2012, Atlanta, GA: American Cancer Society. Retrieved on December 26, 2012.

[3] Denby, B., Schultz, T., Honda, K., Hueber, T., Gilbert, J. M., and Brumberg, J. S., Silent speech interfaces, Speech Communication, 52:270-287, 2010.

[4] Sproat, R. (Ed.), Multilingual text-to-speech synthesis: The Bell Labs approach, in Computational Linguistics (1st ed.), vol. 24, p. 328, Springer, 1998.

[5] Hueber, T., Benaroya, E.-L., Chollet, G., Denby, B., Dreyfus, G., and Stone, M., Development of a silent speech interface driven by ultrasound and optical images of the tongue and lips, Speech Communication, 52:288-300, 2010.

[6] Denby, B., Cai, J., Roussel, P., Dreyfus, G., Crevier-Buchman, L., Pillot-Loiseau, C., Hueber, T., and Chollet, G., Tests of an interactive, phrasebook-style post-laryngectomy voice-replacement system, Proc. 17th International Congress of Phonetic Sciences, Hong Kong, China, 572-575, 2011.

[7] Jorgensen, C. and Dusan, S., Speech interfaces based upon surface electromyography, Speech Communication, 52:354-366, 2010.

[8] Heaton, J. T., Robertson, M., and Griffin, C., Development of a wireless electromyographically controlled electrolarynx voice prosthesis, Proc. of the 33rd Annual Intl. Conf. of the IEEE Engineering in Medicine & Biology Society, Boston, MA, 5352-5355, 2011.

[9] Wang, J., Samal, A., Green, J. R., and Rudzicz, F., Sentence recognition from articulatory movements for silent speech interfaces, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 4985-4988, 2012.

[10] Wang, J., Samal, A., Green, J. R., and Rudzicz, F., Whole-word recognition from articulatory movements for silent speech interfaces, Proc. Interspeech, Portland, OR, 1327-1330, 2012.

[11] Fagan, M. J., Ell, S. R., Gilbert, J. M., Sarrazin, E., and Chapman, P. M., Development of a (silent) speech recognition system for patients following laryngectomy, Medical Engineering & Physics, 30(4):419-425, 2008.

[12] King, S., Frankel, J., Livescu, K., McDermott, E., Richmond, K., and Wester, M., Speech production knowledge in automatic speech recognition, Journal of the Acoustical Society of America, 121(2):723-742, 2007.

[13] Livescu, K., Cetin, O., Hasegawa-Johnson, M., King, S., Bartels, C., Borges, N., Kantor, A., Lal, P., Yung, L., Bezman, A., Dawson-Haggerty, S., and Woods, B., Articulatory feature-based methods for acoustic and audio-visual speech recognition: Summary from the 2006 JHU Summer Workshop, Proc. Intl. Conf. on Acoustics, Speech, and Signal Processing, 621-624, 2007.

[14] Rudzicz, F., Articulatory knowledge in the recognition of dysarthric speech, IEEE Trans. on Audio, Speech, and Language Processing, 19(4):947-960, 2011.

[15] Uraga, E. and Hain, T., Automatic speech recognition experiments with articulatory data, Proc. Interspeech, 353-356, 2006.

[16] Sharma, H. V., Hasegawa-Johnson, M., Gunderson, J., and Perlman, A., Universal access: Speech recognition for talkers with spastic dysarthria, Proc. Interspeech, 1451-1454, 2009.

[17] Kantor, A., Pronunciation modeling for large vocabulary speech recognition, Ph.D. dissertation, Dept. Comput. Sci., University of Illinois, Urbana, 2011.

[18] Akdemir, E. and Ciloglu, T., The use of articulator motion information in automatic speech segmentation, Speech Communication, 50(7):594-604, 2008.

[19] Gilbert, J. M., Rybchenko, S. I., Hofe, R., Ell, S. R., Fagan, M. J., Moore, R. K., and Green, P., Isolated word recognition of silent speech using magnetic implants and sensors, Medical Engineering & Physics, 32(10):1189-1197, 2010.

[20] Green, J. R., Beukelman, D. R., and Ball, L. J., Algorithmic estimation of pauses in extended speech samples, Journal of Medical Speech-Language Pathology, 12:149-154, 2004.

[21] Kent, R. D., Adams, S. G., and Turner, G. S., Models of speech production, in Lass, N. J. (Ed.), Principles of Experimental Phonetics, Mosby, 1996.

[22] Kent, R. D. and Minifie, F. D., Coarticulation in recent speech production models, Journal of Phonetics, 5(2):115-133, 1977.

[23] Lin, J., Keogh, E., Lonardi, S., and Chiu, B., A symbolic representation of time series, with implications for streaming algorithms, Proc. of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, San Diego, CA, 2003.

[24] Mueen, A. and Keogh, E., Online discovery and maintenance of time series motifs, Proc. 16th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, Washington, DC, 1089-1098, 2010.

[25] Wei, L., Kumar, N., Lolla, V. N., Keogh, E., Lonardi, S., and Ratanamahatana, C. A., Assumption-free anomaly detection in time series, Proc. 17th International Scientific and Statistical Database Management Conference, Santa Barbara, CA, 237-240, 2005.

[26] Lin, J., Keogh, E., and Lonardi, S., Visualizing and discovering non-trivial patterns in large time series databases, Information Visualization, 4(2):61-82, 2005.

[27] Boser, B., Guyon, I., and Vapnik, V., A training algorithm for optimal margin classifiers, Conf. on Learning Theory (COLT), 144-152, 1992.

[28] Chang, C.-C. and Lin, C.-J., LIBSVM: a library for support vector machines, ACM Trans. on Intelligent Systems and Technology, 2(27):1-27, 2011.

[29] Wang, J., Silent speech recognition from articulatory motion, Ph.D. dissertation, Dept. Comput. Sci., Univ. of Nebraska-Lincoln, 2011.

[30] Shutts, R. E., Burke, K. S., and Creston, J. E., Derivation of twenty-five-word PB lists, Journal of Speech and Hearing Disorders, 29:442-447, 1964.

[31] Perkell, J. S., Cohen, M. H., Svirsky, M. A., Matthies, M. L., Garabieta, I., and Jackson, M. T. T., Electromagnetic midsagittal articulometer systems for transducing speech articulatory movements, Journal of the Acoustical Society of America, 92(6):3078-3096, 1992.

[32] Hoole, P. and Zierdt, A., Five-dimensional articulography, in Speech Motor Control: New Developments in Basic and Applied Research, B. Maassen and P. van Lieshout, Eds., Oxford University Press, ch. 20, pp. 331-349, 2010.

[33] Green, J. R., Wang, J., and Wilson, D. L., SMASH: A tool for articulatory data processing and analysis, Proc. Interspeech, 2013 (in press).

[34] Yunusova, Y., Green, J. R., and Mefferd, A., Accuracy assessment for AG500 electromagnetic articulograph, Journal of Speech, Language, and Hearing Research, 52(2):547-555, 2009.

[35] Berry, J., Accuracy of the NDI Wave speech research system, Journal of Speech, Language, and Hearing Research, 54:1295-1301, 2011.

[36] Katz, W., Bharadwaj, S., Rush, M., and Stettler, M., Influences of EMA receiver coils on speech production by normal and aphasic/apraxic talkers, Journal of Speech, Language, and Hearing Research, 49:645-659, 2006.

[37] Wang, J., Green, J. R., and Samal, A., Individual articulator's contribution to phoneme production, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Vancouver, Canada, 7795-89, 2013.

[38] Wang, J., Green, J. R., Samal, A., and Yunusova, Y., Articulatory distinctiveness of vowels and consonants: A data-driven approach, Journal of Speech, Language, and Hearing Research, 2013 (in press).

[39] Wang, J., Green, J. R., Samal, A., and Marx, D. B., Quantifying articulatory distinctiveness of vowels, Proc. Interspeech, Florence, Italy, 277-280, 2011.


[40] Green, J. R. and Wang, Y., Tongue-surface movement patterns during speech and swallowing, Journal of the Acoustical Society of America, 113:2820-2833, 2003.

[41] Westbury, J., X-ray Microbeam Speech Production Database User's Handbook, University of Wisconsin, 1994.

[42] Chen, W.-H., Loke, W.-F., Thompson, G., and Jung, B., A 0.5 V, 440 µW frequency synthesizer for implantable medical devices, IEEE Journal of Solid-State Circuits, 47:1896-1907, 2012.

[43] Park, H., Kiani, M., Lee, H. M., Kim, J., Block, J., Gosselin, B., and Ghovanloo, M., A wireless magnetoresistive sensing system for an intraoral tongue-computer interface, IEEE Transactions on Biomedical Circuits and Systems, 6(6):571-585, 2012.

[44] Cai, J., Denby, B., Roussel, P., Dreyfus, G., and Crevier-Buchman, L., Recognition and real time performances of a lightweight ultrasound based silent speech interface employing a language model, Proc. Interspeech, Florence, Italy, 1005-1008, 2011.

[45] Heracleous, P. and Hagita, N., Automatic recognition of speech without any audio information, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 2392-2395, 2011.

[46] Hofe, R., Ell, S. R., Fagan, M. J., Gilbert, J. M., Green, P. D., Moore, R. K., and Rybchenko, S. I., Small-vocabulary speech recognition using a silent speech interface based on magnetic sensing, Speech Communication, 55(1):22-32, 2013.

[47] Salvador, S. and Chan, P., Toward accurate dynamic time warping in linear time and space, Intelligent Data Analysis, 11(5):561-580, 2007.

[48] Frankel, J., Wester, M., and King, S., Articulatory feature recognition using dynamic Bayesian networks, Computer Speech & Language, 21(4):620-640, 2006.
