
characteristics in American English foreign accent

Source: 动视网; Editor: 小OO; Date: 2025-10-03 00:36:21
Bibliography

[1] Abercrombie, D. (1967), Elements of general phonetics (Edinburgh University Press, Edinburgh).

[2] Ainsworth, W.A. and Foster, H.M. (1985), “The use of dynamic frequency warping in a speaker-independent vowel classifier”, in De Mori, R. and Suen, C.Y. (eds.), Proceedings of the NATO Advanced Study Institute on New Systems and Architectures for Automatic Speech Recognition and Synthesis (Springer-Verlag, Berlin, Heidelberg): 389-403.

[3] Arslan, L.M. and Hansen, J.H.L. (1997), “A study of temporal features and frequency characteristics in American English foreign accent”, Journal of the Acoustical Society of America 102: 28-40.

[4] Assaleh, K.T. and Mammone, R.J. (1994), “New LP-Derived Features for Speaker Identification”, IEEE Transactions on Speech and Audio Processing 2: 630-638.

[5] Assmann, P.F., Nearey, T.M. and Hogan, J.T. (1982), “Vowel identification: Orthographic, perceptual, and acoustic aspects”, Journal of the Acoustical Society of America 71: 975-989.

[6] Atal, B.S. (1970), “Determination of the Vocal-Tract Shape Directly from the Speech Wave”, Journal of the Acoustical Society of America 47: S65.

[7] Atal, B.S. (1974), “Effectiveness of linear prediction characteristics of the speech wave for automatic speaker identification and verification”, Journal of the Acoustical Society of America 55: 1304-1312.

[8] Atal, B.S., Chang, J.J., Mathews, M.V. and Tukey, J.W. (1978), “Inversion of articulatory-to-acoustic transformation in the vocal tract by a computer-sorting technique”, Journal of the Acoustical Society of America 63: 1535-1555.

[9] Atal, B.S. and Hanauer, S.L. (1971), “Speech Analysis and Synthesis by Linear Prediction of the Speech Wave”, Journal of the Acoustical Society of America 50: 637-655.

[10] Auckenthaler, R. and Mason, J.S. (1997), “Equalizing sub-band error rates in speaker recognition”, Proceedings of the 5th European Conference on Speech Communication and Technology: 2303-2306.

[11] Badin, P. and Fant, G. (1984), “Notes on Vocal Tract Computation”, Speech Transmission Laboratory, Quarterly Progress and Status Report 2-3: 53-108.

[12] Baer, T., Gore, J.C., Gracco, L.C. and Nye, P.W. (1991), “Analysis of vocal tract shape and dimensions using magnetic resonance imaging: Vowels”, Journal of the Acoustical Society of America 90: 799-828.

[13] Bartholomew, W.T. (1934), “A Physical Definition of “Good Voice-Quality” in the Male Voice”, Journal of the Acoustical Society of America 6: 25-33.

[14] Beautemps, D., Badin, P. and Laboissière, R. (1995), “Deriving vocal-tract area functions from midsagittal profiles and formant frequencies: A new model for vowels and fricative consonants based on experimental data”, Speech Communication 16: 27-47.

[15] Békésy, G. von (1960), Experiments in hearing (McGraw-Hill, New York, Toronto, London).

[16] Bernard, J.R.L. (1967), “Some measurements of some sounds of Australian English”, unpublished Doctoral Thesis, The University of Sydney, Sydney, Australia.

[17] Bernard, J.R.L. (1970), “Toward the acoustic specification of Australian English”, Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung 23: 113-128.

[18] Bernard, J.R.L. (1989), “Quantitative aspects of the sounds of Australian English”, in Collins, P. and Blair, D. (eds.), Australian English: The Language of a New Society (University of Queensland Press, Queensland, Australia): 187-204.

[19] Besacier, L. and Bonastre, J.-F. (1997), “Independent Processing and Recombination of Partial Frequency Bands for Automatic Speaker Recognition”, Proceedings of the International Conference on Speech Processing: 575-579.

[20] Bladon, A. (1982), “Arguments against formants in the auditory representation of speech”, in Carlson, R. and Granström, B. (eds.), The Representation of Speech in the Peripheral Auditory System (Elsevier Biomedical Press, Amsterdam, New York, Oxford): 95-102.

[21] Bladon, R.A.W., Henton, C.G. and Pickering, J.B. (1983), “Outline of an Auditory Theory of Speaker Normalization”, in Van den Broecke, M.P.R. and Cohen, A. (eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (Foris Publications, Dordrecht, The Netherlands): 313-317.

[22] Bladon, R.A.W. and Lindblom, B. (1981), “Modeling the judgment of vowel quality differences”, Journal of the Acoustical Society of America 69: 1414-1422.

[23] Bogert, B.P. (1953), “On the Band Width of Vowel Formants”, Journal of the Acoustical Society of America 25: 791-792.

[24] Bonder, L.J. (1983a), “The n-Tube Formula and Some of its Consequences”, Acustica 52: 216-226.

[25] Bonder, L.J. (1983b), “Between Formant Space and Articulation Space”, Proceedings of the 10th International Congress of Phonetic Sciences: 347-353.

[26] Bonder, L.J. (1983c), “Equivalency of Lossless n-Tubes”, Acustica 53: 193-200.

[27] Borg, G. (1946), “Eine Umkehrung der Sturm-Liouvilleschen Eigenwertaufgabe” (“An inversion of the Sturm-Liouville eigenvalue problem”, in German), Acta Mathematica 78: 1-96.

[28] Broad, D.J. (1972), “Formants in Automatic Speech Recognition”, International Journal of Man-Machine Studies 4: 411-424.

[29] Broad, D.J. (1976), “Toward Defining Acoustic Phonetic Equivalence for Vowels”, Phonetica 33: 401-424.

[30] Broad, D.J. (1981), “Piecewise-planar vowel formant distributions across speakers”, Journal of the Acoustical Society of America 69: 1423-1429.

[31] Broad, D.J. (1982), “Generalised Acoustic Phonetics I. Determinants of Acoustic Parameter Values”, manuscript of talk presented at Speech Technology Laboratory, Santa Barbara, California, USA.

[32] Broad, D.J. and Shoup, J.E. (1975), “Concepts for Acoustic Phonetic Recognition”, in Reddy, D.R. (ed.), Speech Recognition: Invited Papers Presented at the 1974 IEEE Symposium (Academic Press, New York): 243-274.

[33] Broad, D.J. and Wakita, H. (1977), “Piecewise-planar representation of vowel formant frequencies”, Journal of the Acoustical Society of America 62: 1467-1473.

[34] Broad, D.J. and Wakita, H. (1978), “A phonetic approach to automatic vowel recognition”, in Bolc, L. (ed.), Speech Communication with Computers (Carl Hanser Verlag, München, Wien): 55-92.

[35] Burgess, N. (1969), “A spectrographic investigation of some diphthongal phonemes in Australian English”, Language and Speech 12: 238-246.

[36] Butler, S.J. and Wakita, H. (1982), “Articulatory constraints on vocal tract area functions and their acoustic implications”, Journal of the Acoustical Society of America 72: S79.

[37] Candille, L. and Meloni, H. (1995), “Automatic Speech Recognition Using Production Models”, Proceedings of the 13th International Congress of Phonetic Sciences 4: 256-259.

[38] Carré, R. (1971), “Identification des locuteurs; exploitation des données relatives aux fréquences des formants”, Proceedings of the 7th International Congress on Acoustics: 29-32.

[39] Carré, R., Chennoukh, S. and Mrayati, M. (1992), “Vowel-consonant-vowel transitions: Analysis, modeling, and synthesis”, Proceedings of the 2nd International Conference on Spoken Language Processing: 819-822.

[40] Carré, R. and Mrayati, M. (1991), “Vowel-vowel trajectories and region modeling”, Journal of Phonetics 19: 433-443.

[41] Charpentier, F. (1984), “Determination of the vocal tract shape from the formants by analysis of the articulatory-to-acoustic nonlinearities”, Speech Communication 3: 291-308.

[42] Chiba, T. and Kajiyama, M. (1958), The vowel: its nature and structure (Phonetic Society of Japan, Tokyo; first published 1941).

[43] Chistovich, L.A. and Lublinskaya, V.V. (1979), “The ‘center of gravity’ effect in vowel spectra and critical distance between the formants: Psychoacoustical study of the perception of vowel-like stimuli”, Hearing Research 1: 185-195.

[44] Chrystal, G. (1964), Algebra: An elementary text-book, Part 1, Seventh Edition (Chelsea Publishing Company, New York).

[45] Chung, H., Makino, S. and Kido, K. (1988), “Analysis, perception and recognition of Korean isolated vowels”, Paper presented at the Second Joint Meeting of the Acoustical Societies of America and Japan, Journal of the Acoustical Society of America 84: S213.

[46] Claes, T., Dologlou, I., ten Bosch, L. and Van Compernolle, D. (1997), “New transformations of cepstral parameters for automatic vocal tract length normalization in speech recognition”, Proceedings of the 5th European Conference on Speech Communication and Technology: 1363-1366.

[47] Clermont, F. (1991), “Formant-contour models of diphthongs: A study in acoustic phonetics and computer modelling of speech”, unpublished Doctoral Thesis, Computer Sciences Laboratory, Research School of Physical Sciences and Engineering, Australian National University, Canberra, Australia.

[48] Clermont, F. (1993), “Spectro-temporal description of diphthongs in F1-F2-F3 space”, Speech Communication 13: 377-390.

[49] Clermont, F. (1996), “Multi-speaker formant data on the Australian English vowels: A tribute to J.R.L. Bernard’s (1967) pioneering research”, Proceedings of the 6th Australian International Conference on Speech Science and Technology: 145-150.

[50] Clermont, F. and Broad, D.J. (1995), “Back-Front Classification of English Vowels using a Cepstrum-to-Formant Model”, Journal of the Acoustical Society of America 98: 2966.

[51] Clermont, F. and Mokhtari, P. (1994), “Frequency-band specification in cepstral distance computation”, Proceedings of the 5th Australian International Conference on Speech Science and Technology: 354-359.

[52] Coker, C.H. (1976), “A Model of Articulatory Dynamics and Control”, Proceedings of the IEEE: 452-460.

[53] Compton, A.J. (1963), “Effects of Filtering and Vocal Duration upon the Identification of Speakers, Aurally”, Journal of the Acoustical Society of America 35: 1748-1752.

[54] Cooper, C. and Clermont, F. (1994), “An investigation of the speaker factor in vowel nuclei”, Proceedings of the 5th Australian International Conference on Speech Science and Technology: 368-373.

[55] Crichton, R.G. and Fallside, F. (1974), “Linear Prediction Model of Speech Production with Applications to Deaf Speech Training”, Proceedings of IEE 121: 865-873.

[56] Dautrich, B.A., Rabiner, L.R. and Martin, T.B. (1983), “On the Effects of Varying Filter Bank Parameters on Isolated Word Recognition”, IEEE Transactions on Acoustics, Speech, and Signal Processing 31: 793-807.

[57] Davis, S.B. and Mermelstein, P. (1980), “Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences”, IEEE Transactions on Acoustics, Speech, and Signal Processing 28: 357-366.

[58] Delattre, P. (1951), “The physiological interpretation of sound spectrograms”, Publications of the Modern Language Association of America 66: 864-875.

[59] Delattre, P., Liberman, A.M., Cooper, F.S. and Gerstman, L.J. (1952), “An experimental study of the acoustic determinants of vowel color; observations on one- and two-formant vowels synthesised from spectrographic patterns”, Word 8: 195-210.

[60] Denes, P.B. (1963), “On the statistics of spoken English”, Journal of the Acoustical Society of America 35: 892-904.

[61] Di Benedetto, M.-G. and Liénard, J.-S. (1992), “Extrinsic normalization of vowel formant values based on cardinal vowels mapping”, Proceedings of the 2nd International Conference on Spoken Language Processing: 579-582.

[62] Duda, R.O. and Hart, P.E. (1973), Pattern Classification and Scene Analysis (John Wiley & Sons, New York).

[63] Dukiewicz, L. (1970), “Frequency-band dependence of speaker identification”, in Jassem, W. (ed.), Speech Analysis and Synthesis, Vol. II (Polish Academy of Sciences, Warsaw): 41-50.

[64] Dunn, H.K. (1961), “Methods of Measuring Vowel Formant Bandwidths”, Journal of the Acoustical Society of America 33: 1737-1746.

[65] Eatock, J.P. and Mason, J.S. (1994), “A quantitative assessment of the relative speaker discriminating properties of phonemes”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing Vol. I: 133-136.

[66] Endres, W., Bambach, W. and Flösser, G. (1971), “Voice Spectrograms as a Function of Age, Voice Disguise, and Voice Imitation”, Journal of the Acoustical Society of America 49: 1842-1848.

[67] Ewan, W.G. and Krones, R. (1974), “Measuring larynx movement using the thyroumbrometer”, Journal of Phonetics 2: 327-335.

[68] Fant, G. (1960), Acoustic Theory of Speech Production (Mouton, The Hague, The Netherlands).

[69] Fant, G. (1962), “Formant bandwidth data”, Speech Transmission Laboratory, Quarterly Progress and Status Report 1: 1-2.

[70] Fant, G. (1966), “A note on vocal tract size factors and non-uniform F-pattern scalings”, Speech Transmission Laboratory, Quarterly Progress and Status Report 4: 22-30.

[71] Fant, G. (1972), “Vocal tract wall effects, losses, and resonance bandwidths”, Speech Transmission Laboratory, Quarterly Progress and Status Report 2-3: 28-52.

[72] Fant, G. (1975a), “Non-uniform vowel normalization”, Speech Transmission Laboratory, Quarterly Progress and Status Report 2-3: 1-19.

[73] Fant, G. (1975b), “Vocal-tract area and length perturbations”, Speech Transmission Laboratory, Quarterly Progress and Status Report 4: 1-14.

[74] Fant, G. (1980), “The Relations between Area Functions and the Acoustic Signal”, Phonetica 37: 55-86.

[75] Fant, G. and Risberg, A. (1962), “Auditory matching of vowels with two formant synthetic sounds”, Speech Transmission Laboratory, Quarterly Progress and Status Report 4: 7-11.

[76] Flanagan, J.L. (1955), “A difference limen for vowel formant frequency”, Journal of the Acoustical Society of America 27: 613-617.

[77] Flanagan, J.L. (1972), Speech Analysis, Synthesis and Perception, Second Edition (Springer-Verlag, Berlin, Heidelberg, New York).

[78] Flanagan, J.L., Ishizaka, K. and Shipley, K.L. (1980), “Signal models for low bit-rate coding of speech”, Journal of the Acoustical Society of America 68: 780-791.

[79] Foley, D.H. (1972), “Considerations of Sample and Feature Size”, IEEE Transactions on Information Theory 18: 618-626.

[80] Fuchi, K. (1977), “Vowel Approximation by Multi-band Filtering Characteristics and Estimation of Antimetrical Vocal Tract Shapes”, Electrotechnical Laboratory Progress Report on Speech Research 14: 56-58.

[81] Fuchi, K. and Ohta, K. (1978), “Observation on Group Delay Characteristics of Connected Vowels”, Electrotechnical Laboratory Progress Report on Speech Research 18: 44-47.

[82] Fuchi, K. and Ohta, K. (1979), “Estimation of Symmetrical Acoustic Tubes Representing Vowel Characteristics”, Electrotechnical Laboratory Progress Report on Speech Research 20: 5-9.

[83] Fujimura, O. and Lindqvist, J. (1971), “Sweep-tone measurements of vocal-tract characteristics”, Journal of the Acoustical Society of America 49: 541-558.

[84] Furui, S. (1981), “Cepstral Analysis Technique for Automatic Speaker Verification”, IEEE Transactions on Acoustics, Speech, and Signal Processing 29: 254-272.

[85] Furui, S. (1989), “Unsupervised Speaker Adaptation Based on Hierarchical Spectral Clustering”, IEEE Transactions on Acoustics, Speech, and Signal Processing 37: 1923-1930.

[86] Furui, S. (1991), “Speaker-dependent-feature extraction, recognition and processing techniques”, Speech Communication 10: 505-520.

[87] Furui, S. (1992), “Speaker-Independent and Speaker-Adaptive Recognition Techniques”, in Furui, S. and Sondhi, M.M. (eds.), Advances in Speech Signal Processing (Marcel Dekker, New York, Basel, Hong Kong): 597-622.

[88] Furui, S. and Akagi, M. (1985), “Perception of voice individuality and physical correlates”, Journal of the Acoustical Society of Japan (English reprint), H-85-18: 1-8.

[89] Garvin, P.L. and Ladefoged, P. (1963), “Speaker Identification and Message Identification in Speech Recognition”, Phonetica 9: 193-199.

[90] Gath, I. and Yair, E. (1988), “Analysis of vocal tract parameters in Parkinsonian speech”, Journal of the Acoustical Society of America 84: 1628-1634.

[91] Gay, T., Boë, L-J., Perrier, P., Feng, G. and Swayne, E. (1991), “The acoustic sensitivity of vocal tract constrictions: a preliminary report”, Journal of Phonetics 19: 445-452.

[92] Gay, T., Lindblom, B. and Lubker, J. (1981), “Production of bite-block vowels: Acoustic equivalence by selective compensation”, Journal of the Acoustical Society of America 69: 802-810.

[93] Gerstman, L.J. (1968), “Classification of Self-Normalized Vowels”, IEEE Transactions on Audio and Electroacoustics 16: 78-80.

[94] Goldstein, U.G. (1979), “Modelling children’s vocal tracts”, Speech Communication Papers presented at the 97th Meeting of the Acoustical Society of America, Paper I22: 139-142.

[95] Golibersuch, R.J. (1983), “Automatic prediction of linear frequency warp for speech recognition”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing: 769-772.

[96] Gopinath, B. and Sondhi, M.M. (1970), “Determination of the Shape of the Human Vocal Tract from Acoustical Measurements”, The Bell System Technical Journal 49: 1195-1214.

[97] Gray, A.H. and Markel, J.D. (1974), “A Spectral Flatness Measure for Studying the Autocorrelation Method of Linear Prediction of Speech Analysis”, IEEE Transactions on Acoustics, Speech, and Signal Processing 22: 207-217.

[98] Gray, A.H. and Markel, J.D. (1976), “Distance Measures for Speech Processing”, IEEE Transactions on Acoustics, Speech, and Signal Processing 24: 380-391.

[99] Gupta, S.K. and Schroeter, J. (1993), “Pitch-synchronous frame-by-frame and segment-based articulatory analysis by synthesis”, Journal of the Acoustical Society of America 94: 2517-2530.

[100] Hafer, E.H. and Coker, C.H. (1975), “Determining tongue body motion from the acoustic speech wave”, Journal of the Acoustical Society of America 57: S3.

[101] Hansen, J.H.L. and Womack, B.D. (1996), “Feature Analysis and Neural Network-Based Classification of Speech Under Stress”, IEEE Transactions on Speech and Audio Processing 4: 307-313.

[102] Hanson, B.A. and Wakita, H. (1987), “Spectral Slope Distance Measures with Linear Prediction Analysis for Word Recognition in Noise”, IEEE Transactions on Acoustics, Speech, and Signal Processing 35: 968-973.

[103] Harshman, R., Ladefoged, P. and Goldstein, L. (1977), “Factor analysis of tongue shapes”, Journal of the Acoustical Society of America 62: 693-707.

[104] Hawks, J.W. and Miller, J.D. (1995), “A formant bandwidth estimation procedure for vowel synthesis”, Journal of the Acoustical Society of America 97: 1343-1344.

[105] Hayakawa, S. and Itakura, F. (1994), “Text-dependent speaker recognition using the information in the higher frequency band”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing Vol. I: 137-140.

[106] Hecker, M.H.L. (1971), Speaker Recognition: An Interpretive Survey of the Literature, ASHA Monographs Number 16 (American Speech and Hearing Association, Washington D.C.).

[107] Hermansky, H. (1990), “Perceptual linear predictive (PLP) analysis of speech”, Journal of the Acoustical Society of America 87: 1738-1752.

[108] Hermansky, H. and Broad, D.J. (1989), “The effective second formant F2’ and the vocal tract front-cavity”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing: 480-483.

[109] Hill, D.R., Manzara, L. and Taube-Schock, C.-R. (1995), “Real-time articulatory speech-synthesis-by-rules”, Proceedings of the 14th Annual International Voice Technologies and Applications Conference of the American Voice I/O Society: 27-44.

[110] Hillenbrand, J. and Gayvert, R.T. (1993), “Vowel Classification Based on Fundamental Frequency and Formant Frequencies”, Journal of Speech and Hearing Research 36: 694-700.

[111] Hillenbrand, J., Getty, L.A., Clark, M.J. and Wheeler, K. (1995), “Acoustic characteristics of American English vowels”, Journal of the Acoustical Society of America 97: 3099-3111.

[112] Höfker, U. (1976), “Die Eignung verschiedener Sprachlaute für die automatische Sprechererkennung”, Proc. IITB Kolloquium “Akustische Mustererkennung”.

[113] Högberg, J. (1995), “From sagittal distance to area function and male to female scaling of the vocal tract”, Speech Transmission Laboratory, Quarterly Progress and Status Report 4: 11-53.

[114] Hogden, J., Lofqvist, A., Gracco, V., Zlokarnik, I., Rubin, P. and Saltzman, E. (1996), “Accurate recovery of articulator positions from acoustics: New conclusions based on human data”, Journal of the Acoustical Society of America 100: 1819-1834.

[115] Holmes, J.N., Holmes, W.J. and Garner, P.N. (1997), “Using formant frequencies in speech recognition”, Proceedings of the 5th European Conference on Speech Communication and Technology: 2083-2086.

[116] Honda, K., Hashi, M., Wu, C.-M. and Westbury, J.R. (1997), “Effects of the size and form of the orofacial structure on vowel production”, Journal of the Acoustical Society of America 102: 3133.

[117] Honikman, B. (1964), “Articulatory Settings”, in Abercrombie, D., Fry, D.B., MacCarthy, P.A.D., Scott, N.C. and Trim, J.L.M. (eds.), In Honour of Daniel Jones (Longmans, Green and Co., London): 73-84.

[118] Hughes, O.M. and Abbs, J.H. (1976), “Labial-Mandibular Coordination in the Production of Speech: Implications for the Operation of Motor Equivalence”, Phonetica 33: 199-221.

[119] ILS (1983), Interactive Laboratory System V4.1: Programmer’s Guide, Signal Technology Incorporated.

[120] Ishizaki, S. (1978a), “Interspeaker Normalization Using Vocal Tract Length and Vowel Feature Extraction”, Electrotechnical Laboratory Progress Report on Speech Research 18: 66-69.

[121] Ishizaki, S. (1978b), “Vowel Discrimination by Use of Articulatory Model”, Proceedings of the 4th International Joint Conference on Pattern Recognition: 1050-1052.

[122] Itahashi, S. (1984), “On the relation between cepstra and formants of speech”, Preprints of the Spring Meeting of the Acoustical Society of Japan, Paper 3-2-8: 185-186.

[123] Itahashi, S. (1988), “On properties of speech cepstra”, Denshi Joho Tsushin Gakkai Ronbushi (Trans. Inst. Electron. Inform. Comm. Eng. Jpn.) J71-D: 1839-1842.

[124] Itakura, F. (1975), “Minimum Prediction Residual Principle Applied to Speech Recognition”, IEEE Transactions on Acoustics, Speech, and Signal Processing 23: 67-72.

[125] Itakura, F. and Umezaki, T. (1987), “Distance measure for speech recognition based on the smoothed group delay spectrum”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing: 1257-1260.

[126] Jesorsky, P. (1978), “Principles of automatic speaker-recognition”, in Bolc, L. (ed.), Speech Communication with Computers (Carl Hanser Verlag, München, Wien): 93-137.

[127] Juang, B.-H., Rabiner, L.R. and Wilpon, J.G. (1987), “On the Use of Bandpass Liftering in Speech Recognition”, IEEE Transactions on Acoustics, Speech, and Signal Processing 35: 947-953.

[128] Kashyap, R.L. (1976), “Speaker Recognition from an Unknown Utterance and Speech-Speaker Interaction”, IEEE Transactions on Acoustics, Speech, and Signal Processing 24: 481-488.

[129] Kasuya, H. and Wakita, H. (1979), “An Approach to Segmenting Speech into Vowel- and Nonvowel-Like Intervals”, IEEE Transactions on Acoustics, Speech, and Signal Processing 27: 319-327.

[130] Kelly, J.L., Jr. and Lochbaum, C. (1963), “Speech synthesis”, Proceedings of the Speech Communication Seminar, Stockholm, Speech Transmission Laboratory, Royal Institute of Technology, Vol. II: Paper F7.

[131] Kitamura, T. and Akagi, M. (1994), “Speaker Individualities in Speech Spectral Envelopes”, Proceedings of the 3rd International Conference on Spoken Language Processing: 1183-1186.

[132] Kitamura, T. and Akagi, M. (1996), “Relationship between physical characteristics and speaker individualities in speech spectral envelopes”, Journal of the Acoustical Society of America 100: 2600.

[133] Kiukaanniemi, H., Siponen, P. and Mattila, P. (1982), “Individual Differences in the Long-Term Speech Spectrum”, Folia Phoniatrica 34: 21-28.

[134] Klatt, D.H. (1982), “Prediction of perceived phonetic distance from critical-band spectra: A first step”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing: 1278-1281.

[135] Klatt, D.H. (1986), “The problem of variability in speech recognition and in models of speech perception”, in Perkell, J.S. and Klatt, D.H. (eds.), Invariance and variability in speech processes (Lawrence Erlbaum Associates, Hillsdale, New Jersey): 300-319.

[136] Klein, W., Plomp, R. and Pols, L.C.W. (1970), “Vowel Spectra, Vowel Spaces, and Vowel Identification”, Journal of the Acoustical Society of America 48: 999-1009.

[137] Koenig, W., Dunn, H.K. and Lacy, L.Y. (1946), “The sound spectrograph”, Journal of the Acoustical Society of America 18: 19-49.

[138] Kopp, G.A. and Green, H.C. (1946), “Basic Phonetic Principles of Visible Speech”, Journal of the Acoustical Society of America 18: 74-89.

[139] Kumar, K. (1996), “Computer recognition of Australian English vowels based on multi-speaker spectrographic data”, unpublished M.Inf.Sc. Thesis, School of Computer Science, University of New South Wales, Australian Defence Force Academy, Canberra, Australia.

[140] Kuwabara, H. and Takagi, T. (1991), “Acoustic parameters of voice individuality and voice-quality control by analysis-synthesis method”, Speech Communication 10: 491-495.

[141] Ladefoged, P. (1993), A Course In Phonetics, Third Edition (Harcourt Brace Jovanovich, Florida).

[142] Ladefoged, P. and Broadbent, D.E. (1957), “Information Conveyed by Vowels”, Journal of the Acoustical Society of America 29: 98-104.

[143] Ladefoged, P., Harshman, R. and Goldstein, L. (1977), “Vowel articulations and formant frequencies”, UCLA Working Papers in Phonetics 38: 16-40.

[144] Larar, J.N., Schroeter, J. and Sondhi, M.M. (1988), “Vector Quantization of the Articulatory Space”, IEEE Transactions on Acoustics, Speech, and Signal Processing 36: 1812-1818.

[145] Laver, J. (1980), The phonetic description of voice quality (Cambridge University Press, Cambridge).

[146] Lea, W.A., Medress, M.F. and Skinner, T.E. (1975), “A Prosodically Guided Speech Understanding Strategy”, IEEE Transactions on Acoustics, Speech, and Signal Processing 23: 30-38.

[147] Lee, L. and Rose, R. (1998), “A Frequency Warping Approach to Speaker Normalization”, IEEE Transactions on Speech and Audio Processing 6: 49-60.

[148] Lewis, D. (1936), “Vocal Resonance”, Journal of the Acoustical Society of America 8: 91-99.

[149] Lewis, D. and Tuthill, C. (1940), “Resonant Frequencies and Damping Constants of Resonators Involved in the Production of Sustained Vowels “O” and “Ah””, Journal of the Acoustical Society of America 11: 451-456.

[150] Li, K.-P. and Hughes, G.W. (1974), “Talker differences as they appear in correlation matrices of continuous speech spectra”, Journal of the Acoustical Society of America 55: 833-837.

[151] Liljencrants, J. (1971), “Fourier series description of the tongue profile”, Speech Transmission Laboratory, Quarterly Progress and Status Report 4: 9-18.

[152] Lin, Q. and Che, C. (1995), “Normalizing The Vocal Tract Length For Speaker Independent Speech Recognition”, IEEE Signal Processing Letters 2: 201-203.

[153] Lin, Q. and Fant, G. (1989), “Vocal-tract area-function parameters from formant frequencies”, Proceedings of the 1st European Conference on Speech Communication and Technology: 673-676.

[154] Lin, Q., Jan, E.-E., Che, C.W., Yuk, D.-S. and Flanagan, J. (1996), “Selective use of the speech spectrum and a VQGMM method for speaker identification”, Proceedings of the 4th International Conference on Spoken Language Processing: 2415-2418.

[155]Lindblom, B., Lubker, J. and Gay, T. (1979), “Formant frequencies of some fixed-mandible vowels and a model of speech motor programming by predictive simulation”, Journal of Phonetics7: 147-161.

[156]Lindblom, B.E.F. and Sundberg, J.E.F. (1971), “Acoustical Consequences of Lip, Tongue, Jaw, and Larynx Movement”, Journal of the Acoustical Society of America50: 1166-1179.

[157]Linggard, R.L. (1985), Electronic synthesis of speech (Cambridge University Press, Cambridge).

[158]Liou, H.-S. and Mammone, R.J. (1995), “Application of phonetic weighting to the neural tree network based speaker recognition system”, Proceedings of the 4th European Conference on Speech Communication and Technology: 3-6.

[159]Lobanov, B.M. (1971), “Classification of Russian Vowels Spoken by Different Speakers”, Journal of the Acoustical Society of America49: 606-608.

[160]Luck, J.E. (1969), “Automatic Speaker Verification Using Cepstral Measurements”, Journal of the Acoustical Society of America46: 1026-1032.

[161]Maeda, S. (1979), “An articulatory model of the tongue based on a statistical analysis”, Speech Communication Papers presented at the 97th Meeting of the Acoustical Society of America, Paper I2: 67-70.

[162]Maeda, S. (1988), “Improved articulatory model”, Journal of the Acoustical Society of America81: S146.

[163]Maeda, S. (1991), “On articulatory and acoustic variabilities”, Journal of Phonetics19: 321-331.

[1]Makhoul, J. (1975a), “Linear Prediction: A Tutorial Review”, Proceedings of the IEEE 63: 561-580.[165]Makhoul, J. (1975b), “Spectral Linear Prediction: Properties and Applications”, IEEE Transactions on Acoustics, Speech, and Signal Processing23: 283-296.

[166]Mammone, R.J., Zhang, X. and Ramachandran, R.P. (1996), “Robust Speaker Recognition”, IEEE Signal Processing Magazine13(5): 58-71.

[167]Markel, J.D. and Gray, A.H. (1976), Linear Prediction of Speech (Springer-Verlag, Berlin, Heidelberg, New York).

[168]Mathieu, B. and Laprie, Y. (1997), “Adaptation of Maeda’s model for acoustic to articulatory inversion”, Proceedings of the 5th European Conference on Speech Communication and Technology: 2015-2018.

[169]Matsumoto, H. and Wakita, H. (1978), “Vowel normalization by frequency warping”, Journal of the Acoustical Society of America: S180.

[170]Matsumoto, H. and Wakita, H. (1986), “Vowel normalization by frequency warped spectral matching”, Speech Communication5: 239-251.

[171]McGowan, R.S. (1994), “Recovering articulatory movement from formant frequency trajectories using task dynamics and a genetic algorithm: Preliminary model tests”, Speech Communication14: 19-48.

[172]McGowan, R.S. (1997), “Normalization for articulatory recovery”, Journal of the Acoustical Society of America101: 3175.

[173]Meisel, W.S. (1972), Computer-oriented approaches to pattern recognition (Academic Press, New York, London).

[174]Mella, O. (1994), “Extraction of formants of oral vowels and critical analysis for speaker characterization”, Proceedings of the ESCA Workshop on Automatic Speaker Recognition, Identification and Verification, Martigny, Switzerland: 193-196. [175]Mermelstein, P. (1967), “Determination of the Vocal-Tract Shape from Measured Formant Frequencies”, Journal of the Acoustical Society of America41: 1283-1294. [176]Mermelstein, P. (1973), “Articulatory model for the study of speech production”, Journal of the Acoustical Society of America53: 1070-1082.

[177]Morse, P.M. and Ingard, K.U. (1968), Theoretical Acoustics (McGraw-Hill, New York).

[178]Mrayati, M., Carré, R. and Guérin, B. (1988), “Distinctive Regions and Modes: A new theory of speech production”, Speech Communication7: 257-286.[179]Nakajima, T., Ohmura, H., Tanaka, K. and Ishizaki, S. (1973), “Estimation of Vocal Tract Area Functions by Adaptive Inverse Filtering Methods”, Bulletin of the Electrotechnical Laboratory37: 462-481.

[180]Narayanan, S., Alwan, A. and Song, Y. (1997), “New results in vowel production: MRI, EPG, and acoustic data”, Proceedings of the 5th European Conference on Speech Communication and Technology: 1007-1010.

[181]Nearey, T.M. (1978), “Phonetic feature systems for vowels”, Indiana University Linguistics Club, Bloomington, Indiana, USA.

[182]Nolan, F. (1983), The phonetic bases of speaker recognition (Cambridge University Press, Cambridge).

[183]Nordström, P-E. (1977), “Female and infant vocal tracts simulated from male area functions”, Journal of Phonetics5: 81-92.

[184]Ohmura, H. (1993), “An Algorithm for Calculating Vocal Tract Transfer Functions in terms of Reflection Coefficients”, Electrotechnical Laboratory Progress Report on Speech Research: 1-23.

[185]Oppenheim, A.V., Schafer, R.W. and Stockham, T.G. (1968), “Nonlinear Filtering of Multiplied and Convolved Signals”, Proceedings of the IEEE56: 1264-1291.

[186]Owren, M.J. and Bachorowski, J.-A. (1997), “Reliable cues to gender and talker identity are present in a short vowel segment recorded in running speech”, Journal of the Acoustical Society of America102: 3132.

[187]Paige, A. and Zue, V.W. (1970), “Calculation of Vocal Tract Length”, IEEE Transactions on Audio and Electroacoustics18: 268-270.

[188]Paliwal, K.K. (1982), “On the performance of the quefrency-weighted cepstral coefficients in vowel recognition”, Speech Communication1: 151-154.

[189]Paliwal, K.K. (1984a), “Effectiveness of different vowel sounds in automatic speaker identification”, Journal of Phonetics12: 17-21.

[190]Paliwal, K.K. (1984b), “Effect of preemphasis on vowel recognition performance”, Speech Communication3: 101-106.

[191]Paliwal, K.K. and Ainsworth, W.A. (1985), “Dynamic frequency warping for speaker adaptation in automatic speech recognition”, Journal of Phonetics13: 123-134.

[192]Paliwal, K.K. and Rao, P.V.S. (1982), “Evaluation of various linear prediction parametric representations in vowel recognition”, Signal Processing4: 323-327.

[193]Parthasarathy, S. and Coker, C.H. (1992), “On automatic estimation of articulatory parameters in a text-to-speech system”, Computer Speech and Language6: 37-75.

[194]Payan, Y. and Perrier, P. (1993), “Vowel normalization by articulatory normalization: First attempts for vowel transitions”, Proceedings of the 3rd European Conference on Speech Communication and Technology: 417-420.

[195]Perkell, J.S. (1969), Physiology of Speech Production: Results and Implications of a Quantitative Cineradiographic Study (Research Monograph No.53, The M.I.T. Press, Cambridge, Massachusetts).

[196]Perkell, J.S., Matthies, M.L., Svirsky, M.A. and Jordan, M.I. (1993), “Trading relations between tongue-body raising and lip rounding in production of the vowel /u/: A pilot “motor equivalence” study”, Journal of the Acoustical Society of America93: 2948-2961.

[197]Perrier, P., Apostol, L. and Payan, Y. (1995), “Evaluation of a vowel normalisation procedure based on speech production knowledge”, Proceedings of the 4th European Conference on Speech Communication and Technology: 1925-1928.

[198]Peters, R.W. (1954), “Studies in extra messages: Listener identification of speakers’ voices under conditions of certain restrictions imposed upon the voice signal”, U.S. Naval School of Aviation Medicine, Joint Project NM 001-0-01, Rpt.30, Pensacola, Florida.

[199]Peterson, G.E. (1952), “The Information-Bearing Elements of Speech”, Journal of the Acoustical Society of America24: 629-637.

[200]Peterson, G.E. (1959), “The Acoustics of Speech — Part II. Acoustical Properties of Speech Waves”, in Travis, L.E. (ed.), Handbook of Speech Pathology (Peter Owen, London): 137-173.

[201]Peterson, G.E. (1961), “Parameters of Vowel Quality”, Journal of Speech and Hearing Research4: 10-29.

[202]Peterson, G.E. and Barney, H.L. (1952), “Control Methods Used in a Study of the Vowels”, Journal of the Acoustical Society of America24: 175-184.

[203]Piper, J. (1992), “Variability and bias in experimentally measured classifier error rates”, Pattern Recognition Letters13: 685-692.

[204]Plomp, R., Pols, L.C.W. and van de Geer, J.P. (1967), “Dimensional Analysis of Vowel Spectra”, Journal of the Acoustical Society of America41: 707-712.

[205]Pols, L.C.W., van der Kamp, L.J.Th. and Plomp, R. (1969), “Perceptual and Physical Space of Vowel Sounds”, Journal of the Acoustical Society of America46: 458-467.

[206]Pols, L.C.W., Tromp, H.R.C. and Plomp, R. (1973), “Frequency analysis of Dutch vowels from 50 male speakers”, Journal of the Acoustical Society of America53: 1093-1101.

[207]Potter, R.K. and Steinberg, J.C. (1950), “Toward the Specification of Speech”, Journal of the Acoustical Society of America22: 807-820.

[208]Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T. (1988), Numerical Recipes in C: The Art of Scientific Computing (Cambridge University Press).

[209]Rabiner, L. and Juang, B.-H. (1993), Fundamentals of Speech Recognition (Prentice Hall, New Jersey).

[210]Richards, H.B., Mason, J.S., Hunt, M.J. and Bridle, J.S. (1995), “Deriving articulatory representations of speech”, Proceedings of the 4th European Conference on Speech Communication and Technology: 761-7.

[211]Riordan, C.J. (1977), “Control of vocal-tract length in speech”, Journal of the Acoustical Society of America62: 998-1002.

[212]Rosenberg, A.E. and Sambur, M.R. (1975), “New Techniques for Automatic Speaker Verification”, IEEE Transactions on Acoustics, Speech, and Signal Processing23: 169-176.

[213]Saito, S. and Itakura, F. (1983), “Frequency spectrum deviation between speakers”, Speech Communication2: 149-152.

[214]Saito, S. and Nakata, K. (1985), Fundamentals of Speech Signal Processing (Academic Press, Tokyo, Florida, London).

[215]Sakoe, H. and Chiba, S. (1978), “Dynamic programming algorithm optimization for spoken word recognition”, IEEE Transactions on Acoustics, Speech, and Signal Processing26: 43-49.

[216]Sambur, M.R. (1975), “Selection of Acoustic Features for Speaker Identification”, IEEE Transactions on Acoustics, Speech, and Signal Processing23: 176-182.

[217]Sarma, V.V.S. and Venugopal, D. (1977), “Performance Evaluation of Automatic Speaker Verification Systems”, IEEE Transactions on Acoustics, Speech, and Signal Processing25: 2-266.

[218]Sato, S., Yokota, M. and Kasuya, H. (1982), “Statistical Relationships among the First Three Formant Frequencies in Vowel Segments in Continuous Speech”, Phonetica39: 36-46.

[219]Savariaux, C., Perrier, P. and Orliaguet, J.P. (1995), “Compensation strategies for the perturbation of the rounded vowel [u] using a lip tube: A study of the control space in speech production”, Journal of the Acoustical Society of America98: 2428-2442.

[220]Schroeder, M.R. (1967), “Determination of the Geometry of the Human Vocal Tract by Acoustic Measurements”, Journal of the Acoustical Society of America41: 1002-1010.

[221]Schroeter, J. and Sondhi, M.M. (1989), “Dynamic Programming Search of Articulatory Codebooks”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing: 588-591.

[222]Schroeter, J. and Sondhi, M.M. (1992), “Speech Coding Based on Physiological Models of Speech Production”, in Furui, S. and Sondhi, M.M. (eds.), Advances in Speech Signal Processing (Marcel Dekker, New York, Basel, Hong Kong): 231-268.

[223]Schroeter, J. and Sondhi, M.M. (1994), “Techniques for Estimating Vocal-Tract Shapes from the Speech Signal”, IEEE Transactions on Speech and Audio Processing2: 133-150.

[224]Sejnoha, V. and Mermelstein, P. (1983), “Speaker normalizing transforms for automatic recognition”, Journal of the Acoustical Society of America74: S17.

[225]Shikano, K. and Itakura, F. (1992), “Spectrum Distance Measures for Speech Recognition”, in Furui, S. and Sondhi, M.M. (eds.), Advances in Speech Signal Processing (Marcel Dekker, New York, Basel, Hong Kong): 419-452.

[226]Shirai, K. and Honda, M. (1977), “Estimation of articulatory motion”, in Sawashima, M. and Cooper, F.S. (eds.), Dynamic Aspects of Speech Production (University of Tokyo Press, Tokyo): 279-304.

[227]Singh, S. and Singh, K.S. (1976), Phonetics: principles and practice (University Park Press, Baltimore, London, Tokyo).

[228]Singh, S. and Woods, D.R. (1971), “Perceptual Structure of 12 American English Vowels”, Journal of the Acoustical Society of America49: 1861-1866.

[229]Sondhi, M.M. (1979), “Estimation of Vocal-Tract Areas: The Need for Acoustical Measurements”, IEEE Transactions on Acoustics, Speech, and Signal Processing27: 268-273.

[230]Sondhi, M.M. and Schroeter, J. (1987), “A Hybrid Time-Frequency Domain Articulatory Speech Synthesizer”, IEEE Transactions on Acoustics, Speech, and Signal Processing35: 955-967.

[231]Soquet, A. and Saerens, M. (1994), “A comparison of different acoustic and articulatory representations for the determination of place of articulation of plosives”, Proceedings of the 3rd International Conference on Spoken Language Processing: 13-16.

[232]Sorokin, V.N. (1992), “Determination of vocal tract shape for vowels”, Speech Communication11: 71-85.

[233]Spiegel, M. (1994), “Re: Peterson and Barney’s data”, article 1519 of the comp.speech Internet newsgroup.

[234]Stevens, K.N. (1971), “Sources of inter- and intra-speaker variability in the acoustic properties of speech sounds”, Proceedings of the 7th International Congress of Phonetic Sciences: 206-232.

[235]Stevens, K.N. and House, A.S. (1955), “Development of a Quantitative Description of Vowel Articulation”, Journal of the Acoustical Society of America27: 484-493.

[236]Stevens, K.N. and House, A.S. (1963), “Perturbation of Vowel Articulations By Consonantal Context: An Acoustical Study”, Journal of Speech and Hearing Research6: 111-128.

[237]Stevens, K.N., Williams, C.E., Carbonell, J.R. and Woods, B. (1968), “Speaker Authentication and Identification: A Comparison of Spectrographic and Auditory Presentations of Speech Material”, Journal of the Acoustical Society of America44: 1596-1607.

[238]Story, B.H., Titze, I.R. and Hoffman, E.A. (1996), “Vocal tract area functions from magnetic resonance imaging”, Journal of the Acoustical Society of America100: 537-554.

[239]Strube, H.W. (1977), “Can the area function of the human vocal tract be determined from the speech wave?”, in Sawashima, M. and Cooper, F.S. (eds.), Dynamic Aspects of Speech Production (University of Tokyo Press, Tokyo): 233-250.

[240]Strube, H.W. (1980), “Linear prediction on a warped frequency scale”, Journal of the Acoustical Society of America68: 1071-1076.

[241]Sundberg, J. (1974), “Articulatory interpretation of the ‘singing formant’”, Journal of the Acoustical Society of America55: 838-844.

[242]Sundberg, J. (1995), “The singer’s formant revisited”, Speech Transmission Laboratory, Quarterly Progress and Status Report2-3: 83-96.

[243]Suomi, K. (1984), “On talker and phoneme information conveyed by vowels: A whole spectrum approach to the normalization problem”, Speech Communication3: 199-209.

[244]Syrdal, A.K. and Gopal, H.S. (1986), “A perceptual model of vowel recognition based on the auditory representation of American English vowels”, Journal of the Acoustical Society of America79: 1086-1100.

[245]Tanaka, K. and Nakajima, T. (1975), “Estimation of Vocal Tract Length and Formant Frequencies by Adaptive Enhancing Filter”, Electrotechnical Laboratory Progress Report on Speech Research9: 10-14.

[246]Tohkura, Y. (1986), “A Weighted Cepstral Distance Measure for Speech Recognition”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing: 761-7.

[247]Tou, J.T. and Gonzalez, R.C. (1974), Pattern Recognition Principles (Addison-Wesley, Reading, Massachusetts).

[248]Toussaint, G.T. (1974), “Bibliography on Estimation of Misclassification”, IEEE Transactions on Information Theory20: 472-479.

[249]Umesh, S., Cohen, L., Marinovic, N. and Nelson, D. (1996), “Frequency-Warping in Speech”, Proceedings of the 4th International Conference on Spoken Language Processing: 414-417.

[250]van den Heuvel, H., Cranen, B. and Rietveld, A.C.M. (1993), “Speaker-variability in spectral bands of Dutch vowel segments”, Proceedings of the 3rd European Conference on Speech Communication and Technology: 635-638.

[251]van den Heuvel, H. and Rietveld, T. (1992), “Speaker related variability in cepstral representations of Dutch speech segments”, Proceedings of the 2nd International Conference on Spoken Language Processing: 1581-1584.

[252]van Nierop, D.J.P.J., Pols, L.C.W. and Plomp, R. (1973), “Frequency Analysis of Dutch Vowels from 25 Female Speakers”, Acustica29: 110-118.

[253]Wakita, H. (1973), “Direct Estimation of the Vocal Tract Shape by Inverse Filtering of Acoustic Speech Waveforms”, IEEE Transactions on Audio and Electroacoustics21: 417-427.

[254]Wakita, H. (1977), “Normalization of Vowels by Vocal-Tract Length and Its Application to Vowel Identification”, IEEE Transactions on Acoustics, Speech, and Signal Processing25: 183-192.

[255]Wakita, H. (1979), “Estimation of Vocal-Tract Shapes from Acoustical Analysis of the Speech Wave: The State of the Art”, IEEE Transactions on Acoustics, Speech, and Signal Processing27: 281-285.

[256]Wakita, H. and Fant, G. (1978), “Toward a better vocal tract model”, Speech Transmission Laboratory, Quarterly Progress and Status Report1: 9-29.

[257]Wakita, H. and Gray, A.H. (1975), “Numerical Determination of the Lip Impedance and Vocal Tract Area Functions”, IEEE Transactions on Acoustics, Speech, and Signal Processing23: 574-580.

[258]Watrous, R.L. (1991), “Current status of Peterson-Barney vowel formant data”, Journal of the Acoustical Society of America: 2459-2460.

[259]Welch, P.D. and Wimpress, R.S. (1961), “Two Multivariate Statistical Computer Programs and Their Application to the Vowel Recognition Problem”, Journal of the Acoustical Society of America33: 426-434.

[260]Wood, S. (1979), “A radiographic analysis of constriction locations for vowels”, Journal of Phonetics7: 25-43.

[261]Wood, S. (1986), “The acoustical significance of tongue, lip, and larynx maneuvers in rounded palatal vowels”, Journal of the Acoustical Society of America80: 391-401.

[262]Yang, C.S. and Kasuya, H. (1994), “Accurate measurement of vocal tract shapes from magnetic resonance images of child, female and male subjects”, Proceedings of the 3rd International Conference on Spoken Language Processing: 623-626.

[263]Yang, C.-S. and Kasuya, H. (1995), “Vowel normalization revisited: integration of articulatory, acoustic, and perceptual measurements”, Proceedings of the 13th International Congress of Phonetic Sciences3: 234-237.

[264]Yang, C.-S. and Kasuya, H. (1996), “Speaker individualities of vocal tract shapes of Japanese vowels measured by magnetic resonance images”, Proceedings of the 4th International Conference on Spoken Language Processing: 949-952.

[265]Yegnanarayana, B. (1978), “Formant extraction from linear-prediction phase spectra”, Journal of the Acoustical Society of America63: 1638-1640.

[266]Yegnanarayana, B. and Reddy, D.R. (1979), “A distance measure based on the derivative of linear prediction phase spectrum”, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing: 744-747.

[267]Yehia, H. and Itakura, F. (1994), “Determination of human vocal-tract dynamic geometry from formant trajectories using spatial and temporal Fourier analysis”, Proceedings of the International Conference on Acoustics, Speech and Signal ProcessingI: 477-480.

[268]Yehia, H. and Itakura, F. (1996), “A method to combine acoustic and morphological constraints in the speech production inverse problem”, Speech Communication18: 151-174.

[269]Young, S. (1996), “A Review of Large-vocabulary Continuous-speech Recognition”, IEEE Signal Processing Magazine13(5): 45-57.

[270]Zahorian, S.A. and Jagharghi, A.J. (1993), “Spectral-shape features versus formants as acoustic correlates for vowels”, Journal of the Acoustical Society of America94: 1966-1982.

[271]Zue, V.W. (1969), “A noniterative computation of vocal-tract area function”, unpublished M.S. Thesis, University of Florida, Gainesville, USA.

[272]Zwicker, E. (1961), “Subdivision of the Audible Frequency Range into Critical Bands (Frequenzgruppen)”, Journal of the Acoustical Society of America33: 248.

[273]Zwicker, E. and Terhardt, E. (1980), “Analytical expressions for critical-band rate and critical bandwidth as a function of frequency”, Journal of the Acoustical Society of America68: 1523-1525.
