Lecture Announcement: Distinguished Linguist Louis Goldstein
Lecture Title:
Articulatory Phonology: A dynamical theory of phonological representation and the dance of the tongue
Speaker: Professor Louis Goldstein
Professor at the University of Southern California; internationally renowned linguist
Founder of the theory of Articulatory Phonology
Time: March 23, 2016 (Wednesday), 1:30 – 3:00 p.m.
Venue: Room 111, Yang Yongman Building, School of Foreign Languages (Minhang Campus)
Contact: Ding Hongwei (丁紅衛)
About the Speaker:
Louis Goldstein received his PhD in linguistics in 1977 from UCLA, where he was a student of Peter Ladefoged and Vicki Fromkin. After postdocs at MIT and at the Instituut voor Perceptie Onderzoek (Institute for Perception Research) in Eindhoven, the Netherlands, he joined the faculty of Yale University in 1980 as an Assistant Professor. At the same time, he began as a research scientist at Haskins Laboratories, with which he is still affiliated today. In 2007, he left Yale and joined the faculty at the University of Southern California (USC), where he continues to be Professor of Linguistics.
Goldstein is best known for his work with Catherine Browman developing the theory of articulatory phonology (see the abstract below for a summary), which proposes that the primitives of speech and phonology are articulatory gestures, modeled using dynamical systems. The theory has informed his and others' work in speech production, speech perception, phonological theory, phonological development, L2 acquisition, reading, automatic speech recognition, speech planning, and prosody. Over the years he has overseen the development of a computer simulation of the gestural model of speech, called TaDA (Task Dynamic Application). Over the last ten years, he has worked on developing a model of speech planning and syllable structure employing coupled oscillators, in collaboration with Elliot Saltzman and Hosung Nam. Some of the predictions of this model have been tested on languages as diverse as Romanian, Italian, French, Polish, Georgian, Tashlhiyt Berber, Chinese, and Moroccan Arabic. Since arriving at USC in 2007, he has been an active participant in the SPAN (Speech Production and Articulation kNowledge) group, headed by Shri Narayanan. His work there has involved developing experiments and methods that employ real-time MRI of the vocal tract to test hypotheses about the gestural composition of speech.
Abstract:
A classic problem in understanding the sound structure of human languages is the apparent mismatch between the sequence of discrete phonological units into which words can be decomposed and the continuous and variable movements of the vocal tract articulators (and their resulting sound properties) that characterize speaking in real time. Articulatory phonology (Browman & Goldstein, 1986; 1992) was originally developed to provide one possible solution to this apparent mismatch, by viewing speech as a dance performed by the vocal tract articulators. The dance can be decomposed into steps, or articulatory gestures. Each gesture can be represented as a task-dynamical system (Saltzman & Munhall, 1989) that produces a constriction within the vocal tract: its parameters are fixed during the time the gesture is being produced, but continuous motion of the articulators lawfully results. In this talk, I will present a brief tutorial on dynamical systems and how they can be applied to modeling phonological gestures. This will be followed by presentation of some recent results showing how gestures can be recovered from the acoustic signal, and how doing so can lead to improved speech recognition performance.
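To make the dynamical-systems idea concrete, here is a minimal sketch in Python (an illustration, not the TaDA implementation; the parameter values and variable names are assumptions) of a single gesture modeled as a critically damped mass-spring system in the spirit of Saltzman & Munhall (1989). Fixed gesture parameters (a target and a stiffness) lawfully yield continuous articulator motion.

import math

def simulate_gesture(x_init, x_target, stiffness, duration, dt=0.001):
    """Trajectory of one tract variable (e.g., lip aperture in mm) driven by
    a gesture with fixed parameters: m*x'' + b*x' + k*(x - x_target) = 0."""
    m = 1.0                              # unit mass, by convention
    b = 2.0 * math.sqrt(m * stiffness)   # critical damping: approach the target without overshoot
    x, v = x_init, 0.0
    trajectory = []
    for _ in range(int(duration / dt)):
        a = (-b * v - stiffness * (x - x_target)) / m   # acceleration from spring and damper
        v += a * dt                                     # simple Euler integration step
        x += v * dt
        trajectory.append(x)
    return trajectory

# Example: a hypothetical lip-closing gesture from 10 mm aperture toward closure.
traj = simulate_gesture(x_init=10.0, x_target=0.0, stiffness=400.0, duration=0.25)
print(f"aperture after 250 ms: {traj[-1]:.2f} mm")   # smoothly approaching the 0 mm target

The parameters stay constant throughout the gesture, yet the printed trajectory is continuous and smooth, which is the discrete-to-continuous mapping the theory exploits.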
As we produce words and sentences, the gestures that compose them must be coordinated appropriately in time, according to the dance of the particular language. This coordination needs to be both stable (the pattern for a particular word must be reliably reproduced whenever that word is spoken) and flexible (the pattern can be stretched or shrunk in real time to accommodate different prosodic contexts, resulting in temporal variability). The coupled oscillator model of speech planning (Goldstein, Byrd & Saltzman, 2006; Nam, Goldstein & Saltzman, 2009) attempts to model this coordination by hypothesizing that the initiation of a gesture is triggered by a planning clock, and that the ensemble of clocks for all the gestures that compose a syllable are coupled to one another in distinct modes (in-phase, anti-phase, other). This model will be presented, and it will be shown how it can not only solve the coordination problem but also contribute to an understanding of syllable structure, coordination of tones and segments, speech errors, and certain types of phonological alternations.
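The coupling idea can likewise be illustrated with a small Python sketch (a toy under assumed parameter values, not the authors' model code) of two planning clocks as coupled phase oscillators. Pairwise coupling of the form d(phi_i)/dt = omega + K*sin((phi_j - phi_i) - psi) drives the pair toward a target relative phase psi: 0 for the in-phase mode, pi for the anti-phase mode.

import math

def settle_relative_phase(psi, coupling=2.0, dt=0.005, steps=4000):
    """Integrate two coupled planning clocks and return their final relative
    phase; the coupling pulls (phi2 - phi1) toward the target psi."""
    omega = 2.0 * math.pi * 4.0   # shared intrinsic frequency: 4 Hz clocks (assumed)
    phi1, phi2 = 0.0, 1.0         # arbitrary initial phases
    for _ in range(steps):
        d1 = omega + coupling * math.sin((phi2 - phi1) - psi)
        d2 = omega + coupling * math.sin((phi1 - phi2) + psi)
        phi1 += d1 * dt           # Euler integration of both clocks
        phi2 += d2 * dt
    return (phi2 - phi1) % (2.0 * math.pi)

# In-phase coupling settles near 0 rad; anti-phase coupling settles near pi rad.
print(f"in-phase mode:   {settle_relative_phase(0.0):.2f} rad")
print(f"anti-phase mode: {settle_relative_phase(math.pi):.2f} rad")

Whatever the initial phases, the relative phase relaxes to the mode specified by the coupling, which is what makes the coordination pattern both reliably reproducible and temporally elastic.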
References:
Browman, C. P. & Goldstein, L. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219-252.
Browman, C. P. & Goldstein, L. (1992). Articulatory Phonology: An overview. Phonetica, 49, 155-180.
Goldstein, L., Byrd, D., & Saltzman, E. (2006). The role of vocal tract gestural action units in understanding the evolution of phonology. In M. Arbib (Ed.), From Action to Language: The Mirror Neuron System. Cambridge: Cambridge University Press. pp. 215-249.
Nam, H., Goldstein, L., & Saltzman, E. (2009). Self-organization of syllable structure: a coupled oscillator model. In F. Pellegrino, E. Marsico, & I. Chitoran (Eds.), Approaches to phonological complexity. Berlin/New York: Mouton de Gruyter. pp. 299-328.
Saltzman, E. L., & Munhall, K. G. (1989). A dynamical approach to gestural patterning in speech production. Ecological Psychology, 1, 333-382.