Articulatory Modelling 3

1: ICSLP'98 Proceedings
Keynote Speeches
Text-To-Speech Synthesis 1
Spoken Language Models and Dialog 1
Prosody and Emotion 1
Hidden Markov Model Techniques 1
Speaker and Language Recognition 1
Multimodal Spoken Language Processing 1
Isolated Word Recognition
Robust Speech Processing in Adverse Environments 1
Spoken Language Models and Dialog 2
Articulatory Modelling 1
Talking to Infants, Pets and Lovers
Robust Speech Processing in Adverse Environments 2
Spoken Language Models and Dialog 3
Speech Coding 1
Articulatory Modelling 2
Prosody and Emotion 2
Neural Networks, Fuzzy and Evolutionary Methods 1
Utterance Verification and Word Spotting 1 / Speaker Adaptation 1
Text-To-Speech Synthesis 2
Spoken Language Models and Dialog 4
Human Speech Perception 1
Robust Speech Processing in Adverse Environments 3
Speech and Hearing Disorders 1
Prosody and Emotion 3
Spoken Language Understanding Systems 1
Signal Processing and Speech Analysis 1
Spoken Language Generation and Translation 1
Spoken Language Models and Dialog 5
Segmentation, Labelling and Speech Corpora 1
Multimodal Spoken Language Processing 2
Prosody and Emotion 4
Neural Networks, Fuzzy and Evolutionary Methods 2
Large Vocabulary Continuous Speech Recognition 1
Speaker and Language Recognition 2
Signal Processing and Speech Analysis 2
Prosody and Emotion 5
Robust Speech Processing in Adverse Environments 4
Segmentation, Labelling and Speech Corpora 2
Speech Technology Applications and Human-Machine Interface 1
Large Vocabulary Continuous Speech Recognition 2
Text-To-Speech Synthesis 3
Language Acquisition 1
Acoustic Phonetics 1
Speaker Adaptation 2
Speech Coding 2
Hidden Markov Model Techniques 2
Multilingual Perception and Recognition 1
Large Vocabulary Continuous Speech Recognition 3
Articulatory Modelling 3
Language Acquisition 2
Speaker and Language Recognition 3
Text-To-Speech Synthesis 4
Spoken Language Understanding Systems 4
Human Speech Perception 2
Large Vocabulary Continuous Speech Recognition 4
Spoken Language Understanding Systems 2
Signal Processing and Speech Analysis 3
Human Speech Perception 3
Speaker Adaptation 3
Spoken Language Understanding Systems 3
Multimodal Spoken Language Processing 3
Acoustic Phonetics 2
Large Vocabulary Continuous Speech Recognition 5
Speech Coding 3
Language Acquisition 3 / Multilingual Perception and Recognition 2
Segmentation, Labelling and Speech Corpora 3
Text-To-Speech Synthesis 5
Spoken Language Generation and Translation 2
Human Speech Perception 4
Robust Speech Processing in Adverse Environments 5
Text-To-Speech Synthesis 6
Speech Technology Applications and Human-Machine Interface 2
Prosody and Emotion 6
Hidden Markov Model Techniques 3
Speech and Hearing Disorders 2 / Speech Processing for the Speech and Hearing Impaired 1
Human Speech Production
Segmentation, Labelling and Speech Corpora 4
Speaker and Language Recognition 4
Speech Technology Applications and Human-Machine Interface 3
Utterance Verification and Word Spotting 2
Large Vocabulary Continuous Speech Recognition 6
Neural Networks, Fuzzy and Evolutionary Methods 3
Speech Processing for the Speech-Impaired and Hearing-Impaired 2
Prosody and Emotion 7
2: SST Student Day
SST Student Day - Poster Session 1
SST Student Day - Poster Session 2


An Electropalatographic, Kinematic, and Acoustic Analysis of Supralaryngeal Correlates of Word-Level Prominence Contrasts in English

Authors:

Jonathan Harrington, Speech, Hearing and Language Research Centre, Macquarie University, Sydney (Australia)
Mary E. Beckman, Department of Linguistics, Ohio State University, Columbus, Ohio (USA)
Janet Fletcher, Department of Linguistics, University of Melbourne, Victoria (Australia)
Sallyanne Palethorpe, Speech, Hearing and Language Research Centre, Macquarie University, Sydney (Australia)

Paper number 646

Abstract:

This study examines the phonetic characteristics of primary versus secondary stress on the first syllables of the surname 'Wheateron' and the related adjective 'Wheateresque' in post-nuclear, deaccented position in a dialogue produced 40 times by 3 Australian English talkers. Synchronised acoustic, electromagnetometer, and electropalatographic recordings were analysed. One subject had a higher F0 in the primary stressed syllable. The other two had a longer acoustic duration for the syllable's voiced portion, corresponding to a longer lip closing movement. One of these two also had a larger and faster lip opening movement into the vowel. Taken together, the results show that primary and secondary lexical stress may be differentiated even when accent contrasts are neutralised, although the differences are inconsistent across talkers and small in comparison with those shown to characterise the accented-unaccented contrast.

SL980646.PDF (From Author) SL980646.PDF (Rasterized)



Consistencies and Inconsistencies Between EPG and Locus Equation Data on Coarticulation

Authors:

Marija Tabain, Speech, Hearing and Language Research Centre, Department of Linguistics, Macquarie University, Sydney (Australia)

Paper number 668

Abstract:

Following a previous study that used locus equation (LE) and electropalatographic (EPG) data to examine coarticulation of voiced consonants and vowels in CV syllables [1], the present study examines voiceless stops and fricatives using the same analysis techniques. It is found that when LE data for stops are sampled at the stop burst, rather than at vowel onset, the correlation between the LE and EPG measures of coarticulation is quite high. By contrast, results for the fricatives are quite poor. It is suggested that the LE can capture rather gross differences in coarticulatory resistance, such as that between a tongue-tip and a tongue-body articulation, but not more subtle differences in coarticulation, such as those among different coronal articulations. This explanation is supported by work in progress on Australian Aboriginal languages, which have up to four coronal places of articulation [2].
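
For readers unfamiliar with the technique: a locus equation is a linear regression of F2 measured at consonant release (or, as in this study, at the stop burst) against F2 at the vowel midpoint, pooled across vowel contexts; the slope is read as an index of coarticulation (a slope near 1 implies the consonant yields heavily to the vowel, a slope near 0 implies coarticulatory resistance). A minimal sketch with synthetic F2 values (all numbers illustrative, not taken from the paper):

```python
import numpy as np

# Synthetic F2 values (Hz) at vowel midpoint for several vowel contexts.
f2_mid = np.array([800.0, 1200.0, 1600.0, 2000.0, 2400.0])

# Hypothetical F2 at the stop burst: a consonant with slope ~0.5
# (moderate coarticulatory resistance) and intercept ~600 Hz.
f2_burst = 0.5 * f2_mid + 600.0

# The locus equation is the least-squares line F2_burst = k * F2_mid + c.
k, c = np.polyfit(f2_mid, f2_burst, 1)
print(round(k, 3), round(c, 1))  # slope ~0.5, intercept ~600.0
```

Comparing such slopes with EPG contact indices is, schematically, what the correlation reported in the abstract measures.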

SL980668.PDF (From Author) SL980668.PDF (Rasterized)



Synergy Between Jaw and Lips/Tongue Movements: Consequences in Articulatory Modelling

Authors:

Gérard Bailly, Institut de la Communication Parlée (ICP) (France)
Pierre Badin, Institut de la Communication Parlée (ICP) (France)
Anne Vilain, Institut de la Communication Parlée (ICP) (France)

Paper number 66

Abstract:

Linear-component articulatory models are built by iteratively subtracting linear predictors of the vocal tract geometry. In this paper we consider the contribution of jaw displacement to tongue and lip movements, using sets of cineradiographic data from three different speakers. We show that linear prediction overestimates this contribution by capturing not only the intrinsic mechanical jaw-tongue coupling but also the synergetic control observed in the corpus. We then propose a subtraction of the jaw contribution which does not affect the performance of the model in terms of data prediction.
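
The subtraction step described here can be illustrated as ordinary least-squares regression of an articulator coordinate on jaw position, followed by removal of the predicted part; the residual is what the subsequent model components are extracted from. A toy sketch on synthetic data (not the ICP corpus; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic jaw height and a tongue coordinate that partly follows the jaw.
jaw = rng.normal(size=n)
tongue = 0.8 * jaw + rng.normal(scale=0.3, size=n)  # coupling + independent part

# Linear predictor of the tongue coordinate from the jaw (with intercept).
A = np.column_stack([jaw, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, tongue, rcond=None)

# Subtract the jaw contribution; the residual feeds the next model component.
residual = tongue - A @ coef

# By construction, the OLS residual is (numerically) uncorrelated with the jaw.
print(abs(np.corrcoef(jaw, residual)[0, 1]) < 1e-8)
```

The paper's point is that this plain regression over-subtracts: the fitted coefficient absorbs not only the mechanical jaw-tongue coupling but also active jaw-tongue synergy, which their proposed subtraction (not shown here) corrects for.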

SL980066.PDF (From Author) SL980066.PDF (Rasterized)



Modelling Tongue Configuration in German Vowel Production

Authors:

Philip Hoole, Phonetics Institute, Munich University (Germany)

Paper number 1096

Abstract:

The PARAFAC method of factor analysis was used to investigate patterns of tongue shaping in a corpus of 15 German vowels spoken in 3 consonant contexts by 7 speakers at 2 speech rates, using data from electromagnetic articulography. A two-factor model was extracted, giving a succinct, speaker-independent characterization of the German vowel space and of some important coarticulatory effects on vowel articulation. Moreover, the factors appeared to have a plausible physiological substrate. The PARAFAC model places strong constraints on the form that speaker-specific effects can take, since speaker differences must be captured in a single multiplicative weight per speaker and factor. While these constraints appeared acceptable for modelling vocalic aspects of articulation, more consonantally-related aspects, such as coarticulatory behaviour of the tongue-tip, appeared much more difficult to capture in the PARAFAC framework.
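
PARAFAC fits a trilinear model X[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r] (here, roughly, articulator coordinate × vowel × speaker), so each speaker contributes exactly one multiplicative weight per factor, which is the constraint discussed in the abstract. A generic alternating-least-squares sketch (not Hoole's actual analysis; dimensions and data are synthetic):

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: (J*K, R) from (J, R) and (K, R).
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def parafac_als(X, rank, n_iter=200, seed=0):
    """Fit X[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    X0 = X.reshape(I, J * K)                      # mode-1 unfolding
    X1 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X2 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Demo on an exact rank-2 tensor built from known factors (illustrative only).
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.normal(size=(d, 2)) for d in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = parafac_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(f"relative error: {np.linalg.norm(X - Xhat) / np.linalg.norm(X):.2e}")
```

Unlike PCA, the trilinear form has no rotational freedom, which is what makes the recovered factors candidates for a physiological interpretation, and also what makes consonantal tongue-tip behaviour hard to squeeze into a single weight per speaker.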

SL981096.PDF (From Author) SL981096.PDF (Rasterized)



Optopalatograph: Real-time Feedback of Tongue Movement in 3D

Authors:

Alan A. Wrench, Queen Margaret College (U.K.)
Alan D. McIntosh, Queen Margaret College (U.K.)
Colin Watson, Queen Margaret College (U.K.)
William J. Hardcastle, Queen Margaret College (U.K.)

Paper number 1117

Abstract:

In this paper the latest prototype Optopalatograph (OPG) is described, and its operation is demonstrated graphically and compared with theoretical predictions. The system is divided into three parts: the optopalate itself; a separate self-contained unit comprising 16 switched infra-red light sources, associated control logic, and 16 receivers; and a computer with an A/D card running software to analyse and graphically interpret the sensor outputs. The current prototype measures distances of up to 20 mm between each of the 16 pre-selected points on the hard palate and the surface of the tongue at a frame rate of 100 Hz. We conclude that the new prototype provides a practical measurement system with a subjectively informative real-time display, but further development is required to obtain objective accuracy.
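
The abstract does not give the intensity-to-distance conversion, but schematically each source/receiver pair turns reflected-light intensity into a distance via a calibration curve. Assuming, purely for illustration, an inverse-square falloff I = k/d² (a hypothetical model, not the device's documented law):

```python
import math

def calibrate(d_ref_mm, i_ref):
    """Fit k in the illustrative inverse-square model I = k / d**2."""
    return i_ref * d_ref_mm ** 2

def distance_mm(intensity, k):
    """Invert the model; only meaningful within the calibrated range (~20 mm)."""
    return math.sqrt(k / intensity)

k = calibrate(10.0, 4.0)     # one hypothetical calibration point: 4.0 units at 10 mm
print(distance_mm(16.0, k))  # → 5.0 (four times the intensity = half the distance)
```

In a real system each of the 16 channels would need its own calibration, sampled at the 100 Hz frame rate; the sketch only shows the shape of the inversion.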

SL981117.PDF (From Author) SL981117.PDF (Rasterized)



Effects of Contrastive Focal Accent on Linguopalatal Articulation and Coarticulation in the French [kskl] Cluster

Authors:

Yohann Meynadier, Laboratoire Parole et Langage, CNRS, ESA 6057 and Institut de Phonetique d'Aix-en-Provence (France)
Michel Pitermann, Department of Psychology, Queen's University of Kingston (Canada)
Alain Marchal, CNRS, DR 19, Caen (France)

Paper number 542

Abstract:

This paper investigates the effects of contrastive focal accent placement on lingual articulation and coarticulation of French [kskl] clusters in word-medial position. The EPG results show that (i) this type of accent does not systematically increase the amplitude of linguopalatal constrictions, but rather their duration (particularly their release); (ii) it directly lengthens the temporal interval between the articulatory hold phases of two contiguous consonants; (iii) whatever the accent position, it can affect the whole cluster; and (iv) the gestural co-ordination of biconsonantal sequences appears to vary with focal accent according to articulatory constraints more than to syllable-boundary rhythmic constraints.

SL980542.PDF (From Author) SL980542.PDF (Rasterized)
