ABSTRACT
This paper describes recent speechreading experiments for a speaker-independent continuous digit recognition task. Visual feature extraction is performed by a lip tracker which recovers information about the lip shape and about the grey-level intensity around the mouth. These features are used to train visual word models using continuous-density HMMs. Results show that the method generalises well to new speakers and that the recognition rate is highly variable across digits, as expected given the high visual confusability of certain words.
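As an illustration of the modelling step described above, the following is a minimal sketch of whole-word, continuous-density Gaussian HMMs trained on per-frame visual feature vectors. It assumes the hmmlearn library and a simple isolated-word scoring loop; the paper's task is continuous digit recognition, which would additionally require a connected-word decoder, and the feature extraction (the lip tracker) is not shown.

```python
# Hypothetical sketch: one Gaussian HMM per digit, trained on sequences of
# visual feature vectors (lip-shape + grey-level terms), scored by likelihood.
import numpy as np
from hmmlearn import hmm

def train_digit_models(train_data, n_states=5):
    """train_data: dict mapping digit label -> list of (T_i, D) feature arrays."""
    models = {}
    for digit, sequences in train_data.items():
        X = np.vstack(sequences)                  # concatenate all frames
        lengths = [len(s) for s in sequences]     # per-sequence frame counts
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=20)
        m.fit(X, lengths)                         # Baum-Welch re-estimation
        models[digit] = m
    return models

def recognise(models, sequence):
    """Return the digit whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda d: models[d].score(sequence))
```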
ABSTRACT
The goal of this work is to use phonetic recognition to drive a synthetic image with speech. Phonetic units are identified by the phonetic recognition engine and mapped to mouth gestures, known as visemes, the visual counterpart of phonemes. The acoustic waveform and visemes are then sent to a synthetic image player, called FaceMe!, where they are rendered synchronously. This paper provides background for the core technologies involved in this process and describes asynchronous and synchronous prototypes of a combined phonetic recognition/FaceMe! system which we use to render mouth gestures on an animated face.
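To make the phoneme-to-viseme mapping step concrete, here is a small hedged sketch. The viseme class names, the example mapping table, and the timed-segment format are illustrative assumptions, not the actual tables or the FaceMe! player API used by the authors.

```python
# Hypothetical phoneme -> viseme table; real systems use a fuller inventory.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar",
    "ao": "rounded", "uw": "rounded", "ow": "rounded",
    "iy": "spread", "eh": "spread", "ae": "spread",
    "sil": "neutral",
}

def phonemes_to_visemes(segments):
    """segments: list of (phoneme, start_s, end_s) from the recogniser.
    Returns timed viseme events to be rendered in sync with the audio."""
    return [(PHONEME_TO_VISEME.get(ph, "neutral"), start, end)
            for ph, start, end in segments]

# Example: a phonetic segmentation of the word "me".
print(phonemes_to_visemes([("m", 0.00, 0.12), ("iy", 0.12, 0.35)]))
```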
ABSTRACT
This paper describes a new approach to automatic speechreading. First, we use an efficient but effective representation of visible speech: a geometric lip-shape model. We then present an automatic, objective method to merge phonemes that appear visually similar into visemes for our speaker. To determine the visemes, we trained a self-organising map (SOM) with the Kohonen algorithm on each phoneme extracted from our visual database. We then present our visual speech recognition systems, based on heuristics and on neural networks (TDNN or JNN) trained to discriminate the visual information. On a continuous spelling task, visual-alone recognition performance of about 37% was achieved with the TDNN and about 33% with the JNN.
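The Kohonen step can be sketched as below: a small self-organising map is trained on lip-shape parameter vectors, and phonemes can then be compared by the map nodes their frames activate. The map size, learning schedule, and the idea of labelling each phoneme by its most frequent winning node are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal numpy sketch of a Kohonen SOM over lip-shape parameter vectors.
import numpy as np

def train_som(data, grid=(6, 6), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    w = rng.normal(size=(n_nodes, data.shape[1]))             # codebook vectors
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(n_iter):
        x = data[rng.integers(len(data))]                     # random sample
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))           # best-matching unit
        lr = lr0 * (1 - t / n_iter)                           # decaying rate
        sigma = sigma0 * (1 - t / n_iter) + 1e-3              # shrinking radius
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                    # neighbourhood
        w += lr * h[:, None] * (x - w)                        # Kohonen update
    return w

def winning_node(w, x):
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))
```

Phonemes whose frames consistently win the same or neighbouring nodes would then be merged into one viseme class.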
ABSTRACT
The Teleface Project, which aims to evaluate the possibilities for a telephone communication aid for hard-of-hearing persons, is presented together with its different parts: audio-visual speech synthesis, visual speech measurement, and multimodal speech intelligibility studies. The experiments showed a noticeable intelligibility advantage from adding face information, for both natural and synthetic faces.
ABSTRACT
This paper presents a new approach to lip tracking for lipreading. Instead of tracking features on the lips alone, we propose to track the lips along with other facial features such as the pupils and nostrils. In the new approach, the face is first located in an image using a stochastic skin-color model; the eyes, lip corners, and nostrils are then located and tracked inside the facial region. The new approach effectively improves the robustness of lip tracking and simplifies automatic detection of, and recovery from, tracking failures. The feasibility of the proposed approach has been demonstrated by implementing a lip tracking system. The system has been tested on a database that contains 900 image sequences of different speakers spelling words, and has successfully extracted lip regions from the image sequences to obtain training data for the audio-visual speech recognition system. The system has also been applied to extract the lip region in real time from live video images to obtain the visual input for an audio-visual speech recognition system. On test sequences, detection and prediction of outliers in the set of found features reduced the number of frames with tracking failures by a factor of two.
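The face-localisation step can be illustrated with a single-Gaussian skin-color model in normalised chromaticity space. This is a hedged sketch under assumptions: the actual stochastic model, colour space, and thresholds used in the paper may differ, and a labelled set of skin pixels is assumed to be available for fitting.

```python
# Hypothetical Gaussian skin-color model in normalised (r, g) space.
import numpy as np

def rg_chromaticity(rgb):
    """rgb: (..., 3) float array. Normalised chromaticity discards brightness."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-6
    return (rgb / s)[..., :2]

def fit_skin_model(skin_pixels_rgb):
    """Fit mean and inverse covariance of skin chromaticity."""
    rg = rg_chromaticity(skin_pixels_rgb.astype(float)).reshape(-1, 2)
    mean = rg.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(rg, rowvar=False))
    return mean, inv_cov

def skin_mask(image_rgb, mean, inv_cov, thresh=9.0):
    """Mark pixels whose squared Mahalanobis distance to the model is small."""
    rg = rg_chromaticity(image_rgb.astype(float))
    d = rg - mean
    m2 = np.einsum("...i,ij,...j->...", d, inv_cov, d)
    return m2 < thresh
```

The largest connected region of the resulting mask would serve as the facial region inside which the eyes, nostrils, and lip corners are searched.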
ABSTRACT
This paper presents a method for extracting articulatory parameters by direct processing of raw images of the lips. The system architecture is made up of three independent parts. First, a new greyscale mouth image is centred and downsampled. Second, the image is aligned and projected onto a basis of artificial images; these images are the eigenvectors computed from a PCA applied to a set of 23 reference lip shapes. Third, a multilinear interpolation predicts the articulatory parameters from the image's projection coefficients onto the eigenvectors. In addition, the projection coefficients and the predicted parameters were evaluated with an HMM-based visual speech recogniser. Recognition scores obtained with our method are compared to reference scores and discussed.
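A minimal numpy sketch of the projection/prediction chain follows: eigenvectors are built from the 23 reference images, a new image is projected onto them, and a mapping from projection coefficients to articulatory parameters is applied. A plain least-squares fit stands in here for the paper's multilinear interpolation, so this is an approximation of that step, not the authors' exact method.

```python
# Hypothetical PCA-projection pipeline for articulatory parameter prediction.
import numpy as np

def build_pca_basis(ref_images, n_components=10):
    """ref_images: (23, H*W) flattened greyscale reference lip shapes."""
    mean = ref_images.mean(axis=0)
    _, _, vt = np.linalg.svd(ref_images - mean, full_matrices=False)
    return mean, vt[:n_components]              # eigenvectors as rows

def project(image, mean, basis):
    """Projection coefficients of a centred, flattened mouth image."""
    return basis @ (image.ravel() - mean)

def fit_coeff_to_params(coeffs, params):
    """Least-squares map from projection coefficients to articulatory params
    (stand-in for the multilinear interpolation)."""
    A = np.hstack([coeffs, np.ones((len(coeffs), 1))])   # add bias term
    W, *_ = np.linalg.lstsq(A, params, rcond=None)
    return W

def predict_params(coeff, W):
    return np.append(coeff, 1.0) @ W
```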