Speechreading essentials: signal, paralinguistic cues, and skill
Björn Lidestam, Björn Lyxell
The influence of the lexicon on visual spoken word recognition
Edward T. Auer Jr., Lynne E. Bernstein, Sven Mattys
TAS: A new test of adult speechreading - deaf people really can be better speechreaders
Tara Ellis, Mairead MacSweeney, Barbara Dodd, Ruth Campbell
Is it easier to lipread one's own speech gestures than those of somebody else? It seems not!
Jean-Luc Schwartz, Christophe Savariaux
Towards the facecoder: dynamic face synthesis based on image motion estimation in speech
Christian Kroos, Saeko Masuda, Takaaki Kuratate, Eric Vatikiotis-Bateson
Viseme space for realistic speech animation
Sumedha Kshirsagar, Nadia Magnenat-Thalmann
Audiovisual speech perception in Williams Syndrome
M. Bohning, Ruth Campbell, A. Karmiloff-Smith
Comparing cortical activity during the perception of two forms of biological motion for language communication
Edward T. Auer Jr., Lynne E. Bernstein, Manbir Singh
Neural areas underlying the processing of visual speech information under conditions of degraded auditory information
Daniel Callan, Akiko Callan, Eric Vatikiotis-Bateson
Similarity structure in visual phonetic perception and optical phonetics
Lynne E. Bernstein, Jintao Jiang, Abeer Alwan, Edward T. Auer Jr.
The mismatch negativity (MMN) and the McGurk effect
C. Colin, M. Radeau, P. Deltenre
A case of multimodal aprosodia: impaired auditory and visual speech prosody perception in a patient with right hemisphere damage
Karen Nicholson, Shari Baum, Lola Cuddy, Kevin Munhall
Extraction of 3D facial motion parameters from mirror-reflected multi-view video for audio-visual synthesis
I-Chen Lin, Jeng-Sheng Yeh, Ming Ouhyoung
Modelling an Italian talking head
C. Pelachaud, E. Magno-Caldognetto, C. Zmarich, P. Cosi
Visual speech synthesis using statistical models of shape and appearance
Barry J. Theobald, J. Andrew Bangham, Iain Matthews, Gavin C. Cawley
Hidden Markov models for visual speech synthesis with limited data
Allan Arb, Steven Gustafson, Timothy Anderson, Raymond Slyh
Creating and controlling video-realistic talking heads
F. Elisei, M. Odisio, Gérard Bailly, Pierre Badin
Multimodal translation
Shigeo Morishima, Shin Ogata, Satoshi Nakamura
Electrophysiology of unimodal and audiovisual speech perception
Lynne E. Bernstein, Curtis W. Ponton, Edward T. Auer Jr.
Development of a lip-sync algorithm based on an audio-visual corpus
Jinyoung Kim, Seungho Choi, Joohun Lee
Analysis of audio-video correlation in vowels in Australian English
Roland Goecke, J. Bruce Millar, Alexander Zelinsky, Jordi Robert-Ribes
Non-verbal correlates to focal accents in Swedish
Christel Ekvall, Bertil Lyberg, Michael Randén
Visible speech cues and auditory detection of spoken sentences: an effect of degree of correlation between acoustic and visual properties
Jeesun Kim, Chris Davis
Speech intelligibility derived from asynchronous processing of auditory-visual information
Ken W. Grant, Steven Greenberg
Asking a naive question about the McGurk effect: Why does audio [b] give more [d] percepts with visual [g] than with visual [d]?
M.A. Cathiard, Jean-Luc Schwartz, C. Abry
Investigating the role of luminance boundaries in visual and audiovisual speech recognition using line drawn faces
M.V. McCotter, T.R. Jordan
Auditory-visual L2 speech perception: Effects of visual cues and acoustic-phonetic context for Spanish learners of English
M. Ortega-Llebaria, A. Faulkner, Valerie Hazan
Visual discrimination of Cantonese tone by tonal but non-Cantonese speakers, and by non-tonal language speakers
Denis Burnham, Susanna Lau, Helen Tam, Colin Schoknecht
Bimodal word identification: effects of modality, speech style, sentence and phonetic/visual context
Debra M. Hardison
Visual attention influences audiovisual speech perception
K. Tiippana, M. Sams, T. S. Andersen
Modeling of audiovisual speech perception in noise
T.S. Andersen, K. Tiippana, J. Lampinen, M. Sams
Automatic speechreading of impaired speech
Gerasimos Potamianos, Chalapathy Neti
Audio-visual recognition of spectrally reduced speech
Frédéric Berthommier
A hybrid ANN/HMM audio-visual speech recognition system
Martin Heckmann, Frédéric Berthommier, Kristian Kroschel
Noise-based audio-visual fusion for robust speech recognition
E. K. Patterson, S. Gurbuz, Z. Tufekci, J. N. Gowdy
LIPPS - A visual telephone for the hearing-impaired
Hans-Heinrich Bothe
Cortical substrates of seeing speech: still and moving faces
G. A. Calvert, M. J. Brammer, Ruth Campbell
Development of a completely computerized McGurk design under variation of the signal to noise ratio
Björn Kabisch, Carol Nisch, Eckart R. Straube, Ruth Campbell
Estimating focus of attention based on gaze and sound
Rainer Stiefelhagen, Jie Yang, Alex Waibel
Obtaining person-independent feature space for lip reading
Jacek C. Wojdel, Leon J.M. Rothkrantz
Animated speech: research progress and applications
Michael M. Cohen, Rashid Clark, Dominic W. Massaro