Who presents worst? A study on expressions of negative feedback in different intergroup contexts
Mandy Visser, Emiel Krahmer, Marc Swerts
Audio-visual speaker conversion using prosody features
Adela Barbulescu, Thomas Hueber, Gérard Bailly, Rémi Ronfard
Spontaneous synchronisation between repetitive speech and rhythmic gesture
Gregory Zelic, Jeesun Kim, Chris Davis
Culture and nonverbal cues: how does power distance influence facial expressions in game contexts?
Phoebe Mui, Martijn Goudbeek, Marc Swerts, Per van der Wijst
Predicting head motion from prosodic and linguistic features
Angelika Hönemann, Diego Evin, Alejandro J. Hadad, Hansjörg Mixdorff, Sascha Fagel
Visual control of hidden-semi-Markov-model based acoustic speech synthesis
Jakob Hollenstein, Michael Pucher, Dietmar Schabus
Objective and subjective feature evaluation for speaker-adaptive visual speech synthesis
Dietmar Schabus, Michael Pucher, Gregor Hofer
Audio-visual interaction in sparse representation features for noise robust audio-visual speech recognition
Peng Shen, Satoshi Tamura, Satoru Hayamizu
Assessing the visual speech perception of sample-based talking heads
Paula D. Paro Costa, José Mario De Martino
Speech animation using electromagnetic articulography as motion capture data
Ingmar Steiner, Korin Richmond, Slim Ouni
Phonetic information in audiovisual speech is more important for adults than for infants: preliminary findings
Martijn Baart, Jean Vroomen, Kathleen E. Shaw, Heather Bortfeld
Audiovisual speech perception in children with autism spectrum disorders and typical controls
Julia R. Irwin, Lawrence Brancazio
Looking for the bouba-kiki effect in prelexical infants
Mathilde Fort, Alexa Weiß, Alexander Martin, Sharon Peperkamp
Audiovisual speech perception in children and adolescents with developmental dyslexia: no deficit with McGurk stimuli
Margriet A. Groen, Alexandra Jesse
Effects of forensically-realistic facial concealment on auditory-visual consonant recognition in quiet and noise conditions
Natalie Fecher, Dominic Watt
Impact of cued speech on audio-visual speech integration in deaf and hearing adults
Clémence Bayard, Cécile Colin, Jacqueline Leybaert
Acoustic and visual adaptations in speech produced to counter adverse listening conditions
Valerie Hazan, Jeesun Kim
Role of audiovisual plasticity in speech recovery after adult cochlear implantation
Pascal Barone, Kuzma Strelnikov, Olivier Déguine
Auditory and auditory-visual Lombard speech perception by younger and older adults
Michael Fitzpatrick, Jeesun Kim, Chris Davis
Integration of acoustic and visual cues in prominence perception
Hansjörg Mixdorff, Angelika Hönemann, Sascha Fagel
Detecting auditory-visual speech synchrony: how precise?
Chris Davis, Jeesun Kim
How far out? The effect of peripheral visual speech on speech perception
Jeesun Kim, Chris Davis
Temporal integration for live conversational speech
Ragnhild Eg, Dawn M. Behne
Mixing faces and voices: a study of the influence of faces and voices on audiovisual intelligibility
Jérémy Miranda, Slim Ouni
The touch of your lips: haptic information speeds up auditory speech processing
Avril Treille, Camille Cordeboeuf, Coriandre Vilain, Marc Sato
Data and simulations about audiovisual asynchrony and predictability in speech perception
Jean-Luc Schwartz, Christophe Savariaux
The effect of musical aptitude on the integration of audiovisual speech and non-speech signals in children
Kaisa Tiippana, Kaupo Viitanen, Riia Kivimäki
The sight of your tongue: neural correlates of audio-lingual speech perception
Avril Treille, Coriandre Vilain, Thomas Hueber, Jean-Luc Schwartz, Laurent Lamalle, Marc Sato
Visual front-end wars: Viola-Jones face detector vs. Fourier Lucas-Kanade
Shahram Kalantari, Rajitha Navarathna, David Dean, Sridha Sridharan
Aspects of co-occurring syllables and head nods in spontaneous dialogue
Simon Alexanderson, David House, Jonas Beskow
Avatar user interfaces in an OSGi-based system for health care services
Sascha Fagel, Andreas Hilbert, Christopher Mayer, Martin Morandell, Matthias Gira, Martin Petzold
Automatic feature selection for acoustic-visual concatenative speech synthesis: towards a perceptual objective measure
Utpala Musti, Vincent Colotte, Slim Ouni, Caroline Lavecchia, Brigitte Wrobel-Dautcourt, Marie-Odile Berger
Modulating fusion in the McGurk effect by binding processes and contextual noise
Olha Nahorna, Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Jean-Luc Schwartz
Visual voice activity detection at different speeds
Bart Joosten, Eric Postma, Emiel Krahmer
GMM mapping of visual features of cued speech from speech spectral features
Zuheng Ming, Denis Beautemps, Gang Feng
Confusion modelling for automated lip-reading using weighted finite-state transducers
Dominic Howell, Barry-John Theobald, Stephen Cox
Transforming neutral visual speech into expressive visual speech
Felix Shaw, Barry-John Theobald
Differences in the audio-visual detection of word prominence from Japanese and English speakers
Martin Heckmann, Keisuke Nakamura, Kazuhiro Nakadai
Speaker separation using visually-derived binary masks
Faheem Khan, Ben Milner
Improvement of lipreading performance using discriminative feature and speaker adaptation
Takumi Seko, Naoya Ukai, Satoshi Tamura, Satoru Hayamizu
Efficient face model for lip reading
Takeshi Saitoh