ISCA Archive AVSP 2009 Booklet

Auditory-Visual Speech Processing

University of East Anglia, Norwich, UK
10-13 September 2009

Papers


Alignment in iconic gestures: does it make sense?
Lisette Mol, Emiel Krahmer, Marc Swerts

Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face
Shuichi Sakamoto, Akihiro Tanaka, Shun Numahata, Atsushi Imai, Tohru Takagi, Yôiti Suzuki

LW2a: an easy tool to transform voice WAV files into talking animations
Piero Cosi, Graziano Tisato

Effects of smiled speech on lips, larynx and acoustics
Sascha Fagel

Visual speech information aids elderly adults in stream segregation
Alexandra Jesse, Esther Janse

The development of speechreading in deaf and hearing children: introducing a new test of child speechreading (ToCS)
Fiona Kyle, Mairead MacSweeney, Tara Mohammed, Ruth Campbell

Audio-visual mutual dependency models for biometric liveness checks
Girija Chetty, Roland Göcke, Michael Wagner

Audiovisual speech perception in Japanese and English: inter-language differences examined by event-related potentials
Satoko Hisanaga, Kaoru Sekiyama, Tomohiko Igasaki, Nobuki Murayama

Effects of visual prominence cues on speech intelligibility
Samer Al Moubayed, Jonas Beskow

Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques
Wesley Mattheyses, Lukas Latacz, Werner Verhelst

Speaker-dependent audio-visual emotion recognition
Sanaul Haq, Philip J. B. Jackson

Audio-visual speech perception in mild cognitive impairment and healthy elderly controls
Natalie A. Phillips, Shari Baum, Vanessa Taler

Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of synthetic humans
Takaaki Kuratate, Kathryn Ayers, Jeesun Kim, Marcia Riley, Denis Burnham

Visual influence on auditory perception: is speech special?
Christian Kroos, Katherine Hogan

Auditory-visual perception of talking faces at birth: a new paradigm
Marion Coulon, Bahia Guellaï, Arlette Streri

Area of mouth opening estimation from speech acoustics using blind deconvolution technique
Cong-Thanh Do, Abdeldjalil Aissa-El-Bey, Dominique Pastor, André Goalic

Comparison of human and machine-based lip-reading
Sarah Hilder, Richard Harvey, Barry-John Theobald

Untying the knot between gestures and speech
Marieke Hoetjes, Emiel Krahmer, Marc Swerts

Can you tell if tongue movements are real or synthesized?
Olov Engwall, Preben Wik

Comparing visual features for lipreading
Yuxuan Lan, Richard Harvey, Barry-John Theobald, Eng-Jon Ong, Richard Bowden

Auditory-visual infant directed speech in Japanese and English
Takaaki Shochi, Kaoru Sekiyama, Nicole Lees, Mark Boyce, Roland Göcke, Denis Burnham

Recalibration of audiovisual simultaneity in speech
Akihiro Tanaka, Kaori Asakawa, Hisato Imai

Audiovisual speech recognition with missing or unreliable data
Dorothea Kolossa, Steffen Zeiler, Alexander Vorwerk, Reinhold Orglmeister

Older and younger adults use fewer neural resources during audiovisual than during auditory speech perception
Axel H. Winneke, Natalie A. Phillips

Strategies and results for the evaluation of the naturalness of the LIPPS facial animation system
Jana Eger, Hans-Heinrich Bothe

Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues
Chris Davis, Jeesun Kim

Effective visually-derived Wiener filtering for audio-visual speech processing
Ibrahim Almajai, Ben Milner

Pairing audio speech and various visual displays: binding or not binding?
Aymeric Devergie, Frédéric Berthommier, Nicolas Grimault

Effects of exhaustivity and uncertainty on audiovisual focus production
Charlotte Wollermann, Bernhard Schröder

Voice activity detection based on fusion of audio and visual information
Shin’ichi Takeuchi, Takashi Hashiba, Satoshi Tamura, Satoru Hayamizu

Space-time audio-visual speech recognition with multiple multi-class probabilistic support vector machines
Samuel Pachoud, Shaogang Gong, Andrea Cavallaro

Refinement of lip shape in sign speech synthesis
Zdeněk Krňoul

An image-based talking head system
Kang Liu, Joern Ostermann

The UWB 3D talking head text-driven system controlled by the SAT method used for the LIPS 2009 challenge
Zdeněk Krňoul, Miloš Železný

Synface - verbal and non-verbal face animation from audio
Jonas Beskow, Giampiero Salvi, Samer Al Moubayed

HMM-based motion trajectory generation for speech animation synthesis
Lijuan Wang, Wei Han, Xiaojun Qian, Frank Soong

