Auditory-Visual Speech Processing (AVSP) 2009

University of East Anglia, Norwich, UK
September 10-13, 2009

Bibliographic Reference

[AVSP-2009] International Conference on Auditory-Visual Speech Processing (AVSP) 2009, University of East Anglia, Norwich, UK, September 10-13, 2009; ed. by Barry-John Theobald and Richard Harvey; ISCA Archive.

Author Index and Quick Access to Abstracts

Aissa-El-Bey   Al Moubayed (169)   Al Moubayed (43)   Almajai   Asakawa   Ayers   Baum   Berthommier   Beskow (169)   Beskow (43)   Bothe   Bowden   Boyce   Burnham (65)   Burnham (107)   Campbell   Cavallaro   Chetty   Cosi   Coulon   Davis   Devergie   Do   Eger   Engwall   Fagel   Goalic   Göcke (32)   Göcke (107)   Gong   Grimault   Guellaï   Han   Haq   Harvey (86)   Harvey (102)   Hashiba   Hayamizu   Hilder   Hisanaga   Hoetjes   Hogan   Igasaki   Imai, Atsushi   Imai, Hisato   Jackson   Janse   Jesse   Kim (65)   Kim (130)   Kolossa   Krahmer (3)   Krahmer (90)   Kroos   Krňoul (167)   Krňoul (161)   Kuratate   Kyle   Lan   Latacz   Lees   Liu   MacSweeney   Mattheyses   Milner   Mohammed   Mol   Murayama   Numahata   Ong   Orglmeister   Ostermann   Pachoud   Pastor   Phillips (59)   Phillips (123)   Qian   Riley   Sakamoto   Salvi   Schröder   Sekiyama (38)   Sekiyama (107)   Shochi   Soong   Streri   Suzuki   Swerts (3)   Swerts (90)   Takagi   Takeuchi   Taler   Tamura   Tanaka (9)   Tanaka (113)   Theobald (86)   Theobald (102)   Tisato   Verhelst   Vorwerk   Wagner   Wang   Wik   Winneke   Wollermann   Zeiler   Železný

Names in boldface indicate first authors; names in CAPITAL letters indicate keynote and invited papers. Full papers can be accessed from the abstracts (ISCA members only). Please note that each abstract opens in a separate window.

Table of Contents and Access to Abstracts

Introduction to the Workshop

Mol, Lisette / Krahmer, Emiel / Swerts, Marc: "Alignment in iconic gestures: does it make sense?", 3-8.

Sakamoto, Shuichi / Tanaka, Akihiro / Numahata, Shun / Imai, Atsushi / Takagi, Tohru / Suzuki, Yôiti: "Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face", 9-12.

Cosi, Piero / Tisato, Graziano: "LW2a: an easy tool to transform voice WAV files into talking animations", 13-17.

Fagel, Sascha: "Effects of smiled speech on lips, larynx and acoustics", 18-21.

Jesse, Alexandra / Janse, Esther: "Visual speech information aids elderly adults in stream segregation", 22-27.

Kyle, Fiona / MacSweeney, Mairead / Mohammed, Tara / Campbell, Ruth: "The development of speechreading in deaf and hearing children: introducing a new test of child speechreading (toCS)", 28-31.

Chetty, Girija / Göcke, Roland / Wagner, Michael: "Audio-visual mutual dependency models for biometric liveness checks", 32-37.

Hisanaga, Satoko / Sekiyama, Kaoru / Igasaki, Tomohiko / Murayama, Nobuki: "Audiovisual speech perception in Japanese and English: inter-language differences examined by event-related potentials", 38-42.

Al Moubayed, Samer / Beskow, Jonas: "Effects of visual prominence cues on speech intelligibility", 43-46.

Mattheyses, Wesley / Latacz, Lukas / Verhelst, Werner: "Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques", 47-53.

Haq, Sanaul / Jackson, Philip J. B.: "Speaker-dependent audio-visual emotion recognition", 53-58.

Phillips, Natalie A. / Baum, Shari / Taler, Vanessa: "Audio-visual speech perception in mild cognitive impairment and healthy elderly controls", 59-64.

Kuratate, Takaaki / Ayers, Kathryn / Kim, Jeesun / Riley, Marcia / Burnham, Denis: "Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of synthetic humans", 65-69.

Kroos, Christian / Hogan, Katherine: "Visual influence on auditory perception: is speech special?", 70-75.

Coulon, Marion / Guellaï, Bahia / Streri, Arlette: "Auditory-visual perception of talking faces at birth: a new paradigm", 76-79.

Do, Cong-Thanh / Aissa-El-Bey, Abdeldjalil / Pastor, Dominique / Goalic, André: "Area of mouth opening estimation from speech acoustics using blind deconvolution technique", 80-85.

Hilder, Sarah / Harvey, Richard / Theobald, Barry-John: "Comparison of human and machine-based lip-reading", 86-89.

Hoetjes, Marieke / Krahmer, Emiel / Swerts, Marc: "Untying the knot between gestures and speech", 90-95.

Engwall, Olov / Wik, Preben: "Can you tell if tongue movements are real or synthesized?", 96-101.

Lan, Yuxuan / Harvey, Richard / Theobald, Barry-John / Ong, Eng-Jon / Bowden, Richard: "Comparing visual features for lipreading", 102-106.

Shochi, Takaaki / Sekiyama, Kaoru / Lees, Nicole / Boyce, Mark / Göcke, Roland / Burnham, Denis: "Auditory-visual infant directed speech in Japanese and English", 107-112.

Tanaka, Akihiro / Asakawa, Kaori / Imai, Hisato: "Recalibration of audiovisual simultaneity in speech", 113-116.

Kolossa, Dorothea / Zeiler, Steffen / Vorwerk, Alexander / Orglmeister, Reinhold: "Audiovisual speech recognition with missing or unreliable data", 117-122.

Winneke, Axel H. / Phillips, Natalie A.: "Older and younger adults use fewer neural resources during audiovisual than during auditory speech perception", 123-126.

Eger, Jana / Bothe, Hans-Heinrich: "Strategies and results for the evaluation of the naturalness of the LIPPS facial animation system", 127-129.

Davis, Chris / Kim, Jeesun: "Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues", 130-133.

Almajai, Ibrahim / Milner, Ben: "Effective visually-derived Wiener filtering for audio-visual speech processing", 134-139.

Devergie, Aymeric / Berthommier, Frédéric / Grimault, Nicolas: "Pairing audio speech and various visual displays: binding or not binding?", 140-146.

Wollermann, Charlotte / Schröder, Bernhard: "Effects of exhaustivity and uncertainty on audiovisual focus production", 145-150.

Takeuchi, Shin’ichi / Hashiba, Takashi / Tamura, Satoshi / Hayamizu, Satoru: "Voice activity detection based on fusion of audio and visual information", 151-154.

Pachoud, Samuel / Gong, Shaogang / Cavallaro, Andrea: "Space-time audio-visual speech recognition with multiple multi-class probabilistic support vector machines", 155-160.

Krňoul, Zdeněk: "Refinement of lip shape in sign speech synthesis", 161-165.

Liu, Kang / Ostermann, Joern: "An image-based talking head system", 166.

Krňoul, Zdeněk / Železný, Miloš: "The UWB 3d talking head text-driven system controlled by the SAT method used for the LIPS 2009 challenge", 167-168.

Beskow, Jonas / Salvi, Giampiero / Al Moubayed, Samer: "Synface - verbal and non-verbal face animation from audio", 169.

Wang, Lijuan / Han, Wei / Qian, Xiaojun / Soong, Frank: "HMM-based motion trajectory generation for speech animation synthesis", 170.