Auditory-Visual Speech Processing (AVSP) 2013

Annecy, France
August 29 - September 1, 2013

Bibliographic Reference

[AVSP-2013] Auditory-Visual Speech Processing (AVSP) 2013, ed. by Slim Ouni, Frédéric Berthommier, and Alexandra Jesse; ISCA Archive.

Introduction to the Workshop

Author Index and Quick Access to Abstracts

Alexanderson   Baart   Bailly   Barbulescu   Barone   Bayard   Beautemps   Behne   Berger   Berthommier   Beskow   Bortfeld   Brancazio   CANGELOSI   Chandrashekara   Colin   Colotte   Cordeboeuf   Cox   Davis (17)   Davis (105)   Davis (117)   Davis (123)   Dean   Déguine   Eg   Evin   Fagel (27)   Fagel (111)   Fagel (173)   Fecher   Feng   Fitzpatrick   Fort   Gira   Goudbeek   Groen   Hadad   Hayamizu (43)   Hayamizu (221)   Hazan   Heckmann   Hilbert   Hönemann (27)   Hönemann (111)   Hofer   Hollenstein   House   Howell   Hueber (11)   Hueber (157)   Irwin   Jesse   Joosten   Kalantari   Khan   Kim (17)   Kim (93)   Kim (105)   Kim (117)   Kim (123)   Kivimäki   Krahmer (5)   Krahmer (187)   Lamalle   Lavecchia   Leybaert   Martin   Martino   Mayer   Milner   Ming   Miranda   Mixdorff (27)   Mixdorff (111)   Morandell   Mui   Musti   Nahorna   Nakadai   Nakamura   Navarathna   Ouni (55)   Ouni (135)   Ouni (175)   Paro Costa   Peperkamp   Petzold   Postma   Pucher, Michael (31)   Pucher, Michael (37)   Richmond   Ronfard   Saitoh   Sato (141)   Sato (157)   Savariaux   Schabus (31)   Schabus (37)   Schwartz (147)   Schwartz (157)   Schwartz (181)   Seko   Shaw, Felix   Shaw, Kathleen E.   Shen   SPENCE   Sridharan   Steiner   Strelnikov   Swerts (5)   Swerts (21)   Tamura (43)   Tamura (221)   Theobald (197)   Theobald (203)   Tiippana   Treille (141)   Treille (157)   Ukai   Viitanen   Vilain (141)   Vilain (157)   Visser   Vroomen   Watt   Weiß   van der Wijst   Zelic

Names written in boldface refer to first authors; names in CAPITAL letters refer to keynote and invited papers. Full papers can be accessed from the abstracts.

Table of Contents and Access to Abstracts

Invited Papers

Cangelosi, Angelo: "Embodied language learning with the humanoid robot iCub", 1.

Spence, Charles: "Audiovisual speech integration: modulatory factors and the link to sound symbolism", 3.

Audiovisual Prosody

Visser, Mandy / Krahmer, Emiel / Swerts, Marc: "Who presents worst? a study on expressions of negative feedback in different intergroup contexts", 5-10.

Barbulescu, Adela / Hueber, Thomas / Bailly, Gérard / Ronfard, Rémi: "Audio-visual speaker conversion using prosody features", 11-16.

Zelic, Gregory / Kim, Jeesun / Davis, Chris: "Spontaneous synchronisation between repetitive speech and rhythmic gesture", 17-20.

Mui, Phoebe / Goudbeek, Martijn / Swerts, Marc / Wijst, Per van der: "Culture and nonverbal cues: how does power distance influence facial expressions in game contexts?", 21-26.

Hönemann, Angelika / Evin, Diego / Hadad, Alejandro J. / Mixdorff, Hansjörg / Fagel, Sascha: "Predicting head motion from prosodic and linguistic features", 27-30.

Audiovisual Speech by Machines

Hollenstein, Jakob / Pucher, Michael / Schabus, Dietmar: "Visual control of hidden semi-Markov model based acoustic speech synthesis", 31-36.

Schabus, Dietmar / Pucher, Michael / Hofer, Gregor: "Objective and subjective feature evaluation for speaker-adaptive visual speech synthesis", 37-42.

Shen, Peng / Tamura, Satoshi / Hayamizu, Satoru: "Audio-visual interaction in sparse representation features for noise robust audio-visual speech recognition", 43-48.

Paro Costa, Paula D. / Martino, José Mario De: "Assessing the visual speech perception of sample-based talking heads", 49-54.

Steiner, Ingmar / Richmond, Korin / Ouni, Slim: "Speech animation using electromagnetic articulography as motion capture data", 55-60.

Development of Audiovisual Speech Perception

Baart, Martijn / Vroomen, Jean / Shaw, Kathleen E. / Bortfeld, Heather: "Phonetic information in audiovisual speech is more important for adults than for infants: preliminary findings", 61-64.

Irwin, Julia R. / Brancazio, Lawrence: "Audiovisual speech perception in children with autism spectrum disorders and typical controls", 65-70.

Fort, Mathilde / Weiß, Alexa / Martin, Alexander / Peperkamp, Sharon: "Looking for the bouba-kiki effect in prelexical infants", 71-76.

Groen, Margriet A. / Jesse, Alexandra: "Audiovisual speech perception in children and adolescents with developmental dyslexia: no deficit with McGurk stimuli", 77-80.

Audiovisual Speech Perception in Adverse Listening

Fecher, Natalie / Watt, Dominic: "Effects of forensically-realistic facial concealment on auditory-visual consonant recognition in quiet and noise conditions", 81-86.

Bayard, Clémence / Colin, Cécile / Leybaert, Jacqueline: "Impact of cued speech on audio-visual speech integration in deaf and hearing adults", 87-92.

Hazan, Valerie / Kim, Jeesun: "Acoustic and visual adaptations in speech produced to counter adverse listening conditions", 93-98.

Barone, Pascal / Strelnikov, Kuzma / Déguine, Olivier: "Role of audiovisual plasticity in speech recovery after adult cochlear implantation", 99-104.

Fitzpatrick, Michael / Kim, Jeesun / Davis, Chris: "Auditory and auditory-visual Lombard speech perception by younger and older adults", 105-110.

Binding of Audiovisual Speech Information

Mixdorff, Hansjörg / Hönemann, Angelika / Fagel, Sascha: "Integration of acoustic and visual cues in prominence perception", 111-116.

Davis, Chris / Kim, Jeesun: "Detecting auditory-visual speech synchrony: how precise?", 117-122.

Kim, Jeesun / Davis, Chris: "How far out? the effect of peripheral visual speech on speech perception", 123-128.

Eg, Ragnhild / Behne, Dawn M.: "Temporal integration for live conversational speech", 129-134.

Miranda, Jérémy / Ouni, Slim: "Mixing faces and voices: a study of the influence of faces and voices on audiovisual intelligibility", 135-140.

Neuropsychology and Multimodality

Treille, Avril / Cordeboeuf, Camille / Vilain, Coriandre / Sato, Marc: "The touch of your lips: haptic information speeds up auditory speech processing", 141-146.

Schwartz, Jean-Luc / Savariaux, Christophe: "Data and simulations about audiovisual asynchrony and predictability in speech perception", 147-152.

Tiippana, Kaisa / Viitanen, Kaupo / Kivimäki, Riia: "The effect of musical aptitude on the integration of audiovisual speech and non-speech signals in children", 153-156.

Treille, Avril / Vilain, Coriandre / Hueber, Thomas / Schwartz, Jean-Luc / Lamalle, Laurent / Sato, Marc: "The sight of your tongue: neural correlates of audio-lingual speech perception", 157-162.

Poster Sessions

Kalantari, Shahram / Navarathna, Rajitha / Dean, David / Sridharan, Sridha: "Visual front-end wars: Viola-Jones face detector vs Fourier Lucas-Kanade", 163-168.

Alexanderson, Simon / House, David / Beskow, Jonas: "Aspects of co-occurring syllables and head nods in spontaneous dialogue", 169-172.

Fagel, Sascha / Hilbert, Andreas / Mayer, Christopher / Morandell, Martin / Gira, Matthias / Petzold, Martin: "Avatar user interfaces in an OSGi-based system for health care services", 173-174.

Musti, Utpala / Colotte, Vincent / Ouni, Slim / Lavecchia, Caroline / Wrobel-Dautcourt, Brigitte / Berger, Marie-Odile: "Automatic feature selection for acoustic-visual concatenative speech synthesis: towards a perceptual objective measure", 175-180.

Nahorna, Olha / Chandrashekara, Ganesh Attigodu / Berthommier, Frédéric / Schwartz, Jean-Luc: "Modulating fusion in the McGurk effect by binding processes and contextual noise", 181-186.

Joosten, Bart / Postma, Eric / Krahmer, Emiel: "Visual voice activity detection at different speeds", 187-190.

Ming, Zuheng / Beautemps, Denis / Feng, Gang: "GMM mapping of visual features of cued speech from speech spectral features", 191-196.

Howell, Dominic / Theobald, Barry-John / Cox, Stephen: "Confusion modelling for automated lip-reading using weighted finite-state transducers", 197-202.

Shaw, Felix / Theobald, Barry-John: "Transforming neutral visual speech into expressive visual speech", 203-208.

Heckmann, Martin / Nakamura, Keisuke / Nakadai, Kazuhiro: "Differences in the audio-visual detection of word prominence from Japanese and English speakers", 209-214.

Khan, Faheem / Milner, Ben: "Speaker separation using visually-derived binary masks", 215-220.

Seko, Takumi / Ukai, Naoya / Tamura, Satoshi / Hayamizu, Satoru: "Improvement of lipreading performance using discriminative feature and speaker adaptation", 221-226.

Saitoh, Takeshi: "Efficient face model for lip reading", 227-232.