ISCA Archive AVSP 1997

Auditory-Visual Speech Processing

Rhodes, Greece
26-27 September 1997

Papers


The perception of mouthshape: photographic images of natural speech sounds can be perceived categorically
Ruth Campbell, P. J. Benson, S. B. Wallace

Italian consonantal visemes: relationships between spatial/temporal articulatory characteristics and coproduced acoustic signal
Emanuela Magno Caldognetto, C. Zmarich, Piero Cosi, Franco Ferrero

Negative effect of homophones on speechreading in Japanese
Shizuo Hiki, Yumiko Fukuda

Visual rhyming effects in deaf children
Jacqueline Leybaert, Daniela Marchetti

Context sensitive faces
Isabella Poggi, Catherine Pelachaud

Effects of phonetic variation and the structure of the lexicon on the uniqueness of words
E. T. Auer Jr., L. E. Bernstein, R. S. Waldstein, P. E. Tucker

A methodology to quantify the contribution of visual and prosodic information to the process of speech comprehension
Loredana Cerrato, Federico Albano Leoni, Andrea Paoloni

The effects of speaking rate on visual speech intelligibility
Jean-Pierre Gagné, Lina Boutin

Micro- and macro-bimodality
Emanuela Magno Caldognetto, Isabella Poggi

Can the visual input make the audio signal "pop out" in noise? A first study of the enhancement of noisy VCV acoustic sequences by audio-visual fusion
L. Girin, Jean-Luc Schwartz, G. Feng

Quantitative association of orofacial and vocal-tract shapes
Hani Yehia, Philip Rubin, Eric Vatikiotis-Bateson

Phonological representation and speech understanding with cochlear implants in deafened adults
Björn Lyxell, Ulf Andersson, Stig Arlinger, Henrik Harder, Jerker Rönnberg

Audio visual speech recognition and segmental master slave HMM
Regine André-Obrecht, Bruno Jacob, Nathalie Parlangeau

Combining noise compensation with visual information in speech recognition
Stephen Cox, Iain Matthews, Andrew Bangham

Neural architectures for sensor fusion in speech recognition
G. Krone, B. Talk, A. Wichert, G. Palm

Adaptive determination of audio and visual weights for automatic speech recognition
Alexandrina Rogozan, Paul Deléglise, Mamoun Alissali

Speaker independent audio-visual database for bimodal ASR
Gerasimos Potamianos, Eric Cosatto, Hans Peter Graf, David B. Roe

Word-dependent acoustic-labial weights in HMM-based speech recognition
Pierre Jourlin

Audio-visual speech perception without traditional speech cues: a second report
Robert E. Remez, Jennifer M. Fellowes, David B. Pisoni, Winston D. Goh, Philip E. Rubin

Impairment of visual speech integration in prosopagnosia
Beatrice de Gelder, Nancy Etcoff, Jean Vroomen

Audiovisual intelligibility of an androgynous speaker
C. Schwippert, Christian Benoît

Audiovisual speech perception in dyslexics: impaired unimodal perception but no audiovisual integration deficit
Ruth Campbell, A. Whittingham, U. Frith, Dominic W. Massaro, M. M. Cohen

Elucidating the complex relationships between phonetic perception and word recognition in audiovisual speech perception
L. E. Bernstein, P. Iverson, E. T. Auer Jr.

The Japanese McGurk effect: the role of linguistic and cultural factors in auditory-visual speech perception
Denis Burnham, Sheila Keane

Auditory-visual interaction in voice localization and in bimodal speech recognition: the effects of desynchronization
Paul Bertelson, Jean Vroomen, Beatrice de Gelder

Audiovisual fusion in Finnish syllables and words
M. Sams, V. Surakka, P. Helin, R. Kättö

Analytical method for linguistic information of facial gestures in natural dialogue languages
A. Ichikawa, Y. Okada, A. Imiya, K. Horiuchi

An approach to face localization based on signature analysis
B. Raducanu, M. Graña

Preprocessing of visual speech under real world conditions
Uwe Meier, Rainer Stiefelhagen, Jie Yang

A hybrid approach to orientation-free liptracking
L. Revéret, F. Garcia, Christian Benoît, Eric Vatikiotis-Bateson

Recovering 3D lip structure from 2D observations using a model trained from video
Sumit Basu, Alex Pentland

Interpreted multi-state lip models for audio-visual speech recognition
Michael Vogt

Intelligibility of speech mediated by low frame-rate video
Anne H. Anderson, Art Blokland

Lip synchronization of speech
David F. McAllister, Robert D. Rodman, Donald L. Bitzer, Andrew S. Freeman

Speech to lip movement synthesis by HMM
Eli Yamamoto, Satoshi Nakamura, Kiyohiro Shikano

Videorealistic talking faces: a morphing approach
Tony Ezzat, Tomaso Poggio

A French-speaking synthetic head
Bertrand Le Goff, Christian Benoît

Animation of talking agents
Jonas Beskow

Video rewrite: visual speech synthesis from video
Christoph Bregler, Michele Covell, Malcolm Slaney

