ESCA Workshop on Audio-Visual Speech Processing (AVSP'97)

September 26-27, 1997
Rhodes, Greece



Bibliographic Reference

[AVSP-1997] ESCA Workshop on Audio-Visual Speech Processing (AVSP'97), Rhodes, Greece, September 26-27, 1997, ed. by Christian Benoît and Ruth Campbell, ISCA Archive, http://www.isca-speech.org/archive_open/avsp97



Author Index and Quick Access to Abstracts

Albano Leoni   Alissali   Anderson   Andersson   André-Obrecht   Arlinger   Auer Jr. (21)   Auer Jr. (89)   Bangham   Basu   Benoît (81)   Benoît (117)   Benoît (145)   Benson   Bernstein, L. E. (21)   Bernstein, L. E. (89)   Bertelson   Beskow   Bitzer   Blokland   Boutin   Bregler   Burnham   Campbell (1)   Campbell (85)   Cerrato   Cohen   Cosatto   Cosi   Covell   Cox   Deléglise   Etcoff   Ezzat   Fellowes   Feng   Ferrero   Freeman   Frith   Fukuda   Gagné   Garcia   Gelder (77)   Gelder (97)   Girin   Goff   Goh   Graf   Grana   Harder   Helin   Hiki   Horiuchi   Ichikawa   Imiya   Iverson   Jacob   Jourlin   Kättö   Keane   Krone   Leybaert   Lyxell   Magno Caldognetto (5)   Magno Caldognetto (33)   Marchetti   Massaro   Matthews   McAllister   Meier   Nakamura   Okada   Palm   Paoloni   Parlangeau   Pelachaud   Pentland   Pisoni   Poggi (17)   Poggi (33)   Poggio   Potamianos   Raducanu   Remez   Revéret   Rodman   Roe   Rogozan   Rönnberg   Rubin, Philip   Rubin, Philip E.   Sams   Schwartz   Schwippert   Shikano   Slaney   Stiefelhagen   Surakka   Talk   Tucker   Vatikiotis-Bateson (41)   Vatikiotis-Bateson (117)   Vogt   Vroomen (77)   Vroomen (97)   Waldstein   Wallace   Whittingham   Wichert   Yamamoto   Yang   Yehia   Zmarich

Names written in boldface refer to first authors.



Table of Contents and Access to Abstracts

Campbell, Ruth / Benson, P. J. / Wallace, S. B.: "The perception of mouthshape: photographic images of natural speech sounds can be perceived categorically", 1-4.

Magno Caldognetto, Emanuela / Zmarich, C. / Cosi, Piero / Ferrero, Franco: "Italian consonantal visemes: relationships between spatial/temporal articulatory characteristics and coproduced acoustic signal", 5-8.

Hiki, Shizuo / Fukuda, Yumiko: "Negative effect of homophones on speechreading in Japanese", 9-12.

Leybaert, Jacqueline / Marchetti, Daniela: "Visual rhyming effects in deaf children", 13-16.

Poggi, Isabella / Pelachaud, Catherine: "Context sensitive faces", 17-20.

Auer Jr., E. T. / Bernstein, L. E. / Waldstein, R. S. / Tucker, P. E.: "Effects of phonetic variation and the structure of the lexicon on the uniqueness of words", 21-24.

Cerrato, Loredana / Albano Leoni, Federico / Paoloni, Andrea: "A methodology to quantify the contribution of visual and prosodic information to the process of speech comprehension", 25-28.

Gagné, Jean-Pierre / Boutin, Lina: "The effects of speaking rate on visual speech intelligibility", 29-32.

Magno Caldognetto, Emanuela / Poggi, Isabella: "Micro- and macro-bimodality", 33-36.

Girin, L. / Schwartz, Jean-Luc / Feng, G.: "Can the visual input make the audio signal 'pop out' in noise? A first study of the enhancement of noisy VCV acoustic sequences by audio-visual fusion", 37-40.

Yehia, Hani / Rubin, Philip / Vatikiotis-Bateson, Eric: "Quantitative association of orofacial and vocal-tract shapes", 41-44.

Lyxell, Björn / Andersson, Ulf / Arlinger, Stig / Harder, Henrik / Rönnberg, Jerker: "Phonological representation and speech understanding with cochlear implants in deafened adults", 45-48.

André-Obrecht, Regine / Jacob, Bruno / Parlangeau, Nathalie: "Audio visual speech recognition and segmental master slave HMM", 49-52.

Cox, Stephen / Matthews, Iain / Bangham, Andrew: "Combining noise compensation with visual information in speech recognition", 53-56.

Krone, G. / Talk, B. / Wichert, A. / Palm, G.: "Neural architectures for sensor fusion in speech recognition", 57-60.

Rogozan, Alexandrina / Deléglise, Paul / Alissali, Mamoun: "Adaptive determination of audio and visual weights for automatic speech recognition", 61-64.

Potamianos, Gerasimos / Cosatto, Eric / Graf, Hans Peter / Roe, David B.: "Speaker independent audio-visual database for bimodal ASR", 65-68.

Jourlin, Pierre: "Word-dependent acoustic-labial weights in HMM-based speech recognition", 69-72.

Remez, Robert E. / Fellowes, Jennifer M. / Pisoni, David B. / Goh, Winston D. / Rubin, Philip E.: "Audio-visual speech perception without traditional speech cues: a second report", 73-76.

Gelder, Beatrice de / Etcoff, Nancy / Vroomen, Jean: "Impairment of visual speech integration in prosopagnosia", 77-80.

Schwippert, C. / Benoît, Christian: "Audiovisual intelligibility of an androgynous speaker", 81-84.

Campbell, Ruth / Whittingham, A. / Frith, U. / Massaro, Dominic W. / Cohen, M. M.: "Audiovisual speech perception in dyslexics: impaired unimodal perception but no audiovisual integration deficit", 85-88.

Bernstein, L. E. / Iverson, P. / Auer Jr., E. T.: "Elucidating the complex relationships between phonetic perception and word recognition in audiovisual speech perception", 89-92.

Burnham, Denis / Keane, Sheila: "The Japanese McGurk effect: the role of linguistic and cultural factors in auditory-visual speech perception", 93-96.

Bertelson, Paul / Vroomen, Jean / Gelder, Beatrice de: "Auditory-visual interaction in voice localization and in bimodal speech recognition: the effects of desynchronization", 97-100.

Sams, M. / Surakka, V. / Helin, P. / Kättö, R.: "Audiovisual fusion in Finnish syllables and words", 101-104.

Ichikawa, A. / Okada, Y. / Imiya, A. / Horiuchi, K.: "Analytical method for linguistic information of facial gestures in natural dialogue languages", 105-108.

Raducanu, B. / Grana, M.: "An approach to face localization based on signature analysis", 109-112.

Meier, Uwe / Stiefelhagen, Rainer / Yang, Jie: "Preprocessing of visual speech under real world conditions", 113-116.

Revéret, L. / Garcia, F. / Benoît, Christian / Vatikiotis-Bateson, Eric: "A hybrid approach to orientation-free liptracking", 117-120.

Basu, Sumit / Pentland, Alex: "Recovering 3D lip structure from 2D observations using a model trained from video", 121-124.

Vogt, Michael: "Interpreted multi-state lip models for audio-visual speech recognition", 125-128.

Anderson, Anne H. / Blokland, Art: "Intelligibility of speech mediated by low frame-rate video", 129-132.

McAllister, David F. / Rodman, Robert D. / Bitzer, Donald L. / Freeman, Andrew S.: "Lip synchronization of speech", 133-136.

Yamamoto, Eli / Nakamura, Satoshi / Shikano, Kiyohiro: "Speech to lip movement synthesis by HMM", 137-140.

Ezzat, Tony / Poggio, Tomaso: "Videorealistic talking faces: a morphing approach", 141-144.

Goff, Bertrand Le / Benoît, Christian: "A French-speaking synthetic head", 145-148.

Beskow, Jonas: "Animation of talking agents", 149-152.

Bregler, Christoph / Covell, Michele / Slaney, Malcolm: "Video rewrite: visual speech synthesis from video", 153-156.