EUROSPEECH '97
5th European Conference on Speech Communication and Technology

Rhodes, Greece
September 22-25, 1997


Speech Translation Based on Automatically Trainable Finite-State Models

Juan Carlos Amengual (1), Jose Miguel Benedi (2), Klaus Beulen (3), Francisco Casacuberta (2), Asuncion Castano (1), Antonio Castellanos (1), Victor M. Jimenez (2), David Llorens (2), Andres Marzal (1), Hermann Ney (3), Federico Prat (1), Enrique Vidal (2), Juan Miguel Vilar (1)

(1) Unidad Predepartamental de Informatica, Campus Penyeta Roja, Universitat Jaume I, Castellon, Spain
(2) Departamento de Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Valencia, Spain
(3) Lehrstuhl für Informatik VI, RWTH Aachen University of Technology, Aachen, Germany

This paper extends previous work exploring the use of Subsequential Transducers to perform speech-input translation in limited-domain tasks. An integrated approach is followed in which a Subsequential Transducer replaces the input-language model of a conventional speech recognition system and serves as both language model and translation model, so that the search for the recognised sentence also produces the corresponding translation. A corpus-based approach is adopted in order to build the required models from training data. Experimental results are presented for the translation task considered in the EUTRANS project: a hotel-domain task with more than 500 words per language and language perplexities close to 10.
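As a rough illustration of the finite-state device the abstract refers to (a minimal sketch with a toy example, not the transducer learnt from the EUTRANS corpora), a subsequential transducer consumes one source word per transition, emits a possibly empty string of target words, and appends a state-dependent string when the input ends; a single deterministic left-to-right pass over the recognised sentence therefore yields its translation, which is what lets the same machine act as both language model and translation model during the search. A Python sketch:

# Minimal sketch of a subsequential transducer (SST); illustrative only.

class SubsequentialTransducer:
    def __init__(self, initial_state, transitions, final_output):
        self.initial_state = initial_state
        self.transitions = transitions    # (state, source word) -> (next state, emitted target words)
        self.final_output = final_output  # accepting state -> target words appended at end of input

    def translate(self, source_words):
        # Deterministic left-to-right pass: the translation is built while
        # the source sentence is consumed.
        state, output = self.initial_state, []
        for word in source_words:
            state, emitted = self.transitions[(state, word)]
            output.extend(emitted)
        return output + self.final_output[state]

# Toy Spanish-to-English fragment: output is delayed until enough context
# has been seen to decide the target word order.
sst = SubsequentialTransducer(
    initial_state=0,
    transitions={
        (0, "una"): (1, []),
        (1, "habitacion"): (2, []),
        (2, "doble"): (3, ["a", "double", "room"]),
    },
    final_output={2: ["a", "room"], 3: []},
)

print(" ".join(sst.translate(["una", "habitacion", "doble"])))  # a double room
print(" ".join(sst.translate(["una", "habitacion"])))           # a room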

Full Paper

Bibliographic reference. Amengual, Juan Carlos / Benedi, Jose Miguel / Beulen, Klaus / Casacuberta, Francisco / Castano, Asuncion / Castellanos, Antonio / Jimenez, Victor M. / Llorens, David / Marzal, Andres / Ney, Hermann / Prat, Federico / Vidal, Enrique / Vilar, Juan Miguel (1997): "Speech translation based on automatically trainable finite-state models", In EUROSPEECH-1997, 1439-1442.