This paper describes research on the integration of the MIT SUMMIT speech recognition system with the TINA language understanding system. Our goal is the creation of a spoken language system whose input consists of spontaneous, speaker-independent spoken queries and whose output consists of cooperative responses to those queries. We describe a series of experiments to test the hypothesis that a combination of linguistic and acoustic information can improve system performance over the use of acoustic information alone. We use several configurations, moving from a loosely coupled interface between the recognizer and the language understanding system to a tightly coupled system in which the language understanding component predicts possible next words for the recognizer. We achieved improvement in two areas. First, for the set of sentences that had an answer given a perfect transcription, we improved the percentage of sentences correctly understood from 23.4% using no linguistic information to 67.6% in the tightly coupled system, where sentence hypotheses are sorted by a linear combination of acoustic and linguistic scores. Second, we improved the overall system score (defined as percent correct minus percent incorrect) from 12.5% with no linguistic information to 29.4% in the tightly coupled system. This was done by incorporating rejection criteria based on linguistic score and measures of work.
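The re-sorting and rejection steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the combination weight, and the rejection threshold are all hypothetical values chosen for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    """A sentence hypothesis with its two scores (log-probability-like)."""
    text: str
    acoustic_score: float
    linguistic_score: float

def combined_score(h: Hypothesis, weight: float = 1.0) -> float:
    # Linear combination of acoustic and linguistic scores, as the
    # abstract describes; the weight here is an illustrative assumption.
    return h.acoustic_score + weight * h.linguistic_score

def best_hypothesis(hyps: list[Hypothesis],
                    weight: float = 1.0,
                    reject_threshold: float = -10.0) -> Optional[Hypothesis]:
    """Sort hypotheses by combined score and apply a rejection criterion.

    Returning None (no answer) instead of a likely-wrong answer is what
    raises the percent-correct-minus-percent-incorrect system score.
    The threshold value is hypothetical.
    """
    ranked = sorted(hyps, key=lambda h: combined_score(h, weight), reverse=True)
    top = ranked[0]
    if top.linguistic_score < reject_threshold:
        return None  # reject: the language model finds no plausible parse
    return top
```

The design point is that rejection trades coverage for precision: under a score of percent correct minus percent incorrect, declining to answer costs nothing, while answering wrongly is penalized.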
Bibliographic reference. Goodine, David / Seneff, Stephanie / Hirschman, Lynette / Phillips, Michael (1991): "Full integration of speech and language understanding in the MIT spoken language system", In EUROSPEECH-1991, 845-848.