Semantic Hidden Markov Networks (SHMNs) were first introduced as a new technique for interfacing between linguistic analysis and word recognition in speech understanding systems. The main difference between SHMNs and traditional language models is that SHMNs always refer to a linguistic concept and impose the linguistic structure as closely as possible on its acoustic counterpart: a hierarchically structured HMM. Normally the result of decoding an HMM is merely the sequence of best-fitting elementary acoustic concepts, e.g. phonemes or words. Taking the structure of the recognition task into account, a structured instance can be computed instead. This complex acoustic instance can easily be transformed into a linguistic instance by a recursive computation, without any search. In this paper we present an algorithm for generating linguistic instances from word recognition results based on SHMNs. Additionally, we present recognition results obtained when evaluating a set of SHMNs on the task domain of train schedule information.
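The recursive, search-free transformation the abstract alludes to can be illustrated with a small sketch. This is not the authors' implementation; it assumes a hypothetical representation in which the structured decoder emits, for each recognized word, the path of semantic concepts it was reached through in the hierarchical HMM. Consecutive words sharing a concept prefix are then grouped into a tree by a single recursive pass, with no searching involved.

```python
def build_instance(annotated, depth=0):
    """Recursively group (concept_path, word) pairs into a linguistic instance.

    `annotated` is a list of (path, word) tuples, where `path` is the tuple of
    semantic concepts leading to the word in the hierarchy. Words sharing the
    same concept at the current depth are gathered into one subtree.
    """
    tree = []
    i = 0
    while i < len(annotated):
        path, word = annotated[i]
        if depth == len(path):
            # The word sits directly at this level of the hierarchy.
            tree.append(word)
            i += 1
        else:
            # Collect the maximal run of words under the same concept,
            # then recurse one level deeper -- a single linear pass.
            concept = path[depth]
            j = i
            while (j < len(annotated)
                   and len(annotated[j][0]) > depth
                   and annotated[j][0][depth] == concept):
                j += 1
            tree.append((concept, build_instance(annotated[i:j], depth + 1)))
            i = j
    return tree


# Toy example loosely modeled on the train-schedule domain (concept names
# are invented for illustration).
decoded = [
    (("REQUEST",), "when"),
    (("REQUEST",), "does"),
    (("CONNECTION", "ORIGIN"), "hamburg"),
    (("CONNECTION", "DEST"), "munich"),
]
print(build_instance(decoded))
# -> [('REQUEST', ['when', 'does']),
#     ('CONNECTION', [('ORIGIN', ['hamburg']), ('DEST', ['munich'])])]
```

The run collection makes the transformation linear in the number of recognized words: each pair is visited once per level of its concept path, and no alternative groupings are ever explored.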
Keywords: Speech Recognition and Understanding
Bibliographic reference. Fink, Gernot A. / Kummert, Franz / Sagerer, Gerhard / Schukat-Talamazzini, Ernst G. (1993): "Speech recognition using semantic hidden Markov networks", In EUROSPEECH'93, 1571-1574.