Towards a Knowledge Graph based Speech Interface

Ashwini Jaya Kumar, Sören Auer, Christoph Schmidt, Joachim Köhler


Applications that use human speech as input require a speech interface with high recognition accuracy. The words or phrases in the recognized text are annotated with machine-understandable meaning and linked to knowledge graphs for further processing by the target application. This type of knowledge representation makes it possible to use a speech interface with any spoken-input application: since the information is represented in a logical, semantic form, it can be retrieved and stored using standard web query languages. In this work, we develop a methodology for linking speech input to knowledge graphs. We show that for a corpus with a lower word error rate (WER), the annotation and linking of entities to the DBpedia knowledge graph improves considerably. DBpedia Spotlight, a tool for interlinking text documents with linked open data, is used to link the speech recognition output to the DBpedia knowledge graph. Such a knowledge-based speech recognition interface is useful for applications such as question answering or spoken dialog systems.
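The paper itself does not include code; as an illustrative sketch of the linking step described above, the snippet below assumes the JSON shape returned by DBpedia Spotlight's `/annotate` REST endpoint, where linked entities appear under a `Resources` array with `@surfaceForm` and `@URI` fields. The sample response is hand-made for illustration, not output from the paper's experiments.

```python
import json

def extract_entity_links(spotlight_json: str) -> dict:
    """Map each recognized surface form to its DBpedia resource URI.

    Assumes the DBpedia Spotlight annotation response format, in which
    linked entities are listed under "Resources" with "@surfaceForm"
    (the matched text span) and "@URI" (the DBpedia resource) keys.
    """
    response = json.loads(spotlight_json)
    return {
        r["@surfaceForm"]: r["@URI"]
        for r in response.get("Resources", [])
    }

# Hand-made sample response for an ASR hypothesis such as
# "berlin is the capital of germany":
sample = json.dumps({
    "@text": "berlin is the capital of germany",
    "Resources": [
        {"@surfaceForm": "berlin",
         "@URI": "http://dbpedia.org/resource/Berlin"},
        {"@surfaceForm": "germany",
         "@URI": "http://dbpedia.org/resource/Germany"},
    ],
})

links = extract_entity_links(sample)
print(links["berlin"])  # http://dbpedia.org/resource/Berlin
```

In a full pipeline, the ASR hypothesis would be sent to the Spotlight service (with a confidence threshold tuned to the expected WER) and the resulting URIs passed on to the downstream question answering or dialog component.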


DOI: 10.21437/GLU.2017-2

Cite as: Kumar, A.J., Auer, S., Schmidt, C., Köhler, J. (2017) Towards a Knowledge Graph based Speech Interface. Proc. GLU 2017 International Workshop on Grounding Language Understanding, 8-12, DOI: 10.21437/GLU.2017-2.


@inproceedings{Kumar2017,
  author={Ashwini Jaya Kumar and Sören Auer and Christoph Schmidt and Joachim Köhler},
  title={Towards a Knowledge Graph based Speech Interface},
  year=2017,
  booktitle={Proc. GLU 2017 International Workshop on Grounding Language Understanding},
  pages={8--12},
  doi={10.21437/GLU.2017-2},
  url={http://dx.doi.org/10.21437/GLU.2017-2}
}