The Spoken Wikipedia project unites volunteer readers of encyclopedic entries. Their recordings make encyclopedic knowledge accessible to people who are unable to read, whether due to alexia, visual impairment, or because their eyes are otherwise occupied, e.g., while driving. However, on Wikipedia these recordings are available only as raw audio files that can be consumed just linearly, without any possibility of targeted navigation or search. We present a reading application that uses an alignment between the recording, the text, and the article structure, and that allows users to navigate spoken articles through a graphical user interface (GUI), a voice-based user interface (VUI), or a combination of the two. We present the results of a usability study in which we compare the two interaction modalities. We find that both types of interaction enable users to navigate articles and to find specific information much more quickly than with a sequential presentation of the full article. In particular, when the VUI is not hampered by speech recognition and understanding issues, it is on par with the graphical interface and thus a real option for browsing Wikipedia without the need for vision or reading.
Cite as: Rohde, M., Baumann, T. (2016) Navigating the Spoken Wikipedia. Proc. 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT 2016), 9-13, doi: 10.21437/SLPAT.2016-2
@inproceedings{rohde16_slpat,
  author={Marcel Rohde and Timo Baumann},
  title={{Navigating the Spoken Wikipedia}},
  year=2016,
  booktitle={Proc. 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT 2016)},
  pages={9--13},
  doi={10.21437/SLPAT.2016-2}
}