Improving Transcription of Manuscripts with Multimodality and Interaction

Emilio Granell, Carlos David Martinez Hinarejos, Verónica Romero


State-of-the-art natural language recognition systems allow transcribers to speed up the transcription of audio, video, or image documents. These systems provide transcribers with an initial draft transcription that can be corrected with less effort than transcribing the documents from scratch. However, even the drafts produced by the most advanced systems based on Deep Learning contain errors, so supervision of those drafts by a human transcriber is still necessary to obtain the correct transcription. This supervision can be eased by using interactive and assistive transcription systems, in which the transcriber and the automatic system cooperate in the amending process. Moreover, the interactive system can combine different sources of information, such as text line images and the dictation of their textual contents, in order to improve its performance. In this paper, the performance of a multimodal interactive and assistive transcription system is evaluated on a Spanish historical manuscript. Although the quality of the draft transcriptions provided by a Handwritten Text Recognition system based on Deep Learning is already high, the proposed interactive and assistive approach yields an additional reduction of transcription effort. Moreover, this effort reduction increases when speech dictations are used through an Automatic Speech Recognition system, allowing for a faster transcription process.


DOI: 10.21437/IberSPEECH.2018-20

Cite as: Granell, E., Martinez Hinarejos, C.D., Romero, V. (2018) Improving Transcription of Manuscripts with Multimodality and Interaction. Proc. IberSPEECH 2018, 92-96, DOI: 10.21437/IberSPEECH.2018-20.


@inproceedings{Granell2018,
  author={Emilio Granell and Carlos David {Martinez Hinarejos} and Verónica Romero},
  title={{Improving Transcription of Manuscripts with Multimodality and Interaction}},
  year=2018,
  booktitle={Proc. IberSPEECH 2018},
  pages={92--96},
  doi={10.21437/IberSPEECH.2018-20},
  url={http://dx.doi.org/10.21437/IberSPEECH.2018-20}
}