Interactive Spoken Content Retrieval by Deep Reinforcement Learning

Yen-Chen Wu, Tzu-Hsiang Lin, Yang-De Chen, Hung-Yi Lee, Lin-Shan Lee

User-machine interaction is important for spoken content retrieval. In text content retrieval, the user can easily scan through a list of retrieved items and select from it. This is impractical for spoken content retrieval, because the retrieved items are difficult to display on screen. Moreover, due to the high degree of uncertainty in speech recognition, the retrieval results can be very noisy. One way to counter these difficulties is through user-machine interaction: the machine can take different actions to interact with the user and obtain better retrieval results before presenting them. The suitable action depends on the retrieval status, for example requesting extra information from the user or returning a list of topics for the user to select from. In our previous work, hand-crafted states estimated from the present retrieval results were used to determine the proper actions. In this paper, we propose instead to use Deep-Q-Learning techniques to determine the machine actions for interactive spoken content retrieval. Deep-Q-Learning bypasses the need to estimate hand-crafted states and directly determines the best action based on the present retrieval status, without any human knowledge. It is shown to achieve significantly better performance than the previous hand-crafted states.
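The core idea of the abstract can be illustrated with a minimal sketch: a Q-function maps the current retrieval status to an estimated value for each possible machine action, and is trained from interaction rewards. The action names, state features, and reward dynamics below are illustrative assumptions, not the paper's actual setup, and a simple linear Q-function stands in for the deep Q-network.

```python
import numpy as np

# Hypothetical action inventory for interactive retrieval (illustrative,
# not the paper's exact action set).
ACTIONS = ["show_results", "request_more_info", "return_topic_list"]

STATE_DIM = 4               # hypothetical features summarizing retrieval status
N_ACTIONS = len(ACTIONS)
rng = np.random.default_rng(0)

W = np.zeros((N_ACTIONS, STATE_DIM))  # linear Q-function weights

def q_values(state):
    """Estimated return of each machine action in this retrieval state."""
    return W @ state

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy policy used once the Q-function is trained."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def q_update(state, action, reward, next_state, done, lr=0.05, gamma=0.9):
    """One-step Q-learning (temporal-difference) update."""
    target = reward if done else reward + gamma * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    W[action] += lr * td_error * state

# Synthetic interaction: asking for more information pays off when the
# first feature signals an "uncertain" retrieval state (toy dynamics).
for _ in range(5000):
    state = rng.random(STATE_DIM)
    action = int(rng.integers(N_ACTIONS))   # random exploration (off-policy)
    uncertain = state[0] > 0.5
    if uncertain and ACTIONS[action] == "request_more_info":
        reward = 1.0
    elif not uncertain and ACTIONS[action] == "show_results":
        reward = 1.0
    else:
        reward = -0.1
    q_update(state, action, reward, state, done=True)  # single-turn episodes
```

After training on this toy environment, the greedy policy asks for more information in uncertain states and shows results otherwise, mirroring how the paper's agent selects an interaction action directly from the retrieval status rather than from manually defined states.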

DOI: 10.21437/Interspeech.2016-1237

Cite as

Wu, Y., Lin, T., Chen, Y., Lee, H., Lee, L. (2016) Interactive Spoken Content Retrieval by Deep Reinforcement Learning. Proc. Interspeech 2016, 943-947.

@inproceedings{wu2016interactive,
  author={Yen-Chen Wu and Tzu-Hsiang Lin and Yang-De Chen and Hung-Yi Lee and Lin-Shan Lee},
  title={Interactive Spoken Content Retrieval by Deep Reinforcement Learning},
  booktitle={Interspeech 2016},
  year={2016},
  pages={943--947},
  doi={10.21437/Interspeech.2016-1237}
}