INTERSPEECH 2004 - ICSLP
Learning dialogue strategies for spoken dialogue systems with reinforcement learning is a promising approach to acquiring robust dialogue strategies. However, the trade-off between the amount of available dialogue data and the information encoded in the dialogue state either forces information to be excluded from the state representation or requires large amounts of training data. In this paper, we propose dynamic state aggregation to learn dialogue policies efficiently from less data. State aggregation reduces the size of the problem to be solved. Experimental results show that the proposed method converges faster and, when data are sparse, is less sensitive to atypical training examples.
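To make the idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: tabular Q-learning for a toy two-slot dialogue task in which a hand-written aggregation function maps detailed states (per-slot confidence scores) onto a small set of coarse states, shrinking the Q-table that must be learned. The simulator, slot structure, and bucket thresholds are all assumptions made for illustration.

```python
import random

ACTIONS = ["ask_slot", "confirm", "close"]

def aggregate(state):
    """Map a detailed state (per-slot confidences in [0,1]) to a coarse
    aggregate state: each slot becomes 'empty', 'low', or 'high'.
    This collapses a continuous state space to at most 3^n_slots states."""
    def bucket(c):
        if c == 0.0:
            return "empty"
        return "low" if c < 0.7 else "high"
    return tuple(bucket(c) for c in state)

def step(state, action, rng):
    """Toy dialogue simulator (an assumption, not from the paper):
    asking fills an empty slot with random confidence, confirming raises
    confidences, closing ends the dialogue with a success/failure reward."""
    state = list(state)
    if action == "ask_slot":
        for i, c in enumerate(state):
            if c == 0.0:
                state[i] = rng.uniform(0.3, 1.0)
                break
        return tuple(state), -1.0, False
    if action == "confirm":
        state = [min(1.0, c + 0.3) if c > 0.0 else c for c in state]
        return tuple(state), -1.0, False
    success = all(c >= 0.7 for c in state)
    return tuple(state), (20.0 if success else -10.0), True

def train(episodes=2000, alpha=0.2, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # keyed by (aggregate_state, action); far smaller than raw states
    for _ in range(episodes):
        state = (0.0, 0.0)  # two unfilled slots
        done, steps = False, 0
        while not done and steps < 20:
            s = aggregate(state)
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
            nxt, r, done = step(state, a, rng)
            s2 = aggregate(nxt)
            best_next = 0.0 if done else max(Q.get((s2, x), 0.0) for x in ACTIONS)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((s, a), 0.0))
            state = nxt
            steps += 1
    return Q

Q = train()
# With 2 slots and 3 buckets, the table holds at most 3**2 * 3 = 27 entries,
# regardless of how many distinct confidence vectors the simulator produced.
```

Because updates for many raw states share one table entry, each training dialogue informs a larger fraction of the policy, which is the intuition behind faster convergence with less data.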
Bibliographic reference. Denecke, Matthias / Dohsaka, Kohji / Nakano, Mikio (2004): "Learning dialogue policies using state aggregation in reinforcement learning", In INTERSPEECH-2004, 325-328.