Connectionist temporal classification (CTC) is a powerful approach for sequence-to-sequence learning and has been widely used in speech recognition. A central idea of CTC is the introduction of an additional “blank” label during training. With this mechanism, CTC eliminates the need for segment alignment, and hence has been applied to various sequence-to-sequence learning problems. In this work, we applied CTC to abstractive summarization for spoken content. The “blank” in this case implies that the corresponding input data is less important or noisy and can therefore be ignored. This approach was shown to outperform existing methods in terms of ROUGE scores on the Chinese Gigaword and MATBN corpora. It also has the nice property that the ordering of words or characters in the input documents can be better preserved in the generated summaries.
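To make the role of the “blank” label concrete, here is a minimal sketch (not from the paper) of CTC's standard many-to-one collapsing function: repeated labels are first merged, then blanks are dropped. In the summarization setting described above, a blank emitted at some position simply means the corresponding input token contributes nothing to the output summary. The `BLANK` symbol and function name are illustrative choices.

```python
BLANK = "-"  # hypothetical symbol standing in for CTC's blank label

def ctc_collapse(path):
    """Apply CTC's many-to-one mapping to an output path:
    merge consecutive repeated labels, then remove all blanks."""
    out = []
    prev = None
    for sym in path:
        # keep a symbol only if it is not a blank and not a repeat
        if sym != BLANK and sym != prev:
            out.append(sym)
        prev = sym
    return out

# e.g. the path "aa-abb-" collapses to the label sequence ['a', 'a', 'b']
print(ctc_collapse(list("aa-abb-")))
```

Note that a blank between two identical labels keeps them distinct (the two `a`s above survive), which is exactly why CTC can emit repeated output symbols without explicit segment alignment.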
Cite as: Lu, B.-R., Shyu, F., Chen, Y.-N., Lee, H.-Y., Lee, L.-S. (2017) Order-Preserving Abstractive Summarization for Spoken Content Based on Connectionist Temporal Classification. Proc. Interspeech 2017, 2899-2903, doi: 10.21437/Interspeech.2017-862
@inproceedings{lu17b_interspeech,
  author={Bo-Ru Lu and Frank Shyu and Yun-Nung Chen and Hung-Yi Lee and Lin-Shan Lee},
  title={{Order-Preserving Abstractive Summarization for Spoken Content Based on Connectionist Temporal Classification}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={2899--2903},
  doi={10.21437/Interspeech.2017-862}
}