A Deep Learning Approach to Automatic Characterisation of Rhythm in Non-Native English Speech

Konstantinos Kyriakopoulos, Kate M. Knill, Mark J.F. Gales


A speaker’s rhythm contributes to the intelligibility of their speech and can be characteristic of their language and accent. For non-native learners of a language, the extent to which they match its natural rhythm is an important predictor of their proficiency. As a learner improves, their rhythm is expected to become less similar to that of their first language (L1) and more similar to that of the language being learned (L2). Metrics based on the variability of the durations of vocalic and consonantal intervals have been shown to be effective at detecting language and accent. In this paper, pairwise variability (PVI, CCI) and variance (varcoV, varcoC) metrics are first used to predict the proficiency and L1 of non-native speakers taking an English spoken exam. A deep learning alternative that generalises these features is then presented, in the form of a tunable duration embedding based on attention over an RNN applied to the duration sequence. The RNN allows relationships beyond pairwise ones to be captured, while attention allows sensitivity to the differing relative importance of durations. The system is trained end-to-end for proficiency and L1 prediction and compared to the baseline. The values of both sets of features at different proficiency levels are then visualised and compared to native speech in the L1 and the L2.
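The baseline rhythm metrics named in the abstract have standard definitions in the rhythm-metrics literature; as a hedged illustration (the exact normalisation variants used in the paper are not specified here), the normalised PVI and the varco coefficient over a sequence of interval durations can be computed as:

```python
import numpy as np

def npvi(durations):
    """Normalised pairwise variability index: mean absolute difference of
    successive interval durations, normalised by their local mean,
    expressed as a percentage."""
    d = np.asarray(durations, dtype=float)
    pairwise = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pairwise.mean()

def varco(durations):
    """Variation coefficient (varcoV over vocalic intervals, varcoC over
    consonantal ones): standard deviation of the interval durations as a
    percentage of their mean."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * d.std() / d.mean()

# Perfectly isochronous intervals score zero on both metrics.
print(npvi([0.1, 0.1, 0.1]))  # 0.0
print(varco([0.1, 0.1, 0.1]))  # 0.0
```

In practice the vocalic and consonantal interval durations would come from a forced alignment of the learner's speech; both metrics are scale-normalised, so they are insensitive to overall speaking rate.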
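The proposed duration embedding is described only at a high level (attention over an RNN over durations); a minimal numpy sketch of that architecture, with random weights standing in for parameters that would be trained end-to-end and a hypothetical hidden size of 8, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def duration_embedding(durations, hidden=8):
    """Sketch of a tunable duration embedding: an Elman-style RNN is run
    over the duration sequence, and its hidden states are pooled with a
    softmax attention weighting into a fixed-size vector. The recurrence
    lets the embedding capture relationships beyond pairwise; the
    attention weights let it emphasise more important intervals.
    All weights here are random stand-ins for trained parameters."""
    w_x = rng.standard_normal(hidden) * 0.1            # input-to-hidden
    W_h = rng.standard_normal((hidden, hidden)) * 0.1  # hidden-to-hidden
    w_a = rng.standard_normal(hidden)                  # attention query

    h = np.zeros(hidden)
    states = []
    for d in durations:                  # recurrence over the intervals
        h = np.tanh(w_x * d + W_h @ h)
        states.append(h)
    H = np.stack(states)                 # (T, hidden)

    scores = H @ w_a                     # one attention logit per step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax attention weights
    return alpha @ H                     # weighted sum -> fixed-size embedding

emb = duration_embedding([0.12, 0.08, 0.21, 0.05])
print(emb.shape)  # (8,)
```

Because the pooled vector has a fixed size regardless of utterance length, it can feed a standard classifier head for proficiency or L1 prediction, with gradients flowing back through the attention and recurrence.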


 DOI: 10.21437/Interspeech.2019-3186

Cite as: Kyriakopoulos, K., Knill, K.M., Gales, M.J.F. (2019) A Deep Learning Approach to Automatic Characterisation of Rhythm in Non-Native English Speech. Proc. Interspeech 2019, 1836-1840, DOI: 10.21437/Interspeech.2019-3186.


@inproceedings{Kyriakopoulos2019,
  author={Konstantinos Kyriakopoulos and Kate M. Knill and Mark J.F. Gales},
  title={{A Deep Learning Approach to Automatic Characterisation of Rhythm in Non-Native English Speech}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={1836--1840},
  doi={10.21437/Interspeech.2019-3186},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3186}
}