A Real-Time Wideband Neural Vocoder at 1.6kb/s Using LPCNet

Jean-Marc Valin, Jan Skoglund


Neural speech synthesis algorithms are a promising new approach for coding speech at very low bitrates. They have so far demonstrated quality that far exceeds that of traditional vocoders, but at the cost of very high complexity. In this work, we present a low-bitrate neural vocoder based on the LPCNet model. The use of linear prediction and sparse recurrent networks makes it possible to achieve real-time operation on general-purpose hardware. We demonstrate that LPCNet operating at 1.6 kb/s achieves significantly higher quality than MELP and that uncompressed LPCNet can exceed the quality of a waveform codec operating at a low bitrate. This opens the way for new codec designs based on neural synthesis models.
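To illustrate the linear-prediction idea the abstract leans on: an LPC predictor estimates each sample as a weighted sum of the previous samples, so the neural network only has to model the (much simpler) residual excitation rather than the full waveform. The sketch below is illustrative only; the coefficients and sample values are hypothetical, not taken from the LPCNet codec.

```python
import numpy as np

def lpc_predict(history, lpc):
    """Predict the next sample as a weighted sum of past samples.

    history: past samples, newest last
    lpc: LPC coefficients a_1..a_M, where a_1 weights the newest sample
    """
    order = len(lpc)
    past = np.asarray(history[-order:])[::-1]  # newest first
    return float(np.dot(lpc, past))

# Toy 2nd-order predictor (coefficients are made up for illustration).
lpc = np.array([1.5, -0.7])
history = [0.0, 0.1, 0.25, 0.4]

pred = lpc_predict(history, lpc)   # 1.5*0.4 + (-0.7)*0.25 = 0.425
residual = 0.45 - pred             # the network models this excitation
```

Because the predictor captures the spectral envelope cheaply, the recurrent network can be kept small (and sparse), which is what makes real-time synthesis on general-purpose CPUs feasible.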


DOI: 10.21437/Interspeech.2019-1255

Cite as: Valin, J., Skoglund, J. (2019) A Real-Time Wideband Neural Vocoder at 1.6kb/s Using LPCNet. Proc. Interspeech 2019, 3406-3410, DOI: 10.21437/Interspeech.2019-1255.


@inproceedings{Valin2019,
  author={Jean-Marc Valin and Jan Skoglund},
  title={{A Real-Time Wideband Neural Vocoder at 1.6kb/s Using LPCNet}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3406--3410},
  doi={10.21437/Interspeech.2019-1255},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1255}
}