Time-regularized Linear Prediction for Noise-robust Extraction of the Spectral Envelope of Speech

Manu Airaksinen, Lauri Juvela, Okko Räsänen, Paavo Alku


Feature extraction of speech signals is typically performed in short-time frames under the assumption that the signal is stationary within each frame. For extracting the spectral envelope of speech, which conveys the formant frequencies produced by the resonances of the slowly varying vocal tract, typical frame lengths are 20-30 ms. However, this kind of conventional frame-based spectral analysis is oblivious to the broader temporal context of the signal and is prone to degradation by, for example, environmental noise. In this paper, we propose a new frame-based linear prediction (LP) analysis method that includes a regularization term penalizing energy differences between consecutive frames of an all-pole spectral envelope model, thereby integrating the slowly varying nature of the vocal tract into the analysis. Objective evaluations of feature distortion and phonetic representational capability were performed by studying the properties of mel-frequency cepstral coefficient (MFCC) representations computed from different spectral estimation methods under noisy conditions using the TIMIT database. The results show that the proposed time-regularized LP approach exhibits superior MFCC distortion behavior while simultaneously yielding the greatest average separability of phoneme categories among the compared methods.
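The paper's regularizer penalizes energy differences between consecutive frames of the all-pole model; its exact formulation is given in the full text. As a rough illustration only, the sketch below uses a simpler proxy that is an assumption, not the authors' method: a quadratic penalty pulling each frame's LP coefficients toward the previous frame's solution, so the autocorrelation-method normal equations R a = r become (R + lam*I) a = r + lam*a_prev per frame.

```python
import numpy as np

def time_regularized_lp(frames, order, lam):
    """Frame-wise LP with a coefficient-smoothing regularizer (illustrative proxy,
    not the paper's energy-difference penalty).

    For each frame, solve (R + lam*I) a = r + lam * a_prev, where R is the
    Toeplitz autocorrelation matrix and r the autocorrelation vector. lam = 0
    reduces to conventional autocorrelation-method LP per frame.
    """
    coeffs = []
    a_prev = np.zeros(order)
    for frame in frames:
        n = len(frame)
        # Autocorrelation lags 0..order of the windowed frame.
        r_full = np.correlate(frame, frame, mode="full")[n - 1:n + order]
        # Toeplitz matrix R[i, j] = r(|i - j|).
        R = np.array([[r_full[abs(i - j)] for j in range(order)]
                      for i in range(order)])
        # Regularized normal equations: pull solution toward previous frame.
        a = np.linalg.solve(R + lam * np.eye(order),
                            r_full[1:order + 1] + lam * a_prev)
        coeffs.append(a)
        a_prev = a
    return np.array(coeffs)
```

With a large lam, the estimated coefficient trajectories vary less from frame to frame, mimicking the slowly varying vocal tract; choosing lam trades spectral tracking accuracy against temporal smoothness, which is the trade-off the paper's regularization term controls.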


DOI: 10.21437/Interspeech.2018-1230

Cite as: Airaksinen, M., Juvela, L., Räsänen, O., Alku, P. (2018) Time-regularized Linear Prediction for Noise-robust Extraction of the Spectral Envelope of Speech. Proc. Interspeech 2018, 701-705, DOI: 10.21437/Interspeech.2018-1230.


@inproceedings{Airaksinen2018,
  author={Manu Airaksinen and Lauri Juvela and Okko Räsänen and Paavo Alku},
  title={Time-regularized Linear Prediction for Noise-robust Extraction of the Spectral Envelope of Speech},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={701--705},
  doi={10.21437/Interspeech.2018-1230},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1230}
}