This paper presents two new classes of linear prediction schemes. The first is based on creating a sparse residual rather than a minimum-variance one, which allows more efficient quantization; we show that this works well in the presence of voiced speech, where the excitation can be represented by an impulse train, and that it also yields a sparser residual for unvoiced speech. The second class aims at finding sparse prediction coefficients; interesting results emerge when it is applied to the joint estimation of long-term and short-term predictors. The proposed estimators are all solutions to convex optimization problems, which can be solved efficiently and reliably using, e.g., interior-point methods.
Bibliographic reference. Giacobello, Daniele / Christensen, Mads Græsbøll / Dahl, Joachim / Jensen, Søren Holdt / Moonen, Marc (2008): "Sparse linear predictors for speech processing", In INTERSPEECH-2008, 1353-1356.
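The sparse-residual idea of the first class is commonly cast as a convex problem by minimizing the 1-norm of the prediction residual instead of its 2-norm, which can be solved as a linear program. The sketch below illustrates this formulation; the function name `sparse_lp_coeffs`, the use of SciPy's `linprog`, and the AR(2) test signal are our own illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_lp_coeffs(x, K):
    """Order-K linear predictor with a sparse (1-norm) residual.

    Solves min_a ||y - X a||_1 as a linear program, where row n of X
    holds the K past samples x[n-1], ..., x[n-K].  Illustrative sketch
    only; the solver choice is an assumption, not the paper's method.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Column k of X holds x[n-1-k] for n = K, ..., N-1.
    X = np.column_stack([x[K - 1 - k : N - 1 - k] for k in range(K)])
    y = x[K:]
    M = len(y)
    # LP variables: [a_1..a_K, t_1..t_M]; minimize sum(t).
    c = np.concatenate([np.zeros(K), np.ones(M)])
    # |y - X a| <= t  <=>  X a - t <= y  and  -X a - t <= -y
    A_ub = np.block([[X, -np.eye(M)], [-X, -np.eye(M)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * K + [(0, None)] * M
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:K]
```

On a voiced-like signal (an AR model driven by an impulse train), the 1-norm criterion concentrates the residual energy in a few samples, whereas the 2-norm (least-squares) criterion spreads it out; by construction, the LP solution's residual always has a 1-norm no larger than that of the least-squares solution.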