Sixth European Conference on Speech Communication and Technology
(EUROSPEECH'99)

Budapest, Hungary
September 5-9, 1999

Real-time Speech Modeling Using Computationally Efficient Locally Recurrent Neural Networks (CERNs)

John J. Soraghan (1), Amir Hussain (2), Ivy Shim (1)

(1) Signal Processing Division, University of Strathclyde, Glasgow, UK
(2) Dept. of Electronic Engineering and Physics, University of Paisley, Paisley, UK

A general class of Computationally Efficient locally Recurrent Networks (CERN) is described for real-time adaptive signal processing. The structure of the CERN is based on linear-in-the-parameters, single-hidden-layer feedforward neural networks, such as the Radial Basis Function (RBF) network, the Volterra Neural Network (VNN) and the recently developed Functionally Expanded Neural Network (FENN), adapted to employ local output feedback. The corresponding learning algorithms are described, and key structural and computational complexity comparisons are made between the CERN and conventional Recurrent Neural Networks. A real speech signal is used to demonstrate that a recurrent-FENN-based adaptive CERN predictor can significantly outperform both the corresponding feedforward FENN and conventionally employed linear adaptive filtering models.
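The key idea in the abstract, a linear-in-the-parameters functional expansion of the input augmented with local output feedback, can be sketched as a simple one-step-ahead predictor. The basis functions, the feedback depth of one, and the normalized-LMS-style weight update below are illustrative assumptions, not the paper's actual FENN expansion or learning algorithm:

```python
import numpy as np

def functional_expansion(x, y_prev):
    # Hypothetical basis: bias, raw input tap, two trigonometric
    # expansion terms, and one locally fed-back past output.
    return np.array([1.0, x, np.sin(np.pi * x), np.cos(np.pi * x), y_prev])

def cern_predict(signal, mu=0.5):
    """One-step-ahead prediction with a locally recurrent,
    linear-in-the-parameters model (illustrative sketch only)."""
    w = np.zeros(5)               # weights of the linear-in-parameters model
    y_prev = 0.0                  # local output feedback state
    preds = np.zeros_like(signal, dtype=float)
    for n in range(1, len(signal)):
        phi = functional_expansion(signal[n - 1], y_prev)
        y = w @ phi               # prediction of signal[n]
        e = signal[n] - y         # prediction error
        # Normalized-LMS-style update; for simplicity this ignores the
        # recursive gradient contribution of the feedback term.
        w += mu * e * phi / (phi @ phi + 1e-8)
        preds[n] = y
        y_prev = y
    return preds, w
```

Because the model is linear in its weights, each update costs only O(number of basis functions), which is the computational advantage the CERN structure trades on relative to fully recurrent networks.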



Bibliographic reference.  Soraghan, John J. / Hussain, Amir / Shim, Ivy (1999): "Real-time speech modeling using computationally efficient locally recurrent neural networks (CERNs)", In EUROSPEECH'99, 355-358.