5th International Conference on Spoken Language Processing
We propose a novel strategy for training neural networks using sequential Monte Carlo algorithms. This global optimisation strategy allows us to learn the probability distribution of the network weights in a sequential framework. It is well suited to applications involving on-line, nonlinear, or non-stationary signal processing. We show that the new algorithms can outperform extended Kalman filter (EKF) training.
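The sequential sampling-importance resampling (SIR) idea behind the abstract can be sketched as a particle filter over the network weights: propagate a population of weight samples through a transition prior, reweight them by the likelihood of each new observation, and resample. The sketch below is a minimal illustration under assumed choices not taken from the paper (a one-hidden-unit network, a Gaussian random-walk transition on the weights, and a Gaussian observation likelihood); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def net(theta, x):
    """One-hidden-unit network y = w2 * tanh(w1 * x + b1) + b2.
    theta: (N, 4) array of weight particles; x: scalar input."""
    w1, b1, w2, b2 = theta[:, 0], theta[:, 1], theta[:, 2], theta[:, 3]
    return w2 * np.tanh(w1 * x + b1) + b2

# Synthetic data stream from a fixed "true" network plus observation noise
# (all constants below are illustrative assumptions, not from the paper).
true_theta = np.array([[1.5, -0.5, 2.0, 0.3]])
obs_std = 0.1
xs = rng.uniform(-2.0, 2.0, size=300)
ys = np.array([net(true_theta, x)[0] for x in xs])
ys += rng.normal(0.0, obs_std, size=xs.size)

N = 2000                                   # number of weight particles
particles = rng.normal(0.0, 2.0, size=(N, 4))  # samples from the weight prior
walk_std = 0.05                            # random-walk transition noise
like_std = 0.3                             # likelihood width used for weighting

for x, y in zip(xs, ys):
    # 1. Propagate: sample from the random-walk transition prior on the weights
    particles = particles + rng.normal(0.0, walk_std, size=particles.shape)
    # 2. Weight: Gaussian importance weights from the prediction error
    pred = net(particles, x)
    logw = -0.5 * ((y - pred) / like_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 3. Resample: draw particles in proportion to their weights
    idx = rng.choice(N, size=N, p=w)
    particles = particles[idx]

# Posterior-mean weight estimate from the final particle cloud
theta_hat = particles.mean(axis=0)
```

Because the tanh network has sign symmetries in its weights, `theta_hat` need not match `true_theta` component-wise; the useful check is that the filtered network's predictions track the true input-output map.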
Bibliographic reference. Freitas, Joao F. G. de / Johnson, Sue E. / Niranjan, Mahesan / Gee, Andrew H. (1998): "Global optimisation of neural network models via sequential sampling-importance resampling", In ICSLP-1998, paper 0213.