Different kinds of Multilayer Perceptrons, trained with a back-propagation learning algorithm, have been used to perform data compression tasks. Depending on the architecture and on the type of problem the network learns to solve (classification or auto-association), the networks provide different kinds of dimensionality reduction, preserving different properties of the data space. Some experiments show that exploiting the non-linearities of the MLP units may improve on the performance of classical linear dimensionality reduction. All the experiments reported here were carried out on speech data.
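The auto-associative setting mentioned above can be illustrated with a minimal sketch (not the paper's exact architecture or data): a single-hidden-layer MLP trained by back-propagation to reproduce its input, where a narrow non-linear hidden layer forces a compressed code, i.e. a form of non-linear dimensionality reduction. All dimensions, learning rate, and the synthetic data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 8, 2  # input dimension, bottleneck (code) dimension -- illustrative values
# Synthetic data lying near a 2-D subspace, so compression to k=2 is feasible.
X = rng.standard_normal((200, k)) @ rng.standard_normal((k, d))
X += 0.05 * rng.standard_normal(X.shape)

# Encoder (W1, b1) with tanh units, linear decoder (W2, b2).
W1 = 0.1 * rng.standard_normal((d, k)); b1 = np.zeros(k)
W2 = 0.1 * rng.standard_normal((k, d)); b2 = np.zeros(d)
lr = 0.01

def forward(X):
    H = np.tanh(X @ W1 + b1)  # non-linear hidden (code) layer
    Y = H @ W2 + b2           # reconstruction of the input
    return H, Y

def mse(X):
    _, Y = forward(X)
    return float(np.mean((Y - X) ** 2))

err_before = mse(X)
for _ in range(500):
    H, Y = forward(X)
    # Back-propagation of the squared reconstruction error.
    dY = 2.0 * (Y - X) / X.shape[0]
    dW2 = H.T @ dY; db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * (1.0 - H ** 2)  # derivative of tanh
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
err_after = mse(X)
print(err_before, err_after)
```

After training, the hidden activations `H` serve as the compressed k-dimensional representation; with linear hidden units the same setup would recover an essentially PCA-like linear projection.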
Cite as: Blanchet, P. (1989) Multilayer perceptron architectures for data compression tasks. Proc. First European Conference on Speech Communication and Technology (Eurospeech 1989), 1329-1332, doi: 10.21437/Eurospeech.1989-84
@inproceedings{blanchet89_eurospeech,
  author={Pascal Blanchet},
  title={{Multilayer perceptron architectures for data compression tasks}},
  year={1989},
  booktitle={Proc. First European Conference on Speech Communication and Technology (Eurospeech 1989)},
  pages={1329--1332},
  doi={10.21437/Eurospeech.1989-84}
}