Real to H-Space Encoder for Speech Recognition

Titouan Parcollet, Mohamed Morchid, Georges Linarès, Renato De Mori


Deep neural networks (DNNs), and more precisely recurrent neural networks (RNNs), are at the core of modern automatic speech recognition systems, owing to their efficiency in processing input sequences. Recently, it has been shown that different input representations based on multidimensional algebras, such as complex and quaternion numbers, can provide neural networks with a more natural, compact and powerful representation of the input signal, outperforming common real-valued NNs. Indeed, quaternion-valued neural networks (QNNs) better learn both internal dependencies, such as the relation between the Mel-filter-bank value of a specific time frame and its time derivatives, and global dependencies, describing the relations that exist between time frames. Nonetheless, QNNs are limited to quaternion-valued input signals, and it is difficult to benefit from this powerful representation with real-valued input data. This paper tackles this weakness by introducing a real-to-quaternion encoder that allows QNNs to process any one-dimensional input features, such as traditional Mel-filter-banks for automatic speech recognition.
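The core idea of a real-to-quaternion encoder can be sketched as a learned projection whose output is interpreted as the four components of a quaternion. The sketch below is a minimal illustration under that assumption, using NumPy; the layer name, shapes, and the `tanh` activation are illustrative choices, not the authors' exact architecture.

```python
import numpy as np

def r2h_encoder(x, W, b):
    """Hypothetical real-to-H-space encoder sketch: a dense layer whose
    4*H real outputs are split into the four quaternion components
    (r, i, j, k). This is an illustrative reading of the abstract,
    not the paper's exact implementation."""
    h = np.tanh(x @ W + b)                 # (batch, 4*H) real activations
    r, i, j, k = np.split(h, 4, axis=-1)   # interpret as quaternion parts
    return r, i, j, k

# Toy usage: map 40-dim Mel-filter-bank frames to H = 8 quaternions,
# which a downstream quaternion-valued layer could then consume.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 40))           # batch of 2 feature frames
W = rng.standard_normal((40, 4 * 8)) * 0.1
b = np.zeros(4 * 8)
r, i, j, k = r2h_encoder(x, W, b)
print(r.shape)  # each quaternion component has shape (2, 8)
```

Once the input is lifted into H-space this way, the rest of the network can use quaternion-valued layers end to end, which is the benefit the abstract describes for real-valued features such as Mel-filter-banks.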


DOI: 10.21437/Interspeech.2019-1539

Cite as: Parcollet, T., Morchid, M., Linarès, G., Mori, R.D. (2019) Real to H-Space Encoder for Speech Recognition. Proc. Interspeech 2019, 4415-4419, DOI: 10.21437/Interspeech.2019-1539.


@inproceedings{Parcollet2019,
  author={Titouan Parcollet and Mohamed Morchid and Georges Linarès and Renato De Mori},
  title={{Real to H-Space Encoder for Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4415--4419},
  doi={10.21437/Interspeech.2019-1539},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1539}
}