Deep Neural Factorization for Speech Recognition

Jen-Tzung Chien, Chen Shen


Conventional speech recognition systems are constructed by unfolding the spectral-temporal input matrices into one-way vectors and using these vectors to estimate the affine parameters of a neural network with the vector-based error backpropagation algorithm. System performance is constrained because the contextual correlations along the frequency and time axes are disregarded and the spectral and temporal factors are excluded. This paper proposes a spectral-temporal factorized neural network (STFNN) to tackle this weakness. The spectral-temporal structure is preserved and factorized in the hidden layers through two-way factor matrices, which are trained by factorized error backpropagation. The affine transformation in a standard neural network is generalized to a spectro-temporal factorization in the STFNN. Structural features or patterns are extracted and forwarded to the softmax outputs. A deep neural factorization is built by cascading a number of factorization layers with fully-connected layers for speech recognition. An orthogonal constraint is imposed on the factor matrices for redundancy reduction. Experimental results show the merit of integrating the factorized features in deep feedforward and recurrent neural networks for speech recognition.
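The core idea of the abstract, replacing the vectorized affine map Wx + b with a two-way product over a spectro-temporal patch plus an orthogonality penalty on the factor matrices, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the layer shapes, the ReLU nonlinearity, and the exact penalty form are assumptions.

```python
import numpy as np

def stfnn_layer(X, U, V, B):
    """One spectral-temporal factorization layer (illustrative sketch).

    X : (F, T)  spectro-temporal input patch (frequency x time), kept as a matrix
                rather than unfolded into a one-way vector
    U : (F, Fh) spectral factor matrix
    V : (T, Th) temporal factor matrix
    B : (Fh, Th) bias matrix
    Returns an (Fh, Th) hidden feature matrix.
    """
    Z = U.T @ X @ V + B          # two-way factorization generalizing Wx + b
    return np.maximum(Z, 0.0)    # ReLU nonlinearity (an assumed choice here)

def orthogonality_penalty(M):
    """Soft orthogonal constraint ||M^T M - I||_F^2 for redundancy reduction."""
    G = M.T @ M
    return np.sum((G - np.eye(G.shape[0])) ** 2)
```

In a deep neural factorization, several such layers would be cascaded (each output matrix feeding the next layer's input) before flattening into fully-connected layers and the softmax outputs, with the penalty added to the training loss for each factor matrix.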


 DOI: 10.21437/Interspeech.2017-892

Cite as: Chien, J., Shen, C. (2017) Deep Neural Factorization for Speech Recognition. Proc. Interspeech 2017, 3682-3686, DOI: 10.21437/Interspeech.2017-892.


@inproceedings{Chien2017,
  author={Jen-Tzung Chien and Chen Shen},
  title={Deep Neural Factorization for Speech Recognition},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3682--3686},
  doi={10.21437/Interspeech.2017-892},
  url={http://dx.doi.org/10.21437/Interspeech.2017-892}
}