The paper describes a neural network language model that jointly models language across many related domains. In addition to the traditional layers of a neural network language model, the proposed model trains a vector of factors for each domain in the training data; this vector is used to modulate the connections from the projection layer to the hidden layer. The model is found to outperform standard neural network language models as well as domain-adapted maximum entropy language models in both perplexity evaluation and speech recognition experiments.
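The architecture described above can be sketched as follows. This is a minimal illustration under assumed details, not the paper's exact formulation: a feed-forward NNLM in which a learned per-domain factor vector modulates, element-wise, the projection-layer output before it enters the hidden layer. All sizes, names, and the element-wise form of the modulation are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: vocabulary, projection dim, hidden dim,
# context length, and number of training domains.
V, P, H, n_ctx, n_domains = 1000, 32, 64, 3, 4

E = rng.normal(0, 0.1, (V, P))             # word projection (embedding) table
W_h = rng.normal(0, 0.1, (H, n_ctx * P))   # projection -> hidden weights
b_h = np.zeros(H)
W_o = rng.normal(0, 0.1, (V, H))           # hidden -> output weights
b_o = np.zeros(V)
# Per-domain factor vectors; in the paper these are trained jointly
# with the rest of the network. Initialized to ones here.
F = np.ones((n_domains, n_ctx * P))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(context_ids, domain):
    """Next-word distribution for a context, conditioned on a domain."""
    x = np.concatenate([E[w] for w in context_ids])  # projection layer
    x = x * F[domain]                                # domain-specific modulation
    h = np.tanh(W_h @ x + b_h)                       # hidden layer
    return softmax(W_o @ h + b_o)                    # output distribution

p = predict([3, 17, 42], domain=1)
```

The single shared network is trained on pooled multi-domain data, while the small per-domain vectors capture domain-specific variation cheaply.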
Bibliographic reference. Alumäe, Tanel (2013): "Multi-domain neural network language model". In Proc. INTERSPEECH 2013, pp. 2182-2186.