Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation

Jean-Benoit Delbrouck, Stéphane Dupont, Omar Seddati


In Multimodal Neural Machine Translation (MNMT), a neural model generates a translated sentence describing an image, given the image itself and one source description in English. This is considered the multimodal image caption translation task. The images are processed with a Convolutional Neural Network (CNN) to extract visual features exploitable by the translation model. So far, the CNNs used have been pre-trained on object detection and localization tasks. We hypothesize that richer architectures, such as dense captioning models, may be more suitable for MNMT and could lead to improved translations. We extend this intuition to the word embeddings, where we compute both linguistic and visual representations for our corpus vocabulary. We combine and compare different configurations and show state-of-the-art results compared to previous work.
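One simple way to realize the visually grounded word embeddings the abstract describes is to pair each vocabulary item's linguistic vector with a visual feature vector and fuse the two. The sketch below uses concatenation of L2-normalized vectors; the vocabulary, dimensions (300-d text, 2048-d CNN features), and fusion-by-concatenation choice are illustrative assumptions, not necessarily the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a tiny vocabulary with 300-d linguistic embeddings
# and 2048-d visual features (e.g. pooled activations from a pre-trained
# CNN). Random vectors stand in for real learned representations.
vocab = ["cat", "dog", "ball"]
d_text, d_vis = 300, 2048
linguistic = {w: rng.standard_normal(d_text) for w in vocab}
visual = {w: rng.standard_normal(d_vis) for w in vocab}

def grounded_embedding(word):
    """Concatenate L2-normalized linguistic and visual vectors:
    one straightforward fusion scheme for visual grounding."""
    t = linguistic[word]
    v = visual[word]
    t = t / np.linalg.norm(t)
    v = v / np.linalg.norm(v)
    return np.concatenate([t, v])

e = grounded_embedding("cat")
print(e.shape)  # (2348,)
```

Normalizing each modality before concatenation keeps the higher-dimensional visual vector from dominating the fused representation; other fusion choices (summation after projection, gating) are equally plausible.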


 DOI: 10.21437/GLU.2017-13

Cite as: Delbrouck, J., Dupont, S., Seddati, O. (2017) Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation. Proc. GLU 2017 International Workshop on Grounding Language Understanding, 62-67, DOI: 10.21437/GLU.2017-13.


@inproceedings{Delbrouck2017,
  author={Jean-Benoit Delbrouck and Stéphane Dupont and Omar Seddati},
  title={Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation},
  year=2017,
  booktitle={Proc. GLU 2017 International Workshop on Grounding Language Understanding},
  pages={62--67},
  doi={10.21437/GLU.2017-13},
  url={http://dx.doi.org/10.21437/GLU.2017-13}
}