GLU 2017 International Workshop on Grounding Language Understanding

25 August 2017, Stockholm, Sweden

Chairs: Giampiero Salvi and Stéphane Dupont

DOI: 10.21437/GLU.2017

Regular papers

Communication with Speech and Gestures: Applications of Recurrent Neural Networks to Robot Language Learning
Alexandre Antunes, Gabriella Pizzuto, Angelo Cangelosi

Towards a Knowledge Graph based Speech Interface
Ashwini Jaya Kumar, Sören Auer, Christoph Schmidt, Joachim Köhler

Relational Symbol Grounding through Affordance Learning: An Overview of the ReGround Project
Laura Antanas, Jesse Davis, Luc De Raedt, Amy Loutfi, Andreas Persson, Alessandro Saffiotti, Deniz Yuret, Ozan Arkan Can, Emre Unal, Pedro Zuidberg Dos Martires

Building a Multimodal Lexicon: Lessons from Infants' Learning of Body Part Words
Rana Abu-Zhaya, Amanda Seidl, Ruth Tincoff, Alejandrina Cristia

Sparse Autoencoder Based Semi-Supervised Learning for Phone Classification with Limited Annotations
Akash Kumar Dhaka, Giampiero Salvi

Partitioning of Posteriorgrams Using Siamese Models for Unsupervised Acoustic Modelling
Arvid Fahlström Myrman, Giampiero Salvi

Comparison of Effect of Speaker's Eye Gaze on Selection of Next Speaker between Native- and Second-Language Conversations
Koki Ijuin, Takato Yamashita, Tsuneo Kato, Seiichi Yamamoto

Language is Not About Language: Towards Formalizing the Role of Extra-Linguistic Factors in Human and Machine Language Acquisition and Communication
Okko Räsänen

SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO Data Set
William Havard, Laurent Besacier, Olivier Rosec

Vision-based Active Speaker Detection in Multiparty Interaction
Kalin Stefanov, Jonas Beskow, Giampiero Salvi

Automatic Speaker's Role Classification with a Bottom-up Acoustic Feature Selection
Vered Silber-Varod, Anat Lerner, Oliver Jokisch

Analysis of Audio-Visual Features for Unsupervised Speech Recognition
Jennifer Drexler, James Glass

Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation
Jean-Benoit Delbrouck, Stéphane Dupont, Omar Seddati

Proposal of a Generative Model of Event-based Representations for Grounded Language Understanding
Simon Brodeur, Luca Celotti, Jean Rouat

Finding Regions of Interest from Multimodal Human-Robot Interactions
Pablo Azagra, Javier Civera, Ana C. Murillo

Enhancing Reference Resolution in Dialogue Using Participant Feedback
Todd Shore, Gabriel Skantze

Interactive Robot Learning of Gestures, Language and Affordances
Giovanni Saponaro, Lorenzo Jamone, Alexandre Bernardino, Giampiero Salvi

Grounding Imperatives to Actions is Not Enough: A Challenge for Grounded NLU for Robots from Human-Human Data
Julian Hough, Sina Zarrieß, David Schlangen