This paper discusses recent results obtained with a computational model of language acquisition. This model, developed in the ACORNS project, has been shown to learn word-like units from stimuli in which utterances are paired with visual information. In this paper we extend the ACORNS experiments to ambiguous stimuli, so as to obtain a computational correlate of the findings of Smith and Yu in 2008. Smith and Yu stipulate that a young infant is confronted with an uncertainty problem: how to pair a word, embedded in a sentence, with a referent, embedded in a rich visual scene. They show that young infants can resolve this uncertainty by evaluating the statistical evidence across many individually ambiguous words and scenes. We investigate to what extent the ACORNS model is able to deal with cross-modal ambiguity. Moreover, we show the positive effect of an 'active' role during learning when confronted with ambiguity, based on internal confidence.
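The cross-situational mechanism described above can be illustrated with a minimal sketch: each trial presents several words together with several referents, so no single trial disambiguates the mapping, but accumulating co-occurrence counts across trials does. The lexicon and trial design below are hypothetical toy data in the spirit of the Smith and Yu paradigm, not the ACORNS model or its stimuli.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical toy lexicon: pseudo-words and their intended referents.
# Illustrative only; not the ACORNS vocabulary.
lexicon = {"bosa": "A", "gasser": "B", "manu": "C", "colat": "D"}

# Each trial pairs two words with two referents; within a single trial
# the word-referent mapping is ambiguous (as in Smith & Yu, 2008).
trials = [((w1, w2), (lexicon[w1], lexicon[w2]))
          for w1, w2 in combinations(lexicon, 2)]

# Accumulate word-referent co-occurrence counts across all trials.
counts = defaultdict(lambda: defaultdict(int))
for words, referents in trials:
    for w in words:
        for r in referents:
            counts[w][r] += 1

# For each word, the referent it co-occurred with most often wins:
# the correct pairing accumulates more evidence than any spurious one.
learned = {w: max(counts[w], key=counts[w].get) for w in lexicon}
```

With this design every correct word-referent pair co-occurs in three trials while each spurious pair co-occurs only once, so `learned` recovers the full lexicon even though no individual trial was unambiguous.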
Bibliographic reference. Bosch, Louis ten / Boves, Lou (2010): "Language acquisition and cross-modal associations: computational simulation of the result of infant studies", In INTERSPEECH-2010, 2926-2929.