INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Analysis of Laughter Events in Real Science Classes by Using Multiple Environment Sensor Data

Carlos Ishi, Hiroaki Hatano, Norihiro Hagita

ATR IRC, Japan

The extraction of sound events in environments where a large number of people are present is a challenging problem. To tackle this problem, we have been developing a sound environment intelligence system that identifies who is talking, where, and when, by integrating multiple microphone arrays with human tracking technologies. We installed the developed system in the science room of an elementary school and collected data from real science classes over a one-month period. In the present paper, among the sound activities appearing in the science classes, we focused on the analysis of laughter events, since laughter conveys important social functions in communication. Laughter events were extracted using the visual displays of spatio-temporal information provided by the developed system. Subjective evaluation of the laughter events revealed relationships between the laughter type (including production, style, and vowel-quality aspects), its functions in communication, and its appropriateness in the classroom context.
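To give a concrete sense of how "who is talking, where and when" can be derived from such sensor integration, the following is a minimal illustrative sketch, not the authors' implementation: it assumes each microphone-array sound event has already been localized to room coordinates and that a human tracker provides person positions, and it attributes each event to the nearest tracked person. All class names, fields, and the distance threshold are hypothetical.

```python
# Illustrative sketch (assumed data structures, not the paper's actual system):
# attribute a localized sound event to the nearest tracked person.
from dataclasses import dataclass
from math import hypot
from typing import List, Optional

@dataclass
class TrackedPerson:
    person_id: str
    x: float   # position in room coordinates (meters), from human tracking
    y: float

@dataclass
class SoundEvent:
    t_start: float  # seconds
    t_end: float
    x: float        # source position estimated from the microphone arrays
    y: float

def attribute_event(event: SoundEvent,
                    people: List[TrackedPerson],
                    max_dist: float = 0.8) -> Optional[str]:
    """Return the id of the tracked person closest to the localized sound
    source, or None if nobody is within max_dist meters (assumed threshold)."""
    best_id, best_dist = None, max_dist
    for p in people:
        d = hypot(event.x - p.x, event.y - p.y)
        if d < best_dist:
            best_id, best_dist = p.person_id, d
    return best_id

# Example: a laughter-like event localized at (2.1, 3.0) during t = 10.2-11.0 s
people = [TrackedPerson("student_A", 2.0, 3.1), TrackedPerson("student_B", 5.5, 1.0)]
event = SoundEvent(10.2, 11.0, 2.1, 3.0)
print(attribute_event(event, people))  # -> "student_A"
```

Visualizing such attributed events along a timeline and a room map would correspond to the spatio-temporal displays mentioned above, from which laughter segments can then be selected for annotation.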


Bibliographic reference.  Ishi, Carlos / Hatano, Hiroaki / Hagita, Norihiro (2014): "Analysis of laughter events in real science classes by using multiple environment sensor data", In INTERSPEECH-2014, 1043-1047.