In this demonstration, we present an auditory-visual pronunciation training system that we have developed. A key feature of the system is its easy-to-understand visualization of the speech organs, which can be viewed from different angles. In addition, a transparent mode makes the internal organs visible in motion. Furthermore, unlike most systems, which can present only the ideal model movements of the speech organs, ours allows users to freely adjust the tongue and jaw movements with controllers. This lets instructors, for example, visually point out a learner's deviant movement(s), so that learners themselves can understand their current state (i.e., their problems) with the help of visual information and feedback.
Bibliographic reference: Miyakoda, Haruko (2013): "Development of a pronunciation training system based on auditory-visual elements", in INTERSPEECH-2013, 2660–2661.