Cross-Modal Analysis Between Phonation Differences and Texture Images Based on Sentiment Correlations

Win Thuzar Kyaw, Yoshinori Sagisaka


Motivated by the success of representing speech characteristics through color attributes, we analyzed the cross-modal sentiment correlations between voice source characteristics and texture image characteristics. For the analysis, we employed vowel sounds with three representative phonation types (modal, creaky, and breathy) and 36 texture images, each annotated with one of 36 semantic attributes (e.g., banded, cracked, and scaly). By asking 40 subjects to select the best-fitting textures from the 36 images after listening to 30 speech samples with different phonations, we measured the correlations between acoustic parameters reflecting voice source variations and the texture parameters of the selected images, namely coarseness, contrast, directionality, busyness, complexity, and strength. From the texture classifications, voice characteristics can be roughly characterized by texture differences: modal voices by gauzy, banded, and smeared textures; creaky voices by porous, crystalline, cracked, and scaly textures; and breathy voices by smeared, freckled, and stained textures. We also found significant correlations between voice source acoustic parameters and texture parameters. These correlations suggest the possibility of a cross-modal mapping between voice source characteristics and texture parameters, which would enable visualization of speech information with source variations reflecting human sentiment perception.
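The core analysis described above is a correlation measurement between two sets of parameters: voice-source acoustic measures and texture features of the images subjects selected. A minimal sketch of such an analysis is shown below; the specific parameter names and all data values are hypothetical illustrations, not the paper's actual measurements.

```python
# Hypothetical sketch of a cross-modal correlation analysis:
# Pearson correlation between one voice-source acoustic parameter
# (e.g., spectral tilt H1-H2, a common breathiness measure) and one
# texture feature (e.g., coarseness) averaged over selected images.
# All numbers below are made up for demonstration only.

from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One row per speech sample: acoustic measure vs. mean texture feature
# of the images the subjects chose for that sample (fabricated values).
h1_h2 = [2.1, 5.3, 7.8, 1.0, 6.2]          # dB, hypothetical
coarseness = [0.30, 0.55, 0.70, 0.25, 0.60]  # normalized, hypothetical

r = pearson(h1_h2, coarseness)
print(f"r = {r:.3f}")
```

In a study like this one, such a coefficient would be computed for every pair of acoustic and texture parameters, with significance testing to identify the correlations reported.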


DOI: 10.21437/Interspeech.2017-1236

Cite as: Kyaw, W.T., Sagisaka, Y. (2017) Cross-Modal Analysis Between Phonation Differences and Texture Images Based on Sentiment Correlations. Proc. Interspeech 2017, 679-683, DOI: 10.21437/Interspeech.2017-1236.


@inproceedings{Kyaw2017,
  author={Win Thuzar Kyaw and Yoshinori Sagisaka},
  title={Cross-Modal Analysis Between Phonation Differences and Texture Images Based on Sentiment Correlations},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={679--683},
  doi={10.21437/Interspeech.2017-1236},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1236}
}