ISCA Archive SSW 2021

Text-to-Speech Synthesis Techniques for MIDI-to-Audio Synthesis

Erica Cooper, Xin Wang, Junichi Yamagishi

Speech synthesis and music audio generation from symbolic input differ in many aspects but share some similarities. In this study, we investigate how text-to-speech synthesis techniques can be used for piano MIDI-to-audio synthesis tasks. Our investigation includes Tacotron and neural source-filter waveform models as the basic components, with which we build MIDI-to-audio synthesis systems in ways similar to TTS frameworks. We also include reference systems using conventional sound modeling techniques such as sample-based and physical-modeling-based methods. The subjective experimental results demonstrate that the investigated TTS components can be applied to piano MIDI-to-audio synthesis with minor modifications. The results also reveal the performance bottleneck: while the waveform model can synthesize high-quality piano sound given natural acoustic features, the conversion from MIDI to acoustic features is challenging. The full MIDI-to-audio synthesis system is still inferior to the sample-based and physical-modeling-based approaches, but we encourage TTS researchers to test their TTS models on this new task and improve the performance.
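The abstract describes a two-stage pipeline borrowed from TTS: an acoustic model (Tacotron-like) maps symbolic input to acoustic features, and a waveform model (neural source-filter-like) maps those features to audio. The sketch below illustrates only this stage structure; both stage functions are hypothetical stand-ins, not the paper's actual models.

```python
# Minimal sketch of the two-stage MIDI-to-audio pipeline, mirroring a TTS
# framework (acoustic model + waveform model). Both stages are dummy
# stand-ins for illustration only.

def acoustic_model(midi_events):
    """Stand-in for a Tacotron-like model: MIDI events -> acoustic features
    (e.g., a mel-spectrogram, represented here as a list of frames)."""
    # Emit one dummy 4-dimensional feature frame per MIDI note event.
    return [[float(pitch)] * 4 for pitch, _onset, _duration in midi_events]

def waveform_model(features):
    """Stand-in for a neural source-filter model: acoustic features ->
    waveform samples."""
    # Emit a fixed number of dummy samples per feature frame.
    samples_per_frame = 8
    return [frame[0] for frame in features for _ in range(samples_per_frame)]

def midi_to_audio(midi_events):
    """Full pipeline: MIDI -> acoustic features -> waveform."""
    return waveform_model(acoustic_model(midi_events))

# Example: three notes given as (MIDI pitch, onset sec, duration sec) tuples.
notes = [(60, 0.0, 0.5), (64, 0.5, 0.5), (67, 1.0, 1.0)]
audio = midi_to_audio(notes)
```

The paper's bottleneck finding maps onto this structure: `waveform_model` performs well when given natural acoustic features, while `acoustic_model` (MIDI to features) is the harder stage.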

doi: 10.21437/SSW.2021-23

Cite as: Cooper, E., Wang, X., Yamagishi, J. (2021) Text-to-Speech Synthesis Techniques for MIDI-to-Audio Synthesis. Proc. 11th ISCA Speech Synthesis Workshop (SSW 11), 130-135, doi: 10.21437/SSW.2021-23

  author={Erica Cooper and Xin Wang and Junichi Yamagishi},
  title={{Text-to-Speech Synthesis Techniques for MIDI-to-Audio Synthesis}},
  booktitle={Proc. 11th ISCA Speech Synthesis Workshop (SSW 11)},