INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Speech Synthesis by Modeling Harmonics Structure with Multiple Function

Toru Nakashika (1), Ryuki Tachibana (2), Masafumi Nishimura (2), Tetsuya Takiguchi (1), Yasuo Ariki (1)

(1) Kobe University, Japan
(2) IBM Research, Japan

In this paper, we present a new approach to speech synthesis, in which speech utterances are synthesized using the parameters of a spectrum-modeling function (the Multiple function). With this approach, only the harmonic parts are extracted from the phoneme spectrum, and the time-varying spectrum corresponding to the harmonic or sinusoidal components is modeled using the Multiple function. We introduce two types of the function and present a method to estimate the parameters of each from the observed phoneme spectrum. In the synthesis stage, speech signals are generated from the parameters of the Multiple function. The advantage of this method is that it requires only a few speech synthesis parameters. We discuss the effectiveness of our proposed method through experimental results.
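The Multiple function itself is defined in the full paper; as background, the harmonic/sinusoidal model the abstract builds on reconstructs a signal as a sum of sinusoids at integer multiples of the fundamental frequency. The sketch below is a minimal, generic illustration of that additive synthesis step (function name, sampling rate, and amplitude values are assumptions for illustration, not the authors' method):

```python
import numpy as np

def synthesize_harmonics(f0, amplitudes, duration, sr=16000):
    """Additive synthesis: sum of sinusoids at integer multiples of f0.

    f0         -- fundamental frequency in Hz
    amplitudes -- amplitude of each harmonic (index k = 1, 2, ...)
    duration   -- length of the output in seconds
    sr         -- sampling rate in Hz (assumed value)
    """
    t = np.arange(int(duration * sr)) / sr
    signal = np.zeros_like(t)
    for k, a_k in enumerate(amplitudes, start=1):
        f_k = k * f0
        if f_k >= sr / 2:  # stay below the Nyquist frequency to avoid aliasing
            break
        signal += a_k * np.sin(2 * np.pi * f_k * t)
    return signal

# Example: a 200 Hz voiced-like tone with 1/k harmonic amplitude decay
amps = [1.0 / k for k in range(1, 11)]
x = synthesize_harmonics(200.0, amps, duration=0.05)
```

In the paper's setting, the per-harmonic amplitude trajectories over time are what the Multiple function parameterizes compactly, rather than being stored frame by frame.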


Bibliographic reference. Nakashika, Toru / Tachibana, Ryuki / Nishimura, Masafumi / Takiguchi, Tetsuya / Ariki, Yasuo (2010): "Speech synthesis by modeling harmonics structure with multiple function", in INTERSPEECH-2010, 945-948.