ISCA Archive Interspeech 2006

Automatic metadata generation and video editing based on speech and image recognition for medical education contents

Satoshi Tamura, Koji Hashimoto, Jiong Zhu, Satoru Hayamizu, Hirotsugu Asai, Hideki Tanahashi, Makoto Kanagawa

This paper reports a metadata generation system as well as an automatic video editing system. Metadata are descriptive information about other data. In the audio metadata generation system, speech recognition is performed on the input speech using a general language model (LM) and a specialized LM in order to obtain segments (event groups) and audio metadata (event information), respectively. In the video editing system, visual metadata obtained by image recognition is combined with the audio metadata into audio-visual metadata. Multiple videos are then edited into a single video using the audio-visual metadata. Experiments were conducted to evaluate the event detection of the systems using medical education contents (ACLS and BLS). The audio metadata system achieved approximately 78% event detection correctness. In the editing system, 87% event correctness was obtained using the audio-visual metadata, and a survey showed that the edited video is appropriate and useful.


doi: 10.21437/Interspeech.2006-618

Cite as: Tamura, S., Hashimoto, K., Zhu, J., Hayamizu, S., Asai, H., Tanahashi, H., Kanagawa, M. (2006) Automatic metadata generation and video editing based on speech and image recognition for medical education contents. Proc. Interspeech 2006, paper 1132-Thu2WeO.4, doi: 10.21437/Interspeech.2006-618

@inproceedings{tamura06_interspeech,
  author={Satoshi Tamura and Koji Hashimoto and Jiong Zhu and Satoru Hayamizu and Hirotsugu Asai and Hideki Tanahashi and Makoto Kanagawa},
  title={{Automatic metadata generation and video editing based on speech and image recognition for medical education contents}},
  year=2006,
  booktitle={Proc. Interspeech 2006},
  pages={paper 1132-Thu2WeO.4},
  doi={10.21437/Interspeech.2006-618}
}