ISCA Archive Interspeech 2005

Is color information really useful for lip-reading ? (or what is lost when color is not used)

Philippe Daubias

In this paper, we report on experiments aimed at quantitatively evaluating the amount of information carried by color versus luminance (gray-level images) for automatic lip-reading. More precisely, we focus on the lip-location problem: we trained Artificial Neural Network (ANN) classifiers, which have proven effective for the lip, skin, and inner-mouth classification task, on both color and grayscale image blocks extracted from the same images. Experiments were conducted with 6 subjects (1 female, 5 males, some with a little facial hair) taken from the freely available LIUM-AVS database. Several ANN architectures were tested, and in all cases the use of color information yielded a substantial reduction in classification error. Considering all the image blocks available from the lip region, the classification error was reduced from 30% with gray-level input to 5% with color.
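The abstract does not give the paper's exact ANN architectures, block sizes, or data; the following is a minimal sketch of the color-versus-grayscale comparison it describes, using a small scikit-learn MLP on synthetic RGB blocks. The block size, hidden-layer width, and per-class color statistics below are illustrative assumptions, not values from the paper, and synthetic data stands in for the LIUM-AVS blocks.

# Minimal sketch of the color-vs-grayscale block classification comparison.
# The paper's exact ANN architectures and the LIUM-AVS data are not given in
# the abstract, so synthetic RGB blocks stand in for real lip/skin/inner-mouth
# patches; block size, hidden width, and class colors are illustrative guesses.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
BLOCK = 5  # hypothetical 5x5-pixel image blocks

def synth_blocks(n_per_class):
    """Generate toy RGB blocks for three classes: lip, skin, inner mouth."""
    means = {
        0: (0.70, 0.35, 0.35),  # lip: reddish
        1: (0.75, 0.60, 0.50),  # skin: lighter, less saturated
        2: (0.30, 0.15, 0.15),  # inner mouth: dark
    }
    X, y = [], []
    for label, mu in means.items():
        blocks = rng.normal(mu, 0.12, size=(n_per_class, BLOCK, BLOCK, 3))
        X.append(blocks.clip(0.0, 1.0))
        y.append(np.full(n_per_class, label))
    return np.concatenate(X), np.concatenate(y)

X, y = synth_blocks(2000)
# Grayscale version of the same blocks (ITU-R BT.601 luminance weights).
gray = X @ np.array([0.299, 0.587, 0.114])

# Train the same ANN on color and on grayscale inputs and compare errors.
for name, feats in [("color", X.reshape(len(X), -1)),
                    ("gray", gray.reshape(len(X), -1))]:
    Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.25,
                                          random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
    ann.fit(Xtr, ytr)
    err = 1.0 - ann.score(Xte, yte)
    print(f"{name:5s} input: classification error = {err:.1%}")

On such toy data the color-input classifier typically reaches a lower error than the grayscale one, mirroring the direction (though not the magnitudes) of the 30% vs. 5% result reported above.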


doi: 10.21437/Interspeech.2005-366

Cite as: Daubias, P. (2005) Is color information really useful for lip-reading ? (or what is lost when color is not used). Proc. Interspeech 2005, 1193-1196, doi: 10.21437/Interspeech.2005-366

@inproceedings{daubias05_interspeech,
  author={Philippe Daubias},
  title={{Is color information really useful for lip-reading ? (or what is lost when color is not used)}},
  year=2005,
  booktitle={Proc. Interspeech 2005},
  pages={1193--1196},
  doi={10.21437/Interspeech.2005-366}
}