The successful application of speech recognition systems to new domains depends greatly on tuning the architecture to the new task, especially when the amount of training data is small. In this paper we present (1) an improved version of our Automatic Structure Optimization (ASO) algorithm, which performs this tuning automatically, and (2) a new Automatic Validation Analyzing Control System (AVACS), designed to detect poorly generalizing models as early as possible and to selectively change their learning and automatic structuring process. ASO and AVACS were applied to a Multi State Time Delay Neural Network and improved the generalization performance of an already hand-tuned architecture from 85% to 92.3% on an alphabet recognition task.
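The paper's implementation details for AVACS are not given here, but its core idea, watching validation behavior during training and intervening when a model starts to generalize poorly, can be illustrated with a minimal sketch. The function name, thresholds, and toy accuracy curves below are assumptions for illustration, not the authors' actual method:

```python
# Hypothetical sketch in the spirit of AVACS: monitor the gap between
# training and validation accuracy each epoch and flag a poorly
# generalizing model as soon as the gap stays large for a few epochs.
# All names and thresholds here are illustrative assumptions.

def validation_gap_monitor(train_acc, val_acc, gap_threshold=0.10, patience=2):
    """Return the first epoch at which the train/validation accuracy gap
    has exceeded gap_threshold for `patience` consecutive epochs, or
    None if generalization stays acceptable throughout."""
    streak = 0
    for epoch, (tr, va) in enumerate(zip(train_acc, val_acc)):
        if tr - va > gap_threshold:
            streak += 1
            if streak >= patience:
                # Intervention point: e.g. change the learning schedule
                # or the automatic structuring process for this model.
                return epoch
        else:
            streak = 0
    return None

# Toy curves: training keeps improving while validation stalls.
train = [0.60, 0.70, 0.78, 0.85, 0.90, 0.94]
val   = [0.58, 0.67, 0.74, 0.76, 0.77, 0.77]
print(validation_gap_monitor(train, val))  # → 5
```

In a full system the flagged epoch would trigger a selective change to that model's training or structure rather than simply stopping it, which is what distinguishes the control-system view from plain early stopping.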
Bibliographic reference. Bodenhausen, Ulrich / Waibel, Alex (1993): "Tuning by doing: flexibility through automatic structure optimization", In EUROSPEECH'93, 1485-1488.