Dear Members,
At the last board phone meeting, the venue of Interspeech 2009 was decided: Brighton, UK, following a bid prepared by Professor Roger K. Moore. We wish him every success and thank him for accepting this demanding task.
I recommend that you visit our archive where, thanks to Professor Wolfgang Hess, the full papers of Interspeech 2005 are now available, including the slides of the keynote speakers.
I remind you of two important requests:
First, SIG leaders are urged to submit brief activity reports.
Second, if you are aware of new books devoted to speech science and/or technology, please draw my attention to them so that I can advertise them in ISCApad.
See you soon at ICASSP in Toulouse.
Christian Wellekens
TABLE OF CONTENTS
- ISCA News
- SIG's activities
- Courses, internships
- Books, databases, software
- Job openings
- Future Interspeech Conferences
- Future ISCA Tutorial and Research Workshops (ITRW)
- Forthcoming Events supported (but not organized) by ISCA
- Future Speech Science and Technology events
From the ISCA Student Advisory Committee (ISCA-SAC)
ISCA Speech Labs Listing
ISCA-SAC is in the process of updating ISCA databases. An important part of
this process is to have an extensive list of speech labs and groups from all
over the world. Right now, there are 102 labs from 24 countries. Please check the listing and enter your group's information at http://www.isca-students.org/new-speech-lab.php if your group is not listed.
Do you want to become a board member in ISCA Student Advisory Committee?
ISCA-SAC is looking for new motivated members (PhD students early in their degrees are preferred). There are positions available on the ISCA-SAC board. If you want to volunteer for ISCA and contribute to ISCA-SAC efforts (to get an idea, please visit our website), contact us by sending an email to .
There are exciting projects that current board members and volunteering students are working on. Join us!
ISCA-SAC Student Coordinator
PhD Student, University of Colorado at Boulder
Research Intern, University of Texas at Dallas
From our archivist Professor Wolfgang Hess
The full papers of Interspeech 2005 (Lisbon) have just been uploaded and are now online.
Grants are available for students and young scientists.
For more information: http://www.isca-speech.org/grants
A list of Speech Interest Groups can be found on
Call for NATO Advanced Study Institute
International NATO Summer School "E.R.Caianiello" XI Course on
The Fundamentals of Verbal and Non-verbal Communication and the Biometrical Issue
September 2-12, 2006, Vietri sul Mare, Italy
See the website for details.
CNRS thematic schools: DIALOGUE et INTERACTION
2-8 July 2006, Autrans (Isère, France)
See the website for more information.
A registration form is available on the site. Deadline: 2 June.
The 12th ELSNET European Summer School on Language and Speech Communication
INFORMATION FUSION IN NATURAL LANGUAGE SYSTEMS
hosted by the University of Hamburg, Hamburg, Germany
3 - 14 July 2006
The summer school will start with a survey of phenomena and mechanisms for information fusion. It will continue with a study of various approaches to sensor-data fusion in technical systems, such as robots. Finally, it will investigate the issue of information fusion from the perspective of a range of speech and language processing tasks, namely:
speech recognition and spoken language systems
distributed and multilingual information systems
multimodal speech and language systems
Information fusion for command and control, Pontus Svenson (FOI Stockholm, Sweden)
Audio-visual speech recognition, Rainer Stiefelhagen (University of Karlsruhe, Germany)
XML integration of natural language processing components, Ulrich Schaefer (DFKI, Germany)
Hybrid parsing, Kilian Forth and Wolfgang Menzel (University of Hamburg, Germany)
Ontologies for information fusion, Luciano Serafini (ITC-IRST Trento, Italy)
Syntax-semantics integration in HPSG, Valia Kordoni (DFKI, Germany)
Hybrid approaches in machine translation, Stephan Oepen (University of Oslo, Norway)
Ensemble-based architectures, to be announced
Information fusion in multi-document summarization, to be announced
Courses will last one week. Some of them will include practical exercises.
Pre-registration deadline: 30.05.2006
Notification of acceptance: 10.06.2006
Payment deadline: 30.06.2006
Summer school: 3.07-14.07.2006
To pre-register, candidates are required to send a statement of interest in participating in the summer school, a curriculum vitae, a title for a contribution to the "student" session, and the courses of interest, to:
Walther v. Hahn
University of Hamburg, Dept. of Computer Science
Natural Language Systems Division
Vogt-Koelln Str. 30
Tel: +49 40 428832533
Fax: +49 40 428832515
BOOKS, DATABASES, SOFTWARE
Reconnaissance automatique de la parole: Du signal a
Authors: Jean-Paul Haton
PHONETICA Journal - Editor: K. Kohler (Kiel); Publisher: Karger
Special offer to ISCA members:
CHF 145.55/EUR 107.85/USD 132.25 for 2006 online or print subscription
Phonetic science is a field increasingly accessible to experimental verification. Reflecting this development, ‘Phonetica’ is an international and interdisciplinary forum which features expert original work covering all aspects of the subject: descriptive linguistic phonetics and phonology (comprising segmental as well as prosodic phenomena) are treated side by side with the experimental measuring domains of speech physiology, articulation, acoustics, and perception. ‘Phonetica’ thus provides an overall representation of speech communication. Papers published in this journal report both theoretical issues and empirical data.
Please enter your ISCA member number
o online CHF 145.55/EUR 107.85/USD 132.25
o print* CHF 145.55/EUR 107.85/USD 132.25
o combined (online and print)* CHF 190.55/EUR 140.85/USD 173.25
*+ postage and handling: CHF 22.40/EUR 16.20/USD 30.40
by credit card (American Express, Diners, Visa, Eurocard). Send your card number, type, and expiration date
by check enclosed
or ask to be billed
Name/Address (please print):
Date and signature required
Projects on *Speaker Classification*
for a Springer LNCS/LNAI Book
edited by Christian Müller (DFKI, Germany) and Susanne Schötz (University of Lund, Sweden)
Christian Müller from the German Research Center for Artificial Intelligence and Susanne Schötz from the University of Lund (Sweden) are editing a collection on speaker classification, to be published by Springer in the LNCS/LNAI series (in the 'State-of-the-Art Surveys' or 'Hot Topics' section). In parallel to the printed book, it will be published in full-text electronic form via the Springer internet platform www.springerlink.com.
More about the book
We invite contributions from a variety of areas related to speaker classification, including artificial intelligence (machine learning, pattern classification) and natural language technology, as well as phonetics.
The list of topics includes (but is not restricted to):
* speaker identification
* accent identification
* dialect identification
* sociolect identification
* language identification
* age and gender recognition
* emotion recognition
* recognition of cognitive state (e.g. working memory load)
* acquiring any other kind of information about the speaker on the basis of
her/his speech or speaking behavior
Please note that we *explicitly allow* contributions that have already been published in conference proceedings or journals. Novel papers are welcome as well, of course.
If you are interested in contributing to the book, please send an abstract to Christian.Mueller@dfki.de. The abstract does not have to be formatted; feel free to send .doc, .pdf, .txt or .tex files.
ABSTRACT submissions due: April 30, 2006 (earlier submissions are welcome)
Notification of acceptance: June 1, 2006
Full paper manuscripts due: open, scheduled for August 2006
Camera-ready versions due: open, scheduled for September 2006
Publication of the book: open, scheduled for October 2006
Dr. Christian Müller
German Research Center for Artificial Intelligence (DFKI)
We invite all laboratories and industrial companies which have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (Also have a look at http://www.isca-speech.org/jobs as well as http://www.elsnet.org/Jobs)
Computational Linguist, Text-to-Speech Synthesis, Boston area
Location: Boston area (Position AXG-1005)
The Computational Linguist will work with the company's technical team
to develop and integrate linguistic resources and applications for the
company's TTS engine.
Areas of Competence
* Computational Linguistics
* Speech corpus
* Text corpus
* Produce and maintain speech corpus, audio data, transcripts and
phonetic dictionary, data annotation, and component/model configuration
* Verify existing corpus
* Develop utilities, lexicons, and other language resources for the company's unique TTS
* Adapt text language parsing and analysis software for new TTS needs
* Thorough grounding in phonology, phonetics, lexicography, orthography,
semantics, morphology, syntax, and other branches of linguistics
* Experience with language parsing and analysis software, such as
part-of-speech (POS) and syntactic taggers, semantics, and discourse
* Experience with formant-based or concatenative speech synthesis
* Experience working on medium-scale, multi-developer software projects
* Experience with development of speech corpus, transcripts, data
annotation, and phonetic dictionary
* Programming experience in C/C++/Matlab/Perl
* Self-motivation and ability to work independently
* Familiarity with concepts and techniques from DSP theory, machine
learning and statistical modeling is a plus
Must have a Master's or Ph.D. in Engineering, Computer Science or Linguistics, with development or research experience in speech.
Direct your confidential response to:
Arnold L. Garlick III
Pacific Search Consultants
(949) 366-9000 Ext. 2#
Please refer to Position AXG-1005
The Institut de la Communication Parlée (ICP) and the Laboratoire des Images et des Signaux (LIS) in Grenoble are recruiting a doctoral student on a fellowship from the EEATS doctoral school, earmarked by the Institut National Polytechnique de Grenoble (INPG). The thesis will be carried out within a project funded by the INPG Bonus Qualité Recherche (BQR) and will address the acoustic (via maxillo-facial stethoscopy) and electromyographic characterization of silent speech and
All details are given on our website (under "Sujets de thèses").
The Institut de la Communication Parlée (ICP, the Institute of Spoken Communication) and the Laboratoire des Images et des Signaux (LIS, the Laboratory of Images and Signals) in Grenoble, France, invite applications for a three-year doctoral research fellowship (full funding for the length of the three-year doctoral program) to participate in an ongoing INPG-funded project to explore the acoustic (via maxillo-facial stethoscopy) and electromyographic features of silent speech and
Application details and further information on our website (under "Sujets de thèses").
Doctoral (PhD) Positions in the field of Content-based Multimedia Information Retrieval and Management
Department of Computer Science - Faculty of Sciences - University of Geneva - Switzerland
The Viper group, part of the Computer Vision and Multimedia Laboratory, has long research experience in content-based multimedia information retrieval (image, video, text, ...). Its activities have led, amongst other results, to the development of interactive demo systems for content-based video (ViCode) and image (GIFT) retrieval and management. We wish to continue these activities.
Description of posts:
Several doctoral positions are open in connection with international and national project funds awarded on the basis of our research activities in the broad field of content-based multimedia information search, retrieval and management. The research performed will contribute directly to our current and upcoming projects, including ViCode and the Collection Guide (see our main website for details).
The successful applicants should show knowledge and interest in one or more of the following domains:
* Data mining, statistical data analysis
* Statistical learning
* Signal, image, audio processing
* Knowledge engineering
* Indexing, Databases
* Operations research
Starting date: No later than September 2006.
Salary: CHF 48'000 per annum (1st year)
Supervision: Dr. S. Marchand-Maillet and Dr. E. Bruno
Application: Applications by email are welcome to:
Dr. Eric Bruno
Computer Vision and Multimedia Laboratory
Department of Computer Science, University of Geneva
24, rue du General Dufour, CH-1211 Geneva 4
This announcement (with more information).
Research contract at CNRS-Sorbonne nouvelle
SHS 16: The phonetic bases of distinctive features
Position description:
According to distinctive feature theory, speech sounds (consonants, vowels, tones) are made up of primitive units called distinctive features, defined along certain major phonetic dimensions. The phoneme inventory of a language is defined by its choice of features and feature combinations. Not only do features define the particular structure and economy (points of equilibrium, points of instability) of each sound system, they also play a central role at the cognitive level, structuring the perception and production of speech sounds. Despite substantial research over many years, the phonetic bases of features remain insufficiently understood; features are defined either in the articulatory and aerodynamic domain, or in the acoustic or auditory domain, with no agreement among researchers on this point. In recent years, a new approach has emerged within what is known as the Quantal Theory of features, developed initially by K.N. Stevens and his collaborators at MIT (Stevens 1989, 1998, 2005, etc.). The principal originality of this theory is the equal status it accords to the acoustic, articulatory and perceptual dimensions of spoken language. It is one of the recent models that best succeeds in integrating phonetics and phonology. Our project proposes to examine in detail the predictions and empirical consequences of this model. Among others, the following questions are raised:
- What are the features, and how are they defined?
- Why do features exploit certain acoustic and articulatory dimensions and not others?
- Do all features have a quantal definition?
- To what extent are the attributes of features robust under diverse conditions (choice of speaker, speaking rate, sound type, phonological or prosodic context)?
- To what extent do these attributes vary across languages?
- What is the role of features in language acquisition, lexical recognition, or the perception of the native language or a foreign language?
- What is their cognitive status?
The study of these questions, which lie at the crossroads of several fields (phonetics, phonology, psycholinguistics, neurolinguistics, developmental studies), is of primary importance for a better integration of phonology and phonetics, of language and speech. Our project has received funding under the ACI-PROSODIE programme of the Ministère Délégué à la Recherche for a period of 3 years (2004-2007) (see the website). It currently brings together several activities: seminars, invited talks, meetings of a research group composed of postdocs and doctoral students, and the organization of colloquia and international exchanges.
Position profile
The Laboratoire de phonétique et phonologie (UMR 7018, CNRS/Sorbonne nouvelle, Paris) seeks a candidate who will carry out research within an ACI-PROSODIE project on the phonetic bases of distinctive features in the general framework of Quantal Theory; see the description above and the website for further details.
The candidate will have recently defended a thesis in a field relevant to this project and will have training in articulatory modelling, acoustic analysis and/or speech perception, as well as a good knowledge of the relations between these three domains. A strong commitment to integrating phonological, phonetic and cognitive approaches to the study of language is desirable. The candidate will work in an interactive team and will be invited to take part in the laboratory's activities. For further details, contact Nick Clements.
The application file, available on this site, should be sent to Mr. Clements, preferably by email, together with a cover letter, a CV, and the names and contact details of three references.
Duration: 1 year
Laboratoire de Phonétique et Phonologie
+33 (0)1 45 32 05 76
Nuance (Burlington MA)_#1465 - Senior Research Scientist - Speech
Nuance, formerly ScanSoft, a worldwide leader in imaging, speech and language solutions, has an opening for a research scientist in speech recognition.
The candidate will work on improving the recognition performance of the speech recognition engine and its main application in Nuance's award-winning dictation products.
Dragon NaturallySpeaking® is our market-leading desktop dictation product.
The recently released version 8 showed substantial accuracy improvements over
previous versions. DragonMT is our new medical transcription server, which
brings the benefit of ScanSoft’s dictation technology to the problem of machine
assisted medical transcription. We are looking for an individual who wants to solve
difficult speech recognition problems, and help get those solutions into our products,
so that our customers can work more effectively.
Main responsibilities of the candidate will include:
provide experimental and theoretical analysis of speech recognition problems
formulate new algorithms, create research tools, design and carry out experiments
to verify new algorithms
work with other members in the team to improve the performance of our products and
add new product features to meet business requirements
work with other team members to deliver acoustic models for products
work with development engineers to ensure a high-quality implementation of algorithms
and models in company products
follow developments in speech recognition to keep our research work state-of-the-art
patent new algorithms and write scientific papers when appropriate
Ph.D. or Master's degree in computer science or electrical engineering
good analytical and diagnostic skills
experience with C/C++, and scripting using Perl, Python and csh in a UNIX environment
ability to work with a large existing code base
desire and ability to be a team player
strong desire and demonstrated ability to work on and solve engineering problems.
Preference will be given to candidates who have a strong speech recognition background. Previous involvement in the DARPA EARS project is a plus. New graduates with good GPAs from top universities are encouraged to apply.
The position will be located in our new headquarters in Burlington, MA, which is approximately 15 miles west of Boston.
Information about ScanSoft and its products can be found on our website.
Nuance (Burlington MA)_#1322 Research Engineer - Natural Language Understanding
Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world. Our technologies, applications and services make the user experience more compelling by transforming the way people interact with information and how they create, share and use documents. Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, getting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched. Making each of those experiences productive and compelling is what Nuance is all about.
We comprise the world's largest portfolio of speech and imaging products backed by the expertise of our professional services organization and a partner network that can create solutions for businesses and organizations around the globe. So whether it's switching to speech to improve customer service or business productivity, or simplifying the way people work with documents, Nuance has the solution.
The candidate will work in the Network NL group, which develops technology, tools and runtime software to enable our customers to build speech applications using natural language. Some of the current problems include:
- Generating language models for new applications with little application-specific training data.
- Statistical semantics, e.g. training classifiers for call routing.
- Robust parsing and other techniques to extract richer semantics than a routing destination.
The candidate will work on the full product cycle: speak with professional service engineers or customers to identify NL needs and help with solutions, develop new algorithms, conduct experiments, and write product quality software to deliver these new algorithms in the product release cycle.
Strong software skills. C++ required. Perl/Python desirable.
Needed both for research code and for product quality, unit-tested code that we ship.
Advanced degree in computer science or related field.
Experience in natural language processing, especially call routing, language modeling
and related areas.
Ability to take initiative, but also follow a plan and work well in a group environment.
A strong desire to make things “really work” in practice.
Please apply on-line
Nuance (Burlington MA)_#1365 Software Engineer
Nuance Communications, Inc, a worldwide leader in speech and imaging solutions,
has an opening for a senior software engineer to maintain and improve acoustic model
training and testing toolkits in the Dragon R&D department.
The candidate will join a group of talented speech scientists and research engineers
to advance acoustic modeling techniques for Dragon dictation solutions and other
Nuance speech recognition products. We are looking for a self-motivated, goal-driven
individual who has strong programming and software architecture skills.
• Maintain and improve acoustic modeling toolkit
o Improve efficiency, flexibility and, when appropriate, architecture of the training toolkit
o Improve resource utilization of the toolkit in a large grid computing environment
o Implement new training algorithms in cooperation with speech scientists
o Handle toolkit bug reports and feature requests
o Clean up legacy code, improve code quality and maintainability
o Perform regression tests and release toolkits
o Improve toolkit documentation
• Improve the software implementation of our research testing framework
• Update acoustic modeling and testing toolkits to work with new versions of
• Bachelor’s or Master’s degree in computer science or electrical engineering
• Strong programming skills using C/C++ and scripting languages Perl/Python in a UNIX environment
• Significant experience in creating and maintaining a software toolkit. This includes
version control, bug reporting, testing, and releasing code to a user community.
• Ability to work with a large existing code base
• Good software design and architecture skill
• Attention to detail: ability and interest in getting lots of details right on a work
• Desire and ability to be a team player
• Experience with building acoustic models for speech recognition
• Experience with CVS
• Experience coming up to speed on a large existing code base in a short period of time
• Knowledge of speech recognition algorithms, including model training algorithms
Preference will be given to candidates who have experience in maintaining a speech recognition toolkit. Previous experience in computer administration and grid software management is a plus.
Please apply on-line
POSTDOCTORAL POSITION at LINKÖPING UNIVERSITY (SWEDEN)
A position for a postdoctoral associate is available within the Sound Technology Group, Digital Media Division, at the Department of Science and Technology (ITN), Linköping University, Campus Norrköping, Sweden.
Our research is focused on physical and perceptual models of sound
sources, sound source separation and adapted signal representations.
Candidates must have a strong background in research and a completed PhD.
Programming skills (e.g. Matlab, C/C++ or Java) are very desirable, as
well as expertise in conducting acoustic/auditory experiments.
We are especially interested in candidates with research background in
the following areas:
. Auditory Scene Analysis
. Sound Processing
. Spatial Audio and Hearing
. Time-Frequency and Wavelet Representations
but those with related research interests are also welcome to apply.
Inquiries and CVs must be addressed to Prof. G. Evangelista (please consult the sound technology web page in order to obtain the e-mail address).
Professor of Sound Technology
Digital Media Division
Department of Science and Technology (ITN)
Linköping Institute of Technology (LiTH) at Campus Norrköping
SE-60174 Norrköping, Sweden
Call for papers for a Special Issue of Speech Communication:
"Bridging the Gap Between Human and Automatic Speech Processing"
This special issue of Speech Communication is entirely devoted to
studies that seek to bridge the gap between human and automatic speech
recognition. It follows the special session at INTERSPEECH 2005 on this topic, and the announcement sent out in January and February 2006.
Submission date: April 30, 2006
Papers out for review: May 7, 2006
First round of reviews in: June 30, 2006
Acceptance/revisions/rejections: July 7, 2006
Revisions due: August 2006
Notification of acceptance: August 30, 2006
Final manuscript due: September 30, 2006
Tentative publication date: December 2006
Papers are invited that cover one or several of the following issues:
- quantitative comparisons of human and automatic speech processing capabilities, especially under varying environmental conditions
- computational approaches to modelling human speech processing
- use of automatic speech processing as an experimental tool in human speech perception research
- perception/production-inspired modelling approaches for speech recognition, speaker/language recognition, speaker tracking, sound source separation
- use of perceptually motivated models for providing rich transcriptions of speech signals (i.e. annotations going beyond the word, such as emotion, attitude, speaker characteristics, etc.)
- fine phonetic details: how should we envisage the design and evaluation of computational models of the relation between fine phonetic details in the signal on the one hand, and key effects in (human) speech processing on the other?
- how can advanced detectors for articulatory-phonetic features be integrated in computational models of human speech processing?
- the influence of speaker recognition on speech recognition
Papers must be submitted by April 30, 2006 at the Elsevier website. During submission, mention that you are submitting to the Special Issue on "Bridging the Gap..." in the paper section/category or author comments, and request Julia Hirschberg as managing editor for the paper.
Department of Electrical Engineering
University of Washington
Seattle, WA, 98195
Louis ten Bosch
Dept. of Language and Speech
Post Box 9103
6500 HD Nijmegen
Papers accepted for FUTURE PUBLICATION in Speech Communication
Full text available on http://www.sciencedirect.com/ for Speech Communication subscribers and subscribing institutions. Click on Publications, then on Speech Communication, and on Articles in Press. The list of papers in press is displayed, and a .pdf file for each paper is available.
Abhinav Sethy, Shrikanth Narayanan and S. Parthasarthy, A split lexicon approach for improved recognition of spoken names, Speech Communication, In Press, Uncorrected Proof, , Available online 5 May 2006, .
Keywords: Syllable; Spoken name recognition; Reverse lookup; Split lexicon
Akira Sasou, Futoshi Asano, Satoshi Nakamura and Kazuyo Tanaka, HMM-based noise-robust feature compensation, Speech Communication, In Press, Uncorrected Proof, , Available online 4 May 2006, .
Keywords: Noise robust; Hidden Markov model; AURORA2
Alejandro Bassi, Nestor Becerra Yoma and Patricio Loncomilla, Estimating tonal prosodic discontinuities in Spanish using HMM, Speech Communication, In Press, Uncorrected Proof, , Available online 2 May 2006, .
Esfandiar Zavarehei, Saeed Vaseghi and Qin Yan, Inter-frame modeling of DFT trajectories of speech and noise for speech enhancement using Kalman filters, Speech Communication, In Press, Corrected Proof, , Available online 25 April 2006, .
Keywords: Speech enhancement; Kalman filter; AR modeling of DFT; DFT distributions
SungHee Kim, Robert D. Frisina and D. Robert Frisina, Effects of age on speech understanding in normal hearing listeners: Relationship between the auditory efferent system and speech intelligibility in noise, Speech Communication, In Press, Corrected Proof, , Available online 7 April 2006, .
Keywords: Aging; Presbycusis; Medial efferent system; Release from masking; Cocktail party effect
Fatih Ögüt, Mehmet Akif Kiliç, Erkan Zeki Engin and Rasit Midilli, Voice onset times for Turkish stop consonants, Speech Communication, In Press, Uncorrected Proof, , Available online 3 April 2006, .
Keywords: Articulation; Consonant; Acoustics; Speech; Stop consonants; Voice onset time
Frédéric Bimbot, Marcos Faundez-Zanuy and Renato de Mori, Editorial, Speech Communication, In Press, Corrected Proof, , Available online 10 March 2006, .
Jan Stadermann and Gerhard Rigoll, Hybrid NN/HMM acoustic modeling techniques for distributed speech recognition, Speech Communication, In Press, Corrected Proof, , Available online 3 March 2006, .
Keywords: Distributed speech recognition; Tied-posteriors; Hybrid speech recognition
Gerasimos Xydas and Georgios Kouroupetroglou, Tone-Group F0 selection for modeling focus prominence in small-footprint speech synthesis, Speech Communication, In Press, Corrected Proof, available online 2 March 2006.
Keywords: Text-to-speech synthesis; Tone-Group unit-selection; Intonation and emphasis in speech synthesis
Antonio Cardenal-López, Carmen García-Mateo and Laura Docío-Fernández, Weighted Viterbi decoding strategies for distributed speech recognition over IP networks, Speech Communication, In Press, Corrected Proof, available online 28 February 2006.
Keywords: Distributed speech recognition; Weighted Viterbi decoding; Missing data
Felicia Roberts, Alexander L. Francis and Melanie Morgan, The interaction of inter-turn silence with prosodic cues in listener perceptions of "trouble" in conversation, Speech Communication, In Press, Corrected Proof, available online 28 February 2006.
Keywords: Silence; Prosody; Pausing; Human conversation; Word duration
Ismail Shahin, Enhancing speaker identification performance under the shouted talking condition using second-order circular hidden Markov models, Speech Communication, In Press, Corrected Proof, available online 14 February 2006.
Keywords: First-order left-to-right hidden Markov models; Neutral talking condition; Second-order circular hidden Markov models; Shouted talking condition
A. Borowicz, M. Parfieniuk and A.A. Petrovsky, An application of the warped discrete Fourier transform in the perceptual speech enhancement, Speech Communication, In Press, Corrected Proof, available online 10 February 2006.
Keywords: Speech enhancement; Warped discrete Fourier transform; Perceptual processing
Pushkar Patwardhan and Preeti Rao, Effect of voice quality on frequency-warped modeling of vowel spectra, Speech Communication, In Press, Corrected Proof, available online 3 February 2006.
Keywords: Voice quality; Spectral envelope modeling; Frequency warping; All-pole modeling; Partial loudness
Veronique Stouten, Hugo Van hamme and Patrick Wambacq, Model-based feature enhancement with uncertainty decoding for noise robust ASR, Speech Communication, In Press, Corrected Proof, available online 3 February 2006.
Keywords: Noise robust speech recognition; Model-based feature enhancement; Additive noise; Convolutional noise; Uncertainty decoding
Jinfu Ni and Keikichi Hirose, Quantitative and structural modeling of voice fundamental frequency contours of speech in Mandarin, Speech Communication, In Press, Corrected Proof, available online 26 January 2006.
Keywords: Prosody modeling; F0 contours; Tone; Intonation; Tone modulation; Resonance principle; Analysis-by-synthesis; Tonal languages
Francisco Campillo Díaz and Eduardo Rodríguez Banga, A method for combining intonation modelling and speech unit selection in corpus-based speech synthesis systems, Speech Communication, In Press, Corrected Proof, available online 24 January 2006.
Keywords: Speech synthesis; Unit selection; Corpus-based; Intonation
Jean-Baptiste Maj, Liesbeth Royackers, Jan Wouters and Marc Moonen, Comparison of adaptive noise reduction algorithms in dual microphone hearing aids, Speech Communication, In Press, Corrected Proof, available online 24 January 2006.
Keywords: Adaptive beamformer; Adaptive directional microphone; Calibration; Noise reduction algorithms; Hearing aids
Roberto Togneri and Li Deng, A state-space model with neural-network prediction for recovering vocal tract resonances in fluent speech from Mel-cepstral coefficients, Speech Communication, In Press, Corrected Proof, available online 24 January 2006.
Keywords: Vocal tract resonance; Tracking; Cepstra; Neural network; Multi-layer perceptron; EM algorithm; Hidden dynamics; State-space model
T. Nagarajan and H.A. Murthy, Language identification using acoustic log-likelihoods of syllable-like units, Speech Communication, In Press, Corrected Proof, available online 19 January 2006.
Keywords: Language identification; Syllable; Incremental training
Yasser Ghanbari and Mohammad Reza Karami-Mollaei, A new approach for speech enhancement based on the adaptive thresholding of the wavelet packets, Speech Communication, In Press, Corrected Proof, available online 19 January 2006.
Keywords: Speech processing; Speech enhancement; Wavelet thresholding; Noisy speech recognition
Mohammad Ali Salmani-Nodoushan, A comparative sociopragmatic study of ostensible invitations in English and Farsi, Speech Communication, In Press, Corrected Proof, available online 11 January 2006.
Keywords: Ostensible invitations; Politeness; Speech act theory; Pragmatics; Face threatening acts
Laurent Benaroya, Frédéric Bimbot, Guillaume Gravier and Rémi Gribonval, Experiments in audio source separation with one sensor for robust speech recognition, Speech Communication, In Press, Corrected Proof, available online 19 December 2005.
Keywords: Noise suppression; Source separation; Speech enhancement; Speech recognition
Naveen Srinivasamurthy, Antonio Ortega and Shrikanth Narayanan, Efficient scalable encoding for distributed speech recognition, Speech Communication, In Press, Corrected Proof, available online 19 December 2005.
Keywords: Distributed speech recognition; Scalable encoding; Multi-pass recognition; Joint coding-classification
Leigh D. Alsteris and Kuldip K. Paliwal, Further intelligibility results from human listening tests using the short-time phase spectrum, Speech Communication, In Press, Corrected Proof, available online 5 December 2005.
Keywords: Short-time Fourier transform; Phase spectrum; Magnitude spectrum; Speech perception; Overlap-add procedure; Automatic speech recognition; Feature extraction; Group delay function; Instantaneous frequency distribution
Luis Fernando D'Haro, Ricardo de Córdoba, Javier Ferreiros, Stefan W. Hamerich, Volker Schless, Basilis Kladis, Volker Schubert, Otilia Kocsis, Stefan Igel and José M. Pardo, An advanced platform to speed up the design of multilingual dialog applications for multiple modalities, Speech Communication, In Press, Corrected Proof, available online 5 December 2005.
Keywords: Automatic dialog systems generation; Dialog management tools; Multiple modalities; Multilinguality; XML; VoiceXML
Ben Milner and Xu Shao, Clean speech reconstruction from MFCC vectors and fundamental frequency using an integrated front-end, Speech Communication, In Press, Corrected Proof, available online 21 November 2005.
Keywords: Distributed speech recognition; Speech reconstruction; Sinusoidal model; Source-filter model; Fundamental frequency estimation; Auditory model
Min Chu, Yong Zhao and Eric Chang, Modeling stylized invariance and local variability of prosody in text-to-speech synthesis, Speech Communication, In Press, Corrected Proof, available online 18 November 2005.
Keywords: Prosody; Stylized invariance; Local variability; Soft prediction; Unit selection; Text-to-speech
Stephen So and Kuldip K. Paliwal, Scalable distributed speech recognition using Gaussian mixture model-based block quantisation, Speech Communication, In Press, Corrected Proof, available online 18 November 2005.
Keywords: Distributed speech recognition; Gaussian mixture models; Block quantisation; Aurora-2
Junho Park and Hanseok Ko, Achieving a reliable compact acoustic model for embedded speech recognition system with high confusion frequency model handling, Speech Communication, In Press, Corrected Proof, available online 11 November 2005.
Keywords: Tied-mixture HMM; Compact acoustic modeling; Embedded speech recognition system
Amalia Arvaniti, D. Robert Ladd and Ineke Mennen, Phonetic effects of focus and "tonal crowding" in intonation: Evidence from Greek polar questions, Speech Communication, In Press, Corrected Proof, available online 26 October 2005.
Keywords: Intonation; Focus; Tonal alignment; Phrase accent; Tonal crowding
Dimitrios Dimitriadis and Petros Maragos, Continuous energy demodulation methods and application to speech analysis, Speech Communication, In Press, Corrected Proof, available online 25 October 2005.
Keywords: Nonstationary speech analysis; Energy operators; AM-FM modulations; Demodulation; Gabor filterbanks; Feature distributions; ASR; Robust features; Nonlinear speech analysis
Daniel Recasens and Aina Espinosa, Dispersion and variability of Catalan vowels, Speech Communication, In Press, Corrected Proof, available online 24 October 2005.
Keywords: Vowels; Catalan; Schwa; Vowel spaces; Contextual and non-contextual variability for vowels; Acoustic analysis; Electropalatography
Cynthia G. Clopper and David B. Pisoni, The Nationwide Speech Project: A new corpus of American English dialects, Speech Communication, In Press, Corrected Proof, available online 21 October 2005.
Keywords: Speech corpus; Dialect variation; American English
SungHee Kim, Robert D. Frisina, Frances M. Mapes, Elizabeth D. Hickman and D. Robert Frisina, Effect of age on binaural speech intelligibility in normal hearing adults, Speech Communication, In Press, Corrected Proof, available online 17 October 2005.
Keywords: Age; Presbycusis; HINT; Speech intelligibility in noise
Tong Zhang, Mark Hasegawa-Johnson and Stephen E. Levinson, Cognitive state classification in a spoken tutorial dialogue system, Speech Communication, In Press, Corrected Proof, available online 17 October 2005.
Keywords: Intelligent tutoring system; User affect recognition; Spoken language processing
Marcos Faundez-Zanuy, Speech coding through adaptive combined nonlinear prediction, Speech Communication, In Press, Corrected Proof, available online 17 October 2005.
Keywords: Speech coding; Nonlinear prediction; Neural networks; Data fusion
Praveen Kakumanu, Anna Esposito, Oscar N. Garcia and Ricardo Gutierrez-Osuna, A comparison of acoustic coding models for speech-driven facial animation, Speech Communication, In Press, Corrected Proof, available online 17 October 2005.
Keywords: Speech-driven facial animation; Audio-visual mapping; Linear discriminants analysis
Giampiero Salvi, Dynamic behaviour of connectionist speech recognition with strong latency constraints, Speech Communication, In Press, Corrected Proof, available online 14 June 2005.
Keywords: Speech recognition; Neural network; Low latency; Non-linear dynamics
Erhard Rank and Gernot Kubin, An oscillator-plus-noise model for speech synthesis, Speech Communication, In Press, Corrected Proof, available online 21 April 2005.
Keywords: Non-linear time-series; Oscillator model; Speech production; Noise modulation
Kevin M. Indrebo, Richard J. Povinelli and Michael T. Johnson, Sub-banded reconstructed phase spaces for speech recognition, Speech Communication, In Press, Corrected Proof, available online 24 February 2005.
Keywords: Speech recognition; Dynamical systems; Nonlinear signal processing; Sub-bands
Publication policy: Hereunder, you will find very short announcements of
future events. The full call for participation for each event can be accessed
on its own website. See also our Web pages (http://www.isca-speech.org/) on
conferences and workshops.
FUTURE INTERSPEECH CONFERENCES
INTERSPEECH 2006 - ICSLP, the Ninth International Conference on
Spoken Language Processing dedicated to the interdisciplinary study
of speech science and language technology, will be held in
Pittsburgh, Pennsylvania, September 17-21, 2006, under the
sponsorship of the International Speech Communication Association (ISCA).
The INTERSPEECH meetings are considered the top international
conferences in speech and language technology, with more than 1000
attendees from universities, industry, and government agencies. They
are unique in that they bring together faculty and students from
universities with researchers and developers from government and
industry to discuss the latest research advances, technological
innovations, and products. The conference offers the prospect of
meeting the future leaders of our field, exchanging ideas, and
exploring opportunities for collaboration, employment, and sales
through keynote talks, tutorials, technical sessions, exhibits, and
poster sessions. In recent years the INTERSPEECH meetings have taken
place in a number of exciting venues including most recently Lisbon,
Jeju Island (Korea), Geneva, Denver, Aalborg (Denmark), and Beijing.
ISCA, together with the INTERSPEECH 2006 - ICSLP organizing
committee, would like to encourage submission of papers for the
upcoming conference in the topics of interest listed below.
TOPICS of INTEREST
Linguistics, Phonetics, and Phonology
Discourse and Dialog
Physiology and Pathology
Paralinguistic and Nonlinguistic Information (e.g. Emotional Speech)
Signal Analysis and Processing
Speech Coding and Transmission
Spoken Language Generation and Synthesis
Speech Recognition and Understanding
Spoken Dialog Systems
Single-channel and Multi-channel Speech Enhancement
Language and Dialect Identification
Speaker Characterization and Recognition
Acoustic Signal Segmentation and Classification
Spoken Language Acquisition, Development and Learning
Spoken Language Information Retrieval
Spoken Language Translation
Resources and Annotation
Assessment and Standards
Spoken Language Processing for the Challenged and Aged
Other Relevant Topics
In addition to the regular sessions, a series of special sessions has
been planned for the meeting. Potential authors are invited to
submit papers for special sessions as well as for regular sessions,
and all papers in special sessions will undergo the same review
process as papers in regular sessions. Confirmed special sessions
and their organizers include:
* The Speech Separation Challenge, Martin Cooke (Sheffield) and Te-Won Lee (UCSD)
* Speech Summarization, Jean Carletta (Edinburgh) and Julia Hirschberg (Columbia)
* Articulatory Modeling, Eric Vatikiotis-Bateson (University of British Columbia)
* Visual Intonation, Marc Swerts (Tilburg)
* Spoken Dialog Technology R&D, Roberto Pieraccini (Tell-Eureka)
* The Prosody of Turn-Taking and Dialog Acts, Nigel Ward (UTEP) and
Elizabeth Shriberg (SRI and ICSI)
* Speech and Language in Education, Patti Price (pprice.com) and Abeer Alwan (UCLA)
* From Ideas to Companies, Janet Baker (formerly of Dragon Systems)
Notification of paper status: June 9, 2006
Early registration deadline: June 23, 2006
Tutorial Day: September 17, 2006
Main Conference: September 18-21, 2006
Further information is available via the conference website or from:
Professor Richard M. Stern (General Chair)
Carnegie Mellon University
Department of Electrical and Computer Engineering
5000 Forbes Avenue
Pittsburgh, PA 15213-3890
Fax: +1 412 268-3890
INTERSPEECH 2007 - EUROSPEECH, August 27-31, 2007, Antwerp, Belgium
Chairs: Dirk van Compernolle, K.U.Leuven, and Lou Boves, Radboud University Nijmegen
INTERSPEECH 2008 - ICSLP, September 22-26, 2008, Brisbane, Australia
Chairman: Denis Burnham, MARCS, University of Western Sydney
INTERSPEECH 2009 - EUROSPEECH, Brighton, UK
Chairman: Prof. Roger Moore, University of Sheffield
FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOPS (ITRW)
ITRW on Speech Recognition and Intrinsic Variation (SRIV)
May 20, 2006, Toulouse, France
Topics of interest:
- Accented speech modeling and recognition
- Children's speech modeling
- Non-stationarity and relevant analysis methods
- Speech spectral and temporal variations
- Spontaneous speech modeling and recognition
- Speech variation due to emotions
- Speech corpora covering sources of variation
- Correlates of variations
- Impact and characterization of speech variations on ASR
- Speaker adaptation and adapted training
- Novel analysis and modeling structures
- Man/machine confrontation: ASR and HSR (human speech recognition)
- Diagnosis of speech
- Intrinsic variations in multimodal recognition
- Applications and services scenarios involving strong speech variations
Review papers on these topics are also welcome.
Submission deadline: Feb. 1, 2006
Notification of acceptance: Mar. 1, 2006
Final manuscript due: Mar. 15, 2006
Program available: Mar. 22, 2006
Registration deadline: Mar. 29, 2006
Workshop: May 20, 2006
This event is organized as a satellite workshop of the ICASSP 2006 conference.
The workshop will take place in Toulouse on 20 May 2006, just after the
conference, which ends May 19. The workshop will consist of oral and poster
sessions, as well as talks by guest speakers.
On-line registration is open on the workshop website.
ITRW on Experimental Linguistics
28-30 August 2006, Athens, Greece
The general aims of the Workshop are to bring
together researchers of linguistics and related disciplines in a unified
context as well as to discuss the development of experimental
methodologies in linguistic research with reference to linguistic theory,
linguistic models and language applications.
SUBJECTS AND RELATED TOPICS
1. Theory of language
2. Cognitive linguistics
4. Speech production
5. Speech acoustics
14. Discourse linguistics
15. Computational linguistics
16. Language technology
1 February 2006: abstract submission deadline
1 March 2006: notification of acceptance
1 April 2006: registration
1 May 2006: camera-ready paper due
28-30 August 2006: Workshop
Antonis Botinis, University of Athens, Greece
Marios Fourakis, University of Wisconsin-Madison, USA
Barbara Gawronska, University of Skövde, Sweden
Aikaterini Bakakou-Orphanou, University of Athens
Antonis Botinis, University of Athens
Christoforos Charalambakis, University of Athens
ISCA Workshop on Experimental Linguistics
Department of Linguistics
University of Athens
2nd ITRW on PERCEPTUAL QUALITY OF SYSTEMS
Berlin, Germany, 4-6 September 2006
The quality of systems which address human perception is difficult to describe.
Since quality is not an inherent property of a system, users have to decide on
what is good or bad in a specific situation. An engineering approach to quality
includes the consideration of how a system is perceived by its users, and how
the needs and expectations of the users develop. Thus, quality assessment
and prediction have to take the relevant human perception and judgement
factors into account. Although significant progress has been made in several
areas affecting quality within the last two decades, there is still no
consensus on the definition of quality and its contributing components,
as well as on assessment, evaluation and prediction methods.
Perceptual quality is attributed to all systems and services which involve
human perception. Telecommunication services directly provoke such perceptions:
Speech communication services (telephone, Voice over IP), speech technology
(synthesis, spoken dialogue systems), as well as multimodal services and
interfaces (teleconference, multimedia on demand, mobile phones, PDAs).
However, the situation is similar for the perception of other products,
like machines, domestic devices, or cars. An integrated view on system quality
makes use of knowledge gained in different disciplines and may therefore help
to find general underlying principles. This will help increase the usability
and perceived quality of systems and services, and ultimately yield better acceptance.
The workshop is intended to provide an interdisciplinary exchange of ideas between
both academic and industrial researchers working on different aspects of perceptual
quality of systems. Papers are invited which refer to methodological aspects of
quality and usability assessment and evaluation, the underlying perception and
judgment processes, as well as to particular technologies, systems or services.
Perception-based as well as instrumental approaches will complement each other
in giving a broader picture of perceptual quality. It is expected that this will
help technology providers to develop successful, high-quality systems and services.
The following non-exhaustive list gives examples of topics which are
relevant for the workshop, and for which papers are invited:
- Methodologies and Methods of Quality Assessment and Evaluation
- Metrology: Test Design and Scaling
- Quality of Speech and Music
- Quality of Multimodal Perception
- Perceptual Quality vs. Usability
- Semio-Acoustics and -Perception
- Quality and Usability of Speech Technology Devices
- Telecommunication Systems and Services
- Multi-Modal User Interfaces
- Virtual Reality
- Product-Sound Quality
April 15, 2006 (updated): Abstract submission (approx. 800 words)
May 15, 2006: Notification of acceptance
June 15, 2006: Submission of the camera-ready paper (max. 6 pages)
September 4-6, 2006: Workshop
The workshop will take place in the "Harnack-Haus", a villa-like
conference center located in the quiet western part of Berlin, near the
Free University. As long as space permits, all participants will be
accommodated in this center. Accommodation and meals are included in
the workshop fees. The center is run by the Max-Planck-Gesellschaft and
can easily be reached from all three airports of Berlin (Tegel/TXL,
Schönefeld/SXF and Tempelhof/THF). Details on the venue,
accommodation and transportation will be announced soon.
CD workshop proceedings will be available upon registration at the
conference venue and subsequently on the workshop web site.
The official language of the workshop will be English.
LOCAL WORKSHOP ORGANIZATION
Ute Jekosch (IAS, Technical University of Dresden)
Sebastian Möller (Deutsche Telekom Labs, Technical
University of Berlin)
Alexander Raake (Deutsche Telekom Labs, Technical
University of Berlin)
Sebastian Möller, Deutsche Telekom Labs, Ernst-Reuter-Platz 7,
D-10587 Berlin, Germany
phone +49 30 8353 58465, fax +49 30 8353 58409
ITRW on Statistical and Perceptual Audition (SAPA 2006)
Satellite workshop of INTERSPEECH 2006 - ICSLP
September 16, 2006, Pittsburgh, PA, USA
This will be a one-day workshop with a limited number of oral presentations,
chosen for breadth and provocation, and an informal atmosphere to promote
discussion. We hope that the participants in the workshop will be exposed
to a broader perspective, and that this will help foster new research and
interesting variants on current approaches.
In all cases, preference will be given to papers that clearly involve
both perceptually-defined or perceptually-related problems, and statistical
or machine-learning based solutions.
Paper submission deadline (4-6 pages, double column): April 21, 2006
Notification of acceptance June 9, 2006
NOLISP'07: Non-Linear Speech Processing
May 22-25, 2007 , Paris, France
6th ISCA Speech Synthesis Research Workshop
Bonn (Germany), August 22-24, 2007
A satellite of INTERSPEECH 2007 (Antwerp), in collaboration with SynSIG.
Details will be posted by early 2007.
Prof. Wolfgang Hess
ITRW on Robustness
November 2007, Santiago, Chile
FORTHCOMING EVENTS SUPPORTED (but not organized) by ISCA
5th SALTMIL Workshop on Minority Languages
Strategies for developing machine translation for minority languages
Tuesday May 23rd 2006 (morning)
Magazzini del Cotone Conference Centre, Genoa, Italy
Organised in conjunction with LREC 2006: Fifth International Conference on
Language Resources and Evaluation, Genoa, Italy, 24-26 May 2006
This workshop continues the series of LREC workshops organized by SALTMIL,
the ISCA Special Interest Group for Speech And Language
Technology for Minority Languages.
The workshop will begin with the following talks from invited speakers:
* Lori Levin (Carnegie Mellon University, USA): "Omnivorous MT: Using
whatever resources are available."
* Anna Sågvall Hein (University of Uppsala, Sweden): "Approaching new
languages in machine translation."
* Hermann Ney (Rheinisch-Westfälische Technische Hochschule, Aachen,
Germany): "Statistical Machine Translation with and without a bilingual corpus."
* Delyth Prys (University of Wales, Bangor): "The BLARK matrix and its
relation to the language resources situation for the Celtic languages."
* Daniel Yacob (Ge'ez Frontier Foundation) "Unicode Development for
* Mikel Forcada (Universitat d’Alacant, Spain): "Open source machine
translation: an opportunity for minor languages"
These talks will be followed by a poster session with contributed papers.
Papers are invited that describe research and development in the following areas:
* The BLARK (Basic Language Resource Kit) matrix at ELDA, and how it
relates to minority languages.
* The advantages and disadvantages of different corpus-based strategies for
developing MT, with reference to a) speed of development, and b) level of
researcher expertise required.
* What open-source or free language resources are available for developing MT?
* Existing resources for minority languages, with particular emphasis on
software tools that have been found useful.
All contributed papers will be presented in poster format. All contributions
will be included in the workshop proceedings (CD). They will also be
published on the SALTMIL website.
* Final version of paper: April 10, 2006
* Workshop: May 23, 2006 (morning)
* Briony Williams (University of Wales, Bangor, UK): Programme Chair
* Kepa Sarasola (University of the Basque Country)
* Bojan Petek (University of Ljubljana, Slovenia)
* Julie Berndsen (University College Dublin, Ireland)
* Atelach Alemu Argaw (University of Stockholm, Sweden)
HLT-NAACL 2006 Call for Demos
2006 Human Language Technology Conference and North American Chapter of the
Association for Computational Linguistics Annual Meeting
New York City, New York
Conference dates: June 4-9, 2006
Submission deadline: March 3, 2006
Demo proposals are invited for the HLT-NAACL 2006 Demonstrations Program. This program is
aimed at offering first-hand experience with new systems, providing
opportunities to exchange ideas gained from creating systems, and
collecting feedback from expert users. It is primarily intended to
encourage the early exhibition of research prototypes, but interesting
mature systems are also eligible. Submission of a demonstration proposal
on a particular topic does not preclude or require a separate submission
of a paper on that topic; it is possible that some but not all of the
demonstrations will illustrate concepts that are described in companion papers.
Demo Co-Chairs:
John Dowding, University of California, Santa Cruz
Natasa Milic-Frayling, Microsoft Research, Cambridge, United Kingdom
Alexander Rudnicky, Carnegie Mellon University
Areas of Interest
We encourage the submission of
proposals for demonstrations of software and hardware related to all areas
of human language technology. Areas of interest include, but are not
limited to, natural language, speech, and text systems for:
- Speech recognition and generation;
- Speech retrieval and summarization;
- Rich transcription of speech;
- Interactive dialogue;
- Information retrieval, filtering, and extraction;
- Document classification, clustering, and summarization;
- Language modeling and text mining;
- Machine translation;
- Multilingual and cross-lingual processing;
- Multimodal user interfaces;
- Tools for ontology, lexicon, or other NLP resources;
- Applications in growing domains (web search, etc.).
Please refer to the HLT-NAACL 2006 CFP for a more detailed, though not
necessarily exhaustive, list of relevant areas.
Submission of final demo and related literature: April 17, 2006
Conference: June 4-9, 2006
A demo proposal should consist of
the following parts:
- An extended abstract of up to four pages,
including the title, authors, full contact information, and technical
content to be demonstrated. It should give an overview of what the
demonstration is aimed to achieve, how the demonstration illustrates novel
ideas or late-breaking results, and how it relates to other systems or
projects described in the context of other research (i.e., references to related work).
- A detailed requirement description of hardware,
software, and network access expected to be provided by the local
organizer. Demonstrators are encouraged to be flexible in their
requirements (possibly preparing different demos for different logistical
situations). Please state what you can bring yourself and what you
absolutely must be provided with. We will do our best to provide equipment
and resources but at this point we cannot guarantee anything beyond the
space and power supply.
- A concise outline of the demo script,
including the accompanying narrative, and either a web address to access
the demo or visual aids (e.g., screen-shots, snapshots, or sketches). The
demo script should be no more than 6 pages.
The demo abstract must be
submitted electronically in the Portable Document Format (PDF). It should
follow the format guidelines for the main conference papers. Authors are
encouraged to use the style files provided on the HLT-NAACL 2006 website.
It is the responsibility of the authors to ensure that their proposals use
no unusual format features and can be printed on a standard PostScript printer.
Demo proposals should be submitted electronically to the demo co-chairs.
Demo proposals will be evaluated on the basis of their relevance to the
conference, innovation, scientific contribution, presentation, and usability,
as well as potential logistical constraints.
The accepted demo abstracts will be published in the Companion Volume to the
Proceedings of HLT-NAACL 2006.
Further details on the date,
time, and format of the demonstration session(s) will be determined and
provided at a later date. Please send any inquiries to the demo co-chairs.
Call for Tutorial Proposals
Proposals are invited for the Tutorial Program for
HLT-NAACL 2006, to be held at the New York Marriott at the Brooklyn Bridge
from June 4 to 9, 2006. The tutorial day is June 4, 2006. The HLT-NAACL
conferences combine the HLT (Human Language Technology) and NAACL (North
American chapter of the Association for Computational Linguistics)
conference series, and bring together researchers in NLP, IR, and speech.
For details, see our website
We seek half-day tutorials covering topics in Speech Processing,
Information Retrieval, and Natural Language Processing, including their
theoretical foundations, intersections, and applications. Tutorials will
normally move quickly, but they are expected to be accessible,
understandable, and of interest to a broad community of researchers,
preferably from multiple areas of Human Language Technology. Our target is
to have four to six tutorials.
Proposals for tutorials should be submitted by electronic mail, in plain
text, PDF, Microsoft Word, or HTML, by the date shown below. The subject
line should be: "HLT-NAACL'06 TUTORIAL PROPOSAL". Each proposal should include:
1. A title and brief (2-page max) description of the
tutorial topic and content. Include a brief outline of the tutorial
structure showing that the tutorial's core content can be covered in
three hours (two 1.5-hour sessions). Tutorials should be accessible to the
broadest practical audience. In keeping with the focus of the conference,
please highlight any topics spanning disciplinary boundaries that you plan
to address. (These are not strictly required, but they are a big plus.)
2. An estimate of the audience size. If approximately the same
tutorial has been given elsewhere, please list previous venues and
approximate audience sizes. (There's nothing wrong with repeat tutorials;
we'd just like to know.)
3. The names, postal addresses, phone numbers,
and email addresses of the organizers, with one-paragraph statements of
their research interests and areas of expertise.
4. A description of
special requirements for technical needs (computer infrastructure, etc).
Tutorials must be financially self-supporting. The conference organizers
will establish registration rates that will cover the room, audio-visual
equipment, internet access, snacks for breaks, and reproduction of the
tutorial notes. A description of any additional anticipated expenses must
be included in the proposal.
Accepted tutorial speakers will be asked to provide descriptions of their tutorials
suitable for inclusion in all of: email announcements, the conference
registration material, the printed program, the website, and the
proceedings. This will involve producing text and/or HTML and/or
LaTeX/Word/PDF versions of appropriate lengths.
Tutorial notes will be
printed and distributed by the Association for Computational Linguistics
(ACL). These materials, containing at least copies of the slides that will
be presented and a bibliography for the material that will be covered,
must be submitted by the date indicated below to allow adequate time for
reproduction. Presenters retain copyright for their materials, but ACL
requires that presenters execute a non-exclusive distribution license to
permit distribution to participants and sales to others.
Tutorial presenters will be compensated in accordance with current ACL policies.
Tutorial material due: May 1, 2006
Tutorial date: June 4, 2006
Jim Glass, Massachusetts Institute of Technology
Christopher Manning, Stanford University
Douglas W. Oard, University of Maryland
7th SIGdial Workshop on Discourse and Dialogue
Sydney, Australia (co-located with COLING/ACL 2006)
Contact: Dr. Jan Alexandersson
11th International Conference SPEECH AND COMPUTER (SPECOM'2006)
25-29 June 2006, St. Petersburg, Russia
Organized by the St. Petersburg Institute for Informatics and Automation of
the Russian Academy of Sciences (SPIIRAS)
Supported by the SIMILAR NoE, the INTAS association, ELSNET, and ISCA.
Topics include:
- Signal processing and feature extraction;
- Multimodal analysis and synthesis;
- Speech recognition and understanding;
- Speaker and language identification;
- Speech perception and speech disorders;
- Speech synthesis;
- Applied systems for Human-Computer Interaction.
- Early registration deadline: 15 April 2006
- Conference SPECOM: 25-29 June 2006
The conference venue and dates were selected so that attendees can
experience St. Petersburg's unique and wonderful phenomenon known as the
White Nights, for our city is the world's only metropolis where such a
phenomenon occurs.
SPECOM'2006, SPIIRAS, 39,
14th line, St-Petersburg, 199178, RUSSIA
Tel.: +7 812 3287081 Fax: +7
IEEE Odyssey 2006: The Speaker and Language Recognition Workshop
28-30 June 2006
Ritz Carlton Hotel and Spa, San Juan, Puerto Rico
The IEEE Odyssey 2006 Workshop on
Speaker and Language Recognition will be held in scenic San Juan, Puerto
Rico at the Ritz Carlton Hotel. This Odyssey is sponsored by the IEEE, is
an ISCA Tutorial and Research Workshop of the ISCA Speaker and Language
Characterization SIG, and is hosted by the Polytechnic University of
Puerto Rico. Topics of interest include speaker
recognition (verification, identification, segmentation, and clustering);
text-dependent and -independent speaker recognition; multispeaker training
and detection; speaker characterization and adaptation; features for
speaker recognition; robustness in channels; robust classification and
fusion; speaker recognition corpora and evaluation; use of extended
training data; speaker recognition with speech recognition; forensics,
multimodality, and multimedia speaker recognition; speaker and language
confidence estimation; language, dialect, and accent recognition; speaker
synthesis and transformation; biometrics; human recognition; and related
topics. Authors are invited to submit papers written in English via the
Odyssey website. The style guide, templates, and submission form can be
downloaded from the Odyssey website. Two members of the Scientific Committee
will review each paper. At least one author of each paper is required to
register. The workshop proceedings will be published on CD-ROM.
Preliminary program: 21 April 2006
Workshop 28-30 June 2006
Registration will be handled via the Odyssey website
NIST SRE
The NIST Speaker Recognition Evaluation 2006 Workshop
will be held immediately before Odyssey '06 at the same location on 25-27
June. Everyone is invited to evaluate their systems via the NIST SRE. The
NIST Workshop is for participants only and by prearrangement. Please
contact Dr. Alvin Martin to participate, and see the NIST website for details.
Kay Berkling, Co-Chair Polytechnic
University of Puerto Rico
Pedro A. Torres-Carrasquillo, Co-Chair MIT
Lincoln Laboratory, USA
IV Jornadas en Tecnologia del Habla
November 8-10, 2006
Call for papers-International Workshop on Spoken Language
Translation (IWSLT 2006)
Evaluation campaign for language translation
Palulu Plaza Kyoto (right in front of Kyoto Station)
November 30-December 1 2006
Spoken language translation technologies attempt to cross the language
barriers between people with different native languages who want to converse
in their mother tongues. Spoken language translation has to deal with the
problems of both automatic speech recognition (ASR) and machine translation
(MT).
One of the prominent research activities in spoken language translation
is the work being conducted by the Consortium for Speech Translation
Advanced Research (C-STAR III), which is an international partnership of
research laboratories engaged in automatic translation of spoken language.
Current members include ATR (Japan), CAS (China), CLIPS (France), CMU (USA),
ETRI (Korea), ITC-irst (Italy), and UKA (Germany).
A multilingual speech corpus comprised of tourism-related sentences (BTEC*)
has been created by the C-STAR members, and parts of this corpus were already
used for previous IWSLT workshops focusing on the evaluation of MT results
using text input and on the translation of ASR output (word lattice, NBEST
list) using read speech as input. The full BTEC* corpus consists of 160K
sentences of aligned text data; parts of the corpus will be provided to the
participants for training purposes.
In this workshop, we focus on the translation of spontaneous speech which
includes ill-formed utterances due to grammatical incorrectness, incomplete
sentences, and redundant expressions. The impact of these spontaneity aspects
on ASR and MT system performance, as well as the robustness of
state-of-the-art MT engines against speech recognition errors, will be
investigated.
Two types of submissions are invited:
1) participants in the evaluation campaign of spoken language translation
2) technical papers on related issues.
Evaluation campaign (see details on our website)
Each participant in the evaluation campaign is requested to submit a paper
describing the utilized ASR and MT systems and to report results using
the provided test data.
Technical Paper Session
The workshop also invites technical papers related to spoken language
translation. Possible topics include, but are not limited to:
+ Spontaneous speech translation
+ Domain and language portability
+ MT using comparable and non-parallel corpora
+ Phrase alignment algorithms
+ MT decoding algorithms
+ MT evaluation measures
+ Evaluation Campaign
May 12, 2006 -- Training Corpus Release
August 1, 2006 -- Test Corpus Release [00:01 JST]
August 3, 2006 -- Result Submission Due [23:59 JST]
September 15, 2006 -- Result Feedback to Participants
September 29, 2006 -- Paper Submission Due
October 14, 2006 -- Notification of Acceptance
October 27, 2006 -- Camera-ready Submission Due
- system registrations will be accepted until release of the test corpus
- late result submissions will be treated as unofficial
+ Technical Papers
July 21, 2006 -- Paper Submission Due [23:59 JST]
September 29, 2006 -- Notification of Acceptance
October 27, 2006 -- Camera-ready Submission Due
ATR Spoken Language Communication Research Laboratories
2-2-2 Hikaridai, Keihanna Science City, Kyoto 619-0288 Japan
Call for papers International Symposium on Chinese Spoken
Language Processing (ISCSLP'2006)
Singapore Dec. 13-16, 2006
ISCSLP'06 will feature world-renowned plenary speakers, tutorials, exhibits,
and a number of lecture and poster sessions on the following topics:
* Speech Production and Perception
* Phonetics and Phonology
* Speech Analysis
* Speech Coding
* Speech Enhancement
* Speech Recognition
* Speech Synthesis
* Language Modeling and Spoken Language Understanding
* Spoken Dialog Systems
* Spoken Language Translation
* Speaker and Language Recognition
* Indexing, Retrieval and Authoring of Speech Signals
* Multi-Modal Interface including Spoken Language Processing
* Spoken Language Resources and Technology Evaluation
* Applications of Spoken Language Processing Technology
The official language of ISCSLP is English. The regular papers will be
published as a volume in the Springer LNAI series, and the poster papers
will be published in a companion volume. Authors are invited to submit
original, unpublished work on all aspects of Chinese spoken language
processing.
The conference will also organize four special sessions:
* Special Session on Rich Information Annotation and Spoken Language
* Special Session on Robust Techniques for Organizing and Retrieving
* Special Session on Speaker Recognition
* Special Panel Session on Multilingual Corpus Development
* Full paper submission by Jun. 15, 2006
* Notification of acceptance by Jul. 25, 2006
* Camera ready papers by Aug. 15, 2006
* Early registration by Nov. 1, 2006
Please visit the conference website for more information.
ISCSLP 2006-Special session on speaker recognition
Singapore, Dec 13-16, 2006
Dr Thomas Fang Zheng, Tsinghua Univ., Beijing.
Speaker recognition (or voiceprint recognition, VPR) is one of the most
important branches of speech processing. Its applications are becoming ever
wider in fields such as public security, anti-terrorism, justice, telephone
banking, and personal services. However, many fundamental and theoretical
problems remain to be solved, such as background noise, cross-channel
effects, multiple speakers, and short speech segments for training and
testing.
The purpose of this special session is to invite researchers in this field
to present their state-of-the-art technical achievements. Papers are invited to
cover, but not limited to, the following topics:
* Text-dependent and text-independent speaker identification
* Text-dependent and text-independent speaker verification
* Speaker detection
* Speaker segmentation
* Speaker tracking
* Speaker recognition systems and application
* Resource creation for speaker recognition
This special session also provides a platform for developers in this field
to evaluate their speaker recognition systems using the same database
provided by this special session. Evaluation of speaker recognition systems
will cover the following tasks:
* Text-independent speaker identification
* Text-dependent and text-independent speaker verification
* Text-independent cross-channel speaker identification
* Text-dependent and text-independent cross-channel speaker verification
Final details on these tasks (including evaluation criteria) will be made
available in due course. The development and testing data will be provided
by the Chinese Corpus Consortium (CCC). The data sets will be extracted from
two CCC databases, which are CCC-VPR3C2005 and CCC-VPR2C2005-1000.
Participants are required to submit a full paper to the conference
describing their algorithms, systems and results.
Schedule for this special session
* Feb. 01, 2006: On-line registration opens (CLOSED May 1, 2006)
* May. 01, 2006: Development data made available to participants
* May. 21, 2006 (revised): Test data made available to participants
* Jun. 7, 2006 (revised): Test results due at CCC
* Jun. 10, 2006: Results released to participants
* Jun. 15, 2006: Papers due (using ISCSLP standard format)
* Jul. 25, 2006: The full set of the two databases made available to
the participants of this special session upon request
* Dec. 16, 2006: Conference presentation
This special session is organized by the CCC
Please address your enquiries to Dr. Thomas Fang Zheng.
Speaker Recognition Evaluation Registration Form
FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS
LREC 2006 - 5th Conference on Language Resources and Evaluation
Magazzini del Cotone Conference Center, GENOA, ITALY
24-25-26 MAY 2006
WORKSHOPS and TUTORIALS: 22-23 and 27-28 MAY
Conference web site
The fifth international conference on Language Resources and Evaluation,
LREC 2006, is organised by ELRA in cooperation with a wide range of
international associations and organisations. It covers:
- Issues in the design, construction and use of Language Resources (LRs)
- Issues in Human Language Technologies (HLT)
LREC targets the integration of
different types of LRs (spoken, written, and other modalities), and of the
respective communities. To this end, LREC encourages submissions covering
issues which are common to different types of LRs and language
technologies, such as dialogue strategy, written and spoken translation,
domain-specific data, multimodal communication or multimedia document
processing, and will organise, in addition to the usual tracks, common
sessions encompassing the different areas of LRs.
The 2006 Conference emphasises in particular the importance of promoting:
- synergies and integration between (multilingual) LRs and Semantic Web
technologies,
- new paradigms for sharing and integrating LRs and LT coming from different
communities,
- communication with neighbouring fields for applications in e-government
and administration,
- common evaluation campaigns for the objective evaluation of the
performances of different systems,
- systems and products (also industrial ones) based on large-size and
high-quality LRs.
LREC therefore encourages submissions of papers, panels, workshops and
tutorials on the use of LRs in these areas.
Submitted abstracts of papers for oral, poster or demo presentations should
consist of about 1000 words. A number of panels, workshops and tutorials is
foreseen; proposals will be reviewed by the Programme Committee. For panels,
please send a brief description, including an outline of the intended
structure (topic, organiser, panel moderator, tentative list of panelists).
For workshops and tutorials, see the dedicated section below. All
submissions will be considered. Further details about submission will be
circulated in the 2nd Call for Papers, to be issued at the end of July and
posted on the LREC web site (www.lrec-conf.org).
* Conference: 24-26 May 2006
* Pre-conference workshops and
tutorials: 22 and 23 May 2006
* Post-conference workshops and
tutorials: 27 and 28 May 2006
Pre-conference workshops and tutorials will be organised
on 22 and 23 May 2006, and post-conference workshops and tutorials on 27
and 28 May 2006. A workshop/tutorial can be either half day or full day.
Proposals for workshops and tutorials should be no longer than three
pages, and include:
* A brief technical description of the specific technical issues that the
workshop/tutorial will address.
* The reasons why the workshop/tutorial is of interest this time.
* The names, postal addresses, phone and fax numbers and email addresses of
the workshop/tutorial organising committee, which should consist of at least
three people knowledgeable in the field, coming from different institutions.
* The name of the member of the workshop/tutorial organising committee
designated as the contact person.
* A time schedule of the workshop/tutorial and a preliminary programme.
* A summary of the intended workshop/tutorial call for participation.
* A list of audio-visual or technical requirements and any special room
requirements.
CONSORTIA AND PROJECT MEETINGS
Consortia and projects wishing to take this opportunity to organise meetings
should contact the ELDA office.
TC-STAR Second Evaluation Campaign 2006
TC-STAR is a European integrated project focusing on Speech-to-Speech Translation
(SST). To encourage significant advances in all SST technologies, annual
competitive evaluations are organized. Automatic Speech Recognition (ASR),
Spoken Language Translation (SLT) and Text-To-Speech (TTS) are evaluated
independently and within an end-to-end system. The project targets a
selection of unconstrained conversational speech domains (speeches and
broadcast news) and three languages: European English, European Spanish,
and Mandarin Chinese. The first evaluation took place in March 2005 for
ASR and SLT and September 2005 for TTS. TC-STAR welcomes outside
participants in its 2nd evaluation of January-February 2006. This
participation is free of charge. The TC-STAR 2006 evaluation campaign will
include:
· SLT in the following directions:
o Spanish-to-English (European Parliament plenary speeches)
o English-to-Spanish (European Parliament plenary speeches)
· ASR in the following languages :
o English (European Parliament plenary speeches)
o Spanish (European Parliament plenary speeches)
o Mandarin Chinese (Broadcast News)
· TTS in Chinese, English, and
Spanish under the following conditions:
o Complete system:
participants use their own training data
o Voice conversion
intralingual and crosslingual, expressive speech: data provided by TC-STAR
o Component evaluation
For ASR and SLT, training data will be made
available by the TC-STAR project for English and Spanish and can be
purchased at LDC for Chinese. Development data will be provided by the
TC-STAR project. Legal issues regarding the data will be detailed in the
2nd Call For Participation.
All participants will be given the
opportunity to present and discuss their results in the TC-STAR evaluation
workshop in Barcelona in June 2006.
Submission of papers: May 2006
Workshop: June 2006
Contact: Djamel Mostefa
tel. +33 1 43 13 33 33
JOINT INFERENCE FOR NATURAL LANGUAGE PROCESSING
Workshop at HLT/NAACL 2006, in New York City
June 8, 2006
* Notification of accepted papers: Thursday, April 21
* Camera ready papers due: Wednesday, May 3
*LATE-BREAKING PAPER DEADLINE (will not appear in proceedings): Friday May 5
* Workshop: June 8, 2006
Charles Sutton, University of Massachusetts
Andrew McCallum, University of Massachusetts
Jeff Bilmes, University of Washington
XXVIèmes Journées d'Étude sur la Parole
The main topics selected for the conference are:
1 Speech production
2 Speech acoustics
3 Speech perception
Phonetics and phonology
6 Speech recognition and understanding
7 Language and speaker recognition
8 Language models
9 Speech synthesis
Speech coding and compression
11 Applications with spoken-language components
12 Evaluation, corpora and resources
14 Speech and language acquisition
Second-language learning
16 Speech pathologies
KEY DATES
Final paper submission: 1 May 2006
Conference: 12-16 June 2006
For scientific questions, contact Pascal Perrier, President of the AFCP. For
practical information: firstname.lastname@example.org.
PERCEPTION AND INTERACTIVE TECHNOLOGIES (PIT 2006)
The workshop will be held at Kloster Irsee in southern Germany from June 19
to June 21, 2006. Submissions will be short/demo or full papers of 4-10
pages.
April 18, 2006: Deadline for advance registration
June 7, 2006: Final programme
available on the web
The proceedings are expected to be published in Springer's LNCS/LNAI series.
Elisabeth André, Laila Dybkjaer, Wolfgang Minker, Heiko
Neumann, Michael Weber, Marcus Hennecke, Gregory Baratoff
9th Western Pacific Acoustics Conference(WESPAC IX 2006)
June 26-28, 2006
Program Highlights of
WESPAC IX 2006
(by Session Topics)
* Human-Related Topics
* Product-Oriented Topics
* Speech Communication
* Analysis: Through Software and Hardware
* Underwater Acoustics
* Physics: Fundamentals and Applications
* Other Hot Topics in Acoustics
WESPAC IX 2006 Secretariat
University, Acoustics Research Laboratory
300 Chunchun-dong, Jangan-ku,
Suwon 440-746, Republic of Korea
Tel: +82-31-290-5957 Fax:
Wednesday 5 July 2006, 9:00 to 18:30.
Hotyat Auditorium (1st floor), Université de Mons-Hainaut, 17, Place
Warocqué, 7000 Mons.
Pierre Badin (Institut de la Communication Parlée, Grenoble, France)
Abigail Cohn (Cornell University, New York, USA)
Didier Demolin (Universidade de Sao Paulo, Brazil & Université Libre de Bruxelles)
Notification of acceptance: Wednesday 26 April 2006
Colloquium date: Wednesday 5 July 2006
*A book containing the abstracts of the presentations will be distributed to
everyone registered for the colloquium.
*Participants are invited to submit a written version of their presentation
for possible publication in the special issue of the journal Parole devoted
to the colloquium.
Paper submission deadline: Wednesday 9 August 2006.
Register simply by email to: email@example.com.
Contact: Véronique Delvaux
Laboratoire de Phonétique
Université de Mons-Hainaut
18, place du Parc,
AAAI Workshop on Statistical and Empirical
Approaches for Spoken Dialogue Systems
16 or 17 July 2006
This workshop seeks to attract new work on statistical and empirical
approaches for spoken dialogue systems. We welcome both theoretical and
applied work, addressing issues such as:
* Representations and data structures suitable for automated learning of
dialogue
* Machine learning techniques for automatic generation and improvement of
dialogue managers
* Machine learning techniques for ontology construction and integration
* Techniques to accurately simulate human-computer dialogue
* Creation, use, and evaluation of
* Methods for automatic evaluation of dialogue systems
* Integration of spoken dialogue systems into larger intelligent agents,
such as robots
* Investigations into appropriate optimization criteria for spoken dialogue
systems
* Applications and real-world examples of spoken dialogue systems
incorporating statistical or empirical techniques
* Use of statistical or empirical techniques within multi-modal dialogue
systems
* Application of statistical or empirical techniques to multi-lingual
spoken dialogue systems
* Rapid development of spoken dialogue systems from database content and
corpora
* Adaptation of dialogue systems to new domains and languages
* The use and application of techniques and methods from related areas,
such as cognitive science, operations research, emergence models, etc.
* Any other aspect of the application of statistical or empirical
techniques to spoken dialogue systems.
This will be a one-day workshop immediately before the main AAAI conference
and will consist mainly of presentations of new work by participants. It
will also feature a keynote talk from Satinder Singh (University of
Michigan), who will speak about using Reinforcement Learning in spoken
dialogue systems. Interaction will be encouraged and sufficient time will be
left for discussion of the work presented. To facilitate a collaborative
environment, the workshop size will be limited to authors, presenters, and a
small number of other participants. Papers from the workshop will be
published as an AAAI technical report.
SUBMISSION AND REVIEW PROCESS
Prospective authors are invited to submit full-length, 6-page, camera-ready
papers via email. Authors are requested to use the AAAI paper template and
follow the AAAI formatting guidelines. Authors are asked to email papers to
Jason Williams. Papers will be reviewed electronically by three reviewers.
Comments will be provided and time will be given for incorporation of
comments into the final versions. For accepted papers, at least one author
from each paper is expected to register and attend. If no authors of an
accepted paper register for the workshop, the paper may be removed from the
workshop proceedings. Finally, authors of accepted papers will be expected
to sign a standard AAAI-06 "Permission to distribute" form.
* Monday 24 April 2006: Acceptance notification
* Friday 5 May 2006: AAAI-06 and workshop registration
* Friday 12 May 2006: Final camera-ready papers and "AAAI Permission to
distribute" forms due
* Friday 19 May 2006: AAAI-06 early registration deadline
* Friday 16 June 2006: AAAI-06 late registration deadline
* Sunday 16 or Monday 17 July 2006: Workshop
* Tuesday-Thursday 18-20 July 2006: Main AAAI-06 conference
Pascal Poupart, University of
Massachusetts Institute of Technology
Jason D. Williams, University of Cambridge
For additional information please contact: Jason D. Williams
Phone: +44 7786
Fax: +44 1223 332662
2006 IEEE International Workshop on Machine
Learning for Signal Processing
(Formerly the IEEE Workshop on
Neural Networks for Signal Processing)
September 6 - 8, 2006, Maynooth,
The sixteenth in a
series of IEEE workshops on Machine Learning for Signal Processing (MLSP)
will be held in Maynooth, Ireland, September 6-8, 2006. Maynooth is
located 15 miles west of Dublin in Co. Kildare, Ireland's equestrian and
golfing heartland (and home to the 2006 Ryder Cup). It is a pleasant 18th
century planned town, best known for its seminary, St. Patrick's College,
where Catholic priests have been trained since 1795.
The workshop, formerly known as Neural Networks for Signal Processing
(NNSP), is sponsored by the IEEE Signal Processing Society (SPS) and
organized by the MLSP technical committee of the IEEE SPS. The name of the
NNSP technical committee, and hence the workshop, was changed to Machine
Learning for Signal Processing in September 2003 to better reflect the areas
represented by the technical committee. The workshop
will feature keynote addresses, technical presentations, special sessions
and tutorials, all of which will be included in the registration. Papers
are solicited for, but not limited to, the following areas:
Theory and Modeling; Bayesian Learning and Modeling; Sequential Learning;
Sequential Decision Methods; Information-theoretic Learning; Neural
Network Learning; Graphical and Kernel Models; Bounds on performance;
Blind Signal Separation and Independent Component Analysis; Signal
Detection; Pattern Recognition and Classification, Bioinformatics
Applications; Biomedical Applications and Neural Engineering; Intelligent
Multimedia and Web Processing; Communications Applications; Speech and
Audio Processing Applications; Image and Video Processing Applications.
A data analysis and signal processing competition is being organized
in conjunction with the workshop. This competition is envisioned to become
an annual event where problems relevant to the mission and interests of
the MLSP community will be presented with the goal of advancing the
current state-of-the-art in both theoretical and practical aspects. The
problems are selected to reflect the current trends to evaluate existing
approaches on common benchmarks as well as areas where crucial
developments are thought to be necessary. Details of the competition can
be found on the workshop website.
Selected papers from MLSP 2006 will
be considered for a special issue of Neurocomputing to appear in 2007. The
winners of the data analysis and signal processing competition will also
be invited to contribute to the special issue.
Prospective authors are invited to submit a double column
paper of up to six pages using the electronic submission procedure
described at the workshop homepage. Accepted papers will be published in a
bound volume by the IEEE after the workshop and a CDROM volume will be
distributed at the workshop.
MCLOONE, NUI Maynooth,
Technical Chair: Tülay ADALI, University of Maryland, Baltimore County
Workshop on Multimedia Content
Representation, Classification and Security (MRCS)
September 11 -
The International Association for Pattern Recognition
The European Association for Signal-Image Processing
Bilge Gunsel, Istanbul Technical University
Anil K. Jain, Michigan State University,
TECHNICAL PROGRAM CHAIR
Murat Tekalp, Koc University
SPECIAL SESSIONS CHAIR
Kivanc Mihcak, Microsoft
Prospective authors are invited to submit extended
summaries of not more than six (6) pages including results, figures and
references. Submitted papers will be reviewed by at least two members of
the program committee. Conference Proceedings will be available on site.
Please check the website for
Notification of Acceptance: June 10, 2006
Camera-ready Paper Submission Due: July 10, 2006
The areas of interest include but are not limited to:
- Feature extraction, multimedia content representation and classification
- Multimedia signal processing
- Authentication, content protection and digital rights management
- Information hiding
- Audio/Video/Image hashing and clustering
- Evolutionary algorithms in content-based multimedia data representation,
indexing and retrieval
- Transform domain techniques
- Multimedia mining
- Benchmarking and comparative studies
- Multimedia applications (broadcasting, medical, biometrics, content-aware
networks, CBIR)
Ninth International Conference on TEXT, SPEECH and DIALOGUE
Brno, Czech Republic, 11-15 September 2006
The conference is
organized by the Faculty of Informatics, Masaryk University, Brno, and the
Faculty of Applied Sciences, University of West Bohemia, Pilsen. The
conference is supported by the International Speech Communication
Association (ISCA). The TSD series has evolved as a prime forum
for interaction between researchers in both spoken and written language
processing from the former East Block countries and their Western
colleagues. Proceedings of TSD form a book published by Springer-Verlag in
their Lecture Notes in Artificial Intelligence (LNAI) series.
Topics of the conference will include (but are
not limited to):
text corpora and tagging
transcription problems in spoken corpora
links between text and speech
parsing issues, especially parsing problems in spoken texts
multi-lingual issues, especially multi-lingual dialogue systems
information retrieval and information extraction
machine translation
semantic networks and semantic web
speech modeling
search in speech for IR and KM
prosody in dialogues
emotions and personality
knowledge representation in relation to dialogue systems
assistive technologies based on speech and dialogue
applied systems and software
facial animation
visual speech synthesis
Papers on processing of languages other than English are strongly
Frederick Jelinek, USA (general chair)
Hynek Hermansky, USA (executive chair)
Eduard Hovy, USA
Louise Guthrie, GB
FORMAT OF THE CONFERENCE
The program will include presentations of invited papers, oral
presentations, and poster/demonstration sessions. Papers will be presented
in plenary or topic-oriented sessions. Social events, including a trip in
the vicinity of Brno, will allow for additional informal interaction.
The conference program will
include oral presentations and poster/demonstration sessions with
sufficient time for discussions of the issues raised. The conference will
welcome three keynote speakers - Eduard Hovy, Louise Guthrie and James
Pustejovsky, and it will offer two special panels devoted to Emotions and
Search in Speech.
May 15 2006 .............. Notification of acceptance
2006 .............. Final papers (camera ready) and registration
23 2006 ............. Submission of demonstration abstracts
2006 ............. Notification of acceptance for demonstrations sent to
authors
September 11-15 2006 ..... Conference date
All contributions to the conference will be published in proceedings that
will be made available to participants at the time of the conference. The
official language of the conference will be English.
All correspondence regarding the conference
should be addressed to
Dana Hlavackova, TSD 2006
Informatics, Masaryk University
Botanicka 68a, 602 00 Brno, Czech
phone: +420-5-49 49 33 29
fax: +420-5-49 49 18 20
Brno is the second largest city in the Czech Republic with a population of
almost 400,000 and is the country's judiciary and trade-fair center. Brno
is the capital of Moravia, which is in the south-east part of the Czech
Republic. It had been a Royal City since 1347 and with its six
universities it forms a cultural center of the region.
Brno can be
reached easily by direct flights from London and Munich and by trains or
buses from Prague (200 km) or Vienna (130 km).
IEEE Signal Processing Society 2006 International Workshop
on Multimedia Signal Processing (MMSP06),
October 3-6, 2006,
Fairmount Empress Hotel, Victoria, BC, Canada
- A Student Paper Contest with awards sponsored by Microsoft Research. To
enter the contest, a paper submission must have a student as the first author.
- Overview sessions that consist of papers presenting the state-of-the-art
in methods and applications for selected topics of interest in multimedia
- Wrap-up presentations that summarize the main contributions of the papers
accepted at the workshop, hot topics and current trends in multimedia
- New content requirements for the submitted papers
- New review guidelines for the submitted papers
Papers are solicited for, but not limited to, the general areas:
- Multimedia Processing (modalities: audio, speech, visual, graphics,
other; processing: pre- and post- processing of multimodal data, joint
audio/visual and multimodal processing, joint source/channel coding, 2-D
and 3-D graphics/geometry coding and animation, multimedia streaming)
- Multimedia Databases (content analysis, representation, indexing,
recognition, and retrieval)
- Multimedia Security (data hiding, authentication, and access control)
- Multimedia Networking (priority-based QoS control and scheduling, traffic
engineering, soft IP multicast support, home networking technologies,
- Multimedia Systems Design, Implementation and Applications (design:
distributed multimedia systems, real-time and non real-time systems;
implementation: multimedia hardware and software; applications:
entertainment and games, IP video/web conferencing, wireless web, wireless
video phone, distance learning over the Internet, telemedicine over the
Internet, distributed virtual reality)
- Human-Machine Interfaces and Interaction using multiple modalities
- Human Perception (including integration of art and technology)
- Notification of acceptance by: June 8, 2006
- Camera-ready paper submission by: July 8, 2006
(Instructions for Authors)
Check the workshop website
Call for papers 8th International Conference on Signal Processing
Nov. 16-20, 2006, Guilin, China
The 8th International Conference on Signal Processing will be held in Guilin,
China on Nov. 16-20, 2006. It will include sessions on all aspects of theory,
design and applications of signal processing. Prospective authors are invited to
propose papers in any of the following areas, but not limited to:
A. Digital Signal Processing (DSP)
B. Spectrum Estimation & Modeling
C. TF Spectrum Analysis & Wavelet
D. Higher Order Spectral Analysis
E. Adaptive Filtering & SP
F. Array Signal Processing
G. Hardware Implementation for Signal Processing
H. Speech and Audio Coding
I. Speech Synthesis & Recognition
J. Image Processing & Understanding
K. PDE for Image Processing
L. Video Compression & Streaming
M. Computer Vision & VR
N. Multimedia & Human-computer Interaction
O. Statistical Learning & Pattern Recognition
P. AI & Neural Networks
Q. Communication Signal processing
R. SP for Internet and Wireless Communications
S. Biometrics & Authentication
T. SP for Bio-medical & Cognitive Science
U. SP for Bio-informatics
V. Signal Processing for Security
W. Radar Signal Processing
X. Sonar Signal Processing and Localization
Y. SP for Sensor Networks
Z. Application & Others
CFP CI 2006 Special Session on
Natural Language Processing for Real Life Applications
November 20-22, 2006 San Francisco, California, USA
The Special Session on Natural Language Processing for Real Life Applications
will cover the following topics (but is not limited to):
1. speech recognition, in particular
* multilingual speech recognition
* large vocabulary continuous speech recognition with focus on the
2. real life dialog systems
* natural language dialog systems
* multimodal dialog systems
3. speech-based classification
* speaker classification, i.e. exploiting paralinguistic features of
speech to infer information about the speaker (for example age, gender,
cognitive load, and emotion)
* language and accent identification
Please submit papers for the special session directly to the session chair
(firstname.lastname@example.org). DO NOT submit the papers through the IASTED
website. Otherwise, the papers will be handled as general papers for the
conference. Each submission will be reviewed by at least two independent
reviewers. The final selection of papers for the session will be made by the
session chair; acceptance/rejection letters, review comments, and
registration information will be provided by IASTED by the general
notification date.
Please follow the formatting instructions provided by IASTED.
Submissions due June 15, 2006
Notification of acceptance August 1, 2006
Camera-ready manuscripts due September 1, 2006
Registration Deadline September 15, 2006
Conference November 20 - 22, 2006
At least one author of each paper accepted for the special session is
required to register before the general conference registration deadline.
IEEE/ACL 2006 Workshop on Spoken Language Technology
Palm Beach, Aruba
December 10 -- December 13, 2006
Spoken language understanding; Spoken document summarization; Machine
translation for speech; Spoken dialog systems; Spoken language
generation; Spoken document retrieval; Human/Computer Interactions
(HCI); Speech data mining; Information extraction from speech;
Question/Answering from speech; Multimodal processing; Spoken language
systems, applications and standards.
Submissions for the Technical Program
The workshop program will consist of tutorials, oral and poster
presentations, and panel discussions. Attendance will be limited, with
priority given to those presenting technical papers; at least one author of
each paper must register. Submissions are
encouraged on any of the topics listed above. The style guide,
templates, and submission form will follow the IEEE ICASSP
style. Three members of the Scientific Committee will review each
paper. The workshop proceedings will be published on a CD-ROM.
Camera-ready paper submission deadline July 15, 2006
Hotel Reservation and Workshop Registration opens July 30, 2006
Paper Acceptance / Rejection September 1, 2006
Hotel Reservation and Workshop Registration closes October 15, 2006
Workshop December 10-13, 2006
Registration and Information
Registration and paper submission, as well as other workshop
information, can be found on the SLT website.
General Chair: Mazin Gilbert, AT&T, USA
Co-Chair: Hermann Ney, RWTH Aachen, Germany
Finance Chair: Gokhan Tur, SRI, USA
Publication Chair: Brian Roark, OGI/OHSU, USA
Publicity Chair: Eric Fosler-Lussier, Ohio State U., USA
Industrial Chair: Roberto Pieraccini, Tell-Eureka, USA
16th International Congress of Phonetic Sciences
Saarland University, Saarbrücken, Germany,
6-10 August 2007.
The first call for papers will be made in April 2006. The deadline for
*full-paper submission* to ICPhS 2007 Germany will be February 2007.
Further information is available on the conference website.