Hurricane Wilma caused extensive damage in the Yucatán and Florida; many people were injured and many buildings were destroyed. Among the latter, the conference hotel for the forthcoming ASRU 2005 is unable to host the event. The organizers have succeeded in finding another venue on the same dates, in San Juan, Puerto Rico.
I recommend that all of you who planned to attend this workshop have a careful look at its website.
Do not forget to send the information you want displayed to members in time to be included in ISCApad
(last week of each month).
TABLE OF CONTENTS
- ISCA News
- Courses, internships
- Books, databases, software
- Job openings
- Future Interspeech Conferences
- Future ISCA Tutorial and Research Workshops (ITRW)
- Forthcoming Events supported (but not organized) by ISCA
- Future Speech Science and Technology events
-ORGANIZATION of INTERSPEECH 2009 -- EUROSPEECH
Call for proposals
Individuals or organisations interested in organizing INTERSPEECH 2009
-- EUROSPEECH should submit a brief preliminary proposal by 15 December 2005, indicating:
* The name and position of the proposed general chair and other key organizers
* The proposed period in September/October 2009 when the conference would be held
* The institution assuming financial responsibility for the conference, and any other
support from local bodies (e.g. governmental)
* The city and conference center proposed (with information on that center's capacity,
and on transportation and housing for conference participants)
* The commercial conference organizer (if any)
* A preliminary budget
Guidelines for the preparation of the proposal are available; further
information can be provided by Isabel. Those who plan to put in a bid are asked to
inform ISCA of their intentions as soon as possible.
Proposals should be submitted by email to the above address.
Candidates fulfilling basic requirements will be asked to submit a
detailed proposal by 28 February 2006.
Vice President, ISCA
9 rua Alves Redol
1000-029 Lisbon, Portugal
Grants are available for students and young scientists
attending meetings. Even if no information
on grants is advertised in the conference announcement, they may apply.
For more information:
-CURRENT RESEARCH IN PHONETICS AND PHONOLOGY: A STATE OF THE ART
Seminar organized by Bernard Laks and Noël Nguyen under the aegis
of the Association pour le traitement automatique des langues (ATALA)
21 January 2006, 9:30-17:00
ENST, 46 rue Barrault, 75013 Paris
1 Quantal phonetics and phonological primitives,
Nick Clements (LPP, CNRS & Univ. Paris III)
2 Corpus phonetics and phonology,
Jacques Durand (ERSS, Univ. Toulouse Le Mirail / CNRS)
3 Exemplar-based models of speech processing,
Noël Nguyen (LPL, CNRS & Univ. Provence)
4 Phonetics, phonology and neurolinguistics,
John Coleman (Univ. Oxford)
5 Intonation and speech perception,
Jacqueline Vaissière (LPP, Univ. Paris III / CNRS)
6 Characterizing accents through automatic speech processing,
Martine Adda & Philippe Boula de Mareüil (LIMSI, CNRS)
7 Declarative phonology,
Jean-Pierre Angoujard (Univ. Nantes)
8 Connectionist phonology,
Bernard Laks & Atanas Tchobanov (MoDyCo, Univ. Paris X / CNRS)
The seminar is free and open to anyone interested. No prior registration
is necessary.
-1st INTERNATIONAL PhD SCHOOL IN LANGUAGE AND SPEECH TECHNOLOGIES
Rovira i Virgili University
Research Group on Mathematical Linguistics
Website of the Group
Foundational courses (April-June 2006)
Foundations of Linguistics I: Morphology, Lexicon and Syntax -- M. Dolores Jiménez-López, Tarragona
Foundations of Linguistics II: Semantics, Pragmatics and Discourse -- Gemma Bel-Enguix, Tarragona
Formal Languages -- Carlos Martín-Vide, Tarragona
Declarative Programming Languages: Prolog, Lisp -- various researchers at the host institute
Procedural Programming Languages: C, Java, Perl, Matlab -- various researchers at the host institute
Main courses (July-December 2006)
POS Tagging, Chunking, and Shallow Parsing -- Yuji Matsumoto, Nara
Empirical Approaches to Word Sense Disambiguation, Semantic Role Labeling, Semantic Parsing, and Information Extraction -- Raymond Mooney, Austin TX
Ontology Engineering: From Cognitive Science to the Semantic Web -- M. Teresa Pazienza, Roma
Anaphora Resolution in Natural Language Processing -- Ruslan Mitkov, Wolverhampton
Language Processing for Human-Machine Dialogue Modelling -- Yorick Wilks, Sheffield
Spoken Dialogue Systems -- Diane Litman, Pittsburgh PA
Natural Language Processing Pragmatics: Probabilistic Methods and User Modeling Implications -- Ingrid Zukerman, Clayton
Machine Learning Approaches to Developing Language Processing Modules -- Walter Daelemans, Antwerpen
Multimodal Speech-Based Interfaces -- Elisabeth André, Augsburg
Information Extraction -- Guy Lapalme, Montréal QC
Search Methods in Natural Language Processing -- Helmut Horacek, Saarbrücken
Optional courses (from the 5th International PhD School in Formal Languages and Applications)
Tree Adjoining Grammars -- James Rogers, Richmond IN
Unification Grammars -- Shuly Wintner, Haifa
Context-Free Grammar Parsing -- Giorgio Satta, Padua
Probabilistic Parsing -- Mark-Jan Nederhof, Groningen
Categorial Grammars -- Michael Moortgat, Utrecht
Weighted Finite-State Transducers -- Mehryar Mohri, New York NY
Finite State Technology for Linguistic Applications -- André Kempe, Xerox, Grenoble
Natural Language Processing with Symbolic Neural Networks -- Risto Miikkulainen, Austin TX
Candidate students for the programme are welcome from around the world.
The most appropriate degrees include Computer Science and Linguistics, but other
students (for instance, from Psychology, Logic, Engineering or Mathematics) may be
accepted depending on the strengths of their undergraduate training. The first two
months of classes are intended to homogenize the students' varied backgrounds.
To check eligibility for the programme, students must be certain that their
highest university degree qualifies them for enrolment in a doctoral
programme in their home country.
Tuition fees: approximately 1,700 euros in total.
After following the courses, the students enrolled in the programme
will have to write and defend a research project and, later, a dissertation in
English in their own area of interest, in order to get the so-called European
PhD degree (which is a standard PhD degree with an additional mark of quality).
All of the professors in the programme may supervise students' work.
During the teaching semesters, funding opportunities will be provided,
among others, by the Spanish Ministry for Foreign Affairs and Cooperation
(Becas MAEC), and by the European Commission (Alban scheme for Latin American citizens).
Additionally, the host university will have a limited amount of economic resources
itself for covering the tuition fees and full-board accommodation of a few students.
Immediately after the courses and during the writing of the PhD dissertation,
some of the best students will be offered 4-year research fellowships, which
will allow them to work in the framework of the host research group.
In order to pre-register, one should post (not fax, not e-mail) to the address below:
a photocopy of the main page of the passport,
a photocopy of the highest university degree diploma,
a photocopy of the academic record,
letters of recommendation (optional),
any other document proving background, interest and motivation (optional).
Announcement of the programme: September 12, 2005
Pre-registration deadline: November 30, 2005
Selection of students: December 7, 2005
Starting of the classes: April 18, 2006
Summer break (tentative): July 25, 2006
Re-starting of the classes (tentative): September 4, 2006
End of the classes (tentative): December 22, 2006
Defense of the research project (tentative): September 14, 2007
DEA examination (tentative): April 27, 2008
Questions and Further Information:
Please contact the programme chairman, Carlos Martín-Vide.
Research Group on Mathematical Linguistics
Rovira i Virgili University
Pl. Imperial Tàrraco, 1
43005 Tarragona, Spain
Phone: +34-977-559543, +34-977-554391
Fax: +34-977-559597, +34-977-554391
BOOKS, DATABASES, SOFTWARE
JOB OPENINGS
We invite all laboratories and industrial companies that have job offers
to send them to the ISCApad editor:
they will appear in the newsletter and on our website for free.
(also have a look at http://www.isca-speech.org/jobs
as well as http://www.elsnet.org Jobs)
-PhD Position in Greece (Crete) on Speech Processing
A PhD position is available for a period of 3 years at the Institute of Computer Science of the Foundation for Research and Technology -- Hellas (FORTH), in collaboration with the Computer Science Department of the University of Crete, Heraklion, Crete, Greece. The research topic is basic research on speech processing algorithms for speech analysis, with applications to speech synthesis. The position is financed by the Research and Development Center of France Telecom. Applicants should have an MSc degree in Electrical Engineering, Computer Science or equivalent. A strong background in signal processing and statistical signal processing is required. Candidates must have followed a course on speech processing at graduate or undergraduate level. Knowledge of programming in Matlab is expected.
The starting date is estimated to be February 1, 2006.
Prospective applicants should forward by Dec 10th, 2005, their resume (CV) and at least two
recommendation letters to:
Assoc. Prof. Yannis Stylianou,
Department of Computer Science
University of Crete
714 09 Heraklion, Crete,
Ph: +30 2810 393 559
Fax: +30 2810 393 501
-PostDoc position in Speech recognition, LORIA-INRIA, France
The work will be carried out within the ST-TAP project, grant-aided by the
French Ministry of Research. The aim of this
project is to use tools provided by speech recognition research to
speed up the creation of closed captions for TV programs for deaf people.
The objective of this project is to provide closed captions for TV
broadcast news in near real time.
The work is situated at the crossroads of research and implementation.
The objective of this research is to investigate approaches to speech
recognition that have the potential to improve the generation of closed
captions. Two tasks could be investigated:
- when the newscaster reads the teleprompter, the software must
perform an alignment between the text of the teleprompter and the audio
signal to obtain the beginning and end times of each uttered word;
- when the newscaster improvises, or during an interview, automatic speech
recognition will be performed and the result manually corrected.
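As a rough illustration of the first task (not the project's actual method, which may align at the acoustic level), word timings produced by a recognizer can be transferred onto the teleprompter script by aligning the two word sequences. The hypothesis words, timestamps and function name below are invented for this sketch:

```python
# Hypothetical sketch: transfer ASR word timings onto a teleprompter script
# by aligning the two word sequences with difflib. All data is invented.
from difflib import SequenceMatcher

def align_script_to_asr(script_words, asr_words, asr_times):
    """Return one (start, end) time per script word, or None where the
    ASR hypothesis diverged from the script (e.g. a recognition error)."""
    timings = [None] * len(script_words)
    matcher = SequenceMatcher(a=script_words, b=asr_words, autojunk=False)
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            timings[block.a + k] = asr_times[block.b + k]
    return timings

script = "good evening here is the news".split()
hyp = "good evening year is the news".split()   # one substitution error
times = [(0.0, 0.4), (0.4, 1.0), (1.1, 1.3), (1.3, 1.5), (1.5, 1.6), (1.6, 2.1)]
timings = align_script_to_asr(script, hyp, times)
# "here" gets no timing because the recognizer misheard it
```

Gaps left as None could then be interpolated from neighbouring words when cueing the captions.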
Suitable background of applicants:
Previous experience in engineering, computing and speech recognition;
good knowledge of C/C++.
Salary: 1,700 euros per month.
Starting date: flexible, but preferably as soon as possible.
Applications should be sent electronically to Dominique Fohr.
Applications should include a CV, a detailed resume and name and
contact information of two references
(e.g. Master thesis supervisor, head of department).
Questions and information
Dominique Fohr (+33) 383 59 20 27
-POSTDOCTORAL RESEARCH SCIENTIST POSITION IN ARABIC DIALECT NLP AT COLUMBIA
UNIVERSITY, NEW YORK, USA
The CADIM (Columbia Arabic DIalect Modeling) group at the Center for
Computational Learning Systems at Columbia University is looking for a
postdoc, to start fall 2005/winter 2006.
We are working on a project to develop natural language processing (NLP)
tools for Arabic dialects. Since many of the dialects are resource poor, the
premise of the work is to adapt resources from Modern Standard Arabic
(fusha). We are interested in using explicit linguistic knowledge in NLP
tools. While the engineering goal of the project is to build NLP tools, the
scientific goal is to better understand dialectal variation. The candidate
will work with a team of researchers on morphology, parsing, analysis of
code switching, and related topics.
As minimum requirements, the candidate will have:
* A doctorate in computational linguistics, computer science, linguistics,
cognitive science, electrical engineering, or a related discipline.
* Experience with corpus-based methods.
* Familiarity with linguistic notions from phonology, morphology and syntax.
* Familiarity with Semitic language phenomena.
* Programming skills.
The ideal candidate would also have:
* Hands-on experience with machine learning.
* Command of Modern Standard Arabic (fusha) and one of the Arabic dialects.
* Excitement about interdisciplinary work.
The exact start and end dates are open to negotiation. Columbia University
will help the candidate obtain any necessary visa and housing. Columbia
University is located in the heart of New York City, one of the most
culturally exciting, diverse, and inclusive cities in the world. For more
information, please contact Mona Diab, Nizar Habash, or Owen Rambow.
-MSc researcher at the University of Ulm (Germany)
The Dialogue Systems Group in the Department of Engineering Sciences,
University of Ulm, is seeking a researcher at MSc level to work on
aspects of spoken language dialogue systems development, in close
cooperation with our industry partner Tell-Eureka, New York (USA).
The work involves building tools for the rapid design, adaptation and improvement of
high-performance statistical spoken language understanding systems.
Perspective: PhD degree.
Requirements: good programming skills in C, C++, Perl, VoiceXML and Java;
expertise in speech and dialogue technologies would also be an asset.
Financial support is envisioned on a grant basis within the framework
of the Graduate School: Mathematical Analysis of Evolution,
Information and Complexity - University of Ulm
Candidates should send their application electronically to
Wolfgang Minker. The application should include a
short resume and a transcript of records with the results of exams
relevant to the Diploma/MSc Degree. A pdf-version of the Diploma/MSc
Thesis may also be included.
Professor Wolfgang Minker
University of Ulm
Department of Information Technology
Phone: +49 731 502 6254/-6251
Fax: +49 691 330 3925516
-PhD position in articulatory measurements and modeling, Centre for Speech Technology, KTH, Sweden
The work will be carried out within ASPI - audiovisual-to-articulatory
speech inversion, a project funded by the European Union’s 6th Framework
Program with partners in LORIA, Nancy, France; ENST, Paris, France; ULB,
Brussels, Belgium and ICCS-NTU, Athens, Greece.
Audiovisual-to-articulatory inversion consists in recovering the
dynamics of the vocal tract shape from the acoustic speech signal and
image analysis of the speaker's face. Being able to recover this
information automatically would be a major breakthrough in speech
technology, e.g. for pronunciation training in computer-assisted
language learning.
The work aims at providing a flexible acquisition technique and
associated processing methods in order to train and test articulatory
inversion. The set-up will combine various imaging and sensor techniques
that bring complementary information on articulation, from either a
temporal or a spatial point of view.
The first objective is to design a measurement set-up consisting of
ultrasound, electromagnetic tracking and stereovision, in collaboration
with LORIA, Nancy, France, where an identical measurement set-up will be
used. The second is to successfully use the data collected with the
acquisition set-up to improve a dynamic three-dimensional model of
the vocal tract.
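In broad outline, inversion can be posed as a regression from acoustic features to articulator coordinates that is trained on such measurement data. The toy sketch below fits a linear map by least squares on synthetic data; the dimensions, the data and the linear assumption are all illustrative — real inversion is ill-posed, and the ASPI project targets far richer dynamic, multimodal models:

```python
# Toy regression view of acoustic-to-articulatory inversion: learn a map
# from acoustic feature vectors to articulator coordinates (e.g. EMA coil
# positions). Synthetic data; a linear map is a deliberately crude stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_acoustic, n_artic = 200, 12, 6   # e.g. 12 cepstra -> 6 coil coords

W_true = rng.normal(size=(n_acoustic, n_artic))
X = rng.normal(size=(n_frames, n_acoustic))                    # acoustic features
Y = X @ W_true + 0.01 * rng.normal(size=(n_frames, n_artic))   # articulator traces

# Fit the inverse map on 150 training frames, evaluate on 50 held-out frames
W_hat, *_ = np.linalg.lstsq(X[:150], Y[:150], rcond=None)
rms = float(np.sqrt(np.mean((X[150:] @ W_hat - Y[150:]) ** 2)))
```

The held-out RMS error measures how well the fitted map generalizes; with real multimodal recordings the same train/test protocol applies, only with a more expressive model.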
Suitable background of applicants:
Previous experience in engineering, computing, phonetics, medical
imaging, image analysis or articulatory measurements, as well as an
excellent level of English are desirable.
Duration of employment:
3 years with possibility for prolongation.
The salary will follow KTH PhD salary levels, i.e. a monthly base salary of
20,200 SEK (approx. 2,200 EUR) increasing to 24,600 SEK (approx. 2,650 EUR).
Starting date: flexible, but preferably as soon as the application process and the
successful applicant allow.
Applications should be sent electronically to
Olov Engwall and
as soon as possible. We will begin reviewing applications by November
15th 2005, but accept applications until the position is filled.
Applications should include a CV, a detailed resume and the name and contact
information of two references (e.g. Master thesis supervisor, head of
department).
Questions and information:
Olov Engwall, +468-790 75 65
Centre for Speech Technology, KTH
SE-100 44 Stockholm, SWEDEN
A description of the work can be found on
-Several open positions at the National Centre of Competence in
Research on Affective Sciences (NCCR) (Switzerland)
We invite applications for two research fellowships in a new National Centre of Competence
in Research on Affective Sciences (NCCR) financed by the Swiss National Science
Foundation and the University of Geneva
Project 1: Klaus Scherer/Guido Gendolla - Appraisal and Emotion Elicitation
We are conducting research on the role of motivational factors in emotion-antecedent appraisal, using a variety of ANS response measures. We are looking for a person who has recently completed a doctorate involving psychophysiological research and who might be interested in running such studies as part of a one- or two-year postdoctoral fellowship (a stipend of approx. CHF 45'000 per year plus travel expenses) in the Leading House of the Center at the University of Geneva, while also developing their own research program.
Alternatively, we could imagine a graduate student well trained in psychophysiological
methods using this opportunity for a research internship or a year abroad, allowing
participation in the Center's Graduate School (stipend approx. CHF 33'000 plus travel
expenses).
Project 2: Klaus Scherer/Susanne Kaiser - Emotional Response Patterning
In the context of empirical research on the synchronisation of multimodal response patterning in emotion episodes, we offer a postdoctoral research position for a signal processing and modeling specialist. The applicant should have a strong background in the mathematical and statistical bases of biosignal processing and some experience with modeling, including the use of Matlab and Simulink (or other simulation and modeling software). Salary level: between CHF 50'000 and 60'000 a year, depending on age and experience.
The postdocs will participate in all activities of the interdisciplinary Center for Affective Sciences, which provides a stimulating and enriching academic experience as well as additional training in both emotion theory and a variety of pertinent methods.
Potential candidates can find further information about the NCCR as well as an
application form at the following website
Inquiries can be directed to our email address.
-RESEARCH OPENINGS AT ICSI (Berkeley).
The International Computer Science Institute
(ICSI) invites applications
for positions in speech processing. Interested parties with a range of
experience (e.g., both recent PhDs and those with more extensive
experience) are encouraged to apply.
The ICSI Speech Group (including its predecessor, the ICSI Realization
Group) has been a source of novel approaches to speech processing since
1988. It is primarily known for its work in speech recognition, although
it has housed major projects in speaker recognition, metadata
extraction, and speech coding in the last few years.
Applications should include a cover letter, vita, and the names of at
least 3 references (with both postal and email addresses). Applications
should be sent by email
and by postal mail to:
Director (Speech Research)
1947 Center Street
Berkeley, CA 94704
ICSI is an Affirmative Action/Equal Opportunity Employer. Applications
from women and minorities are especially encouraged. Hiring is
contingent on eligibility to work in the United States.
-PhD Studentship on 'Communicative/Expressive Speech Synthesis' at the University of Sheffield (UK)
Recent years have seen a substantial growth in the capabilities of Speech
Technology systems, both in the research laboratory and in the commercial
marketplace. However, despite this progress, contemporary speech technology
is not able to fulfil the requirements demanded by many potential
applications, and performance is still significantly short of the
capabilities exhibited by human talkers and listeners, especially in
interactive real-world environments.
This shortfall is especially noticeable in the 'text-to-speech' (TTS)
systems that have been developed for automated spoken language output.
Considerable advances have been made in naturalness and voice quality, yet
state-of-the-art TTS systems still exhibit a rather limited range of
speaking styles and a general lack of expressiveness.
The objective of this research is to investigate novel approaches to
text-to-speech synthesis that have the potential to overcome these
limitations, and which could contribute to the next-generation of
speech-based systems, especially in application areas such as assistive
technology.
Funding is available immediately for an eligible UK/EU student. Applicants
should possess a computational background and should ideally have some
knowledge/experience of speech processing.
Thesis Supervisor: Prof. Roger K. Moore
For further information, and for details of how to apply, contact
Prof. Roger Moore.
The Speech and Hearing research group in
Computer Science at the University of Sheffield has an international
reputation in the multi-disciplinary field of speech and hearing research.
With three chairs, four faculty, five research associates and around twelve
research students, this is one of the strongest teams worldwide. A unique
aspect of the group is the wide spectrum of research topics covered, from
the psychophysics of hearing through to the engineering of state-of-the-art
speech technology systems.
-IDIAP Research Institute, Switzerland: two open positions for senior researchers
IDIAP is currently seeking exceptional senior researchers with a proven
record of high level research, as well as project management, in the areas
of speech processing and computer vision.
Activities in speech processing currently cover speech recognition
(using HMMs and hybrid HMM/ANN approaches), novel feature extraction and
acoustic modeling techniques, decoders for large vocabulary speech recognition
systems, sound source localization and tracking (microphone arrays),
speaker turn detection, etc.
Activities in computer vision currently cover object recognition, motion
analysis, text recognition, detection and recognition of faces, gestures, etc.,
and video indexing.
Most of these research activities take place in the framework of national
long-term research initiatives, such as the National Centre of Competence
in Research (NCCR) on "Interactive Multimodal Information Management"
(IM2), or of large European projects such
as "Augmented Multi-party Interaction" (AMI).
Successful candidates are expected to have several years' experience
in the above areas, with good practical knowledge of C/C++ and
related programming languages. While still being active in research,
they should also have experience in project management and the supervision of
researchers, including PhD students. Given the links between IDIAP
and the Swiss Federal Institute of Technology in Lausanne (EPFL),
academic careers can also be envisioned for exceptional candidates.
Interested candidates should send a letter of motivation, along with
their detailed CV and the names of 3 references, to our
human resources department.
More information can also be obtained by contacting
Prof. Hervé Bourlard
-Papers accepted for FUTURE PUBLICATION in Speech Communication
Full text is available to Speech Communication subscribers and subscribing institutions:
click on Publications, then on Speech Communication, then on Articles in Press.
The list of papers in press is displayed, and a .pdf file is available for each paper.
Amalia Arvaniti, D. Robert Ladd and Ineke Mennen, Phonetic effects of focus and "tonal crowding" in intonation: Evidence from Greek polar questions, Speech Communication, In Press, Uncorrected Proof, Available online 26 October 2005.
Dimitrios Dimitriadis and Petros Maragos, Continuous energy demodulation methods and application to speech analysis, Speech Communication, In Press, Uncorrected Proof, Available online 25 October 2005.
Daniel Recasens and Aina Espinosa, Dispersion and variability of Catalan vowels, Speech Communication, In Press, Uncorrected Proof, Available online 24 October 2005.
Cynthia G. Clopper and David B. Pisoni, The Nationwide Speech Project: A new corpus of American English dialects, Speech Communication, In Press, Corrected Proof, Available online 21 October 2005.
Diane J. Litman and Kate Forbes-Riley, Recognizing student emotions and attitudes on the basis of utterances in spoken tutoring dialogues with both human and computer tutors, Speech Communication, In Press, Uncorrected Proof, Available online 19 October 2005.
Carsten Meyer and Hauke Schramm, Boosting HMM acoustic models in large vocabulary speech recognition, Speech Communication, In Press, Corrected Proof, Available online 19 October 2005.
SungHee Kim, Robert D. Frisina, Frances M. Mapes, Elizabeth D. Hickman and D. Robert Frisina, Effect of age on binaural speech intelligibility in normal hearing adults, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005.
Tong Zhang, Mark Hasegawa-Johnson and Stephen E. Levinson, Cognitive state classification in a spoken tutorial dialogue system, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005.
Mark D. Skowronski and John G. Harris, Applied principles of clear and Lombard speech for automated intelligibility enhancement in noisy environments, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005.
Marcos Faúndez-Zanuy, Speech coding through adaptive combined nonlinear prediction, Speech Communication, In Press, Uncorrected Proof, Available online 17 October 2005.
Praveen Kakumanu, Anna Esposito, Oscar N. Garcia and Ricardo Gutierrez-Osuna, A comparison of acoustic coding models for speech-driven facial animation, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005.
Srinivas Bangalore, Dilek Hakkani-Tür and Gokhan Tur, Introduction to the Special Issue on Spoken Language Understanding in Conversational Systems, Speech Communication, In Press, Corrected Proof, Available online 28 September 2005.
Sundarrajan Rangachari and Philipos C. Loizou, A noise-estimation algorithm for highly non-stationary environments, Speech Communication, In Press, Corrected Proof, Available online 21 September 2005.
Marián Képesi and Luis Weruaga, Adaptive chirp-based time-frequency analysis of speech signals, Speech Communication, In Press, Corrected Proof, Available online 21 September 2005.
Hauke Schramm, Xavier Aubert, Bart Bakker, Carsten Meyer and Hermann Ney, Modeling spontaneous speech variability in professional dictation, Speech Communication, In Press, Corrected Proof, Available online 19 September 2005.
Kotta Manohar and Preeti Rao, Speech enhancement in nonstationary noise environments using noise properties, Speech Communication, In Press, Corrected Proof, Available online 15 September 2005.
Cheng-Lung Lee, Wen-Whei Chang and Yuan-Chuan Chiang, Spectral and prosodic transformations of hearing-impaired Mandarin speech, Speech Communication, In Press, Corrected Proof, Available online 7 September 2005.
Tong Zhang, Mark Hasegawa-Johnson and Stephen E. Levinson, Extraction of pragmatic and semantic salience from spontaneous spoken English, Speech Communication, In Press, Corrected Proof, Available online 16 August 2005.
Vlasios Doumpiotis and William Byrne, Lattice segmentation and minimum Bayes risk discriminative training for large vocabulary continuous speech recognition, Speech Communication, In Press, Corrected Proof, Available online 15 August 2005.
Konstantin Markov, Jianwu Dang and Satoshi Nakamura, Integration of articulatory and spectrum features based on the hybrid HMM/BN modeling framework, Speech Communication, In Press, Corrected Proof, Available online 15 August 2005.
A. Facco, D. Falavigna, R. Gretter and M. Viganò, Design and evaluation of acoustic and language models for large scale telephone services, Speech Communication, In Press, Corrected Proof, Available online 15 August 2005.
Arnaud Martin and Laurent Mauuary, Robust speech/non-speech detection based on LDA-derived parameter and voicing parameter for speech recognition in noisy environments, Speech Communication, In Press, Corrected Proof, Available online 15 August 2005.
Hilda Hardy, Alan Biermann, R. Bryce Inouye, Ashley McKenzie, Tomek Strzalkowski, Cristian Ursu, Nick Webb and Min Wu, The Amitiés system: Data-driven techniques for automated dialogue, Speech Communication, In Press, Corrected Proof, Available online 15 August 2005.
Ye-Yi Wang and Alex Acero, Rapid development of spoken language understanding grammars, Speech Communication, In Press, Uncorrected Proof, Available online 8 August 2005.
Johan Boye, Joakim Gustafson and Mats Wirén, Robust spoken language understanding in a computer game, Speech Communication, In Press, Corrected Proof, Available online 8 August 2005.
Christian Raymond, Frédéric Béchet, Renato De Mori and Géraldine Damnati, On the use of finite state transducers for semantic interpretation, Speech Communication, In Press, Corrected Proof, Available online 28 July 2005.
Junfeng Li and Masato Akagi, A noise reduction system based on hybrid noise estimation technique and post-filtering in arbitrary noise environments, Speech Communication, In Press, Corrected Proof, Available online 28 July 2005.
Yassine Mami and Delphine Charlet, Speaker recognition by location in the space of reference speakers, Speech Communication, In Press, Corrected Proof, Available online 28 July 2005.
Murat Saraçlar and Brian Roark, Utterance classification with discriminative language modeling, Speech Communication, In Press, Corrected Proof, Available online 25 July 2005.
Ryuichiro Higashinaka, Katsuhito Sudoh and Mikio Nakano, Incorporating discourse features into confidence scoring of intention recognition results in spoken dialogue systems, Speech Communication, In Press, Corrected Proof, Available online 25 July 2005.
Patrick Haffner, Scaling large margin classifiers for spoken language understanding, Speech Communication, In Press, Corrected Proof, Available online 22 July 2005.
Ruiqiang Zhang and Genichiro Kikui, Integration of speech recognition and machine translation: Speech recognition word lattice translation, Speech Communication, In Press, Corrected Proof, Available online 22 July 2005.
Marie Roch, Gaussian-selection-based non-optimal search for speaker identification, Speech Communication, In Press, Corrected Proof, Available online 18 July 2005.
Qiang Huang and Stephen Cox, Task-independent call-routing, Speech Communication, In Press, Corrected Proof, Available online 12 July 2005.
Yulan He and Steve Young, Spoken language understanding using the Hidden Vector State Model, Speech Communication, In Press, Corrected Proof, Available online 12 July 2005.
K.Y. Leung, M.W. Mak, M.H. Siu and S.Y. Kung, Adaptive articulatory feature-based conditional pronunciation modeling for speaker verification, Speech Communication, In Press, Corrected Proof, Available online 29 June 2005.
Chang Huai You, Soo Ngee Koh and Susanto Rahardja, Masking-based β-order MMSE speech enhancement, Speech Communication, In Press, Corrected Proof, Available online 29 June 2005.
Tomoki Toda, Hisashi Kawai, Minoru Tsuzaki and Kiyohiro Shikano, An evaluation of cost functions sensitively capturing local degradation of naturalness for segment selection in concatenative speech synthesis, Speech Communication, In Press, Corrected Proof, Available online 29 June 2005.
Sebastian Möller, Jan Krebber and Paula Smeele, Evaluating the speech output component of a smart-home system, Speech Communication, In Press, Corrected Proof, Available online 14 June 2005.
Giampiero Salvi, Dynamic behaviour of connectionist speech recognition with strong latency constraints, Speech Communication, In Press, Corrected Proof, Available online 14 June 2005.
Christopher Dromey, Shawn Nissen, Petrea Nohr and Samuel G. Fletcher, Measuring tongue movements during speech: Adaptation of a magnetic jaw-tracking system, Speech Communication, In Press, Corrected Proof, Available online 14 June 2005.
Erhard Rank and Gernot Kubin, An oscillator-plus-noise model for speech synthesis, Speech Communication, In Press, Corrected Proof, Available online 21 April 2005.
Chai Wutiwiwatchai and Sadaoki Furui, A multi-stage approach for Thai spoken language understanding, Speech Communication, In Press, Corrected Proof, Available online 21 April 2005.
Kevin M. Indrebo, Richard J. Povinelli and Michael T. Johnson, Sub-banded reconstructed phase spaces for speech recognition, Speech Communication, In Press, Corrected Proof, Available online 24 February 2005.
Publication policy: below you will find very short announcements of future
events. The full calls for participation can be accessed on the conference websites.
See also our Web pages (www.isca-speech.org)
on conferences and workshops.
FUTURE INTERSPEECH CONFERENCES
-INTERSPEECH (ICSLP)-2006 17-21 September 2006, Pittsburgh, PA, USA
Chair: Richard M. Stern, Carnegie Mellon University, USA
-INTERSPEECH (EUROSPEECH)-2007 August 27-31, 2007, Antwerp, Belgium
Chair: Dirk van Compernolle, K.U.Leuven and Lou Boves, K.U.Nijmegen
-INTERSPEECH (ICSLP)-2008 September 22-26, 2008,
Brisbane, Queensland, Australia
Chairman: Denis Burnham,
MARCS, University of Western Sydney.
FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOPS (ITRW)
First Call for Papers for ASIDE2005 - COST278 Final Workshop - Aalborg, Denmark
Applied Spoken Language Interaction in Distributed Environments
November 10-11, 2005, Aalborg University, Denmark
· Wired and wireless distributed environments for spoken language interaction
· Experiences from deploying services and systems
· Distributed architectures
· Distributed Multi-modal interactive systems
· Personalisation and context-awareness
· Evaluation of interactive systems
· Research challenges in spoken language interaction
· Standard formalisms and architectures
In addition to regular technical sessions, the workshop will include invited plenary
talks on topics of related general interest. The workshop will be divided into four
sessions during the two days.
Participation in the workshop will be limited to around 90 people.
EXTENDED Submission deadline: September 26, 2005
Notification of acceptance: October 5, 2005
Workshop: November 10th and 11th, 2005
ISCA Workshop on Multilingual Speech and Language Processing (MULTILING 2006)
Organized by: Stellenbosch University Centre for Language and Speech Technology
in collaboration with ISCA
9-11 April 2006, Stellenbosch, South Africa
Keynote speaker: Tanja Schultz - Interactive Systems Laboratories,
Carnegie Mellon University
EXTENDED Deadline for abstract submission: 12 September 2005
Notification of acceptance: 14 October 2005
Deadline of early registration & full paper submission: 10 February 2006
Workshop dates: 9-11 April 2006
Contact: Justus Roux or consult the workshop website.
-ITRW on Speech Recognition and Intrinsic Variation
May 20, 2006, Toulouse, France
Satellite of ICASSP-2006
- Accented speech modeling and recognition,
- Children's speech modeling and recognition,
- Non-stationarity and relevant analysis methods,
- Speech spectral and temporal variations,
- Spontaneous speech modeling and recognition,
- Speech variation due to emotions,
- Speech corpora covering sources of variation,
- Acoustic-phonetic correlates of variations,
- Impact and characterization of speech variations on ASR,
- Speaker adaptation and adapted training,
- Novel analysis and modeling structures,
- Man/machine confrontation: ASR and HSR (human speech recognition),
- Diagnosis of speech recognition models,
- Intrinsic variations in multimodal recognition,
- Review papers on these topics are also welcome,
- Application and services scenarios involving strong speech variations
Submission deadline: Feb. 1, 2006
Notification of acceptance: Mar. 1, 2006
Final manuscript due: Mar. 15, 2006
Program available: Mar. 22, 2006
Registration deadline: Mar. 29, 2006
Workshop: May 20, 2006 (after ICASSP 2006)
This event is organized as a satellite of the ICASSP 2006 conference.
The workshop will take place in Toulouse, on 20 May 2006, just after the
conference, which ends May 19. The workshop will consist of oral and poster
sessions, as well as talks by guest speakers.
ISCA Tutorial and Research Workshop on Experimental Linguistics
28-30 August 2006, Athens, Greece
CALL FOR PAPERS
The general aims of the Workshop are to bring together researchers of linguistics and related disciplines in a
unified context as well as to discuss the development of experimental methodologies in linguistic research with
reference to linguistic theory, linguistic models and language applications.
SUBJECTS AND RELATED DISCIPLINES
1. Theory of language
2. Cognitive linguistics
4. Speech production
5. Speech acoustics
10. Speech perception
14. Discourse linguistics
15. Computational linguistics
16. Language technology
1 February 2006, deadline of abstract submission
1 March 2006, notification of acceptance
1 April 2006, registration
1 May 2006, camera ready paper submission
28-30 August 2006, Workshop
Antonis Botinis, University of Athens, Greece
Marios Fourakis, University of Wisconsin-Madison, USA
Barbara Gawronska, University of Skövde, Sweden
Aikaterini Bakakou-Orphanou, University of Athens
Antonis Botinis, University of Athens
Christoforos Charalambakis, University of Athens
ISCA Workshop on Experimental Linguistics
Department of Linguistics
University of Athens
Workshop site address
- Second ISCA Tutorial and Research Workshop on PERCEPTUAL QUALITY OF SYSTEMS
Harnack-Haus, Berlin, Germany, 4-6 September 2006
Ute Jekosch (IAS, Technical University of Dresden)
Sebastian Moeller (Deutsche Telekom Labs, Technical University of Berlin)
Alexander Raake (Deutsche Telekom Labs, Technical University of Berlin)
Detailed information on the workshop and a Call for Papers will follow
in the next ISCApad.
-ITRW on Statistical and Perceptual Audition
A satellite workshop of ICSLP-Interspeech 2006
September 16, 2006, Pittsburgh, PA, USA
Generalized audio analysis
Paper submission deadline (4-6 pages, double column): April 21, 2006
Notification of acceptance June 9, 2006
FORTHCOMING EVENTS SUPPORTED (but not organized) by ISCA
SPEECH PROSODY 2006 - ANNOUNCEMENT AND CALL FOR PAPERS
International Conference on Speech Prosody
May 2-5, 2006
International Congress Center, Dresden, Germany
For further information, visit our website.
We invite contributions in
any of the following areas and also appreciate suggestions for Special Sessions:
* Prosody and the Brain
* Prosody and Speech Production
* Analysis, Formulation and Modeling of Prosody
* Syntax, Semantics, Pragmatics and Prosody
* Cross-linguistic Studies of Prosody
* Prosodic Variability
* Prosody of Dialogues and Spontaneous Speech
* Prosody and Affect
* Prosody and Speech Perception
* Prosody in Speech Synthesis
* Prosody in Speech Recognition and Understanding
* Prosody in Language Learning
* Auditory-Visual Production and Perception of Prosody
* Pathology of Prosody and Aids for the Impaired
* Annotation and Speech Corpus Creation
Ruediger Hoffmann - Chair
Hansjoerg Mixdorff - Program Chair
Oliver Jokisch - Technical Chair
Proposals for special sessions: November 11, 2005
Full 4-page paper submission: December 9, 2005
Advanced registration deadline: February 28, 2006
Conference: May 2-5, 2006
Post-conference day: May 6, 2006
ISCA 2nd Workshop on Multimodal User Authentication
A satellite conference of ICASSP 2006 in Toulouse, France.
Eye and face analysis
Audio/Image indexing and retrieval
Joint audio/video processing
Multimodal Fusion and Integration Techniques for Authentication
Intelligent interfaces for biometric systems and databases and tools for system
Applications and implementations of multimodal user authentication systems
Privacy issues and standards
Electronic submission of camera-ready paper: January 15, 2006
Notification of acceptance: March 8, 2006
Advance registration: before March 15, 2006
Final papers due: March 15, 2006
-6th ISCA Speech Synthesis Research Workshop (SSW-6)
Bonn (Germany), August 22-24, 2007
A satellite of Interspeech 2007 (Antwerp), in collaboration with SynSIG
Details will be posted by early 2007
Prof. Wolfgang Hess
FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS
-IEEE ASRU 2005
Automatic Speech Recognition and Understanding Workshop
moved from Cancun, Mexico to San Juan, Puerto Rico
November 27 - December 1, 2005
Due to the damage caused by hurricane Wilma to the conference hotel, ASRU 2005 has been moved to San Juan, Puerto Rico,
with unchanged dates. Please check the website for regular updates.
Submissions are encouraged in all areas of human language technology,
with emphasis placed on automatic speech recognition
and understanding technology, speech-to-text systems, spoken dialog systems,
multilingual language processing, robustness in ASR, spoken document retrieval,
and speech-to-speech translation.
Submit full-length, 4-6 page papers, including
figures and references, to www.asru2005.org
Special session proposals should be submitted by June 15, 2005, to
email@example.com and must include a topical title, rationale, session
outline, contact information, and a description of how the session will be organized.
May 1, 2005 Workshop registration opens
July 1, 2005 Camera-ready paper submission deadline
August 15, 2005 Paper Acceptance / Rejection notices mailed
Sept. 15, 2005 Revised Papers Due and Author Registration Deadline
Oct. 1, 2005 Hotel Reservation and Workshop Registration
Nov. 27 - Dec. 1, 2005 Workshop in San Juan, Puerto Rico
-TC-STAR Workshop on Speech Translation
Trento, Italy, 29-31 March 2006 (before EACL 2006)
First Call For Participation
This workshop is sponsored by the European Integrated Project TC-STAR (Technologies and Corpora for
Speech-to-speech Translation Research). It aims to reach beyond the TC-STAR research community and
to engage researchers working in the areas of Automatic Speech Recognition (ASR) and Spoken Language Translation (SLT).
Students and researchers in the field of human language technology are invited to contribute to the following
topics proposed by the organizers:
* Integration of ASR and SLT
* System combination in ASR and SLT
Some months before the workshop, shared tasks will be defined and language resources and tools for them
will be made available to registered participants. The considered application domain will be the translation
of European Parliament speeches from Spanish to English, and vice versa. For both tasks, word graphs and
n-best lists generated by different ASR and SLT systems will be provided. Training and testing collections
to develop and evaluate an SLT system will be distributed, too.
Participants will be given the opportunity to present and discuss their results at the workshop and to attend
tutorials held by experts in the field. A limited number of grants will be made available to students and
junior researchers to cover lodging and food expenses.
Marcello Federico, ITC-irst, Trento
Ralf Schlüter, RWTH, Aachen
-TC-STAR Second Evaluation Campaign 2006
TC-STAR is a European integrated project focusing on Speech-to-Speech Translation (SST). To encourage significant advances in all SST technologies, annual competitive evaluations are organized. Automatic Speech Recognition (ASR), Spoken Language Translation (SLT) and Text-To-Speech (TTS) are evaluated independently and within an end-to-end system. The project targets a selection of unconstrained conversational speech domains (speeches and broadcast news) and three languages: European English, European Spanish, and Mandarin Chinese.
The first evaluation took place in March 2005 for ASR and SLT and September 2005 for TTS. TC-STAR welcomes outside participants in its 2nd evaluation of January-February 2006. This participation is free of charge.
The TC-STAR 2006 evaluation campaign will consider:
· SLT in the following directions:
o Chinese-to-English (Broadcast News)
o Spanish-to-English (European Parliament plenary speeches)
o English-to-Spanish (European Parliament plenary speeches)
· ASR in the following languages:
o English (European Parliament plenary speeches)
o Spanish (European Parliament plenary speeches)
o Mandarin Chinese (Broadcast News)
· TTS in Chinese, English, and Spanish under the following conditions:
o Complete system: participants use their own training data
o Voice conversion intralingual and crosslingual, expressive speech: data provided by TC-STAR
o Component evaluation
For ASR and SLT, training data will be made available by the TC-STAR project for English and Spanish and
can be purchased at LDC for Chinese. Development data will be provided by the TC-STAR project. Legal
issues regarding the data will be detailed in the 2nd Call For Participation.
All participants will be given the opportunity to present and discuss their results in the TC-STAR evaluation
workshop in Barcelona in June 2006.
Registration: October 2005 (early expression of interest is welcome)
ASR evaluation: mid-January to end of January 2006
SLT evaluation: early February to mid-February 2006
TTS evaluation: early February to end of February 2006
Release: April 2006
Submission of papers: May 2006
Workshop: June 2006
Contact: Djamel Mostefa (ELDA)
tel. +33 1 43 13 33 33
- SECOND INTERNATIONAL CONFERENCE ON TONAL ASPECTS OF LANGUAGES
La Rochelle (France) April 27-29th, 2006
CALL FOR PAPERS
Jointly organised by La Rochelle University and Paris 3 University (Phonetics & Phonology Laboratory, UMR 7018 CNRS).
Satellite conference of Speech Prosody 2006, to be held in Dresden, Germany, May 2-5, 2006.
The aim of the TAL 2006 conference is to bring together researchers interested in all areas of tone languages.
The conference welcomes papers on the following topics:
- typology and phonology of tone languages
- acquisition of tone languages
- speech physiology and pathology in tone languages
- tone production
- perception in tone languages
- prosody of tone languages
- modelling of tones and intonation
- speech processing in tone languages
- cognitive aspects of tone languages
The deadline for full paper submission (4 pages, 2 columns, single-spaced, Times New Roman 10 points) is January 15, 2006.
Papers may be submitted exclusively via the conference website, in accordance with the submission guidelines.
No previously published papers should be submitted.
Each corresponding author will be notified by e-mail of the acceptance of the paper by January 31, 2006.
Intention of participation: before October 30, 2005
Full paper submission deadline: January 15, 2006
Notification of paper acceptance/rejection: February 1st, 2006
Early registration deadline: February 28, 2006
Final paper: March 31, 2006
If you want to be updated as more information becomes available, please send an email.
- LREC 2006 - 5th Conference on Language Resources and Evaluation
Magazzini del Cotone Conference Center, GENOA - ITALY
Deadlines for proposals of panels, workshops, tutorials and for paper
submissions are extended to October 20
MAIN CONFERENCE: 24-25-26 MAY 2006
WORKSHOPS and TUTORIALS: 22-23 and 27-28 MAY 2006
Conference web site
The fifth international conference on Language Resources and Evaluation, LREC 2006, is organised by
ELRA in cooperation with a wide range of international associations and organisations.
Issues in the design, construction and use of Language Resources (LRs)
Issues in Human Language Technologies (HLT) evaluation
LREC targets the integration of different types of LRs (spoken, written, and other modalities), and of the
respective communities. To this end, LREC encourages submissions covering issues which are common
to different types of LRs and language technologies, such as dialogue strategy, written and spoken
translation, domain-specific data, multimodal communication or multimedia document processing, and
will organise, in addition to the usual tracks, common sessions encompassing the different areas of LRs.
The 2006 Conference emphasises in particular the importance of promoting:
- synergies and integration between (multilingual) LRs and Semantic Web technologies,
- new paradigms for sharing and integrating LRs and LT coming from different sources,
- communication with neighbouring fields for applications in e-government and administration,
- common evaluation campaigns for the objective evaluation of the performance of different
systems and products (also industrial ones), based on large-size and high-quality LRs.
LREC therefore encourages submissions of papers, panels, workshops, tutorials on the use of LRs
in these areas.
Submitted abstracts of papers for oral and poster or demo presentations should consist of about 1000 words.
A limited number of panels, workshops and tutorials is foreseen: proposals will be reviewed by the Programme Committee.
For panels, please send a brief description, including an outline of the intended structure (topic, organiser,
panel moderator, tentative list of panelists).
For workshops and tutorials, see the dedicated section below.
Only electronic submissions will be considered. Further details about submission will be circulated in
the 2nd Call for Papers to be issued at the end of July and posted on the LREC web site (www.lrec-conf.org).
* Submission of proposals for panels, workshops and tutorials: 14 October 2005
* Submission of proposals for oral and poster papers, referenced demos: 14 October 2005
* Notification of acceptance of panels, workshops and tutorials proposals: 7 November 2005
* Notification of acceptance of oral papers, posters, referenced demos: 16 January 2006
* Final versions for the proceedings: 20 February 2006
* Conference: 24-26 May 2006
* Pre-conference workshops and tutorials: 22 and 23 May 2006
* Post-conference workshops and tutorials: 27 and 28 May 2006
WORKSHOPS AND TUTORIALS
Pre-conference workshops and tutorials will be organised on 22 and 23 May 2006, and post-conference
workshops and tutorials on 27 and 28 May 2006. A workshop/tutorial can be either half day or full day.
Proposals for workshops and tutorials should be no longer than three pages, and include:
* A brief technical description of the specific technical issues that the workshop/tutorial will address.
* The reasons why the workshop/tutorial is of interest at this time.
* The names, postal addresses, phone and fax numbers and email addresses of the
workshop/tutorial organising committee, which should consist of at least three people
knowledgeable in the field, coming from different institutions.
* The name of the member of the workshop/tutorial organising committee designated as the contact person.
* A time schedule of the workshop/tutorial and a preliminary programme.
* A summary of the intended workshop/tutorial call for participation.
* A list of audio-visual or technical requirements and any special room requirements.
CONSORTIA AND PROJECT MEETINGS
Consortia or projects wishing to take this opportunity for organising meetings should contact the organisers.
-HLT-NAACL 2006 Call for Demos
2006 Human Language Technology Conference and North American chapter
of the Association for Computational Linguistics annual meeting.
New York City, New York
Conference date: June 4-9, 2006
Submission deadline: March 3, 2006
Proposals are invited for the HLT-NAACL 2006 Demonstrations
Program. This program is aimed at offering first-hand experience with
new systems, providing opportunities to exchange ideas gained from
creating systems, and collecting feedback from expert users. It is
primarily intended to encourage the early exhibition of research
prototypes, but interesting mature systems are also
eligible. Submission of a demonstration proposal on a particular topic
does not preclude or require a separate submission of a paper on that
topic; it is possible that some but not all of the demonstrations will
illustrate concepts that are described in companion papers.
John Dowding, University of California, Santa Cruz
Natasa Milic-Frayling, Microsoft Research, Cambridge, United Kingdom
Alexander Rudnicky, Carnegie Mellon University.
Areas of Interest
We encourage the submission of proposals for demonstrations of
software and hardware related to all areas of human language
technology. Areas of interest include, but are not limited to,
natural language, speech, and text systems for:
- Speech recognition and generation;
- Speech retrieval and summarization;
- Rich transcription of speech;
- Interactive dialogue;
- Information retrieval, filtering, and extraction;
- Document classification, clustering, and summarization;
- Language modeling, text mining, and question answering;
- Machine translation;
- Multilingual and cross-lingual processing;
- Multimodal user interfaces;
- Mobile language-enabled devices;
- Tools for Ontology, Lexicon, or other NLP resource development;
- Applications in growing domains (web-search, bioinformatics, ...).
Please refer to the
HLT-NAACL 2006 CFP for a more detailed, though
not necessarily exhaustive, list of relevant topics.
Submission deadline: March 3, 2006
Notification of acceptance: April 6, 2006
Submission of final demo related literature: April 17, 2006
Conference: June 4-9, 2006
A demo proposal should consist of the following parts:
- An extended abstract of up to four pages, including the title,
authors, full contact information, and technical content to be
demonstrated. It should give an overview of what the demonstration is
aimed to achieve, how the demonstration illustrates novel ideas or
late-breaking results, and how it relates to other systems or projects
described in the context of other research (i.e., references to related work).
- A detailed requirement description of hardware, software, and
network access expected to be provided by the local
organizer. Demonstrators are encouraged to be flexible in their
requirements (possibly preparing different demos for different
logistical situations). Please state what you can bring yourself and
what you absolutely must be provided with. We will do our best to
provide equipment and resources but at this point we cannot guarantee
anything beyond the space and power supply.
- A concise outline of the demo script, including the accompanying
narrative, and either a web address to access the demo or visual aids
(e.g., screen-shots, snapshots, or sketches). The demo script should
be no more than 6 pages.
The demo abstract must be submitted electronically in the Portable
Document Format (PDF). It should follow the format guidelines for the
main conference papers. Authors are encouraged to use the style files
provided on the HLT-NAACL 2006 website. It is the responsibility of
the authors to ensure that their proposals use no unusual format
features and can be printed on a standard Postscript printer.
Demo proposals should be submitted electronically to the demo co-chairs.
Demo proposals will be evaluated on the basis of their relevance to
the conference, innovation, scientific contribution, presentation, and
usability, as well as potential logistical constraints.
The accepted demo abstracts will be published in the Companion Volume
to the Proceedings of the HLT-NAACL 2006 Conference.
Further details on the date, time, and format of the demonstration
session(s) will be determined and provided at a later date. Please
send any inquiries to the demo co-chairs.
Call for Tutorial Proposals
Proposals are invited for the Tutorial Program for HLT-NAACL 2006,
to be held at the New York Marriott at the Brooklyn Bridge from June 4
to 9, 2006. The tutorial day is June 4, 2006. The HLT-NAACL
conferences combine the HLT (Human Language Technology) and NAACL
(North American chapter of the Association for Computational
Linguistics) conference series, and bring together researchers in NLP,
IR, and speech. For details, see
our website .
We seek half-day tutorials covering topics in Speech Processing,
Information Retrieval, and Natural Language Processing, including
their theoretical foundations, intersections, and applications.
Tutorials will normally move quickly, but they are expected to be
accessible, understandable, and of interest to a broad community of
researchers, preferably from multiple areas of Human Language
Technology. Our target is to have four to six tutorials.
Proposals for tutorials should be submitted by electronic mail, in
plain text, PDF, Microsoft Word, or HTML. They should be submitted,
by the date shown below, by email.
The subject line should be: "HLT-NAACL'06 TUTORIAL PROPOSAL".
Proposals should contain:
1. A title and brief (2-page max) description of the tutorial topic
and content. Include a brief outline of the tutorial structure
showing that the tutorial's core content can be covered in three
hours (two 1.5-hour sessions). Tutorials should be accessible to the
broadest practical audience. In keeping with the focus of the
conference, please highlight any topics spanning disciplinary
boundaries that you plan to address. (These are not strictly required,
but they are a big plus.)
2. An estimate of the audience size. If approximately the same
tutorial has been given elsewhere, please list previous venues and
approximate audience sizes. (There's nothing wrong with repeat
tutorials; we'd just like to know.)
3. The names, postal addresses, phone numbers, and email addresses of
the organizers, with one-paragraph statements of their research
interests and areas of expertise.
4. A description of special requirements for technical needs (computer
infrastructure, etc). Tutorials must be financially self-supporting.
The conference organizers will establish registration rates that will
cover the room, audio-visual equipment, internet access, snacks for
breaks, and reproduction of the tutorial notes. A description of any
additional anticipated expenses must be included in the proposal.
Accepted tutorial speakers will be asked to provide descriptions of
their tutorials suitable for inclusion in all of: email announcements,
the conference registration material, the printed program, the website,
and the proceedings. This will involve producing text and/or HTML
and/or LaTeX/Word/PDF versions of appropriate lengths.
Tutorial notes will be printed and distributed by the Association for
Computational Linguistics (ACL). These materials, containing at least
copies of the slides that will be presented and a bibliography for the
material that will be covered, must be submitted by the date indicated
below to allow adequate time for reproduction. Presenters retain
copyright for their materials, but ACL requires that presenters
execute a non-exclusive distribution license to permit distribution to
participants and sales to others.
Tutorial presenters will be compensated in accordance with current ACL policy.
Submission: Jan 20, 2006
Notification: Feb 10, 2006
Descriptions due: Mar 1, 2006
Course material due: May 1, 2006
Tutorial date: Jun 4, 2006
Jim Glass, Massachusetts Institute of Technology
Christopher Manning, Stanford University
Douglas W. Oard, University of Maryland
-XXVIth Journées d'Étude sur la Parole (JEP 2006)
June 12-16, 2006
The main topics selected for the conference are:
1 Speech production
2 Speech acoustics
3 Speech perception
4 Phonetics and phonology
6 Speech recognition and understanding
7 Language and speaker recognition
8 Language models
9 Speech synthesis
10 Speech analysis, coding and compression
11 Applications with oral components (dialogue, indexing...)
12 Evaluation, corpora and resources
14 Speech and language acquisition
15 Second language learning
16 Speech pathologies
17 Other ...
IMPORTANT DATES
Submission deadline: March 1, 2006
Notification of acceptance or rejection: April 3, 2006
Final paper submission: May 1, 2006
Conference: June 12-16, 2006
For scientific questions, contact Pascal Perrier, Chair.
For practical information: firstname.lastname@example.org.
-PERCEPTION AND INTERACTIVE TECHNOLOGIES (PIT06)
Kloster Irsee in southern Germany from June 19 to June 21, 2006.
Submissions will be short/demo or full papers of 4-10 pages.
January 31, 2006: Deadline for Long, Short and Demo Papers
March 15, 2006: Notification of acceptance/rejection
April 1, 2006: Deadline for final submission of accepted paper
April 1, 2006: Deadline for advance registration
June 7, 2006: Final programme available on the web
It is envisioned to publish the proceedings in the LNCS/LNAI Series by Springer.
PIT'06 Organising Committee:
Elisabeth André, Laila Dybkjaer, Wolfgang Minker, Heiko Neumann,
Michael Weber, Marcus Hennecke, Gregory Baratoff
- 9th Western Pacific Acoustics Conference (WESPAC IX 2006)
June 26-28, 2006
Program Highlights of WESPAC IX 2006
(by Session Topics)
* Human Related Topics
* Aeroacoustics
* Product Oriented Topics
* Speech Communication
* Analysis: Through Software and Hardware
* Underwater Acoustics
* Physics: Fundamentals and Applications
* Other Hot Topics in Acoustics
WESPAC IX 2006 Secretariat
SungKyunKwan University, Acoustics Research Laboratory
300 Chunchun-dong, Jangan-ku, Suwon 440-746, Republic of Korea
Tel: +82-31-290-5957 Fax: +82-31-290-7055
- MMSP 2006 International Workshop on Multimedia Signal Processing
October 3-6th, 2006
Fairmont Empress Hotel
Multimedia processing: all modalities
Multimedia databases
Multimedia Systems Design, Implementation and Applications
Human-machine interfaces and interaction using multiple modalities
Special session proposals (see website): March 6, 2006
Paper submission: April 8, 2006
Notification of acceptance: June 8, 2006
Camera-ready papers: July 8, 2006