1 . Editorial

Dear Members,

Here is the March issue of ISCApad. Many of you will attend ICASSP 2010 in Dallas, and I wish you an excellent conference.

A very short deadline is drawn to the attention of undergraduate students: CLSP at Johns Hopkins is hiring students for its summer school! (Details under the Job openings section.)

Concerning ISCApad, I invite you to inform your management that they may freely advertise job offers in this newsletter.

Authors, don't forget to send me a description of your latest book!

Please note that the newsletter is published monthly; do not postpone any communication, since it may take a month to be edited and sent to the members.

Prof. em. Chris Wellekens 

Institut Eurecom

Sophia Antipolis




Back to Top

2 . ISCA News


Back to Top

2-1 . A new section of ISCApad: Industry Notes

We are pleased to announce the start of a new monthly section of ISCApad called "Industry Notes". The purpose of this section is to allow our Industry Affiliates to post items of timely interest to ISCA members. If you would like your company to become an ISCA Industry Affiliate, please contact 

Back to Top

3 . Future ISCA Conferences and Workshops (ITRW)


Back to Top

3-1 . (2010-06-28) Odyssey 2010 Brno Czech Republic

Odyssey 2010: The Speaker and Language Recognition Workshop will be hosted by Brno University of Technology in Brno, Czech Republic. Odyssey’10 is an ISCA Tutorial and Research Workshop held in cooperation with the ISCA Speaker and Language Characterization SIG. The need for fast, efficient, accurate, and robust means of recognizing people and languages is of growing importance for commercial, forensic, and government applications. The aim of this workshop is to continue to foster interactions among researchers in speaker and language recognition as the successor of previous successful events held in Martigny (1994), Avignon (1998), Crete (2001), Toledo (2004), San Juan (2006) and Stellenbosch (2008).



Topics of interest include speaker and language recognition (verification, identification, segmentation, and clustering):

* Text-dependent and text-independent speaker recognition
* Multispeaker training and detection
* Speaker characterization and adaptation
* Features for speaker recognition
* Robustness in channels
* Robust classification and fusion
* Speaker recognition corpora and evaluation
* Use of extended training data
* Speaker recognition with speech recognition
* Forensics, multimodality, and multimedia speaker recognition
* Speaker and language confidence estimation
* Language, dialect, and accent recognition
* Speaker synthesis and transformation
* Biometrics
* Human recognition of speaker and language
* Commercial applications


Draft papers due: 15 February 2010
Notification of acceptance: 16 April 2010
Final papers due: 30 April 2010
Preliminary program: 17 May 2010
Workshop: 28 June – 1 July 2010
Back to Top

3-2 . (2010-09-22) 7th ISCA Speech Synthesis Workshop (SSW7), Kyoto,Japan

7th ISCA Speech Synthesis Workshop (SSW7)
Kyoto, Japan - September 22-24, 2010

The Seventh ISCA Tutorial and Research Workshop (ITRW) on Speech
Synthesis will take place at ATR, Kyoto, Japan, September 22-24, 2010.
It is co-sponsored by the International Speech Communication
Association (ISCA), the ISCA Special Interest Group on Speech
Synthesis (SynSIG), the National Institute of Information and
Communications Technology (NICT), and the Effective Multilingual
Interaction in Mobile Environments (EMIME) project.  The workshop will
be held as a satellite workshop of Interspeech 2010 (Chiba, Japan,
September 26-30, 2010). This workshop follows on from the previous
workshops (Autrans 1990, Mohonk 1994, Jenolan Caves 1998, Pitlochry
2001, Pittsburgh 2004, Bonn 2007), which aimed to promote research and
development in all aspects of speech synthesis.

Workshop topics: Submission of papers in all areas of speech synthesis
technology is encouraged, with emphasis placed on:

* Spontaneous/expressive speech synthesis
* Speech synthesis in dialog systems
* Voice conversion/speaker adaptation
* Multilingual/crosslingual speech synthesis
* Automated methods for speech synthesis
* TTS for embedded devices
* Talking heads with animated conversational agents
* Applications of synthesis technologies to communication disorders
* Evaluation methods

Submissions for the technical program:

The workshop program will consist of invited lectures, oral and poster
presentations, and panel discussions.  Prospective authors are invited
to submit full-length, 4-6 page papers, including figures and
references.  All papers will be handled and reviewed electronically.
The SSW7 website will provide further details.

Important dates:

* May 7, 2010: Paper submission deadline
* June 30, 2010: Acceptance/rejection notice
* June 30, 2010: Registration begins
* July 9, 2010: Revised paper due
* September 22-24, 2010: Workshop at ATR in Kyoto 

Back to Top

3-3 . (2010-09-26) 2nd CfP INTERSPEECH 2010 Chiba Japan

 INTERSPEECH2010 Call for Papers

    Makuhari,Japan / September 26-30, 2010
Dear Colleague,
INTERSPEECH is the world's largest and most comprehensive conference on
issues surrounding the science and technology of spoken language
processing (SLP), both in humans and in machines. It is our great pleasure
to host INTERSPEECH 2010 in Japan, the birthplace of ICSLP, which has
hosted two ICSLPs, in Kobe and Yokohama, in the past.
The theme of INTERSPEECH 2010 is "Spoken Language Processing for All Ages,
Health Conditions, Native Languages and Environments". INTERSPEECH 2010
emphasizes an interdisciplinary approach covering all aspects of speech
science and technology, spanning basic theories to applications. Besides
regular oral and poster sessions, plenary talks by internationally renowned
experts, tutorials, exhibits, and special sessions are planned.
     "INTERSPEECH conferences are indexed in ISI"
We invite you to submit original papers in any related area, including but
not limited to:
   * Human speech production
   * Human speech and sound perception
   * Linguistics, phonology and phonetics
   * Intersection of spoken and written languages
   * Discourse and dialogue
   * Prosody (e.g., production, perception, prosodic structure, modeling)
   * Paralinguistic and nonlinguistic cues (e.g., emotion and expression)
   * Physiology and pathology of spoken language
   * Spoken language acquisition, development and learning
   * Speech and other modalities (e.g., facial expression, gesture)
   * Speech analysis and representation
   * Speech segmentation
   * Audio segmentation and classification
   * Speaker turn detection
   * Speech enhancement
   * Speech coding and transmission
   * Voice conversion
   * Speech synthesis and spoken language generation
   * Automatic speech recognition
   * Spoken language understanding
   * Language and dialect identification
   * Cross-lingual and multi-lingual speech processing
   * Multimodal/multimedia signal processing (including sign languages)
   * Speaker characterization and recognition
   * Signal processing for music and song
   * Spoken language technology for prosthesis, rehabilitation, wellness
and welfare
   * Computational linguistics for SLP
   * Written Language Processing for SLP
   * Spoken dialogue systems
   * SLP Systems for information extraction/retrieval
   * Systems for spoken language translation
   * Applications for aged and handicapped persons
   * Applications for learning and education
   * Other applications
   * Spoken language resources and annotation
   * Evaluation and standardization of spoken language systems
Special Sessions
   * Open Vocabulary Spoken Document Retrieval
   * Compressive Sensing for Speech and Language Processing
   * Social Signals in Speech
   * The Voice - a Special Treat for the Social Brain?
   * Quality of Experiencing Speech Services
   * Speech Intelligibility Enhancement for All Ages, Health Conditions,
and Environments
   * INTERSPEECH 2010 Paralinguistic Challenge - Age, Gender, and Affect
   * The Speech Models - Searching for Better Representations of Speech
   * Fact and Replica of Speech Production
Paper Submission
Papers for the INTERSPEECH 2010 proceedings should be up to four pages in
length and conform to the format given in the paper preparation guidelines
and author kits, which are now available on the INTERSPEECH 2010 website
along with the Final Call for Papers. Optionally, authors may submit
additional files, such as multimedia files, to be included on the
Proceedings CD-ROM. Authors shall also declare that their contributions are
original and not being submitted for publication elsewhere (e.g., another
conference, workshop, or journal). Papers must be submitted via the on-line
paper submission system. The deadline for submitting a paper is 30 April
2010. This date will not be extended. Inquiries regarding paper submissions
should be directed via email to
Important dates
 Paper submission deadline: 30 April 2010
 Notification of acceptance or rejection: 2 July 2010
 Camera-ready paper due: 9 July 2010
 Authors' registration deadline: 12 July 2010
 Early registration deadline: 28 July 2010
 Conference dates: 26-30 September 2010
Please visit our website at
General Chair
 Keikichi Hirose
General Vice Chair
 Yoshinori Sagisaka
INTERSPEECH2010 Organizing Committee



Back to Top

3-4 . (2011-08-27) INTERSPEECH 2011 Florence Italy

Interspeech 2011

Palazzo dei Congressi, Florence, Italy, August 27-31, 2011.

Organizing committee

Piero Cosi (General Chair),

Renato De Mori (General Co-Chair),

Claudia Manfredi (Local Chair),

Roberto Pieraccini (Technical Program Chair),

Maurizio Omologo (Tutorials),

Giuseppe Riccardi (Plenary Sessions).

More information

Back to Top

4 . Industry Notes

Carnegie Speech produces systems that teach people how to speak another language understandably. Its products include NativeAccent, SpeakIraqi, SpeakRussian, and ClimbLevel4. You can find out more at You can also read about its being named a Best Breakout Idea of 2009 at:

Back to Top

5 . Workshops and conferences supported (but not organized) by ISCA


Back to Top

5-1 . (2010-05-03) Workshop on Spoken Languages Technologies for Under-Resourced Languages

Workshop on Spoken Languages Technologies for Under-Resourced Languages
The second International Workshop on Spoken Languages Technologies for
Under-Resourced Languages (SLTU’10) will be held at Universiti Sains Malaysia
(USM), Penang, Malaysia, May 3 to May 5, 2010. The workshop is
supported by ISCA, AFCP and CNRS.
The first workshop on Spoken Languages Technologies for Under-Resourced
Languages was organized in Hanoi, Vietnam, in 2008 by the Multimedia,
Information, Communication and Applications (MICA) research center in
Vietnam and the Laboratoire d’Informatique de Grenoble (LIG) in France.
This first workshop gathered 40 participants over two days.
For 2010, we intend to attract more participants, especially from the
local regional zone (Malaysia, Indonesia, Singapore, Thailand,
Australia, ...). The workshop will take place at USM in Penang,
Malaysia. The SLTU research workshop will focus on spoken language
processing for under-resourced languages and aims at gathering
researchers working on:
   * ASR, synthesis and translation for under-resourced languages 
   * portability issues 
   * multilingual spoken language processing 
   * fast resources acquisition (speech, text, lexicons, parallel corpora) 
   * spoken language processing for languages with rich morphology 
   * spoken language processing for languages without separators 
   * spoken language processing for languages without writing system 
   * NLP for rare or endangered languages 
   * … 
Important dates
* Paper submission: December 15, 2009
* Notification of paper acceptance: February 15, 2010
* Author registration deadline: March 1, 2010
Workshop Web site
Workshop Chairs
Laurent Besacier 
Eric Castelli 
Dr. Chan Huah Yong 
Back to Top

5-2 . (2010-05-19) LREC 2010 - 7th Conference on Language Resources and Evaluation

LREC 2010 - 7th Conference on Language Resources and Evaluation


 Special Highlight: Contribute to building the LREC2010 Map!

MAIN CONFERENCE: 19-20-21 MAY 2010
WORKSHOPS and TUTORIALS: 17-18 MAY and 22-23 MAY 2010
Conference web site:
The seventh international conference on Language Resources and Evaluation (LREC) will be organised in 2010 by ELRA in cooperation with a wide range of international associations and organisations.
In 12 years (the first LREC was held in Granada in 1998), LREC has become the major event on Language Resources (LRs) and Evaluation for Human Language Technologies (HLT). The aim of LREC is to provide an overview of the state of the art, explore new R&D directions and emerging trends, and exchange information regarding LRs and their applications, evaluation methodologies and tools, ongoing and planned activities, industrial uses and needs, and requirements coming from the e-society, with respect both to policy issues and to technological and organisational ones.
LREC provides a unique forum for researchers, industry, and funding agencies from across a wide spectrum of areas to discuss problems and opportunities, find new synergies and promote initiatives for international cooperation, in support of investigations in language sciences, progress in language technologies, and development of corresponding products, services, applications, and standards.
Special Highlight: Contribute to building the LREC2010 Map!
LREC2010 recognises that the time is ripe to launch an important initiative, the LREC2010 Map of Language Resources, Technologies and Evaluation. The Map will be a collective enterprise of the LREC community, as a first step towards the creation of a very broad, community-built, Open Resource Infrastructure. As the first in a series, it will become an essential instrument to monitor the field and to identify shifts in the production, use and evaluation of LRs and LTs over the years.
When submitting a paper, from the START page you will be asked to fill in a very simple template to provide essential information about resources (in a broad sense that includes technologies, standards, evaluation kits, etc.) that either have been used for the work described in the paper or are a new result of your research. 
The Map will be disclosed at LREC, where some event(s) will be organised around this initiative. 
Issues in the design, construction and use of Language Resources (LRs): text, speech, other associated media and modalities
•    Guidelines, standards, specifications, models and best practices for LRs
•    Methodologies and tools for LRs construction and annotation
•    Methodologies and tools for the extraction and acquisition of knowledge
•    Ontologies and knowledge representation
•    Terminology 
•    Integration between (multilingual) LRs, ontologies and Semantic Web technologies
•    Metadata descriptions of LRs and metadata for semantic/content markup
•    Validation, quality assurance, evaluation of LRs
Exploitation of LRs in different types of systems and applications 
•    For: information extraction, information retrieval, speech dictation, mobile communication, machine translation, summarisation, semantic search, text mining, inferencing, reasoning, etc.
•    In different types of interfaces: (speech-based) dialogue systems, natural language and multimodal/multisensorial interactions, voice activated services, cognitive systems, etc.
•    Communication with neighbouring fields of applications, e.g. e-government, e-culture, e-health, e-participation, mobile applications, etc. 
•    Industrial LRs requirements, user needs
Issues in Human Language Technologies evaluation
•    HLT Evaluation methodologies, protocols and measures
•    Benchmarking of systems and products
•    Usability evaluation of HLT-based user interfaces (speech-based, text-based, multimodal-based, etc.), interactions and dialogue systems
•    Usability and user satisfaction evaluation
General issues regarding LRs & Evaluation
•    National and international activities and projects
•    Priorities, perspectives, strategies in national and international policies for LRs
•    Open architectures 
•    Organisational, economical and legal issues 
The Scientific Programme will include invited talks, oral presentations, poster and demo presentations, and panels. 
There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered.
Submitted abstracts of papers for oral and poster or demo presentations should consist of about 1500-2000 words.
•    Submission of proposals for oral and poster/demo papers: 31 October 2009 
Proposals for panels, workshops and tutorials will be reviewed by the Programme Committee.
•    Submission of proposals for panels, workshops and tutorials: 31 October 2009
The Proceedings on CD will include both oral and poster papers, in the same format. They will be added to the ELRA web archives before the conference.
A Book of Abstracts will be printed.
Nicoletta Calzolari, Istituto di Linguistica Computazionale del CNR - Pisa, Italy (Conference chair)
Khalid Choukri - ELRA, Paris, France
Bente Maegaard - CST, University of Copenhagen, Denmark
Joseph Mariani - LIMSI-CNRS and IMMI, Orsay, France
Jan Odijk - UIL-OTS, Utrecht, The Netherlands 
Stelios Piperidis - Institute for Language and Speech Processing (ILSP), Athens, Greece
Mike Rosner – Department of Intelligent Computer Systems, University of Malta, Malta
Daniel Tapias - Sigma Technologies S.L., Madrid, Spain
Back to Top

5-3 . (2010-05-25) JEP 2010 Mons Belgium

JEP 2010
         XXVIIIth Journées d'Étude sur la Parole
                    Université de Mons, Belgium
                         25-28 May 2010
The Journées d'Études de la Parole (JEP) are devoted to the study of spoken communication and its applications. The conference aims to bring together the French-speaking scientific communities working in this field, and is also intended as a friendly forum for exchange between doctoral students and established researchers.
In 2010, the JEP are organized by the Laboratoire des Sciences de la Parole of the Académie Wallonie-Bruxelles, on the Université de Mons campus in Belgium, under the auspices of the AFCP
(Association Francophone de la Communication Parlée) and with the support of ISCA (International Speech Communication Association).
A second call for papers, specifying the topics and submission procedures, will follow this first call.
Submission deadline:          11 January 2010
Notification to authors:      15 March 2010
Conference:                   25-28 May 2010
V. Delvaux
FNRS Research Associate
Laboratoire de Phonétique
Service de Métrologie et Sciences du Langage
Université de Mons-Hainaut
18, Place du Parc
7000 Mons
Back to Top

5-4 . (2010-05-25) Young speech researchers at JEP 2010, Mons, Belgium

As part of its international outreach policy, and continuing the initiative launched at the JEPs of 2004 in Morocco, 2006 in Dinard and 2008 in Avignon, the AFCP invites students and young researchers of the speech communication community affiliated with laboratories outside France to take part in the JEP 2010 conference, to be held in Mons, Belgium, from 25 to 28 May 2010 ( ).
This support will cover travel, accommodation and registration costs for a few (4 to 5) young researchers coming from abroad.
Application procedure:
Each candidate should prepare an application (see next page) including:
• a short CV presenting their scientific activities and university education,
• a paragraph explaining their motivation and highlighting the expected benefits of taking part in JEP 2010,
• an estimate of travel costs (see below).
For students, the application must be accompanied by a letter of recommendation from their research supervisor.
Schedule:
• Applications sent by e-mail to Isabelle Ferrané and Corinne Fredouille (, before 15 February 2010
• Acceptance decisions announced by 1 March 2010
• XXVIIIth Journées d'Études sur la Parole, 25-28 May 2010
Notes:
- Submission and acceptance of a scientific contribution to the JEP is not a selection criterion for this invitation
- Priority will be given to candidates from countries under-represented at the JEP
- For your travel cost estimate, you may consult the practical information page of the JEP 2010 website: airports and rail connections ( .
- Participants will be accommodated at the Mons Youth Hostel.
Back to Top

6 . Books,databases and softwares


Back to Top

6-1 . Books


This section lists recent books whose titles have been communicated by the authors or editors.
Some advertisements for recent books on speech are also included.
These book presentations are written by the authors, not by the newsletter editor or any volunteer reviewer.


Back to Top

6-1-1 . Digital Speech Transmission

Digital Speech Transmission
Authors: Peter Vary and Rainer Martin
Publisher: Wiley&Sons
Year: 2006
Back to Top

6-1-2 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods

Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
Joseph Keshet and Samy Bengio, Editors
John Wiley & Sons
March, 2009
Website:  Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
About the book:
This is the first book dedicated to uniting research related to speech and speaker recognition based on the recent advances in large margin and kernel methods. The first part of the book presents theoretical and practical foundations of large margin and kernel methods, from support vector machines to large margin methods for structured learning. The second part of the book is dedicated to acoustic modeling of continuous speech recognizers, where the grounds for practical large margin sequence learning are set. The third part introduces large margin methods for discriminative language modeling. The last part of the book is dedicated to the application of keyword-spotting, speaker
verification and spectral clustering. 
Contributors: Yasemin Altun, Francis Bach, Samy Bengio, Dan Chazan, Koby Crammer, Mark Gales, Yves Grandvalet, David Grangier, Michael I. Jordan, Joseph Keshet, Johnny Mariéthoz, Lawrence Saul, Brian Roark, Fei Sha, Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro.
Back to Top

6-1-3 . Some aspects of Speech and the Brain.

Some aspects of Speech and the Brain.
Susanne Fuchs, Hélène Loevenbruck, Daniel Pape, Pascal Perrier
Editions Peter Lang, January 2009
What happens in the brain when humans are producing speech or when they are listening to it? This is the main focus of the book, which includes a collection of 13 articles written by researchers at some of the foremost European laboratories in the fields of linguistics, phonetics, psychology, cognitive sciences and neurosciences.
Back to Top

6-1-4 . Spoken Language Processing,

Spoken Language Processing, edited by Joseph Mariani (IMMI and
LIMSI-CNRS, France). ISBN: 9781848210318. January 2009. Hardback 504 pp

Publisher ISTE-Wiley

Speech processing addresses various scientific and technological areas. It includes speech analysis and variable rate coding, in order to store or transmit speech. It also covers speech synthesis, especially from text, speech recognition, including speaker and language identification, and spoken language understanding. This book covers the following topics: how to realize speech production and perception systems, how to synthesize and understand speech using state-of-the-art methods in signal processing, pattern recognition, stochastic modeling, computational linguistics and human factor studies. 

More on its content can be found at

Back to Top

6-1-5 . L'imagerie médicale pour l'étude de la parole

L'imagerie médicale pour l'étude de la parole,

Alain Marchal, Christian Cave

Eds Hermes Lavoisier

99 euros • 304 pages • 16 x 24 • 2009 • ISBN: 978-2-7462-2235-9

From the laryngeal mirror to modern videofibroscopy, from static imprint-taking to dynamic palatography, from the beginnings of radiography to magnetic resonance imaging and magnetoencephalography, this book reviews the various imaging techniques used to study speech, from the standpoint of both production and perception. The advantages, drawbacks and limits of each technique are discussed, along with the main results obtained with each of them and their prospects for development. Written by specialists committed to remaining accessible to a broad audience, this book is intended for anyone who studies or deals with speech in their professional activities, such as phoniatricians, ENT specialists, speech therapists and, of course, phoneticians and linguists.





Back to Top

6-1-6 . Korpusbasierte Sprachverarbeitung

Author: Christoph Draxler
Title: Korpusbasierte Sprachverarbeitung
Publisher: Narr Francke Attempto Verlag Tübingen
Year: 2008

Summary: Spoken language is a major area of linguistic research and speech technology development. This handbook presents an introduction to the technical foundations and shows how speech data is collected, annotated, analysed, and made accessible in the form of speech databases. The book focuses on web-based procedures for the recording and processing of high quality speech data, and it is intended as a desktop reference for practical recording and annotation work. A chapter is devoted to the Ph@ttSessionz database, the first large-scale speech data collection (860+ speakers, 40 locations in Germany) performed via the Internet. The companion web site ( contains audio examples, software tools, solutions to the exercises, important links, and checklists. 

Back to Top

6-2 . Database providers


Back to Top

6-2-1 . ELRA Language Resources Catalogue Update

ELRA - Language Resources Catalogue - Update

In the framework of our ongoing campaign for updating and reducing the prices of the language resources distributed in the ELRA catalogue, ELRA is happy to announce that the prices for the following resources have been substantially reduced:

ELRA-S0074 British English SpeechDat(II) MDB-1000
This speech database contains the recordings of 1,000 British speakers recorded over the British mobile telephone network. Each speaker uttered around 40 read and spontaneous items.
For more information, see:

ELRA-S0075 Welsh SpeechDat(II) FDB-2000
This speech database contains the recordings of 2,000 Welsh speakers recorded over the British fixed telephone network. Each speaker uttered around 40 read and spontaneous items.
For more information, see:

ELRA-S0101 Spanish SpeechDat(II) FDB-1000
This speech database contains the recordings of 1,000 Castilian Spanish speakers recorded over the Spanish fixed telephone network. Each speaker uttered around 40 read and spontaneous items.
This database is a subset of the Spanish SpeechDat(II) FDB-4000 (ref. ELRA-S0102).
For more information, see:

ELRA-S0102 Spanish SpeechDat(II) FDB-4000
This speech database contains the recordings of 4,000 Castilian Spanish speakers recorded over the Spanish fixed telephone network. Each speaker uttered around 40 read and spontaneous items.
This database includes the Spanish SpeechDat(II) FDB-1000 (ref. ELRA-S0101).
For more information, see:

ELRA-S0140 Spanish SpeechDat-Car database
The Spanish SpeechDat-Car database contains the recordings in a car of 306 speakers, who uttered around 120 read and spontaneous items. Recordings have been made through 5 different channels, of which 4 were in-car microphones (1 close-talk microphone, 3 far-talk microphones) and 1 channel over the GSM network.
For more information, see:

ELRA-S0141 SALA Spanish Venezuelan Database
This speech database contains the recordings of 1,000 Venezuelan speakers recorded over the Venezuelan fixed telephone network. Each speaker uttered around 50 read and spontaneous items.
For more information, see:

ELRA-S0297 Hungarian Speecon database
The Hungarian Speecon database comprises the recordings of 555 adult Hungarian speakers and 50 child Hungarian speakers who uttered respectively over 290 items and 210 items (read and spontaneous).
For more information, see:

ELRA-S0298 Czech Speecon database
The Czech Speecon database comprises the recordings of 550 adult Czech speakers and 50 child Czech speakers who uttered respectively over 290 items and 210 items (read and spontaneous).
For more information, see:

For more information on the catalogue, please contact Valérie Mapelli

Visit our On-line Catalogue:
Visit the Universal Catalogue:
Archives of ELRA Language Resources Catalogue Updates:  

Back to Top

6-2-2 . LDC News

 In this newsletter:

- 65,000th LDC Corpus Distributed! -

- Membership Year 2010 Discounts Still Available! -




New Publications:








65,000th LDC Corpus Distributed!

LDC has recently reached another milestone.  Two years after having distributed our 50,000th corpus, we have just distributed our 65,000th!  To help us celebrate, we took the names of all the organizations that had licensed data on the day we distributed our 65,000th corpus and tossed them into a Phillies baseball cap. 

We then randomly drew a name, and the winner is ...Swarthmore College and Universidad Carlos III de Madrid!  That's not a typo, we have two lucky winners!  We are celebrating our 65,000th distribution by awarding a benefit of US$2000 each to both Swarthmore College and Universidad Carlos III de Madrid. The benefit can be used towards membership or data licensing fees at any time this year.

Swarthmore College and Universidad Carlos III de Madrid join our other recipients of landmark corpora distributions:

  •     Helsinki University of Technology, Adaptive Informatics Research Centre (AIRC) - licensed our 50,000th distribution in January 2008.
  •     Instituto de Engenharia de Sistemas e Computadores (INESC) - licensed our 40,000th distribution in November 2006.
  •     University of Hawai'i, Manoa, Language Analysis and Experimentation Laboratories - licensed our 15,000th distribution in April 2002.

We would like to thank both members and non-members for helping the LDC reach this landmark distribution. The unceasing demand for LDC data from over 2800 organizations supports our mission to develop and share resources for research in human language technologies. 

About our winners:

Swarthmore College ~ The Department of Computer Science offers courses that emphasize the fundamental concepts of computer science, treating today's languages and systems as current examples of the underlying concepts. By educating students to think conceptually, we are preparing them to adapt to developments in this dynamic field.

Universidad Carlos III de Madrid ~ The Multimedia Processing Group aims to make a significant research contribution to the field of multimedia processing, especially focusing on combining signal analysis tools with emerging machine learning methods. Projects include automatic multimedia indexing, automatic speech recognition, and last-generation video coding.

[ top ]

Membership Year 2010 Discounts Still Available!

If you are considering joining for Membership Year 2010 (MY2010), take note that there is still time to save on membership fees.   Any organization which joins or renews membership for 2010 prior to Monday, March 1, 2010, is entitled to a 5% discount on membership fees.  Organizations which held membership for MY2009 can receive a 10% discount on fees, provided they renew prior to March 1, 2010.  For further information on pricing, please consult our Announcements page or contact LDC.  Information on our planned releases for MY2010 is provided below.

[ top ]

2010 Publications Pipeline

For Membership Year 2010 (MY2010), we anticipate releasing a varied selection of publications. Many publications are still in development, but here is a glimpse of what is in the pipeline for MY2010.  Please note that this list is tentative and subject to modifications.  Our planned publications for the coming months include:

Arabic Treebank: Part 3 v 3.2 ~ a revision of Arabic Treebank: Part 3 (full corpus) v 2.0 (MPG + Syntactic Analysis) (LDC2005T20). The full Arabic Treebank: Part 3 has been revised according to the new Arabic Treebank annotation guidelines.  The Arabic Treebank project consists of two distinct phases: (a) Part-of-Speech (POS) tagging, which divides the text into lexical tokens and gives relevant information about each token, such as lexical category, inflectional features, and a gloss; and (b) Arabic Treebanking, which characterizes the constituent structures of word sequences, provides categories for each non-terminal node, and identifies null elements, co-reference, traces, etc. Arabic Treebank: Part 3 v 3.2 consists of 599 newswire stories from An Nahar.

Chinese Treebank 7.0 ~ this release encompasses 2400 text files, containing 45000 sentences, 1.1 million words and 1.65 million hanzi (Chinese characters). The data is provided in two encodings: GBK and UTF-8, and the annotation has Penn Treebank-style labeled brackets.       

Chinese Web 5-gram Version 1 ~ contains n-grams (unigrams to five-grams) and their observed counts in 880 billion tokens of Chinese web data collected in March 2008. All text was converted to UTF-8. A simple segmenter using the same algorithm used to generate the data is included. The set contains 3.9 billion n-grams total.
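As a rough illustration of what such an n-gram resource contains, here is a minimal sketch of counting unigrams through five-grams over an already-segmented token sequence (the segmenter itself, and counting at web scale, are the hard part and are not shown; this is not LDC's pipeline):

```python
from collections import Counter

def ngram_counts(tokens, max_order=5):
    """Count every n-gram of order 1..max_order in a token sequence."""
    counts = Counter()
    for n in range(1, max_order + 1):
        # slide a window of width n across the sequence
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts
```

Applied to a corpus of segmented sentences, the resulting table of (n-gram, count) pairs is essentially what the release distributes.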

NPS Chat Corpus Version 1.0 ~ consists of 10,567 posts gathered from age-specific chat rooms. Each file is a recording transcript from one of these chat rooms for a short period on a particular day.   In order to comply with the chat services' terms of service, the posts have been privacy-masked.   Each post is annotated with a chat dialog-act tag, and individual tokens within each post are annotated with part-of-speech tags.

WTIMIT ~ is a mobile wideband (i.e., 50 Hz to 7 kHz) telephone adjunct to TIMIT (LDC93S1).  WTIMIT was derived as follows: the original TIMIT speech files, at a 16 kHz sampling rate, were concatenated into 11 signal chunks, each preceded by a 4-second calibration tone. These speech chunks were transmitted via two prepared Nokia 6220 mobile phones over T-Mobile’s 3G wideband mobile network in The Hague, The Netherlands, employing the Adaptive Multirate Wideband (AMR-WB) speech codec. After data acquisition and deconcatenation by maximizing the normalized cross-correlation with the original speech files, a database was obtained that is time-aligned with the original TIMIT data with good precision. Accordingly, all TIMIT label files can still be used.  WTIMIT is suitable for research on speech quality and intelligibility, and for investigations of possible wideband upgrades of network-side IVR systems with retrained or bandwidth-extended acoustic models for automatic speech recognition.  WTIMIT will be presented at LREC2010.
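The deconcatenation step mentioned above, i.e., time-aligning the received channel with the original recording by maximizing normalized cross-correlation, can be sketched as follows. This is an illustrative toy implementation, not the code used to build WTIMIT:

```python
import math

def normalized_xcorr(ref, seg):
    """Normalized cross-correlation between two equal-length windows."""
    num = sum(a * b for a, b in zip(ref, seg))
    den = (math.sqrt(sum(a * a for a in ref)) *
           math.sqrt(sum(b * b for b in seg)) + 1e-12)  # guard against silence
    return num / den

def find_offset(ref, sig, max_lag):
    """Lag (in samples) at which `ref` best matches a window of `sig`."""
    last = min(max_lag, len(sig) - len(ref))
    return max(range(last + 1),
               key=lambda lag: normalized_xcorr(ref, sig[lag:lag + len(ref)]))
```

For real signals one would search over the calibration tone rather than the whole file and use an FFT-based correlation for speed, but the alignment principle is the same.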

2010 Subscription Members are automatically sent all MY2010 data as it is released.  2010 Standard Members are entitled to request 16 corpora for free from MY2010.   Non-members may license most data for research-use only.

[ top ]


New Publications

(1)  Fisher Spanish Speech was developed by LDC and consists of audio files covering roughly 163 hours of telephone speech from 136 native Caribbean Spanish and non-Caribbean Spanish speakers. Full orthographic transcripts of these audio files are available in Fisher Spanish - Transcripts (LDC2010T04).

The Fisher telephone conversation collection protocol was created at LDC to address a critical need of developers trying to build robust automatic speech recognition (ASR) systems. Under the Fisher protocol, a very large number of participants each make a few calls of short duration speaking to other participants, whom they typically do not know, about assigned topics. This maximizes inter-speaker variation and vocabulary breadth although it also increases formality.  Previous protocols such as CALLHOME, CALLFRIEND and Switchboard relied upon participant activity to drive the collection. Fisher is unique in being platform driven rather than participant driven. Participants who wish to initiate a call may do so; however the collection platform initiates the majority of calls. Participants need only answer their phones at the times they specified when registering for the study.

To encourage a broad range of vocabulary, Fisher participants are asked to speak on an assigned topic which is selected at random from a list, which changes every 24 hours and which is assigned to all subjects paired on that day. Some topics are inherited or refined from previous Switchboard studies while others were developed specifically for the Fisher protocol.

In collecting data for this corpus, attempts were made to provide a representative distribution of subjects across a variety of demographic categories including: gender, age, dialect region, and education level.  Native speakers of Caribbean Spanish and non-Caribbean Spanish were recruited from within the continental United States and Puerto Rico.

The speech recordings consist of 819 telephone conversations of 10 to 12 minutes in duration. They are provided as digital audio files in NIST SPHERE format (1024-byte ASCII file headers). The conversations were recorded as 2-channel mu-law sample data with 8000 samples per second (as captured from the public telephone network).
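Mu-law telephone audio of this kind can be expanded to 16-bit linear PCM with the standard G.711 decoding rule. A minimal per-sample sketch (SPHERE header parsing and channel de-interleaving are omitted):

```python
def mulaw_decode(code):
    """Expand one 8-bit G.711 mu-law code to a 16-bit linear PCM value."""
    code = ~code & 0xFF                 # mu-law codes are stored bit-inverted
    sign = code & 0x80                  # top bit: 1 means negative sample
    exponent = (code >> 4) & 0x07       # 3-bit segment number
    mantissa = code & 0x0F              # 4-bit position within the segment
    magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -magnitude if sign else magnitude
```

For example, the code 0xFF decodes to 0 and 0x00 to -32124, matching the standard G.711 expansion tables.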

Fisher Spanish Speech is distributed on 2 DVD-ROM.

2010 Subscription Members will automatically receive two copies of this corpus.  2010 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2500.

[ top ]


(2) Fisher Spanish - Transcripts was developed by LDC and contains full orthographic transcripts of the telephone speech in Fisher Spanish Speech (LDC2010S01). Transcripts cover roughly 163 hours of telephone speech from 136 native Caribbean Spanish and non-Caribbean Spanish speakers.

The transcript files are in plain-text, tab-delimited format (tdf) with UTF-8 character encoding. They were created with the LDC-developed transcription tool "XTrans", which allowed for improved handling of multi-channel audio and overlapping speakers. XTrans is available from LDC.

Transcribers followed LDC's Transcription Guidelines (NQTR), which are included with the documentation for this release.

Fisher Spanish Speech (LDC2010S01) provides the digital audio used as the basis for the transcriptions in this corpus, in the form of 2-channel mu-law sample data with 8000 samples per second (as captured from the public telephone network), for 819 telephone conversations of 10 to 12 minutes in duration. The audio files are in NIST SPHERE format (1024-byte ASCII file headers).

Fisher Spanish - Transcripts is distributed via web download.

2010 Subscription Members will automatically receive two copies of this corpus on disc. 2010 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1500.

[ top ]




Back to Top

7 . Job openings

We invite all laboratories and industrial companies which have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (also have a look at as well as Jobs). 

The ads will be automatically removed from ISCApad after 6 months. Informing the ISCApad editor when a position has been filled will avoid unnecessary exchanges between applicants and proposers.

Back to Top

7-1 . (2009-10-08) Computational Linguist or Research engineer with an interest in NLP and translation technologies


Paid Traineeship - Computational Linguist or Research engineer with an interest in NLP and translation technologies



ITS (Information Technology Support) - Research and Development Team

Directorate General for Translation,

European Parliament,



Description of working environment

The Information Technology Support Unit (ITS DGTRAD) is the unit that provides technical and logistical support to Parliament’s translation units. ITS provides its users with standard IT support services by manning helpdesks, providing first-level user support, installing and trouble-shooting user configurations, running file, print and web servers, and providing second-level support. It caters specifically for translation needs by providing its users with a palette of tools - commercial (TWB), inter-institutional (IATE, Euramis) and in-house (Fuse, FullDoc, etc.) - and by integrating these tools into a coherent working environment and providing effective training and support in their use.
ITS promotes the sharing of information and the adoption of best practices amongst its users by providing a Translation Service Portal and publishing a newsletter. It also represents Parliament in a number of inter-institutional bodies concerned with technical questions related to translation.

Mission (tasks):

The Unit for IT support of the Translation DG of the European Parliament invites applications for a 5-month internship in its Research and Development team. Our current projects focus on developing and adapting language technology tools to assist the work of one of the largest translation services in the world.

As a member of our team you will have the possibility to work as a researcher and/or developer on one or more of the following topics:

  • Machine Translation
  • Indexing
  • Text Categorisation
  • Controlled language
  • Automatic Language Recognition
  • Multilingual and Crosslingual Information retrieval.

Depending on her/his field, the selected candidate will have to carry out one or more of the following tasks:

  • Research
  • Needs Analysis
  • Development
  • Documentation

The possibility of combining this work with a master's or PhD thesis can be discussed.


The ideal candidate should have a very good Bachelor's degree in Computational Linguistics, Information Science or another related field, and a strong interest in Natural Language Processing demonstrated by relevant research papers, university assignments or publications.


Technical knowledge and experience:


  • A strong background in at least one of the task areas mentioned above
  • Good programming skills in Java, Java for Web-applications and/or Visual Basic
  • SQL/Oracle and XML knowledge would be an asset
  • Statistical NLP



Knowledge of at least one official EU language is required; English as a working language is mandatory. Any other EU language would be considered an asset.



We are looking for highly motivated, communicative candidates who would like to work in a friendly, multinational and multilingual environment in the heart of Europe. Creativity and a strong interest in Language and Translation technologies will definitely be considered as major advantages.



Back to Top

7-2 . (2009-10-08) Post-doc position in speech recognition/modeling at TTI-Chicago

### Post-doc position in speech recognition/modeling at TTI-Chicago ###

A post-doc position is available at TTI-Chicago. It includes opportunities for work on articulatory modeling, graphical models, discriminative learning, large-scale data analysis, and multi-modal (e.g. audio-visual) modeling.

The post-doc will be mainly working with Karen Livescu, and will interact with collaborators Jeff Bilmes (U. Washington), Eric Fosler-Lussier (Ohio State U.), and Mark Hasegawa-Johnson (U. Illinois at Urbana-Champaign).

To apply, or for additional information, please contact Karen Livescu at There is also an opportunity for a shorter-term post-doc project on annotation of speech at the articulatory level. Please contact for more details.

Back to Top

7-3 . (2009-10-19) Open positions/internships at Microsoft: German Linguists (M/F)

Open positions/internships at Microsoft: German Linguists (M/F)

MLDC – Microsoft Language Development Center, a branch of the Microsoft Product Group that develops Speech Recognition and Synthesis Technologies, situated in Porto Salvo, Portugal, is seeking a part-time or full-time temporary language expert in the German language, for a 2-month contract, renewable, to work on language technology related development projects. The successful candidate should meet the following requirements:

·         Be a native or near-native German speaker

·         Have a university degree in Linguistics (with good computational skills) or Computational Linguistics (Master’s or PhD)

·         Have an advanced level of English (oral and written)

·         Have some experience in working with Speech Technology/Natural Language Processing/Linguistics, either in academia or in industry

·         Have some computational ability: able to run tools, comfortable working with Microsoft Office tools, and with some programming fundamentals, though no programming is required

·         Have teamwork skills

·         Be willing to work in Porto Salvo (near Lisbon) for the duration of the contract

·         Be willing to work in a multicultural and multinational team across the globe

·         Be willing to start immediately

To apply, please submit your resume and a brief statement describing your experience and abilities to Daniela Braga:

We will only consider electronic submissions.

Deadline for submissions: open until filled.



Bruno Reis Bechtlufft | MLDC Trainee

Back to Top

7-4 . (2009-11-20) Post-doc LIMSI Paris

The Spoken Language Processing group at LIMSI/CNRS
( is recruiting a postdoctoral researcher to take part
in the ANR EDyLex project.

EDyLex is a project funded by the ANR under the CONTINT (Content and
Interaction) programme; it concerns the dynamic acquisition of new lexical
entries in linguistic processing chains (syntactic/semantic analysis or
speech transcription systems): how can an unknown word or a new named
entity be detected and qualified in a text or in a speech stream? How can
it be assigned a phonetic form, a category, syntactic properties, and a
place in a semantic network?

The work advertised here concerns more specifically the handling of new or
unknown words, so that they can be recognized by a speech transcription
system. This involves detecting unknown words, using sub-lexical units,
phonetizing new words with variants, and adapting the language models.
These methods will be validated internally within the project and in
future national or international evaluation campaigns such as STD (Spoken
Term Detection), organized by NIST.

The EDyLex consortium is composed of the Alpage team
(INRIA-Univ. Paris 7), which coordinates the project; the Laboratoire
d'Informatique Fondamentale de Marseille (a CNRS UMR); Syllabs and Vecsys
Research, two private research laboratories, the former specialized in
natural language processing and the latter in speech processing; AFP,
which provides the corpora and will evaluate the contribution of the
developed techniques within its information system; and LIMSI, through
both its written-language (ILES) and spoken-language (TLP) groups.

The project started on 1 November 2009 and will run for 3 years.

Candidates must be able to program in a Unix environment and to speak and
write both English and French. They must hold a doctorate in one of the
following fields: speech processing or natural language processing.

Being involved in a research project in the TLP group at LIMSI offers an
exceptional opportunity to work on varied research problems, within a
leading research team, and to be in contact with the most important
academic and industrial laboratories in the field.

The contract duration is 1 year, renewable.

The work will take place at LIMSI/CNRS, located in Orsay, south of Paris.

Applications, together with a CV, should be sent to Gilles ADDA (

Back to Top

7-5 . (2009-11-25) Proposition de these CIFRE Univ de Strasbourg France

CIFRE PhD proposal in computer science: Compilation of phono-lexical units of spoken language.

This thesis is part of a project to build a digital platform, including the design of a software system for, among other things, learning to write from oral expression, automatically correcting spelling and grammatical errors, and semantically indexing texts. The project brings together researchers in phonetics, in linguistics (syntax and intrinsic semantics) and in computer science (compilation and code optimization).

The adopted approach considers that writing implies the primacy of phonic substance (sounds, syllables), and that a unitary model must therefore be designed from the outset on the coupling between the phonological system, characterized by the variability of sounds, and the grammatical system, characterized by the stability of graphic units. The methodological hypothesis is that this coupling can be represented in the front end of a compiler, via the procedures that manage the symbol table and handle errors at the levels of phono-lexical analysis, syntactic analysis and semantic analysis.

At the phonic level, defining lexical units means treating intonation, stress, syllables and phonemes simultaneously, in order to segment the speech chain into phonological words that can be represented by symbols. The transition to writing thus requires a method for recursively constructing units at different levels of analysis, through inference rules that map units at the phonological level to units at the grammatical level: the definition of units at the phonological level is elaborated simultaneously with, and coupled to, that of the grammatical units constituting the utterance, following a top-down method. Hence the hypothesis of a hierarchy of units: a grammatical unit is a relational structure built from atomic attributes defined by the hierarchy of a formal grammar.

Unlike the lexical units of programming languages, each lexical unit can play several different roles depending on the context in which it appears, so the type associated with it is multiple. For example, "sort" can act as a verb or as a common noun, depending on its place in the sentence being analysed. That place can only be identified at the syntactic-analysis stage, by checking against the production rules of the grammar.
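The kind of context-dependent disambiguation described here can be illustrated with a purely hypothetical toy: a word such as "sort" carries several candidate categories in the lexicon, and the analysis keeps only those compatible with its position. The lexicon and the two positional rules below are invented for this sketch and are not part of the proposed system:

```python
# Toy lexicon: each word maps to its set of candidate categories.
LEXICON = {
    "they": {"PRON"},
    "sort": {"VERB", "NOUN"},   # multiple candidate types, as in the text
    "the": {"DET"},
    "cards": {"NOUN"},
}

def disambiguate(words, lexicon):
    """Narrow each word's candidate categories using its left context."""
    tags = []
    for i, w in enumerate(words):
        cands = set(lexicon[w])
        if i > 0:
            prev = tags[i - 1]
            if prev == {"PRON"} and "VERB" in cands:
                cands = {"VERB"}      # after a pronoun, read it as a verb
            elif prev == {"DET"} and "NOUN" in cands:
                cands = {"NOUN"}      # after a determiner, read it as a noun
        tags.append(cands)
    return tags
```

In a real front end these positional rules would be the grammar's production rules, applied during parsing rather than in a left-to-right pass.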

The compilation process must then serve to extract information from the semantics of the analysed text, expressed through an "elementary" syntax of abstract syntax trees labelled with attributes.

The top-down sentence-analysis model, which makes it possible to control the generation and analysis of sentences of increasing complexity, will serve as the model for building a relational database made up of tables linked to the attributes of the units of meaning. The table fields will represent the operations on the units (substitution, movement, addition, reduction).

The thesis will be carried out in collaboration between the University of Strasbourg and the company Digora in Strasbourg, an Oracle partner. On the university side, the student will be supervised by Rudolph Sock (phonetician), Gérard Reb (linguist) and Philippe Clauss (computer scientist). The student will alternate between periods of work in the laboratory (LSIIT laboratory, Strasbourg) and in the company, and will work with a postdoctoral phonetician. The thesis may start as soon as the administrative procedure establishing the CIFRE agreement is completed, for a duration of 3 years. The minimum salary is €23,484 gross per year.

Contact Philippe Clauss (

Links:
LSIIT laboratory:
Digora:
University of Strasbourg:
CIFRE scheme:

Back to Top

7-6 . (2009-11-26) Postdocs at LIMSI Paris

The Spoken Language Processing Group at the LIMSI/CNRS ( is looking for postdocs, non-permanent research engineers, and doctoral students to participate in a number of research projects funded by national and European programs.

The main research areas are:

* Core technology for speech recognition (acoustic modeling, language modeling, ...)

* Speech-to-speech translation

* Speaker recognition and speaker diarization

* Language identification

* Audio indexing in a multilingual context (English, French, German, Arabic, Mandarin, Spanish, Portuguese, Italian, Dutch, Finnish, Danish, Greek, ...)

Preference will be given to candidates with experience in one or more of the following areas: speech processing, computational linguistics, signal processing and computer science.

Applicants should be experienced programmers and be familiar with the Unix environment, and be able to speak and write in English.

Contract duration for postdoc and research engineers: 1 to 3 years Location: LIMSI/CNRS Orsay, France (South of Paris)

Projects at LIMSI offer an exceptional opportunity to address challenging research problems with some of the most prestigious academic and industrial partners. Interested candidates should send a CV to Jean-Luc Gauvain (

Back to Top

7-7 . (2009-11-27) PhD positions for the CMU-Portugal program

Ph.D. Program Carnegie Mellon-Portugal in the area of Language and Information Technologies

The Language Technologies Institute (LTI) of the School of Computer Science at Carnegie Mellon University offers a dual-degree Ph.D. Program in Language and Information Technologies in cooperation with Portuguese Universities.

This Ph.D. program is part of the Portugal-Carnegie Mellon Partnership. The Language Technologies Institute, a world leader in the areas of speech processing, language processing, information retrieval, machine translation, machine learning, and bio-informatics, was formed 20 years ago. The breadth of language technologies expertise at LTI enables new research in combinations of the core subjects, for example, in speech-to-speech translation, spoken dialog systems, language-based tutoring systems, and question/answering systems. The Portuguese consortium of Universities includes (but is not limited to) the Spoken Language Systems Lab (L2F) of INESC-ID Lisbon/IST, the University of Lisbon (FLUL), the University of Beira Interior (UBI) and the University of Algarve (UALG). These Universities share expertise in the same language technologies as LTI, although with a strong focus on processing the Portuguese language.

 Each Ph.D. student will receive a dual degree from LTI and the selected Portuguese University, being co-supervised by one advisor from each institute, and spending approximately half of the 5-year doctoral program at each institute. The academic part will be done during the first 2 years, including a maximum of 8 courses, with a proper balance of focus areas (Linguistic, Computer Science, Statistical/Learning, Task Orientation). The remaining 3 years of the doctoral program will be dedicated to research.

The thesis topic will be in one of the research areas of the cooperation program, defined by the two advisors. Two multilingual topics have been identified as primary research areas (although other areas of human language technologies may be also contemplated): computer aided language learning (CALL) and speech-to-speech machine translation (S2SMT). The doctoral students will be involved in one of these two projects aimed at building real HLT systems. These projects will involve at least two languages, one of them being Portuguese, the target language for the CALL system to be developed and either the source or target language (or both) for the S2SMT system. These two projects provide a focus for the proposed research; through them the collaboration will explore the main core areas in language technology.

The scholarship will be funded by the Foundation for Science and Technology (FCT), Portugal.

How to Apply

The application deadline for the LT Ph.D. program in the scope of the CMU-Portugal partnership is December 15, 2009.

Students interested in the dual doctoral program must apply by filling in the corresponding form on the LTI webpage. For more information about the joint doctoral program in LT, send email to the coordinators of the Portuguese consortium and of LTI admissions: Isabel.Trancoso at inesc-id dot pt or LTI_Portugal_Admissions at cs dot cmu dot edu. The applications will be screened by a joint committee formed by representatives of LTI and of the Portuguese Universities. Candidates should indicate their scores in the GRE and TOEFL tests. Despite the particular focus on the Portuguese language, applications are in no way restricted to native speakers of Portuguese. Post-doc positions are also available in the scope of the same program. For additional information on these positions, contact Isabel.Trancoso at inesc-id dot pt.

Back to Top

7-8 . (2009-12-08) PhD position at LORIA-INRIA Nancy, France speech team

PhD position at LORIA-INRIA Nancy, Speech team

Through a collaboration with a company located in Epinal, which sells documentary rushes, we are interested in indexing these rushes using automatic recognition of the rush dialogues. The Speech team has developed a system for automatic transcription of broadcast news: ANTS. While the performance of automatic transcription systems like ANTS is satisfactory for read or "prepared" speech (news), it degrades significantly for spontaneous speech. Compared to prepared speech, spontaneous speech is characterized by:
 - insertions (hesitations, pauses, false starts);
 - pronunciation variations such as the contraction of words or syllables (/want to/ > /wanna/);
 - changes in speaking rate (reduced articulation of some phonemes and lengthening of others);
 - difficult acoustic environments (overlapping speech, laughter, ambient noise, ...).
Usually, these features are not taken into account by the recognition system. All these phenomena cause recognition errors and may lead to incorrect indexing. The purpose of the thesis is to take into account the specific phenomena of spontaneous speech, such as hesitations, pauses and false starts, in order to improve the recognition rate. To do this, it will be necessary to model these phenomena. We have a large corpus of speech in which these events have been labeled; this corpus will be used to select parameters, estimate models and evaluate the results.

Scope of work
The work will be done within the Speech team of INRIA-LORIA. The student will use the ANTS automatic speech recognition software developed by the team.

Candidate profile
Candidates should know how to program in a Unix environment and be able to speak and write English. Knowledge of stochastic modeling or automatic speech processing is desirable. Applicants should be fluent in English or in French; competence in French is optional, though applicants will be encouraged to acquire it during training. This position is funded by the ANR. Strong software skills are required, especially Unix/Linux, C, Java, and a scripting language such as Perl or Python.

contact: or
Back to Top

7-9 . (2010-01-16) Post-doctoral position in France: Signal processing and Experimental Technique for a Silent Speech Interface.

Postdoctoral position in Paris, France: Signal Processing and Experimental Technique for a Silent Speech Interface
Deadline: 30/04/2010

The REVOIX project in Paris, France is seeking an excellent candidate for a 12-month postdoctoral position, starting as soon as possible. REVOIX (ANR-09-ETEC-005), a partnership between the Laboratoire d’Electronique ESPCI ParisTech and the Laboratoire de Phonétique et Phonologie, will design and implement a vocal prosthesis that uses a miniature ultrasound machine and a video camera to restore the original voice of persons who have lost the ability to speak due to laryngectomy or a neurological problem. The technologies developed in the project will have an additional field of application in telecommunications, in the context of a "silent telephone" allowing its user to communicate orally but in complete silence (see the special issue of Speech Communication on Silent Speech Interfaces, appearing March 2010). The project will build upon promising results obtained in the Ouisper project (ANR-06-BLAN-0166), which was completed at the end of 2009.

The interdisciplinary REVOIX team includes junior and senior university and medical research staff with skills in signal processing, machine learning, speech processing, phonetics, and phonology. The ideal candidate will have solid skills in signal processing, preferably with speech experience, as well as in experimental techniques for man-machine interfaces, coupled with a strong motivation for working in an interdisciplinary environment to produce a working, portable silent speech interface system for use in medical and telecommunication applications. Salary is competitive for European research positions.

Contact: Professor Bruce DENBY
Back to Top

7-10 . (2010-01-22) Modelling human speech perception Univ. of Plymouth, UK

Modelling human speech perception

Internal advisors: Dr Susan Denham, Dr Jeremy Goslin and Dr Caroline Floccia (School of Psychology, University of Plymouth)
External advisor: Dr Steven Greenberg (Silicon Speech, USA)

Applications are invited for a University-funded studentship to start in April 2010.

Although artificial speech recognition systems have improved considerably over the years, their performance still falls far short of human abilities, and their robustness in the face of changing conditions is limited. In contrast, humans and other animals are able to adapt, seemingly effortlessly, to different listening environments, and are able to communicate effectively with one another in many different circumstances. In this project we aim to investigate a novel theoretical model of human speech perception based on cortical oscillators. We take as our starting point the observation that natural communication sounds contain temporal patterns or regularities evident at many different time scales (Winkler, Denham et al. 2009). The proposal is that the speech message can be extracted through adaptation of a hierarchically organised system of neural oscillators to the characteristic multi-scale temporal patterns present in the speech of the target speaker, and that by doing so extraneous interfering sounds can be simultaneously rejected. This proposal will be tested using electrophysiological measurements of listeners attending to speech in different background sounds, analyzing activity at various pre-lexical and lexical processing levels (e.g. Goslin, Grainger et al. 2006), for application in the development of a biologically inspired computational model of human speech perception.

We are looking for a highly qualified and motivated student with a strong interest in auditory perception, sounds and speech perception. You will join a well-established research environment and work alongside the brain-technology team, which is currently funded by the multi-centre European project SCANDLE and a new joint British ESRC/French ANR project, RECONVO (investigating multilingual speech development).

Requirements: knowledge of experimental methods and/or programming experience with a high-level language. Desirable: knowledge of signal processing techniques, models of auditory perception, and electrophysiological techniques. Candidates should have a first or upper second class honours degree in an area related to Cognitive Neuroscience (Computer Science, Maths, Physics, Electrical Engineering, Neuroscience, or Psychology). Applicants with a relevant MSc or MRes are particularly welcome. The studentship provides a fully funded full-time PhD post for three years, with a stipend of approximately £13,290 per annum. The position is open to UK citizens and to EU citizens with appropriate qualifications who have been resident or studied in the UK for three years.

For informal queries please contact Dr Susan Denham. For an application form and full details on how to apply, please visit:
Applicants should send a completed application form along with the following documentation to The University of Plymouth, Postgraduate Admissions Office, Hepworth House, Drake Circus, Plymouth, PL4 8AA, United Kingdom:

• Two references in envelopes signed across their seals
• Copies of transcripts and certificates
• If English is not your first language, evidence that you meet our English Language requirements
• CV
• Ethnic and Disability Monitoring Form

Closing date: 5PM, Monday 15 February 2010. Interviews will be held at the end of February 2010, with a proposed start date of 1 April 2010.

References
Goslin, J., J. Grainger, et al. (2006). "Syllable frequency effects in French visual word recognition: an ERP study." Brain Res 1115(1): 121-34.
Winkler, I., S. L. Denham, et al. (2009). "Modeling the auditory scene: predictive regularity representations and perceptual objects." Trends Cogn Sci 13(12): 532-40.
Back to Top

7-11 . (2010-01-25) Postdoc at Aalto University (Espoo, Finland)

Aalto University Postdoc (Espoo, Finland)

The Department of Signal Processing and Acoustics will have a postdoctoral research position for the time period of 1 August 2010 - 31 December 2012 related to one of the following fields:

Digital signal processing in wireless communications, sensor array signal processing, speech processing, audio signal processing, spatial sound, or optical radiation measurements

Successful applicants are expected to strengthen and extend the department's current research and teaching in their field of expertise. The applicants are expected to have earned their doctoral degree between 1 January 2005 and 31 May 2010.

This recruitment is a result of the department's success in the recent Research Assessment Exercise, and we are looking for strong candidates from all over the world.

The postdoc will be expected to participate in the department's teaching. The annual salary starts from 39 500 euros depending on experience.

Applications should include:

  • Research Plan
  • CV
  • List of publications
  • Names and contact information of 1-3 referees
  • Optional 1-2 letters of recommendation

Please send your applications by email. Each application should be in the form of a single PDF file, named "surname_application.pdf". Applications are due 15 March 2010.



Back to Top

7-12 . (2010-01-30) PhD position at ACLC/NKI-AVL, Amsterdam

One PhD position at ACLC/NKI-AVL 2010

The Amsterdam Centre for Language and Communication (ACLC) focuses on the description of, and explanations for, variation in languages and language use. The ACLC includes both functional and formal approaches to language description and encourages dialogue between these approaches. Studies cover all aspects of speech and language: phonetics, phonology, morphology, syntax, semantics and pragmatics, in a search for the Language Blueprint. Language typology, including that of Creole and signed languages, plays an important part in the ACLC programme. Language variation in terms of time, space and context is also a specialization, as is the study of variation in the different types of language user, from the child learning her first language to the adult second language learner, including different types of language pathology.

Questions of speech and language loss and (re-)acquisition are a focus of the ACLC. The course of speech rehabilitation after serious pathologies of the head and neck area is an example of such loss and re-acquisition. The Department of Head and Neck Oncology and Surgery at The Netherlands Cancer Institute/Antoni van Leeuwenhoek Hospital (NKI-AVL), in collaboration with the Academic Medical Center (AMC), is involved in patient care, education and scientific research in the field of head and neck cancer. The department has a long history of quality-of-life research, focusing on the functional side effects of head and neck cancer and its treatment. The most common tumours include mouth and tongue, throat, and larynx (voice box) cancer. Voice and speech disorders related to head and neck cancer treatment, and the rehabilitation thereof, are extensively studied in close collaboration with the ACLC.

The PhD project

Title: Automatic evaluation of voice and speech rehabilitation following treatment of head and neck cancers.

Abstract: The research project will study the use of existing Automatic Speech Recognition (ASR) applications to evaluate pathologic speech after treatment of head and neck cancers in a clinical setting. The aim is to obtain therapeutically meaningful measures of the speech quality of individual patients over the course of their treatment. Basic and applied research into the properties and pathologies of Tracheo-Esophageal (TE) speech following laryngectomy has a long history at the ACLC. The current project also includes the effects of other treatments, e.g. radio- and chemotherapy. The project could also contribute to a practical end goal where ASR systems could in the future be used to obtain objective information on speech quality, in real time during treatment and rehabilitation sessions. Such objective information is needed for evidence-based medical treatment and is currently lacking. Emphasis will be given to studying the relation between medical history, speech and voice acoustics, and specific ASR results for individual patients. Of special interest are word recognition errors that can be traced to specific phrasing, prosodic, and phoneme errors known to affect TE speakers. The candidate will study how pre-recorded patient materials can be evaluated using existing ASR applications and will process the results. The candidate will collaborate with laboratories in Belgium and Germany.

Application and procedure

You have to apply as a candidate. Please follow the Guidelines for applying for an internal PhD position 2010 (see below under Information).

Tasks

The PhD student needs to carry out the research and write a dissertation within the duration of the project (4 years at 80%, or 3.3 years full time).

Requirements

Educational background: logopedics, linguistics, or phonetics, with an affinity for speech pathology.
Experience: experience with speech technology and perception experiments is welcome.

Information

The following documents give precise information about the application procedure: the project description "Automatic evaluation of voice and speech rehabilitation following treatment of head and neck cancers" and the ACLC guidelines for application 2010. NB: incomplete applications will be automatically rejected, so please read the guidelines carefully. Further information can be obtained from the intended supervisors of this project, Prof. Dr. Frans Hilgers (phone +31.20.512.2550) and Dr. Rob van Son, or from the managing director of the ACLC, Dr. Els Verheugd (phone +31.20.525.2543). The original position can be found at the ACLC web site.

Position

The PhD student will be appointed for a period of 4 years (80%) or 3.3 years (full time) at the Faculty of Humanities of the University of Amsterdam under the terms of employment currently valid for the Faculty. A contract will be given in the first instance for one year, with an extension for the following years on the basis of an evaluation of, amongst other things, a written piece of work. The salary (on a full-time basis) will be € 2,042 gross per month during the first year, reaching € 2,612 during the fourth year, in accordance with the CAO for Dutch universities.

Submissions

Applications should be submitted before 22 February 2010 (in the case of a paper version, to the director of the ACLC, Prof. Dr. P.C. Hengeveld, Spuistraat 210, 1012 VT Amsterdam). Applications received after this date, or that are incomplete, will not be taken into consideration.
Back to Top

7-13 . (2010-02-01) DGA opens a position in its language processing team (France)

The DGA (the French defence procurement agency) is opening a position in its language processing team.

* Position and missions:

Working with the scientific and industrial players in automatic spoken and written language processing, and in order to meet the short- and long-term needs of defence operations, you will be in charge of designing, specifying, monitoring and evaluating technological projects in the field.

To carry out these projects effectively, you will also perform active technology watch, coordination actions at the national and international levels, and study and software development work.

* Profile:

Experience in automatic language processing combined with project management skills is sought. Fluency in English and experience of international relations are a plus.

An engineering degree from a "grande école", or an equivalent five-year (bac+5) degree, is required.

* Reference

Applications may be sent either via the APEC or directly.

Back to Top



7-14 . Faculty position in Ambient Multimedia at EURECOM (Sophia Antipolis, France)

The Multimedia Communications Department of EURECOM invites applications
for a faculty position at the Assistant/Associate Professor level. The
new faculty is expected to participate in teaching in our Master program
and to develop a new research activity in
                                      Ambient Multimedia.
We are especially interested in research directions which may extend our
existing activities in audio and video analysis towards pioneering new
approaches to interaction between people and their environment, in
everyday life or professional situations, for better productivity,
security, healthcare or entertainment.

Candidates must have a Ph.D. in computer science or electrical
engineering and between 5 and 10 years of research experience after PhD.
The ideal candidate will have an established research track record at
the international level, and a proven record of successful collaboration
with academic and industrial partners in national and European programs
or equivalent. A strong commitment to excellence in research is
mandatory. Exceptional candidates may be considered at the senior level.

Screening of applications will begin in January, 2010, and the search
will continue until the position is filled. Applicants should send, by
email, a letter of motivation, a resume including a list of their
publications, the names of 3 referees and a copy of their three most
important publications, to:

EURECOM is a graduate school in communication systems founded in 1992 by
EPFL (Swiss Federal Institute of Technology, Lausanne) and Telecom
ParisTech, together with several academic and industrial
partners. EURECOM's activity includes research and graduate teaching in
corporate, multimedia and mobile communications. EURECOM currently has a
faculty of 20 professors, 200 Master students and 60 PhD students.
EURECOM is involved in many European research projects and joint
collaborations with industry. EURECOM is located in Sophia-Antipolis, a
major European technology park for telecommunications research and
development in the French Riviera.

Back to Top

7-15 . (2010-02-08) Ircam recruits two Researchers W/M under full-time, 18-month limited-term contracts (Paris)

Ircam recruits two Researchers under full-time, 18-month limited-term contracts

From April 1st, 2010

Introduction to IRCAM

IRCAM is a leading non-profit organization associated to Centre Pompidou, dedicated to music production, R&D and education in acoustics and music. It hosts composers, researchers and students from many countries cooperating in contemporary music production, scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies, musicology. Ircam is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky 75004 Paris.

Introduction to Quaero project

Quaero is a 200 M€ collaborative research and development program focusing on the areas of automatic extraction of information, analysis, classification and usage of digital multimedia content for professionals and consumers. The research work shall concentrate on managing virtually unlimited quantities of multimedia and multilingual information, including text, speech, music, image and video. Five main application areas have been identified by the partners:

1.       multimedia internet search

2.       enhanced access services to audiovisual content on portals

3.       personalized video selection and distribution

4.       professional audiovisual asset management

5.       digitalization and enrichment of library content, audiovisual cultural heritage and scientific information.

 The Quaero consortium was created to meet new multimedia content analysis requirements for consumers and professionals, faced with the explosion of accessible digital information and the proliferation of access means (PC,  TV, handheld devices). More information can be found at

 Role of Ircam in Quaero Project

In the Quaero project, Ircam is in charge of coordinating audio/music indexing research and of developing music-audio indexing technology: music content description (tempo, rhythm, key, chord, singing voice, and instrumentation), automatic indexing (music genre/style, mood), music similarity, music audio summaries, chorus detection and audio identification. A specific feature of the project is the creation of a large music-audio corpus to train and validate all the algorithms developed during the project.

 Position description

The researchers will be in charge of developing the technologies related to:

·         music-audio content description/ content-extraction: tempo and beat/measure position estimation, key/mode, chord progression, instrument/drum identification, singing voice location, voice description

·         music automatic indexing into music genre, music mood

·         music similarity: especially on large-scale databases

·         music structure discovery, automatic music audio summary generation, chorus location

The researchers will also collaborate with the evaluation team, which assesses algorithm performance, and with the development team.

 Required profile

·         Very high skills in audio signal processing (spectral analysis, audio-feature extraction, parameter estimation)

·         High skill in audio indexing and data mining (statistical modelling, automatic feature selection algorithm, …)

·         High-skill in large-database search algorithms

·         Good knowledge of Linux, Windows, MAC-OS environments

·         High-skill in Matlab programming, skills in C/C++ programming

·         High productivity, methodical work, excellent programming style.


Salary: according to background and experience.


Please send an application letter together with your resume and any suitable information addressing the above issues preferably by email to: peeters_a_t_ircam dot fr with cc to vinet_a_t_ircam dot fr, rod_a_t_ircam_dot_fr, roebel_at_ircam_dot_fr




Back to Top

7-16 . (2010-02-08) Professor positions at IFSIC, Rennes, France

"Computer science problems involving randomness" (27 PR 1214)

Teaching profile:

This professor will join the IFSIC teaching team and will teach at both Licence (Bachelor) and Master levels. The successful candidate will have a computer science background and will be able to illustrate the contribution of probabilistic or statistical methods to several areas of computer science.

Research profile:

Many computer science problems studied in the laboratory require probabilistic approaches (or mixed deterministic/probabilistic ones) and/or involve statistical aspects. We are looking for a professor whose research profile addresses such computer science questions involving randomness.

Research areas include probabilistic modelling, infrastructures (networks, quality of service), and the processing of digitized and numerical data (data mining, machine learning). Applications include, for example, image and sound.

"Computer science for home automation" (27 MCF 1069 - this position will be assigned to the Ecole Supérieure d'Ingénieurs de Rennes (ESIR))

Teaching profile:

The lecturer recruited for this position will be assigned to the computer science and telecommunications engineering programme of Rennes 1, and will teach in particular in the Home Automation and Computer Science options of this programme. Depending on their profile, they will teach either the techniques used in home-automation computing infrastructures (e.g. networks, embedded systems, software architecture) or the techniques used in home-automation services (e.g. home care, voice control, security, energy management).

Research profile:

The lecturer recruited for this position may be assigned to an IRISA (UMR 6074) team specializing in the techniques used in home-automation computing infrastructures (see above), or to a team specializing in the techniques used in home-automation services (e.g. speech processing, HCI, data processing). They will be expected to collaborate with the home-automation specialists of IETR (UMR 6164).

Proven experience of teaching home automation, or of applying computer science techniques to home automation, is desirable.

Back to Top

7-17 . (2010-02-09) Professor position (Computer Science, Dialogue, Speech, Text, Machine Learning) at LIA, Université d'Avignon, France

*Professor position in Computer Science no. 232 at LIA (Université d'Avignon)
Title: Computer Science, Dialogue, Speech, Text, Machine Learning
Short description: the research profile for this position lies, ideally, at the confluence of three disciplines: Automatic Speech Recognition (ASR), Natural Language Processing (NLP) and Machine Learning (ML). Preference will be given to candidates conducting research on high-level linguistic processing of spoken language, in particular in the context of spoken language understanding and machine translation applications. The application contexts envisaged are human-machine dialogue interfaces and the processing of large audio archives (broadcast data and call-centre archives).

*To read the full detailed profile:

*NOTE: this is a rolling ("au fil de l'eau") recruitment, not part of the synchronized campaign. Applications must be submitted before 4 March 2010.

Back to Top

7-18 . (2010-02-11)The Faculty of Engineering at the University of Sheffield, UK, is recruiting for 5 'Prize Lectureships'

The Faculty of Engineering at the University of Sheffield, UK, is recruiting for 5 'Prize Lectureships': see

These posts may be at any grade from Lecturer (junior Faculty) to Reader (close to Professor), and they come with an attractive funding package for studentships and  research start-up.

If you are interested in applying for a prize lectureship to join SPandH, the Speech and Hearing group in Computer Science, please feel free to contact the SPandH academics; contact details are on the web site.

Phil Green, Roger Moore, Guy Brown, Jon Barker, Thomas Hain, Yoshi Gotoh. 

Back to Top

7-19 . (2010-03-04) Post doctoral Position in Speech Recognition (Grenoble, France)

Post doctoral Position in Speech Recognition (Grenoble, France)
Title: Application and Optimization of Speech Detection and Recognition Algorithms in Smart Homes
Start Date: October 2010
Duration and salary: 12 months, 1900 euros
Keywords: Speech Recognition, Home Automation, Smart Homes

Description: The GETALP Team of the Laboratory of Informatics of Grenoble invites applications
for a full-time post-doctoral researcher to work on the SWEET-HOME ("Système
Domotique d’Assistance au Domicile") national French project funded by the
ANR ("Agence Nationale de la Recherche"). This project aims to deliver support for
independent living to people who need it, such as elderly or disabled persons
(e.g., with Alzheimer's disease or cognitive deficiencies). This is usually done
through sensing technology (e.g., microphones, infra-red presence sensors,
door contacts, etc.) which detects critical situations in order to trigger the
appropriate supporting action (a call to an emergency service, a call to relatives, etc.).
A few microphones are set in an experimental apartment in order to recognize sounds
and speech in real time. The recognition is challenging given that the speaker may
be far from the microphone and because of additive noise and reverberation; the
position therefore requires significant experience in speech recognition. The project consortium
is composed of the LIG (Joseph Fourier University), the ESIGETEL and the
Theoris, Technosens and Camera-Contact companies. The experimental apartment
DOMUS of the Carnot Institute of Grenoble will be used by the consortium during
this project.
Requirements: The successful candidate will have been awarded a PhD degree in computer science
or signal processing, involving automatic speech recognition. Expertise in environmental
robustness or independent component analysis (ICA) is a bonus, as is any other
experience relevant to signal processing. The candidate will have a strong research
track record with significant publications at leading international conferences or in
journals. She/He will be highly motivated to undertake challenging applied research.
Moderate level in French language is required as the project language will be French.
Applications: Please send to the address below (i) a one-page statement of your research interests
and motivation, (ii) your CV and (iii) references, before 1 July 2010.

Back to Top

7-20 . (2010-03-05) Post-doctoral position: Acoustic-to-articulatory mapping of fricative sounds, Nancy, France

Acoustic to articulatory mapping of fricative sounds

Post-doctoral position

Nancy (France)


This subject deals with acoustic-to-articulatory mapping [Maeda et al. 2006], i.e. the recovery of the vocal tract shape from the speech signal, possibly supplemented by images of the speaker’s face. This is one of the great challenges of automatic speech processing and has not yet received a satisfactory answer. The development of efficient inversion algorithms would open new directions of research in second language learning, language acquisition and automatic speech recognition.

The objective is to develop inversion algorithms for fricative sounds. Numerical simulation models now exist for fricatives, and their acoustics and dynamics are better known than those of stops; fricatives will therefore be the first category of sounds to be inverted after vowels, for which the Speech group has already developed efficient algorithms.

The production of fricatives differs from that of vowels in two respects:

·       The vocal tract is not excited by the vibration of the vocal cords located at the larynx, but by a noise source. This noise originates in the turbulent air flow downstream of the constriction formed by the tongue and the palate.

·       Only the cavity downstream of the constriction is excited by the source.

The approach proposed is analysis-by-synthesis: the signal, or the speech spectrum, is compared to a signal or a spectrum synthesized by means of a speech production model which incorporates two components: an articulatory model intended to approximate the geometry of the vocal tract, and an acoustic simulation intended to generate a spectrum or a signal from the vocal tract geometry and the noise source. The articulatory model is geometrically adapted to a speaker from MRI images and is used to build a table of pairs associating an articulatory vector with the corresponding acoustic image vector. During inversion, all the articulatory shapes whose acoustic parameters are close to those observed in the speech signal are recovered. Inversion is thus an advanced table-lookup method which we have used successfully for vowels [Ouni & Laprie 2005] [Potard et al. 2008].
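To make the table-lookup step above concrete, here is a minimal sketch. It is illustrative only: the codebook entries are random, and the dimensions (7 articulatory parameters, 12 acoustic parameters) and search radius are hypothetical, whereas the real table is built from a speaker-adapted articulatory model driving an acoustic simulation.

```python
import numpy as np

# Hypothetical codebook pairing articulatory vectors with their acoustic images.
rng = np.random.default_rng(0)
articulatory = rng.uniform(-3.0, 3.0, size=(1000, 7))   # 7 articulatory parameters
acoustic = rng.uniform(0.0, 1.0, size=(1000, 12))       # 12 acoustic parameters

def lookup_inverse(observed, radius):
    """Return every articulatory vector whose acoustic image lies within
    `radius` of the observed acoustic vector (the lookup step of inversion)."""
    distances = np.linalg.norm(acoustic - observed, axis=1)
    return articulatory[distances < radius]

# An observation close to codebook entry 42 recovers (at least) that entry,
# along with any other articulatory shapes that map to nearby acoustics.
observed = acoustic[42] + 0.01
candidates = lookup_inverse(observed, radius=0.3)
print(len(candidates))
```

In practice the set of candidates is then pruned with continuity and phonetic constraints, since inversion is one-to-many.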


The success of an analysis-by-synthesis method relies on the implicit assumption that synthesis can correctly approximate the speech production process of the speaker whose speech is inverted. Fairly realistic acoustic simulations of fricative sounds exist, but they depend strongly on the precision of the geometrical approximation of the vocal tract used as input. Articulatory models of the vocal tract also exist which yield very good results for vowels; on the other hand, these models are inadequate for consonants, which often require very accurate articulation at the front part of the vocal tract. The first part of the work will be the elaboration of articulatory models adapted to the production of both consonants and vowels. Validation will consist of driving the acoustic simulation from the geometry and assessing the quality of the synthetic speech signal with respect to the natural one. This work will be carried out on some X-ray films for which the acoustic signal recorded during acquisition is of sufficiently good quality.

The second part of the work will address several aspects of the inversion strategy. Firstly, it is now accepted that spectral parameters implying a fairly marked smoothing and frequency integration have to be used, as is the case for MFCC (Mel-Frequency Cepstral Coefficient) vectors. However, the spectral distance best suited to comparing natural and synthetic spectra remains to be investigated. Another solution consists in modelling the source so as to limit its impact on the computation of the spectral distance.
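As an illustration of the smoothing and frequency integration mentioned above, the following sketch shows how truncating an orthonormal cepstral expansion of a log-spectrum discounts fine source detail while preserving the envelope. It is a simplified stand-in for MFCC analysis (no mel filterbank is applied), and all signals here are synthetic.

```python
import numpy as np

def cepstral_coeffs(log_spectrum, n_coeffs=13):
    """Orthonormal DCT-II of a log-magnitude spectrum, truncated to its first
    coefficients: keeping only a few coefficients smooths away fine spectral
    detail (e.g. source harmonics), retaining the envelope shape."""
    n = len(log_spectrum)
    k = np.arange(n_coeffs)[:, None]
    m = np.arange(n)[None, :]
    basis = np.cos(np.pi * k * (2 * m + 1) / (2 * n)) * np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)  # orthonormal scaling of the DC row
    return basis @ log_spectrum

def spectral_distance(spec_a, spec_b, n_coeffs=13):
    """Euclidean distance between truncated cepstral expansions."""
    return np.linalg.norm(cepstral_coeffs(spec_a, n_coeffs)
                          - cepstral_coeffs(spec_b, n_coeffs))

# Two log-spectra sharing the same envelope; one carries a fast harmonic ripple.
f = np.linspace(0.0, 1.0, 256)
envelope = -10.0 * (f - 0.3) ** 2
with_ripple = envelope + 0.5 * np.sin(2 * np.pi * 40 * f)

# The cepstral distance largely ignores the ripple, unlike the raw distance.
print(spectral_distance(with_ripple, envelope), np.linalg.norm(with_ripple - envelope))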

The second point concerns the construction of the articulatory table, which has to be revisited for two reasons: (i) only the cavity downstream of the constriction plays an acoustic role, and (ii) the location of the noise source is an additional parameter, but one that depends on the other articulatory parameters. The third point concerns how to take the vocalic context into account. Indeed, the context is likely to provide important information about the vocal tract deformations before and after the fricative sound, and thus constraints for inversion.

A very complete software environment already exists in the Speech group for acoustic-to-articulatory inversion, which can be exploited by the post-doctoral student.


 [S. Ouni and Y. Laprie 2005] Modeling the articulatory space using a hypercube codebook for acoustic-to-articulatory inversion, Journal of the acoustical Society of America, Vol. 118, pp. 444-460

[B. Potard, Y. Laprie and S. Ouni], Incorporation of phonetic constraints in acoustic-to-articulatory inversion, JASA, 123(4), 2008 (pp.2310-2323).

[Maeda et al. 2006] Technology inventory of audiovisual-to-articulatory inversion


Skill and profile

Knowledge of speech processing and articulatory modeling.

Supervision and contact:

Yves Laprie (


Duration: 1 year (possibly extendable)


The PhD should have been defended no more than a year before the recruitment date.

Back to Top

7-21 . (2010-03-12) Invitation to join the graduate team at the CLSP (Johns Hopkins U.) for the summer school



Undergraduate Team Members

The Center for Language and Speech Processing at the Johns Hopkins University is seeking outstanding members of the current junior class to participate in a summer workshop on language engineering from June 7th to July 30th, 2010.

No limitation is placed on the undergraduate major. Only enthusiasm for research, relevant skills, past academic and employment record, and the strength of letters of recommendation will be considered. Students of Biomedical Engineering, Computer Science, Cognitive Science, Electrical Engineering, Linguistics, Mathematics, Physics, Psychology, etc. may apply. Women and minorities are encouraged to apply. The workshop is open to both US and international students.

  • An opportunity to explore an exciting new area of research.
  • A two-week tutorial on speech and language technology.
  • Mentoring by an experienced researcher.
  • Use of a computer workstation throughout the workshop.
  • A $5000 stipend and $2520 towards per diem expenses.
  • Private furnished accommodation for the duration of the workshop.
  • Travel expenses to and from the workshop venue.
  • Participation in project planning activities.

The eight-week workshop provides a vigorously stimulating and enriching intellectual environment and we hope it will encourage students to eventually pursue graduate study in the field of human language technologies.






Selection Criteria


Four to eight undergraduate students will be selected for next summer's workshop. It is expected that they will be members of the current junior class. Applicants must be proficient in computer usage, including either C, C++, Perl or Python programming, and have exposure to basic probability or statistics. Knowledge of the following will be considered, but is not a prerequisite: Linguistics, Speech Communication, Natural Language Processing, Cognitive Science, Machine Learning, Digital Signal Processing, Signals and Systems, Linear Algebra, Data Structures, Foreign Languages, or MATLAB or similar software.



Equal Opportunity Policy

The Johns Hopkins University admits students of any race, color, sex, religion, national or ethnic origin, age, disability or veteran status to all of the rights, privileges, programs, benefits and activities generally accorded or made available to students at the University. It does not discriminate on the basis of race, color, sex, religion, sexual orientation, national or ethnic origin, age, disability or veteran status in any student program or activity, including the administration of its educational policies, admission policies, scholarship and loan programs, and athletic and other University-administered programs or in employment. Accordingly, the University does not take into consideration personal factors that are irrelevant to the program involved.

Questions regarding access to programs following Title VI, Title IX, and Section 504 should be referred to the Office of Institutional Equity, 205 Garland Hall, (410) 516-8075.


Policy on the Reserve Officer Training Corps.

Present Department of Defense policy governing participation in university-based ROTC programs discriminates on the basis of sexual orientation. Such discrimination is inconsistent with the Johns Hopkins University non-discrimination policy. Because ROTC is a valuable component of the University that provides an opportunity for many students to afford a Hopkins education, to train for a career and to become positive forces in the military, the University, after careful study, has decided to continue the ROTC program and to encourage a change in federal policy that brings it into conformity with the University's policy.


Back to Top

7-22 . (2010-03-11) Post-doctoral position in speech coding for speech synthesis (Orange Labs, Lannion, France / University of Crete)

Post-doctoral position in speech coding for speech synthesis
A post-doctoral research position in the field of speech synthesis is open at France Telecom-Orange Labs in Lannion, France. This study will involve the design and implementation of new speech coding methods particularly suited for speech synthesis. The objective of this work is twofold: to propose new algorithms for compressing acoustic inventories in concatenative synthesis; to implement the building blocks for speech coding/decoding in the context of parametric synthesis (HMM-based).
This one-year post-doctoral contract lies within a collaboration between Orange Labs (France) and the University of Crete (Greece). Travels between these two entities should thus be expected since the work will be developed in both sites.
Required Skills:
Excellent knowledge of signal processing and speech coding;
Extensive experience with C, C++ programming;
Good familiarity with Linux and Windows development environments.
Knowledge about Sinusoidal Speech modelling and coding will be considered as an advantage.
Salary: around 2300 € net per month depending on experience.
Closing date for applications: May 30th 2010.
Starting date: June/September 2010
Please send applications (CV + 2 reference letters) or questions to:
Olivier Rosec
Tel: +33 2 96 05 20 67
Yannis Stylianou
Tel: +30 2810 391713
Back to Top

7-23 . (2010-03-11) Post-doctoral position in speech synthesis (Orange Labs, Lannion, France)

Post-doctoral position in speech synthesis
A post-doctoral research position in the field of speech synthesis is open at Orange Labs in Lannion, France. This study will involve the design and implementation of a new hybrid speech synthesis system combining HMM-based synthesis and unit selection synthesis. The successful candidate will: first, develop a toolkit for training HMM models from the acoustic data available at Orange Labs; second, implement the acoustic parameter generation in the Orange Labs speech synthesizer; third, propose, design and implement a hybrid speech synthesis system combining selected and HMM-based units.
Required Skills:
PhD in computer science or electrical engineering
Strong knowledge in automatic learning (including HMM)
Extensive experience with C/C++ programming
Knowledge of HTK/HTS is a plus
Salary: around 2300 € per month depending on experience.
Closing date: April 30th 2010.
Tel: +33 2 96 05 33 53
Back to Top

7-24 . (2010-03-11) PhD opportunity in speech transformation in Crete.

PhD Opportunity in Speech Transformation
A full-time 3 year PhD position is available at France Telecom – Orange Labs in Lannion, France.
The position is within the Orange Labs speech synthesis team, under the academic supervision of Prof. Stylianou from the Multimedia Informatics Laboratory at the University of Crete in Heraklion, Greece. Both labs conduct world-class research in speech processing, in areas such as speech synthesis, speech transformation, voice conversion and speech coding.
Starting date: September 2010/January 2011
Application dates: March 30th 2010/October 30th 2010
Research fields: Speech processing, speech synthesis, pattern recognition, statistical signal processing, machine learning.
Project Description:
Speech transformation refers to the various modifications one may apply to the sound produced by a person speaking or singing. It covers a wide area of research, from speech production modeling and understanding to the perception of speech, and from natural language processing and the modeling and control of speaking style to pattern recognition and statistical signal processing. Speech transformation has many potential applications in areas such as entertainment, the film and music industries, toys, chat rooms and games, dialog systems, security, speaker individuality for interpreting telephony, high-end hearing aids, vocal pathology and voice restoration.
In speech transformation, the majority of work is devoted to pitch modification and timbre transformation. Many techniques have been suggested in the literature, including methods based on PSOLA, sinusoidal modeling, the Harmonic plus Noise Model, the phase vocoder and STRAIGHT. These methods yield high quality for moderate pitch modifications and for well-mastered spectral envelope modifications; for more ambitious transformations, the output speech cannot be considered natural.
During this thesis, the focus will be on redefining pitch and timbre modification in order to develop a high-quality speech modification system. This system will be designed and developed in the context of a quasi-harmonic speech representation recently proposed for high-quality speech analysis and synthesis.
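To give a flavour of the kind of modification at stake, here is a minimal sinusoidal-model sketch in Python. It is not the project's method; the sample rate, harmonic amplitudes and the crude autocorrelation pitch estimator are all illustrative assumptions. Pitch is raised by resynthesizing a sum of harmonics at a scaled fundamental while keeping the harmonic amplitudes, i.e. the timbre, fixed.

```python
import math

SR = 16000  # sample rate in Hz (illustrative choice)

def harmonic_tone(f0, amps, n):
    """Sum-of-harmonics frame: a crude stand-in for one voiced speech frame."""
    return [sum(a * math.sin(2.0 * math.pi * (k + 1) * f0 * t / SR)
                for k, a in enumerate(amps))
            for t in range(n)]

def estimate_f0(x, fmin=60.0, fmax=500.0):
    """Unnormalized-autocorrelation pitch estimate over a speech-like lag range."""
    best_lag, best_r = None, float("-inf")
    for lag in range(int(SR / fmax), int(SR / fmin) + 1):
        r = sum(x[t] * x[t - lag] for t in range(lag, len(x)))
        if r > best_r:
            best_lag, best_r = lag, r
    return SR / best_lag

def shift_pitch(f0, amps, ratio, n):
    """Resynthesize at a scaled f0 with unchanged harmonic amplitudes
    (the sampled spectral envelope, i.e. the timbre, is kept fixed)."""
    return harmonic_tone(f0 * ratio, amps, n)
```

For example, shifting a 200 Hz tone by a ratio of 1.25 yields a frame whose estimated fundamental is 250 Hz, with the same relative harmonic strengths. Real systems must additionally resample the spectral envelope, handle the noise component, and smooth frame boundaries, which is precisely where quality degrades for large modifications.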
Salary: around 1700 € net per month.
Please send applications (CV+ 2 ref letters) or questions to:
Yannis Stylianou
Tel: +30 2810 391713
Olivier Rosec
Tel: +33 2 96 05 20 67
Back to Top

8 . Journals


Back to Top

8-1 . Special Issue on Statistical Learning Methods for Speech and Language Processing

IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Statistical Learning Methods for Speech and
Language Processing
In the last few years, significant progress has been made in both
research and commercial applications of speech and language
processing. Despite these strong empirical results, however,
important theoretical issues remain to be addressed. Theoretical
advances are expected to drive further improvements in system
performance, which in turn create a need for in-depth studies of
emerging learning and modeling methodologies. The main goal of
this special issue is to address this need, with a focus on the
fundamental issues of newly emerging approaches and their
empirical applications in speech and language processing.
Another focus of this special issue is on the unification of
learning approaches to speech and language processing problems. Many
problems in speech processing and in language processing share a
wide range of similarities (despite conspicuous differences), and
techniques from the two fields can be successfully
cross-fertilized. It is of great interest to study
unifying modeling and learning approaches across these two fields.
The goal of this special issue is to bring together a diverse but
complementary set of contributions on emerging learning methods
for speech processing and language processing, as well as unifying
approaches to problems across the two fields.
We invite original and unpublished research contributions in all
areas relevant to statistical learning, speech processing and
natural language processing. The topics of interest include, but are
not limited to:
• Discriminative learning methods and applications to speech and language processing
• Unsupervised/semi-supervised learning algorithms for speech and language processing
• Model adaptation to new/diverse conditions
• Multi-engine approaches for speech and language processing
• Unifying approaches to speech processing and/or language processing
• New modeling technologies for sequential pattern recognition
Prospective authors should visit
for information on paper submission. Manuscripts should be submitted
using the Manuscript Central system at
Manuscripts will be peer reviewed according to the standard IEEE process.
Manuscript submission due: Aug. 7, 2009
First review completed: Oct. 30, 2009
Revised manuscript due: Dec. 11, 2009
Second review completed: Feb. 19, 2010
Final manuscript due: Mar. 26, 2010
Lead guest editor:
Xiaodong He, Microsoft Research, Redmond (WA), USA,
Guest editors:
Li Deng, Microsoft Research, Redmond (WA), USA,
Roland Kuhn, National Research Council of Canada, Gatineau (QC), Canada,
Helen Meng, The Chinese University of Hong Kong, Hong Kong,
Samy Bengio, Google Inc., Mountain View (CA), USA, 
Back to Top

8-2 . Call for a chapter in Conversational Agents and Natural Language Interaction

Conversational Agents and Natural Language Interaction: Techniques and Effective Practices
A book edited by Dr. Diana Perez-Marin and Dr. Ismael Pascual-Nieto
Universidad Rey Juan Carlos, Universidad Autonoma de Madrid, Spain

We cordially invite you to submit a chapter for the forthcoming Conversational Agents and Natural Language Interaction: Techniques and Effective Practices book to be published by IGI Global (

Human-Computer Interaction can be understood as two potent information processors (a human and a computer) trying to communicate with each
other through a highly restricted interface. Natural Language (NL) interaction, that is, letting users express themselves in natural language,
could be the key to improving communication between humans and computers. Conversational agents exploit NL technologies to engage users in text-based information-seeking and task-oriented dialogs for a broad range of applications such as e-commerce, help desk, Web site navigation,
personalized service, and education.

The benefits of agent expressiveness have been highlighted both for verbal and for non-verbal expressiveness. On the other hand, there
are also studies indicating that conversational agents can produce mixed results. These studies reveal the need to review the research in a
field with a promising future and a great impact on the area of Human-Computer Interaction.

Objective of the Book
The main objective of the book is to identify the most effective practices when using conversational agents for different applications. Some secondary
objectives to fulfill the main goal are:
- To gather a comprehensive number of experiences in which conversational agents have been used for different applications
- To review the current techniques which are being used to design conversational agents
- To encourage authors to publish not only successful results, but also unsuccessful ones, together with a discussion of the reasons that may
have caused them

Target Audience
The proposed book is intended to serve as a reference guide for researchers who want to start their research in the promising field of
conversational agents. No previous knowledge of the topic is required.

Recommended topics include, but are not limited to, the following:
1. Fundamental concepts
- Definition and taxonomy of conversational agents
- Motivation, benefits, and issues of their use
- Underlying psychological and social theories
2. Design of conversational agents
- Techniques
- Frameworks
- Methods
3. Practices
- Experiences of use of conversational agents in:
- E-commerce
- Help desk
- Website navigation
- Personalized service
- Training or education
- Results achieved
- Discussion of the reasons for their success or failure
4. Future trends
- Issues that should be solved in the future
- Expectations for the future

Submission Procedure
Researchers and practitioners are invited to submit on or before December 16, 2009, a 2-3 page chapter proposal clearly explaining the mission and
concerns of his or her proposed chapter. Authors of accepted proposals will be notified by January 16, 2010 about the status of their proposals and sent
chapter guidelines. Full chapters (8,000–10,000 words) are expected to be submitted by April 16, 2010. All submitted chapters will be reviewed on a
double-blind review basis. Contributors may also be requested to serve as reviewers for this project.

This book is scheduled to be published by IGI Global (formerly Idea Group Inc.), publisher of the “Information Science Reference” (formerly Idea Group Reference), “Medical Information Science Reference,” “Business Science Reference,” and “Engineering Science Reference” imprints. For additional information regarding the publisher, please visit This publication is anticipated to be released in 2011.

Important Dates
December 16, 2009: Proposal Submission Deadline
January 16, 2010: Notification of Acceptance
April 16, 2010: Full Chapter Submission
June 30, 2010: Review Results Returned
July 30, 2010: Final Chapter Submission
September 30, 2010: Final Deadline

Editorial Advisory Board Members
Dr. Rafael Calvo, University of Sydney, Australia
Dr. Diane Inkpen, University of Ottawa, Canada
Dr. Pamela Jordan, University of Pittsburgh, U.S.A.
Dr. Ramón López Cózar, Universidad de Granada, Spain
Dr. Max Louwerse, University of Memphis, U.S.A.
Dr. José Antonio Macías, Universidad Autónoma de Madrid, Spain
Dr. Mick O’Donnell, Universidad Autónoma de Madrid, Spain
Dr. George Veletsianos, University of Manchester, U.K.
Incomplete list; the full list will be announced on November 16.

Inquiries and submissions
Please send all inquiries and submissions (preferably through e-mail) to:

Dr. Diana Perez-Marin, Universidad Rey Juan Carlos, Spain


Dr. Ismael Pascual Nieto, Universidad Autonoma de Madrid, Spain

Back to Top

8-3 . Call for Papers: SPECIAL ISSUE OF SPEECH COMMUNICATION on Sensing Emotion and Affect - Facing Realism in Speech Processing

Call for Papers


Sensing Emotion and Affect - Facing Realism in Speech Processing



Human-machine and human-robot dialogues in the next generation will be dominated by natural speech, which is fully spontaneous and thus driven by emotion. Systems will be expected not only to cope with affect throughout actual speech recognition, but at the same time to detect emotional and related patterns, such as non-linguistic vocalizations (e.g. laughter) and further social signals, for appropriate reaction. In most cases, this analysis clearly must be made independently of the speaker and for all speech that "comes in", rather than only for pre-selected and pre-segmented prototypical cases. In addition, as in any speech processing task, noise, coding, and blind speaker separation artefacts, together with transmission errors, need to be dealt with. To provide appropriate back-channelling and socially competent reactions fitting the speaker's emotional state in time, on-line and incremental processing are further concerns. Once affective speech processing is applied in real life, novel issues such as standards, confidences, distributed analysis, speaker adaptation, and emotional profiling arise alongside appropriate interaction and system design. In this respect, the Interspeech Emotion Challenge 2009, organized by the guest editors, provided the first forum for comparing results obtained under exactly the same realistic conditions. In this special issue, we will on the one hand summarise the findings from this challenge, and on the other hand provide space for novel original contributions that further the analysis of natural, spontaneous, and thus emotional speech through late-breaking technological advances, recent experience with realistic data, identification of open problems for future research endeavours, or broad overviews. Original, previously unpublished submissions are encouraged within the following scope of topics:


    * Machine Analysis of Naturalistic Emotion in Speech and Text

    * Sensing Affect in Realistic Environments (Vocal Expression, Nonlinguistic Vocalization)

    * Social Interaction Analysis in Human Conversational Speech

    * Affective and Socially-aware Speech User Interfaces

    * Speaker Adaptation, Clustering, and Emotional Profiling

    * Recognition of Group Emotion and Coping with Blind Speaker Separation Artefacts

    * Novel Research Tools and Platforms for Emotion Recognition

    * Confidence Measures and Out-of-Vocabulary Events in Emotion Recognition

    * Noise, Echo, Coding, and Transmission Robustness in Emotion Recognition

    * Effects of Prototyping on Performance

    * On-line, Incremental, and Real-time Processing

    * Distributed Emotion Recognition and Standardization Issues

    * Corpora and Evaluation Tasks for Future Comparative Challenges

    * Applications (Spoken Dialog Systems, Emotion-tolerant ASR, Call-Centers, Education, Gaming, Human-Robot Communication, Surveillance, etc.)



Composition and Review Procedures



This Special Issue of Speech Communication on Sensing Emotion and Affect - Facing Realism in Speech Processing will consist of papers on data-based evaluations and papers on applications. The balance between these will be adjusted to maximize the issue's impact. Submissions will undergo the normal review process.



Guest Editors



Björn Schuller, Technische Universität München, Germany

Stefan Steidl, Friedrich-Alexander-University, Germany

Anton Batliner, Friedrich-Alexander-University, Germany



Important Dates



Submission Deadline April 1st, 2010

First Notification July 1st, 2010

Revisions Ready September 1st, 2010

Final Papers Ready November 1st, 2010

Tentative Publication Date December 1st, 2010



Submission Procedure



Prospective authors should follow the regular guidelines of the Speech Communication Journal for electronic submission. During submission, authors must select "Special Issue: Sensing Emotion" when they reach the "Article Type" step.




Dr. Björn Schuller

Senior Researcher and Lecturer






Technische Universität München

Institute for Human-Machine Communication

D-80333 München

Back to Top

8-4 . CfP EURASIP Journal on Advances in Signal Processing Special Issue on Emotion and Mental State Recognition from Speech

EURASIP Journal on Advances in Signal Processing  Special Issue on Emotion and Mental State Recognition from Speech  Call for Papers  _____________________________________________________   As research in speech processing has matured, attention has shifted from linguistic applications such as speech recognition towards paralinguistic speech processing problems, in particular the recognition of speaker identity, language, emotion, gender, and age. Determination of emotion or mental state is a particularly challenging problem, in view of the significant variability in its expression posed by linguistic, contextual, and speaker-specific characteristics within speech.  Some of the key research problems addressed to date include isolating emotion-specific information in the speech signal, extracting suitable features, forming reduced-dimension feature sets, developing machine learning methods applicable to the task, reducing feature variability due to speaker and linguistic content, comparing and evaluating diverse methods, robustness, and constructing suitable databases. Automatic detection of other types of mental state, which share some characteristics with emotion, is also now being explored, for example, depression, cognitive load, and "cognitive epistemic" states such as interest or skepticism. 
Topics of interest in this special issue include, but are not limited to:  * Signal processing methods for acoustic feature extraction in emotion recognition  * Robustness issues in emotion classification, including speaker and speaker group normalization and reduction of mismatch due to coding, noise, channel, and transmission effects * Applications of prosodic and temporal feature modeling in emotion recognition * Novel pattern recognition techniques for emotion recognition * Automatic detection of depression or psychiatric disorders from speech * Methods for measuring stress, emotion-related indicators, or cognitive load from speech * Studies relating speech production or perception to emotion and mental state recognition * Recognition of nonprototypical spontaneous and naturalistic emotion in speech * New methods for multimodal emotion recognition, where nonverbal speech content has a central role * Emotional speech synthesis research with clear implications for emotion recognition * Emerging research topics in recognition of emotion and mental state from speech * Novel emotion recognition systems and applications * Applications of emotion modeling to other related areas, for example, emotion-tolerant automatic speech recognition and recognition of nonlinguistic vocalizations  Before submission authors should carefully read over the journal's Author Guidelines, which are located at Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at according to the following timetable:  _____________________________________________________  Manuscript Due          August 1, 2010 First Round of Reviews  November 1, 2010 Publication Date        February 1, 2011 _____________________________________________________    Lead Guest Editor (for correspondence) _____________________________________________________  Julien Epps, The University of New South Wales, Australia; National ICT Australia, Australia 
   Guest Editors _____________________________________________________  Roddy Cowie, Queen's University Belfast, UK  Shrikanth Narayanan, University of Southern California, USA  Björn Schuller, Technische Universitaet Muenchen, Germany  Jianhua Tao, Chinese Academy of Sciences, China

Back to Top

8-5 . CfP Special Issue on Speech and Language Processing of Children's Speech for Child-machine Interaction Applications

ACM Transactions on Speech and Language Processing
                                                                      Special Issue on

                                     Speech and Language Processing of Children's Speech
                                   for Child-machine Interaction Applications


                                                                                        Call for Papers
The state-of-the-art in automatic speech recognition (ASR) technology is suitable for a broad range of interactive applications. Although
children represent an important user segment for speech processing technologies, the acoustic and linguistic variability present in
children's speech poses additional challenges for designing successful interactive systems for children.

Acoustic and linguistic characteristics of children's speech are widely different from those of adults, and voice interaction of
children with computers opens challenging research issues on how to develop effective acoustic, language and pronunciation models for
reliable recognition of children's speech. Furthermore, the behavior of children interacting with a computer differs from that of adults.
When using a conversational interface, for example, children follow different language strategies for initiating and guiding
conversational exchanges, and may adopt different linguistic registers than adults.

In order to develop reliable voice-interactive systems, further studies are needed to better understand the characteristics of children's
speech and the different aspects of speech-based interaction, including the role of speech in multimodal interfaces. The development of
pilot systems for a broad range of applications is also important, to provide experimental evidence of the degree of progress in ASR
technologies and to focus research on application-specific problems emerging from the use of systems in realistic operating environments.

We invite prospective authors to submit papers describing original and previously unpublished work in the following broad research areas:
analysis of children's speech, core technologies for ASR of children's speech, conversational interfaces, multimodal child-machine
interaction, and computer instructional systems for children. Specific topics of interest include, but are not limited to:
  • Acoustic and linguistic analysis of children's speech
  • Discourse analysis of spoken language in child-machine interaction
  • Intra- and inter-speaker variability in children's speech
  • Age-dependent characteristics of spoken language
  • Acoustic, language and pronunciation modeling in ASR for children
  • Spoken dialogue systems
  • Multimodal speech-based child-machine interaction
  • Computer assisted language acquisition and language learning
  • Tools  for children  with special  needs (speech  disorders, autism,  dyslexia, etc)

Papers should have a major focus on analysis and/or acoustic and linguistic processing of children's speech. Analysis studies should
be clearly related to technology development issues, and implications should be extensively discussed in the papers. Manuscripts will be
peer reviewed according to the standard ACM TSLP process.

Submission Procedure
Authors should follow the ACM TSLP manuscript preparation guidelines described on the journal web site and submit an
electronic copy of their complete manuscript through the journal manuscript submission site.
Authors are required to specify that their submission is intended for this Special Issue by including, on the first page of the manuscript
and in the field "Author's Cover Letter", the note "Submitted for the Special Issue on Speech and Language Processing of Children's Speech
for Child-machine Interaction Applications". Without this indication, your submission cannot be considered for this Special Issue.

Submission deadline: May 12, 2010
Notification of acceptance: November 1, 2010
Final manuscript due: December 15, 2010

Guest Editors
Alexandros Potamianos, Technical University of Crete, Greece
Diego Giuliani, Fondazione Bruno Kessler, Italy
Shrikanth Narayanan, University of Southern California, USA
Kay Berkling, Inline Internet Online GmbH, Karlsruhe, Germany
Back to Top

8-6 . ACM TSLP - Special Issue: call for Papers:“Machine Learning for Robust and Adaptive Spoken Dialogue Systems"

ACM TSLP - Special Issue: call for Papers:
“Machine Learning for Robust and Adaptive Spoken Dialogue Systems"

* Submission Deadline 1 July 2010 *

During the last decade, research in the field of Spoken Dialogue
Systems (SDS) has experienced increasing growth, and new applications
include interactive search, tutoring and “troubleshooting” systems,
games, and health agents. The design and optimization of such SDS
requires the development of dialogue strategies which can robustly
handle uncertainty, and which can automatically adapt to different
types of users (novice/expert, youth/senior) and noise conditions
(room/street). New statistical learning techniques are also emerging
for training and optimizing speech recognition, parsing / language
understanding, generation, and synthesis for robust and adaptive
spoken dialogue systems.

Automatic learning of adaptive, optimal dialogue strategies is
currently a leading domain of research. Among machine learning
techniques for spoken dialogue strategy optimization, reinforcement
learning using Markov Decision Processes (MDPs) and Partially
Observable MDPs (POMDPs) has become a particular focus.
One concern for such approaches is the development of appropriate
dialogue corpora for training and testing. However, the small amount
of data generally available for learning and testing dialogue
strategies does not contain enough information to explore the whole
space of dialogue states (and of strategies). Therefore dialogue
simulation is most often required to expand existing datasets and
man-machine spoken dialogue stochastic modelling and simulation has
become a research field in its own right. User simulations for
different types of user are a particular new focus of interest.
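The reinforcement-learning idea described above can be sketched on a toy one-turn problem. This is not a full (PO)MDP dialogue model; every state, action, reward value and probability below is an invented illustration. The learner discovers that asking a confirmation question pays off when ASR confidence is low, while proceeding directly is better when confidence is high.

```python
import random

random.seed(0)

# Toy one-turn dialogue MDP (all numbers are illustrative assumptions):
# state 0 = low ASR confidence, state 1 = high ASR confidence.
# action 0 = ask a confirmation question (safe, but costs an extra turn),
# action 1 = proceed on the top ASR hypothesis (fast, but may be wrong).
P_CORRECT = {0: 0.4, 1: 0.9}   # chance the hypothesis is right, per state
CONFIRM_REWARD = 5.0           # task success minus the extra-turn penalty

def step(state, action):
    """Sample the reward for taking `action` in `state`."""
    if action == 0:
        return CONFIRM_REWARD
    return 10.0 if random.random() < P_CORRECT[state] else -10.0

def learn(episodes=20000):
    """Estimate Q(s, a) by incremental averaging under uniform exploration
    (for a one-step problem this coincides with Q-learning)."""
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    n = {(s, a): 0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = random.choice((0, 1))   # confidence of the incoming utterance
        a = random.choice((0, 1))   # explore both strategies
        r = step(s, a)
        n[(s, a)] += 1
        q[(s, a)] += (r - q[(s, a)]) / n[(s, a)]  # incremental mean
    return q

def policy(q, state):
    """Greedy strategy: the action with the highest learned value."""
    return max((0, 1), key=lambda a: q[(state, a)])
```

After training, the greedy policy confirms in the low-confidence state and proceeds in the high-confidence state, matching the expected rewards (-2 vs. 5, and 8 vs. 5). Real dialogue strategy optimization adds multi-turn state, partial observability and user simulation, which is exactly what the paragraph above motivates.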

Specific topics of interest include, but are not limited to:

 • Robust and adaptive dialogue strategies
 • User simulation techniques for robust and adaptive strategy
learning and testing
 • Rapid adaptation methods
 • Modelling uncertainty about user goals
 • Modelling user’s goal evolution along time
 • Partially Observable MDPs in dialogue strategy optimization
 • Methods for cross-domain optimization of dialogue strategies
 • Statistical spoken language understanding in dialogue systems
 • Machine learning and context-sensitive speech recognition
 • Learning for adaptive Natural Language Generation in dialogue
 • Machine learning for adaptive speech synthesis (emphasis, prosody, etc.)
 • Corpora and annotation for machine learning approaches to SDS
 • Approaches to generalising limited corpus data to build user models
and user simulations
 • Evaluation of adaptivity and robustness in statistical approaches
to SDS and user simulation.

Submission Procedure:
Authors should follow the ACM TSLP manuscript preparation guidelines
described on the journal web site and submit an
electronic copy of their complete manuscript through the journal
manuscript submission site
Authors are required to specify that their submission is intended for
this Special Issue by including on the first page of the manuscript
and in the field “Author’s Cover Letter” the note “Submitted for the
Special Issue of Speech and Language Processing on Machine Learning
for Robust and Adaptive Spoken Dialogue Systems”. Without this
indication, your submission cannot be considered for this Special Issue.

• Submission deadline : 1 July 2010
• Notification of acceptance: 1 October 2010
• Final manuscript due: 15th November 2010

Guest Editors:
Oliver Lemon, Heriot-Watt University, Interaction Lab, School of
Mathematics and Computer Science, Edinburgh, UK.
Olivier Pietquin, Ecole Supérieure d’Électricité (Supelec), Metz, France.

Back to Top

9 . Future Speech Science and Technology Events

9-1 . (2010-03-11) GIPSA Seminar Grenoble France


Jeudi 11 mars 2010, 13h30 – Séminaire externe
Stefanie Stadler Elmer
University of Zurich, Switzerland
Development of singing in children: language and music
Vocal development has been studied mostly with a focus on speaking, and only rarely on singing.
Traditional theories of singing development are often based on questionable premises, e.g. Eurocentrism,
and reliable analyses of singing are missing or selective. A new theory, inspired by the principles of
Piaget's theory, and a new methodology, based on acoustic measures, are proposed. The voice
starts to self-organize at birth, and gradually adapts to the cultural surroundings and their conventions
concerning language, music, and social rules. Vocal and musical behaviour are highly adaptive and
constructive, and concern two symbolic systems: music and language. The child develops the voice by
playing and imitating, and this development proceeds from sensori-motor activities towards more and
more conscious actions and thoughts. In order to study children's singing, computer-aided programs
were devised to analyze and represent pitch, timing, pitch qualities, and syllables.
This method yields complex configurations of these parameters that describe children's song singing.
Such detailed descriptions allow the strategies children apply to invent or learn new songs to be reconstructed.
Empirical results from children at various ages demonstrate that focusing the analysis on the
organisation of vocal expression is a promising research strategy.
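The computer-aided pitch and timing analysis mentioned above can be illustrated with a toy sketch. This is not the speaker's actual software; the sample rate, frame length and the zero-crossing pitch estimator are illustrative assumptions that only work for clean, tone-like signals. A sung melody is cut into frames, each frame's fundamental frequency is estimated, and the contour is represented as MIDI note numbers, a musically meaningful pitch scale.

```python
import math

SR = 8000  # sample rate in Hz (illustrative choice)

def sine(f0, dur):
    """A pure tone of `dur` seconds, standing in for one sung note."""
    return [math.sin(2.0 * math.pi * f0 * t / SR) for t in range(int(dur * SR))]

def f0_by_zero_crossings(frame):
    """f0 from rising zero crossings; adequate only for clean tone-like frames."""
    crossings = [t for t in range(1, len(frame))
                 if frame[t - 1] < 0.0 <= frame[t]]
    if len(crossings) < 2:
        return 0.0
    # average period between the first and last rising crossing
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return SR / period

def to_midi(f0):
    """Map a frequency to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return round(69 + 12 * math.log2(f0 / 440.0))

def contour(signal, frame_len=800):
    """Frame-wise pitch contour of a 'song', as MIDI note numbers."""
    return [to_midi(f0_by_zero_crossings(signal[i:i + frame_len]))
            for i in range(0, len(signal) - frame_len + 1, frame_len)]
```

For a two-note melody (A3 then E4), the contour comes out as a run of note 57 followed by a run of note 64; comparing such contours against a target song is one way to quantify how closely a child reproduces its pitches and timing.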



Lucile Rapin
Dept Parole et Cognition GIPSA-lab
961 rue de la Houille Blanche
BP 46
38402 GRENOBLE Cedex
Tel: 0033 (0)476575061


Back to Top

9-2 . (2010-03-15) IEEE ICASSP 2010 International Conference on Acoustics, Speech, and Signal Processing March 15 – 19, 2010 Sheraton Dallas Hotel * Dallas, Texas, U.S.A.

                      IEEE ICASSP 2010

  International Conference on Acoustics, Speech, and Signal Processing

                           March 15 – 19, 2010

               Sheraton Dallas Hotel * Dallas, Texas, U.S.A.



The 35th International Conference on Acoustics, Speech, and Signal Processing (ICASSP) will be held at the Sheraton Dallas Hotel, March 15 – 19, 2010. The ICASSP meeting is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 120 lecture and poster sessions on the following topics:


 * Audio and electroacoustics

 * Bio imaging and signal processing

 * Design and implementation of signal processing systems

 * Image and multidimensional signal processing

 * Industry technology tracks

 * Information forensics and security

 * Machine learning for signal processing

 * Multimedia signal processing

 * Sensor array and multichannel systems

 * Signal processing education

 * Signal processing for communications

 * Signal processing theory and methods

 * Speech processing

 * Spoken language processing


Welcome to Texas, Y’All! Dallas is known for living large and thinking big. As the nation’s ninth-largest city, Dallas is exciting, diverse and friendly; these factors contribute to its success as a leading leisure and convention destination. There’s a whole new, vibrant Dallas to enjoy: new entertainment districts, dining, shopping, hotels, and arts and cultural institutions, with more on the way. There’s never been a more exciting time to visit Dallas.


Submission of Papers: Prospective authors are invited to submit full-length, four-page papers, including figures and references, to the ICASSP Technical Committee. All ICASSP papers will be handled and reviewed electronically. The ICASSP 2010 website will provide you with further details. Please note that all submission deadlines are strict.


Tutorial and Special Session Proposals: Tutorials will be held on March 14 and 15, 2010. Brief proposals should be submitted by July 31, 2009, through the ICASSP 2010 website and must include title, outline, contact information for the presenter, and a description of the tutorial and material to be distributed to participants. Special sessions proposals should be submitted by July 31, 2009, through the ICASSP 2010 website and must include a topical title, rationale, session outline, contact information, and a list of invited papers. Tutorial and special session authors are referred to the ICASSP website for additional information regarding submissions.


* Important Deadlines *


Submission of Camera-Ready Papers

     September 14, 2009


Notification of Paper Acceptance

     December 11, 2009


Revised Paper Upload Deadline

     January 8, 2010


Author’s Registration Deadline

     January 15, 2010


For more detailed information, please visit the ICASSP 2010 official website,

Back to Top

9-3 . (2010-03-20) CfP CMU Sphinx Users and Developers Workshop 2010 (CMU-SPUD 2010)

CMU Sphinx Users and Developers Workshop 2010 (CMU-SPUD 2010)


20 March 2010, Dallas, TX


Papers are solicited for the CMU Sphinx Workshop for Users and Developers (CMU-SPUD 2010), to be held in Dallas, Texas as a satellite to ICASSP 2010.


CMU Sphinx is one of the most popular open source speech recognition systems. It is currently used by researchers and developers in many locations worldwide, including universities, research institutions, and industry. CMU Sphinx's liberal license terms have made it a significant member of the open source community and have provided a low-cost way for companies to build businesses around speech recognition.


The first SPUD workshop aims to bring together CMU Sphinx users to report on applications, developments, and experiments conducted with the system. This workshop is intended to be an open forum that will allow different user communities to become better acquainted with each other and to share ideas. It is also an opportunity for the community to help define the future evolution of CMU Sphinx.


We are planning a one-day workshop with a limited number of oral presentations, chosen for breadth and stimulation, held in an informal atmosphere that promotes discussion. We hope this workshop will expose participants to different perspectives and that this in turn will help foster new directions in research, suggest interesting variations on current approaches and lead to new applications.


Papers describing relevant research and new concepts are solicited on, but not limited to, the following topics. Papers must describe work performed with CMU Sphinx:


·        Decoders: PocketSphinx, Sphinx-2, Sphinx-3, Sphinx-4

·        Tools: SphinxTrain, CMU/Cambridge SLM toolkit

·        Innovations / additions / modifications of the system

·        Speech recognition in various languages

·        Innovative uses, not limited to speech recognition

·        Commercial applications

·        Open source projects that incorporate Sphinx

·        Novel demonstrations


Manuscripts must be between 4 and 6 pages long, in standard ICASSP double-column format. Accepted papers will be published in the workshop proceedings.




* Important Dates *

Paper submission: 30 November 2009

Notification of paper acceptance: 15 January 2010

Workshop: 20 March 2010




* Organizers *

Bhiksha Raj, Carnegie Mellon University, USA

Evandro Gouvêa, Mitsubishi Electric Research Labs, USA

Richard Stern, Carnegie Mellon University, USA

Alex Rudnicky, Carnegie Mellon University, USA

Rita Singh, Carnegie Mellon University, USA

David Huggins-Daines, Carnegie Mellon University, USA

Nickolay Schmyrev, Nexiwave, Russian Federation

Yannick Estève, Laboratoire d'Informatique de l'Université du Maine, France




To contact the organizers, please send email to

Back to Top

9-4 . (2010-04-13) CfP Workshop: Positional phenomena in phonology and phonetics, Wroclaw, Poland

 Workshop: Positional phenomena in phonology and phonetics

(Organised by Zentrum für Allgemeine Sprachwissenschaft, Berlin)

*Date:* 13 April 2010
*Organisers:* Marzena Zygis, Stefanie Jannedy, Susanne Fuchs
*Deadline for abstract submission:* 1st November 2009
*Abstracts submitted to:*
*Invited speakers:*

  * Taehong Cho (Hanyang University, Seoul) confirmed
  * Grzegorz Dogil (University of Stuttgart) confirmed

*Venue:* /Instytut Filologii Angielskiej, ul. Kuźnicza 22, 50-138 Wrocław/

Positional effects found cross-linguistically at the edges of prosodic
constituents (e.g. final lengthening, final lowering, strengthening
effects, or final devoicing) have increasingly received attention in
phonetic-phonological research. Recent empirical investigations of such
positional effects and their variability pose, however, a great number
of questions, challenging, for example, the idea of perceptual invariance. It has
been claimed that acoustic variability is a necessary prerequisite for
the perceptual system to parse segmental strings into words, phrases or
larger prosodic units.

This workshop will provide a forum for discussing controversies and
recent developments regarding positional phenomena. We invite abstracts
bearing on positional effects from various perspectives. Questions to be
addressed include, but are not limited to, the following:

 1. What kind of variability is found in the data, and how does such
    variability need to be accounted for? What positional effects are
    common cross-linguistically and how can they be attributed to
    perceptual, articulatory or aerodynamic principles?
 2. How does positional prominence (lexical stress; accent) interact
    with acoustic and articulatory realizations of prosodic
    boundaries? What are the positional (a)symmetries in the
    realizations of boundaries, and what are the mechanisms underlying them?
 3. How does left- and right-edge phrasal marking interact with the
    acoustic and articulatory realizations at these prosodic
    boundaries? How are these interpreted in phonetics and in phonology?
 4. What are the necessary prerequisites for the interpretation of
    prosodic constituents? Which auditory cues are essential for the
    perception of boundaries and positional effects? Are such cues
 5. To what extent do lexical frequency, phonotactic probability, and
    neighbourhood density contribute to the production and recognition
    of prosodic boundaries in (fluent/spontaneous) speech?
 6. How are positional characteristics exploited during the process of
    language acquisition? How are they learned? Are positional effects
    salient enough for L2 learners?

Abstracts are invited for a 20-min. presentation (excluding discussion).
Abstracts should be sent in .pdf format as two attached files, one with
the author's name and one without (the name(s) should also be clearly
mentioned in the e-mail), to: . Only electronic
submissions will be considered. Abstracts may not exceed two pages of
text with at least a one-inch margin on all four sides (measured on A4
paper) and must employ a font not smaller than 12 point. Each page may
include a maximum of 50 lines of text. An additional page with
references may be included.

Deadline for submissions: November 1, 2009.

Contact person: Marzena Zygis

Susanne Fuchs, PhD
Schützenstrasse 18
10117 Berlin

phone: 030 20192 569
fax:   030 20192 402

Back to Top

9-5 . (2010-05-10) Cfp Workshop on Prosodic Prominence: Perceptual and Automatic Identification

Extended deadline: November 2009

Speech Prosody 2010 Satellite Workshop, May 10th, 2010, Chicago, Illinois

Description of the workshop: Efficient tools for (semi-)automatic prosodic annotation are becoming more and more important for the speech community, as most systems of prosodic annotation rely on the identification of syllabic prominence in spoken corpora (whether or not they lead to a phonological interpretation). The use of automatic and semi-automatic annotation has also facilitated multilingual research; many experiments on prosodic prominence identification have been conducted for European and non-European languages, and protocols have been written in order to build large, prosodically annotated databases of spoken languages all around the world. The aim of this workshop is to bring together specialists of automatic prosodic annotation interested in the development of robust algorithms for prominence detection, and linguists who have developed methodologies for the identification of prosodic prominence in natural languages on perceptual bases. The conference will include oral and poster sessions, and a final round table.

Scientific topics:

1. Annotation of prominence
2. Perceptual processing of prominences: gestalt theories’ background
3. Acoustic correlates of prominence
4. Prominence and its relations with prosodic structure
5. Prominence and its relations with accent, stress, tone and boundary
6. The use of syntactic/pragmatic information in prominence identification
7. Perception of prominence by naive/expert listeners
8. Statistical methods for prominence detection
9. Number of relevant prominence degrees: categorical or continuous scale
10. Prosodic prominence and visual perception

Submission of papers: Anonymous four-page papers (including figures and references) must be written in English and be uploaded as pdf files here: All papers will be reviewed by at least three members of the scientific committee. Accepted four-page papers will be included in the online proceedings of the workshop published on the workshop website. The publication of extended selected papers after the workshop in a special issue of a journal is being considered.

Organizing Committee: Mathieu Avanzi (Université de Neuchâtel, CH), Anne Lacheret-Dujour (Université de Paris Ouest Nanterre), Anne-Catherine Simon (Université catholique de Louvain-la-Neuve)

Scientific committee: the names of the scientific committee will be announced in the second circular.

Venue: The workshop will take place in the Doubletree Hotel Magnificent Mile, in Chicago. See the Speech Prosody 2010 website for further information.

Important deadlines:

Submission of four-page papers: November 15, 2009
Notification of acceptance: January 15, 2010
Author's Registration Deadline: March 2, 2010
Workshop: May 10, 2010

Website of the workshop:
Back to Top

9-6 . (2010-05-11) Call For Special Session Proposals SPEECH PROSODY 2010



Call For Special Session Proposals



Speech Prosody 2010, the fifth international conference on speech prosody, invites proposals for special sessions addressing exciting current topics in the science and technology of spoken language prosody.  Special sessions may address any topic among the key topic areas of Speech Prosody 2010, or a topic that is too new to be included in the standard topic list.


 Proposals for special sessions should include the names and affiliations of the organizers, an abstract describing the topic of the special session, and a list of six to twelve potential authors doing current research in the topic area.  Proposals should be submitted by e-mail to the conference organizers.  In order to receive full consideration, proposals should be submitted by November 15, 2009.





November 15, 2009: Manuscript deadline for regular Speech Prosody papers

November 15, 2009: Special Session Proposal deadline for full consideration

November 20, 2009: Acceptance letters mailed to Special Session organizers

December 15, 2009: Manuscript deadline for Special Session papers

January 15, 2010: Acceptance letters mailed to manuscript authors

May 11-14, 2010:  Conference, Speech Prosody 2010

Back to Top

9-7 . (2010-05-11) CfP Speech prosody 2010 Chicago IL USA

SPEECH PROSODY 2010   (New submission deadline)

Deadline Extension: REVISIONS ONLY ==================================

The Speech Prosody 2010 Organizing Committee is happy to announce a REVISIONS ONLY extension of our manuscript deadline. Authors who submit a draft manuscript by November 15 will be allowed to revise their manuscript, as often as necessary, until 8:00 AM Chicago time on November 23. The November 15 draft should include preliminary title, abstract, list of authors, and content adequate for selection of appropriate reviewers. Reviewers will not see the initial draft, however; only the final uploaded draft (8:00 AM Chicago time, November 23) will be sent to reviewers.

Every Language, Every Style: Globalizing the Science of Prosody
Call For Papers


Prosody is, as far as we know, a universal characteristic of human speech, founded on the cognitive processes of speech production and perception.  Adequate modeling of prosody has been shown to improve human-computer interface, to aid clinical diagnosis, and to improve the quality of second language instruction, among many other applications.

Speech Prosody 2010, the fifth international conference on speech prosody, invites papers addressing any aspect of the science and technology of prosody.  Speech Prosody is the only recurring international conference focused on prosody as an organizing principle for the social, psychological, linguistic, and technological aspects of spoken language.  Speech Prosody 2010 seeks, in particular, to discuss the universality of prosody.  To what extent can the observed scientific and technological benefits of prosodic modeling be ported to new languages, and to new styles of spoken language?  Toward this end, Speech Prosody 2010 especially welcomes papers that create or adapt models of prosody to languages, dialects, sociolects, and/or communicative situations that are inadequately addressed by the current state of the art.


Speech Prosody 2010 will include keynote presentations, oral sessions, and poster sessions covering topics including:

* Prosody of under-resourced languages and dialects
* Communicative situation and speaking style
* Dynamics of prosody: structures that adapt to new situations
* Phonology and phonetics of prosody
* Rhythm and duration
* Syntax, semantics, and pragmatics
* Meta-linguistic and para-linguistic communication
* Signal processing
* Automatic speech synthesis, recognition and understanding
* Prosody of sign language
* Prosody in face-to-face interaction: audiovisual modeling and analysis
* Prosodic aspects of speech and language pathology
* Prosody in language contact and second language acquisition
* Prosody and psycholinguistics
* Prosody in computational linguistics
* Voice quality, phonation, and vocal dynamics


Prospective authors are invited to submit full-length, four-page papers, including figures and references, via the Speech Prosody 2010 website. All Speech Prosody papers will be handled and reviewed electronically.


The Doubletree Hotel Magnificent Mile is located two blocks from North Michigan Avenue, and three blocks from Navy Pier, at the cultural center of Chicago.  The Windy City has been the center of American innovation since the mid nineteenth century, when a railway link connected Chicago to the west coast, civil engineers reversed the direction of the Chicago river, Chicago financiers invented commodity corn (maize), and the Great Chicago Fire destroyed almost every building in the city. The Magnificent Mile hosts scores of galleries and museums, and hundreds of world-class restaurants and boutiques.


Submission of Papers:          November 15, 2009
Notification of Acceptance:    December 15, 2009
Conference:                    May 11-14, 2010


Back to Top

9-8 . (2010-05-17) CfP Workshop on Language Resources (LRs) and Human Language Technologies (HLT) for Semitic Languages


Workshop on Language Resources (LRs) and Human Language Technologies (HLT) for Semitic Languages - Status, Updates, and Prospects
To be held in conjunction with the 7th International Language Resources and Evaluation Conference (LREC 2010)


17 May 2010, Mediterranean Conference Centre, Valletta, Malta


Deadline for submission: 26 February 2010
The Semitic family includes languages and dialects spoken by a large number of native speakers (around 300 million). Prominent members of this family are Arabic (and its varieties), Hebrew, Amharic, Tigrinya, Aramaic, Maltese and Syriac. Their shared ancestry is apparent through pervasive cognate sharing, a rich and productive pattern-based morphology, and similar syntactic constructions.  In addition, there are several languages which are used in the same geographic area such as Amazigh or Coptic, which, while not Semitic, have common features with Semitic languages, such as borrowed vocabulary.
The recent surge in computational work for processing Semitic languages, particularly Modern Standard Arabic (MSA) and Modern Hebrew (MH), has brought modest improvements in terms of actual empirical results for various language processing components (e.g., morphological analyzers, parsers, named entity recognizers, audio transcriptions, etc.). It has become apparent that reusing existing approaches developed for English or French to process Semitic language text/speech (e.g., for Arabic parsing) is not as straightforward as initially thought. Apart from the limited availability of suitable language resources, there is increasing evidence that Semitic languages demand modeling approaches and annotations that deviate from those found suitable for English/French. Issues such as the pattern-based morphology, the frequently head-initial syntactic structure, the importance of the interface between morphology and syntax, and the difference between spoken and written forms (especially in Colloquial Arabic(s)) exemplify the kind of challenges that may arise when processing Semitic languages. For language technologies, such as information retrieval and machine translation, these challenges are compounded by sparse data and often result in poorer performance than for other languages.
This Workshop intends to follow on topics of paramount importance for Semitic-language NLP that were discussed at previous events (LREC, MEDAR/NEMLAR Conferences, the workshops of the ACL Special Interest Group for Semitic languages, etc.) and which are worth revisiting. 
The workshop will bring together people who are actively involved in Semitic language processing in a mono- or cross/multilingual context, and give them an opportunity to update the community through reports on completed or ongoing work as well as on the availability of LRs, evaluation protocols and campaigns, products and core technologies (in particular open source ones). We also invite authors to address other languages spoken in the Semitic language area (languages such as Amazigh, Coptic, etc.).  This should enable participants to develop a common view on where we stand and to foster the discussion of the future of this research area.  Particular attention will be paid to activities involving technologies such as Machine Translation and Cross-Lingual Information Retrieval/Extraction, Summarization, etc. Evaluation methodologies and resources for evaluation of HLT will be also a main focus.  
We expect to elaborate on the HLT state of the art, identify problems of common interest, and debate on a potential roadmap for the Semitic languages. Issues related to sharing of resources, tools, standards, sharing and dissemination of information and expertise, adoption of current best practices, setting up joint projects and technology transfer mechanisms will be an important part of the workshop.
Topics of Interest
This full-day workshop is not intended as a mini-conference, but as a real workshop aiming at concrete results that should clarify the situation of Semitic languages with respect to Language Resources and Evaluation. We expect to launch at least two evaluation campaigns: comparative evaluation of morphology taggers and of named entity recognizers.
Among the many issues to be addressed, below follow a few suggestions:
    - Issues in the design, acquisition, creation, management, access, distribution, and use of Language Resources, in particular in a bilingual/multilingual setting (Standard Arabic, Hebrew, Colloquial Arabic, Amazigh, Coptic, Maltese, etc.)
    - Impact on LR collections/processing and NLP of the crucial issues related to "code switching" between different dialects and languages
    - Specific issues related to the above-mentioned languages, such as the role of morphology, named entities, corpus alignment, etc.
    - Multilinguality issues, including the relationship between Colloquial and Standard Arabic
    - Exploitation of LRs in different types of applications
    - Industrial LR requirements and the community's response
    - Benchmarking of systems and products; resources for benchmarking and evaluation for written and spoken language processing
    - Focus on key technologies such as MT (all approaches, e.g. statistical, example-based, etc.), Information Retrieval, Speech Recognition, Spoken Documents Retrieval, CLIR, Question-Answering, Summarization, etc.
    - Local, regional, and international activities and projects; needs, possibilities, forms, and initiatives of/for regional and international cooperation
We invite submissions on computational approaches to processing text/speech in all Semitic and Semitic-area languages. The call is open for all kinds of computational work, e.g., work on computational linguistic processing components (e.g., analyzers, taggers, parsers), on state-of-the-art NLP applications and systems, on leveraging resource and tool creation for the Semitic language family, and on using computational tools to gain new linguistic insight. We especially welcome submissions on work that crosses individual language boundaries, heightens awareness amongst Semitic-language researchers of shared challenges and breakthroughs, and highlights issues and solutions common to any subset of the Semitic languages family.
Workshop general chair:   
Khalid Choukri,, ELRA/ELDA, Paris, France
Workshop co-chairs:   
Owen Rambow, Columbia University, New York, USA  
Bente Maegaard , University of Copenhagen, Denmark 
Ibrahim A. Al-Kharashi, Computer and Electronics Research Institute, King Abdulaziz City for Science and Technology, Saudi Arabia
Organizing Committee information 
The Organizing, Program, and the Scientific Committees will be listed on the web pages.
Important Dates
Deadline for abstract submissions:    26 February 2010
Notification of acceptance:        15 March 2010
Final version of accepted paper:    11 April 2010
Workshop full-day:            17 May 2010
Submission Details
Submissions should comply with LREC standards (including the LREC Map initiative) and must be in English. Abstracts for workshop contributions should not exceed four A4 pages (excluding references). An additional title page should state: the title; author(s); affiliation(s); and the contact author's e-mail address, as well as postal address, telephone and fax numbers.
Submission will use the LREC START facility:
Expected deadline is 26 February 2010.
Submitted papers will be judged based on relevance to the workshop aims, as well as the novelty of the idea, technical quality, clarity of presentation, and expected impact on future research within the area of focus.
Registration to LREC’2010 will be required for participation, so potential participants are invited to refer to the main conference website for all details not covered in the present call.
Formatting instructions for the final full version of papers will be sent to authors after notification of acceptance and will be identical to LREC main conference instructions.

When submitting a paper through the START page, authors will be kindly asked to provide relevant information about the resources that have been used for the work described in their paper or that are the outcome of their research. For further information on this new initiative, please refer to


Back to Top

9-9 . (2010-05-17) Workshop on Web Services and Processing Pipelines in HLT, Valletta, Malta


Workshop on

Web Services and Processing Pipelines in HLT: Tool Evaluation, LR Production and Validation

To be held in conjunction with the 7th International Language Resources and Evaluation Conference (LREC 2010)

17-18 May 2010, Mediterranean Conference Center, Valletta, Malta

Deadline for submission: 22 February 2010

Workshop Description

With the emergence of large e-infrastructures and the widespread adoption of the Service Oriented Architecture (SOA) paradigm, more and more language technology is being made available through web services. Extending such services to linguistic processing pipelines, tool evaluation or LR production and validation involves considering both the methodologies and technical aspects specific to the application domains.

Distributed architectures such as web services allow communication and data exchange between applications. They are a suitable instrument for automatic, less often semi-automatic, tool evaluation as well as resource production processes, both for practical and conceptual reasons. At a practical level, web services support quick results, centralised data storage, remote access etc.; at a conceptual level, they allow for the combination of more than one processing component that may be located on different sites. Such processing pipelines are set up to tackle a particular analysis task. To support these, new techniques have to be developed that organise well-established practices into workflows and support the exchange of data by standards and open tool architectures.

The workshop focuses on current uses and best practices for the deployment of web services and web interfaces in the HLT domain, including processing pipelines, LR production and validation, and evaluation of tools. It highlights relevant aspects for the integration of linguistic or evaluation web services within infrastructures (e.g. authorisation and authentication, service registries) and infrastructural requirements (e.g. interface harmonisation, metadata generation). The workshop also aims at demonstrating different approaches on how to combine linguistic web services into a composite web service.

The expected outcome of the workshop is a comparison of the practices in architectures and processing pipelines that people build and discussion of the issues involved. Topics of interest include, but are not limited to:

− Technical aspects: approaches, protocols, management of huge amounts of data, data structures and formats, performance, manual components (e.g. annotation or evaluation), composition and configuration, interoperability, security, monitoring and recovery strategies, standardisation of APIs, tools and frameworks supporting HLT services deployment, architectures. 

− Scientific aspects: influence of web services on evaluation or resource production, meta-evaluation / validation of architectures, annotation agreements, needs for tools evaluation and resource production, status of the data produced.

− Commercial aspects: licensing, privacy, advertising, brokering, business possibilities, challenges, exploitation of the resulting data.

Chairing Committee

Núria Bel (Institut Universitari de Lingüística Aplicada, Universitat Pompeu Fabra, Spain)
Olivier Hamon (Evaluations and Language resources Distribution Agency (ELDA)
Elke Teich (Technische Universität Darmstadt)

Organising Committee

Peter Fankhauser (L3S Hannover, Germany)
Maria Gavrilidou (Institute for Language and Speech Processing, Greece)
Gerhard Heyer (Department of Natural Language Processing, University of Leipzig,  Germany)
Zdravko Kacic (University of Maribor, Faculty of Electrical Engineering and Computer Science, Slovenia)
Mark Kemps-Snijders (MPI, the Netherlands)
Andreas Witt (IDS Mannheim, Germany)

Programme Committee

Sophia Ananiadou (School of Computer Science, University of Manchester, England)
Victoria Arranz (ELDA, France)
Volker Boehlke (University of Leipzig, DE)
Gaël de Chalendar (CEA, France)
Key-Sun Choi (KAIST, Korea)
Dan Cristea (University of Iasi, Romania)
Thierry Declerck (DFKI, Germany)
Christoph Draxler (LMU München, Germany)
Nicola Ferro (University of Padua, Italy)
Riccardo del Grata (ILC, Italy)
Iryna Gurevych (Technische Universität Darmstadt, Germany)
Yoshihiko Hayashi (Osaka University, Japan)
Nicolas Hernandez (Université de Nantes, France)
Radu Ion (Research Institute for Artificial Intelligence, Romanian Academy, Romania)
Yoshinobu Kano (University of Tokyo, Japan)
Yohei Murakami (NICT, Japan)
Jan Odijk (University of Utrecht, the Netherlands)
Patrick Paroubek (LIMSI, France)
Kay Peterson (NIST, U.S.A.)
Maciej Piasecki (Instytut Informatyki Stosowanej, Poland)
Mark Przybocki (NIST, U.S.A.)
Matej Rojc (University of Maribor, Slovenia)
Felix Sasaki (W3C / FH Potsdam, Germany)
Junichi Tsujii (University of Tokyo, Japan)
Dan Tufis (RACAI, Romania)
Karin Verspoor (University of Colorado, U.S.A.)
Graham Wilcock (University of Helsinki, Finland)

Important dates

Deadline for submission: Monday 22 February 2010
Notification of acceptance: Monday 15 March 2010
Final version due: Tuesday 23 March 2010
Workshop: 17-18 May 2010

Submission Format

Full papers up to 8 pages should be formatted according to LREC 2010 guidelines and be submitted through the online submission form on START. For further queries, please contact Olivier Hamon at hamon_at_elda_dot_org. When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. For further information on this new initiative, please refer to http://www.lrec- LREC2010-Map-of-Language-Resources.

Back to Top

9-10 . (2010-05-17) Workshop on Tool Evaluation, LR Production and Validation Valetta, Malta

Tool Evaluation, LR Production and Validation


To be held in conjunction with the 7th International Language Resources and Evaluation Conference (LREC 2010)


17-18 May 2010, Mediterranean Conference Center, Valletta, Malta


Extended deadline for submission: 26 February 2010



Workshop Description

With the emergence of large e-infrastructures and the widespread adoption of the Service Oriented Architecture (SOA) paradigm, more and more language technology is being made available through web services. Extending such services to linguistic processing pipelines, tool evaluation or LR production and validation involves considering both the methodologies and technical aspects specific to the application domains.


Distributed architectures such as web services allow communication and data exchange between applications. They are a suitable instrument for automatic, less often semi-automatic, tool evaluation as well as resource production processes  both for practical and conceptual reasons. At a practical level, web services support quick results, centralised data storage, remote access etc.; at a conceptual level, they allow for the combination of more than one processing components that may be located on different sites. Such processing pipelines are set up to tackle a particular analysis task. To support these, new techniques have to be developed that organise well-established practices into workflows and support the exchange of data by standards and open tool architectures.


The workshop focuses on current uses and best practices for the deployment of web services and web interfaces in the HLT domain, including processing pipelines, LR production and validation, and evaluation of tools. It highlights relevant aspects for the integration of linguistic or evaluation web services within infrastructures (e.g. authorisation and authentication, service registries) and infrastructural requirements (e.g. interface harmonisation, metadata generation). The workshop also aims at demonstrating different approaches on how to combine linguistic web services into  a composite web service.


The expected outcome of the workshop is a comparison of the practices in architectures and processing pipelines that people build and discussion of the issues involved. Topics of interest include, but are not limited to:


-                    Technical aspects: approaches, protocols, management of huge amounts of data, data structures and formats, performance, manual components (e.g. annotation or evaluation), composition and configuration, interoperability, security, monitoring and recovery strategies, standardisation of APIs, tools and frameworks supporting HLT services deployment, architectures.


-                    Scientific aspects: influence of web services on evaluation or resource production, meta-evaluation / validation of architectures, annotation agreements, needs for tools evaluation and resource production, status of the data produced.


-                    Commercial aspects: licensing, privacy, advertising, brokering, business possibilities, challenges, exploitation of the resulting data.


Chairing Committee

Núria Bel (Institut Universitari de Lingüística Aplicada, Universitat Pompeu Fabra, Spain)

Olivier Hamon (Evaluations and Language Resources Distribution Agency (ELDA), France)

Elke Teich (Technische Universität Darmstadt, Germany)


Organising Committee

Peter Fankhauser (L3S Hannover, Germany)

Maria Gavrilidou (Institute for Language and Speech Processing, Greece)

Gerhard Heyer (Department of Natural Language Processing, University of Leipzig,  Germany)

Zdravko Kacic (University of Maribor, Faculty of Electrical Engineering and Computer Science, Slovenia)

Mark Kemps-Snijders (MPI, the Netherlands)

Andreas Witt (IDS Mannheim, Germany)


Programme Committee

Sophia Ananiadou (School of Computer Science, University of Manchester, England)

Victoria Arranz (ELDA, France)

Volker Boehlke (University of Leipzig, DE)

Gaël de Chalendar (CEA, France)

Key-Sun Choi (KAIST, Korea)

Dan Cristea (University of Iași, Romania)

Thierry Declerck (DFKI, Germany)

Christoph Draxler (LMU München, Germany)

Nicola Ferro (University of Padua, Italy)

Riccardo Del Gratta (ILC, Italy)

Iryna Gurevych (Technische Universität Darmstadt, Germany)

Yoshihiko Hayashi (Osaka University, Japan)

Nicolas Hernandez (Université de Nantes, France)

Radu Ion (Research Institute for Artificial Intelligence, Romanian Academy, Romania)

Yoshinobu Kano (University of Tokyo, Japan)

Yohei Murakami (NICT, Japan)

Jan Odijk (University of Utrecht, the Netherlands)

Patrick Paroubek (LIMSI, France)

Kay Peterson (NIST, U.S.A.)

Maciej Piasecki (Instytut Informatyki Stosowanej, Poland)

Mark Przybocki (NIST, U.S.A.)

Matej Rojc (University of Maribor, Slovenia)

Felix Sasaki (W3C / FH Potsdam, Germany)

Junichi Tsujii (University of Tokyo, Japan)

Dan Tufis (RACAI, Romania)

Karin Verspoor (University of Colorado, U.S.A.)

Graham Wilcock (University of Helsinki, Finland)



Important dates

Extended deadline for submission: Friday 26 February 2010

Notification of acceptance: Thursday 18 March 2010

Final version due: Thursday 25 March 2010

Workshop : 17-18 May 2010


Submission Format

Full papers of up to 8 pages should be formatted according to the LREC 2010 guidelines and submitted through the online submission form on START. For further queries, please contact Olivier Hamon at hamon_at_elda_dot_org.

When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. For further information on this new initiative, please refer to

Back to Top

9-11 . (2010-05-18) CfP LREC 2010 Workshop on Multimodal Corpora, Malta


*** 2nd Call for Papers ***
LREC 2010 Workshop on
Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality
*** 18 May 2010, Malta ***


A "Multimodal Corpus" involves the recording, annotation and analysis of
several communication modalities such as speech, hand gesture, facial
expression, body posture, etc. As many research areas are moving from
focused but single modality research to fully-fledged multimodality
research, multimodal corpora are becoming a core research asset and an
opportunity for interdisciplinary exchange of ideas, concepts and data.

This workshop follows similar events held at LREC 00, 02, 04, 06, 08.
There is an increasing interest in multimodal communication and multimodal
corpora, as evidenced by European Networks of Excellence and integrated
projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet.
Furthermore, the success of recent conferences and workshops dedicated to
multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, Nordic Symposium
on Multimodal Communication, Embodied Language Processing) and the
creation of the Journal of Multimodal User Interfaces also testify to the
growing interest in this area, and to the general need for data on
multimodal behaviour.

The 2010 full-day workshop is planned to result in a significant follow-up
publication, similar to previous post-workshop publications like the 2008
special issue of the Journal of Language Resources and Evaluation and the
2009 state-of-the-art book published by Springer.


In 2010, we are aiming for a wide cross-section of the field, with
contributions on collection efforts, coding, validation and analysis
methods, as well as actual tools and applications of multimodal corpora.
However, we wish to emphasize that there have been significant advances
in capture technology that make highly accurate data available to the
broader research community. Examples are the tracking of face, gaze,
hands and body, and the recording of articulated full-body motion using
motion capture. These data are much more accurate and complete than the
simple videos traditionally used in the field, and will therefore have a
lasting impact on multimodality research. However, the richness of the
signals and the complexity of the recording process urgently call for an
exchange of state-of-the-art information regarding recording and coding
practices, new visualization and coding tools, and advances in the
automatic coding and analysis of corpora.


This LREC 2010 workshop on multimodal corpora will feature a special
session on databases of motion capture, trackers, inertial sensors,
biometric devices and image processing. Other topics to be addressed
include, but are not limited to:

* Multimodal corpus collection activities (e.g. direction-giving
dialogues, emotional behaviour, human-avatar interaction, human-robot
interaction, etc.) and descriptions of existing multimodal resources

* Relations between modalities in natural (human) interaction and in
human-computer interaction

* Multimodal interaction in specific scenarios, e.g. group interaction
in meetings

* Coding schemes for the annotation of multimodal corpora

* Evaluation and validation of multimodal annotations

* Methods, tools, and best practices for the acquisition, creation,
management, access, distribution, and use of multimedia and multimodal
resources

* Interoperability between multimodal annotation tools (exchange
formats, conversion tools, standardization)

* Collaborative coding

* Metadata descriptions of multimodal corpora

* Automatic annotation, based e.g. on motion capture or image
processing, and the integration with manual annotations

* Corpus-based design of multimodal and multimedia systems, in
particular systems that involve human-like modalities either in input
(Virtual Reality, motion capture, etc.) or output (virtual characters)

* Automated multimodal fusion and/or generation (e.g., coordinated
speech, gaze, gesture, facial expressions)

* Machine learning applied to multimodal data

* Multimodal dialogue modelling


Important dates:

* Deadline for paper submission: 19 February 2010
* Notification of acceptance: 10 March
* Final version of accepted paper: 19 March
* Final program: 21 March
* Final proceedings: 28 March
* Workshop: 18 May


The workshop will consist primarily of paper presentations and
discussion/working sessions. Submissions should be 4 pages long, must be
in English, and follow the submission guidelines available under

Submit your paper here:

Demonstrations of multimodal corpora and related tools are encouraged as
well (a demonstration outline of 2 pages can be submitted).


When submitting a paper through the START page, authors will be kindly
asked to provide relevant information about the resources that have been
used for the work described in their paper or that are the outcome of
their research. For further information on this new initiative, please
refer to


Organisers:

Michael Kipp, DFKI, Germany
Jean-Claude Martin, LIMSI-CNRS, France
Patrizia Paggio, University of Copenhagen, Denmark
Dirk Heylen, University of Twente, The Netherlands



Back to Top

9-12 . (2010-05-23) CfP Workshop on Language Resources: From Storyboard to Sustainability and LR Lifecycle Management Valetta, Malta


Workshop on

Language Resources: From Storyboard to Sustainability and LR Lifecycle Management


To be held in conjunction with the 7th International Language Resources and Evaluation Conference (LREC 2010)

23 May 2010, Mediterranean Conference Centre, Valletta, Malta

Deadline for submission: 22 February 2010




The life of a language resource (LR), from its conception and drafting to its adult phases of active exploitation by the HLT community, varies considerably. Ensuring that language resources become part of a sustainable and enduring living process is a multi-faceted challenge that calls for well-planned preventive actions by the different actors participating in the process. Clearing all IPR issues and exploiting best practices at specification and production time are just a few examples of such actions. Sustainability and lifecycle management are thus concepts that should be addressed before embarking on any serious LR production.


When thinking of long-term LRs, a number of aspects come to mind which are not always taken into account before development. Some of these are usability, accessibility, interoperability and scalability, each raising points that need to be considered at a very early stage of development. Looking further into the portability and scalability of a language resource, a number of dimensions should be taken into account to ensure that it reaches its adult life in an active and productive way.


An aspect that is often neglected is the accessibility, and thus the secured reusability, of a language resource. Institutions such as ELRA (European Language Resources Association) and LDC (Linguistic Data Consortium), at the European and American levels respectively, as well as BAS (Bavarian Archive for Speech Signals) and TST-Centrale (Flemish-Dutch Human Language Technology Agency), at a language-specific level, have worked on these aspects for many years. Through their different activities, they have successfully implemented a sharing policy which allows different users to gain access to already existing resources. Other emerging programmes such as CLARIN (Common Language Resources and Technology Infrastructure) are also looking into these aspects. Nevertheless, many resources are still developed without a long-term accessibility plan in place, which makes it impossible to gain access once the resource is finished. Such an accessibility plan should consider issues such as ownership rights, licensing and types of use, aiming for a wide community from the very beginning. It calls for optimal co-operation between all actors (LR users, financing bodies, owners, developers and organisations) so that issues related to the life of a LR are well established, roles and actors are clearly identified within the cycle, and best practices are defined for the management of the entire LR lifecycle.


We are aware, though, that these above-presented ideas are but a take-off for discussion. It is at this point that we would like to invite the community to participate in this workshop and share with us their views on these and other relevant issues of concern. A fruitful discussion could lead us to finding new mechanisms to support perpetuating language resources, and may lead us towards a sustainability model that guarantees an appropriate and well-defined LR storyboard and lifecycle management plan in the future.


Among the many issues and topics that may be presented and discussed during this workshop, we would like to already suggest the following:


- Which fields require LRs and what are their respective needs?

- What needs to be part of a LR storyboard? What points are we missing in its design?

- General specifications vs. detailed specifications and design

- Annotation frameworks and layers: interoperable at all?

- Should the creation and provision of LRs be included in higher education curricula?

- How to plan for scalable resources?

- Language resource maintenance and improvement: feasible?

- Sharing language resources: how to bear this in mind and implement it? Logistics of sharing: online vs. offline

- Centralised vs. decentralised, and national vs. international management and maintenance of LRs

- What happens when users create updated or derived LRs?

- Sharing language resources: legal issues concerned

- Sharing language resources: pricing issues, commercial vs. non-commercial use

- Do LR actors work in a synchronised manner?

- What should be the roles of the different actors?

- What are the business models and arrangements for IPRs?

- Self-supporting vs. subsidised LR organisations

- Other general problems faced by the community


We solicit papers that address these questions and other related issues relevant to the workshop.


Workshop Programme and Audience Addressed

This full-day workshop aims to address all those involved with language resources at some point of their research or work (LR users, producers, ...) and all those with an interest in the different aspects involved, whether universities, companies or funding agencies. It aims to be a meeting and discussion point for the many bottlenecks surrounding the life of a resource that remain to be addressed with a sustainability plan.


The workshop features two invited talks, opening the morning and afternoon sessions, and submitted papers, and will conclude with a round table to brainstorm on the issues raised during the presentations and individual discussions. The round table will be run by experts experienced in some of the highlighted problems, in open discussion with the workshop participants. In short, the workshop will result in a plan of action towards sustainability and lifecycle management of LRs.


Invited Speakers

To be announced on the workshop web site.


Organising Committee

Victoria Arranz (Evaluations and Language resources Distribution Agency (ELDA) /  European Language resources Association (ELRA), France)

Khalid Choukri (ELDA - Evaluations and Language resources Distribution Agency / ELRA - European Language resources Association, France)

Christopher Cieri (LDC - Linguistic Data Consortium, USA)

Laura van Eerten (Flemish-Dutch HLT Agency, Instituut voor Nederlandse Lexicologie, The Netherlands)

Bente Maegaard (CST, University of Copenhagen, Denmark)

Stelios Piperidis (ILSP – Institute for Language and Speech Processing / ELRA - European Language resources Association, France)

Remco van Veenendaal (Flemish-Dutch HLT Agency, Instituut voor Nederlandse Lexicologie, The Netherlands)


Programme Committee

Núria Bel (Institut Universitari de Lingüística Aplicada, Universitat Pompeu Fabra, Spain)

Nicoletta Calzolari (Istituto di Linguistica Computazionale del CNR (ILC-CNR) – Italy)

Jean Carletta (Human Communication Research Centre, School of Informatics, University of Edinburgh, UK)

Catia Cucchiarini (Nederlandse Taalunie, The Netherlands)

Christoph Draxler (Bavarian Archive for Speech Signals, Institute of Phonetics and Speech Processing (BAS), Germany)
Maria Gavrilidou (Institute for Language and Speech Processing (ILSP), Greece)

Nancy Ide (Department of Computer Science, Vassar College, USA)

Steven Krauwer (UiL OTS, Utrecht University, The Netherlands)

Asunción Moreno (Universitat Politècnica de Catalunya (UPC), Spain)

Dirk Roorda (Data Archiving and Networked Services, The Netherlands)

Ineke Schuurman (Centre for Computational Linguistics, Catholic University Leuven, Belgium)

Claudia Soria (Istituto di Linguistica Computazionale del CNR (ILC-CNR) – Italy)

Stephanie M. Strassel (Linguistic Data Consortium (LDC), USA)

Andreas Witt (IDS Mannheim, Germany)

Peter Wittenburg (Max Planck Institute for Psycholinguistics, The Netherlands)


Important dates

Deadline for abstracts: Monday 22 February 2010

Notification to Authors: Friday 12 March 2010

Submission of Final Version: Sunday 21 March 2010

Workshop: Sunday 23 May 2010



Abstracts should be no longer than 1500 words and should be submitted in PDF format through the online submission form on START. For further queries, please contact Victoria Arranz or Laura van Eerten.


When submitting a paper through the START page, authors will be kindly asked to provide relevant information about the resources that have been used for the work described in their paper or that are the outcome of their research. For further information on this new initiative, please refer to


Back to Top

9-13 . (2010-05-23) CfP Third International Workshop on EMOTION (satellite of LREC), Valetta, Malta

First Call for Papers

Third International Workshop on EMOTION (satellite of LREC):
CORPORA FOR RESEARCH ON EMOTION AND AFFECT

Sunday, 23 May 2010, Mediterranean Conference Centre, Valletta, Malta

In association with the 7th International Conference on Language Resources and Evaluation (LREC 2010), main conference 19-21 May 2010.

Recognition of emotion in speech has recently matured into one of the key disciplines in speech analysis, serving next-generation human-machine and human-robot communication and media retrieval systems. However, compared to automatic speech and speaker recognition, for which several hours of speech from a multitude of speakers in a great variety of languages are available, sparseness of resources has accompanied emotion research to the present day: genuine emotion is hard to collect, ambiguous to annotate, and tricky to distribute due to privacy preservation. The few available corpora suffer from a number of issues owing to the peculiarity of this young field: as in no related task, different forms of modelling, ranging from discrete over complex to continuous emotions, coexist, and ground truth is never solid due to the often highly divergent perception of the mostly very few annotators. Given the data sparseness - the most widely used corpora feature below 30 minutes of speech - cross-validation without strict test, development and training partitions, and without strict separation of speakers during partitioning, is the predominant evaluation strategy, which is obviously sub-optimal. Acting of emotions was often seen as a solution to the desperate need for data, but this often resulted in further restrictions such as little variation of spoken content or few speakers. As a result, many interesting and potentially promising ideas cannot be addressed, such as clustering of speakers or the influence of language, culture, speaker health state, etc.

Previous LREC workshops on Corpora for Research on Emotion and Affect (at LREC 2006 and 2008) have helped to consolidate the field, and in particular there is now growing experience not only of building databases but also of using them to build systems (for both synthesis and detection). This workshop aims to continue that process, and lays particular emphasis on showing how databases can be or have been used for system building.

Papers are invited in the area of corpora for research on emotion and affect. Topics include, but are not limited to:

+ Novel corpora of affective speech in audio and multimodal data, in particular with a high number of speakers and high diversity (language, age, speaking style, health state, etc.)
+ Case studies of the way databases have been or can be used for system building
+ Measures for quantitative corpus quality assessment
+ Standardisation of corpora and labels for cross-corpus experimentation
+ Mixture of emotions (i.e. complex or blended emotions)
+ Real-life applications
+ Long-term recordings for intra-speaker variation assessment
+ Rich and novel annotations and annotation types
+ Communications on testing protocols
+ Evaluations on novel or multiple corpora

ORGANISING COMMITTEE

Laurence Devillers / Björn Schuller (LIMSI-CNRS, France)
Roddy Cowie / Ellen Douglas-Cowie (Queen's University, UK)
Anton Batliner (Universität Erlangen-Nürnberg, Germany)

Contact: Laurence Devillers and Björn Schuller

IMPORTANT DATES

Deadline for 1500-2000 word abstract submission: 12 February
Notification of acceptance: 12 March
Final version of accepted paper: 22 March
Workshop (full day): 23 May

SUBMISSIONS

The workshop will consist of paper and poster presentations. Submitted abstracts for oral and poster presentations must consist of about 1500-2000 words. Final submissions should be 4 pages long, must be in English, and follow the LREC 2010 submission guidelines. Papers must be submitted via the START page of LREC 2010. When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. For further information on this new initiative, please refer to

Following this initiative, all contributions shall provide an additional corpus description according to a template (with example) provided by the organisers at the time of submission. This information will consist of site, domain, classes or dimensions with definition, context, language(s), spoken content, type, status, size, speaker and instance numbers, total duration, recording, encoding and storage details, annotator number, annotation state and format, and partitioning type. In addition, authors are asked to provide audio examples if possible. As soon as possible, authors are encouraged to send a brief email indicating their intention to participate, including their contact information and the topic they intend to address in their submissions.

Submission site:

Proceedings of the workshop will be printed by the LREC Local Organising Committee. Submitted papers will undergo peer review.

TIME SCHEDULE AND REGISTRATION FEE

The workshop will consist of a full-day session, and there will be time for collective discussions. For this full-day workshop, the registration fee will be specified on

Back to Top




9-14 . (2010-05-24) CfP LATA 2010 (Language and Automata Theory and Applications), Trier, Germany, May 24-28, 2010



LATA is a yearly conference in theoretical computer science and its applications. As a conference linked to the International PhD School in Formal Languages and Applications developed at Rovira i Virgili University (the host of the previous three editions and co-organizer of this one) in the period 2002-2006, LATA 2010 will reserve significant room for young scholars at the beginning of their career. It aims at attracting contributions from both classical theory fields and application areas (bioinformatics, systems biology, language technology, artificial intelligence, etc.).


 Topics of either theoretical or applied interest include, but are not limited to:

 - algebraic language theory

- algorithms on automata and words

- automata and logic

- automata for system analysis and programme verification

- automata, concurrency and Petri nets

- cellular automata

- combinatorics on words

- computability

- computational complexity

- computer linguistics

- data and image compression

- decidability questions on words and languages

- descriptional complexity

- DNA and other models of bio-inspired computing

- document engineering

- foundations of finite state technology

- fuzzy and rough languages

- grammars (Chomsky hierarchy, contextual, multidimensional, unification, categorial, etc.)

- grammars and automata architectures

- grammatical inference and algorithmic learning

- graphs and graph transformation

- language varieties and semigroups

- language-based cryptography

- language-theoretic foundations of artificial intelligence and artificial life

- neural networks

- parallel and regulated rewriting

- parsing

- pattern matching and pattern recognition

- patterns and codes

- power series

- quantum, chemical and optical computing

- semantics

- string and combinatorial issues in computational biology and bioinformatics

- symbolic dynamics

- term rewriting

- text algorithms

- text retrieval

- transducers

- trees, tree languages and tree machines

- weighted machines





 LATA 2010 will consist of:

 - 3 invited talks

- 2 invited tutorials

- refereed contributions

- open sessions for discussion in specific subfields, on open problems, or on professional issues (if requested by the participants)


Invited speakers:

John Brzozowski (Waterloo), Complexity in Convex Languages

Alexander Clark (London), Three Learnable Models for the Description of Language

Lauri Karttunen (Palo Alto), to be announced (tutorial)

Borivoj Melichar (Prague), Arbology: Trees and Pushdown Automata

Anca Muscholl (Bordeaux), Communicating Automata (tutorial)



Organising Committee:

Adrian Horia Dediu (Tarragona)

Henning Fernau (Trier, co-chair)

Maria Gindorf (Trier)

Stefan Gulan (Trier)

Anna Kasprzik (Trier)

Carlos Martín-Vide (Brussels, co-chair)

Norbert Müller (Trier)

Bianca Truthe (Magdeburg)


Authors are invited to submit papers presenting original and unpublished research. Papers should not exceed 12 single-spaced pages and should be formatted according to the standard format for Springer Verlag's LNCS series. Submissions have to be uploaded at:


 A volume of proceedings published by Springer in the LNCS series will be available by the time of the conference.

 A special issue of the Journal of Computer and System Sciences (Elsevier) will be later published containing refereed extended versions of some of the papers contributed to the conference. Submissions to it will be only by invitation.

 A special issue of another major journal containing papers oriented to applications is under consideration.


The period for registration will be open from September 1, 2009 until May 24, 2010. The registration form can be found at the website of the conference:

 Early registration fees: 500 Euro

Early registration fees (PhD students): 400 Euro

Late registration fees: 530 Euro

Late registration fees (PhD students): 430 Euro

On-site registration fees: 550 Euro

On-site registration fees (PhD students): 450 Euro

 At least one author per paper should register. Papers that do not have a registered author by February 15, 2010 will be excluded from the proceedings.

 Fees comprise access to all sessions, one copy of the proceedings volume, coffee breaks, lunches, excursion, and conference dinner.


 Early (resp. late) registration fees must be paid by bank transfer before February 15, 2010 (resp. May 14, 2010) to the conference series account at Open Bank (Plaza Manuel Gomez Moreno 2, 28020 Madrid, Spain): IBAN: ES1300730100510403506598 - Swift code: OPENESMMXXX (account holder: Carlos Martin-Vide & URV – LATA 2010).

 Please write the participant’s name in the subject of the bank form. Transfers should not involve any expense for the conference.

 On-site registration fees can be paid only in cash. A receipt for the payment will be provided on site.

Besides paying the registration fees, it is required to fill in the registration form at the website of the conference.



 An award will be offered to the authors of the two best papers accepted to the conference. Only papers fully authored by PhD students are eligible. The award intends to cover their travel expenses.


Important dates:

Paper submission: December 3, 2009

Notification of paper acceptance or rejection: January 21, 2010

Final version of the paper for the LNCS proceedings: February 3, 2010

Early registration: February 15, 2010

Late registration: May 14, 2010

Starting of the conference: May 24, 2010

Submission to the post-conference special issue(s): August 27, 2010



 LATA 2010

Universität Trier

Fachbereich IV – Informatik

Campus II, Behringstraße

D-54286 Trier

 Phone: +49-(0)651-201-2836

Fax: +49-(0)651-201-3954

Back to Top

9-15 . (2010-06-01) NAACL-HLT-10: Call for Tutorial Proposals

NAACL-HLT-10: Call for Tutorial Proposals


Proposals are invited for the Tutorial Program of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT) 2010 Conference. The conference will be held from June 1 to 6, 2010 in Los Angeles, California. The tutorials will be held on Tuesday, June 1.


We seek proposals for half-day (or exceptionally full-day) tutorials on all topics in computational linguistics, speech processing, information extraction and retrieval, and natural language processing, including their theoretical foundations, algorithms, intersections, and applications. Tutorials will normally move quickly, but they are expected to be accessible, understandable, and of interest to a broad community of researchers.


Information on the tutorial instructor payment policy can be found at 


PLEASE NOTE: Remuneration for Tutorial presenters is fixed according to the above policy and does not cover registration fees for the main conference.




Proposals for tutorials should contain:

  1. A title and brief description of the tutorial content and its relevance to the NAACL-HLT community (not more than 2 pages).
  2. A brief outline of the tutorial structure showing that the tutorial's core content can be covered in a three-hour slot (including a coffee break). In exceptional cases six-hour tutorial slots are available as well.
  3. The names, postal addresses, phone numbers, and email addresses of the tutorial instructors, including a one-paragraph statement of their research interests and areas of expertise.
  4. A list of previous venues and approximate audience sizes, if the same or a similar tutorial has been given elsewhere; otherwise an estimate of the audience size.
  5. A description of special requirements for technical equipment (e.g., internet access).

Proposals should be submitted by electronic mail, in plain ASCII text, no later than January 15, 2010. The subject line should be: "NAACL HLT 2010: TUTORIAL PROPOSAL".



  1. Proposals will not be accepted by regular mail or fax, only by email to:
  2. You will receive an email confirmation from us within 24 hours that your proposal has been received.



Accepted tutorial speakers will be notified by February 1, 2010, and must then provide abstracts of their tutorials for inclusion in the conference registration material by March 1, 2010. The description should be in two formats: an ASCII version that can be included in email announcements and published on the conference web site, and a PDF version for inclusion in the electronic proceedings (detailed instructions will be given). Tutorial speakers must provide tutorial materials, at least containing copies of the course slides as well as a bibliography for the material covered in the tutorial, by April 15, 2010.



Important dates:

  • Submission deadline for tutorial proposals: January 15, 2010
  • Notification of acceptance: February 1, 2010
  • Tutorial descriptions due: March 1, 2010
  • Tutorial course material due: April 15, 2010
  • Tutorial date: June 1, 2010


  • Jason Baldridge, The University of Texas at Austin
  • Peter Clark, The Boeing Company
  • Gokhan Tur, SRI International

Please send inquiries concerning NAACL-HLT-10 tutorials to


Back to Top

9-16 . (2010-06-05) CfP NAACL HLT 2010, ACL 2010 and COLING 2010

NAACL HLT 2010, ACL 2010 and COLING 2010
                             JOINT CALL FOR WORKSHOP PROPOSALS
                           * * * Proposal deadline: Oct 30, 2009 * * *
The Association for Computational Linguistics (ACL) and the
International Committee on Computational Linguistics (ICCL) invite
proposals for workshops to be held in conjunction with one of the three
2010 flagship conferences in computational linguistics: NAACL HLT 2010,
ACL 2010 and COLING 2010. We solicit proposals on any topic of interest
to the ACL/ICCL community. Workshops will be held at one of the
following conference venues:
    * NAACL HLT 2010 is the 11th annual meeting of the North American
chapter of the Association for Computational Linguistics. It will be
held in Los Angeles, June 1-6, 2010. The dates for the NAACL HLT
workshops will be June 5-6. The webpage for NAACL HLT 2010 is:
    * The 48th annual meeting of the ACL (ACL 2010) will be held in
Uppsala, July 11-16, 2010. The ACL workshops will be held July 15-16.
The webpage for ACL 2010 is
    * The 23rd International Conference on Computational Linguistics
(COLING 2010) will be held in Beijing, August 23-27, 2010. There will be
pre-conference workshops on August 21-22, and post-conference workshops
on August 28. The webpage for the conference is:
As in 2009, we will coordinate the submission and reviewing of workshop
proposals for all three ACL/ICCL 2010 conferences.
Proposals for workshops should contain:
    * A title and brief (2-page max) description of the workshop topic
and content.
    * The desired workshop length (one or two days), and an estimate of
the audience size.
    * The names, postal addresses, phone numbers, and email addresses of
the organizers, with one-paragraph statements of their research
interests and areas of expertise.
    * A list of potential members of the program committee, with an
indication of which members have already agreed.
    * A description of any shared tasks associated with the workshop.
    * A description of special requirements for technical needs.
    * A venue preference specification.
The venue preference specification should list the venues at which the
organizers would be willing to present the workshop (NAACL HLT, ACL, or
COLING). A proposal may specify one, two, or three acceptable workshop
venues; if more than one venue is acceptable, the venues should be
preference-ordered. There will be a single workshop committee,
coordinated by the three sets of workshop chairs. This single committee
will review the quality of the workshop proposals. Once the reviews are
complete, the workshop chairs will work together to assign workshops to
each of the three conferences, taking into account the location
preferences given by the proposers.
The ACL has a set of policies on workshops. You can find the ACL's
general policies on workshops at,
the financial policy for workshops at,
and the financial policy for SIG workshops at
This year we will be using the START system for submission and reviewing
of workshop proposals. Please submit proposals to no later than 12 Midnight,
Pacific Standard Time, October 30, 2009.
Notification of acceptance of workshop proposals will occur no later
than November 20, 2009. Since the three ACL/ICCL conferences will occur
at different times, the timescales for the submission and reviewing of
workshop papers, and the preparation of camera-ready copies, will be
different for the three conferences. Suggested timescales for each of
the conferences are given below.
Oct 30, 2009     Workshop proposal deadline
Nov 20, 2009     Notification of acceptance
NAACL 2010    
Dec 18, 2009: Proposed workshop CFP
Mar 1, 2010: Proposed paper due date
Mar 30, 2010: Proposed notification of acceptance
Jun 5-6, 2010: Workshops
ACL 2010    
Jan 18, 2010: Proposed workshop CFP
Apr 5, 2010: Proposed paper due date
May 6, 2010: Proposed notification of acceptance
Jul 15-16, 2010: Workshops
COLING 2010    
Feb 25, 2010: Proposed workshop CFP
May 30, 2010: Proposed paper due date
Jun 30, 2010: Proposed notification of acceptance     
Aug 21-22, 2010: Pre-conference workshops
Aug 28, 2010: Post-conference workshops
Workshop Co-Chairs
    * Richard Sproat, NAACL, Oregon Health & Science University
    * David Traum, NAACL, University of Southern California
    * Pushpak Bhattacharyya, ACL, Indian Institute of Technology, Bombay
    * David Weir, ACL, University of Sussex
    * Noah Smith, COLING, Carnegie Mellon University
    * Takenobu Tokunaga, COLING, Tokyo Institute of Technology
    * Haifeng Wang, COLING, Toshiba (China) Research and Development Center

For inquiries, send email to:

Back to Top

9-17 . (2010-06-21) Second International Workshop on Quality of Multimedia Experience, QoMex'10

9-18 . (2010-07-12) eNTERFACE’10 - the 6th Intl. Summer Workshop on Multimodal Interfaces, Amsterdam

eNTERFACE’10 - the 6th Intl. Summer Workshop on Multimodal Interfaces

Amsterdam, the Netherlands,

July 12th – August 6th, 2010


Call for Participation - apologies for cross-posting



The eNTERFACE workshops aim at establishing a tradition of collaborative, localized research and development work by gathering, in a single place, a team of leading professionals in multimodal man-machine interfaces together with students (both graduate and undergraduate), to work on a pre-specified list of challenges, for 4 complete weeks. In this respect, it is an innovative and intensive collaboration scheme, designed to allow researchers to integrate their software tools, deploy demonstrators, collect novel databases, and work side by side with a great number of experts.

Outcomes of synergy and success stories of past eNTERFACE Workshops held in Mons (2005), Dubrovnik (2006), Istanbul (2007), Paris (2008), and Genova (2009) can be seen at
The Intelligent Systems Lab Amsterdam of the University of Amsterdam is organizing the 2010 edition of the Workshop.

Senior researchers, PhD, MS, or undergraduate students interested in participating in the Workshop should send their application by emailing the Organizing Committee at on or before March 1, 2010 (extended). The application should contain:

-       A short CV.
-       A list of three preferred projects to work on.
-       A list of interests/skills to offer for these projects.
-       Possible dates of participation (full/partial).

The workshop is FREE for all participants, but participants must cover their own travel and accommodation expenses. Information about the venue and accommodation is provided on the eNTERFACE’10 website:


eNTERFACE'10 will welcome students, researchers, and seniors, working in teams on the following projects:

#01 CoMediAnnotate: a usable multimodal annotation framework
#02 Looking around in a virtual world
#03 Parameterized user modelling of people with disabilities and simulation of their behaviour in a virtual environment
#04 Continuous interaction for ECAs
#05 Multimodal Speaker Verification in NonStationary Noise Environments
#06 Vision based Hand Puppet
#07 Audio-visual speech recognition
#08 Affect-responsive interactive photo-frame
#09 Automatic Fingersign to Speech Translator

Full descriptions of the projects are available at:



eNTERFACE'10 Scientific Committee:

Lale Akarun, Boğaziçi University, Turkey
Antonio Camurri, University of Genova, Italy
Christophe d'Alessandro, CNRS-LIMSI, Orsay, France
Thierry Dutoit, Faculté Polytechnique de Mons, Belgium
Theo Gevers, University of Amsterdam, The Netherlands
Ben Kröse, University of Amsterdam, The Netherlands
Maurizio Mancini, University of Genova, Italy
Panos Markopoulos, Technical University Eindhoven, The Netherlands
Ferran Marques, Universitat Politécnica de Catalunya, Spain
Ramon Morros, Universitat Politécnica de Catalunya, Spain
Anton Nijholt, Twente University, The Netherlands
Igor Pandzic, Zagreb University, Croatia
Catherine Pelachaud, TELECOM Paris-Tech, France
Albert Ali Salah, University of Amsterdam, The Netherlands
Bülent Sankur, Bogazici University, Turkey
Ben Schouten, FONTYS, The Netherlands
Bjorn Schuller, Technical University of Munich, Germany
Nicu Sebe, University of Trento, Italy
Alessandro Vinciarelli, IDIAP, Switzerland
Gualtiero Volpe, University of Genova, Italy


Back to Top

9-19 . (2010-07-15)CfP ACL 2010 Workshop on Domain Adaptation for Natural Language Processing (DANLP 2010) Sweden


            ACL 2010 Workshop on Domain Adaptation
          for Natural Language Processing (DANLP 2010)

              July 15, 2010, Uppsala, Sweden


Most modern Natural Language Processing (NLP) systems are subject to
the well-known problem of lack of portability to new domains/genres:
there is a substantial drop in their performance when tested on data
from a new domain, i.e., their test data is drawn from a related but
different distribution than their training data. This problem is
inherent in the assumption of independent and identically distributed
(i.i.d.) variables for machine learning systems, but has started to
get attention only in recent years. The need for domain adaptation
arises in almost all NLP tasks: part-of-speech tagging, semantic role
labeling, statistical parsing and statistical machine translation, to
name but a few.

Studies on supervised domain adaptation (where there are limited
amounts of annotated resources in the new domain) have shown that
baselines comprising very simple models (e.g. models based only on
source-domain data, only target-domain data, or the union of the two)
achieve relatively high performance and are "surprisingly difficult to
beat" (Daume III, 2007). Thus, one conclusion from that line of work
is that as long as there is a reasonable (often even small) amount of
labeled target data, it is often more fruitful to just use that.
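To make the comparison concrete, the three simple baselines mentioned above can be sketched as follows. This is an illustrative example only: the synthetic Gaussian data generator and the logistic-regression learner are assumptions for demonstration, not part of the cited work.

```python
# Sketch of the three simple domain-adaptation baselines: train on
# source-only, target-only, or the union of both, and test on the
# target domain. Data and learner are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(shift, n):
    # Two Gaussian classes; `shift` moves both class means to
    # simulate a related-but-different target distribution.
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

Xs, ys = make_domain(shift=0.0, n=500)        # large labeled source set
Xt, yt = make_domain(shift=1.0, n=50)         # small labeled target set
Xtest, ytest = make_domain(shift=1.0, n=500)  # target-domain test set

baselines = {
    "source-only": (Xs, ys),
    "target-only": (Xt, yt),
    "union": (np.vstack([Xs, Xt]), np.concatenate([ys, yt])),
}
for name, (X, y) in baselines.items():
    acc = LogisticRegression().fit(X, y).score(Xtest, ytest)
    print(f"{name}: {acc:.2f}")
```

Even on toy data like this, the target-only and union baselines are typically hard to beat once some labeled target data exists, which is the point made above.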

In contrast, semi-supervised adaptation (i.e., no annotated resources
in the new domain) is a much more realistic situation but is clearly
also considerably more difficult.  Current studies on semi-supervised
approaches show very mixed results. For example, Structural
Correspondence Learning (Blitzer et al., 2006) was applied
successfully to classification tasks, while only modest gains could be
obtained for structured output tasks like parsing. Many questions thus
remain open.

The goal of this workshop is to provide a meeting-point for research
that approaches the problem of adaptation from the varied perspectives
of machine-learning and a variety of NLP tasks such as parsing,
machine-translation, word sense disambiguation, etc.  We believe there
is much to gain by treating domain-adaptation as a general learning
strategy that utilizes prior knowledge of a specific or a general
domain in learning about a new domain; here the notion of a 'domain'
could be as varied as child language versus adult-language, or the
source-side re-ordering of words to target-side word-order in a
statistical machine translation system.

Sharing insights, methodologies and successes across tasks will thus
contribute towards a better understanding of this problem. For
instance, self-training the Charniak parser alone was not effective
for adaptation (it has been common wisdom that self-training is
generally not effective), but self-training with a reranker was
surprisingly highly effective (McClosky et al., 2006). Is this an
insight into adaptation that can be used elsewhere?  We believe that
the key to future success will be to exploit large collections of
unlabeled data in addition to labeled data. Not only because unlabeled
data is easier to obtain, but existing labeled resources are often not
even close to the envisioned target application domain. Directly
related is the question of how to measure closeness (or differences)
among domains.
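As an illustration of the self-training idea mentioned above, here is a minimal sketch: train on labeled source data, then repeatedly add confidently pseudo-labeled examples from an unlabeled target-domain pool. The synthetic data, the logistic-regression classifier, and the 0.95 confidence threshold are all assumptions for demonstration; the McClosky et al. result involved a parser with a reranker, not this setup.

```python
# Minimal self-training loop for semi-supervised adaptation:
# pseudo-label unlabeled target data with the current model and
# retrain on the confident examples. All settings are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
Xs = rng.normal(0.0, 1.0, (200, 2))            # labeled source data
ys = (Xs.sum(axis=1) > 0).astype(int)
Xu = rng.normal(0.5, 1.0, (400, 2))            # unlabeled target-domain pool

clf = LogisticRegression().fit(Xs, ys)
X_train, y_train = Xs.copy(), ys.copy()
for _ in range(3):                              # a few self-training rounds
    proba = clf.predict_proba(Xu)
    confident = proba.max(axis=1) > 0.95        # confidence threshold (assumed)
    if not confident.any():
        break
    X_train = np.vstack([X_train, Xu[confident]])
    y_train = np.concatenate([y_train, proba.argmax(axis=1)[confident]])
    Xu = Xu[~confident]                         # drop used examples from pool
    clf = LogisticRegression().fit(X_train, y_train)

print(len(y_train), "training examples after self-training")
```

Whether such a loop helps or hurts depends heavily on how reliable the pseudo-labels are, which is exactly the kind of analysis the workshop solicits.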

Workshop Topics

We especially encourage submissions on semi-supervised approaches to
domain adaptation with a deep analysis of models, data and results,
although we do not exclude papers on supervised adaptation. In
particular, we welcome submissions that address any of the following
topics or other relevant issues:

* Algorithms for semi-supervised DA
* Active learning for DA
* Integration of expert/prior knowledge about new domains
* DA in specific applications (e.g., Parsing, MT, IE, QA, IR, WSD)
* Automatic domain identification and model adjustment
* Porting algorithms developed for one type of problem structure to
another (e.g. from binary classification to structured-prediction problems)
* Analysis and negative results: in-depth analysis of results, i.e.
which model parts/parameters are responsible for successful adaptation;
what can we learn from negative results (impact of negative experimental
results on learning strategies)
* A complementary perspective: (better) generalization of ML models,
i.e. making NLP models more broad-coverage and domain-independent,
rather than adapted to a single domain
* Learning from multiple domains


Papers should be submitted via the ACL submission system:

All submissions are limited to 6 pages (including references) and
should be formatted using the ACL 2010 style file that can be found at:

As the reviewing will be blind, papers must not include the authors'
names and affiliations.  Submissions should be in English and should
not have been published previously. If essentially identical papers
are submitted to other conferences or workshops as well, this fact
must be indicated at submission time.

The submission deadline is 23:59 CET on April 5, 2010.

Important Dates

April 5, 2010: Submission deadline
May 6, 2010: Notification of acceptance
May 16, 2010: Camera-ready papers due
July 15, 2010: Workshop

Invited speaker

John Blitzer, University of California, United States


Organizers

Hal Daumé III, University of Utah, USA
Tejaswini Deoskar, University of Amsterdam, The Netherlands
David McClosky, Stanford University, USA
Barbara Plank, University of Groningen, The Netherlands
Jörg Tiedemann, Uppsala University, Sweden

Program Committee

Eneko Agirre, University of the Basque Country, Spain
John Blitzer, University of California, United States
Walter Daelemans, University of Antwerp, Belgium
Mark Dredze, Johns Hopkins University, United States
Kevin Duh, NTT Communication Science Laboratories, Japan (formerly
University of Washington, Seattle)
Philipp Koehn, University of Edinburgh, United Kingdom
Jing Jiang, Singapore Management University, Singapore
Oier Lopez de Lacalle, University of the Basque Country, Spain
Robert Malouf, San Diego State University, United States
Ray Mooney, University Texas, United States
Hwee Tou Ng, National University of Singapore, Singapore
Khalil Sima'an, University of Amsterdam, The Netherlands
Michel Simard, National Research Council of Canada, Canada
Jun'ichi Tsujii, University of Tokyo, Japan
Antal van den Bosch, Tilburg University, The Netherlands
Josef van Genabith, Dublin City University, Ireland
Yi Zhang, German Research Centre for Artificial Intelligence (DFKI GmbH)
and Saarland University, Germany


This workshop is kindly supported by the Stevin project PaCo-MT (Parse
and Corpus-based Machine Translation) .



Back to Top

9-20 . (2010-09-06) CfP Thirteenth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2010) Brno, Czech Republic

Back to Top

9-21 . (2010-09-08) CfP 21st Conference on Electronic Speech Signal Processing (ESSV)

Call for Papers

21st Conference on Electronic Speech Signal Processing (ESSV)

8 - 10 September 2010 in Berlin

Dear friends of our conference series,

In 2010, the Electronic Speech Signal Processing conference will again bring together those interested in speech technology, in both research and applications. After a long break, the event will once again be held in Berlin, at the Beuth University of Applied Sciences. Although this has traditionally been a German event, we also invite colleagues from abroad to contribute; the conference languages will therefore be German and English. The conference will again focus on speech signal processing at large. Potential topics of contributions include, but are not limited to:

  • Speech recognition and synthesis in embedded systems
  • Speech technology in vehicles
  • Speech technology and education
  • Speech technology for the disabled
  • Speech and multimedia
  • Applications to non-speech acoustic signals from biological, musical and technological fields

This is the twenty-first time that ESSV takes place. As always, the organizers strive to develop a scientifically sophisticated program reflecting the cutting edge of speech technology. We rely on your active cooperation and cordially invite you to contribute a talk or a poster. The proceedings will be published, as usual, in the series "Studientexte zur Sprachkommunikation" by TUDpress.

Paper Submission

More info about the proceedings, venue and accommodations will be updated regularly online at the following address:

You can also contact us by post, fax or E-mail at the following address:

Beuth Hochschule für Technik Berlin
Fachbereich Informatik und Medien
Prof. Dr.-Ing. habil. Hansjörg Mixdorff
Luxemburger Straße 10
13353 Berlin

Tel: 030 4504 2364
Fax: 030 4505 2013

Important Dates

  • Abstract Submission Deadline (max. 1 page):
    1 May 2010
  • Notification of Acceptance:
    15 May 2010
  • Deadline for conference papers to be published in the proceedings:
    15 July 2010

Local Organizers

Hansjörg Mixdorff
Sascha Fagel
Lutz Leutelt



Back to Top

9-22 . (2010-09-15) 52nd International Symposium ELMAR-2010

52nd International Symposium ELMAR-2010
September 15-17, 2010, Zadar, Croatia
Paper submission deadline: March 15, 2010

CALL FOR PAPERS

TECHNICAL CO-SPONSORS
IEEE Region 8
IEEE Croatia Section
IEEE Croatia Section Chapter of the Signal Processing Society
IEEE Croatia Section Joint Chapter of the AP/MTT Societies
EURASIP - European Assoc. Signal, Speech and Image Processing

CONFERENCE PROCEEDINGS INDEXED BY
IEEE Xplore, INSPEC and SCOPUS

TOPICS
--> Image and Video Processing
--> Multimedia Communications
--> Speech and Audio Processing
--> Wireless Communications
--> Telecommunications
--> Antennas and Propagation
--> Navigation Systems
--> Ship Electronic Systems
--> Power Electronics and Automation
--> Naval Architecture
--> Sea Ecology
--> Special Sessions Proposals - a special session consists of 5-6 papers which should present a unifying theme from a diversity of viewpoints

KEYNOTE TALKS
* Prof. Lajos Hanzo, University of Southampton, UK: Telepresence, the 'World-Wide Wait' and 'Green' Radios...
* Dr. Michael M. Bronstein, Technion - Israel Institute of Technology, Haifa, Israel: Non-rigid, non-rigid, non-rigid world
* Dr. Mikel M. Miller, AFRL Munitions Directorate, Eglin Air Force Base, Florida, USA: Got GPS? The Navigation Gap
* Dr. Panos Liatsis, City University London, UK: 3D reconstruction and stenosis quantification in CT angiograms

SUBMISSION
Papers accepted by two reviewers will be published in the conference proceedings, available at the conference and abstracted/indexed in the IEEE Xplore, INSPEC and SCOPUS databases. More info is available here:

SCHEDULE OF IMPORTANT DATES
Deadline for submission of full papers: March 15, 2010
Notification of acceptance mailed out by: May 10, 2010
Submission of (final) camera-ready papers: May 20, 2010
Preliminary program available online by: June 14, 2010
Registration forms and payment deadline: June 21, 2010

GENERAL CO-CHAIRS
Ive Mustac, Tankerska plovidba, Zadar, Croatia
Branka Zovko-Cihlar, University of Zagreb, Croatia

PROGRAM CHAIR
Mislav Grgic, University of Zagreb, Croatia

CONTACT INFORMATION
Prof. Mislav Grgic
FER, Unska 3/XII
HR-10000 Zagreb, Croatia
Telephone: + 385 1 6129 851
Fax: + 385 1 6129 717
E-mail: elmar2010 (at)
For further information please visit:
Back to Top

9-23 . (2010-09-22 ) INTERSPEECH 2010 Satellite Workshop on "Second Language Studies:Acquisition, Learning, Education and Technology"

                                      CALL FOR PAPERS
                 INTERSPEECH 2010 Satellite Workshop on
                      "Second Language Studies:
             Acquisition, Learning, Education and Technology"
                 Co-organized by AESOP, SLaTE, and LSSRL.
                         September 22-24, 2010,
                    Waseda University, Tokyo, Japan


Aim of workshop:
  INTERSPEECH 2010 Satellite Workshop on Second Language Studies will be
  held at the International Conference Center of Waseda University in Tokyo,
  immediately before the main conference. The aim of the workshop is for
  people working in speech science & engineering, linguistics, psychology, and
  language education to meet and discuss second language acquisition & learning,
  education, and technology. The workshop theme is interdisciplinary, ranging
  over but not exclusive to spoken and written L2 acquisition & learning,
  designing & constructing corpora for language research, speech science &
  engineering, and their application to education. All theoretical and
  practical topics in these areas will be considered.

Main topics include:
  a) Spoken and written L2 acquisition and learning
  b) Perception and production of L2 speech
  c) Phonetics and phonology of L2
  d) Psycholinguistics
  e) Language education and learning theories
  f) Data collection methods and corpus design
  g) Development of speech recognition and speech synthesis techniques for education
  h) Development of natural language processing techniques for education
  i) Practical and educational applications using speech and language technologies
  j) Intelligent tutoring systems using speech and language technologies
  k) Other topics related to L2 studies

Technical program:
  The workshop program will consist of oral & poster presentations, panel discussions,
  and demonstrations of educational systems using speech and language technologies.

Paper submission:
  Prospective authors are invited to submit 4-page full papers, including figures
  and references. All the papers will be handled and reviewed electronically.
  Detailed instructions on paper submission will be shown on the workshop website
  in April.

Important dates:
  Full paper submission             May 15
  Notification of acceptance        June 15
  Final paper submission            June 30
  Early registration deadline       July 17

  This workshop is co-organized by:
  AESOP: Asian English Speech cOrpus Project
  SLaTE: the ISCA SIG on Speech and Language Technology in Education
  LSSRL: Language and Speech Science Research Laboratories of Waseda University

For further information:
  If you want to receive more information, please email to:
  L2WS-org [AT]

Back to Top

9-24 . (2010-09-27) CfP 9th International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA 2010), Saint-Malo, France

 LVA/ICA 2010
       September 27-30, 2010 - Saint-Malo, France
           9th International Conference on
     Latent Variable Analysis and Signal Separation

        formerly the International Conference on
  Independent Component Analysis and Signal Separation



Ten years after the first workshop on Independent Component Analysis in
Aussois, the series of ICA conferences has shown the liveliness of the
community of theoreticians and practitioners working in this field.
While ICA and blind signal separation have become mainstream topics, new
approaches have emerged to solve problems involving signal mixtures or
various other types of latent variables: semi-blind models, matrix
factorization using Sparse Component Analysis (SCA), Non-negative Matrix
Factorization (NMF), Probabilistic Latent Semantic Indexing (PLSI), but
also tensor decompositions, Independent Vector Analysis (IVA),
Independent Subspace Analysis (ISA), ...

The 9th edition of the conference, renamed LVA/ICA to reflect this
evolution towards more general Latent Variable Analysis problems in
signal processing, will offer an interdisciplinary forum for scientists
and engineers to experience renewed theoretical surprises and face
real-world problems.

In addition to contributed papers (oral and poster presentations), the
meeting will feature keynote talks by leading researchers:

    Pierre Comon, University of Nice, France
    Stephane Mallat, Ecole Polytechnique, France
    Mark Girolami, University of Glasgow, UK
    Arie Yeredor, Tel-Aviv University, Israel

as well as a community-based evaluation campaign (SiSEC 2010), a panel
discussion session, and a special late-breaking / demo session.



Saint Malo (, the corsair city, is an ancient city and picturesque sea
resort located in Brittany, in the north-west of France.
Chateaubriand, Surcouf, Jacques Cartier... from writers to privateers
and sailors, many were the good men who hailed from Saint-Malo. As if in
honour of their pride and independence, the forts and ramparts of the
corsair city face the sea, adding to the city's charm and its
exceptional setting. To visitors and event-goers, the city offers the
beauty of its maritime views and the wealth of its historical heritage.
A lively city of 52,000 inhabitants all year round, Saint-Malo's heart
beats to the rhythm of the major events it hosts, festivals such as the
Etonnants Voyageurs or internationally renowned regattas such as the
Route du Rhum.



• April 7, 2010: Paper submission deadline
• June 15, 2010: Notification of acceptance
• June 30, 2010: Final paper due
• July 31, 2010: Late-breaking / demo / SiSEC abstract submission deadline

Detailed submission instructions will shortly be made available on the
conference website



Prospective authors are invited to submit papers in all areas of latent
variable analysis and signal separation, including but not limited to:
• Theoretical frameworks: probabilistic, geometric &
biologically-inspired modeling; flat, hierarchical & dynamic structures;
sparse coding; kernel methods; neural networks
• Models: linear & nonlinear models; continuous & discrete latent
variables; convolutive & noisy mixtures; linear & quadratic
time-frequency representations
• Algorithms: blind & semi-blind estimation; identification &
convergence conditions; local & evolutionary optimization; computational
complexity; adaptation & modularity
• Speech and audio data: source separation; denoising & dereverberation;
Computational Auditory Scene Analysis (CASA); Automatic Speech
Recognition (ASR)
• Images: segmentation; fusion; texture analysis; color imaging; coding;
scene analysis
• Biomedical data: functional imaging; BCI; genomic data analysis;
systems biology
• Unsolved and emerging problems: causality detection; feature
selection; data mining; control; psychology; social networks; finance;
artificial intelligence; real-time applications
• Resources: software; databases; objective & subjective evaluation

Papers must be original and must not be already published nor under
review elsewhere. Papers linked to a submission to SiSEC 2010 are highly
welcome. The proceedings will be published in Springer-Verlag’s Lecture
Notes in Computer Science (LNCS) Series.



Extended versions of selected papers will be considered for a special
issue of a journal.

The Best Student Paper Award will distinguish the work of a PhD student
with original scientific contributions and the quality of his/her
presentation at LVA/ICA 2010. Eligible papers must be first-authored and
presented by the PhD student during the Conference. Candidates will be
asked to notify their participation on the submission form. A prize of
400 € offered by the Fondation Metivier will be awarded to the winner.



A special session will be dedicated to the presentation of:
• early results and ideas that are not yet fully formalized and evaluated
• software and data of interest to the community, with focus on open
source resources
• signal separation systems evaluated in SiSEC 2010 but not associated
with a full paper

Presenters are invited to submit a non-reviewed abstract, which will be
included in the conference program but not published in the proceedings.

We look forward to receiving your technical contribution and meeting you
in Saint-Malo!

Remi Gribonval and Emmanuel Vincent
General Chairs

Vincent Vigneron and Eric Moreau
Technical Chairs


Back to Top

9-25 . (2011-05-19) Quatrièmes Journées de Phonétique Clinique, Strasbourg F

Fourth Clinical Phonetics Workshop
(Quatrièmes Journées de Phonétique Clinique, IVèmes JPC)

The Fourth Clinical Phonetics Workshop (IVèmes JPC) will take place from May 19 to 21, 2011 in Strasbourg. It follows the first, second, and third clinical phonetics workshops, held in Paris in 2005, Grenoble in 2007, and Aix-en-Provence in 2009.

The workshop will be organized by the Institut de Phonétique de Strasbourg (IPS) & U.R. 1339 Linguistique, Langues et Parole (LiLPa) - Equipe Parole et Cognition, together with the Maison Interuniversitaire des Sciences de l'Homme Alsace (MISHA).

The schedule, as well as submission and registration instructions, will follow shortly.

--
Rudolph Sock
Institut de Phonétique de Strasbourg (IPS) & Composante Parole et Cognition (PC)
E.A. 1339 - Linguistique, Langues et Parole (LiLPa)
Université de Strasbourg
22, rue René Descartes
67084 Strasbourg cedex
Téléphone: +33 3 68 85 65 68
Fax: +33 3 68 85 65 69
Back to Top

9-26 . (2010-11-08) CfP 12th International Conference on Multimodal Interfaces

Call for Papers: ICMI-MLMI 2010


12th International Conference on Multimodal Interfaces


7th Workshop on Machine Learning for Multimodal Interaction


Beijing, China, November 8-12, 2010


The Twelfth International Conference on Multimodal Interfaces and the Seventh Workshop on Machine Learning for Multimodal Interaction will be held jointly in Beijing, China, during November 8-12, 2010. The primary aim of ICMI-MLMI 2010 is to further scientific research within the broad field of multimodal interaction, methods, and systems, focusing on major trends and challenges, and working towards identifying a roadmap for future research and commercial success. The conference will continue to feature a single track with keynote speakers, technical paper presentations, poster sessions, a doctoral consortium, and demonstrations of state-of-the-art multimodal systems and concepts. The conference will be followed by workshops.


Topics of interest include, but are not limited to:

    - Multimodal input and output interfaces
    - Multimodal human behavior analysis
    - Machine learning methods for multimodal processing
    - Fusion techniques and hybrid architectures
    - Processing of language and action patterns
    - Gaze and vision-based interfaces
    - Speech and conversational interfaces
    - Pen-based interfaces
    - Haptic interfaces
    - Brain-computer interfaces
    - Cognitive modeling of users
    - Multi-biometric interfaces
    - Multimodal-multisensor interfaces
    - Interfaces for attentive and intelligent environments
    - Mobile, tangible and virtual/augmented multimodal interfaces
    - Distributed/collaborative multimodal interfaces
    - Tools and system infrastructure issues for designing multimodal interfaces
    - Evaluation of multimodal interfaces
    - AI techniques and adaptive multimodal interfaces


Paper Submission

There are two submission categories: regular papers and short papers. The page limit is 8 pages for regular papers and 4 pages for short papers.

Demo Submission

Proposals for demos shall be submitted to the demo chairs electronically. A two-page description with photographs of the demo is required.


Organizing Committee

General Chairs:
Wen Gao, Peking University
Chin-Hui Lee, Georgia Tech
Jie Yang, Carnegie Mellon University

Program Chairs:
Xilin Chen, Chinese Academy of Sciences
Maxine Eskenazi, Carnegie Mellon University
Zhengyou Zhang, Microsoft Research

Important Dates
    Workshop proposals due: April 1, 2010
    Workshop proposal acceptance notification: May 1, 2010
    Paper submission: May 20, 2010
    Author notification: July 20, 2010
    Camera-ready due: August 20, 2010
    Conference: Nov. 8-10, 2010
    Workshops: Nov. 11-12, 2010


Back to Top

9-27 . (2010-11-29) 2010 Int. Symposium on Chinese Spoken Language Processing (ISCSLP 2010) Taiwan


2010 International Symposium on Chinese Spoken Language Processing (ISCSLP 2010)
November 29 – December 3, 2010  -  Tainan and Sun Moon Lake, Taiwan

ISCSLP is the flagship conference of ISCA SIG-CSLP (the Special Interest Group on Chinese Spoken Language Processing of the International Speech Communication Association). ISCSLP 2010 will be held November 29 - December 3, 2010 in Tainan and Sun Moon Lake, Taiwan, hosted by National Cheng Kung University.
Tainan, located in south-western Taiwan, is the island's city of cultural origin, with many historical places and heritage sites. It is also a modern city with shopping centers, department stores, and night markets, and a visit offers a wonderful opportunity to experience Taiwanese culture. Sun Moon Lake, the largest lake in Taiwan and located at the island's center, is a beautiful alpine lake, its eastern part rounded like the sun and its western side shaped like a crescent moon. Its crystalline, emerald-green water reflects the hills and mountains that surround it on all sides, and its natural beauty is further enhanced by numerous cultural and historical sites.

We invite your participation in this premier conference, where a language of ancient civilizations embraces modern computing technology. ISCSLP 2010 will feature world-renowned plenary speakers, tutorials, exhibits, and a number of lecture and poster sessions on the following topics:
Speech Production and Perception
Phonetics and Phonology
Speech Analysis
Speech Coding
Speech Enhancement
Speech Recognition
Speech Synthesis
Language Modeling and Spoken Language Understanding
Spoken Dialog Systems
Spoken Language Translation
Speaker and Language Recognition
Computer-Assisted Language Learning
Indexing, Retrieval and Authoring of Speech Signals
Multi-Modal Interface including Spoken Language Processing
Spoken Language Resources and Technology Evaluation
Applications of Spoken Language Processing Technology 
Official Language & Publication
The official language of ISCSLP is English.
All accepted papers will be included in IEEE Xplore and indexed by EI Compendex.
Paper Submission
Authors are invited to submit original, unpublished work in English.
Papers should be submitted electronically via the conference website.
Each submission will be reviewed by two or more reviewers.
At least one author of each paper is required to register.
Important Dates
Full paper submission by July 15, 2010
Notification of acceptance by Aug. 30, 2010
Camera ready papers by Sep. 13, 2010
Registration to cover an accepted paper by Oct. 13, 2010

Back to Top

9-28 . (2010-09-22) CfP 7th ISCA Speech Synthesis Workshop (SSW7) Kyoto Japan

7th ISCA Speech Synthesis Workshop (SSW7)
Kyoto, Japan - September 22-24, 2010

The Seventh ISCA Tutorial and Research Workshop (ITRW) on Speech
Synthesis will take place at ATR, Kyoto, Japan, September 22-24, 2010.
It is co-sponsored by the International Speech Communication
Association (ISCA), the ISCA Special Interest Group on Speech
Synthesis (SynSIG), the National Institute of Information and
Communications Technology (NICT), and the Effective Multilingual
Interaction in Mobile Environments (EMIME) project.  The workshop will
be held as a satellite workshop of Interspeech 2010 (Chiba, Japan,
September 26-30, 2010).  This workshop follows the previous workshops
held in Autrans (1990), Mohonk (1994), Jenolan Caves (1998), Pitlochry
(2001), Pittsburgh (2004), and Bonn (2007), which aim to promote
research and development in all aspects of speech synthesis.

Workshop topics

Papers in all areas of speech synthesis technology are welcome, with
emphasis placed on:

* Spontaneous/expressive speech synthesis
* Speech synthesis in dialog systems
* Voice conversion/speaker adaptation
* Multilingual/crosslingual speech synthesis
* Automated methods for speech synthesis
* TTS for embedded devices
* Talking heads with animated conversational agents
* Applications of synthesis technologies to communication disorders
* Evaluation methods

Submissions for the technical program:

The workshop program will consist of invited lectures, oral and poster
presentations, and panel discussions.  Prospective authors are invited
to submit full-length, 4-6 page papers, including figures and
references.  All papers will be handled and reviewed electronically.
The SSW7 website will provide you with further information.

Important dates:

* May 7, 2010: Paper submission deadline
* June 30, 2010: Acceptance/rejection notice
* June 30, 2010: Registration begins
* July 9, 2010: Revised paper due
* September 22-24, 2010: Workshop at ATR in Kyoto

We look forward to seeing you in Kyoto.
Yoshinori Sagisaka
Keiichi Tokuda
Co-chairs of SSW7 organizing committee

Back to Top