1 . Editorial

 Dear members,


The ISCA Board is periodically and partially renewed: below you will find out how you can participate in the election of new members. The Board invites you to take part in this vote in large numbers.

The deadline for submitting papers to Interspeech 2009 has passed. Reviewers are now hard at work selecting your best papers for a successful conference in September.


 Prof. em. Chris Wellekens 

Institut Eurecom

Sophia Antipolis


Back to Top

2 . ISCA News


Back to Top

2-1 . VERY IMPORTANT: ISCA Board elections

Dear ISCA Member,

ISCA is currently electing nine ISCA members to the ISCA Board. An email with voting instructions and the bios of the nominees was sent to all ISCA members on May 1, 2009. If you are currently an ISCA member but have not received the voting instructions and bios, please contact the ISCA secretariat by sending an email to

As you may know, the ISCA Board currently has eleven members from nine countries (see full list below). Members are elected to the Board for a period of four years and no member may serve on the Board for more than two consecutive terms.

Eva Hajicová and Lin-shan Lee have served on the ISCA Board for two terms of four years and will leave the board in September 2009.

The following persons have now served for one term of four years: Isabel Trancoso (President), Jean-François Bonastre (Vice President, ITRW Coordinator), David House (Secretary) and Michael Picheny (SIG Coordinator and Industrial Liaison). They have all indicated their willingness to serve on the Board for another four years if re-elected.

Thirteen ISCA members have put their names forward for nomination to the Board. These are: Martine ADDA-DECKER (France), Jean-Francois BONASTRE (France), Nick CAMPBELL (Ireland), Keikichi HIROSE (Japan), David HOUSE (Sweden), Haizhou LI (Singapore), Geza NEMETH (Hungary), Douglas O'SHAUGHNESSY (Canada), Michael PICHENY (USA), Yannis STYLIANOU (Greece), Isabel TRANCOSO (Portugal), Chiu-yu TSENG (Taiwan), Gokhan TUR (USA).

According to the ISCA by-laws, elections to the Board may take place in April or May every two years.
The Board has recently decided to increase its size from 11 to 14 Board members to better meet the growing activities of our association. Therefore, NINE people will now be elected to the Board.

According to ISCA's current statutes only members of ISCA can become members of the Board. No more than two members of the Board are permitted from one country of residence.

The voting procedure is as follows; we have adopted the procedure frequently used by the IEEE in its elections. Nine candidates are to be elected. You may cast between 1 and 13 votes: vote for as many candidates as you like, but give no more than one vote to any single candidate. When voting, please consider the following.
- When you vote for 9 candidates, you specify exactly which persons you want to see on the Board.
- When you vote for significantly fewer than nine candidates, e.g. for one or two, you indicate a strong preference for these candidates without necessarily voting against the others.
- When you vote for significantly more than 9 candidates, you indicate that you find these candidates appropriate for the Board (without a preference for an individual candidate) and do not want the others to be elected.

Those candidates receiving the highest numbers of votes will be elected to the Board. If two candidates with the same country of residence are among the top nine (and that country is already represented on the Board), only the one with the higher number of votes will be selected. A tie between positions 9 and 10 will be resolved by the President's casting vote.
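For illustration only, the seat-allocation rule described above (top nine by vote count, with at most two Board members per country of residence) can be sketched as follows. The candidate names, countries and vote totals below are hypothetical, not actual election data, and the function name is our own.

```python
from collections import Counter

def allocate_seats(votes, country, seats=9, per_country_cap=2, already_seated=()):
    """Pick the top `seats` candidates by vote count, skipping any candidate
    whose country of residence has already reached `per_country_cap` members
    (counting both continuing Board members and newly elected ones)."""
    country_count = Counter(country[m] for m in already_seated)
    elected = []
    # Sort by votes, highest first; an exact tie at the cut-off would need
    # the President's casting vote, which this sketch does not model.
    for cand, _ in sorted(votes.items(), key=lambda kv: -kv[1]):
        if len(elected) == seats:
            break
        if country_count[country[cand]] >= per_country_cap:
            continue  # country already fully represented
        elected.append(cand)
        country_count[country[cand]] += 1
    return elected

# Hypothetical ballot totals (not real election data):
votes = {"A": 120, "B": 110, "C": 95, "D": 90}
country = {"A": "FR", "B": "FR", "C": "FR", "D": "PT"}
print(allocate_seats(votes, country, seats=3))  # → ['A', 'B', 'D']
```

Note how candidate C, despite outpolling D, is skipped because two French candidates are already seated, which mirrors the country-of-residence constraint in the statutes.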

This election is carried out electronically via SurveyMonkey. To vote, simply click on the link provided in the email to access the ballot. You will need to enter your ISCA membership number when voting.

You must vote before the deadline. DEADLINE: MAY 31, 2009, 18:00 UTC

Thank you for your kind cooperation.

Bernd Möbius, Treasurer, ISCA

Back to Top

3 . SIG's News: SIG-Iberian languages

Report SIG-IL

March 2009

1.  Introduction


Iberian languages (henceforth IL) are amongst the most widely spoken languages in the world. Nowadays, 628 million people on virtually all continents have Spanish, Portuguese, Catalan, Basque, Galician, etc. as their official language. This widespread usage is also accompanied by a growing technical use of these languages. Spanish and Portuguese rank third and seventh in terms of the number of web users (113 and 51 million users, respectively), whereas Portuguese ranks second in terms of the fastest growing web usage.


In September 2005, a significant number of researchers from this community got together in Lisbon, during Interspeech 2005, to discuss the creation of a Special Interest Group in the framework of the International Speech Communication Association (ISCA). The idea of a SIG to promote the advancement of speech science and technology in the Iberian languages was promptly accepted, and the SIG now has 114 members from 10 countries on different continents, giving it a substantial membership and broad geographic coverage.


On the SIG-IL website ( ) you can find all information about the SIG. SIG-IL also uses a Yahoo group ( ), including a distribution email list, to exchange information.


In this report we concentrate on 2008 and the start of 2009. Since the change of Board took place near the middle of last year, most of this work must be credited to the first SIG-IL board.


2.  New Events supported by SIG-IL


2.1.              PROPOR 2008

The International Conference on Computational Processing of Portuguese (PROPOR), formerly the Workshop on Computational Processing of the Portuguese Language, is the main event in Natural Language Processing focused on Portuguese and on the theoretical and technological issues related to this specific language.


A total of 63 papers (more than half related to speech) were submitted to PROPOR 2008. Each submitted paper received a careful, triple-blind review by the program committee or by reviewers appointed by it. The reviewing process led to the selection of 21 regular papers (11 on speech processing) for oral presentation and 16 short papers (half on speech) for poster sessions.


The main speech topics were: Speech Analysis; Speech Synthesis; Speech Recognition; and Natural Language Processing Tools and Applications. Short papers and related posters were organized according to the two main areas of PROPOR: Natural Language Processing and Speech Technology.


PROPOR 2008 brought two important novelties: the two main areas of the conference were more equally represented, and a special session dedicated to Applications of Portuguese Speech and Language Technologies was included. The special session, promoted by the Microsoft Language Development Centre (MLDC), provided an opportunity for the university and industrial communities (bringing them together being one of the SIG-IL aims) working on Portuguese natural language processing and speech technology to report their most recent products, systems, resources and tools for Portuguese.


The accepted papers were published by Springer as LNAI volume 5190, "Computational Processing of the Portuguese Language" (ISBN-10: 3540859799; ISBN-13: 978-3540859796).


Many SIG-IL members working on Portuguese attended and/or were part of the Technical Committee. Worth mentioning is the presentation of research on Galician. Information on ISCA and SIG-IL was distributed to all participants, promoting the growth of SIG membership.

2.2.              V Jornadas en Tecnología del Habla

This workshop took place from 12 to 14 November 2008 in Bilbao (Spain), intended as a meeting point for communicating research results and exchanging opinions on the development of this research area in the Iberian languages. The event was organized by the Aholab Signal Processing Group of the University of the Basque Country and supported by the Spanish Thematic Network on Speech Technologies (RTTH) and ISCA.


The workshop featured technical presentations, special sessions, invited lectures and other events, all included in the workshop registration. Previous editions took place in Seville (2000), Granada (2002), Valencia (2004) and Zaragoza (2006).


More than 100 papers were accepted, on topics including: Speech recognition and understanding, Speech synthesis, Signal processing and feature extraction, Natural language processing, Dialogue systems, Automatic translation, Speech perception, Prosody, Speech coding, Speaker and language identification, Speech and language resources, Information retrieval, Applications for handicapped persons, Applied systems for advanced interaction, and Expressive speech.


During the conference there were four invited lectures:

  • Voice Conversion: State of the Art and Perspectives, Yannis Stylianou (University of Crete, Greece)
  • Embodied Conversational Agents in Verbal and Non-Verbal Communication, David House (KTH - Royal Institute of Technology, Sweden)
  • Applications of speech technologies in CALL (Computer Aided Language Training) and CAPT (Computer Aided Pronunciation Training) systems, Néstor Becerra Yoma (Universidad de Chile)
  • Next Generation Spoken Language Interfaces, Giuseppe Riccardi (University of Trento, Italy)


Three evaluations/competitions with more than 15 participants were also organised:

  • Speech Synthesis
  • Translation from Euskera to Spanish
  • Language verification


All members of the local committee are members of the SIG-IL.


2.3.              Advanced Voice Function Assessment 2009

SIG-IL, the Spanish Thematic Network on Speech Technologies (RTTH) and ISCA are supporting the Advanced Voice Function Assessment Workshop 2009 (AVFA2009), which will be held in Madrid from 18 to 20 May 2009. SIG-IL supported this event by helping to distribute the information to SIG-IL members and by proposing reviewers.


AVFA2009 offers a great opportunity for researchers from all over the world, representing different disciplines within the field of voice function assessment, to meet, share their current knowledge and present new ideas. The emphasis of AVFA2009 is on both basic and applied research related to the evaluation of voice quality and diagnosis schemes, as well as on the results of voice treatments.


Contributions on Voice Physiology and Biomechanics, Modelling of Voice Production, Objective Assessment of Voice Quality, Diagnosis and Evaluation Protocols, Voice Database Collection and Management, Substitution Voices, Evaluation of Clinical Treatments, Esophageal Voices and related fields are welcome.


All members of the local committee are members of the SIG-IL.



2.4.              II Microsoft Workshop on Speech Technology

The Microsoft Language Development Center - MLDC (located in Portugal) has proposed organising the II Microsoft Workshop on Speech Technology (the first one was held in 2007). SIG-IL will support this event.


3.  Other important achievements


3.1.              Speech Communication Special Issue on Iberian Languages

By Isabel Trancoso, Nestor Becerra-Yoma, Plínio A. Barbosa & Rubén San Segundo


The purpose of this Special Issue has been to present recent progress and significant advances in all areas of speech science and technology research in the context of IL. We invited submissions addressing topics specific to IL and/or issues raised by analyses of spoken data that shed light on speech science and linguistic theories regarding these languages. The goal was not to attract submissions that merely apply standard techniques to IL data, but rather research presenting relevant optimisations of current technology and systems, and work exploring specific features of IL spoken corpora.


This call for papers attracted a fairly significant number of submissions (26), from Spain, Portugal, Brazil, Chile, Cuba, and other, non-IL countries. This issue includes only 12 papers; four others are still under review. The range of topics of the current set of manuscripts is very wide, covering speech science (production, prosody), speech technology (synthesis, recognition, language/accent and speaker identification), and spoken language systems (understanding, dialogue, translation, spoken term detection, capitalisation and punctuation).


Although the range of topics was very wide, the papers in this issue can be grouped into the following themes: two papers focus on improving language translation systems by adding linguistic information; two present linguistic studies of Iberian languages; four consider speech prosody aspects of Iberian languages; two address language and speaker identification/verification; one deals with speech recognition; one with dialogue management; and one with the development of, and first experiments with, a Spanish-to-sign-language translation system in a real domain.




Back to Top

4 . Future ISCA Conferences and Workshops (ITRW)


Back to Top

4-1 . (2009-06-25) ISCA Tutorial and Research Workshop on NON-LINEAR SPEECH PROCESSING

An ISCA Tutorial and Research Workshop on NON-LINEAR SPEECH PROCESSING (NOLISP'09)
25/06/2009 - DeadLine: 2009-03-15
Vic Catalonia Espagne
After the success of NOLISP'03 held in Le Croisic, NOLISP'05 in Barcelona and NOLISP'07 in Paris, we are pleased to present NOLISP'09, to be held at the University of Vic (Catalonia, Spain) on June 25-27, 2009. The workshop will feature invited lectures by leading researchers as well as contributed talks.
The purpose of NOLISP'09 is to present and discuss novel ideas, works and results related to alternative techniques for speech processing which depart from mainstream approaches. Prospective authors are invited to submit a 3 to 4 page paper proposal in English, which will be evaluated by the Scientific Committee. Final papers will be due one month after the workshop, to be included in the CD-ROM proceedings.
Contributions are expected in (but not restricted to) the following areas:
* Non-linear approximation and estimation
* Non-linear oscillators and predictors
* Higher-order statistics
* Independent component analysis
* Nearest neighbours
* Neural networks
* Decision trees
* Non-parametric models
* Dynamics of non-linear systems
* Fractal methods
* Chaos modelling
* Non-linear differential equations
All fields of speech processing are targeted by the workshop, namely: speech production, speech analysis and modelling, speech coding, speech synthesis, speech recognition, speaker identification/verification, speech enhancement/separation, speech perception, etc.
ADDITIONAL INFORMATION
Proceedings will be published in Springer-Verlag's Lecture Notes in Computer Science (LNCS) series. LNCS is published, in parallel to the printed books, in full-text electronic form. All contributions should be original and must not have been previously published, nor be under review for presentation elsewhere. A special issue of Speech Communication (Elsevier) on "Non-Linear and Non-Conventional Speech Processing" will also be published after the workshop. Detailed instructions for submission to NOLISP'09 and further information will be available at the conference Web site.
* March 15, 2009 - Submission (full papers)
* April 30, 2009 - Notification of acceptance
* September 30, 2009 - Final (revised) paper
Back to Top

4-2 . (2009-09-06) CfP INTERSPEECH 2009 Brighton UK

Interspeech 2009 - Call for Papers
Interspeech is the world's largest and most comprehensive
conference on Speech Science and Speech Technology. Interspeech
2009 will be held in Brighton, UK, 6-10 September 2009, and its
theme is Speech and Intelligence. We invite you to submit
original papers in any related area, including (but not limited to):
Human Speech Production, Perception And Communication
* Human speech production
* Human speech perception
* Phonology and phonetics
* Discourse and dialogue
* Prosody (production, perception, prosodic structure)
* Emotion and Expression
* Paralinguistic and nonlinguistic cues (e.g. emotion and
* Physiology and pathology
* Spoken language acquisition, development and learning
Speech And Language Technology
* Automatic Speech recognition
* Speech analysis and representation
* Audio segmentation and classification
* Speech enhancement
* Speech coding and transmission
* Speech synthesis and spoken language generation
* Spoken language understanding
* Accent and language identification
* Cross-lingual and multi-lingual processing
* Multimodal/multimedia signal processing
* Speaker characterisation and recognition
Spoken Language Systems And Applications
* Speech Dialogue systems
* Systems for information retrieval from spoken documents
* Systems for speech translation
* Applications for aged and handicapped persons
* Applications for learning and education
* Hearing prostheses
* Other applications
Resources, Standardisation And Evaluation
* Spoken language resources and annotation
* Evaluation and standardisation
Paper Submission
Papers for the Interspeech 2009 proceedings are up to four pages
in length and should conform to the format given in the paper
preparation guidelines and author kits, which are now available
Authors are asked to categorize their submitted papers as being
one of:
N: Completed empirical studies reporting novel research findings
E: Exploratory studies
P: Position papers
Authors will also have to declare that their contribution is
original and not being submitted for publication elsewhere (e.g.
another conference, workshop, or journal).
Papers must be submitted via the on-line paper submission
system. The deadline for submitting a paper is 17th April 2009.
This date will not be extended.
Interspeech2009 Organising Committee
Back to Top

4-3 . (2010-09-26) INTERSPEECH 2010 Chiba Japan

Chiba, Japan
Conference Website
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."

Back to Top

4-4 . (2011-08-27) INTERSPEECH 2011 Florence Italy

Interspeech 2011

Palazzo dei Congressi, Florence, Italy, August 27-31, 2011.

Organizing committee

Piero Cosi (General Chair),

Renato De Mori (General Co-Chair),

Claudia Manfredi (Local Chair),

Roberto Pieraccini (Technical Program Chair),

Maurizio Omologo (Tutorials),

Giuseppe Riccardi (Plenary Sessions).

More information

Back to Top

5 . Workshops and conferences supported (but not organized) by ISCA


Back to Top

5-1 . (2009-12-13) ASRU 2009

IEEE ASRU2009 Automatic Speech Recognition and Understanding Workshop
Merano, Italy, December 13-17, 2009
The eleventh biannual IEEE workshop on Automatic Speech Recognition and Understanding (ASRU) will be held on December 13-17, 2009. The ASRU workshops have a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding.
Workshop topics:
- automatic speech recognition and understanding
- human speech recognition and understanding
- speech to text systems
- spoken dialog systems
- multilingual language processing
- robustness in ASR
- spoken document retrieval
- speech-to-speech translation
- spontaneous speech processing
- speech summarization
- new applications of ASR
The workshop program will consist of invited lectures, oral and poster presentations, and panel discussions. Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the ASRU 2009 website. All papers will be handled and reviewed electronically. The website will provide you with further details. Please note that the submission dates for papers are strict deadlines.
Paper submission deadline July 15, 2009
Paper notification of acceptance September 3, 2009
Demo session proposal deadline September 24, 2009
Early registration deadline October 7, 2009
Workshop December 13-17, 2009
Please note that the number of attendees will be limited and priority will be given to paper presenters. Registration will be handled via the ASRU 2009 website, where more information on the workshop will be available.
General Chairs Giuseppe Riccardi, U. Trento, Italy Renato De Mori, U. Avignon, France
Technical Chairs Jeff Bilmes, U. Washington, USA Pascale Fung, HKUST, Hong Kong China Shri Narayanan, USC, USA Tanja Schultz, U. Karlsruhe, Germany
Panel Chairs Alex Acero, Microsoft, USA Mazin Gilbert, AT&T, USA Demo Chairs Alan Black, CMU, USA Piero Cosi, CNR, Italy
Publicity Chairs Dilek Hakkani-Tur, ICSI, USA Isabel Trancoso, INESC -ID/IST, Portugal
Publication Chair Giuseppe di Fabbrizio, AT&T, USA
Local Chair Maurizio Omologo, FBK-IRST, Italy.
Back to Top

5-2 . (2009-12-14) 6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications MAVEBA 2009

Università degli Studi di Firenze, Italy
Department of Electronics and Telecommunications
6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA 2009)
December 14-16, 2009
Firenze, Italy
Speech is the primary means of communication among humans, and results from a complex interaction between vocal fold vibration at the larynx and voluntary articulator movements (i.e. mouth, tongue, jaw, etc.). However, only recently has research focussed on biomedical applications. Since 1999, the MAVEBA Workshop has been organised every two years, aiming to stimulate contacts between specialists active in clinical, research and industrial developments in the area of voice signal and image analysis for biomedical applications. This sixth Workshop will offer the participants an interdisciplinary platform for presenting and discussing new knowledge in the field of models, analysis and classification of voice signals and images, as far as adult, singing and children's voices are concerned. Modelling the normal and pathological voice source and the analysis of healthy and pathological voices are among the main fields of research. The aim is that of extracting the main voice characteristics, together with their deviation from "healthy conditions", ranging from fundamental research to all kinds of biomedical applications. Related established and advanced topics include:
- linear and non-linear models of voice;
- physical and mechanical models;
- aids for the disabled;
- measurement devices (signal and image); prostheses;
- robust techniques for voice and glottal analysis in the time, frequency, cepstral and wavelet domains;
- neural networks, artificial intelligence and other advanced methods for pathology classification;
- linguistic and clinical phonetics; new-born infant cry analysis;
- neurological dysfunction; multiparametric/multimodal analysis;
- imaging techniques (laryngography, videokymography, fMRI);
- voice enhancement;
- protocols and database design;
- industrial applications in the biomedical field;
- singing voice;
- speech/hearing interactions.
30 May 2009: Submission of extended abstracts (1-2 pages, 1 column) / special session proposals
30 July 2009: Notification of paper acceptance
30 September 2009: Final full paper submission (4 pages, 2 columns, pdf format) and early registration
14-16 December 2009: Workshop
Sponsors:
- Ente Cassa di Risparmio di Firenze (ENTE CRF)
- IEEE Engineering in Medicine and Biology Society
- Biomedical Signal Processing and Control
- International Speech Communication Association (ISCA)
- Associazione Italiana di Ingegneria Medica e
- COST Action 2103 (European Cooperation in Science & Technology Research)
Claudia Manfredi - Conference Chair
Department of Electronics and Telecommunications
Via S. Marta 3, 50139 Firenze, Italy
Phone: +39-055-4796410
Fax: +39-055-494569

Piero Bruscaglioni
Department of Physics
Polo Scientifico Sesto Fiorentino, 50019
Phone: +39-055-4572038
Fax: +39-055-4572356
Back to Top

6 . Books, databases and software


Back to Top

6-1 . Books

This section lists recent books whose titles have been communicated by the authors or editors.
Some advertisements for recent books on speech are also included.
Book presentations are written by the authors, not by the newsletter editor or any volunteer reviewer.

Back to Top

6-1-1 . Computeranimierte Sprechbewegungen in realen Anwendungen

Computeranimierte Sprechbewegungen in realen Anwendungen
Authors: Sascha Fagel and Katja Madany
102 pages
Publisher: Berlin Institute of Technology
Year: 2008
To learn more, please visit the corresponding IEEE Xplore site at
Back to Top

6-1-2 . Usability of Speech Dialog Systems Listening to the Target Audience

Usability of Speech Dialog Systems
Listening to the Target Audience
Series: Signals and Communication Technology
Hempel, Thomas (Ed.)
2008, X, 175 p. 14 illus., Hardcover
ISBN: 978-3-540-78342-8
Back to Top

6-1-3 . Speech and Language Processing, 2nd Edition

Speech and Language Processing, 2nd Edition
By Daniel Jurafsky, James H. Martin
Published May 16, 2008 by Prentice Hall.
More Info
Copyright 2009
Dimensions 7" x 9-1/4"
Pages: 1024
Edition: 2nd.
ISBN-10: 0-13-187321-0
ISBN-13: 978-0-13-187321-6
Request an Instructor or Media review copy
Sample Content
An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology – at all levels and with all modern technologies – this book takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. KEY TOPICS: Builds each chapter around one or more worked examples demonstrating the main idea of the chapter, using the examples to illustrate the relative strengths and weaknesses of various approaches. Adds coverage of statistical sequence labeling, information extraction, question answering and summarization, advanced topics in speech recognition, and speech synthesis. Revises coverage of language modeling, formal grammars, statistical parsing, machine translation, and dialog processing. MARKET: A useful reference for professionals in any of the areas of speech and language processing.
Back to Top

6-1-4 . Advances in Digital Speech Transmission

Advances in Digital Speech Transmission
Editors: Rainer Martin, Ulrich Heute and Christiane Antweiler
Publisher: Wiley&Sons
Year: 2008
Back to Top

6-1-5 . Sprachverarbeitung -- Grundlagen und Methoden der Sprachsynthese und Spracherkennung

Title: Sprachverarbeitung -- Grundlagen und Methoden 
       der Sprachsynthese und Spracherkennung 
Authors: Beat Pfister, Tobias Kaufmann 
Publisher: Springer 
Year: 2008 
Back to Top

6-1-6 . Digital Speech Transmission

Digital Speech Transmission
Authors: Peter Vary and Rainer Martin
Publisher: Wiley&Sons
Year: 2006
Back to Top

6-1-7 . Distant Speech Recognition,

Distant Speech Recognition, Matthias Wölfel and John McDonough (2009), J. Wiley & Sons.
In the very recent past, automatic speech recognition (ASR) systems have attained acceptable performance when used with speech captured with a head-mounted or close-talking microphone (CTM). The performance of conventional ASR systems, however, degrades dramatically as soon as the microphone is moved away from the mouth of the speaker. This degradation is due to a broad variety of effects that are not found in CTM speech, including background noise, overlapping speech from other speakers, and reverberation. While conventional ASR systems underperform for speech captured with far-field sensors, there are a number of techniques developed in other areas of signal processing that can mitigate the deleterious effects of noise and reverberation, as well as separating speech from overlapping speakers. Distant Speech Recognition presents a contemporary and comprehensive description of both theoretic abstraction and practical issues inherent in the distant ASR problem.
Back to Top

6-1-8 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods

Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
Joseph Keshet and Samy Bengio, Editors
John Wiley & Sons
March, 2009
Website:  Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
About the book:
This is the first book dedicated to uniting research related to speech and speaker recognition based on the recent advances in large margin and kernel methods. The first part of the book presents theoretical and practical foundations of large margin and kernel methods, from support vector machines to large margin methods for structured learning. The second part of the book is dedicated to acoustic modeling of continuous speech recognizers, where the grounds for practical large margin sequence learning are set. The third part introduces large margin methods for discriminative language modeling. The last part of the book is dedicated to the applications of keyword spotting, speaker verification and spectral clustering.
Contributors: Yasemin Altun, Francis Bach, Samy Bengio, Dan Chazan, Koby Crammer, Mark Gales, Yves Grandvalet, David Grangier, Michael I. Jordan, Joseph Keshet, Johnny Mariéthoz, Lawrence Saul, Brian Roark, Fei Sha, Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebo. 
Back to Top

6-1-9 . Some aspects of Speech and the Brain.

Some aspects of Speech and the Brain. 
Susanne Fuchs, Hélène Loevenbruck, Daniel Pape, Pascal Perrier
Editions Peter Lang, January 2009
What happens in the brain when humans are producing speech or when they are listening to it? This is the main focus of the book, which includes a collection of 13 articles written by researchers at some of the foremost European laboratories in the fields of linguistics, phonetics, psychology, cognitive sciences and neurosciences.
Back to Top

6-2 . Database providers


Back to Top

6-2-1 . ELRA - Language Resources Catalogue - Update

ELRA - Language Resources Catalogue - Update

ELRA is happy to announce that 1 new Speech Corpus is now available in its catalogue:

ELRA-S0299 Alcohol Language Corpus (BAS ALC)
ALC contains recordings of 88 German speakers who are either intoxicated or sober. The type of speech ranges from read single digits to full conversational style. Recordings were made during drinking tests in which speakers drank beer or wine to reach a self-chosen level of alcoholic intoxication. Recordings were performed in two parked automobiles. In the intoxicated state, 30 items were sampled from each speaker, while in the sober state 60 items were recorded.

For more information, see:

For more information on the catalogue, please contact Valérie Mapelli

Visit our On-line Catalogue:
Visit the Universal Catalogue:
Archives of ELRA Language Resources Catalogue Updates:

Back to Top

6-2-2 . LDC News

Membership Mailbag - Navigating the LDC Intranet

LDC's Membership office responds to a few thousand emailed queries a year, and, over time, we've noticed that some questions tend to crop up with regularity.  To address the questions that you, our data users, have asked, we'd like to continue our Membership Mailbag series of newsletter articles.  This month we will focus on a few features of the LDC Intranet including establishing an account and using that account to access information about your organization's history with LDC.  Next month, we'll take a look into using your account to access password-protected corpora and resources.

LDC's Intranet contains the following links:

Customer Profile
LDC Online
Corpora Available for Download

The User and Customer Profile sections: anyone can sign up for a 'guest account' on the LDC Intranet either through the Member Resources page or through the LDC Online page on the LDC website. When signing up for an account, you'll be asked to select your organization affiliation from a list of over 2700 organizations that have licensed data from LDC. If your organization doesn't appear on the list, you can register under the organization 'Guest'. Once your account is established under an organization name, the organization administrator for that account receives an automated email requesting that the administrator verify your organization affiliation and change your account permission from 'guest' to 'org_user'.

As an 'org_user', you can access more information about your organization and, generally, more data.  If you are receiving this email, then you already have an account to the LDC Intranet.  Don't recall signing up for an account?  If you have licensed data from LDC in the past, then an account was automatically created for you.  Account holders should use the User link to update their contact and log-in information.  

After your account is established and verified, you can next view information about your organization through the Customer Profile link.  The Customer Profile shows the 'Primary Contact' at each organization - for organizations which are LDC members, that contact is for membership and data inquiries; for non-member organizations the 'Primary Contact' is the first person to have licensed data under that organization name.  If your organization is an LDC member, the Customer Profile will list which years your organization has held membership under the 'Membership Year(s)' section.  The Customer Profile also shows which data your organization has licensed under the 'Catalog Information' section.  For member organizations, the profile will list all corpora which are included in their Membership Year(s).  If a corpus has been requested, then the profile will indicate who requested the corpus and when.

In the next newsletter, we'll look at using an LDC Intranet account to access password-protected corpora and resources, as we review the LDC Online and Corpora Available for Download sections.

Got a question?  About LDC data?  Forward it to  The answer may appear in a future Membership Mailbag article.

New Publications

(1) An English Dictionary of the Tamil Verb, Second Edition represents over twenty-five years of work led by Harold F. Schiffman, Professor, emeritus, of Dravidian Linguistics and Culture at the University of Pennsylvania's Department of South Asia Studies. It contains translations for 6597 English verbs and defines 9716 Tamil verbs. This release presents the dictionary in two formats: Adobe PDF and XML. The PDF format displays the dictionary in a human readable form. The XML version is a purely electronic form and is intended mainly for application development and the creation of searchable electronic databases.

In the electronic XML version each entry contains the following: the English entry or head word; the Tamil equivalent (in Tamil script and transliteration); the verb class and transitivity specification; the spoken Tamil pronunciation (audio files in mp3 format); the English definition(s); additional Tamil entries (if applicable); example sentences or phrases in Literary Tamil, Spoken Tamil (with a corresponding audio file in .mp3 format) and an English translation; and Tamil synonyms or near-synonyms, where appropriate. It is expected that the dictionary will be useful for Tamil learners, scholars and others interested in the Tamil language.

What's New in the Second Edition?

  • Errors in the Tamil text and the roman transliteration have been corrected.
  • Audio files have been updated and corrected and missing files have been added.
  • A brand new search and browse application that can access the audio has been included in this edition. This application can be accessed from the tools directory.
  • The XML structure has been modified to normalize the presentation of synonyms.

An English Dictionary of the Tamil Verb seeks to meet needs not currently addressed by existing English-Tamil dictionaries. The main goal of this dictionary is to get an English-knowing user to a Tamil verb, irrespective of whether he or she begins with an English verb or some other item, such as an adjective; this is because what may be a verb in Tamil may in fact not be a verb in English, and vice versa.  The main goal is to specifically concentrate on supplying the kinds of information lacking in all previous attempts to capture the equivalencies between English and Tamil. 

An English Dictionary of the Tamil Verb, Second Edition is distributed on one DVD.

2009 Subscription Members will automatically receive two copies of this corpus. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$300.


2) Japanese Web N-gram Version 1 was created by Google Inc. It consists of Japanese "word" n-grams and their observed frequency counts generated from over 255 billion tokens of text. The length of the n-grams ranges from unigrams to seven-grams.

The n-grams were extracted from publicly accessible web pages that were crawled by Google in July 2007. This data set contains only n-grams that appear at least 20 times in the processed sentences. Less frequent n-grams were simply discarded. Those web pages requiring user authentication, pages containing "noarchive" or "noindex" meta tags, and pages under other special restrictions were excluded from the final release. While the aim was to process only Japanese pages, the corpus may contain some pages in other languages due to language detection errors. This dataset will be useful for research in areas such as statistical machine translation, language modeling and speech recognition, among others.

Before the n-grams were collected, the web pages were converted into UTF-8 encoding, normalized into Unicode Normalization Form KC, and split into sentences. Ill-formed sentences were filtered out, and the remaining sentences were segmented into "words".  The vocabulary was restricted to "words" that appeared at least 50 times in the processed sentences. Less frequent words were replaced with the "<UNK>" special token.

Japanese Web N-gram Version 1 is distributed on six DVD-ROM.

2009 Subscription Members will automatically receive two copies of this corpus, provided that they have submitted a signed copy of the User License Agreement for  Japanese Web N-gram Version 1.  2009 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$150.

Back to Top

7 . Jobs openings

We invite all laboratories and industrial companies which have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (also have a look at as well as Jobs). 

The ads will be automatically removed from ISCApad after  6 months. Informing ISCApad editor when the positions are filled will avoid irrelevant mails between applicants and proposers.

Back to Top

7-1 . (2008-10-30) Programmer Analyst Position at LDC

                                                          Programmer Analyst Position at LDC
The Linguistic Data Consortium (LDC) at the University of Pennsylvania, Philadelphia, PA has an immediate opening for a full-time programmer analyst.
Programmer Analyst – Publications Programmer (#081025790)
Duties: Position will have primary responsibility for developing, implementing and managing data processing systems required to coordinate and prepare publications of language resources used for human language technology research and technology development.  Such resources include video, computer-readable speech, software and text data that are distributed via media and internet.  Position will  communicate with external data providers and internal project managers to acquire raw source material and to schedule releases; perform quality assessment of large data collections and render analyses/descriptions of their formats; create or adapt software tools to condition data to a uniform format and level of quality (e.g., eliminating corrupted data, normalizing data, etc.); validate quality control standards to published data and verify results; document initial and final data formats; review author documentation and supporting materials; create additional documentation as needed; and master and replicate publications. Position will also maintain the publications catalog system, the publications inventory, the archive of publishable and published data and the publication equipment, software and licenses.  Position requires attention to detail and is responsible for managing multiple short-term projects.
For further information on the duties and qualifications for this position, or to apply online please visit; search postings for the reference number indicated above.
Penn offers an excellent benefits package including medical/dental, retirement plans, tuition assistance and a minimum of 3 weeks paid vacation per year. The University of Pennsylvania is an affirmative action/equal opportunity employer.
Position contingent upon funding. For more information about LDC and the programs we support, visit
Back to Top

7-2 . (2008-11-17) Volunteering at ISCA Student advisory committee

 Announcement #1: ISCA-SAC Call for Volunteers
The ISCA Student Advisory Committee (ISCA-SAC) is seeking student volunteers to help with several interesting projects such as transcribe interviews from the Saras Institute, plan/organize student events at ISCA-sponsored conferences/workshops, increase awareness of speech and language research to undergraduate and junior graduate students, assist with website redesign to facilitate interaction with Google Scholar, as well as collect resources (e.g., conferences, theses, job listings, speech labs, etc.) for the website, to name a few.
There are many small tasks that can be done, each of which would only take up a few hours. Unless it is of your interest to become a long term volunteer, no further commitment is required. If interested, please contact the ISCA-SAC Volunteer Coordinator at: vo lun te er [at] isca-students [dot] org.
Announcement #2: ISCA-SAC Logo Contest
The ISCA Student Advisory Committee is in the search for a new logo. This is your chance to release your artistic side and enter the ISCA-SAC Logo Competition. All students are invited to participate and a prize (still to be determined) will be awarded to the winner; not to mention the importance of having your logo posted on the website for the world to see.
The deadline for submissions is March 31st, 2009. The new Logo will be unveiled during the Interspeech 2009 conference in the form of merchandise embedded with the new logo (e.g., mugs, pens, etc.).

If interested, please send your submissions to: logocontest [at]  

Back to Top

7-3 . (2008-11-20) 12 PhD Positions and 2 Post Doc Positions available in SCALE (EU Marie Curie)

12 PhD Positions and 2 Post Doc Positions available in


the Marie Curie International Training Network on


Speech Communication with Adaptive LEarning (SCALE)


SCALE is a cooperative project between


·        IDIAP Research Institute in Martigny, Switzerland (Prof Herve Bourlard)

·        Radboud University Nijmegen, The Netherlands (Prof Lou Boves, Dr Louis ten Bosch, Dr-ir Bert Cranen, Dr O. Scharenborg)

·        RWTH Aachen, Germany (Prof Hermann Ney, Dr Ralf Schlüter)

·        Saarland University, Germany (Prof Dietrich Klakow, Dr John McDonough)

·        University of Edinburgh, UK (Prof Steve Renals, Dr Simon King, Dr Korin Richmond, Dr Joe Frankel)

·        University of Sheffield, UK (Prof Roger Moore, Prof Phil Green, Dr Thomas Hain, Dr Guido Sanguinetti) .


Companies like Motorola or Philips Speech Recognition Systems/Nuance are associated partners of the program.


Each PhD position is funded for three years and degrees can be obtained from the participating academic institutions. 


Distinguishing features of the cooperation include:


·        Joint supervision of dissertations by lecturers from two partner institutions

·        While staying with one institution for most of the time, the program includes a stay at a second partner institution either from academic or industry for three to nine month 

·        An intensive research exchange program between all participating institutions


PhD and Post Doc projects will be in the area of


·        Automatic Speech Recognition

·        Machine learning

·        Speech Synthesis

·        Signal Processing

·        Human speech recognition


The salary of a PhD position is roughly 33.800€ per year. There are additional mobility (up to 800€/month) and travel allowances (yearly allowance). Applicants should hold a strong university degree which would entitle them to embark on a doctorate (Masters/diploma or equivalent) in a relevant discipline, and should be in the first four years of their research careers. As the project is funded by a EU mobility scheme, there are also certain mobility requirements.


Each Post Doc position is funded for two years. The salary is approximately 52000€ per year. Applicants must have a doctoral degree at the time of recruitment or equivalent research experience. The research experience may not exceed 5 years at the time of appointment.


Women are particularly encouraged to apply.


Deadlines for applications:


January 1, 2009

April 1, 2009

July 1, 2009

September 1, 2009.


After each deadline all submitted applications will be reviewed and positions awarded until all positions are filled.


Applications should be submitted at .


To be fully considered, please include:


- a curriculum vitae indicating degrees obtained, disciplines covered

(e.g. list of courses ), publications, and other relevant experience


- a sample of written work (e.g. research paper, or thesis,

preferably in English)


- copies of high school and university certificates, and transcripts


- two references (e-mailed directly to the SCALE office

(Diana.Schreyer@LSV.Uni-Saarland.De) before the deadline)


- a statement of research interests, previous knowledge and activities

in any of the relevant research areas.


In case an application can only be submitted by regular post, it should

be sent to:


SCALE office

Diana Schreyer

Spoken Language Systems, FR 7.4

C 71 Office 0.02

Saarland University

P.O. Box 15 11 50

D-66041 Saarbruecken



If you have any questions, please contact Prof. Dr. Dietrich Klakow



Back to Top

7-4 . (2009-01-08) Assistant Professor Toyota Technological Institute at Chicago

Assistant Professor Toyota Technological Institute at Chicago  ########################################################  Toyota Technological Institute at Chicago ( is a philanthropically endowed academic computer science institute, dedicated to basic research and graduate education in computer science.  TTI-C opened for operation in 2003 and by 2010 plans to have 12 tenured and tenure track faculty and 18 research (3-year) faculty. Regular faculty will have a teaching load of at most one course per year and research faculty will have no teaching responsibilities.  Applications are welcome in all areas of computer science, but TTI-C is currently focusing on a number of areas including speech and language processing.  For all positions we require a Ph.D. degree or Ph.D. candidacy, with the degree conferred prior to date of hire.  Applications received after December 31 may not get full consideration.  Applications can be submitted online at
Back to Top

7-5 . (2009-01-09) Poste d'ingénieur CDD : environnement intelligent

Poste d'ingénieur CDD : environnement intelligent

Ingenieur - CDD

DeadLine: 15/02/2008


Un poste d'ingénieur CDD de 18 mois est ouvert sur le campus de Metz de Supélec. Le candidat s’intégrera au sein de l’équipe « Information, Multimodalité & Signal » ( Cette équipe composée de 15 personnes est active dans les domaines du traitement numérique du signal et de l’information (traitement statistique du signal, apprentissage numérique, méthodes d’inspiration biologique), de la représentation des connaissances (fouille de données, analyse et apprentissage symbolique) et du calcul intensif et distribué. Le poste vise un profil permettant l’implémentation matérielle intégrée des méthodes développées au sein de l’équipe dans des applications liées aux environnements intelligents ainsi que leur maintenance. Le campus de Metz s’est en effet doté d’une plateforme en vraie grandeur reproduisant une pièce intelligente intégrant caméras, microphones, capteurs infrarouges, interfaces homme-machine (interface vocale, interface cerveau-machine), robo!

 ts et moyens de diffusion d’information. Il s’agira de réaliser une plateforme intégrée permettant de déployer des démonstrations rapidement dans cet environnement et de les maintenir.



Profil recherché :

– diplôme d’ingénieur en informatique, ou équivalent universitaire

– expérience de travail dans le cadre d’équipes multidisciplinaires,

– une bonne pratique de l’anglais est un plus.


Plus d'informations sont disponibles sur le site de l'équipe (


Faire acte de candidature (CV+lettre) auprès de O. Pietquin :

Back to Top

7-6 . (2009-01-13) 2009 PhD Research Fellowships at the University of Trento (Italy)

2009 PhD Research Fellowships 



The Adaptive Multimodal Information and  Interface  Research Lab

( at University of Trento (Italy) has several

PhD Research fellowships in the following areas:


                Statistical Machine Translation                

                Natural Language Processing    

                Automatic Speech Recognition

                Machine Learning

                Spoken/Multimodal Conversational Systems


We are looking for students with _excellent_ academic records

and relevant technical background. Students with EE, CS Master degrees

( or equivalent ) are welcome as well other related disciplines will

be considered. Prospective students are encouraged to look at the lab

website to search for current and past research projects.


PhD research fellowships benefits are described in the graduate school

website (

The  applicants should be fluent in _English_. The Italian language

competence is optional and applicants are encouraged to acquire

this skill during training. All applicants should have very good

programming skills. University of Trento is an equal opportunity employer.


The selection of candidates will be open until positions are filled.

Interested applicants should send their CV along with their

statement of research interest, transcript records and three reference

letters to :



                Prof. Dr.-Ing. Giuseppe Riccardi





About University of Trento and Information Engineering and Computer

 Science Department


The University of Trento is constantly ranked as

 premiere Italian graduate university institution (see


Please visit the DISI Doctorate school website at


DISI Department

DISI has a strong focus on Interdisciplinarity with professors from

different faculties of the University (Physical Science, Electrical

Engineering, Economics, Social Science, Cognitive Science, Computer Science)

 with international background.

DISI aims at exploiting the complementary experiences present in the

various research areas in order to develop innovative methods and

technologies, applications and advanced services.

English is the official language.




Prof. Ing. Giuseppe Riccardi

Marie Curie Excellence Leader

Department of Information Engineering and Computer Science

University of Trento

Room D11, via Sommarive 14

38050 Povo di Trento, Italy

tel  : +39-0461 882087



Back to Top

7-7 . (2009-02-06) Position at ELDA

The Evaluation and Language Distribution Agency (ELDA) is offering a 6-month to 1-year internship in Human Language Technology for the Arabic language, with a special focus on Machine Translation (MT) and Multilingual Information Retrieval (MLIR). The internship is organised in the framework of the European project MEDAR (MEDiterranean ARabic language and speech technology). She or he will work in ELDA offices in Paris and the main work will consist of the development and adaptation of MT and MLIR open source software for Arabic.

The applicant should have a high-quality degree in Computer Science. Good programming skills in C, C++, Perl and Eclipse are required.
The applicant should have a good knowledge of Linux and open source software.

Interest in Speech/Text Processing, Machine Learning, Computational Linguistics, or Cognitive Science is a plus.
Proficiency in written English is required.

Starting date:
February 2009.

Applications in the first instance should be made by email to
Djamel Mostefa, Head of Production and Evaluation department, ELDA,  email: mostefa _AT_

Please include a cover letter and  your CV 

Back to Top

7-8 . (2009-01-18) Ph D position at Universitaet Karlsruhe




At the Institut für Theoretische Informatik, Lehrstuhl Prof. Waibel Universität Karlsruhe (TH) a



Ph.D. position

in the field of

Multimodal Dialog Systems


is to be filled immediately with a salary according to TV-L, E13.


The responsibilities include basic research in the area of multimodal dialog systems, especially multimodal human-robot interaction and learning robots, within application targeted research projects in the area of multimodal Human-machine interaction.  Set in a framework of internationally and industry funded research programs, the successful candidate(s) are expected to contribute to the state-of-the art of modern spoken dialog systems, improving natural interaction with robots.


We are an internationally renowned research group with an excellent infrastructure. Current research projects for improving Human-Machine and Human-to-Human interaction are focus on dialog management for Human-Robot interaction.


Within the framework of the International Center for Advanced Communication Technology (interACT), our institute operates in two locations, Universität Karlsruhe (TH), Germany and at Carnegie Mellon University, Pittsburgh, USA.  International joint and collaborative research at and between our centers is common and encouraged, and offers great international exposure and activity. 


Applicants are expected to have:

  • an excellent university degree (M.S, Diploma or Ph.D.) in Computer Science, Computational Linguistics, or related fields
  • excellent programming skills 
  • advanced knowledge in at least one of the fields of Speech and Language Processing, Pattern Recognition, or Machine Learning


For candidates with Bachelor or Master’s degrees, the position offers the opportunity to work toward a Ph.D. degree.


In line with the university's policy of equal opportunities, applications from qualified women are particularly encouraged. Handicapped applicants will be preferred in case of the same qualification.


Questions may be directed to: Hartwig Holzapfel, Tel. 0721 608 4057, E-Mail:,


The application should be sent to Professor Waibel, Institut für Theoretische Informatik, Universität Karlsruhe (TH), Adenauerring 4, 76131 Karlsruhe, Germany

Back to Top

7-9 . (2009-01-16) Two post-docs at the University of Rennes (France)

Two post-doc positions on sparse representations at IRISA, Rennes,  Post-Doc DeadLine: 28/02/2009  Two postdoc positions are opened in the METISS team at INRIA, Rennes, France, in the area of data analysis / signal processing for large-scale data.   INRIA, the French National Institute for Research in Computer Science and Control plays a leading role in the development of Information and Communication Science and Technology (ICST) in France.  The METISS project team gathers more than 15 researchers and engineers for research in audio signal and speech modelling and processing.  The positions are opened in the context of the European project SMALL (Sparse Models, Algorithms and Learning for Large-scale data), within the FET-Open program of FP7, and of the ECHANGE project (ECHantillonnage Acoustique Nouvelle GEnération), funded by the french ANR.  The objective of the SMALL project is to build a theoretical framework with solid foundations, as well as efficient algorithms, to discover and exploit structure in large-scale multimodal or multichannel data, using sparse signal representations. The SMALL consortium is made of 5 academic partners located in four countries (France, United Kingdom, Switzerland, and Israel). INRIA is the scientific coordinator of the SMALL project.   INRIA is also the coordinator of the ECHANGE project, which gathers three academic partners (Institut Jean Le Rond d'Alembert & Institut Jacques Louis Lions from Université Paris 6, and INRIA).  The objective of ECHANGE is to design a theoretical and experimental framework based on sparse representations and compressed sensing to measure and process large complex acoustic fields through a limited number of acoustic sensors.  
DESCRIPTION The postdocs will work on theoretical, algorithmic and practical aspects of sparse representations of large-dimensional data, with a particular emphasis on acoustic fields, for various applications such as compressed sensing, source separation and localization, and signal classification.   REQUESTED PROFILE: Candidates should hold a Ph.D in Signal/Image Processing, Machine Learning, or Applied Mathematics.  Previous experience in sparse representations (time-frequency and time-scale transforms, pursuit algorithms, support vector machines and related approaches) is desirable, as well as a strong taste for the mathematical aspects of signal processing.     ADDITIONAL INFORMATION  For additional technical information, please contact :    DURATION OF THE CONTRACT The positions, funded for at least 2 years (up to three years), will be renewed on a yearly basis depending on scientific progress and achievement. The gross minimum salary will be 28287 € annually (~ 1923 € net per month) and will be adjusted according to experience. The usual funding support of any French institution (medical insurance, etc.) will be provided.   TENTATIVE RECRUITING DATE  01.03.2009  as soon as possible  PLACE OF EMPLOYMENT  INRIA Rennes – Bretagne Atlantique  (France) - Websites: :   SCIENTIFIC COORDINATOR  Rémi GRIBONVAL - SMALL/ECHANGE project leader -  METISS Project-Team - INRIA-Bretagne Atlantique -  Email: -  phone: +33 2 99 84 25 06   APPLICATIONS TO BE SENT TO  Please send application files (a motivation letter, a full resume, a statement of research interests, a list of  publications, and up to five reference letters) to  Stéphanie Lemaile, SMALL/ECHANGE administrative assistant.  Email:  Deadline: end of february 2009.
Back to Top

7-10 . (2009-01-14)AT&T Labs-Research Research staff

AT&T Labs - Research : Research Staff

AT&T Labs - Research is seeking exceptional candidates for
Research Staff positions. AT&T is the premiere broadband, IP,
entertainment, and wireless communications company in the U.S.
and one of the largest in the world. Our researchers are
dedicated to solving real problems in speech and language
processing, and are involved in inventing, creating and
deploying innovative services. We also explore fundamental
research problems in these areas. Outstanding Ph.D.-level
candidates at all levels of experience are encouraged to apply.
Candidates must demonstrate excellence in research, a
collaborative spirit and strong communication and software
skills. Areas of particular interest are

    * Large-vocabulary automatic speech recognition
    * Acoustic and language modeling
    * Robust speech recognition
    * Signal processing
    * Adaptive learning
    * Pronunciation modeling
    * Natural language understanding
    * Voice and multimodal search

AT&T Companies are Equal Opportunity Employers. All qualified
candidates will receive full and fair consideration for
employment. Application instructions are available on our
website at Click on "Join us". 

Back to Top

7-11 . (2009-01-13) Ph D Research fellowships at University of Trento (Italy)

2009 PhD Research Fellowships 

The Adaptive Multimodal Information and  Interface  Research Lab

( at University of Trento (Italy) has several

PhD Research fellowships in the following areas:


                Statistical Machine Translation                

                Natural Language Processing    

                Automatic Speech Recognition

                Machine Learning

                Spoken/Multimodal Conversational Systems


We are looking for students with _excellent_ academic records

and relevant technical background. Students with EE, CS Master degrees

( or equivalent ) are welcome as well other related disciplines will

be considered. Prospective students are encouraged to look at the lab

website to search for current and past research projects.


PhD research fellowships benefits are described in the graduate school

website (

The  applicants should be fluent in _English_. The Italian language

competence is optional and applicants are encouraged to acquire

this skill during training. All applicants should have very good

programming skills. University of Trento is an equal opportunity employer.


The selection of candidates will be open until positions are filled.

Interested applicants should send their CV along with their

statement of research interest, transcript records and three reference

letters to :



                Prof. Dr.-Ing. Giuseppe Riccardi





About University of Trento and Information Engineering and Computer

 Science Department


The University of Trento is constantly ranked as

 premiere Italian graduate university institution (see


Please visit the DISI Doctorate school website at


DISI Department

DISI has a strong focus on Interdisciplinarity with professors from

different faculties of the University (Physical Science, Electrical

Engineering, Economics, Social Science, Cognitive Science, Computer Science)

 with international background.

DISI aims at exploiting the complementary experiences present in the

various research areas in order to develop innovative methods and

technologies, applications and advanced services.

English is the official language.




Prof. Ing. Giuseppe Riccardi

Marie Curie Excellence Leader

Department of Information Engineering and Computer Science

University of Trento

Room D11, via Sommarive 14

38050 Povo di Trento, Italy

tel  : +39-0461 882087



Back to Top

7-12 . (2009-02-15) Research Grants for PhD Students and Postdoc Researchers-Bielefeld University

The Graduate School Cognitive Interaction Technology at Bielefeld University,
Germany offers
Research Grants for PhD Students and Postdoc Researchers
The Center of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University
has been established in the framework of the Excellence Initiative as a research center for
intelligent systems and cognitive interaction between humans and technical systems.
CITEC's focus is directed towards motion intelligence, attentive systems, situated
communication, and memory and learning. Research and development are directed towards
understanding the processes and functional constituents of cognitive interaction, and
establishing cognitive interfaces that facilitate the use of complex technical systems.
The Graduate School Cognitive Interaction Technology invites applications from outstanding
young scientists, in the fields of robotics, computer science, biology, physics, sports
sciences, linguistics or psychology, that are willing to contribute to the cross-disciplinary
research agenda of CITEC. The international profile of CITEC fosters the exchange of
researchers and students with related scientific institutions. For PhD students, a structured
program including taught courses and time for individual research is offered. The integration
and active participation in interdisciplinary research projects, which includes access to first
class lab facilities, is facilitated by CITEC. For more information, please see: .
Successful candidates must hold an excellent academic degree (MSc/Diploma/PhD) in a
related discipline, have a strong interest in research, and be proficient in both written and
spoken English. Research grants will be given for the duration of three years for PhD
students, and one to three years for Postdocs.
All applications should include: a cover letter indicating the motivation and research interests
of the candidate, a CV including a list of publications, and relevant certificates of academic
qualification. PhD applicants are asked to provide the outline of a PhD project (2-3 pages)
and a short abstract. Postdoc researchers are asked to provide the outline of a research
project (4-5 pages) relevant to CITEC's research objectives and a short abstract. It is
obligatory for Postdoc applicants, and strongly recommended for PhD applicants, to provide
two letters of recommendation. In the absence of letters of recommendation, PhD candidates
should provide the names and contact details of two referees. All documentation should be
submitted in electronic form.
We strongly encourage candidates to contact our researchers, in advance of application, in
order to develop project ideas. For a list of CITEC researchers please visit: .
Bielefeld University is an equal opportunity employer. Women are especially encouraged to
apply and in the case of comparable competences and qualification, will be given preference.
Bielefeld University explicitly encourages disabled people to apply.
Applications will be considered until all positions have been filled. For guaranteed
consideration, please submit your documents no later than March 22, 2009. Please address
your application to Prof. Thomas Schack, Head of Graduate School, Email: gradschool@citec. . Please direct any queries relating to your application to Claudia Muhl,
Graduate School Manager, phone: +49-(0)521-106-6566.
Back to Top

7-13 . (2009-03-09) 9 PhD positions in the Marie Curie International Training Network

Up to 9 PhD Positions available in


 the Marie Curie International Training Network on


Speech Communication with Adaptive LEarning (SCALE)





SCALE is a cooperative project between


·        IDIAP Research Institute in Martigny, Switzerland (Prof Herve Bourlard)

·        Radboud University Nijmegen, The Netherlands (Prof Lou Boves, Dr Louis ten Bosch, Dr-ir Bert Cranen, Dr O. Scharenborg)

·        RWTH Aachen, Germany (Prof Hermann Ney, Dr Ralf Schlüter)

·        Saarland University, Germany (Prof Dietrich Klakow, Dr John McDonough)

·        University of Edinburgh, UK (Prof Steve Renals, Dr Simon King, Dr Korin Richmond, Dr Joe Frankel)

·        University of Sheffield, UK (Prof Roger Moore, Prof Phil Green, Dr Thomas Hain, Dr Guido Sanguinetti)


Companies like Toshiba or Philips Speech Recognition Systems/Nuance are associated partners of the program.


Each PhD position is funded for three years and degrees can be obtained from the participating academic institutions. 


Distinguishing features of the cooperation include:


·        Joint supervision of dissertations by lecturers from two partner institutions

·        While students stay with one institution for most of the time, the program includes a three- to nine-month stay at a second partner institution, either academic or industrial

·        An intensive research exchange program between all participating institutions


PhD projects will be in the area of


·        Automatic Speech Recognition

·        Machine learning

·        Speech Synthesis

·        Signal Processing

·        Human speech recognition


The salary of a PhD position is roughly 33,800 Euro per year. There are additional mobility allowances (up to 800 Euro/month) and a yearly travel allowance. Applicants should hold a strong university degree which would entitle them to embark on a doctorate (Masters/diploma or equivalent) in a relevant discipline, and should be in the first four years of their research careers. As the project is funded by an EU mobility scheme, there are also certain mobility requirements.


Women are particularly encouraged to apply.


Deadlines for applications:


April 1, 2009

July 1, 2009

September 1, 2009.


After each deadline all submitted applications will be reviewed and positions awarded until all positions are filled.


Applications should be submitted at .


To be fully considered, please include:


- a curriculum vitae indicating degrees obtained, disciplines covered

(e.g. list of courses), publications, and other relevant experience


- a sample of written work (e.g. research paper, or thesis,

preferably in English)


- copies of high school and university certificates, and transcripts


- two references (e-mailed directly to the SCALE office

(Diana.Schreyer@LSV.Uni-Saarland.De) before the deadline)


- a statement of research interests, previous knowledge and activities

in any of the relevant research areas.


In case an application can only be submitted by regular post, it should

be sent to:


SCALE office

Diana Schreyer

Spoken Language Systems, FR 7.4

C 71 Office 0.02

Saarland University

P.O. Box 15 11 50

D-66041 Saarbruecken



If you have any questions, please contact Prof. Dr. Dietrich Klakow



For more information see also


Back to Top

7-14 . (2009-03-10) Maître de conférences position at Université Paris Descartes

A maître de conférences (lecturer) position in computer science (CNU section 27), reference 27MCF0031, is open at Université Paris Descartes.
The aim of this recruitment is to strengthen the research theme of speech processing for the detection and remediation of voice disorders. Candidates are expected to have solid experience in automatic speech processing (recognition, synthesis, etc.).
Teaching covers all degree programs of the Faculty of Mathematics and Computer Science: the MIA Bachelor's degree, the Master's in Mathematics and Computer Science, and the MIAGE Master's.


Contact: Marie-José Carat
Professor of Computer Science
CRIP5 - Diadex (Dialogue and indexing)
Université Paris Descartes
45, rue des Saints Pères - 75270 Paris cedex 06
Tel.: (33/0) 1 42 86 38 48


Back to Top

7-15 . (2009-03-14) Professor position, Institut de linguistique et de phonétique, Sorbonne, Paris


Field 07 - Language sciences: general linguistics and phonetics; Computer Science and Natural Language Processing
Location: PARIS 75005
Address for applications: Bureau du personnel enseignant, PR - 7eme - 0743, 75005 - PARIS
Administrative contact: tel. 01 40 46 28 96 / 01 40 46 28 92, fax 01 43 25 74 71
Starting date: 01/09/2009

Teaching profile:
Department: UFR de Linguistique et Phonétique Générales et Appliquées, 19, rue des Bernardins, 75005 - PARIS
Head of department: Mme Martine VERTALIER, tel. 01 44 32 05 79
Teaching will range from the first year of the Bachelor's program in Language Sciences up to the PhD in Language Sciences, NLP specialization. Training in Natural Language Processing can of course also find applications in the Master's in Language Sciences, "Langage, Langues, Modèles" specialization, and in PhDs in Language Sciences in other specializations.
The position will involve supervising courses combining Language Sciences and Natural Language Processing, oriented both toward further study at Master's and PhD level and toward professionalization, preparing students for careers in the language industries.

Research profile:
Development and supervision of research in NLP: research on large spoken and/or written corpora in various languages, possibly including data mining and grammar induction, but also, in synergy within the established research teams, contributing theoretical and technological resources to groups working on other research areas. The Professor will conduct research within Doctoral School 268 of Paris 3, primarily in the founding team of the training and research programs described above, SYLED, in particular its CLA2t component (Centre de Lexicométrie et d'Analyse Automatique des Textes), or in a team whose faculty members contribute to teaching and research at the ILPAG (Laboratoire de Phonétique et Phonologie); UMR 7107 Laboratoire des Langues et Civilisations à Tradition Orale (LACITO).
Places of work and laboratory heads:
1- EA 2290 SYLED, 19, rue des Bernardins, 75005 - PARIS: M. André SALEM, 01 44 32 05 84
2- UMR 7018 Laboratoire de phonétique et phonologie, 19, rue des Bernardins: Mme Jacqueline VAISSIERE and Mme Annie RIALLAND, 01 43 26 57 17
3- EA 1483 Recherche sur le Français Contemporain, 19, rue des Bernardins: Mme Anne SALAZAR-ORVIG, 01 44 32 05 07
4- UMR 7107 LACITO CNRS, 7, rue G. Môquet, 94800 - VILLEJUIF: Mme Zlatka GUENTCHEVA, 01 49 58 37 78


Back to Top

7-16 . (2009-03-15) Maître de conférences position at Université Paris X, Nanterre

MCF position 221: Linguistics: pathology of language acquisition
Université Paris X, Nanterre, Department of Language Sciences
Contact: Anne Lacheret

Preference will be given to candidates with a dual profile:
linguistics and speech therapy or a related discipline.

Back to Top

7-17 . (2009-03-18) Software development engineer: semantics, NLP, machine translation

Software Development Engineer (m/f)

Driven by continuous growth in its activities, supported by permanent investment in R&D, our CLIENT, a European leader in information processing, is recruiting a Development Engineer (m/f) specialized in semantics, natural language processing, machine translation and cross-lingual information retrieval tools, and systems for managing multilingual linguistic resources (dictionaries, lexicons, translation memories, aligned corpora).

Passionate about applying the most advanced technologies to industrial information processing, you will design, develop and industrialize the document-processing pipelines used by the production lines on behalf of the company's clients.

With a higher degree in computer science (five years of higher education or equivalent), autonomous and creative, you will join a dynamic, human-scale organization where innovation permanently serves production and the client.

You ideally have two to three years' experience in object-oriented programming and software development processes. Proficiency in C++ and/or Java is essential.
Fluency in English is required in order to work within an international group.
Your analytical skills and your sense of service and commitment to the client will enable you to meet the challenge we offer.

Back to Top

7-18 . (2009-04-02)The Johns Hopkins University: Post-docs, research staff, professors on sabbaticals

The Johns Hopkins University
The Human Language Technology Center of Excellence
Post-docs, research staff, professors on sabbaticals
The Human Language Technology Center of Excellence (COE) at the Johns Hopkins University is seeking to hire outstanding Ph.D. researchers in the field of speech and natural language processing. The COE seeks the most talented candidates for both junior and senior level positions including, but not limited to, full-time research staff, professors on sabbaticals, visiting scientists and post-docs. Candidates will be expected to work in a team setting with other researchers and graduate students at the Johns Hopkins University, the University of Maryland College Park and other affiliated institutions.
Candidates should have a strong background in speech processing:
Robust speech recognition across language and channel and formal vs. informal genres, speaker identification, language identification, speech retrieval, spoken term detection, etc.
The COE was founded in January 2007 and has a long-term research contract as an independent center within Johns Hopkins University. Located next to Johns Hopkins’ Homewood Campus in Baltimore, Maryland, the COE’s distinguished contract partners include the University of Maryland College Park, the Johns Hopkins University Applied Physics Lab, and BBN Technologies of Cambridge, Massachusetts. World-class researchers at the COE focus on fundamental challenge problems critical to finding solutions for real-world problems of importance to our government sponsor. The COE offers substantial computing capability for research that requires heavy computation and massive storage. In the summer of 2009, the COE will hold its first annual Summer Camp for Advanced Language Exploration (SCALE), inviting the best and brightest researchers to work on common areas in speech and NLP. Researchers are expected to publish in peer-reviewed venues. For more information about the COE, visit
Applicants should have earned a Ph.D. in Computer Science (CS), Electrical and Computer Engineering (ECE), or a closely related field. Applicants should submit a curriculum vitae, research statement, names and addresses of at least four references, and an optional teaching statement. Please send applications and inquiries about the position to
Back to Top

7-19 . (2009-04-07) PhD Position at The University of Auckland - New Zealand

PhD Position at The University of Auckland, New Zealand: speech recognition for healthcare robotics. Description: This project is the speech recognition component of a larger project for a speech-enabled command module with verbal feedback software to facilitate interaction between elderly people and robots, including speech generation and empathetic speech expression by the robot, and speech recognition by the robot. For more details please refer to the link:
Back to Top


7-20 . Research and development position in speech recognition, processing and synthesis at IRCAM

The position is available immediately in the Speech group of the Analysis/Synthesis team at Ircam.
The Analysis/Synthesis team undertakes research and development
centered on new and advanced algorithms for analysis, synthesis and
transformation of audio signals, and, in particular, speech.

A full-time position is open for research and development of advanced statistical
and signal processing algorithms in the fields of speech recognition,
transformation and synthesis (within the Rhapsodie, Respoken,
Affective Avatars and Vivos projects, among others).
The applications in view are, for example,
- Transformation of the identity, type and nature of a voice
- Text-to-Speech and expressive Speech Synthesis
- Synthesis from actor and character recordings.
The principal task is the design and the development of new algorithms
for some of the subjects above and in collaboration with the other
members of the Speech group. The research environment is Linux, Matlab
and various scripting languages like Perl. The development environment
is C/C++, for Windows in particular.

The ideal candidate will have:
- Excellent research experience in statistics, speech and signal processing
- Experience in speech recognition and automatic segmentation (e.g. HTK)
- Experience in C++ development
- Good knowledge of UNIX and Windows environments
- High productivity, methodical work, and excellent programming style.

The position is available in the Analysis/Synthesis team of the Research
and Development department of Ircam, to start as soon as possible.

The initial contract is for 1 year, and could be prolonged.

In order to be able to begin immediately, the candidate must already hold valid EEC working papers.

Salary: according to qualifications and experience.

Please send your CV describing in a very detailed way the level of knowledge,
expertise and experience in the fields mentioned above (and any other
relevant information, recommendations in particular) preferably by email to: (Xavier Rodet, Head of the Analysis/Synthesis team)

Or by fax: (33 1) 44 78 15 40, attention of Xavier Rodet

Or by post to: Xavier Rodet, IRCAM, 1 Place Stravinsky, 75004 Paris, France 

Back to Top

7-21 . (2009-05-04) Several Ph.D. positions and Ph.D. or Postdoc scholarships, Universität Bielefeld


Applications are invited for several Ph.D. positions and Ph.D. scholarships in experimental phonetics, speech technology and laboratory phonology at Universität Bielefeld (Fakultät für Linguistik und Literaturwissenschaft), Germany.


Successful candidates should hold a Master's degree (or equivalent) in phonetics, computational linguistics, linguistics, computer science or a related discipline. They will have a strong background in either

-       speech synthesis and/or recognition

-       discourse prosody

-       laboratory phonology

-       speech and language rhythm research

-       multimodal speech (technology)


Candidates should appreciate working in an interdisciplinary environment. Good knowledge in experimental design techniques and programming skills will be considered a plus. Strong interest in research and high proficiency in English is required.


The Ph.D. positions will be part-time (50%); salary and social benefits are determined by the German public service pay scale (TVL-E13). The Ph.D. scholarship is based on the DFG scale. There is no mandatory teaching load.


Bielefeld University is an equal opportunity employer. Women are therefore particularly encouraged to apply. Disabled applicants with equivalent qualification will be treated preferentially.


The positions are available for three years (with a potential extension for the Ph.D. positions), starting as soon as
possible. Please submit your documents (cover letter, CV including list of publications, statement of research interests, names of two referees) electronically to the address indicated below. Applications must be received by June 15, 2009.
Universität Bielefeld
Fakultät für Linguistik und Literaturwissenschaft
Prof. Dr. Petra Wagner
Postfach 10 01 31
33 501 Bielefeld




Back to Top



7-22 . PhD position in machine translation and spoken language understanding (PORT-MEDIA project, start September 2009)

PORT-MEDIA (ANR CONTINT 2008-2011) is a cooperative project
sponsored by the French National Research Agency, between the University
of Avignon, the University of Grenoble, the University of Le Mans, CNRS
at Nancy and ELRA (European Language Resources Association).  PORT-MEDIA
will address the multi-domain and multi-lingual robustness and
portability of spoken language understanding systems. More specifically,
the overall objectives of the project can be summarized as:
- robustness: integration/coupling of the automatic speech recognition
component in the spoken language understanding process.
- portability across domains and languages: evaluation of the genericity
and adaptability of the approaches implemented in the
understanding systems, and development of new techniques inspired by
machine translation approaches.
- representation: evaluation of new rich structures for high-level
semantic knowledge representation.

The PhD thesis will focus on the multilingual portability of speech
understanding systems. For example, the candidate will investigate
techniques to rapidly adapt an understanding system from one language to
another and to create low-cost resources with (semi-)automatic methods,
for instance by using automatic alignment techniques and lightly
supervised translations. The main contribution will be to fill the gap
between the techniques currently used in the statistical machine
translation and spoken language understanding fields.

The thesis will be co-supervised by Fabrice Lefèvre, Assistant Professor
at LIA (University of Avignon) and Laurent Besacier, Assistant Professor
at LIG (University of Grenoble). The candidate will spend 18 months at
LIG then 18 months at LIA.

The salary of a PhD position is roughly 1,300€ net per month. Applicants
should hold a strong university degree entitling them to start a
doctorate (Masters/diploma or equivalent) in a relevant discipline
(Computer Science, Human Language Technology, Machine Learning, etc).
The applicants should be fluent in English. Competence in French is
optional, though applicants will be encouraged to acquire this skill
during training. All applicants should have very good programming skills.

For further information, please contact Fabrice Lefèvre (Fabrice.Lefevre
at AND Laurent Besacier (Laurent.Besacier at



Back to Top

8 . Journals


Back to Top

8-1 . Special issue of CSL on Emergent Artificial Intelligence Approaches for Pattern Recognition in Speech and Language Processing

Special Issue on "Emergent Artificial Intelligence Approaches for Pattern Recognition in Speech and Language Processing"
Computer Speech and Language, Elsevier
Deadline for paper submission: September 26, 2008.
Back to Top

8-2 . Special issue IEEE Trans. ASL Signal models and representation of musical and environmental sounds

Special Issue of IEEE Transactions on Audio, Speech and Language Processing
-- Submission deadline: December 15, 2008
-- Notification of acceptance: June 15, 2009
-- Final manuscript due: July 1, 2009
-- Tentative publication date: September 1, 2009
Guest editors
Dr. Bertrand David (Telecom ParisTech, France)
Dr. Laurent Daudet (UPMC University Paris 06, France)
Dr. Masataka Goto (National Institute of Advanced Industrial Science and Technology, Japan)
Dr. Paris Smaragdis (Adobe Systems, Inc, USA)
The non-stationary nature, the richness of the spectra and the mixing of diverse sources are common characteristics shared by musical and environmental audio scenes. This leads to specific challenges for audio processing tasks such as information retrieval, source separation, analysis-transformation-synthesis and coding.

When seeking to extract information from musical or environmental audio signals, the time-varying waveform or spectrum is often further analysed and decomposed into sound elements. Two aims of this decomposition can be identified, which are sometimes in conflict: to be adapted to the particular properties of the signal, and to be adapted to the targeted application. This special issue focuses on how the choice of a low-level representation (typically a time-frequency distribution, with or without a probabilistic framework, with or without perceptual considerations), a source model or a decomposition technique may influence the overall performance. Specific topics of interest include but are not limited to:
* factorizations of time-frequency distribution
* sparse representations
* Bayesian frameworks
* parametric modeling
* subspace-based methods for audio signals
* representations based on instrument or/and environmental sources signal models
* sinusoidal modeling of non-stationary spectra (sinusoids, noise, transients)
Typical applications considered are (non-exclusively):
* source separation/recognition
* mid- or high-level feature extraction (metrics, onsets, pitches, ...)
* sound effects
* audio coding
* information retrieval
* audio scene structuring, analysis or segmentation
* ...
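As a purely illustrative aside (not part of the call): the first listed topic, factorization of a time-frequency distribution, is commonly instantiated as nonnegative matrix factorization of a magnitude spectrogram. The following toy NumPy sketch, using the classic Lee-Seung multiplicative updates, is a hypothetical example; the matrix sizes and "spectrogram" are invented for illustration.

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Factorize a nonnegative matrix V (freq x time) as V ~ W @ H using
    multiplicative updates that minimize the Frobenius reconstruction error."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps   # columns: spectral templates
    H = rng.random((rank, T)) + eps   # rows: time-varying activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram": two fixed spectra (4 frequency bins) switching on and
# off over 5 time frames, so an exact rank-2 factorization exists.
spectra = np.array([[1.0, 0.0],
                    [0.8, 0.1],
                    [0.0, 1.0],
                    [0.1, 0.9]])
gains = np.array([[1, 1, 0, 0, 1],
                  [0, 0, 1, 1, 1]], dtype=float)
V = spectra @ gains
W, H = nmf(V, rank=2)
print(np.abs(V - W @ H).max())  # reconstruction error
```

The nonnegativity constraint is what makes the learned templates interpretable as sound elements; sparse or Bayesian variants mentioned in the topic list replace the cost function or add priors on W and H.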
Back to Top

8-3 . "Speech Communication" special issue on "Speech and Face to Face Communication


Speech communication is increasingly studied from a face-to-face perspective:
- It is interactive: the speaking partners build a complex communicative act together,
involving linguistic, emotional, expressive, and more generally cognitive and social
dimensions;
- It involves multimodality to a large extent: the "listener" sees and hears the speaker, who
produces sounds as well as facial and, more generally, bodily gestures;
- It involves not only linguistic but also psychological, affective and social aspects of
interaction. Gaze together with speech contributes to maintaining mutual attention and to
regulating turn-taking, for example. Moreover, the true challenge of speech communication is
to take into account and integrate information not only from the speaker but also from the
entire physical environment in which the interaction takes place.

The present issue proposes to synthesize the most recent developments on this topic,
considering its various aspects from complementary perspectives: cognitive and
neurocognitive (multisensory and perceptuo-motor interactions), linguistic (dialogic face-to-face
interactions), paralinguistic (emotions and affects, turn-taking, mutual attention),
and computational (animated conversational agents, multimodal interactive communication systems).

There will be two stages in the submission procedure.

- First stage (by DECEMBER 1ST): submission of a one-to-two-page abstract describing the
contents of the work and its relevance to the "Speech and Face to Face Communication" topic.
The guest editors will then make a selection of the most relevant proposals in December.

- Second stage (by MARCH 1ST): the selected contributors will be invited to submit a full
paper by MARCH 1ST. The submitted papers will then be peer reviewed through the regular
Speech Communication journal process (two independent reviews). Accepted papers will then
be published in the special issue.

Abstracts should be sent directly to the guest editors.

Back to Top

8-4 . SPECIAL ISSUE of the EURASIP Journal on Audio, Speech, and Music Processing. ON SCALABLE AUDIO-CONTENT ANALYSIS


The amount of easily-accessible audio, whether in the form of large
collections of audio or audio-video recordings, or in the form of
streaming media, has increased exponentially in recent times.
However this audio is not standardized: much of it is noisy,
recordings are frequently not clean, and most of it is not labelled.
The audio content covers a large range of categories including
sports, music and songs, speech, and natural sounds. There is
therefore a need for algorithms that allow us to make sense of these
data, to store, process, categorize, summarize, identify and
retrieve them quickly and accurately.

In this special issue we invite papers that present novel approaches
to problems such as (but not limited to):

Audio similarity
Audio categorization
Audio classification
Indexing and retrieval
Semantic tagging
Audio event detection

We are especially interested in work that addresses real-world
issues such as:

Scalable and efficient algorithms
Audio analysis under noisy and real-world conditions
Classification with uncertain labeling
Invariance to recording conditions
On-line and real-time analysis of audio.
Algorithms for very large audio databases.

We encourage theoretical or application-oriented papers that
highlight exploitation of such techniques in practical systems/products.

Authors should follow the EURASIP Journal on Audio, Speech, and Music
Processing manuscript format described at the journal site. Prospective authors should
submit an electronic copy of their complete manuscript through the
journal Manuscript Tracking System, according to the following timetable:

Manuscript Due: June 1st, 2009
First Round of Reviews: September 1, 2009
Publication Date: December 1st, 2009

Guest Editors:

1) Bhiksha Raj
Associate Professor
School of Computer Science
Carnegie Mellon University

2) Paris Smaragdis
Senior Research Scientist
Advanced Technology Labs, Adobe Systems Inc.
Newton, MA, USA

3) Malcolm Slaney
Principal Scientist
Yahoo! Research
Santa Clara, CA
(Consulting) Professor
Stanford CCRMA

4) Chung-Hsien Wu
Distinguished Professor
Dept. of Computer Science & Information Engineering
National Cheng Kung University,
Tainan, TAIWAN

5) Liming Chen
Professor and Head of the Dept. of Mathematics & Informatics
Ecole Centrale de Lyon
University of Lyon
Lyon, France

6) Professor Hyoung-Gook Kim
Intelligent Multimedia Signal Processing Lab.
Kwangwoon University, Republic of Korea 

Back to Top

8-5 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing.on Atypical Speech

Atypical Speech
Call for Papers

Research in speech processing (e.g., speech coding, speech enhancement, speech recognition, speaker recognition, etc.) tends to concentrate on speech samples collected from normal adult talkers. Focusing only on these “typical speakers” limits the practical applications of automatic speech processing significantly. For instance, a spoken dialogue system should be able to understand any user, even if he or she is under stress or belongs to the elderly population. While there is some research effort in language and gender issues, there remains a critical need for exploring issues related to “atypical speech”. We broadly define atypical speech as speech from speakers with disabilities, children's speech, speech from the elderly, speech with emotional content, speech in a musical context, and speech recorded through unique, nontraditional transducers. The focus of the issue is on voice quality issues rather than unusual talking styles.

In this call for papers, we aim to concentrate on issues related to processing of atypical speech, issues that are commonly ignored by the mainstream speech processing research. In particular, we solicit original, previously unpublished research on:
• Identification of vocal effort, stress, and emotion in speech
• Identification and classification of speech and voice disorders
• Effects of ill health on speech
• Enhancement of disordered speech
• Processing of children's speech
• Processing of speech from elderly speakers
• Song and singer identification
• Whispered, screamed, and masked speech
• Novel transduction mechanisms for speech processing
• Computer-based diagnostic and training systems for speech dysfunctions
• Practical applications

Authors should follow the EURASIP Journal on Audio, Speech, and Music Processing manuscript format described at the journal site. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System, according to the following timetable:
Manuscript due: April 1, 2009
First round of reviews: July 1, 2009
Publication date: October 1, 2009

Guest Editors

Georg Stemmer, Siemens AG, Corporate Technology, 80333 Munich, Germany

Elmar Nöth, Department of Pattern Recognition, Friedrich-Alexander University of Erlangen-Nuremberg, 91058 Erlangen, Germany

Vijay Parsa, National Centre for Audiology, The University of Western Ontario, London, ON, Canada N6G 1H1 

Back to Top

8-6 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing on Animating virtual speakers or singers from audio: lip-synching facial animation


Special issue on 
Animating virtual speakers or singers from audio: lip-synching facial animation 

Call for Papers

Lip synchronization (lip-synch) is the term used to describe matching lip movements to a pre-recorded speaking or singing voice. It is often used in the production of films, cartoons, television programs, and computer games.

We focus here on technologies that automatically compute the facial movements of animated characters from pre-recorded audio. Automating the lip-synch process, generally termed visual speech synthesis, has potential for use in a wide range of applications: from desktop agents on personal computers, to language translation tools, to generating and displaying stimuli in speech perception experiments.

A visual speech synthesizer comprises at least three modules: a control model that computes articulatory trajectories from the input signal; a shape model that animates the facial geometry from the computed trajectories; and an appearance model that renders the animation by varying the colors of pixels. Numerous solutions have been proposed in the literature for each of these modules. Control models exploit either direct signal-to-articulation mappings or more complex trajectory formation systems that utilize a phonetic segmentation of the acoustic signal. Shape models vary from ad hoc parametric deformations of a 2D mesh to sophisticated 3D biomechanical models. Appearance models exploit morphing of natural images, texture blending, or more sophisticated texture models.

The aim of this special issue is to provide a detailed description of state-of-the-art systems and to identify new techniques that have recently emerged from both the audiovisual speech and computer graphics research communities.
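The three-module decomposition above (control, shape, appearance) can be sketched as a simple pipeline. This is a minimal illustrative skeleton, not any system from the literature: all class names and the placeholder per-frame computations are hypothetical stand-ins for the learned models a real synthesizer would use.

```python
# Hypothetical sketch of the control -> shape -> appearance pipeline.
# Each stage is a trivial placeholder for a real learned model.

class ControlModel:
    """Maps an acoustic signal to articulatory trajectories."""
    def trajectories(self, audio):
        # Placeholder: one articulatory value (e.g., lip aperture) per
        # audio frame; a real model would use a signal-to-articulation
        # mapping or a phonetic segmentation of the signal.
        return [abs(sample) for sample in audio]

class ShapeModel:
    """Deforms the facial geometry according to the trajectories."""
    def animate(self, trajectories):
        # Placeholder: a single "mouth_open" parameter per frame.
        return [{"mouth_open": t} for t in trajectories]

class AppearanceModel:
    """Renders each deformed shape to pixels."""
    def render(self, shapes):
        # Placeholder: one grey level per frame.
        return [int(255 * min(1.0, s["mouth_open"])) for s in shapes]

def lip_synch(audio):
    control, shape, appearance = ControlModel(), ShapeModel(), AppearanceModel()
    return appearance.render(shape.animate(control.trajectories(audio)))

frames = lip_synch([0.0, 0.5, -1.0])  # one rendered frame per audio frame
```

The point of the sketch is only the data flow: audio in, trajectories, deformed geometry, rendered frames out, with each module independently replaceable.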
In particular, we solicit original, previously unpublished research on: 


• Audiovisual synthesis from text
• Facial animation from audio
• Trajectory formation systems
• Evaluation methods for audiovisual synthesis
• Perception of audiovisual asynchrony in speech and music
• Control of speech and facial expressions



This special issue follows the first visual speech synthesis challenge (LIPS'2008), which took place as a special session at INTERSPEECH 2008 in Brisbane, Australia. The aim of the challenge was to stimulate discussion about the subjective quality assessment of synthesized visual speech, with a view to developing standardized evaluation procedures. For this special issue, all papers selected for publication should include a description of a subjective evaluation experiment that outlines the impact of the proposed synthesis scheme on some subjective measure, such as audiovisual intelligibility, cognitive load, or perceived naturalness. This evaluation metric can be assessed either by participation in the LIPS'2008 challenge or by an independent perceptual experiment.

Technical organization

The issue is coordinated by three guest editors: G. Bailly, B.-J. Theobald, and S. Fagel. These editors co-organized the LIPS'2008 challenge, and together they cover a broad spectrum of scientific backgrounds coherent with the theme: audiovisual speech processing, facial animation, and computer graphics. They are assisted by a scientific committee, whose members are also invited to submit papers and to help publicize the issue. The special issue will be introduced by a paper written by the editors, with a critical review of the selected papers and a discussion of the results obtained by the systems participating in the LIPS'2008 challenge.

Schedule

Authors should follow the EURASIP Journal on Audio, Speech, and Music Processing manuscript format described at the journal site. Prospective authors should submit an electronic copy of their complete manuscript through the journal's Manuscript Tracking System, according to the following timetable:


One-page abstract: January 1, 2009
Preselection of papers: February 1, 2009
Manuscript due: March 1, 2009
First round of reviews: May 1, 2009
Camera-ready papers: July 1, 2009
Publication date: September 1, 2009

Guest Editors


Gérard Bailly, GIPSA-Lab, Speech & Cognition Dept., Grenoble-France; 

Sascha Fagel, Speech & Communication Institute, TU Berlin, Germany; 

Barry-John Theobald, School of Computing Sciences, University of East Anglia, UK; 


Back to Top

8-7 . CfP Special issue of Speech Comm: Non-native speech perception in adverse conditions: imperfect knowledge, imperfect signal



Much work in phonetics and speech perception has focused on doubly-optimal conditions, in which the signal reaching listeners is unaffected by distorting influences and in which listeners possess native competence in the sound system. However, in practice, these idealised conditions are rarely met. The processes of speech production and perception thus have to account for imperfections in the state of knowledge of the interlocutor as well as imperfections in the signal received. In noisy settings, these factors combine to create particularly adverse conditions for non-native listeners.

The purpose of the Special Issue is to assemble the latest research on perception in adverse conditions with special reference to non-native communication. The special issue will bring together, interpret and extend the results emerging from current research carried out by engineers, psychologists and phoneticians, such as the general frailty of some sounds for both native and non-native listeners and the strong non-native disadvantage experienced for categories which are apparently equivalent in the listeners’ native and target languages.

Papers describing novel research on non-native speech perception in adverse conditions are welcomed from any perspective, including the following. We especially welcome interdisciplinary contributions.

• models and theories of L2 processing in noise
• informational and energetic masking
• role of attention and processing load
• effect of noise type and reverberation
• inter-language phonetic distance
• audiovisual interactions in L2
• perception-production links
• the role of fine phonetic detail


Guest Editors
Maria Luisa Garcia Lecumberri (Department of English, University of the Basque Country, Vitoria, Spain).

Martin Cooke (Ikerbasque and Department of Electrical & Electronic Engineering, University of the Basque Country, Bilbao, Spain).

Anne Cutler (Max-Planck Institute for Psycholinguistics, Nijmegen, The Netherlands and MARCS Auditory Laboratories, Sydney, Australia).


Full papers should be submitted by 31st July 2009


Authors should consult the "Guide for Authors", available online, for information about the preparation of their manuscripts. Papers should be submitted via the journal's online submission system, choosing "Special Issue: non-native speech perception" as the article type. If you are a first-time user of the system, please register as an author. Prospective authors are welcome to contact the guest editors for more details of the Special Issue.

Back to Top

8-8 . CfP IEEE Special Issue on Speech Processing for Natural Interaction with Intelligent Environments

Call for Papers
IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Speech Processing for Natural Interaction with Intelligent Environments

With the advances in microelectronics, communication technologies, and smart materials, our environments are becoming increasingly intelligent through the presence of robots, bio-implants, mobile devices, advanced in-car systems, smart house appliances, and other professional systems. As these environments are integral parts of our daily work and life, there is great interest in interacting with them naturally. Such interaction may further enhance the perception of intelligence. "Interaction between man and machine should be based on the very same concepts as that between humans, i.e. it should be intuitive, multi-modal and based on emotion," as envisioned by Reeves and Nass (1996) in their famous book "The Media Equation". Speech is the most natural means of interaction for human beings, and it offers the unique advantage of not requiring a separate device: we carry our "device" with us all the time.

Speech processing techniques are developed for intelligent environments to support either explicit interaction through message communications, or implicit interaction by providing valuable information about the physical context ("who speaks when and where") as well as the emotional and social context of an interaction. Challenges presented by intelligent environments include the use of distant microphones, resource constraints, and large variations in acoustic condition, speaker, content, and context. The two central classes of techniques for coping with these challenges are high-performing "low-level" signal processing algorithms and sophisticated "high-level" pattern recognition methods.

We are soliciting original, previously unpublished manuscripts directly targeting or related to natural interaction with intelligent environments. The scope of this special issue includes, but is not limited to:

• Multi-microphone front-end processing for distant-talking interaction
• Speech recognition in adverse acoustic environments and joint optimization with array processing
• Speech recognition for low-resource and/or distributed computing infrastructure
• Speaker recognition and affective computing for interaction with intelligent environments
• Context-awareness of speech systems with regard to their applied environments
• Cross-modal analysis of speech, gesture, and facial expressions for robots and smart spaces
• Applications of speech processing in intelligent systems, such as robots, bio-implants, and advanced driver assistance systems

Submission information is available online. Prospective authors are required to follow the Author's Guide for manuscript preparation of the IEEE Transactions on Signal Processing. Manuscripts will be peer reviewed according to the standard IEEE process.

Manuscript submission due: July 3, 2009
First review completed: October 2, 2009
Revised manuscript due: November 13, 2009
Second review completed: January 29, 2010
Final manuscript due: March 5, 2010

Lead guest editor:
Zheng-Hua Tan, Aalborg University, Denmark

Guest editors:
Reinhold Haeb-Umbach, University of Paderborn, Germany
Sadaoki Furui, Tokyo Institute of Technology, Japan
James R. Glass, Massachusetts Institute of Technology, USA
Maurizio Omologo, FBK-IRST, Italy
Back to Top

8-9 . CfP Special issue "Speech as a Human Biometric: I know who you are from your voice" Int. Jnl Biometrics

International Journal of Biometrics  (IJBM)
Call For papers
Special Edition on: "Speech as a Human Biometric: I Know Who You Are From Your Voice!"
Guest Editors: 
Dr. Waleed H. Abdulla, The University of Auckland, New Zealand
Professor Sadaoki Furui, Tokyo Institute of Technology, Japan
Professor Kuldip K. Paliwal, Griffith University, Australia
The 2001 MIT Technology Review indicated that biometrics is one of the emerging technologies that will change the world. Human biometrics is the automated recognition of a person using inherent, distinctive physiological and/or involuntary behavioural features.
Human voice biometrics has gained significant attention in recent years. The ubiquity of cheap microphones, the identity information carried by the voice, ease of deployment, natural use, the spread of telephony applications, and non-obtrusiveness have been significant motivations for developing biometrics based on speech signals. The robustness of speech biometrics is generally good. However, significant challenges remain with respect to conditions that cannot be controlled easily, including changes in the acoustic environment, respiratory and vocal pathology, age, channel, etc. The goal of speech biometrics research is to solve or mitigate these problems.
This special issue will bring together leading researchers and investigators in speech research for security applications to present their latest successes in this field. The presented work could be new techniques, review papers, challenges, tutorials or other relevant topics.
   Subject Coverage
Suggested topics include, but are not limited to:
Speech biometrics
Speaker recognition
Speech feature extraction for speech biometrics
Machine learning techniques for speech biometrics
Speech enhancement for speech biometrics
Speech recognition for speech biometrics
Speech changeability over age, health condition, emotional status, fatigue, and related factors
Accent, gender, age and ethnicity information extraction from speech signals
Speech watermarking
Speech database security management
Cancellable speech biometrics
Voice activity detection
Conversational speech biometrics
   Notes for Prospective Authors
Submitted papers should not have been previously published nor be currently under consideration for publication elsewhere.
All papers are refereed through a peer review process. A guide for authors, sample copies, and other relevant information for submitting papers are available on the Author Guidelines page.
   Important Dates
Manuscript due: 15 June, 2009
Acceptance/rejection notification: 15 September, 2009
Final manuscript due: 15 October, 2009
For more information, please see the Calls for Papers page or the IJBM home page.
Back to Top

8-10 . CfP Special on Voice transformation IEEE Trans ASLP

IEEE Signal Processing Society
IEEE Transactions on Audio, Speech and Language Processing
Special Issue on Voice Transformation
With the increasing demand for Voice Transformation in areas such as
speech synthesis for creating target or virtual voices, modeling various
effects (e.g., Lombard effect), synthesizing emotions, making more natural
dialog systems which use speech synthesis, as well as in areas like
entertainment, film and music industry, toys, chat rooms and games, dialog
systems, security and speaker individuality for interpreting telephony,
high-end hearing aids, vocal pathology and voice restoration, there is a
growing need for high-quality Voice Transformation algorithms and systems
processing synthetic or natural speech signals.
Voice Transformation aims at the control of non-linguistic information of
speech signals such as voice quality and voice individuality. A great deal
of interest and research in the area has been devoted to the design and
development of mapping functions and modifications for vocal tract
configuration and basic prosodic features.
However, high quality Voice Transformation systems that create effective
mapping functions for vocal tract, excitation signal, and speaking style
and whose modifications take into account the interaction of source and
filter during voice production, are still lacking.
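As a concrete illustration of what a "mapping function" means here, the sketch below fits a linear map from source-speaker spectral features to target-speaker features on aligned frames. This is only a toy example under stated assumptions: real voice transformation systems use much richer models (e.g., GMM-based or other statistical mappings over vocal tract and excitation parameters), and all variable names here are illustrative, with synthetic data standing in for aligned parallel recordings.

```python
# Toy linear spectral mapping for voice transformation, fitted by
# least squares on synthetic "parallel" data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 4))   # source-speaker feature vectors (frames x dims)
W_true = rng.normal(size=(4, 4))  # hidden "true" speaker-to-speaker map
tgt = src @ W_true                # synthetic aligned target-speaker features

# Fit the mapping function on the aligned (source, target) frames.
W, *_ = np.linalg.lstsq(src, tgt, rcond=None)

# Transform source frames toward the target speaker's feature space.
converted = src @ W
print(np.allclose(converted, tgt, atol=1e-6))  # the fit recovers the map here
```

In practice the alignment step itself (matching source and target frames, e.g., by dynamic time warping) is a major preprocessing issue, which is why it appears explicitly in the topic list below.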
We invite researchers to submit original papers describing new approaches
in all areas related to Voice Transformation including, but not limited to,
the following topics:
* Preprocessing for Voice Transformation
(alignment, speaker selection, etc.)
* Speech models for Voice Transformation
(vocal tract, excitation, speaking style)
* Mapping functions
* Evaluation of Transformed Voices
* Detection of Voice Transformation
* Cross-lingual Voice Transformation
* Real-time issues and embedded Voice Transformation Systems
* Applications
The call for papers is also available online.
Prospective authors are required to follow the Information for Authors for
manuscript preparation of the IEEE Transactions on Audio, Speech, and
Language Processing.
Manuscripts will be peer reviewed according to the standard IEEE process.
Submission deadline: May 10, 2009
Notification of acceptance: September 30, 2009
Final manuscript due: October 30, 2009
Publication date: January 2010
Lead Guest Editor:
Yannis Stylianou, University of Crete, Crete, Greece
Guest Editors:
Tomoki Toda, Nara Inst. of Science and Technology, Nara, Japan
Chung-Hsien Wu, National Cheng Kung University, Tainan, Taiwan
Alexander Kain, Oregon Health & Science University, Portland Oregon, USA
Olivier Rosec, Orange-France Telecom R&D, Lannion, France

Back to Top

8-11 . Mathematics, Computing, Language, and the Life: Frontiers in Mathematical Linguistics and Language Theory (tentative)

A new book series will be announced in a few weeks by a major publisher under the (tentative) title of Mathematics, Computing, Language, and the Life: Frontiers in Mathematical Linguistics and Language Theory.

SERIES DESCRIPTION:

Language theory, as originated by Chomsky's seminal work in the 1950s and developed in parallel with Turing-inspired automata theory, was first applied to natural language syntax within the context of the first, unsuccessful, attempts to achieve reliable machine translation prototypes. After this, the theory proved very valuable in the study of programming languages and the theory of computing. In the last 15-20 years, language and automata theory has experienced rapid theoretical development as a consequence of the emergence of new interdisciplinary domains and of demands for application to a number of disciplines, most notably: natural language processing, computational biology, natural computing, programming, and artificial intelligence.

The series will collect recent research on both foundational and applied issues, and is addressed to graduate students as well as to post-docs and academics.

TOPIC CATEGORIES:

A. Theory: language and automata theory, combinatorics on words, descriptional and computational complexity, semigroups, graphs and graph transformation, trees, computability.
B. Natural language processing: mathematics of natural language processing, finite-state technology, languages and logics, parsing, transducers, text algorithms, web text retrieval.
C. Artificial intelligence, cognitive science, and programming: patterns, pattern matching and pattern recognition, models of concurrent systems, Petri nets, models of pictures, fuzzy languages, grammatical inference and algorithmic learning, language-based cryptography, data and image compression, automata for system analysis and program verification.
D. Bio-inspired computing and natural computing: cellular automata, symbolic neural networks, evolutionary algorithms, genetic algorithms, DNA computing, molecular computing, biomolecular nanotechnology, circuit theory, quantum computing, chemical and optical computing, models of artificial life.
E. Bioinformatics: mathematical biology, string and combinatorial issues in computational biology and bioinformatics, mathematical evolutionary genomics, language processing of biological sequences, digital libraries.

The connections of this broad interdisciplinary field with other areas include computational linguistics, knowledge engineering, theoretical computer science, software science, molecular biology, etc. The first volumes will be miscellaneous and will globally define the scope of the future series.

INVITATION TO CONTRIBUTE:

Contributions are requested for the first five volumes. In principle, there will be no limit on length. All contributions will be subject to strict peer review. Collections of papers are also welcome. Potential contributors should express their interest in being considered for the volumes by April 25, 2009. They should specify:

- the tentative title of the contribution,
- the authors and affiliations,
- a 5-10 line abstract,
- the most appropriate topic category (A to E above).

A selection will be made immediately afterwards, with invited authors submitting their contributions for peer review by July 25, 2009. The volumes are expected to appear in the first months of 2010.
Back to Top

9 . Future Speech Science and Technology Events

9-1 . (2009-05-14) Coferences GIPSA Grenoble

Thursday, May 14, 2009, 1:30 pm – External seminar
Mathilde FORT
Laboratoire de Psychologie et NeuroCognition, Grenoble

Lexical access in the visual perception of speech

Meeting room of the Speech and Cognition Department (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire

Monday, May 25, 2009, 1:30 pm – External discussion seminar
Laboratoire d'Informatique de Grenoble

Discussion seminar on collaborative learning situations

Meeting room of the Speech and Cognition Department (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire

Thursday, May 28, 2009, 3:30 pm – External seminar
Note the unusual time
Pascal BELIN
Voice Neurocognition Laboratory
University of Glasgow, UK

"I hear voices": Cerebral bases of vocal cognition

The human voice is the most important sound category of our auditory environment. The voice carries speech, but it is also an "auditory face" rich in affective and identity information. Little is known on how the processing of these different types of vocal information is organized in the human auditory cortex.
In a series of functional magnetic resonance imaging (fMRI) experiments, we examined the cortical processing of sounds of human voices. The results obtained suggest that: 1) Perceiving sounds of voice involves activation of “temporal voice areas” (TVA), areas of auditory cortex mostly located in superior temporal sulcus (STS) bilaterally, much more activated by sounds of voice than by non-vocal sounds; 2) Voice selective areas in the right anterior STS are particularly involved in the paralinguistic aspects of voice perception, including speaker recognition. 3) This selectivity to voice appears to be largely species-specific, i.e., sounds of animal voices induce a much more restricted activation of STS. 4) At present, the only two individuals who failed to show activation of the TVA were autistic individuals. These results suggest that the different types of vocal information could be processed in partially dissociated functional pathways, and suggest a neurocognitive model of voice perception largely similar to those proposed for face perception.

For more information:

Back to Top

9-2 . (2009-05-18) 3rd Advanced Voice Function Assessment International Workshop (AVFA2009)

3rd Advanced Voice Function Assessment International Workshop (AVFA2009)

Madrid (Spain), 18th - 20th May 2009

This is the first Call for Papers and Posters for the 3rd Advanced Voice Function Assessment International Workshop (AVFA2009), which will be held from May 18th to 20th at the Universidad Politécnica de Madrid, Spain.


Speech is the most important means of communication among humans, resulting from a complex interaction between vocal fold vibration at the larynx and voluntary movements of the articulators (i.e., mouth, tongue, velum, jaw, etc.). The function of voice, however, is not limited to speech communication. It also conveys emotions, expresses personality features, and reflects situations of stress or pathology. Moreover, it has an aesthetic value in many professional activities, affecting salesmen, managers, lawyers, singers, actors, etc.

Although research in speech science has traditionally favoured areas such as synthesis, recognition, or speaker verification, the facts above motivate the current emergence of a new research area devoted to voice function assessment.

     AVFA2009 aims at fostering interdisciplinary collaboration and interactions among researchers in voice assessment beyond the framework of COST Action 2103, thus reaching the whole scientific community.


     Topics of interest include, but are not limited to: 

  • Automatic detection of voice disorders
  • Automatic assessment & rating of voice quality
  • New strategies for parameterization and modelling normal and pathological voices (biomechanical-based parameters, chaos modelling, etc.)
  • Databases of vocal disorders
  • Inverse filtering
  • Signal processing for remote diagnosis
  • Speech enhancement for pathological & oesophageal voices
  • Objective parameters extraction from vocal fold images using videolaryngoscopy, videokymography, fMRI and other emerging techniques
  • Multi-modal analysis of disordered speech
  • Robust pitch extraction algorithms for pathological & oesophageal voices
  • Emotions in speech
  • Speaker adaptation
  • Voice Physiology and Biomechanics
  • Modelling of Voice Production
  • Diagnosis and Evaluation Protocols
  • Substitution Voices
  • Evaluation of Clinical Treatments
  • Analysis of Oesophageal Voices


Prospective authors are asked to submit electronically a preliminary version of their full paper, with a maximum length of 4 pages including figures and tables, in English. Preliminary papers should be submitted as PDF documents, fitted to the linked template, by the 15th of January. The submitted documents should include the title and the authors' names, affiliations, and addresses. In addition, the e-mail address and phone number of the corresponding author should be given.

Workshop proceedings will be published both on paper and on CD-ROM. Author registration for the conference is required for accepted papers to be included in the proceedings. The best papers presented at the workshop will be eligible for publication in a refereed journal.

Best student paper award

Based on the reviewers' comments and the presentation at the conference, the organizing committee will give a best student paper award. The winning author will be announced at the closing ceremony of AVFA2009.


• Proposal due: 15th January 2009
• Notification of acceptance: 15th February 2009
• Final papers due: 28th February 2009
• Preliminary program: 1st May 2009
• Workshop: 18th May – 20th May 2009

Registration and Information

Registration will be handled via the AVFA2009 web site. Please contact the secretariat for further information.

Program Committee

  • Juan Ignacio Godino Llorente, Universidad Politécnica de Madrid, Co-Chair
  • Pedro Gómez Vilda, Universidad Politécnica de Madrid, Co-Chair
  • Rubén Fraile, Universidad Politécnica de Madrid, Scientific Secretariat
  • Bartolomé Scola Yurrita, Gregorio Marañón Hospital 

  • Philippe H. Dejonckere, University Medical Center Utrecht
  • Yannis Stylianou, University of Crete

Back to Top

9-3 . (2009-05-31) NAACL-HLT-09: Call for Tutorial Proposals

NAACL-HLT-09: Call for Tutorial Proposals

Proposals are invited for the Tutorial Program of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT) 2009 Conference. The conference is to be held from May 31 to June 5, 2009 in Boulder, Colorado. The tutorials will be held on Sunday, May 31.

Proposals for tutorials on all topics of computational linguistics and speech processing, such as processing for purposes of indexing and retrieval, processing for data mining, and so forth, are welcome. Especially encouraged are tutorials that educate the community about advancements in speech and natural language processing occurring in situ with contextual awareness, such as understanding speech, language or gesture in particular physical contexts.

Information on the tutorial instructor payment policy can be found at Tutorial_teacher_payment_policy

PLEASE NOTE: Remuneration for Tutorial presenters is fixed according to the above policy and does not cover registration fees for the main conference.


Proposals for tutorials should contain:

  1. A title and brief description of the tutorial content and its relevance to the NAACL-HLT community (not more than 2 pages).
  2. A brief outline of the tutorial structure showing that the tutorial's core content can be covered in a three-hour slot (including a coffee break). In exceptional cases six-hour tutorial slots are available as well.
  3. The names, postal addresses, phone numbers, and email addresses of the tutorial instructors, including a one-paragraph statement of their research interests and areas of expertise.
  4. A list of previous venues and approximate audience sizes, if the same or a similar tutorial has been given elsewhere; otherwise an estimate of the audience size.
  5. A description of special requirements for technical equipment (e.g., internet access).

Proposals should be submitted by electronic mail, in plain ASCII text no later than January 15, 2009 to tutorials.hlt09 "at" gmail "dot" com. The subject line should be: "NAACL HLT 2009: TUTORIAL PROPOSAL".


  1. Proposals will not be accepted by regular mail or fax, only by email to: tutorials.hlt09 "at" gmail "dot" com.
  2. You will receive an email confirmation from us that your proposal has been received. If you do not receive this confirmation 24 hours after sending the proposal, please contact us personally using all the following emails: ciprianchelba "at" google "dot" com,
    kantor "at" scils "dot" rutgers "dot" edu, and
    roark "at" cslu "dot" ogi "dot" edu.


Accepted tutorial speakers will be notified by February 1, 2009, and must then provide abstracts of their tutorials for inclusion in the conference registration material by March 1, 2009. The description should be in two formats: an ASCII version that can be included in email announcements and published on the conference web site, and a PDF version for inclusion in the electronic proceedings (detailed instructions will be given). Tutorial speakers must provide tutorial materials, at least containing copies of the course slides as well as a bibliography for the material covered in the tutorial, by April 15, 2009.


  • Submission deadline for tutorial proposals: January 15, 2009
  • Notification of acceptance: February 1, 2009
  • Tutorial descriptions due: March 1, 2009
  • Tutorial course material due: April 15, 2009
  • Tutorial date: May 31, 2009


  • Ciprian Chelba, Google
  • Paul Kantor, Rutgers
  • Brian Roark, Oregon Health & Science University

Please send inquiries concerning NAACL-HLT-09 tutorials to tutorials.hlt09 "at" gmail "dot" com  

Back to Top

9-4 . (2009-05-31) Call for Workshop proposals EACL 2009, NAACL HLT 2009, ACL-UCNLP 2009



Joint site:

The Association for Computational Linguistics invites proposals for
workshops to be held in conjunction with one of the three flagship
conferences sponsored in 2009 by the Association for Computational
Linguistics: ACL-IJCNLP 2009, EACL 2009, and NAACL HLT 2009.  We solicit
proposals on any topic of interest to the ACL community. Workshops will
be held at one of the following conference venues:

EACL 2009 is the annual meeting of the European chapter of the ACL. The
conference will be held in Athens, Greece, March 30-April 3 2009;
workshops March 30-31.

NAACL HLT 2009 is the annual meeting of the North American chapter of
the ACL.  It continues the inclusive tradition of encompassing relevant
work from the natural language processing, speech and information
retrieval communities.  The conference will be held in Boulder,
Colorado, USA, from May 31-June 5 2009; workshops will be held June 4-5.

ACL-IJCNLP 2009 combines the 47th Annual Meeting of the Association for
Computational Linguistics (ACL 2009) with the 4th International Joint
Conference on Natural Language Processing (IJCNLP).  The conference will
be held in Singapore, August 2-7 2009; workshops will be held August 6-7.


In a departure from previous years, ACL-IJCNLP, EACL and NAACL HLT will
coordinate the submission and reviewing of workshop proposals for all
three ACL 2009 conferences.

Proposals for workshops should contain:

    * A title and brief (2-page max) description of the workshop topic
      and content.
    * The desired workshop length (one or two days), and an estimate
      of the audience size.
    * The names, postal addresses, phone numbers, and email addresses
      of the organizers, with one-paragraph statements of their
      research interests and areas of expertise.
    * A budget.
    * A list of potential members of the program committee, with an
      indication of which members have already agreed.
    * A description of any shared tasks associated with the workshop.
    * A description of special requirements for technical needs.
    * A venue preference specification.

The venue preference specification should list the venues at which the
organizers would be willing to present the workshop (EACL, NAACL HLT, or
ACL-IJCNLP).  A proposal may specify one, two, or three acceptable
workshop venues; if more than one venue is acceptable, the venues should
be preference-ordered.  There will be a single workshop committee,
coordinated by the three sets of workshop chairs.  This single committee
will review the quality of the workshop proposals.  Once the reviews are
complete, the workshop chairs will work together to assign workshops to
each of the three conferences, taking into account the location
preferences given by the proposers.

The ACL has a set of policies on workshops. You can find general
information on policies regarding attendance, publication, financing,
and sponsorship, as well as on financial support of SIG workshops, at
the following URL:

Please submit proposals by electronic mail no later than September 1
2008, to acl09-workshops at with the
subject line: "ACL 2009 WORKSHOP PROPOSAL."


Notification of acceptance of workshop proposals will occur no later
than September 23, 2008.  Since the three ACL conferences will occur at
different times, the timescales for the submission and reviewing of
workshop papers, and the preparation of camera-ready copies, will be
different for the three conferences. Suggested timescales for each of
the conferences are given below.

Sep 1, 2008     Workshop proposal deadline
Sep 23, 2008    Notification of acceptance of workshops

EACL 2009
Sep 30, 2008    Call for papers issued by this date
Dec 12, 2008    Deadline for paper submission
Jan 23, 2009    Notification of acceptance of papers
Feb  6, 2009    Camera-ready copies due
Mar 30-31, 2009 EACL 2009 workshops

NAACL HLT 2009
Dec 10, 2008    Call for papers issued by this date
Mar 6, 2009     Deadline for paper submissions
Mar 30, 2009    Notification of paper acceptances
Apr 12, 2009    Camera-ready copies due
June 4-5, 2009  NAACL HLT 2009 workshops

ACL-IJCNLP 2009
Feb 6, 2009     Call for papers issued by this date
May 1, 2009     Deadline for paper submissions
Jun 1, 2009     Notification of acceptances
Jun 14, 2009    Camera-ready copies due
Aug 6-7, 2009   ACL-IJCNLP 2009 Workshops

Workshop Co-Chairs:

    * Miriam Butt, EACL, University of Konstanz
    * Stephen Clark, EACL, Oxford University
    * Nizar Habash, NAACL HLT, Columbia University
    * Mark Hasegawa-Johnson, NAACL HLT, University of Illinois at Urbana-Champaign
    * Jimmy Lin, ACL-IJCNLP, University of Maryland
    * Yuji Matsumoto, ACL-IJCNLP, Nara Institute of Science and Technology

For inquiries, send email to: acl09-workshops at


Back to Top

9-5 . (2009-05-31) CfP NAACL HLT 2009, Boulder, CO, USA

Call for Papers for NAACL HLT 2009
May 31 – June 5, 2009, Boulder, Colorado

Deadline for full paper submission: Monday, December 1, 2008
Deadline for short paper submission: Monday, February 9, 2009

NAACL HLT 2009 combines the Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL) with the Human Language Technology Conference (HLT) of NAACL. The conference covers a broad spectrum of disciplines working towards enabling intelligent systems to interact with humans using natural language, and towards enhancing human-human communication through services such as speech recognition, automatic translation, information retrieval, text summarization, and information extraction. NAACL HLT 2009 will feature full papers, short papers, posters, demonstrations, and a doctoral consortium, as well as pre- and post-conference tutorials and workshops. The conference invites the submission of papers on substantial, original, and unpublished research in disciplines that could impact human language processing systems. We encourage the submission of short papers that can be characterized as a small, focused contribution, a work in progress, a negative result, an opinion piece, or an interesting application note. A separate review form for short papers will be introduced this year.
NAACL HLT 2009 aims to hold two special sessions, Large Scale Language Processing and Speech Indexing and Retrieval.
Topics include, but are not limited to, the following areas, and are understood to be applied to speech and/or text:
- Large scale language processing
- Speech indexing and retrieval
- Information retrieval (including monolingual and CLIR)
- Information extraction
- Speech-centered applications (e.g., human-computer, human-robot interaction, education and learning systems, assistive technologies, digital entertainment)
- Machine translation
- Summarization
- Question answering
- Topic classification and information filtering
- Non-topical classification (e.g., sentiment/attribution/genre analysis)
- Topic clustering
- Text and speech mining
- Statistical and machine learning techniques for language processing
- Spoken term detection and spoken document indexing
- Language generation
- Speech synthesis
- Speech understanding
- Speech analysis and recognition
- Multilingual processing
- Phonology
- Morphology (including word segmentation)
- Part of speech tagging
- Syntax and parsing (e.g., grammar induction, formal grammar, algorithms)
- Word sense disambiguation
- Lexical semantics
- Formal semantics and logic
- Textual entailment and paraphrasing
- Discourse and pragmatics
- Dialog systems
- Knowledge acquisition and representation
- Evaluation (e.g., intrinsic, extrinsic, user studies)
- Development of language resources (e.g., lexicons, ontologies, annotated corpora)
- Rich transcription (automatic annotation of information structure and sources in speech)
- Multimodal representations and processing, including speech and gesture
Submission information will soon be available at:
General Conference Chair:
Mari Ostendorf, University of Washington
Program Co-Chairs:
Michael Collins, Massachusetts Institute of Technology
Shri Narayanan, University of Southern California
Douglas W. Oard, University of Maryland
Lucy Vanderwende, Microsoft Research
Local Arrangements:
James Martin, University of Colorado at Boulder
Martha Palmer, University of Colorado at Boulder
Back to Top

9-6 . (2009-05-31) CfP Short Papers NAACL HLT 2009

Call for Short Papers for NAACL HLT 2009 
May 31 – June 5, 2009, Boulder, Colorado 
Deadline for short paper submission – Monday, February 9, 2009
Special sessions: Large Scale Language Processing, and Speech Indexing and Retrieval 
NAACL HLT 2009 combines the Annual Meeting of the North American Chapter of the
Association for Computational Linguistics (NAACL) with the Human Language Technology Conference
(HLT) of NAACL. The conference covers a broad spectrum of disciplines working 
towards enabling intelligent systems to interact with humans using natural language, and 
towards enhancing human-human communication through services such as speech 
recognition, automatic translation, information retrieval, text summarization, and 
information extraction. NAACL HLT 2009 will feature full papers, short papers, posters, 
demonstrations, and a doctoral consortium, as well as pre- and post-conference tutorials 
and workshops. 
The conference invites the submission of papers on substantial, original, and unpublished 
research in disciplines that could impact human language processing systems.  We 
encourage the submission of short papers that can be characterized as a small, focused 
contribution, a work in progress, a negative result, an opinion piece or an interesting 
application note. A separate review form for short papers will be introduced this year.
NAACL HLT 2009 aims to hold two special sessions, Large Scale Language Processing 
and Speech Indexing and Retrieval. 
Topics include, but are not limited to, the following areas, and are understood to be 
applied to speech and/or text: 
- Large scale language processing
- Speech indexing and retrieval
- Information retrieval (including monolingual and CLIR) 
- Information extraction 
- Speech-centered applications (e.g., human-computer, human-robot interaction, 
education and learning systems, assistive technologies, digital entertainment)
- Machine translation
- Summarization
- Question answering
- Topic classification and information filtering 
- Non-topical classification (e.g., sentiment/attribution/genre analysis) 
- Topic clustering 
- Text and speech mining
- Statistical and machine learning techniques for language processing
- Spoken term detection and spoken document indexing
- Language generation
- Speech synthesis
- Speech understanding
- Speech analysis and recognition
- Multilingual processing
- Phonology
- Morphology (including word segmentation)
- Part of speech tagging
- Syntax and parsing (e.g., grammar induction, formal grammar, algorithms)
- Word sense disambiguation
- Lexical semantics
- Formal semantics and logic
- Textual entailment and paraphrasing
- Discourse and pragmatics
- Dialog systems
- Knowledge acquisition and representation
- Evaluation (e.g., intrinsic, extrinsic, user studies)
- Development of language resources (e.g., lexicons, ontologies, annotated corpora) 
- Rich transcription (automatic annotation of information structure and sources in speech)
- Multimodal representations and processing, including speech and gesture
Submission information is available at: 
General Conference Chair: 
Mari Ostendorf, University of Washington 
Program Co-Chairs: 
Michael Collins, Massachusetts Institute of Technology 
Shri Narayanan, University of Southern California
Douglas W. Oard, University of Maryland 
Lucy Vanderwende, Microsoft Research 
Local Arrangements: 
James Martin, University of Colorado at Boulder  
Martha Palmer, University of Colorado at Boulder 
Back to Top

9-7 . (2009-05-31) NAACL HLT 09 Call for Demonstrations

NAACL HLT 09 Call for Demonstrations

The NAACL HLT 2009 Program Committee invites proposals for the Demonstration Program to be held June 1-3, 2009 at the University of Colorado at Boulder. We encourage both the exhibition of early research prototypes and interesting mature systems. Commercial sales and marketing activities are not appropriate in the Demonstration Program, and should be arranged as part of the Exhibit Program. We invite proposals for two types of demonstrations:

·        Type I: theater-style, as part of the regular program

·        Type II: poster-style, where demos are to be presented on table-tops in sessions scheduled for a specific time slot.


Submission of a demonstration proposal on a particular topic does not preclude or require a separate submission of a paper on that topic; it is possible that some but not all of the demonstrations will illustrate concepts that are described in companion papers.


Areas of Interest

Areas of interest include, but are not limited to, the following types of systems, some of which have been demonstrated at recent ACL conferences:

·        End-to-end natural language processing systems

·        User interfaces for monolingual and multilingual information access systems, including retrieval, summarization, and QA engines

·        Voice search interfaces

·        Dialogue and conversational systems

·        Multimodal systems utilizing language technology

·        Language technology on mobile devices

·        Applications using embedded language technology components

·        Meeting capture and analysis systems utilizing language technology

·        Natural language processing systems for medical informatics

·        Assistive applications of language technology

·        Visualization tools

·        Software for evaluating natural language systems and components

·        Aids for teaching computational linguistics concepts

·        Software tools for facilitating computational linguistics research

·        Reusable components (parsers, generators, speech recognizers, etc.)

·        Tools that assist in the development of other NLP applications (e.g., error analysis)


Format for Submission

Demo proposals consist of the following parts, which should all be sent to the Demonstration Co-Chairs. Please use the main ACL paper formatting guidelines. Please note that no hardware or software will be provided by the local organizer.

·        An extended abstract of the technical content to be demonstrated, including title, authors, full contact information, references, and acknowledgements. Please indicate a Type I or Type II demo.

·        A "script outline" of the demo presentation, including accompanying narrative, and either a Web address for accessing the demo or visual aids (e.g., screenshots, snapshots, or diagrams).

The entire proposal must not be more than four pages.

Submissions Procedure

Proposals must be submitted by February 9, 2009 to the Demonstration Co-Chairs. Submissions must be received electronically. Please submit your proposals and any inquiries to:

Michael Johnston, AT&T (johnston "at" research "dot" att "dot" com)

Fred Popowich, Simon Fraser University (popowich "at" sfu "dot" ca)

Submissions will be evaluated on the basis of their relevance to computational linguistics, innovation, scientific contribution, presentation, as well as potential logistical constraints.

Accepted submissions will be allocated four pages in the Companion Volume to the Proceedings of the Conference.


Further Details

Further details on the date, time, and format of the demonstration session(s) will be determined and provided at a later date. Please send any inquiries to the demonstration co-chairs at the email addresses listed above.


Important Dates


February 9, 2009   Submission deadline
March 27, 2009     Notification of acceptance
April 6, 2009      Submission of final demo related literature
June 1-3, 2009     Demonstration Program

All submissions or camera-ready copies are due by 11:59pm EST on the dates specified above.

Back to Top

9-8 . (2009-06-03) 7th International Workshop on Content-Based Multimedia Indexing

7th International Workshop on Content-Based Multimedia Indexing


Following the six successful previous events (Toulouse 1999, Brescia 2001, Rennes 2003, Riga 2005, Bordeaux 2007, London 2008), the 7th International Workshop on Content-Based Multimedia Indexing (CBMI 2009) will be held on June 3-5, 2009 in the picturesque city of Chania, on the island of Crete, Greece. It will be organized by the Image, Video and Multimedia Laboratory of the National Technical University of Athens. CBMI 2009 aims at bringing together the various communities involved in the different aspects of content-based multimedia indexing, such as image processing and information retrieval, with current industrial trends and developments. CBMI 2009 is supported by IEEE, EURASIP, and the University of Athens. The technical program of CBMI 2009 will include invited plenary talks, special sessions, as well as regular sessions with contributed research papers.

Topics of interest include, but are not limited to:

Multimedia indexing and retrieval (image, audio, video, text)
Matching and similarity search
Construction of high level indices
Multimedia content extraction
Identification and tracking of semantic regions in scenes
Multi-modal and cross-modal indexing
Content-based search
Multimedia data mining
Metadata generation, coding and transformation
Large scale multimedia database management
Summarisation, browsing and organization of multimedia content
Presentation and visualization tools
User interaction and relevance feedback
Personalization and content adaptation
Evaluation and metrics

Paper Submission

Prospective authors are invited to submit full papers at the conference web site: Style files (Latex and Word) will be provided for the convenience of the authors.

Important Dates

Submission of full papers:         January 8, 2009
Notification of acceptance:        February 23, 2009
Submission of camera-ready papers: March 13, 2009
Early registration due:            March 13, 2009
Main Workshop:                     June 3-5, 2009


Venue

CBMI 2009 will be hosted at KAM - Mediterranean Centre of Architecture in Chania, on the island of Crete, one of the most exciting Greek destinations. KAM was established by the Chania municipality in 1996 and has been housed since 2002 in the Great Arsenali at the old port of Chania.


Back to Top

9-9 . (2009-06-04) CfP NAACL Workshop on Computational Approaches to Linguistic Creativity

Second Call For Papers

NAACL Workshop on Computational Approaches to Linguistic Creativity (CALC 2009)
Boulder, Colorado, June 4, 2009

It is generally agreed that "linguistic creativity" is a unique property of human language. Some claim that linguistic creativity is expressed in our ability to combine known words in a new sentence, others refer to our skill in expressing thoughts in figurative language, and yet others talk about syntactic recursion and lexical creativity. For the purpose of this workshop, we treat the term "linguistic creativity" to mean "creative language usage at different levels", from the lexicon to syntax to discourse and text (see also Topics, below).

The recognition of instances of linguistic creativity and the computation of their meaning constitute one of the most challenging problems for a variety of Natural Language Processing tasks, such as machine translation, text summarization, information retrieval, question answering, and sentiment analysis. Computational systems incorporating models of linguistic creativity operate on different types of data (including written text, audio/speech/sound, and video/images/gestures). New approaches might combine information from different modalities. Creativity-aware systems will improve the contribution Computational Linguistics has to offer to many practical areas, including education, entertainment, and engineering.

The workshop is intended to be interdisciplinary. Besides contributions from an NLP perspective, we also welcome the participation of researchers who deal with linguistic creativity from other perspectives, including psychology, neuroscience, and human-computer interaction.
Topics
======

We are particularly interested in work on the automatic detection, classification, understanding, or generation of:

* neologisms;
* figurative language, including metaphor, metonymy, personification, idioms;
* new or unconventional syntactic constructions ("May I serve who's next?") and constructions defying traditional parsers (e.g. gapping: "Many words were spoken, and sentiments expressed");
* indirect speech acts (such as curses, insults, sarcasm and irony);
* verbally expressed humor;
* poetry and fiction;
* and other phenomena illustrating linguistic creativity.

Depending on the state of the art of approaches to the various phenomena and languages, preference will be given to work on deeper processing (e.g., understanding, goal-driven generation) rather than shallow approaches (e.g., binary classification, random generation). We also welcome descriptions and discussions of:

* computational tools that support people in using language creatively (e.g. tools for computer-assisted creative writing, intelligent thesauri);
* computational and/or cognitive models of linguistic creativity;
* metrics and tools for evaluating the performance of creativity-aware systems;
* specific application scenarios of computational linguistic creativity;
* design and implementation of creativity-aware systems.

Related topics, including corpora collection, elicitation, and annotation of creative language usage, will also be considered, as long as their relevance to automatic systems is clearly pointed out.

Invited Speaker
===============

Nick Montfort, MIT

Submissions
===========

Submissions should describe original, unpublished work. Papers are limited to 8 pages. The style files can be found here: []. No author information should be included in the papers, since reviewing will be blind. Papers not conforming to these requirements are subject to rejection without review. Papers should be submitted via START [] in PDF format.
We encourage submissions from everyone. For those who are new to ACL conferences and workshops, or with special needs, we are planning to set up a lunch mentoring program. Let us know if you are interested. Also, a limited number of student travel grants might become available, intended for individuals with minority background and current residents of countries where conference travel funding is usually hard to find.

Important Dates
===============

Submission deadline: Feb 27, 2009
Notification due:    Mar 30, 2009
Final version due:   Apr 12, 2009
Workshop:            Jun 04, 2009

Organizers
==========

* Anna Feldman, Montclair State University
* Birte Loenneker-Rodman, University of Hamburg, Germany

Program Committee
=================

* Shlomo Argamon, Illinois Institute of Technology
* Roberto Basili, University of Roma, Italy
* Amilcar Cardoso, University of Coimbra, Portugal
* Afsaneh Fazly, University of Toronto, Canada
* Eileen Fitzpatrick, Montclair State University
* Pablo Gervas, Universidad Complutense de Madrid, Spain
* Sam Glucksberg, Princeton University
* Jerry Hobbs, ISI, Marina del Rey
* Sid Horton, Northwestern University
* Diana Inkpen, University of Ottawa, Canada
* Mark Lee, Birmingham, UK
* Hugo Liu, MIT
* Xiaofei Lu, Penn State
* Ruli Manurung, University of Indonesia
* Katja Markert, University of Leeds, UK
* Rada Mihalcea, University of North Texas
* Anton Nijholt, University of Twente, The Netherlands
* Andrew Ortony, Northwestern University
* Vasile Rus, The University of Memphis
* Richard Sproat, Oregon Health and Science University
* Gerard Steen, Vrije Universiteit, Amsterdam, The Netherlands
* Carlo Strapparava, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy
* Juergen Trouvain, Saarland University, Germany
Back to Top

9-10 . (2009-06-05) Nasal 2009: Nasality in phonetics and phonology


The Praxiling laboratory (UMR 5267 CNRS, Université Paul Valéry, Montpellier 3)
and the Laboratoire des Sciences de la Parole of the Académie Universitaire
Wallonie-Bruxelles (Université de Mons-Hainaut) are organizing an international
conference on nasality in phonetics and phonology.

The conference will take place on Friday, June 5, 2009, from 9:00 to 18:30 in
the Grand Amphithéâtre of the Délégation régionale du CNRS, 1919, route de
Mende, F-34293 Montpellier cedex 5.

The aim of this international conference is to allow researchers from all over
the world to meet and exchange ideas about their work on nasality. Any proposal
concerning nasality is welcome, in particular work on: production (articulatory
measurements, aerodynamic studies, acoustic analyses), perception, phonological
aspects, phonetic universals, modeling, under-described languages, pathological
and clinical aspects, language acquisition, and second-language learning.
Particular attention will be given to papers addressing questions that cut
across the disciplinary fields listed above: multi-instrumentation, links
between production and perception, cross-language comparisons, relations
between the organization of phonological systems and phonetic constraints,
commonalities and differences between L1 acquisition and L2 learning, etc.

Invited speakers
Patrice S. Beddor, University of Michigan, USA
Didier Demolin, Université Libre de Bruxelles, Belgium
John Hajek, University of Melbourne, Australia
Ian Maddieson, University of New Mexico, Albuquerque, USA
Alain Marchal, Université d'Aix-en-Provence, France
Jacqueline Vaissière, Université de Paris III, France

Scientific committee
Pierre Badin, Gipsa-Lab, France
Nick Clements, Université de Paris III, France
Bernard Harmegnies, Université de Mons-Hainaut, Belgium
Sarah Hawkins, University of Cambridge, UK
Marie Huffman, State University of New York at Stony Brook, USA
John Kingston, University of Massachusetts at Amherst, USA
Christine Matyear, University of Texas at Austin, USA
John Ohala, University of California at Berkeley, USA
Daniel Recasens, Universitat Autonoma de Barcelona, Spain
Ryan Shosted, University of Illinois at Urbana-Champaign, USA
Maria Josep Solé, Universitat Autonoma de Barcelona, Spain
Nathalie Vallée, Gipsa-Lab, France
Doug Whalen, Haskins Laboratories, USA

Submission deadline
February 15, 2009

Submission procedure
Send a message with the full contact details of the first author and of any
other authors, with an anonymous paper of at most four A4 pages attached.
A Word template can be downloaded from our website.

Note also that all speakers will be invited to submit a long version of their
paper (50,000 characters) for publication in a book with an international
publisher. Deadline for submission of these long papers: around
September 14, 2009.

For more information, please visit our website.

For the organizing committee,
V. Delvaux
FNRS Research Associate
Laboratoire de Phonétique
Service de Métrologie et Sciences du Langage
Université de Mons-Hainaut
18, Place du Parc
7000 Mons

Back to Top

9-11 . (2009-06-08) CfP 16th International ECSE Summer School in Novel Computing (Joensuu, FINLAND)

Call for participation:
16th International ECSE Summer School in
Novel Computing (Joensuu, FINLAND)

University of Joensuu, Finland, announces the 16th
International ECSE Summer School in Novel Computing:

The summer school includes three independent courses, one in June and two
in August:

June 8-10
    Jean-Luc LeBrun: Scientific Writing Skills
    "Publish or perish, reviewers decide.
    Be cited or not, readers decide"

    Registration deadline: May 20, 2009

August 10-14 -- two parallel courses:
    Douglas A. Reynolds (MIT Lincoln Lab)
    "Speaker and Language Recognition"

    Paul De Bra (Eindhoven Univ Technology)
    "Platforms for Stories-Based Learning
    in Future Schools"

    Early registration deadline: June 15, 2009

In addition to high-quality lectures, the summer school offers an
inspiring learning environment and a relaxed social program, including the
Finnish sauna, in the middle of the North Karelia region. Joensuu is located
close to the Russian border, about 400 km north-east of the capital of
the country. It is a lively student city with over 6000 students at the
University of Joensuu and 3500 at North Karelia Polytechnic. The European
Forest Institute, the University, and many other institutes and export
enterprises such as Abloy, LiteonMobile and John Deere give Joensuu an
international flavour.

The summer school is organized by the Department of Computer Science and
Statistics, University of Joensuu, Finland. The
research areas of the department include speech and image processing,
educational technology, color research, and psychology of programming.

More information:


Welcome to Joensuu! 

Back to Top

9-12 . (2009-06-15) TrebleCLEF Summer School Pisa Italy

TrebleCLEF Summer School on Multilingual Information Access
Santa Croce in Fossabanda, Pisa, Italy, 15-19 June 2009

Objectives

The aim of the Summer School is to give participants a grounding in the core topics that constitute the multidisciplinary area of Multilingual Information Access (MLIA). The School is intended for advanced undergraduate and post-graduate students, post-doctoral researchers, plus academic and industrial researchers and system developers with backgrounds in Computer Science, Information Science, Language Technologies and related areas. The focus of the school will be on "How to build effective multilingual information retrieval systems and how to evaluate them".

Programme

The programme of the school will cover the following areas:

- Multilingual Text Processing
- Cross-Language Information Retrieval
- Content and Text-based Image Retrieval, including multilingual approaches
- Cross-language Speech and Video Retrieval
- System Architectures and Multilinguality
- Information Extraction in a Multilingual Context
- Machine Translation for Multilingual Information Processing
- Interactive Aspects of Cross-Language Information Retrieval
- Evaluation for Multilingual Systems and Components

An optional student mentoring session, where students can present and discuss their research ideas with lecturers, will also be organised.

Location and Dates

The Summer School will be held 15-19 June 2009 in the beautiful ex-convent Santa Croce in Fossabanda, Pisa. Santa Croce provides the perfect setting for study and discussions in a peaceful, relaxed atmosphere and is just a short walk from the town centre and the famous Piazza dei Miracoli with its Leaning Tower.

Accommodation and Registration

A maximum of 40 registrations will be accepted. Tuition fees are set at 200 Euros up to 30 April and 350 Euros after this date. Tuition fees cover all courses and lectures, course material, lunch and coffee breaks during the School, the Welcome Reception on the evening of Sunday 14 June, and the Social Dinner on Monday 15 June. Accommodation will be on the School site at Santa Croce in Fossabanda.

Financial Support for Students

A number of grants will be made available by TrebleCLEF and by the DELOS Association covering accommodation costs. Students wishing to receive a grant must submit a brief application (maximum 1 page) explaining why attendance at the school would be important for them. The application must be supported by a letter of reference from the student's advisor / supervisor or equivalent.

More Information

Further details, including the programme of lectures and information on how to register, can be found on the school website, or contact Carol Peters.
Back to Top

9-13 . (2009-06-21) CfP Specom 2009- St Petersburg Russia


    13th International Conference "Speech and Computer"
                             21-25 June 2009
     Grand Duke Vladimir's palace, St. Petersburg, Russia

(!) Due to numerous requests, the submission deadline has been postponed to Monday, February 9, 2009 (!)

Organized by St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)

Dear Colleagues, we are pleased to invite you to the 13th International Conference on Speech and Computer, SPECOM'2009, which will be held on June
21-25, 2009 in St. Petersburg. The global aim of the conference is to discuss state-of-the-art problems and recent achievements in signal processing and
human-computer interaction related to speech technologies. The main topics of SPECOM'2009 are:
- Signal processing and feature extraction
- Multimodal analysis and synthesis
- Speech recognition and understanding
- Natural language processing
- Spoken dialogue systems
- Speaker and language identification
- Text-to-speech systems
- Speech perception and speech disorders
- Speech and language resources
- Applications for human-computer interaction

The official language of the event is English. Full papers of up to 6 pages will be published in printed and electronic proceedings with an ISBN.

Important Dates:
- Submission of full papers: February 1, 2009 (extended)
- Notification of acceptance: March 1, 2009
- Submission of final papers: March 20, 2009
- Early registration: March 20, 2009
- Conference dates: June 21-25, 2009

Scientific Committee:
Andrey Ronzhin, Russia (conference chairman)
Niels Ole Bernsen, Denmark
Denis Burnham, Australia
Jean Caelen, France
Christoph Draxler, Germany
Thierry Dutoit, Belgium
Hiroya Fujisaki, Japan
Sadaoki Furui, Japan
Jean-Paul Haton, France
Ruediger Hoffmann, Germany
Dimitri Kanevsky, USA
George Kokkinakis, Greece
Steven Krauwer, Netherlands
Lin-shan Lee, Taiwan
Boris Lobanov, Belarus
Benoit Macq, Belgium
Jury Marchuk, Russia
Roger Moore, UK
Heinrich Niemann, Germany
Rajmund Piotrowski, Russia
Louis Pols, Netherlands
Rodmonga Potapova, Russia
Josef Psutka, Czech Republic
Lawrence Rabiner, USA
Gerhard Rigoll, Germany
John Rubin, UK
Murat Saraclar, Turkey
Jesus Savage, Mexico
Pavel Skrelin, Russia
Viktor Sorokin, Russia
Yannis Stylianou, Greece
Jean E. Viallet, France
Taras Vintsiuk, Ukraine
Christian Wellekens, France

The invited speakers of SPECOM'2009 are:
- Prof. Walter Kellermann (University of Erlangen-Nuremberg, Germany), lecture "Towards Natural Acoustic Interfaces for Automatic Speech Recognition"
- Prof. Mikko Kurimo (Helsinki University of Technology, Finland), lecture "Unsupervised decomposition of words for speech recognition and retrieval"

The conference venue is the House of Scientists (the former Grand Duke Vladimir's palace), located in the very heart of the city, near the Winter Palace
(Hermitage), the former residence of the Russian emperors, and the Peter and Paul Fortress. Beyond the scientific programme, participants will have ample
opportunity to become acquainted with the cultural and historical treasures of Saint Petersburg: the conference will be hosted
during the unique and wonderful period known as the White Nights.

Contact Information:
SPECOM'2009 Organizing Committee,
SPIIRAS, 39, 14-th line, St.Petersburg, 199178, RUSSIA




Back to Top

9-14 . (2009-06-22) Summer workshop at Johns Hopkins University

                                            The Center for Language and Speech Processing


at Johns Hopkins University invites one page research proposals for a

NSF-sponsored, Six-week Summer Research Workshop on

Machine Learning for Language Engineering

to be held in Baltimore, MD, USA,

June 22 to July 31, 2009.


Deadline: Wednesday, October 15, 2008.

One-page proposals are invited for the 15th annual NSF sponsored JHU summer workshop.  Proposals should be suitable for a six-week team exploration, and should aim to advance the state of the art in any of the various fields of Human Language Technology (HLT) including speech recognition, machine translation, information retrieval, text summarization and question answering.  This year, proposals in related areas of Machine Intelligence, such as Computer Vision (CV), that share techniques with HLT are also being solicited.  Research topics selected for investigation by teams in previous workshops may serve as good examples for your proposal. (See

Proposals on all topics of scientific interest to HLT and technically related areas are encouraged.  Proposals that address one of the following long-term challenges are particularly encouraged.

* ROBUST TECHNOLOGY FOR SPEECH: Technologies like speech transcription, speaker identification, and language identification share a common weakness: accuracy degrades disproportionately with seemingly small changes in input conditions (microphone, genre, speaker, dialect, etc.), whereas humans are able to adapt quickly and effectively. The aim is to develop technology whose performance would be minimally degraded by input signal variations.

* KNOWLEDGE DISCOVERY FROM LARGE UNSTRUCTURED TEXT COLLECTIONS: Scaling natural language processing (NLP) technologies, including parsing, information extraction, question answering, and machine translation, to very large collections of unstructured or informal text, as well as domain adaptation in NLP, is of interest.

* VISUAL SCENE INTERPRETATION: New strategies are needed to parse visual scenes or generic (novel) objects, analyzing an image as a set of spatially related components.  Such strategies may integrate global top-down knowledge of scene structure (e.g., generative models) with the kind of rich bottom-up, learned image features that have recently become popular for object detection.  They will support both learning and efficient search for the best analysis.

* UNSUPERVISED AND SEMI-SUPERVISED LEARNING: Novel techniques that do not require extensive quantities of human-annotated data to address any of the challenges above could potentially make large strides in machine performance as well as lead to greater robustness to changes in input conditions.  Semi-supervised and unsupervised learning techniques with applications to HLT and CV are therefore of considerable interest.

An independent panel of experts will screen all received proposals for suitability. Results of this screening will be communicated no later than October 22, 2008. Authors passing this initial screening will be invited to Baltimore to present their ideas to a peer-review panel on November 7-9, 2008.  It is expected that the proposals will be revised at this meeting to address any outstanding concerns or new ideas. Two or three research topics and the teams to tackle them will be selected for the 2009 workshop.

We attempt to bring the best researchers to the workshop to collaboratively pursue the selected topics for six weeks.  Authors of successful proposals typically become the team leaders.  Each topic brings together a diverse team of researchers and students.  The senior participants come from academia, industry and government.  Graduate student participants familiar with the field are selected in accordance with their demonstrated performance, usually by the senior researchers. Undergraduate participants, selected through a national search, will be rising seniors who are new to the field and have shown outstanding academic promise.

If you are interested in participating in the 2009 Summer Workshop we ask that you submit a one-page research proposal for consideration, detailing the problem to be addressed.  If your proposal passes the initial screening, we will invite you to join us for the organizational meeting in Baltimore (as our guest) for further discussions aimed at consensus.  If a topic in your area of interest is chosen as one of the two or three to be pursued next summer, we expect you to be available for participation in the six-week workshop. We are not asking for an ironclad commitment at this juncture, just a good faith understanding that if a project in your area of interest is chosen, you will actively pursue it.

Proposals should be submitted via e-mail to by 4PM EST on Wed, October 15, 2008.

Back to Top

9-15 . (2009-06-22) Third International Conference on Intelligent Technologies for Interactive Entertainment (Intetain 2009)

Intetain 2009, Amsterdam, 22-24th June 2009

Third International Conference on Intelligent Technologies for Interactive Entertainment



Call for Papers



==== OVERVIEW ====


The Human Media Interaction (HMI) department of the University of Twente in the Netherlands and the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (ICST) are pleased to announce the Third International Conference on Intelligent Technologies for Interactive Entertainment to be held on June 22-24, 2009 in Amsterdam, the Netherlands.


INTETAIN 09 intends to stimulate interaction among academic researchers and commercial developers of interactive entertainment systems. We are seeking long (full) and short (poster) papers as well as proposals for interactive demos. In addition, the conference organisation aims to hold an interactive hands-on session along the lines of the Design Garage held at INTETAIN 2005. Individuals who want to organise special sessions during INTETAIN 09 may contact the General Chair, Anton Nijholt  ( 


The global theme of this third edition of the international conference is “Playful interaction, with others and with the environment”.


Contributions may, for example, address this theme by focusing on the Supporting Device Technologies underlying interactive systems (mobile devices, home entertainment centers, haptic devices, wall screen displays, information kiosks, holographic displays, fog screens, distributed smart sensors, immersive screens and wearable devices), on the Intelligent Computational Technologies used to build the interactive systems, or by discussing the Interactive Applications for Entertainment themselves.


We seek novel, revolutionary, and exciting work in areas including but not limited to:


== Supporting Technology ==

 * New hardware technology for interaction and entertainment

 * Novel sensors and displays

 * Haptic devices

 * Wearable devices


== Intelligent Computational Technologies ==

 * Animation and Virtual Characters

 * Holographic Interfaces

 * Adaptive Multimodal Presentations

 * Creative language environments

 * Affective User Interfaces

 * Intelligent Speech Interfaces

 * Tele-presence in Entertainment

 * (Collaborative) User Models and Group Behavior

 * Collaborative and virtual Environments

 * Brain Computer Interaction

 * Cross Domain User Models

 * Augmented, Virtual and Mixed Reality

 * Computer Graphics & Multimedia

 * Pervasive Multimedia

 * Robots

 * Computational humor


== Interactive Applications for Entertainment ==

 * Intelligent Interactive Games

 * Emergent games

 * Human Music Interaction

 * Interactive Cinema

 * Edutainment

 * Urban Gaming

 * Interactive Art

 * Interactive Museum Guides

 * Evaluation

 * City and Tourism Explorers Assistants

 * Shopping Assistants

 * Interactive Real TV

 * Interactive Social Networks

 * Interactive Story Telling

 * Personal Diaries, Websites and Blogs

 * Comprehensive assisting environments for special populations

     (handicapped, children, elderly)

 * Exertion games





INTETAIN 09 accepts long papers and short poster papers as well as demo proposals accompanied by a two page extended abstract. Accepted long and short papers will be published in the new Springer series LNICST: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering. The organisation of INTETAIN 09 is currently working to secure a special edition of a journal, as happened previously for the 2005 edition of the Intetain conference.


Submissions should adhere to the LNICST instructions for authors, available from the INTETAIN 09 web site.


== Long papers ==

Submissions of a maximum of 12 pages that describe original research work not submitted or published elsewhere. Long papers will be orally presented at the conference.


== Short papers ==

Submissions of a maximum of 6 pages that describe original research work not submitted or published elsewhere. Short papers will be presented with a poster during the demo and poster session at the conference.


== Demos ==

Researchers are invited to submit proposals for demonstrations to be held during a special demo and poster session at INTETAIN 09. For more information, see the Call for Demos below. Demo proposals may be accompanied either by a long or short paper submission, or by a two-page extended abstract describing the demo. The extended abstracts will be published in supplementary proceedings distributed during the conference.





Submission deadline:

Monday, February 16, 2009



Monday, March 16, 2009


Camera ready submission deadline:

Monday, March 30, 2009


Late demo submission deadline (extended abstract only!):

Monday, March 30, 2009



June 22-24, 2009, Amsterdam, the Netherlands



==== COMMITTEE ====


General Program Chair:

Anton Nijholt, Human Media Interaction, University of Twente, the Netherlands


Local Chair:

Dennis Reidsma, Human Media Interaction, University of Twente, the Netherlands


Web Master and Publication Chair:

Hendri Hondorp, Human Media Interaction, University of Twente, the Netherlands


Steering Committee Chair:

Imrich Chlamtac, Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering



==== CALL FOR DEMOS ====


We actively seek proposals from both industry and academia for interactive demos to be held during a dedicated session at the conference. Demos may accompany a long or short paper. Alternatively, demos may be submitted at a later deadline, with a short, two-page extended abstract explaining the demo and showing why it would be a worthwhile contribution to INTETAIN 09's demo session.


== Format ==

Demo submissions should be accompanied by the following additional information:

 * A short description of the setup and demo (2 paragraphs)

 * Requirements (hardware, power, network, space, sound conditions, etc., time needed for setup)

 * A sketch or photo of the setup


Videos showing the demonstration setup in action are very welcome.


== Review ==

Demo proposals will be reviewed by a review team that will take into account aspects such as novelty, relevance to the conference, coverage of topics and available resources.


== Topics ==

Topics for demo submissions include, but are not limited to:

 * New technology for interaction and entertainment

 * (serious) gaming

 * New entertainment applications

 * BCI

 * Human Music Interaction

 * Music technology

 * Edutainment

 * Exertion interfaces





Stefan Agamanolis Distance Lab, Forres, UK
Elisabeth Andre Augsburg University, Germany
Lora Aroyo Vrije Universiteit Amsterdam, the Netherlands
Regina Bernhaupt University of Salzburg, Austria
Kim Binsted University of Hawaii, USA
Andreas Butz University of Munich, Germany
Yang Cai Visual Intelligence Studio, CYLAB, Carnegie Mellon, USA
Antonio Camurri University of Genoa, Italy
Marc Cavazza University of Teesside, UK
Keith Cheverst University of Lancaster, UK
Drew Davidson CMU, Pittsburgh, USA
Barry Eggen University of Eindhoven, the Netherlands
Arjan Egges University of Utrecht, the Netherlands
Anton Eliens Vrije Universiteit Amsterdam, the Netherlands
Steven Feiner Columbia University, New York
Alois Ferscha University of Linz, Austria
Matthew Flagg Georgia Tech, USA
Jaap van den Herik University of Tilburg, the Netherlands
Dirk Heylen University of Twente, the Netherlands
Frank Kresin Waag Society, Amsterdam, the Netherlands
Antonio Krueger University of Muenster, Germany
Tsvi Kuflik University of Haifa, Israel
Markus Löckelt DFKI Saarbrücken, Germany
Henry Lowood Stanford University, USA
Mark Maybury MITRE, Boston, USA
Oscar Mayora Create-Net Research Consortium, Italy
John-Jules Meijer University of Utrecht, the Netherlands
Louis-Philippe Morency Institute for Creative Technologies, USC, USA
Florian 'Floyd' Mueller University of Melbourne, Australia
Patrick Olivier University of Newcastle, UK
Paolo Petta Medical University of Vienna, Austria
Fabio Pianesi ITC-irst, Trento, Italy
Helmut Prendinger National Institute of Informatics, Tokyo, Japan
Matthias Rauterberg University of Eindhoven, the Netherlands
Isaac Rudomin Monterrey Institute of Technology, Mexico
Pieter Spronck University of Tilburg, the Netherlands
Oliviero Stock ITC-irst, Trento, Italy
Carlo Strapparava ITC-irst, Trento, Italy
Mariet Theune University of Twente, the Netherlands
Thanos Vasilikos University of Western Macedonia, Greece
Sean White Columbia University, USA
Woontack Woo Gwangju Institute of Science and Technology, Korea
Wijnand IJsselstein University of Eindhoven, the Netherlands
Massimo Zancanaro ITC-irst, Trento, Italy

Back to Top

9-16 . (2009-06-24) DiaHolmia 2009, 13th Workshop on the Semantics and Pragmatics of Dialogue (SemDial), Stockholm, Sweden



**** NOTE: Deadline for 2-page submissions           ****
**** (posters and demos) has been extended to May 7. **** 

KTH, Stockholm, Sweden, 24-26 June, 2009

The SemDial series of workshops aims to bring together researchers working on the semantics and pragmatics of dialogue in fields such as artificial intelligence, computational linguistics, formal semantics/pragmatics, philosophy, psychology, and neuroscience. DiaHolmia will be the 13th workshop in the SemDial series, and will be organized at the Department of Speech, Music and Hearing, KTH (Royal Institute of Technology). KTH is Scandinavia's largest institution of higher education in technology and is located in central Stockholm (Holmia in Latin).



Full 8-page papers:
Submission due: 22 March 2009
Notification of acceptance: 25 April 2009
Final version due: 7 May 2009

2-page poster or demo descriptions:
Submission due: 25 April 2009
Notification of acceptance: 7 May 2009

DiaHolmia 2009: 24-26 June 2009 (Wednesday-Friday)


We invite papers on all topics related to the semantics and pragmatics of dialogues, including, but not limited to:

- common ground/mutual belief
- turn-taking and interaction control
- dialogue and discourse structure
- goals, intentions and commitments
- natural language understanding/semantic interpretation
- reference, anaphora and ellipsis
- collaborative and situated dialogue
- multimodal dialogue
- extra- and paralinguistic phenomena
- categorization of dialogue phenomena in corpora
- designing and evaluating dialogue systems
- incremental, context-dependent processing
- reasoning in dialogue systems
- dialogue management

Full papers will be in the usual 8-page, 2-column format. There will also be poster and demo presentations. The selection of posters and demos will be based on 2-page descriptions. Selected descriptions will be included in the proceedings.

Details on programme and local arrangements will be announced at a later date.

The best accepted papers will be invited to submit extended versions to Dialogue & Discourse, the new open-access journal dedicated exclusively to research on language 'beyond the single sentence' (


Harry Bunt (Tilburg University, Netherlands)
Nick Campbell (ATR, Japan)
Julia Hirschberg (Columbia University, New York)
Sverre Sjölander (Linköping University, Sweden)


Jan Alexandersson, Srinivas Bangalore, Ellen Gurman Bard, Anton Benz, Johan Bos, Johan Boye, Harry Bunt, Donna Byron, Jean Carletta, Rolf Carlson, Robin Cooper, Paul Dekker, Giuseppe Di Fabbrizio, Raquel Fernández, Claire Gardent, Simon Garrod, Jonathan Ginzburg, Pat Healey, Peter Heeman, Mattias Heldner, Joris Hulstijn, Michael Johnston, Kristiina Jokinen, Arne Jönsson, Alistair Knott, Ivana Kruijff-Korbayova, Staffan Larsson, Oliver Lemon, Ian Lewin, Diane Litman, Susann Luperfoy, Colin Matheson, Nicolas Maudet, Michael McTear, Wolfgang Minker, Philippe Muller, Fabio Pianesi, Martin Pickering, Manfred Pinkal, Paul Piwek, Massimo Poesio, Alexandros Potamianos, Matthew Purver, Manny Rayner, Hannes Rieser, Laurent Romary, Alex Rudnicky, David Schlangen, Stephanie Seneff, Ronnie Smith, Mark Steedman, Amanda Stent, Matthew Stone, David Traum, Marilyn Walker and Mats Wirén


Jens Edlund
Joakim Gustafson
Anna Hjalmarsson
Gabriel Skantze 



Back to Top

9-17 . (2009-07) 6th IJCAI workshop on knowledge and reasoning in practical dialogue systems

6th WORKSHOP ON KNOWLEDGE AND REASONING IN PRACTICAL DIALOGUE SYSTEMS

The sixth IJCAI workshop on "Knowledge and Reasoning in Practical Dialogue Systems" will focus on challenges of novel applications of practical dialogue systems. The venue for IJCAI 2009 is the Pasadena Conference Center, California, USA.

Topics addressed in the workshop include, but are not limited to, the following, with particular focus on the challenges offered by these novel applications:

* What kinds of novel applications have a need for natural language dialogue interaction?
* How can authoring tools for dialogue systems be developed such that application designers who are not experts in natural language can make use of these systems?
* How can one easily adapt a dialogue system to a new application?
* Methods for design and development of dialogue systems.
* What are the extra constraints and resources of a dialogue system for these novel applications that might not be present in a speech- or text-only dialogue system or even in traditional multi-modal interfaces?
* Representation of language resources for dialogue systems.
* The role of ontologies in dialogue systems.
* Evaluation of dialogue systems: what to evaluate and how.
* Techniques and algorithms for adaptivity in dialogue systems on various levels, e.g. interpretation, dialogue strategy, and generation.
* Robustness and how to handle unpredictability.
* Architectures and frameworks for adaptive dialogue systems.
* Requirements and methods for development related to the architecture.

This is the sixth IJCAI workshop on "Knowledge and Reasoning in Practical Dialogue Systems". The first workshop was held at IJCAI in Stockholm in 1999. The second was held at IJCAI 2001 in Seattle, with a focus on multimodal interfaces. The third was held in Acapulco in 2003 and focused on the role and use of ontologies in multi-modal dialogue systems. The fourth was held in Edinburgh in 2005 and focused on adaptivity in dialogue systems. The fifth was held in Hyderabad, India, in 2007 and focused on dialogue systems for robots and virtual humans.

Who should attend

This workshop aims at bringing together researchers and practitioners who work on the development of communication models that support robust and efficient interaction in natural language, both for commercial dialogue systems and in basic research.

It should also be of interest to anyone studying dialogue and multimodal interfaces and how to coordinate different information sources. This involves theoretical as well as practical research, e.g. empirical evaluations of usability, formalization of dialogue phenomena, and development of intelligent interfaces for various applications, including such areas as robotics.

Workshop format

The workshop will be kept small, with a maximum of 40 participants. Preference will be given to active participants selected on the basis of their submitted papers.

Each paper will be given ample time for discussion, more than what is customary at a conference. We encourage contributions of a critical or comparative nature that provide fuel for discussion. We also invite people to share their experiences of implementing and coordinating knowledge modules in their dialogue systems, and of integrating dialogue components with other applications.

Important Dates

* Submission deadline: March 6, 2009
* Notification date: April 17, 2009
* Accepted paper submission deadline: May 8, 2009
* Workshop: July 2009

Submissions

Papers may be any of the following types:

* Regular papers of 4-8 pages, for regular presentation.
* Short papers with brief results, or position papers, of up to 4 pages, for brief or panel presentation.
* Extended papers with extra details on system architecture, background theory or data presentation, of up to 12 pages, for regular presentation.

Papers should include authors' names, affiliations and full references (not anonymous submission). All papers should be formatted according to the AAAI formats: AAAI Press Author Instructions.

Submission procedure

Papers should be submitted via the web by registering at the following address:

Organizing Committee

Arne Jönsson (Chair)
Department of Computer and Information Science
Linköping University
S-581 83 Linköping, Sweden
tel: +46 13 281717, fax: +46 13 142231

David Traum (Co-Chair)
Institute for Creative Technologies
University of Southern California
13274 Fiji Way, Marina del Rey, CA 90405, USA
tel: +1 (310) 574-5729, fax: +1 (310) 574-5725

Jan Alexandersson (Co-Chair)
German Research Center for Artificial Intelligence, DFKI GmbH
Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany
tel: +49-681-3025347, fax: +49-681-3025341

Ingrid Zukerman (Co-Chair)
Faculty of Information Technology
Monash University
Clayton, Victoria 3800, Australia
tel: +61 3 9905-5202, fax: +61 3 9905-5146

Programme committee

Dan Bohus, USA; Johan Bos, Italy; Sandra Carberry, USA; Kallirroi Georgila, USA; Genevieve Gorrell, UK; Joakim Gustafson, Sweden; Yasuhiro Katagiri, Japan; Ali Knott, New Zealand; Kazunori Komatani, Japan; Staffan Larsson, Sweden; Anton Nijholt, Netherlands; Tim Paek, USA; Antoine Raux, USA; Candace Sidner, USA; Amanda Stent, USA; Marilyn Walker, UK; Jason Williams, USA
Back to Top

9-18 . (2009-07-09) MULTIMOD 2009 Multimodality of communication in children: gestures, emotions, language and cognition

The Multimod 2009 conference - Multimodality of communication in children:
gestures, emotions, language and cognition is being organized jointly by
psychologists and linguists from the Universities of Toulouse (Toulouse II)
and Grenoble (Grenoble III) and will take place in Toulouse (France) from
Thursday 9th July to Saturday 11th July 2009.

The aim of the conference will be to assess research on theories, concepts
and methods relating to multimodality in children.

The invited speakers are :
- Susan Goldin-Meadow (University of Chicago, USA),
- Jana Iverson (University of Pittsburgh, USA),
- Paul Harris (Harvard University, USA),
- Judy Reilly (San Diego State University, USA),
- Gwyneth Doherty-Sneddon (University of Stirling, UK),
- Marianne Gullberg (MPI Nijmegen, The Netherlands).

We invite you to submit proposals for symposia, individual papers or posters
of original, previously unpublished research on all aspects of multimodal
communication in children, including:

- Gestures and language development, both typical and atypical
- Emotional development, both typical and atypical
- Multimodality of communication and bilingualism
- Gestural and/or emotional communication in non-human and human primates
- Multimodality of communication and didactics
- Multimodality of communication in the classroom
- Multimodality of communication and brain development
- Prosodic (emotional) aspects of language and communication development
- Pragmatic aspects of multimodality development

Please visit the conference website to find all useful information
about submissions (individual papers, posters and symposia); the deadline
for submissions is December 15th, 2008. 

Back to Top

9-19 . (2009-08-02) ACL-IJCNLP 2009 1st Call for Papers

ACL-IJCNLP 2009 1st Call for Papers

Joint Conference of
the 47th Annual Meeting of the Association for Computational Linguistics
the 4th International Joint Conference on Natural Language Processing of
the Asian Federation of Natural Language Processing

August 2 - 7, 2009

Full Paper Submission Deadline:  February 22, 2009 (Sunday)
Short Paper Submission Deadline:  April 26, 2009 (Sunday)

For the first time, the flagship conferences of the Association for
Computational Linguistics (ACL) and the Asian Federation of Natural
Language Processing (AFNLP) -- the ACL and IJCNLP -- are jointly
organized as a single event. The conference will cover a broad
spectrum of technical areas related to natural language and
computation. ACL-IJCNLP 2009 will include full papers, short papers,
oral presentations, poster presentations, demonstrations, tutorials,
and workshops. The conference invites the submission of papers on
original and unpublished research on all aspects of computational linguistics.

Important Dates:

* Feb 22, 2009    Full paper submissions due;
* Apr 12, 2009    Full paper notification of acceptance;
* Apr 26, 2009    Short paper submissions due;
* May 17, 2009    Camera-ready full papers due;
* May 31, 2009    Short Paper notification of acceptance;
* Jun 7, 2009       Camera-ready short papers due;
* Aug 2-7, 2009   ACL-IJCNLP 2009

Topics of interest:

Topics include, but are not limited to:

* Phonology/morphology, tagging and chunking, and word segmentation
* Grammar induction and development
* Parsing algorithms and implementations
* Mathematical linguistics and grammatical formalisms
* Lexical and ontological semantics
* Formal semantics and logic
* Word sense disambiguation
* Semantic role labeling
* Textual entailment and paraphrasing
* Discourse, dialogue, and pragmatics
* Language generation
* Summarization
* Machine translation
* Information retrieval
* Information extraction
* Sentiment analysis and opinion mining
* Question answering
* Text mining and natural language processing applications
* NLP in vertical domains, such as biomedical, chemical and legal text
* NLP on noisy unstructured text, such as email, blogs, and SMS
* Spoken language processing
* Speech recognition and synthesis
* Spoken language understanding and generation
* Language modeling for spoken language
* Multimodal representations and processing
* Rich transcription and spoken information retrieval
* Speech translation
* Statistical and machine learning methods
* Language modeling for text processing
* Lexicon and ontology development
* Treebank and corpus development
* Evaluation methods and user studies
* Science of annotation


Full Papers: Submissions must describe substantial, original,
completed and unpublished work. Wherever appropriate, concrete
evaluation and analysis should be included. Submissions will be judged
on correctness, originality, technical strength, significance,
relevance to the conference, and interest to the attendees. Each
submission will be reviewed by at least three program committee members.

Full papers may consist of up to eight (8) pages of content, plus one
extra page for references, and will be presented orally or as a poster
presentation as determined by the program committee.  The decisions as
to which papers will be presented orally and which as poster
presentations will be based on the nature rather than on the quality
of the work. There will be no distinction in the proceedings between
full papers presented orally and those presented as poster presentations.

The deadline for full papers is February 22, 2009 (GMT+8). Submission
is electronic using paper submission software at:

Short papers: ACL-IJCNLP 2009 solicits short papers as well. Short
paper submissions must describe original and unpublished work. The
short paper deadline is just about three months before the conference
to accommodate the following types of papers:

* A small, focused contribution
* Work in progress
* A negative result
* An opinion piece
* An interesting application nugget

Short papers will be presented in one or more oral or poster sessions,
and will be given four pages in the proceedings. While short papers
will be distinguished from full papers in the proceedings, there will
be no distinction in the proceedings between short papers presented
orally and those presented as poster presentations. Each short paper
submission will be reviewed by at least two program committee members.
The deadline for short papers is April 26, 2009 (GMT+8). Submission
is electronic using paper submission software at:


Full paper submissions should follow the two-column format of
ACL-IJCNLP 2009 proceedings without exceeding eight (8) pages of
content plus one extra page for references.  Short paper submissions
should also follow the two-column format of ACL-IJCNLP 2009
proceedings, and should not exceed four (4) pages, including
references. We strongly recommend the use of ACL LaTeX style files or
Microsoft Word style files tailored for this year's conference, which
are available on the conference website under Information for Authors.
Submissions must conform to the official ACL-IJCNLP 2009 style
guidelines, which are contained in the style files, and they must be
electronic in PDF.

As the reviewing will be blind, the paper must not include the
authors' names and affiliations. Furthermore, self-references that
reveal the author's identity, e.g., "We previously showed (Smith,
1991) ...", must be avoided. Instead, use citations such as "Smith
previously showed (Smith, 1991) ...". Papers that do not conform to
these requirements will be rejected without review.

Multiple-submission policy:

Papers that have been or will be submitted to other meetings or
publications must provide this information at submission time. If
ACL-IJCNLP 2009 accepts a paper, authors must notify the program
chairs by April 19, 2009 (full papers) or June 7, 2009 (short papers),
indicating which meeting they choose for presentation of their work.
ACL-IJCNLP 2009 cannot accept for publication or presentation work
that will be (or has been) published elsewhere.

Mentoring Service:

ACL is providing a mentoring (coaching) service for authors from
regions of the world where English is less emphasized as a language of
scientific exchange. Many authors from these regions, although able to
read the scientific literature in English, have little or no
experience in writing papers in English for conferences such as the
ACL meetings. The service will be arranged as follows. A set of
potential mentors will be identified by Mentoring Service Chairs Ng,
Hwee Tou (NUS, Singapore) and Reeder, Florence (Mitre, USA), who will
organize this service for ACL-IJCNLP 2009. If you would like to take
advantage of the service, please upload your paper in PDF format by
January 14, 2009 for long papers and March 18 2009 for short papers
using the paper submission software for the mentoring service, which will
be available at the conference website.

An appropriate mentor will be assigned to your paper and the mentor
will get back to you by February 8 for long papers or April 12 for
short papers, at least 2 weeks before the deadline for the submission
to the ACL-IJCNLP 2009 program committee.

Please note that this service is for the benefit of the authors as
described above. It is not a general mentoring service for authors to
improve the technical content of their papers.

If you have any questions about this service please feel free to send
a message to Ng, Hwee Tou (nght[at] or Reeder,
Florence (floreederacl[at]

General Conference Chair:
Su, Keh-Yih (Behavior Design Corp., Taiwan; kysu[at]

Program Committee Chairs:
Su, Jian (Institute for Infocomm Research, Singapore;
Wiebe, Janyce (University of Pittsburgh, USA; janycewiebe[at]

Area Chairs:
Agirre, Eneko (University of Basque Country, Spain; e.agirre[at]
Ananiodou, Sophia (University of Manchester, UK;
Belz, Anja (University of Brighton, UK; a.s.belz[at]
Carenini, Giuseppe (University of British Columbia, Canada;
Chen, Hsin-Hsi (National Taiwan University, Taiwan; hh_chen[at]
Chen, Keh-Jiann (Sinica, Taiwan; kchen[at]
Curran, James (University of Sydney, Australia; james[at]
Gao, Jian Feng (MSR, USA; jfgao[at]
Harabagiu, Sanda (University of Texas at Dallas, USA, sanda[at]
Koehn, Philipp (University of Edinburgh, UK; pkoehn[at]
Kondrak, Grzegorz (University of Alberta, Canada; kondrak[at]
Meng, Helen Mei-Ling (Chinese University of Hong Kong, Hong Kong;
      hmmeng[at] )
Mihalcea, Rada (University of North Texas, USA; rada[at]
Poesio, Massimo (University of Trento, Italy; poesio[at]
Riloff, Ellen (University of Utah, USA; riloff[at]
Sekine, Satoshi (New York University, USA; sekine[at]
Smith, Noah (CMU, USA; nasmith[at]
Strube, Michael (EML Research, Germany; strube[at]
Suzuki, Jun (NTT, Japan; jun[at]
Wang, Hai Feng (Toshiba, China; wanghaifeng[at] 

Back to Top

9-20 . (2009-09) Emotion challenge INTERSPEECH 2009

Call for Papers
INTERSPEECH 2009 Emotion Challenge
Feature, Classifier, and Open Performance Comparison for
Non-Prototypical Spontaneous Emotion Recognition
Bjoern Schuller (Technische Universitaet Muenchen, Germany)
Stefan Steidl (FAU Erlangen-Nuremberg, Germany)
Anton Batliner (FAU Erlangen-Nuremberg, Germany)
Sponsored by:
HUMAINE Association
Deutsche Telekom Laboratories
The Challenge
The young field of emotion recognition from voice has recently gained considerable interest in Human-Machine Communication, Human-Robot Communication, and Multimedia Retrieval. The last decade has seen numerous studies trying to improve on features and classifiers. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test conditions exist for comparing performances under exactly the same conditions. Instead, the multiplicity of evaluation strategies employed, such as cross-validation or percentage splits without proper instance definition, prevents exact reproducibility. Further, to face more realistic use cases, the community is in desperate need of more spontaneous and less prototypical data.
In these respects, the INTERSPEECH 2009 Emotion Challenge shall help bridge the gap between excellent research on human emotion recognition from speech and low compatibility of results: the FAU Aibo Emotion Corpus of spontaneous, emotionally coloured speech, and benchmark results of the two most popular approaches, will be provided by the organisers. Nine hours of speech (51 children) were recorded at two different schools. This allows for a distinct definition of test and training partitions incorporating speaker independence, as needed in most real-life settings. The corpus further provides a uniquely detailed transcription of spoken content with word boundaries, non-linguistic vocalisations, emotion labels, units of analysis, etc.
Three sub-challenges are addressed in two different degrees of difficulty by using non-prototypical five or two emotion classes (including a garbage model):
* The Open Performance Sub-Challenge allows contributors to find their own features with their own classification algorithm. However, they will have to stick to the definition of test and training sets.
* In the Feature Sub-Challenge, participants are encouraged to upload their individual best features per unit of analysis, with a maximum of 100 per contribution. These features will then be tested by the organisers with equivalent settings in one classification task, and pooled together in a feature selection process.
* In the Classifier Sub-Challenge, participants may use a large set of standard acoustic features provided by the organisers for classifier tuning.
The labels of the test set will be unknown, but each participant can upload instance predictions to receive the confusion matrix and results up to 25 times. As the classes are unbalanced, the measure to optimise will be mean recall. The organisers will not take part in the sub-challenges but will provide baselines.
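Mean recall (often called unweighted average recall) averages the recall of each class, so that a small class counts as much as a large one; with unbalanced classes this prevents a classifier from scoring well by simply predicting the majority class. A minimal sketch of the computation, assuming plain Python lists of gold and predicted labels (the function and variable names here are illustrative, not part of the challenge protocol):

```python
from collections import defaultdict

def unweighted_average_recall(gold, predicted):
    """Mean of per-class recalls: each class contributes equally,
    regardless of how many instances it has."""
    correct = defaultdict(int)  # per-class count of correct predictions
    total = defaultdict(int)    # per-class count of gold instances
    for g, p in zip(gold, predicted):
        total[g] += 1
        if g == p:
            correct[g] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Class "A" is recalled 2/3, class "B" 1/1, so the mean is (2/3 + 1) / 2,
# even though "A" has three times as many instances as "B".
gold = ["A", "A", "A", "B"]
pred = ["A", "A", "B", "B"]
print(unweighted_average_recall(gold, pred))  # ~0.833
```

By contrast, overall accuracy on the same example would be 3/4, weighted towards the larger class.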
Overall, contributions using the provided or an equivalent database are sought in (but not limited to) the areas:
* Participation in any of the sub-challenges
* Speaker adaptation for emotion recognition
* Noise/coding/transmission robust emotion recognition
* Effects of prototyping on performance
* Confidences in emotion recognition
* Contextual knowledge exploitation
The results of the Challenge will be presented at a Special Session of Interspeech 2009 in Brighton, UK.
Prizes will be awarded to the sub-challenge winners and for the best paper.
If you are interested and planning to participate in the Emotion Challenge, or if you want to be kept informed about the Challenge, please send the organisers an e-mail to indicate your interest and visit the homepage:
Back to Top

9-21 . (2009-09-06) Special session at Interspeech 2009:adaptivity in dialog systems

Call for papers (submission deadline Friday 17 April 2009)
Special Session : "Machine Learning for Adaptivity in Spoken Dialogue Systems"
at Interspeech 2009, Brighton U.K.,
Session chairs: Oliver Lemon, Edinburgh University,
and Olivier Pietquin, Supélec - IMS Research Group
In the past decade, research in the field of Spoken Dialogue Systems
(SDS) has experienced increasing growth, and new applications include
interactive mobile search, tutoring, and troubleshooting systems
(e.g. fixing a broken internet connection). The design and
optimization of robust SDS for such tasks requires the development of
dialogue strategies which can automatically adapt to different types
of users (novice/expert, youth/senior) and noise conditions
(room/street). New statistical learning techniques are emerging for
training and optimizing adaptive speech recognition, spoken language
understanding, dialogue management, natural language generation, and
speech synthesis in spoken dialogue systems. Among machine learning
techniques for spoken dialogue strategy optimization, reinforcement
learning using Markov Decision Processes (MDPs) and Partially
Observable MDP (POMDPs) has become a particular focus.
We therefore solicit papers on new research in the areas of:
- Adaptive dialogue strategies and adaptive multimodal interfaces
- User simulation techniques for adaptive strategy learning and testing
- Rapid adaptation methods
- Reinforcement Learning of dialogue strategies
- Partially Observable MDPs in dialogue strategy optimization
- Statistical spoken language understanding in dialogue systems
- Machine learning and context-sensitive speech recognition
- Learning for adaptive Natural Language Generation in dialogue
- Corpora and annotation for machine learning approaches to SDS
- Machine learning for adaptive multimodal interaction
- Evaluation of adaptivity in statistical approaches to SDS and user
Important Dates:
Full paper submission deadline: Friday 17 April 2009
Notification of paper acceptance: Wednesday 17 June 2009
Conference dates: 6-10 September 2009
Back to Top

9-22 . (2009-09-07)CfP Information Retrieval and Information Extraction for Less Resourced Languages

Information Retrieval and Information Extraction for Less Resourced Languages (IE-IR-LRL)
SEPLN 2009 pre-conference workshop
University of the Basque Country
Donostia-San Sebastián. Monday 7th September 2009
Organised by the SALTMIL Special Interest Group of ISCA
SEPLN 2009:
Call For Papers:
Paper submission:
Deadline for submission: 8 June 2009
Papers are invited for the above half-day workshop, in the format outlined below. Most submitted papers will be presented in poster form, though some authors may be invited to present in lecture format.
The phenomenal growth of the Internet has led to a situation where, by some estimates, more than one billion words of text are currently available. This is far more text than any given person can possibly process. Hence there is a need for automatic tools to access and process this mass of textual information. Emerging techniques of this kind include Information Retrieval (IR), Information Extraction (IE), and Question Answering (QA).
However, there is a growing concern among researchers about the situation of languages other than English. Although not all Internet text is in English, it is clear that non-English languages do not have the same degree of representation on the Internet. Simply counting the number of articles in Wikipedia, English is the only language with more than 20 percent of the available articles. There then follows a group of 17 languages with between one and ten percent of the articles. The remaining 245 languages each have less than one percent of the articles. Even these low-profile languages are relatively privileged, as the total number of languages in the world is estimated to be 6800.
Clearly there is a danger that the gap between high-profile and low-profile languages on the Internet will continue to increase, unless tools are developed for the low-profile languages to access textual information. Hence there is a pressing need to develop basic language technology software for less-resourced languages as well. In particular, the priority is to adapt the scope of recently-developed IE, IR and QA systems so that they can be used also for these languages. In doing so, several questions will naturally arise, such as:
* What problems emerge when faced with languages having different linguistic features from the major languages?
* Which techniques should be promoted in order to get the maximum yield from sparse training data?
* What standards will enable researchers to share tools and techniques across several different languages?
* Which tools are easily re-useable across several unrelated languages?
It is hoped that presentations will focus on real-world examples, rather than purely theoretical discussions of the questions. Researchers are encouraged to share examples of best practice -- and also examples where tools have not worked as well as expected. Also of interest will be
cases where the particular features of a less-resourced language raise a challenge to currently accepted linguistic models that were based on features of major languages.
Given the context of IR, IE and QA, topics for discussion may include, but are not limited to:
* Information retrieval;
* Text and web mining;
* Information extraction;
* Text summarization;
* Term recognition;
* Text categorization and clustering;
* Question answering;
* Re-use of existing IR, IE and QA data;
* Interoperability between tools and data.
* General speech and language resources for minority languages, with particular emphasis on resources for IR,IE and QA.
Important dates:
* 8 June 2009: Deadline for submission
* 1 July 2009: Notification
* 15 July 2009: Final version
* 7 September 2009: Workshop
Organising committee:
* Kepa Sarasola, University of the Basque Country
* Mikel Forcada, Universitat d'Alacant, Spain
* Iñaki Alegria. University of the Basque Country
* Xabier Arregi, University of the Basque Country
* Arantza Casillas. University of the Basque Country
* Briony Williams, Language Technologies Unit, Bangor University, Wales, UK
Programme committee:
* Iñaki Alegria. University of the Basque Country.
* Atelach Alemu Argaw: Stockholm University, Sweden
* Xabier Arregi, University of the Basque Country.
* Jordi Atserias, Barcelona Media (yahoo! research Barcelona)
* Shannon Bischoff, Universidad de Puerto Rico, Puerto Rico
* Arantza Casillas. University of the Basque Country.
* Mikel Forcada: Universitat d'Alacant, Spain
* Xavier Gomez Guinovart. University of Vigo.
* Lori Levin, Carnegie-Mellon University, USA
* Climent Nadeu, Universitat Politècnica de Catalunya
* Jon Patrick, University of Sydney, Australia
* Juan Antonio Pérez-Ortiz, Universitat d'Alacant, Spain
* Bojan Petek, University of Ljubljana, Slovenia
* Kepa Sarasola, University of the Basque Country
* Oliver Streiter, National University of Kaohsiung, Taiwan
* Vasudeva Varma, IIIT, Hyderabad, India
* Briony Williams: Bangor University, Wales, UK
We expect short papers of max 3500 words (about 4-6 pages) describing research addressing one of the above topics, to be submitted as PDF documents by uploading to the following URL:
The final papers should not have more than 6 pages, adhering to the stylesheet that will be adopted for the SEPLN Proceedings (to be announced later on the Conference web site).
Mikel L. Forcada <>
Back to Top

9-23 . (2009-09-09) CfP IDP 09 Discourse-Prosody Interface



Discourse – Prosody Interface


Paris, September 9-10-11, 2009


The third round of the “Discourse – Prosody Interface” Conference will be hosted by the Laboratoire de Linguistique Formelle (UMR 7110 / LLF), the Equipe CLILLAC-ARP (EA 3967) and the Linguistic Department (UFRL) of the University of Paris-Diderot (Paris 7), on September 9-10-11, 2009 in Paris. The first round was organized by the Laboratoire Parole et Langage (UMR 6057 /LPL) in September 2005, in Aix-en-Provence. The second took place in Geneva in September 2007 and was organized by the Department of Linguistics at the University of Geneva, in collaboration with the École de Langue et Civilisation Françaises at the University of Geneva, and the VALIBEL research centre at the Catholic University of Louvain.

The third round will be held at the Paris Center of the University of Chicago, 6, rue Thomas Mann, in the XIIIth arrondissement, near the Bibliothèque François Mitterrand (BNF).


The Conference is addressed to researchers in prosody, phonology, phonetics, pragmatics, discourse analysis and also psycholinguistics, who are particularly interested in the relations between prosody and discourse. The participants may develop their research programmes within different theoretical paradigms (formal approaches to phonology and semantics/pragmatics, conversation analysis, descriptive linguistics, etc.). For this third edition, special attention will be given to research that proposes a formal analysis of the Discourse-Prosody interface.


So as to favour convergence among contributions, the IDP09 conference will focus on:

* Prosody, its parts and discourse:
- How to analyze the interaction between the different prosodic subsystems (accentuation, intonation, rhythm; register changes or voice quality)?
- How to model the contribution of each subsystem to the global interpretation of discourse?
- How to describe and analyze prosodic facts, and at which level (phonetic vs. phonological)?

* Prosodic units & discourse units
- What are the relevant units for discourse or conversation analysis? What are their prosodic properties?
- How is the embedding of utterances in discourse marked syntactically or prosodically? What are the consequences for the modelling of syntax & prosody?

* Prosody and context(s)
- What is the contribution of the context in the analysis of prosody in discourse?
- How can the relations between prosody and context(s) be modelled?

* Acquisition of the relations between prosody & discourse in L1 and L2
- How are the relations between prosody & discourse acquired in L1 and in L2?
- Which methodological tools could best describe and transcribe these processes?



Guest speakers:

* Diane Blakemore (School of Languages, University of Salford, United Kingdom)

* Piet Mertens (Department of Linguistics, K.U Leuven, Belgium)

* Hubert Truckenbrodt (ZAS, Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany)



The conference languages are English and French. Studies may concern any language.



Submissions should be made by uploading an anonymous two-page abstract (plus an extra page for references and figures), in A4 format with Times 12 font, written in either English or French, as a PDF file at the following address:


Authors' names and affiliations should be given as requested, but not in the PDF file.


If you have any questions concerning the submission procedure, or if you encounter any problem, please send an email to the following address:


Authors may submit as many proposals as they wish.


The proposals will be evaluated anonymously by the scientific committee.



Submission deadline: April 26, 2009

Notification of acceptance: June 8, 2009

Conference (IDP 09): September 9-11, 2009.


Further information is available on the conference website:


Back to Top

9-24 . (2009-09-11) SIGDIAL 2009 CONFERENCE

     10th Annual Meeting of the Special Interest Group
     on Discourse and Dialogue

     Queen Mary University of London, UK September 11-12, 2009
     (right after Interspeech 2009)

     Submission Deadline: April 24, 2009


The SIGDIAL venue provides a regular forum for the presentation of
cutting edge research in discourse and dialogue to both academic and
industry researchers. Due to the success of the nine previous SIGDIAL
workshops, SIGDIAL is now a conference. The conference is sponsored by
the SIGDIAL organization, which serves as the Special Interest Group in
discourse and dialogue for both ACL and ISCA. SIGDIAL 2009 will be
co-located with Interspeech 2009 as a satellite event.

In addition to presentations and system demonstrations, the program
includes an invited talk by Professor Janet Bavelas of the University of
Victoria, entitled "What's unique about dialogue?".


We welcome formal, corpus-based, implementation, experimental, or
analytical work on discourse and dialogue including, but not restricted
to, the following themes:

1. Discourse Processing and Dialogue Systems

Discourse semantic and pragmatic issues in NLP applications such as text
summarization, question answering, information retrieval including
topics like:

- Discourse structure, temporal structure, information structure ;
- Discourse markers, cues and particles and their use;
- (Co-)Reference and anaphora resolution, metonymy and bridging resolution;
- Subjectivity, opinions and semantic orientation;

Spoken, multi-modal, and text/web based dialogue systems including
topics such as:

- Dialogue management models;
- Speech and gesture, text and graphics integration;
- Strategies for preventing, detecting or handling miscommunication
(repair and correction types, clarification and under-specificity,
grounding and feedback strategies);
- Utilizing prosodic information for understanding and for disambiguation;

2. Corpora, Tools and Methodology

Corpus-based and experimental work on discourse and spoken, text-based
and multi-modal dialogue including its support, in particular:

- Annotation tools and coding schemes;
- Data resources for discourse and dialogue studies;
- Corpus-based techniques and analysis (including machine learning);
- Evaluation of systems and components, including methodology, metrics
and case studies;

3. Pragmatic and/or Semantic Modeling

The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a
single sentence) including the following issues:

- The semantics/pragmatics of dialogue acts (including those which are
less studied in the semantics/pragmatics framework);
- Models of discourse/dialogue structure and their relation to
referential and relational structure;
- Prosody in discourse and dialogue;
- Models of presupposition and accommodation; operational models of
  conversational implicature.


The program committee welcomes the submission of long papers for full
plenary presentation as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations, followed by posters and demonstrations.

- Long papers must be no longer than 8 pages, including title, examples,
references, etc. In addition to this, two additional pages are allowed
as an appendix which may include extended example discourses or
dialogues, algorithms, graphical representations, etc.
- Short papers and demo descriptions should be 4 pages or less
(including title, examples, references, etc.).

Please use the official ACL style files:

Papers that have been or will be submitted to other meetings or
publications must provide this information (see submission format).
SIGDIAL 2009 cannot accept for publication or presentation work that
will be (or has been) published elsewhere. Any questions regarding
submissions can be sent to the General Co-Chairs.

Authors are encouraged to make illustrative materials available, on the
web or otherwise. Examples might include excerpts of recorded
conversations, recordings of human-computer dialogues, interfaces to
working systems, and so on.


In order to recognize significant advancements in dialogue and discourse science and technology, SIGDIAL will (for the first time) present best paper awards. A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.


Submission: April 24, 2009
Workshop: September 11-12, 2009


SIGDIAL 2009 conference website:
SIGDIAL organization website:
Interspeech 2009 website:


For any questions, please contact the appropriate members of the
organizing committee:

Pat Healey (Queen Mary University of London):
Roberto Pieraccini (SpeechCycle):

Donna Byron (Northeastern University):
Steve Young (University of Cambridge):

Matt Purver (Queen Mary University of London):

Tim Paek (Microsoft Research):

Amanda Stent (AT&T Labs - Research):

Back to Top

9-25 . (2009-09-11) Int. Workshop on spoken language technology for development: from promise to practice.

International Workshop on Spoken Language Technology for Development
- from promise to practice
Venue - The Abbey Hotel, Tintern, UK
Dates - 11-12 September 2009
Following on from a successful special session at SLT 2008 in Goa, this workshop invites participants with an interest in SLT4D and expertise or experience in any of the following areas:
- Development of speech technology for resource-scarce languages
- SLT deployments in the developing world
- HCI in a developing world context
- Successful ICT4D interventions
The aim of the workshop is to develop a set of best practices for developing and deploying speech systems for developmental applications. It is also hoped that the participants will form the core of an open community which shares tools, insights and methodologies for future SLT4D projects. 
If you are interested in participating in the workshop, please submit a 2-4 page position paper explaining how your expertise and experience might be applied to SLT4D, formatted according to the Interspeech 2009 guidelines, to Roger Tucker at by 30th April 2009. 
Important Dates:
Papers due: 30th April 2009
Acceptance Notification: 10th June 2009
Early Registration deadline: 3rd July 2009
Workshop: 11-12 September 2009
Further details can be found on the workshop website at

Back to Top

9-26 . (2009-09-11) ACORNS Workshop Brighton UK

Call for Participation
ACORNS Workshop
Computational Models of Language Evolution, Acquisition and Processing
the workshop is a satellite of Interspeech-2009.
September 11, 2009
Brighton, UK
Old Ship Hotel, the oldest hotel in Brighton, offering the allure of history
As a follow-up of the successful ESF workshop with the same title (held in November 2007), we again
would like to bring together a group of outstanding invited speakers and discussants to explore
directions for future research in the multidisciplinary field of computational modeling of language
evolution, processing, and acquisition.
We envisage bringing together a group of up to 50 researchers from different disciplines who
take an interest in investigating language acquisition and processing. The focus is on computational
models that enhance our understanding of behavioural phenomena and results of experiments.
A pervasive problem in the multi-disciplinary field of language processing and acquisition is that
different disciplines not only favour and exploit different experimental paradigms, but also quite
different publication styles and different journals. As a result, information flow across the borders of
the disciplines is no more than a trickle, where broad streams would be desirable.
We have designed a programme in which there is room for four or five longer presentations by invited
speakers. A team of discussants from a range of disciplines will give comment.
Prospective participants are invited to send an expression of interest to the organizers before June 20.
In selecting participants priority will be given to scientists who submit a position statement with their
expression of interest. The statement should address the topics of the workshop. It should be in the
Interspeech-2009 format with a maximum of four pages (shorter statements are also welcome).
Workshop participants will receive key papers by the speakers and discussants, as well as the full set
of position statements on August 15.
During the workshop, there will be ample time and opportunity for all participants to contribute to the discussion.
The workshop should result in a sketch of future research in language evolution, acquisition and
processing during the next five to ten years. To that end the workshop will explore the formation of
consortia that can prepare project proposals for EU-funded programmes such as FET, etc. For this
reason we have also invited representatives of the major European funding agencies. Finally, the
workshop will explore the feasibility of forming a couple of small interdisciplinary teams to prepare
papers for journals in a number of disciplines.
Costs: The price for participants is 70 UK pounds; this includes all preparatory materials and a lunch.
All payments must be made in cash at the workshop venue.
9:00 – 9:30 Registration
9:30 – 10:30 Lou Boves, Scientific Manager of the ACORNS project (
10:30 – 10:45 Coffee
10:45 – 12:15 Deb Roy, Massachusetts Institute of Technology, Cambridge, Mass
12:15 – 13:15 Lunch
13:15 – 14:45 Friedemann Pulvermuller, MRC Cognition and Brain Sciences Unit, Cambridge, UK
14:45 – 15:00 Tea
15:00 – 16:30 Rochelle Newman, University of Maryland, MD
16:30 – 17:30 Conclusions
Discussants will include:
Roger Moore, Sheffield University, UK
Hugo Van hamme, Catholic University Leuven, Belgium
Odette Scharenborg, Radboud University, Netherlands
Additional discussants are being negotiated at this moment. Scientists who would like to participate as
a discussant are invited to contact the organizers at the e-mail address shown at the bottom of this
message as soon as possible.
Workshop Organisers:
Lou Boves, Elisabeth den Os, Louis ten Bosch (Radboud University)
All questions regarding the workshop and requests for registration with full name, affiliation and email
address must be sent to:
Back to Top

9-27 . (2009-09-14) 7th International Conference on Recent Advances in Natural Language Processing

RANLP-09 Second Call for Papers and Submission Information




International Conference RANLP-2009


September 14-16, 2009

Borovets, Bulgaria


Further to the successful and highly competitive 1st, 2nd, 3rd, 4th, 5th and 6th conferences 'Recent Advances in Natural Language Processing' (RANLP), we are pleased to announce the 7th RANLP conference to be held in September 2009.


The conference will take the form of addresses from invited keynote speakers plus peer-reviewed individual papers. There will also be an exhibition area for poster and demo sessions.


We invite papers reporting on recent advances in all aspects of Natural

Language Processing (NLP). The conference topics are announced at the

RANLP-09 website. All accepted papers will be published in the full

conference proceedings and included in the ACL Anthology. In addition,

volumes of RANLP selected papers are traditionally published by John

Benjamins Publishers; the volume of selected RANLP-07 papers is

currently in press.



KEYNOTE SPEAKERS:

       Kevin Bretonnel Cohen (University of Colorado School of Medicine),

       Mirella Lapata (University of Edinburgh),

       Shalom Lappin (King’s College, London),

       Massimo Poesio (University of Trento and University of Essex).



PROGRAMME COMMITTEE CHAIR:

Ruslan Mitkov (University of Wolverhampton)



ORGANISING COMMITTEE CHAIR:

Galia Angelova (Bulgarian Academy of Sciences)


The PROGRAMME COMMITTEE members are distinguished experts from all over

the world. The list of PC members will be announced at the conference

website. After the review, the list of all reviewers will be announced at

the website as well.



People interested in participating should submit a paper, poster or demo

following the instructions provided at the conference website. Reviewing

will be blind, so the text of the article should not reveal the authors'

names. Author identification should be provided on a separate page of the

conference management system.


TUTORIALS 12-13 September 2009:

Four half-day tutorials will be organised on 12-13 September 2009. The

list of tutorial lecturers includes:

       Kevin Bretonnel Cohen (University of Colorado School of Medicine),

       Constantin Orasan (University of Wolverhampton)


WORKSHOPS 17-18 September 2009:

Post-conference workshops will be organised on 17-18 September 2009. All

workshops will publish hard-copy proceedings, which will be distributed at

the event. Workshop papers might be listed in the ACL Anthology as well

(depending on the workshop organisers). The list of RANLP-09 workshops includes:


       Semantic Roles on Human Language Technology Applications, organised by

Paloma Moreda, Rafael Muñoz and Manuel Palomar,

       Partial Parsing 2: Between Chunking and Deep Parsing, organised by Adam

Przepiorkowski, Jakub Piskorski and Sandra Kuebler,

       1st Workshop on Definition Extraction, organised by Gerardo Eugenio

Sierra Martínez and Caroline Barriere,

       Evaluation of Resources and Tools for Central and Eastern European

languages, organised by Cristina Vertan, Stelios Piperidis and Elena


       Adaptation of Language Resources and Technology to New Domains,

organised by Nuria Bel, Erhard Hinrichs, Kiril Simov and Petya Osenova,

       Natural Language Processing methods and corpora in translation,

lexicography, and language learning, organised by Viktor Pekar, Iustina

Narcisa Ilisei, and Silvia Bernardini,

       Events in Emerging Text Types (eETTs), organised by Constantin Orasan,

Laura Hasler, and Corina Forascu,

       Biomedical Information Extraction, organised by Guergana Savova,

Vangelis Karkaletsis, and Galia Angelova.





Conference paper submission notification: 6 April 2009

Conference paper submission deadline: 13 April 2009

Conference paper acceptance notification: 1 June 2009

Final versions of conference papers submission: 13 July 2009


Workshop paper submission deadline (suggested): 5 June 2009

Workshop paper acceptance notification (suggested): 20 July 2009

Final versions of workshop papers submission (suggested): 24 August 2009


RANLP-09 tutorials: 12-13 September 2009 (Saturday-Sunday)

RANLP-09 conference: 14-16 September 2009 (Monday-Wednesday)

RANLP-09 workshops: 17-18 September 2009 (Thursday-Friday)


For further information about the conference, please visit the conference





Galia Angelova, Bulgarian Academy of Sciences, Bulgaria, Chair of the Organising Committee


Kalina Bontcheva, University of Sheffield, UK

Ruslan Mitkov, University of Wolverhampton, UK, Chair of the Programme Committee


Nicolas Nicolov, Umbria Inc, USA (Editor of volume with selected papers)

Nikolai Nikolov, INCOMA Ltd., Shoumen, Bulgaria

Kiril Simov, Bulgarian Academy of Sciences, Bulgaria (Workshop Coordinator)


e-mail: ranlp09 [AT] lml (dot) bas (dot) 

Back to Top

9-28 . (2009-09-28) ELMAR 2009

51st International Symposium ELMAR-2009

28-30 September 2009 Zadar, CROATIA
Paper submission deadline: March 16, 2009
CALL FOR PAPERS

TECHNICAL CO-SPONSORS: IEEE Region 8; EURASIP - European Assoc. Signal, Speech and Image Processing; IEEE Croatia Section; IEEE Croatia Section Chapter of the Signal Processing Society; IEEE Croatia Section Joint Chapter of the AP/MTT Societies; INSPEC

TOPICS
--> Image and Video Processing
--> Multimedia Communications
--> Speech and Audio Processing
--> Wireless Communications
--> Telecommunications
--> Antennas and Propagation
--> e-Learning and m-Learning
--> Navigation Systems
--> Ship Electronic Systems
--> Power Electronics and Automation
--> Naval Architecture
--> Sea Ecology
--> Special Sessions Proposals - a special session consists of 5-6 papers presenting a unifying theme from a diversity of viewpoints
* Prof. Gregor Rozinaj,Slovak University of Technology, Bratislava, SLOVAKIA: -Title to be announced soon.
* Mr. David Wood, European Broadcasting Union, Geneva, SWITZERLAND: What strategy and research agenda for Europe in 'new media'?
Papers accepted by two reviewers will be published in the conference proceedings, available at the conference, and abstracted/indexed in the IEEE Xplore and INSPEC databases. More info is available here:
IMPORTANT: Web-based (online) submission of papers in PDF format is required for all authors. No e-mail, fax, or postal submissions will be accepted. Authors should prepare their papers according to the ELMAR-2009 paper sample, convert them to PDF based on IEEE requirements, and submit them using the web-based submission system by March 16, 2009.
Deadline for submission of full papers: March 16, 2009
Notification of acceptance mailed out by: May 11, 2009
Submission of (final) camera-ready papers: May 21, 2009
Preliminary program available online by: June 11, 2009
Registration forms and payment deadline: June 18, 2009
Accommodation deadline: September 10, 2009
Ive Mustac, Tankerska plovidba, Zadar, Croatia
Branka Zovko-Cihlar, University of Zagreb, Croatia
Mislav Grgic, University of Zagreb, Croatia
INTERNATIONAL PROGRAM COMMITTEE Juraj Bartolic, Croatia David Broughton, United Kingdom Paul Dan Cristea, Romania Kresimir Delac, Croatia Zarko Cucej, Slovenia Marek Domanski, Poland Kalman Fazekas, Hungary Janusz Filipiak, Poland Renato Filjar, Croatia Borko Furht, USA Mohammed Ghanbari, United Kingdom Mislav Grgic, Croatia Sonja Grgic, Croatia Yo-Sung Ho, Korea Bernhard Hofmann-Wellenhof, Austria Ismail Khalil Ibrahim, Austria Bojan Ivancevic, Croatia Ebroul Izquierdo, United Kingdom Kristian Jambrosic, Croatia Aggelos K. Katsaggelos, USA Tomislav Kos, Croatia Murat Kunt, Switzerland Panos Liatsis, United Kingdom Rastislav Lukac, Canada Lidija Mandic, Croatia Gabor Matay, Hungary Branka Medved Rogina, Croatia Borivoj Modlic, Croatia Marta Mrak, United Kingdom Fernando Pereira, Portugal Pavol Podhradsky, Slovak Republic Ramjee Prasad, Denmark Kamisetty R. Rao, USA Gregor Rozinaj, Slovak Republic Gerald Schaefer, United Kingdom Mubarak Shah, USA Shiguang Shan, China Thomas Sikora, Germany Karolj Skala, Croatia Marian S. Stachowicz, USA Ryszard Stasinski, Poland Luis Torres, Spain Frantisek Vejrazka, Czech Republic Stamatis Voliotis, Greece Nick Ward, United Kingdom Krzysztof Wajda, Poland Branka Zovko-Cihlar, Croatia
CONTACT INFORMATION Assoc.Prof. Mislav Grgic, Ph.D. FER, Unska 3/XII HR-10000 Zagreb CROATIA Telephone: + 385 1 6129 851 Fax: + 385 1 6129 717 E-mail: elmar2009 (at) For further information please visit:
Back to Top

9-29 . (2009-10-05) 2009 APSIPA ASC

            APSIPA Annual Summit and Conference October 5 - 7, 2009

                       Sapporo Convention Center, Sapporo, Japan
The 2009 APSIPA Annual Summit and Conference is the inaugural event of the Asia-Pacific Signal and Information Processing Association (APSIPA). APSIPA is a new association that promotes all aspects of research and education in signal processing, information technology, and communications. The field of interest of APSIPA covers all aspects of signals and information, including processing, recognition, classification, communications, networking, computing, system design, security, implementation, and technology, with applications to scientific, engineering, and social areas. The topics for regular sessions include, but are not limited to:
Signal Processing Track
1.1 Audio, speech, and language processing
1.2 Image, video, and multimedia signal processing
1.3 Information forensics and security
1.4 Signal processing for communications
1.5 Signal processing theory and methods
Sapporo and Conference Venue: Sapporo is widely recognized as one of Japan's most beautiful and well-organized cities. With a population of 1,800,000, Hokkaido's capital and largest city is fully serviced by a network of subway, streetcar, and bus lines connecting to its full complement of hotel accommodations. Sapporo has already played host to international meetings, sports events, and academic societies. There are many flights to and from Tokyo, Nagoya, Osaka and other domestic and overseas cities. With all the amenities of a major city, yet in balance with its natural surroundings, this beautiful northern capital is well-equipped to host a new generation of conventions.
Important Due Dates and Author's Schedule:
Proposals for Special Session: March 1, 2009
Proposals for Forum, Panel and Tutorial Sessions: March 20, 2009
Deadline for Submission of Full-Papers: March 31, 2009
Notification of Acceptance: July 1, 2009
Deadline for Submission of Camera Ready Papers: August 1, 2009
Conference dates: October 5 - 7, 2009
Submission of Papers: Prospective authors are invited to submit either long papers, up to 10 pages in length, or short papers, up to four pages in length; long papers will be given single-track oral presentation, while short papers will mostly be presented as posters. The conference proceedings will be published and maintained at the APSIPA website.
Detailed information: Web site:
Organizing Committee:
Honorary Chair : Sadaoki Furui, Tokyo Institute of Technology, Japan
General co-Chairs : Yoshikazu Miyanaga, Hokkaido University, Japan K. J. Ray Liu, University of Maryland,USA
Technical Program co-Chairs : Hitoshi Kiya, Tokyo Metropolitan Univ., Japan Tomoaki Ohtsuki, Keio University, Japan Mark Liao, Academia Sinica, Taiwan Takao Onoye, Osaka University, Japan               

Back to Top

9-30 . (2009-10-18) 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics

Call for Papers

2009 IEEE Workshop on Applications of Signal Processing to Audio and



Mohonk Mountain House

New Paltz, New York

October 18-21, 2009


The 2009 IEEE Workshop on Applications of Signal Processing to Audio and

Acoustics (WASPAA'09) will be held at the Mohonk Mountain House in New

Paltz, New York, and is sponsored by the Audio & Electroacoustics committee

of the IEEE Signal Processing Society. The objective of this workshop is to

provide an informal environment for the discussion of problems in audio and

acoustics and the signal processing techniques leading to novel solutions.

Technical sessions will be scheduled throughout the day. Afternoons will be

left free for informal meetings among workshop participants.


Papers describing original research and new concepts are solicited for

technical sessions on, but not limited to, the following topics:


* Acoustic Scenes

- Scene Analysis: Source Localization, Source Separation, Room Acoustics

- Signal Enhancement: Echo Cancellation, Dereverberation, Noise Reduction,


- Multichannel Signal Processing for Audio Acquisition and Reproduction

- Microphone Arrays

- Eigenbeamforming

- Virtual Acoustics via Loudspeakers


* Hearing and Perception

- Auditory Perception, Spatial Hearing, Quality Assessment

- Hearing Aids


* Audio Coding

- Waveform Coding and Parameter Coding

- Spatial Audio Coding

- Internet Audio

- Musical Signal Analysis: Segmentation, Classification, Transcription

- Digital Rights

- Mobile Devices


* Music

- Signal Analysis and Synthesis Tools

- Creation of Musical Sounds: Waveforms, Instrument Models, Singing

- MEMS Technologies for Signal Pick-up



Submission of four-page paper: April 15, 2009

Notification of acceptance: June 26, 2009

Early registration until:  September 1, 2009


Workshop Committee


General Co-Chair:

Jacob Benesty

Université du Québec


Montréal, Québec, Canada


General Co-Chair:

Tomas Gaensler

mh acoustics

Summit, NJ, USA


Technical Program Chair:

Yiteng (Arden) Huang

WeVoice Inc.

Bridgewater, NJ, USA


Technical Program Chair:

Jingdong Chen

Bell Labs


Murray Hill, NJ, USA


Finance Chair:

Michael Brandstein

Information Systems

Technology Group

MIT Lincoln Lab

Lexington, MA, USA


Publications Chair:

Eric J. Diethorn

Multimedia Technologies

Avaya Labs Research

Basking Ridge, NJ, USA


Publicity Chair:

Sofiène Affes

Université du Québec


Montréal, Québec, Canada


Local Arrangements Chair:

Heinz Teutsch

Multimedia Technologies

Avaya Labs Research

Basking Ridge, NJ, USA


Far East Liaison:

Shoji Makino

NTT Communication Science

Laboratories, Japan

Back to Top

9-31 . (2009-10-05) IEEE International Workshop on Multimedia Signal Processing - MMSP'09

Call for Papers
2009 IEEE International Workshop on Multimedia Signal Processing (MMSP'09)
October 5-7, 2009, Sheraton Rio Hotel & Resort, Rio de Janeiro, Brazil

We would like to invite you to submit your work to MMSP'09, the eleventh IEEE International Workshop on Multimedia Signal Processing, and to remind you of the upcoming paper submission deadline of April 17th.

This year MMSP will introduce a new type of paper award: the "top 10%" paper award. While MMSP papers are already very well regarded and highly cited, there is a growing need among the scientific community for more immediate quality recognition. The objective of the top 10% award is to acknowledge outstanding papers while keeping the wider participation and information exchange allowed by higher acceptance rates. MMSP will continue to accept as many high-quality papers as possible, with acceptance rates in line with other top events of the IEEE Signal Processing Society. This new award will be granted to as many as 10% of the total paper submissions, and is open to all accepted papers, whether presented in oral or poster form.

The workshop is organized by the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society. Held in Rio de Janeiro, MMSP'09 provides excellent conditions for brainstorming on, and sharing, the latest advances in multimedia signal processing and technology in one of the most beautiful and exciting cities in the world.
Scope: Papers are solicited on the following topics (but not limited to):

Systems and applications
- Teleconferencing, telepresence, tele-immersion, immersive environments
- Virtual classrooms and distance learning
- Multimodal collaboration, online multiplayer gaming, social networking
- Telemedicine, human-human distance collaboration
- Multimodal storage and retrieval

Multimedia for communication and collaboration
- Ad hoc broadband sensor array processing
- Microphone and camera array processing
- Automatic sensor calibration, synchronization
- De-noising, enhancement, source separation
- Source localization, spatialization

Scene analysis for immersive telecommunication and human collaboration
- Audiovisual scene analysis
- Object detection, identification, and tracking
- Gesture, face, and human pose recognition
- Presence detection and activity classification
- Multimodal sensor fusion

Coding
- Distributed/centralized source coding for sensor arrays
- Scalable source coding for multiparty conferencing
- Error/loss resilient coding for telecommunications
- Channel coding, error protection and error concealment

Networking
- Voice/video over IP and wireless
- Quality monitoring and management
- Security
- Priority-based QoS control and scheduling
- Ad-hoc and real time communications
- Channel coding, packetization, synchronization, buffering

A thematic emphasis for MMSP'09 is on topics related to multimedia processing and interaction for immersive telecommunications and collaboration. Papers on these topics are encouraged.

Schedule
- Papers (full paper, 4 pages) to be received by: April 17, 2009
- Notification of acceptance by: June 13, 2009
- Camera-ready paper submission by: July 6, 2009

More information is available at
Back to Top

9-32 . (2009-10-23)ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)

Call for Papers
ACM Multimedia 2009 Workshop
Searching Spontaneous Conversational Speech (SSCS 2009)
October 23, 2009
Beijing, China

Multimedia content often contains spoken audio as a key component. Although speech is generally acknowledged as the quintessential carrier of semantic information, spoken audio remains underexploited by multimedia retrieval systems. In particular, the potential of speech technology to improve information access has not yet been successfully extended beyond multimedia content containing scripted speech, such as broadcast news. The SSCS 2009 workshop is dedicated to fostering search research based on speech technology as it expands into spoken content domains involving non-scripted, less-highly conventionalized, conversational speech characterized by wide variability of speaking styles and recording conditions. Such domains include podcasts, video diaries, lifelogs, meetings, call center recordings, social video networks, Web TV, conversational broadcast, lectures, discussions, debates, interviews and cultural heritage archives. This year we are setting a particular focus on the user and the use of speech techniques and technology in real-life multimedia access systems and have chosen the theme "Speech technology in the multimedia access framework."

The development of robust, scalable, affordable approaches for accessing multimedia collections with a spoken component requires the sustained collaboration of researchers in the areas of speech recognition, audio processing, multimedia analysis and information retrieval. Motivated by the aim of providing a forum where these disciplines can engage in productive interaction and exchange, Searching Spontaneous Conversational Speech (SSCS) workshops were held in conjunction with SIGIR 2007 in Amsterdam and with SIGIR 2008 in Singapore. The SSCS workshop series continues with SSCS 2009 held in conjunction with ACM Multimedia 2009 in Beijing. This year the workshop will focus on addressing the research challenges that were identified during SSCS 2008: Integration, Interface/Interaction, Scale/Scope, and Community.

We welcome contributions on a range of trans-disciplinary issues related to these research challenges, including:

-Information retrieval techniques based on speech analysis (e.g., applied to speech recognition lattices)
-Search effectiveness (e.g., evidence combination, query/document expansion)
-Self-improving systems (e.g., unsupervised adaptation, recursive metadata refinement)
-Exploitation of audio analysis (e.g., speaker emotional state, speaker characteristics, speaking style)
-Integration of higher-level semantics, including cross-modal concept detection
-Combination of indexing features from video, text and speech

-Surrogates for representation or browsing of spoken content
-Intelligent playback: exploiting semantics in the media player
-Relevance intervals: determining the boundaries of query-related media segments
-Cross-media linking and link visualization deploying speech transcripts

-Large-scale speech indexing approaches (e.g., collection size, search speed)
-Dealing with collections containing multiple languages
-Affordable, light-weight solutions for small collections, i.e., for the long tail

-Stakeholder participation in design and realization of real world applications
-Exploiting user contributions (e.g., tags, ratings, comments, corrections, usage information, community structure)

Contributions for oral presentations (8-10 pages), poster presentations (2 pages), demonstration descriptions (2 pages) and position papers for the selection of panel members (2 pages) will be accepted. Further information, including submission guidelines, is available on the workshop website:

Important Dates:
Monday, June 1, 2009 Submission Deadline
Saturday, July 4, 2009 Author Notification
Friday, July 17, 2009 Camera Ready Deadline
Friday, October 23, 2009 Workshop in Beijing

For more information:
SSCS 2009 Website:
ACM Multimedia 2009 Website:

On behalf of the SSCS2009 Organizing Committee:
Martha Larson, Delft University of Technology, The Netherlands
Franciska de Jong, University of Twente, The Netherlands
Joachim Kohler, Fraunhofer IAIS, Germany
Roeland Ordelman, Sound & Vision and University of Twente, The Netherlands
Wessel Kraaij, TNO and Radboud University, The Netherlands

Back to Top

9-33 . (2009-11-02) CALL FOR ICMI-MLMI 2009 WORKSHOPS New dates !!

Boston MA, USA

Paper submission: May 22, 2009
Author notification: July 20, 2009
Camera-ready due: August 20, 2009
Conference: November 2-4, 2009
Workshops: November 5-6, 2009

The ICMI and MLMI conferences will jointly take place in the Boston
area during November 2-6, 2009. The main aim of ICMI-MLMI 2009 is to
further scientific research within the broad field of multimodal
interaction, methods and systems. The joint conference will focus on
major trends and challenges in this area, and work to identify a
roadmap for future research and commercial success.  The main
conference will be followed by a number of workshops, for which we
invite proposals.

The format, style, and content of accepted workshops are under the
control of the workshop organizers. Workshops will take place on 5-6
November 2009 and may be one or two days in duration.
Workshop organizers will be expected to manage the workshop content,
specify the workshop format, be present to moderate the discussion and
panels, invite experts in the domain, and maintain a website for the workshop.

Proposals should specify clearly the workshop's title, motivation,
impact, expected outcomes,  potential invited speakers and the workshop
URL. The proposal should also name the main workshop organizer, and
co-organizers,  and should provide brief bios of the organizers.

Submit workshop proposals, as pdf, by email to

Back to Top

9-34 . (2009-11-15) CIARP 2009

CIARP 2009 Third Call for Papers
Eduardo Bayro Corrochano CINVESTAV, Mexico
Jan Olof Ecklundh
KTH, Sweden
November 15th-18th 2009, Guadalajara, México Venue: Hotel Misión Carlton
CIARP-IAPR Award for best papers Special Issue in Journal Pattern Recognition Letters
The 14th Iberoamerican Congress on Pattern Recognition (CIARP 2009) will be held in Guadalajara, Jalisco, México. CIARP 2009 is organized by CINVESTAV, Unidad Guadalajara, México, supported by IAPR, and sponsored by the Mexican Association for Computer Vision, Neural Computing and Robotics (MACVNR) and five other Iberoamerican PR societies. CIARP 2009, like all thirteen previous conferences, will be a fruitful forum for the exchange of scientific results and experiences, the sharing of new knowledge, and increased cooperation between research groups in pattern recognition and related areas.
Topics of interests
• Artificial Intelligence Techniques in PR
• Bioinformatics
• Clustering
• Computer Vision
• Data Mining
• DB, Knowledge Bases and Linguistic PR-Tools
• Discrete Geometry
• Clifford Algebra Applications in Perception Action
• Document Processing and Recognition
• Fuzzy and Hybrid Techniques in PR
• Image Coding, Processing and Analysis
• Kernel Machines
• Logical Combinatorial Pattern Recognition
• Mathematical Morphology
• Mathematical Theory of Pattern Recognition
• Natural Language Processing and Recognition
• Neural Networks for Pattern Recognition
• Parallel and Distributed Pattern Recognition
• Pattern Recognition Principles
• Petri Nets
• Robotics and humanoids
• Remote Sensing Applications of PR
• Satellite Image processing and radar
• Cognitive Humanoid Vision
• Shape and Texture Analysis
• Signal Processing and Analysis
• Special Hardware Architectures
• Statistical Pattern Recognition
• Syntactical and Structural Pattern Recognition
• Voice and Speech Recognition
Invited Speakers: Prof. M. Petrou Imp. Coll. UK, Prof. I. Kakadiaris Hou TX Univ., Dr. P. Sturm INRIA, Gr. FR, Prof. W. Kropatsch (TU Wien, AU).
Paper Submission
Prospective authors are invited to contribute to the conference by electronically submitting a full paper in English of no more than 8 pages, including illustrations, results and references; accepted papers must be presented at the conference in English. Papers should be submitted electronically before June 7th, 2009, through the CIARP 2009 webpage. Papers should be prepared following the instructions of the Springer LNCS series. At least one of the authors must have registered for the paper to be published.
Workshops/Tutorials: CASI'2009 on Intelligent Remote Satellite Imagery & Humanoid Robotics; 4 tutorials on Texture, CV, PR & Geometric Algebra Applications.
Important Dates
Submission of papers before June 7th, 2009
Notification of acceptance: August 1st, 2009. Camera-ready: August 21st, 2009
Registration IAPR Members Non-IAPR
Before August 21st, 2009: 400 USD / 450 USD
After August 21st, 2009: 450 USD / 500 USD
Extra Conference Dinner 50 USD
Registration fee includes: proceedings, ice-breaker party, coffee breaks, lunches, conference dinner, tutorials and cultural program (1. tour of the colonial area by night, 2. Latin dance night, 3. folkloric dance spectacle, traditional mariachi concert with banquet in a romantic colonial garden). Extra: organized tours to Puerto Vallarta, Tequila, archeological sites, artisan markets, museums and traditional colonial churches and towns. Contact:
Back to Top

9-35 . (2009-11-16) 8ème Rencontres Jeunes Chercheurs en Parole (french)

               Call for Papers, RJCP 2009:
            8th Young Researchers in Speech Workshop (8ème Rencontres Jeunes Chercheurs en Parole)
16-18 November 2009, Avignon

This event, sponsored by the Association Francophone de la
Communication Parlée (AFCP), gives (future) doctoral students and
young PhDs the opportunity to meet, present their work and exchange
ideas on the various fields of speech research.
Young researchers from other disciplines will be invited to the
workshop to discuss ongoing work in their respective fields. Their
advice and questions will let you take a fresh look at your own
research.
Poster sessions as well as oral sessions will be offered to
participants wishing to present their work. The workshop is of course
also open to anyone who simply wants to attend the presentations
without giving a talk.

Deadline for receipt of papers: 2 July 2009
Notification to authors: 27 September 2009
Conference: 16-18 November 2009

To help us organize the workshop, please register as early as
possible; your paper can be sent later.
Proposals in the form of a 4-6 page summary must be submitted
before 2 July 2009 through the conference website.
A review committee of scientists in the field will examine the
submitted papers and send each participant its comments.
Specific instructions and predefined style sheets are available on
the website. A collection of the papers will be published and
distributed at the end of the workshop.

The conference will be held over three days at the Université
d'Avignon et des Pays de Vaucluse. In addition to the participants'
presentations and poster sessions, speakers from academia and
industry will give plenary talks. A company forum will also be
organized, enabling researchers and industry representatives to meet.
All practical information about the conference will be available on
the website. For further information, you can send an e-mail to:

Topics (non-exhaustive list):
- Phonetics and phonology
- Automatic processing of spoken natural language
- Speech production/perception
- Speech pathologies
- Speech acoustics
- Speech recognition and understanding
- Speech and language acquisition
- Applications with a spoken component (dialogue, indexing, ...)
- Prosody
- Linguistic diversity
- Deafness
- Gesture
Back to Top

9-36 . (2009-12-04) CfP Troisièmes Journées de Phonétique Clinique Aix en Provence France (french)


Third Clinical Phonetics Workshop (Troisièmes Journées de Phonétique Clinique)

Call for Papers
4-5 December 2009, Aix-en-Provence, France

These workshops follow the first and second clinical phonetics workshops, held in Paris in 2005 and Grenoble in 2007. Clinical phonetics brings together researchers, faculty, engineers, physicians and speech therapists: complementary professions pursuing the same goal, a better understanding of the processes of acquisition and dysfunction of speech and voice. This interdisciplinary approach aims to consolidate fundamental knowledge of spoken communication in healthy speakers and to better understand, assess, diagnose and treat speech and voice disorders in patients.

Papers will address phonetic studies of pathological speech and voice in adults and children. Conference topics include, but are not limited to:

   Disorders of the oro-pharyngo-laryngeal system
   Disorders of the perceptual system
   Cognitive and motor disorders
   Instrumentation and resources in clinical phonetics
   Modelling of pathological speech and voice
   Assessment and treatment of speech and voice pathologies

Selected contributions will be presented in one of two formats:

   Long talk: 20 minutes, for presenting completed work
   Short talk: 8 minutes, for presenting clinical observations,
   preliminary work or emerging issues, in order to encourage
   interdisciplinary exchange between phoneticians and clinicians

Submission format:
Submissions are abstracts written in French, at most one A4 page, Times New Roman, 12pt, single-spaced. Abstracts must be submitted in PDF format to the address below.

Submission deadline: 15 May 2009
Notification to authors: 1 July 2009

For any further information, contact the organizers.

Registration for JPC3 (opening 1 July 2009) will be open to all, whether presenting or not.

Back to Top

9-37 . (2010-05-11) Speech prosody 2010 Chicago IL USA

Every Language, Every Style: Globalizing the Science of Prosody
Call For Papers

Prosody is, as far as we know, a universal characteristic of human speech, founded on the cognitive processes of speech production and perception.  Adequate modeling of prosody has been shown to improve human-computer interfaces, to aid clinical diagnosis, and to improve the quality of second language instruction, among many other applications.

Speech Prosody 2010, the fifth international conference on speech prosody, invites papers addressing any aspect of the science and technology of prosody.  Speech Prosody is the only recurring international conference focused on prosody as an organizing principle for the social, psychological, linguistic, and technological aspects of spoken language.  Speech Prosody 2010 seeks, in particular, to discuss the universality of prosody.  To what extent can the observed scientific and technological benefits of prosodic modeling be ported to new languages, and to new styles of spoken language?  Toward this end, Speech Prosody 2010 especially welcomes papers that create or adapt models of prosody to languages, dialects, sociolects, and/or communicative situations that are inadequately addressed by the current state of the art.


Speech Prosody 2010 will include keynote presentations, oral sessions, and poster sessions covering topics including:

* Prosody of under-resourced languages and dialects
* Communicative situation and speaking style
* Dynamics of prosody: structures that adapt to new situations
* Phonology and phonetics of prosody
* Rhythm and duration
* Syntax, semantics, and pragmatics
* Meta-linguistic and para-linguistic communication
* Signal processing
* Automatic speech synthesis, recognition and understanding
* Prosody of sign language
* Prosody in face-to-face interaction: audiovisual modeling and analysis
* Prosodic aspects of speech and language pathology
* Prosody in language contact and second language acquisition
* Prosody and psycholinguistics
* Prosody in computational linguistics
* Voice quality, phonation, and vocal dynamics


Prospective authors are invited to submit full-length, four-page papers, including figures and references. All Speech Prosody papers will be handled and reviewed electronically.


The Doubletree Hotel Magnificent Mile is located two blocks from North Michigan Avenue and three blocks from Navy Pier, at the cultural center of Chicago.  The Windy City has been a center of American innovation since the mid-nineteenth century, when a railway link connected Chicago to the west coast, civil engineers reversed the direction of the Chicago River, Chicago financiers invented commodity corn (maize), and the Great Chicago Fire destroyed almost every building in the city. The Magnificent Mile hosts scores of galleries and museums, and hundreds of world-class restaurants and boutiques.


Submission of Papers:                                              October 15, 2009
Notification of Acceptance:                                        December 15, 2009
Conference:                                                        May 11-14, 2010

Back to Top

9-38 . (2010-05-17) 7th Language Resources and Evaluation Conference

 The 7th edition of the Language Resources and Evaluation Conference will take place in Valletta (Malta) on May 17-23, 2010.
More information will be available soon on:

Back to Top