Contents

1 . Editorial

 Dear members,

 

You are facing a very important deadline: your contributions to Interspeech are due by April 17th. We are expecting many important contributions for our major annual event. 

Don't forget to inform me about important events in speech science and technology that you are aware of and that could be of interest to your ISCA colleagues.

 Prof. em. Chris Wellekens 

Institut Eurecom

Sophia Antipolis
France 

public@isca-speech.org

 

 
 
Back to Top

2 . ISCA News

 

Back to Top

2-1 . DEADLINE: Fifth Christian Benoit Award 2009

                              Fifth Christian Benoît Award

                                Deadline    April 20th, 2009

The Christian Benoît Award is delivered periodically by the Association Christian Benoit (**). It is given to promising young scientists in the domain of Speech Communication. The Award provides financial support for the development of a multi-media project promoting the work of these young scientists, and is valued at 7,500 Euros(*).
The first award was delivered to Tony Ezzat from MIT in June 2000, for his research in audiovisual speech synthesis. The second award went to Johanna Barry from the University of Melbourne in September 2002, for her work on the acquisition of lexical tones in profoundly hearing-impaired speakers using a cochlear implant. The third award went to Olov Engwal from KTH in Stockholm in October 2004, for the elaboration of ARTUR, a multi-modal articulation tutor able to give automatic feedback to real users. The fourth award went to Susanne Fuchs from ZAS in Berlin in August 2007, for her study of the influence of vocal tract geometry on speaker-specific articulatory control strategies and acoustic properties, and on inter-speaker variability in vowel production.
The fifth award will be delivered this year to ANY PROJECT IN THE FIELD OF AUDIOVISUAL SPEECH PROCESSING and FACE-TO-FACE COMMUNICATION. Candidates should be in the final stages of their doctoral research or within five years of completing their PhD.
The Christian Benoît award will offer financial support to develop a multi-media project which (a) demonstrates the candidate's research in a way that helps launch that candidate's career, and (b) leverages electronic publishing technologies intelligently so as to facilitate the widest possible dissemination of this content.
In the application, the candidate should provide
-- a statement of research interest,
-- a detailed curriculum vitae, and
-- a description of the proposed multi-media project including a presentation of the scientific and/or pedagogical objectives and of the methodological aspects, as well as a detailed description of the provisional budget.
If the project already exists, a copy or link should be provided along with the application.
Applications should be sent to Pascal Perrier (Pascal.Perrier@gipsa-lab.inpg.fr) and received by Monday April 20th, 2009. Electronic submissions are mandatory.       
Back to Top

3 . SIG's News: The SLaTE SIG

The SLaTE SIG

 

This special interest group was created in mid-September 2006 at Interspeech. The purpose of the International Speech Communication Association (ISCA) Special Interest Group on Speech and Language Technology in Education (SLaTE) is to promote interest in the use of speech and natural language processing for education; to provide ISCA members with a special interest in speech and language technology in education with a means of exchanging news of recent research developments and other matters of interest in the field; and to sponsor meetings and workshops on that subject.

The SIG held its first workshop in the beautiful Laurel Highlands, in Farmington, PA, USA, in October 2007. There were two keynote addresses: one by Nick Ellis and Pamela Bogart, and one by Stephanie Seneff. The workshop included oral paper presentations and demonstration sessions over two and a half days. The SIG also elected officers: Maxine Eskenazi is chair of SLaTE and Martin Russell is secretary. The proceedings of the workshop are available from ISCA.

The next SLaTE ITRW meeting will be held at Wroxall Abbey Estate, Warwickshire, England, in September 2009, just before Interspeech. The call for papers and general announcement can be found at the SLaTE website, www.sigslate.org.

There is an upcoming special issue of Speech Communication on the topic of Spoken Language Processing for Education. The guest editors, Maxine Eskenazi, Abeer Alwan, and Helmer Strik, received 28 proposed papers, the largest number of papers to date that have been proposed for a special issue of Speech Communication.

The group is in the process of expanding its website to include links to work of its members. If you are interested in having your work linked to the website, you can send your link to max@cmu.edu.

 

 

Back to Top

4 . Future ISCA Conferences and Workshops (ITRW)

 

Back to Top

4-1 . (2009-06-25) ISCA Tutorial and Research Workshop on NON-LINEAR SPEECH PROCESSING

An ISCA Tutorial and Research Workshop on NON-LINEAR SPEECH PROCESSING (NOLISP'09)
25/06/2009 - DeadLine: 2009-03-15
Vic Catalonia Espagne
http://nolisp2009.uvic.cat
After the success of NOLISP'03 held in Le Croisic, NOLISP'05 in Barcelona and NOLISP'07 in Paris, we are pleased to present NOLISP'09, to be held at the University of Vic (Catalonia, Spain) on June 25-27, 2009. The workshop will feature invited lectures by leading researchers as well as contributed talks. The purpose of NOLISP'09 is to present and discuss novel ideas, work and results related to alternative techniques for speech processing, which depart from mainstream approaches. Prospective authors are invited to submit a 3 to 4 page paper proposal in English, which will be evaluated by the Scientific Committee. Final papers will be due one month after the workshop, to be included in the CD-ROM proceedings.
Contributions are expected in (but not restricted to) the following areas: non-linear approximation and estimation, non-linear oscillators and predictors, higher-order statistics, independent component analysis, nearest neighbours, neural networks, decision trees, non-parametric models, dynamics of non-linear systems, fractal methods, chaos modelling, and non-linear differential equations.
All fields of speech processing are targeted by the workshop, namely: speech production, speech analysis and modelling, speech coding, speech synthesis, speech recognition, speaker identification/verification, speech enhancement/separation, speech perception, etc.
ADDITIONAL INFORMATION
Proceedings will be published in Springer-Verlag's Lecture Notes in Computer Science (LNCS) series. LNCS is published, in parallel to the printed books, in full-text electronic form. All contributions should be original and must not have been previously published, nor be under review for presentation elsewhere. A special issue of Speech Communication (Elsevier) on “Non-Linear and Non-Conventional Speech Processing” will also be published after the workshop. Detailed instructions for submission to NOLISP'09 and further information will be available at the conference Web site (http://nolisp2009.uvic.cat).
IMPORTANT DATES:
* March 15, 2009 - Submission (full papers)
* April 30, 2009 - Notification of acceptance
* September 30, 2009 - Final (revised) paper
Back to Top

4-2 . (2009-09-06) CfP INTERSPEECH 2009 Brighton UK

Interspeech 2009 - Call for Papers
www.interspeech2009.org
Interspeech is the world's largest and most comprehensive
conference on Speech Science and Speech Technology. Interspeech
2009 will be held in Brighton, UK, 6-10 September 2009, and its
theme is Speech and Intelligence. We invite you to submit
original papers in any related area, including (but not limited
to):
Human Speech Production, Perception And Communication
* Human speech production
* Human speech perception
* Phonology and phonetics
* Discourse and dialogue
* Prosody (production, perception, prosodic structure)
* Emotion and Expression
* Paralinguistic and nonlinguistic cues (e.g. emotion and
expression)
* Physiology and pathology
* Spoken language acquisition, development and learning
Speech And Language Technology
* Automatic Speech recognition
* Speech analysis and representation
* Audio segmentation and classification
* Speech enhancement
* Speech coding and transmission
* Speech synthesis and spoken language generation
* Spoken language understanding
* Accent and language identification
* Cross-lingual and multi-lingual processing
* Multimodal/multimedia signal processing
* Speaker characterisation and recognition
Spoken Language Systems And Applications
* Speech Dialogue systems
* Systems for information retrieval from spoken documents
* Systems for speech translation
* Applications for aged and handicapped persons
* Applications for learning and education
* Hearing prostheses
* Other applications
Resources, Standardisation And Evaluation
* Spoken language resources and annotation
* Evaluation and standardisation
---------------------------------------------------
Paper Submission
Papers for the Interspeech 2009 proceedings are up to four pages
in length and should conform to the format given in the paper
preparation guidelines and author kits, which are now available
at www.interspeech2009.org
Authors are asked to categorize their submitted papers as being
one of:
N: Completed empirical studies reporting novel research findings
E: Exploratory studies
P: Position papers
Authors will also have to declare that their contribution is
original and not being submitted for publication elsewhere (e.g.
another conference, workshop, or journal).
Papers must be submitted via the on-line paper submission
system. The deadline for submitting a paper is 17th April 2009.
This date will not be extended.
Interspeech2009 Organising Committee
 
Back to Top

4-3 . (2010-09-26) INTERSPEECH 2010 Chiba Japan

Chiba, Japan
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."

Back to Top

4-4 . (2011-08-27) INTERSPEECH 2011 Florence Italy

Interspeech 2011

Palazzo dei Congressi, Florence, Italy, August 27-31, 2011.

Organizing committee

Piero Cosi (General Chair),

Renato De Mori (General Co-Chair),

Claudia Manfredi (Local Chair),

Roberto Pieraccini (Technical Program Chair),

Maurizio Omologo (Tutorials),

Giuseppe Riccardi (Plenary Sessions).

More information www.interspeech2011.org

Back to Top

5 . Workshops and conferences supported (but not organized) by ISCA

 

Back to Top

5-1 . (2009-12-13) ASRU 2009

IEEE ASRU2009 Automatic Speech Recognition and Understanding Workshop
Merano, Italy, December 13-17, 2009 http://www.asru2009.org/
 
The eleventh biennial IEEE workshop on Automatic Speech Recognition and Understanding (ASRU) will be held on December 13-17, 2009. The ASRU workshops have a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding.
Workshop topics:
- automatic speech recognition and understanding
- human speech recognition and understanding
- speech to text systems
- spoken dialog systems
- multilingual language processing
- robustness in ASR
- spoken document retrieval
- speech-to-speech translation
- spontaneous speech processing
- speech summarization
- new applications of ASR
The workshop program will consist of invited lectures, oral and poster presentations, and panel discussions. Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the ASRU 2009 website http://www.asru2009.org/. All papers will be handled and reviewed electronically. The website will provide you with further details. Please note that the submission dates for papers are strict deadlines.
IMPORTANT DATES
Paper submission deadline July 15, 2009
Paper notification of acceptance September 3, 2009
Demo session proposal deadline September 24, 2009
Early registration deadline October 7, 2009
Workshop December 13-17, 2009
Please note that the number of attendees will be limited and priority will be given to paper presenters. Registration will be handled via the ASRU 2009 website, http://www.asru2009.org/, where more information on the workshop will be available.
 
General Chairs: Giuseppe Riccardi, U. Trento, Italy; Renato De Mori, U. Avignon, France
Technical Chairs: Jeff Bilmes, U. Washington, USA; Pascale Fung, HKUST, Hong Kong China; Shri Narayanan, USC, USA; Tanja Schultz, U. Karlsruhe, Germany
Panel Chairs: Alex Acero, Microsoft, USA; Mazin Gilbert, AT&T, USA
Demo Chairs: Alan Black, CMU, USA; Piero Cosi, CNR, Italy
Publicity Chairs: Dilek Hakkani-Tur, ICSI, USA; Isabel Trancoso, INESC-ID/IST, Portugal
Publication Chair: Giuseppe di Fabbrizio, AT&T, USA
Local Chair: Maurizio Omologo, FBK-IRST, Italy
Back to Top

5-2 . (2009-12-14) 6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications MAVEBA 2009

Università degli Studi di Firenze, Italy
Department of Electronics and Telecommunications
6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications
MAVEBA 2009
December 14-16, 2009
Firenze, Italy
http://maveba.det.unifi.it
Speech is the primary means of communication among humans, and results from the complex interaction between vocal fold vibration at the larynx and voluntary articulator movements (i.e. mouth, tongue, jaw, etc.). However, only recently has research focused on biomedical applications. Since 1999, the MAVEBA Workshop has been organised every two years, aiming to stimulate contacts between specialists active in clinical, research and industrial developments in the area of voice signal and image analysis for biomedical applications. This sixth Workshop will offer the participants an interdisciplinary platform for presenting and discussing new knowledge in the field of models, analysis and classification of voice signals and images, concerning adult, singing and children's voices. Modelling the normal and pathological voice source and analysing healthy and pathological voices are among the main fields of research. The aim is to extract the main voice characteristics, together with their deviation from “healthy conditions”, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.
SCIENTIFIC PROGRAM
- linear and non-linear models of voice signals
- physical and mechanical models
- aids for the disabled
- measurement devices (signal and image)
- prostheses
- robust techniques for voice and glottal analysis in the time, frequency, cepstral and wavelet domains
- neural networks, artificial intelligence and other advanced methods for pathology classification
- linguistic and clinical phonetics
- new-born infant cry analysis
- neurological dysfunction
- multiparametric/multimodal analysis
- imaging techniques (laryngography, videokymography, fMRI)
- voice enhancement
- protocols and database design
- industrial applications in the biomedical field
- singing voice
- speech/hearing interactions
DEADLINES
30 May 2009: Submission of extended abstracts (1-2 pages, 1 column) / special session proposals
30 July 2009: Notification of paper acceptance
30 September 2009: Final full paper submission (4 pages, 2 columns, pdf format) and early registration
14-16 December 2009: Conference
SPONSORS
ENTE CRF - Ente Cassa di Risparmio di Firenze
IEEE EMBS - IEEE Engineering in Medicine and Biology Society
ELSEVIER Eds. - Biomedical Signal Processing and Control
ISCA - International Speech Communication Association
A.I.I.M.B. - Associazione Italiana di Ingegneria Medica e Biologica
COST Action 2103 - European Cooperation in Science & Technology Research
FURTHER INFORMATION
Claudia Manfredi – Conference Chair
Department of Electronics and Telecommunications
Via S. Marta 3, 50139 Firenze, Italy
Phone: +39-055-4796410
Fax: +39-055-494569
E-mail: claudia.manfredi@unifi.it
Piero Bruscaglioni
Department of Physics
Polo Scientifico Sesto Fiorentino, 50019 Firenze, Italy
Phone: +39-055-4572038
Fax: +39-055-4572356
E-mail: piero.bruscaglioni@unifi.it
Back to Top

6 . Books, databases and software

 

Back to Top

6-1 . Books

This section lists recent books whose titles have been communicated by the authors or editors.
 
Some advertisements for recent books on speech are also included.
 
Book presentations are written by the authors, not by the newsletter editor or any voluntary reviewer.

Back to Top

6-1-1 . Proc. of the IEEE Special Issue on ADVANCES IN MULTIMEDIA INFORMATION RETRIEVAL

Proceedings of the IEEE
 
Special Issue on ADVANCES IN MULTIMEDIA INFORMATION RETRIEVAL
 
Volume 96, Number 4, April 2008
 
Guest Editors:
 
Alan Hanjalic, Delft University of Technology, Netherlands
Rainer Lienhart, University of Augsburg, Germany
Wei-Ying Ma, Microsoft Research Asia, China
John R. Smith, IBM Research, USA
 
Through carefully selected, invited papers written by leading authors and research teams, the April 2008 issue of Proceedings of the IEEE (v.96, no.4) highlights successes of multimedia information retrieval research, critically analyzes the achievements made so far and assesses the applicability of multimedia information retrieval results in real-life scenarios. The issue provides insights into the current possibilities for building automated and semi-automated methods as well as algorithms for segmenting, abstracting, indexing, representing, browsing, searching and retrieving multimedia content in various contexts. Additionally, future challenges that are likely to drive the research in the multimedia information retrieval field for years to come are also discussed.
 
 
 
Back to Top

6-1-2 . Computeranimierte Sprechbewegungen in realen Anwendungen

Computeranimierte Sprechbewegungen in realen Anwendungen
Authors: Sascha Fagel and Katja Madany
102 pages
Publisher: Berlin Institute of Technology
Year: 2008
Website http://www.ub.tu-berlin.de/index.php?id=1843
To learn more, please visit the corresponding IEEE Xplore site at
http://ieeexplore.ieee.org/xpl/tocresult.jsp?isYear=2008&isnumber=4472076&Submit32=Go+To+Issue
 
Back to Top

6-1-3 . Usability of Speech Dialog Systems Listening to the Target Audience

Usability of Speech Dialog Systems
Listening to the Target Audience
Series: Signals and Communication Technology
 
Hempel, Thomas (Ed.)
 
2008, X, 175 p. 14 illus., Hardcover
 
ISBN: 978-3-540-78342-8
Back to Top

6-1-4 . Speech and Language Processing, 2nd Edition

Speech and Language Processing, 2nd Edition
 
By Daniel Jurafsky, James H. Martin
 
Published May 16, 2008 by Prentice Hall.
Copyright 2009
Dimensions 7" x 9-1/4"
Pages: 1024
Edition: 2nd.
ISBN-10: 0-13-187321-0
ISBN-13: 978-0-13-187321-6
An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology – at all levels and with all modern technologies – this book takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. KEY TOPICS: Builds each chapter around one or more worked examples demonstrating the main idea of the chapter, using the examples to illustrate the relative strengths and weaknesses of various approaches. Adds coverage of statistical sequence labeling, information extraction, question answering and summarization, advanced topics in speech recognition, speech synthesis. Revises coverage of language modeling, formal grammars, statistical parsing, machine translation, and dialog processing. MARKET: A useful reference for professionals in any of the areas of speech and language processing.
  
 
 
Back to Top

6-1-5 . Advances in Digital Speech Transmission

Advances in Digital Speech Transmission
Editors: Rainer Martin, Ulrich Heute and Christiane Antweiler
Publisher: Wiley&Sons
Year: 2008
Back to Top

6-1-6 . Sprachverarbeitung -- Grundlagen und Methoden der Sprachsynthese und Spracherkennung

Title: Sprachverarbeitung -- Grundlagen und Methoden 
       der Sprachsynthese und Spracherkennung 
Authors: Beat Pfister, Tobias Kaufmann 
Publisher: Springer 
Year: 2008 
Website: http://www.springer.com/978-3-540-75909-6 
Back to Top

6-1-7 . Digital Speech Transmission

Digital Speech Transmission
Authors: Peter Vary and Rainer Martin
Publisher: Wiley&Sons
Year: 2006
Back to Top

6-1-8 . Distant Speech Recognition

Distant Speech Recognition, Matthias Wölfel and John McDonough (2009), J. Wiley & Sons.
 
Website: http://www.distant-speech-recognition.com 
 
In the very recent past, automatic speech recognition (ASR) systems have attained acceptable performance when used with speech captured with a head-mounted or close-talking microphone (CTM). The performance of conventional ASR systems, however, degrades dramatically as soon as the microphone is moved away from the mouth of the speaker. This degradation is due to a broad variety of effects that are not found in CTM speech, including background noise, overlapping speech from other speakers, and reverberation. While conventional ASR systems underperform for speech captured with far-field sensors, there are a number of techniques developed in other areas of signal processing that can mitigate the deleterious effects of noise and reverberation, as well as separating speech from overlapping speakers. Distant Speech Recognition presents a contemporary and comprehensive description of both theoretic abstraction and practical issues inherent in the distant ASR problem.
Back to Top

6-1-9 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods

Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
Joseph Keshet and Samy Bengio, Editors
John Wiley & Sons
March, 2009
Website:  Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
 
About the book:
This is the first book dedicated to uniting research related to speech and speaker recognition based on the recent advances in large margin and kernel methods. The first part of the book presents theoretical and practical foundations of large margin and kernel methods, from support vector machines to large margin methods for structured learning. The second part of the book is dedicated to acoustic modeling of continuous speech recognizers, where the grounds for practical large margin sequence learning are set. The third part introduces large margin methods for discriminative language modeling. The last part of the book is dedicated to the application of keyword-spotting, speaker
verification and spectral clustering. 
Contributors: Yasemin Altun, Francis Bach, Samy Bengio, Dan Chazan, Koby Crammer, Mark Gales, Yves Grandvalet, David Grangier, Michael I. Jordan, Joseph Keshet, Johnny Mariéthoz, Lawrence Saul, Brian Roark, Fei Sha, Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebo. 
 
 
 
Back to Top

6-1-10 . Some aspects of Speech and the Brain.

Some aspects of Speech and the Brain. 
Susanne Fuchs, Hélène Loevenbruck, Daniel Pape, Pascal Perrier
Editions Peter Lang, January 2009
 
What happens in the brain when humans are producing speech or when they are listening to it? This is the main focus of the book, which includes a collection of 13 articles written by researchers at some of the foremost European laboratories in the fields of linguistics, phonetics, psychology, cognitive sciences and neurosciences.
Back to Top

6-2 . Database providers

 

Back to Top

6-2-1 . Join ELRA

Join ELRA

 

2009 ELRA membership is open to all institutions wishing to benefit from ELRA services. To join, please fill in the online membership form here:

 

http://www.elra.info/scripts/adhform.php

 

Members’ Benefits

 

 

By joining us and becoming an ELRA member, you will benefit from:

 

 

-         Price discounts on the resources, up to 70% reduction on some items. In addition, discounts on related technical publications, commercial reports, etc.

-         Information on other databases available or databases being developed.

-         Legal and contractual assistance. This could be when you are negotiating for a resource with a producer or if you need information on other contractual or legal matters.

 

-         Various publications: Special News and Updates on association affairs are issued via e-mail in the monthly Members Bulletin. The ELRA Web site features a section for members only, where internal reports and statistics can be found.

 

-         First-hand information on the results from the ELRA studies.

-         Discount on LREC registration fees

 

Contact: Valerie Mapelli (mailto:mapelli@elda.org)

 

 

***************************************************************************

 

 

The European Language Resources Association (ELRA) is a non-profit making organisation founded under the auspices of the European Commission in 1995, with the mission of providing a clearing house for language resources and promoting Human Language Technologies (HLT). Find out more about ELRA by visiting our web site: http://www.elra.info

 

 

 

ELRA’s missions are to promote Language Resources (LRs) for the Human Language Technology (HLT) sector, and to evaluate language engineering technologies. To achieve these two major missions, we offer the following range of services: identification of LRs, production of LRs, validation of LRs, evaluation of systems, products, tools, etc. related to LRs, distribution of LRs, promotion of best practices and standardisation.

 

 

 

To play a determining role as a HLT community leader, ELRA also supports the infrastructure for evaluation campaigns and the development of a scientific field of LRs and evaluation, e.g. via the organisation of the LREC conference (http://www.lrec-conf.org).

 

 

 

Back to Top

6-2-2 . ELRA - Language Resources Catalogue - Update

*****************************************************************
ELRA - Language Resources Catalogue - Update
*****************************************************************

ELRA is happy to announce that 1 new Evaluation Package and 2 Speech Resources are now available in its catalogue:
ELRA-E0033 CHIL 2007 Evaluation Package
The CHIL Seminars are scientific presentations given by students, faculty members or invited speakers in the field of multimodal interfaces and speech processing. The language is European English spoken by non-native speakers. The recordings comprise the following: videos of the speaker and the audience from 4 fixed cameras, frontal close-ups of the speaker, and close-talking and far-field microphone data of the speaker’s voice and background sounds.
The CHIL 2007 Evaluation Package consists of:
1) A set of audiovisual recordings of interactive seminars. The recordings were done between June and September 2006 according to the “CHIL Room Setup” specification.
2) Video annotations.
3) Orthographic transcriptions.
For more information, see: http://catalog.elra.info/product_info.php?products_id=1092

ELRA-S0297 Hungarian Speecon Database
The Hungarian Speecon database comprises the recordings of 555 adult Hungarian speakers and 50 child Hungarian speakers, who uttered respectively over 290 items and 210 items (read and spontaneous).
For more information, see: http://catalog.elra.info/product_info.php?products_id=1094

ELRA-S0298 Czech Speecon Database
The Czech Speecon database comprises the recordings of 550 adult Czech speakers and 50 child Czech speakers, who uttered respectively over 290 items and 210 items (read and spontaneous).
For more information, see: http://catalog.elra.info/product_info.php?products_id=1095

For more information on the catalogue, please contact Valérie Mapelli mailto:mapelli@elda.org

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html 

Back to Top

6-2-3 . LDC News

 

In this month's newsletter, the Linguistic Data Consortium (LDC) would like to announce the availability of three new publications (LDC2009T05, LDC2009T06 and LDC2009T07), highlight free LDC resources, and provide information on commercial use of LDC data.

 


New Publications

(1) 2008 NIST Metrics for Machine Translation (MetricsMATR08) Development Data contains data, reference translations, and software used for NIST MetricsMATR.  NIST MetricsMATR is a series of research challenge events for machine translation (MT) metrology, promoting the development of innovative, even revolutionary, MT metrics that correlate highly with human assessments of MT quality. In this program, participants submit their metrics to the National Institute of Standards and Technology (NIST). NIST runs those metrics on certain held-back test data for which it has human assessments measuring quality and then calculates correlations between the automatic metric scores and the human assessments.

In the NIST Metrics for Machine Translation 2008 Evaluation (MetricsMATR08), participants received as development data a subset of the materials used in the NIST Open MT06 evaluation, specifically, human reference translations, system translations, and human assessments of adequacy and preference. The source data was comprised of twenty-five Arabic language newswire documents with a total of 249 segments. The data in each segment consisted of four human reference translations in English and system translations from eight different MT06 machine translation systems. In addition to the data and reference translations, this release includes software tools for evaluation and reporting and documentation describing how the human assessments were obtained and how they are represented in the data. The evaluation plan contains further information and rules on the use of this data.

The MetricsMATR program seeks to overcome several drawbacks to the methods employed for the evaluation of MT technology. Currently, automatic metrics have not yet proved able to predict the usefulness and reliability of MT technologies with confidence. Nor have automatic metrics demonstrated that they are meaningful in target languages other than English. Human assessments, however, are expensive, slow, subjective and difficult to standardize. These problems, and the need to overcome them through the development of improved automatic (or even semi-automatic) metrics, have been a constant point of discussion at past NIST MT evaluation events. MetricsMATR aims to provide a platform to address these shortcomings.

2008 NIST Metrics for Machine Translation (MetricsMATR08) Development Data is distributed via web download.

2009 Subscription Members will automatically receive two copies of this corpus on disc. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$150.

*

 

(2) GALE Phase 1 Chinese Broadcast Conversation Parallel Text - Part 2 contains transcripts and English translations of 24 hours of Chinese broadcast conversation programming from China Central TV (CCTV), Phoenix TV and Voice of America (VOA). It does not contain the audio files from which the transcripts and translations were generated. This release, along with other corpora, was used as training data in Phase 1 (year 1) of the DARPA-funded GALE program. GALE Phase 1 Chinese Broadcast Conversation Parallel Text - Part 1 was released in January 2009.

A manual selection procedure was used to choose data appropriate for the GALE program, namely, conversation (talk) programs focusing on current events. Stories on topics such as sports, entertainment and business were excluded from the data set. 

The selected audio snippets were carefully transcribed by LDC annotators and professional transcription agencies following LDC's Quick Rich Transcription specification. Manual sentence units/segments (SU) annotation was also performed as part of the transcription task. Three types of end of sentence SU were identified: statement SU, question SU, and incomplete SU.

After transcription and SU annotation, files were reformatted into a human-readable translation format and assigned to professional translators for careful translation. Translators followed LDC's GALE Translation guidelines which describe the makeup of the translation team, the source data format, the translation data format, best practices for translating certain linguistic features (such as names and speech disfluencies) and quality control procedures applied to completed translations.

GALE Phase 1 Chinese Broadcast Conversation Parallel Text - Part 2 is distributed via web download.

2009 Subscription Members will automatically receive two copies of this corpus on disc. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1500.

*


(3) The Unified Linguistic Annotation Text Collection consists of two datasets:  the Language Understanding Annotation Corpus (LDC2009T10) and Reflex Entity Translation Training Dev/Test (LDC2009T11).  Most recent annotation efforts for language have focused on small pieces of the larger problem of semantic annotation rather than producing a single unified representation. The Unified Linguistic Annotation (ULA) project, sponsored by the National Science Foundation, seeks to integrate into one framework different layers of annotation (e.g., semantics, discourse, temporal, opinions) using various existing resources, including PropBank, NomBank, TimeBank, Penn Discourse Treebank and coreference and opinion annotations. The project represents a concerted effort of researchers from several institutions to develop a large word corpus with balanced and annotated data. The Unified Linguistic Annotation Text Collection is provided as a resource for the ULA effort. It consists of two datasets:

  • The Language Understanding Annotation Corpus (LDC2009T10). The Language Understanding Annotation Corpus was developed at the Johns Hopkins Center of Excellence in Human Language Technology.  It consists of over 9000 words of English text (6949 words) and Arabic text (2183 words) annotated for committed belief, event and entity, coreference, dialog acts and temporal relations. The materials were chosen from various sources to represent "informal input," that is, text that contains colloquial forms. The documents in the corpus include excerpts from newswire stories, telephone conversation transcripts, emails, contracts and written instructions.
  • REFLEX Entity Translation Training/DevTest (LDC2009T11). REFLEX Entity Translation Training/DevTest is the complete set of training data and development test data for the 2007 REFLEX Entity Translation evaluation sponsored by the National Institute of Standards and Technology (NIST). It contains approximately 67.5K words of newswire and weblog text for each of English, Chinese and Arabic (or approximately 22.5K words in each language) translated into each of the other two languages. The data is annotated for entities and TIMEX2 extents and normalization.


Unified Linguistic Annotation Text Collection is distributed via web download.

2009 Subscription Members will automatically receive two copies of this collection on disc. 2009 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data by completing the LDC User Agreement for Non-members.  The agreement can be faxed to +1 215 573 2175 or scanned and emailed to this address.  Please indicate on the license whether you are requesting the entire collection (LDC2009T07) or just one dataset (LDC2009T10 or LDC2009T11).  The collection is being made available at no charge.

Additional Free LDC Resources

LDC is pleased to distribute the Unified Linguistic Annotation Text Collection (LDC2009T07) corpora at no cost to support the work of the ULA project. As mentioned above, to license a copy of this data, non-members should complete the LDC User Agreement for Non-members and fax to +1 215 573 2175 or scan and email to this address.  On the heels of the release of the ULA corpora, LDC would like to highlight other resources which are available at no cost.  Free grant-covered copies of the following Talkbank databases can be licensed from LDC:

 

Further information, including additional free datasets such as TimeBank 1.2, and useful tools such as LDC's parallel text sentence aligner, Champollion, can be found in our What's New! What's Free! Archive.

Membership Mailbag - Commercial Use of LDC data

LDC's Membership office responds to a few thousand emailed queries a year, and, over time, we've noticed that some questions tend to crop up with regularity.  To address the questions that you, our data users, have asked, we'd like to continue our Membership Mailbag series of newsletter articles.  This month we will focus on commercial rights to LDC data, with an emphasis on the LDC For-Profit membership.

To help clarify commercial use of LDC data, let's look at a few examples in which a commercial organization licenses LDC data.  In the first scenario, a company, TryFirst JoinLater LLC., licenses data as a non-member.  At this point, the company is not an LDC member and cannot use LDC data for any commercial purpose.   Some years later, TryFirst JoinLater decides to join LDC as a For-Profit member.  Do they now have commercial rights to the data licensed as a non-member?  Yes, by joining the LDC, TryFirst JoinLater gains commercial rights to any data already licensed, unless those rights are otherwise restricted by a corpus-specific user license.  In short, a commercial organization can first license data as a non-member for research purposes and then join LDC to gain commercial rights to that data.

Second scenario.  Another company, Join Only Once, Ltd., decides to join LDC as a For-Profit Member for Membership Year 2009.  What data will this company be able to use for commercial purposes?  As a 2009 member, Join Only Once will gain commercial rights to data from the year that they have joined, that is, Membership Year 2009, unless otherwise restricted by a corpus-specific user license.  Furthermore, while a member of the current year, Join Only Once can license data for commercial use from the closed Membership Years (1993-2007) at the Reduced Licensing Fee. Join Only Once, Ltd. retains ongoing commercial rights to data it licenses as a For-Profit member. Fast forward a few years - Join Only Once has not renewed their LDC membership but they would like to obtain some additional data not from their Membership Year.  If Join Only Once does not renew their LDC membership, they will not have a commercial license to any new data obtained after their Membership Year has ended. 


Which leads us to our final scenario.  A third company, Best LDC Member Ever! Corporation, has been a For-Profit LDC member since our inception in 1992.  Does this company have commercial rights to all LDC data?  No, there are a few caveats to note. All members are reminded to consult corpus-specific license agreements for limitations, including commercial restrictions, on the use of certain corpora. In the case of a small group of corpora that includes American National Corpus (ANC) Second Release (LDC2005T35), Buckwalter Arabic Morphological Analyzer Version 2.0 (LDC2004L02) and all CSLU corpora, commercial licenses must be obtained separately from the owners of the data. A full list of corpus-specific user licenses can be found on our License Agreements page. 

Got a question?  About LDC data?  Forward it to ldc@ldc.upenn.edu.  The answer may appear in a future Membership Mailbag article.

Back to Top

7 . Job openings

We invite all laboratories and industrial companies that have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free (also have a look at http://www.isca-speech.org/jobs.html as well as http://www.elsnet.org/ for jobs).

The ads will be automatically removed from ISCApad after 6 months. Informing the ISCApad editor when a position has been filled will avoid irrelevant emails between applicants and proposers.


Back to Top

7-1 . (2008-09-18) Senior staff position at ICSI Berkeley CA USA

SENIOR STAFF OPENING AT THE ICSI SPEECH GROUP

 

The International Computer Science Institute (ICSI) invites applications for a senior staff position in its speech research group. Successful applicants must have significant post-PhD experience and a world-class research reputation. Candidates must also have a demonstrated ability to grow and manage a strong research effort. A successful track record of obtaining funding in the chosen area is essential. 

 

The ICSI Speech Group (including its predecessor, the ICSI Realization Group) has been a source of novel approaches to speech processing since 1988. ICSI’s speech group is well known for its efforts in speech recognition (particularly in neural network approaches and novel forms of feature extraction), as well as in speaker recognition, diarization, and speech understanding. It has close ties to research efforts in machine translation on the Berkeley campus, and to the STAR lab at SRI for the complete range of its research. It also works closely with several European labs, particularly IDIAP in Switzerland and the University of Edinburgh.

 

ICSI is an independent not-for-profit Institute located a few blocks from the Berkeley campus of the University of California. It is closely affiliated with the University, and particularly with the Electrical Engineering and Computer Science (EECS) Department. Students, faculty, and administrative colleagues from the University all play a key role at the Institute. In addition to its Speech Group, areas of current strength in the Institute include: Artificial Intelligence (primarily in natural language), Internet research (primarily in architecture and internet security), and Algorithms (primarily associated with problems in bioinformatics and networking). We also have new activities in Computer Vision and Computer Architecture. See http://www.icsi.berkeley.edu to learn more about ICSI.

 

Applications should include a cover letter, a vita, the names of at least 3 references (with both postal and email addresses), and a research statement. Applications should be sent by email to speechjob@icsi.berkeley.edu and by postal mail to

 

Nelson Morgan (re Senior Search)

ICSI

1947 Center Street, Suite 600

Berkeley, CA 94704

 

ICSI is an Affirmative Action/Equal Opportunity Employer. Applications from women and minorities are especially encouraged. Hiring is contingent on eligibility to work in the United States.

Back to Top

7-2 . (2008-10-30) Programmer Analyst Position at LDC

                                                          Programmer Analyst Position at LDC
The Linguistic Data Consortium (LDC) at the University of Pennsylvania, Philadelphia, PA has an immediate opening for a full-time programmer analyst.
 
Programmer Analyst – Publications Programmer (#081025790)
 
Duties: Position will have primary responsibility for developing, implementing and managing data processing systems required to coordinate and prepare publications of language resources used for human language technology research and technology development.  Such resources include video, computer-readable speech, software and text data that are distributed via media and internet.  Position will  communicate with external data providers and internal project managers to acquire raw source material and to schedule releases; perform quality assessment of large data collections and render analyses/descriptions of their formats; create or adapt software tools to condition data to a uniform format and level of quality (e.g., eliminating corrupted data, normalizing data, etc.); validate quality control standards to published data and verify results; document initial and final data formats; review author documentation and supporting materials; create additional documentation as needed; and master and replicate publications. Position will also maintain the publications catalog system, the publications inventory, the archive of publishable and published data and the publication equipment, software and licenses.  Position requires attention to detail and is responsible for managing multiple short-term projects.
 
For further information on the duties and qualifications for this position, or to apply online please visit http://jobs.hr.upenn.edu/; search postings for the reference number indicated above.
 
Penn offers an excellent benefits package including medical/dental, retirement plans, tuition assistance and a minimum of 3 weeks paid vacation per year. The University of Pennsylvania is an affirmative action/equal opportunity employer.
 
Position contingent upon funding. For more information about LDC and the programs we support, visit http://www.ldc.upenn.edu/.
 
Back to Top

7-3 . (2008-11-17) Volunteering at ISCA Student advisory committee

 Announcement #1: ISCA-SAC Call for Volunteers
 
The ISCA Student Advisory Committee (ISCA-SAC) is seeking student volunteers to help with several interesting projects, such as transcribing interviews from the Saras Institute, planning and organizing student events at ISCA-sponsored conferences and workshops, increasing awareness of speech and language research among undergraduate and junior graduate students, assisting with the website redesign to facilitate interaction with Google Scholar, and collecting resources (e.g., conferences, theses, job listings, speech labs, etc.) for the isca-students.org website, to name a few.
 
There are many small tasks that can be done, each of which would only take up a few hours. Unless you are interested in becoming a long-term volunteer, no further commitment is required. If interested, please contact the ISCA-SAC Volunteer Coordinator at: vo lun te er [at] isca-students [dot] org.
 
Announcement #2: ISCA-SAC Logo Contest
 
The ISCA Student Advisory Committee is searching for a new logo. This is your chance to unleash your artistic side and enter the ISCA-SAC Logo Competition. All students are invited to participate, and a prize (still to be determined) will be awarded to the winner, not to mention the prestige of having your logo posted on the isca-students.org website for the world to see.
 
The deadline for submissions is March 31st, 2009. The new Logo will be unveiled during the Interspeech 2009 conference in the form of merchandise embedded with the new logo (e.g., mugs, pens, etc.).

If interested, please send your submissions to: logocontest [at] isca-students.org  

Back to Top

7-4 . (2008-11-20) 12 PhD Positions and 2 Post Doc Positions available in SCALE (EU Marie Curie)

12 PhD Positions and 2 Post Doc Positions available in

 

the Marie Curie International Training Network on

 

Speech Communication with Adaptive LEarning (SCALE)

 

SCALE is a cooperative project between

 

·        IDIAP Research Institute in Martigny, Switzerland (Prof Herve Bourlard)

·        Radboud University Nijmegen, The Netherlands (Prof Lou Boves, Dr Louis ten Bosch, Dr-ir Bert Cranen, Dr O. Scharenborg)

·        RWTH Aachen, Germany (Prof Hermann Ney, Dr Ralf Schlüter)

·        Saarland University, Germany (Prof Dietrich Klakow, Dr John McDonough)

·        University of Edinburgh, UK (Prof Steve Renals, Dr Simon King, Dr Korin Richmond, Dr Joe Frankel)

·        University of Sheffield, UK (Prof Roger Moore, Prof Phil Green, Dr Thomas Hain, Dr Guido Sanguinetti) .

 

Companies such as Motorola and Philips Speech Recognition Systems/Nuance are associated partners of the program.

 

Each PhD position is funded for three years and degrees can be obtained from the participating academic institutions. 

 

Distinguishing features of the cooperation include:

 

·        Joint supervision of dissertations by lecturers from two partner institutions

·        While students stay with one institution for most of the time, the program includes a stay of three to nine months at a second partner institution, either academic or industrial 

·        An intensive research exchange program between all participating institutions

 

PhD and Post Doc projects will be in the area of

 

·        Automatic Speech Recognition

·        Machine learning

·        Speech Synthesis

·        Signal Processing

·        Human speech recognition

 

The salary of a PhD position is roughly 33.800€ per year. There are additional mobility (up to 800€/month) and travel allowances (yearly allowance). Applicants should hold a strong university degree which would entitle them to embark on a doctorate (Masters/diploma or equivalent) in a relevant discipline, and should be in the first four years of their research careers. As the project is funded by a EU mobility scheme, there are also certain mobility requirements.

 

Each Post Doc position is funded for two years. The salary is approximately 52000€ per year. Applicants must have a doctoral degree at the time of recruitment or equivalent research experience. The research experience may not exceed 5 years at the time of appointment.

 

Women are particularly encouraged to apply.

 

Deadlines for applications:

 

January 1, 2009

April 1, 2009

July 1, 2009

September 1, 2009.

 

After each deadline all submitted applications will be reviewed and positions awarded until all positions are filled.

 

Applications should be submitted at http://www.scale.uni-saarland.de/ .

 

To be fully considered, please include:

 

- a curriculum vitae indicating degrees obtained, disciplines covered

(e.g. list of courses ), publications, and other relevant experience

 

- a sample of written work (e.g. research paper, or thesis,

preferably in English)

 

- copies of high school and university certificates, and transcripts

 

- two references (e-mailed directly to the SCALE office

(Diana.Schreyer@LSV.Uni-Saarland.De) before the deadline)

 

- a statement of research interests, previous knowledge and activities

in any of the relevant research areas.

 

In case an application can only be submitted by regular post, it should

be sent to:

 

SCALE office

Diana Schreyer

Spoken Language Systems, FR 7.4

C 71 Office 0.02

Saarland University

P.O. Box 15 11 50

D-66041 Saarbruecken

Germany

 

If you have any questions, please contact Prof. Dr. Dietrich Klakow

(Dietrich.Klakow@LSV.Uni-Saarland.De).

 

Back to Top

7-5 . (2009-01-08) Assistant Professor Toyota Technological Institute at Chicago

Assistant Professor, Toyota Technological Institute at Chicago

Toyota Technological Institute at Chicago (http://www.tti-c.org) is a philanthropically endowed academic computer science institute, dedicated to basic research and graduate education in computer science. TTI-C opened for operation in 2003 and by 2010 plans to have 12 tenured and tenure-track faculty and 18 research (3-year) faculty. Regular faculty will have a teaching load of at most one course per year, and research faculty will have no teaching responsibilities.

Applications are welcome in all areas of computer science, but TTI-C is currently focusing on a number of areas including speech and language processing. For all positions we require a Ph.D. degree or Ph.D. candidacy, with the degree conferred prior to the date of hire. Applications received after December 31 may not get full consideration. Applications can be submitted online at http://www.tti-c.org/facultyapplication
Back to Top

7-6 . (2009-01-09) Fixed-term engineer position (CDD): intelligent environments

Fixed-term engineer position (CDD): intelligent environments

Engineer - fixed-term contract (CDD)

Deadline: 15/02/2008

olivier.pietquin@supelec.fr

http://ims.metz.supelec.fr/spip.php?article99

An 18-month fixed-term engineer position is open on the Metz campus of Supélec. The successful candidate will join the "Information, Multimodality & Signal" team (http://ims.metz.supelec.fr). This team of 15 people is active in the fields of digital signal and information processing (statistical signal processing, machine learning, biologically inspired methods), knowledge representation (data mining, symbolic analysis and learning) and intensive and distributed computing. The position targets a profile suited to the integrated hardware implementation of the methods developed within the team in applications related to intelligent environments, as well as their maintenance. The Metz campus has indeed equipped itself with a full-scale platform reproducing an intelligent room integrating cameras, microphones, infrared sensors, human-machine interfaces (voice interface, brain-computer interface), robots and information dissemination facilities. The work will consist of building an integrated platform that makes it possible to rapidly deploy demonstrations in this environment and to maintain them.

Desired profile:
- an engineering degree in computer science, or an equivalent university degree
- experience working in multidisciplinary teams
- a good command of English is a plus.

More information is available on the team's website (http://ims.metz.supelec.fr).

Applications (CV + cover letter) should be sent to O. Pietquin: olivier.pietquin@supelec.fr.

 

 

 

http://gdr-isis.org/rilk/gdr/Kiosque/poste.php?jobid=3010

Back to Top

7-7 . (2009-01-13) 2009 PhD Research Fellowships at the University of Trento (Italy)

2009 PhD Research Fellowships 

=============================

 

The Adaptive Multimodal Information and  Interface  Research Lab

(casa.disi.unitn.it) at University of Trento (Italy) has several

PhD Research fellowships in the following areas:

 

                Statistical Machine Translation                

                Natural Language Processing    

                Automatic Speech Recognition

                Machine Learning

                Spoken/Multimodal Conversational Systems

                   

We are looking for students with _excellent_ academic records and a relevant technical background. Students with EE or CS Master degrees (or equivalent) are welcome; other related disciplines will also be considered. Prospective students are encouraged to look at the lab website to search for current and past research projects.

 

PhD research fellowships benefits are described in the graduate school

website (http://ict.unitn.it/disi/edu/ict).

The applicants should be fluent in _English_. Competence in Italian is optional and applicants are encouraged to acquire this skill during their training. All applicants should have very good programming skills. University of Trento is an equal opportunity employer.

 

The selection of candidates will be open until positions are filled.

Interested applicants should send their CV along with their

statement of research interest, transcript records and three reference

letters to :

 

 

                Prof. Dr.-Ing. Giuseppe Riccardi

        Email: riccardi@disi.unitn.it

 

 

-------------------

About University of Trento and Information Engineering and Computer

 Science Department

 

The University of Trento is consistently ranked as a premier Italian graduate university institution (see www.disi.unitn.it).

 

Please visit the DISI Doctorate school website at http://ict.unitn.it/edu/ict

 

DISI Department

DISI has a strong focus on Interdisciplinarity with professors from

different faculties of the University (Physical Science, Electrical

Engineering, Economics, Social Science, Cognitive Science, Computer Science)

 with international background.

DISI aims at exploiting the complementary experiences present in the

various research areas in order to develop innovative methods and

technologies, applications and advanced services.

English is the official language.

 

 

--

Prof. Ing. Giuseppe Riccardi

Marie Curie Excellence Leader

Department of Information Engineering and Computer Science

University of Trento

Room D11, via Sommarive 14

38050 Povo di Trento, Italy

tel  : +39-0461 882087

email: riccardi@dit.unitn.it

   

Back to Top

7-8 . (2009-02-06) Position at ELDA

The Evaluation and Language Distribution Agency (ELDA) is offering a 6-month to 1-year internship in Human Language Technology for the Arabic language, with a special focus on Machine Translation (MT) and Multilingual Information Retrieval (MLIR). The internship is organised in the framework of the European project MEDAR (MEDiterranean ARabic language and speech technology). The intern will work in ELDA's offices in Paris; the main task will consist of developing and adapting open-source MT and MLIR software for Arabic.

http://www.medar.info
http://www.elda.org

Qualifications:
---------------
The applicant should have a high-quality degree in Computer Science. Good programming skills in C, C++ and Perl, and familiarity with the Eclipse environment, are required.
The applicant should have a good knowledge of Linux and open source software.

Interest in Speech/Text Processing, Machine Learning, Computational Linguistics, or Cognitive Science is a plus.
Proficiency in written English is required.


Starting date:
--------------
February 2009.


Applications
-------------
Applications in the first instance should be made by email to
Djamel Mostefa, Head of Production and Evaluation department, ELDA,  email: mostefa _AT_ elda.org

Please include a cover letter and your CV.

Back to Top

7-9 . (2009-01-18) Ph D position at Universitaet Karlsruhe

 

 

 

At the Institut für Theoretische Informatik, Lehrstuhl Prof. Waibel Universität Karlsruhe (TH) a

 

 

Ph.D. position

in the field of

Multimodal Dialog Systems

 

is to be filled immediately with a salary according to TV-L, E13.

 

The responsibilities include basic research in the area of multimodal dialog systems, especially multimodal human-robot interaction and learning robots, within application-oriented research projects on multimodal human-machine interaction. Set in a framework of internationally and industry funded research programs, the successful candidates are expected to contribute to the state of the art of modern spoken dialog systems, improving natural interaction with robots.

 

We are an internationally renowned research group with an excellent infrastructure. Current research projects for improving human-machine and human-to-human interaction focus on dialog management for human-robot interaction.

 

Within the framework of the International Center for Advanced Communication Technology (interACT), our institute operates in two locations, Universität Karlsruhe (TH), Germany and at Carnegie Mellon University, Pittsburgh, USA.  International joint and collaborative research at and between our centers is common and encouraged, and offers great international exposure and activity. 

 

Applicants are expected to have:

  • an excellent university degree (M.S., Diploma, or Ph.D.) in Computer Science, Computational Linguistics, or related fields
  • excellent programming skills 
  • advanced knowledge in at least one of the fields of Speech and Language Processing, Pattern Recognition, or Machine Learning

 

For candidates with Bachelor or Master’s degrees, the position offers the opportunity to work toward a Ph.D. degree.

 

In line with the university's policy of equal opportunities, applications from qualified women are particularly encouraged. Applicants with disabilities will be given preference in the case of equal qualifications.

 

Questions may be directed to: Hartwig Holzapfel, Tel. 0721 608 4057, E-Mail: hartwig@ira.uka.de,  http://isl.ira.uka.de

 

Applications should be sent to Professor Waibel, Institut für Theoretische Informatik, Universität Karlsruhe (TH), Adenauerring 4, 76131 Karlsruhe, Germany.

Back to Top

7-10 . (2009-01-16) Two post-docs at the University of Rennes (France)

Two post-doc positions on sparse representations at IRISA, Rennes. Post-doc deadline: 28/02/2009. Contact: stephanie.lemaile@irisa.fr

Two postdoc positions are open in the METISS team at INRIA, Rennes, France, in the area of data analysis / signal processing for large-scale data.

INRIA, the French National Institute for Research in Computer Science and Control, plays a leading role in the development of Information and Communication Science and Technology (ICST) in France. The METISS project team gathers more than 15 researchers and engineers for research in audio signal and speech modelling and processing.

The positions are open in the context of the European project SMALL (Sparse Models, Algorithms and Learning for Large-scale data), within the FET-Open program of FP7, and of the ECHANGE project (ECHantillonnage Acoustique Nouvelle GEnération), funded by the French ANR. The objective of the SMALL project is to build a theoretical framework with solid foundations, as well as efficient algorithms, to discover and exploit structure in large-scale multimodal or multichannel data, using sparse signal representations. The SMALL consortium is made of 5 academic partners located in four countries (France, United Kingdom, Switzerland, and Israel). INRIA is the scientific coordinator of the SMALL project.

INRIA is also the coordinator of the ECHANGE project, which gathers three academic partners (Institut Jean Le Rond d'Alembert & Institut Jacques Louis Lions from Université Paris 6, and INRIA). The objective of ECHANGE is to design a theoretical and experimental framework based on sparse representations and compressed sensing to measure and process large complex acoustic fields through a limited number of acoustic sensors.

DESCRIPTION
The postdocs will work on theoretical, algorithmic and practical aspects of sparse representations of large-dimensional data, with a particular emphasis on acoustic fields, for various applications such as compressed sensing, source separation and localization, and signal classification.

REQUESTED PROFILE
Candidates should hold a Ph.D. in Signal/Image Processing, Machine Learning, or Applied Mathematics. Previous experience in sparse representations (time-frequency and time-scale transforms, pursuit algorithms, support vector machines and related approaches) is desirable, as well as a strong taste for the mathematical aspects of signal processing (a toy illustration of a pursuit algorithm follows this announcement).

ADDITIONAL INFORMATION
For additional technical information, please contact: remi.gribonval@inria.fr

DURATION OF THE CONTRACT
The positions, funded for at least 2 years (up to three years), will be renewed on a yearly basis depending on scientific progress and achievement. The gross minimum salary will be 28,287 € annually (~1,923 € net per month) and will be adjusted according to experience. The usual benefits of any French institution (medical insurance, etc.) will be provided.

TENTATIVE RECRUITING DATE
01.03.2009, or as soon as possible.

PLACE OF EMPLOYMENT
INRIA Rennes – Bretagne Atlantique (France) - Websites: http://www.irisa.fr/ - http://www.inria.fr

SCIENTIFIC COORDINATOR
Rémi GRIBONVAL - SMALL/ECHANGE project leader - METISS Project-Team - INRIA-Bretagne Atlantique - Email: remi.gribonval@inria.fr - phone: +33 2 99 84 25 06

APPLICATIONS TO BE SENT TO
Please send application files (a motivation letter, a full resume, a statement of research interests, a list of publications, and up to five reference letters) to Stéphanie Lemaile, SMALL/ECHANGE administrative assistant.
Email: stephanie.lemaile@irisa.fr
Deadline: end of February 2009.

http://gdr-isis.org/rilk/gdr/Kiosque/poste.php?jobid=3051
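For readers unfamiliar with the pursuit algorithms mentioned in the requested profile, here is a purely illustrative matching-pursuit sketch over a random dictionary (not part of the announcement), written in Python and assuming only numpy; real audio work would use structured dictionaries (e.g. Gabor atoms) and faster variants such as OMP.

# Minimal sketch of matching pursuit: greedily approximate a signal x as a
# sparse combination of unit-norm dictionary atoms. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, n_atoms, k = 64, 256, 5

D = rng.normal(size=(n, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # normalize atoms to unit norm

# Synthetic k-sparse signal built from k randomly chosen atoms.
support = rng.choice(n_atoms, size=k, replace=False)
x = D[:, support] @ rng.normal(size=k)

residual = x.copy()
coeffs = np.zeros(n_atoms)
for _ in range(k):
    correlations = D.T @ residual              # correlate residual with every atom
    best = np.argmax(np.abs(correlations))     # pick the most correlated atom
    coeffs[best] += correlations[best]
    residual -= correlations[best] * D[:, best]

print("relative residual:", np.linalg.norm(residual) / np.linalg.norm(x))
print("selected atoms:", sorted(int(i) for i in np.nonzero(coeffs)[0]),
      "true atoms:", sorted(int(i) for i in support))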
Back to Top

7-11 . (2009-01-14)AT&T Labs-Research Research staff

AT&T Labs - Research : Research Staff

AT&T Labs - Research is seeking exceptional candidates for
Research Staff positions. AT&T is the premier broadband, IP,
entertainment, and wireless communications company in the U.S.
and one of the largest in the world. Our researchers are
dedicated to solving real problems in speech and language
processing, and are involved in inventing, creating and
deploying innovative services. We also explore fundamental
research problems in these areas. Outstanding Ph.D.-level
candidates at all levels of experience are encouraged to apply.
Candidates must demonstrate excellence in research, a
collaborative spirit and strong communication and software
skills. Areas of particular interest are

    * Large-vocabulary automatic speech recognition
    * Acoustic and language modeling
    * Robust speech recognition
    * Signal processing
    * Adaptive learning
    * Pronunciation modeling
    * Natural language understanding
    * Voice and multimodal search

AT&T Companies are Equal Opportunity Employers. All qualified
candidates will receive full and fair consideration for
employment. Application instructions are available on our
website at http://www.research.att.com. Click on "Join us". 

Back to Top


7-13 . (2009-02-15) Research Grants for PhD Students and Postdoc Researchers-Bielefeld University

The Graduate School Cognitive Interaction Technology at Bielefeld University,
Germany offers
Research Grants for PhD Students and Postdoc Researchers
The Center of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University
has been established in the framework of the Excellence Initiative as a research center for
intelligent systems and cognitive interaction between humans and technical systems.
CITEC's focus is directed towards motion intelligence, attentive systems, situated
communication, and memory and learning. Research and development are directed towards
understanding the processes and functional constituents of cognitive interaction, and
establishing cognitive interfaces that facilitate the use of complex technical systems.
The Graduate School Cognitive Interaction Technology invites applications from outstanding
young scientists in the fields of robotics, computer science, biology, physics, sports
sciences, linguistics or psychology who are willing to contribute to the cross-disciplinary
research agenda of CITEC. The international profile of CITEC fosters the exchange of
researchers and students with related scientific institutions. For PhD students, a structured
program including taught courses and time for individual research is offered. The integration
and active participation in interdisciplinary research projects, which includes access to first
class lab facilities, is facilitated by CITEC. For more information, please see: www.cit-ec.de .
Successful candidates must hold an excellent academic degree (MSc/Diploma/PhD) in a
related discipline, have a strong interest in research, and be proficient in both written and
spoken English. Research grants will be given for the duration of three years for PhD
students, and one to three years for Postdocs.
All applications should include: a cover letter indicating the motivation and research interests
of the candidate, a CV including a list of publications, and relevant certificates of academic
qualification. PhD applicants are asked to provide the outline of a PhD project (2-3 pages)
and a short abstract. Postdoc researchers are asked to provide the outline of a research
project (4-5 pages) relevant to CITEC's research objectives and a short abstract. It is
obligatory for Postdoc applicants, and strongly recommended for PhD applicants, to provide
two letters of recommendation. In the absence of letters of recommendation, PhD candidates
should provide the names and contact details of two referees. All documentation should be
submitted in electronic form.
We strongly encourage candidates to contact our researchers, in advance of application, in
order to develop project ideas. For a list of CITEC researchers please visit: www.cit-ec.de .
Bielefeld University is an equal opportunity employer. Women are especially encouraged to
apply and in the case of comparable competences and qualification, will be given preference.
Bielefeld University explicitly encourages disabled people to apply.
Applications will be considered until all positions have been filled. For guaranteed
consideration, please submit your documents no later than March 22, 2009. Please address
your application to Prof. Thomas Schack, Head of Graduate School, Email: gradschool@citec.uni-bielefeld.de. Please direct any queries relating to your application to Claudia Muhl,
Graduate School Manager, phone: +49-(0)521-106-6566, cmuhl@cit-ec.uni-bielefeld.de
Back to Top

7-14 . (2009-03-09) 9 PhD positions in the Marie Curie International Training Network

Up to 9 PhD Positions available in

 

 the Marie Curie International Training Network on

 

Speech Communication with Adaptive LEarning (SCALE)

 

 

 

 

SCALE is a cooperative project between

 

·        IDIAP Research Institute in Martigny, Switzerland (Prof Herve Bourlard)

·        Radboud University Nijmegen, The Netherlands (Prof Lou Boves, Dr Louis ten Bosch, Dr-ir Bert Cranen, Dr O. Scharenborg)

·        RWTH Aachen, Germany (Prof Hermann Ney, Dr Ralf Schlüter)

·        Saarland University, Germany (Prof Dietrich Klakow, Dr John McDonough)

·        University of Edinburgh, UK (Prof Steve Renals, Dr Simon King, Dr Korin Richmond, Dr Joe Frankel)

·        University of Sheffield, UK (Prof Roger Moore, Prof Phil Green, Dr Thomas Hain, Dr Guido Sanguinetti) .

 

Companies like Toshiba or Philips Speech Recognition Systems/Nuance are associated partners of the program.

 

Each PhD position is funded for three years and degrees can be obtained from the participating academic institutions. 

 

Distinguishing features of the cooperation include:

 

·        Joint supervision of dissertations by lecturers from two partner institutions

·        While staying with one institution for most of the time, the program includes a stay at a second partner institution, either academic or industrial, for three to nine months

·        An intensive research exchange program between all participating institutions

 

PhD projects will be in the area of

 

·        Automatic Speech Recognition

·        Machine learning

·        Speech Synthesis

·        Signal Processing

·        Human speech recognition

 

The salary of a PhD position is roughly 33,800 Euro per year. There are additional mobility allowances (up to 800 Euro/month) and travel allowances (yearly allowance). Applicants should hold a strong university degree which would entitle them to embark on a doctorate (Masters/diploma or equivalent) in a relevant discipline, and should be in the first four years of their research careers. As the project is funded by an EU mobility scheme, there are also certain mobility requirements.

 

Women are particularly encouraged to apply.

 

Deadlines for applications:

 

April 1, 2009

July 1, 2009

September 1, 2009.

 

After each deadline all submitted applications will be reviewed and positions awarded until all positions are filled.

 

Applications should be submitted at http://www.scale.uni-saarland.de/index.php?authorsInstructions=1 .

 

To be fully considered, please include:

 

- a curriculum vitae indicating degrees obtained, disciplines covered

(e.g. list of courses ), publications, and other relevant experience

 

- a sample of written work (e.g. research paper, or thesis,

preferably in English)

 

- copies of high school and university certificates, and transcripts

 

- two references (e-mailed directly to the SCALE office

(Diana.Schreyer@LSV.Uni-Saarland.De) before the deadline)

 

- a statement of research interests, previous knowledge and activities

in any of the relevant research areas.

 

In case an application can only be submitted by regular post, it should

be sent to:

 

SCALE office

Diana Schreyer

Spoken Language Systems, FR 7.4

C 71 Office 0.02

Saarland University

P.O. Box 15 11 50

D-66041 Saarbruecken

Germany

 

If you have any questions, please contact Prof. Dr. Dietrich Klakow

(Dietrich.Klakow@LSV.Uni-Saarland.De).

 

For more information see also http://www.scale.uni-saarland.de/

 

Back to Top

7-15 . (2009-03-10) Maitre de conferences a l'Universite Descartes Paris (french)

A position of maître de conférences (associate professor) in computer science (CNU section 27), reference 27MCF0031, is to be filled at Université Paris Descartes.
The aim of this recruitment is to strengthen the research theme of speech processing for the detection and remediation of voice disorders. The candidate is expected to have solid experience in automatic speech processing (recognition, synthesis, ...).
Teaching duties concern all degree programs of the Mathematics and Computer Science faculty (UFR): the Licence MIA, the Master in Mathematics and Computer Science, and the Master MIAGE.

 

Contact: Marie-José Caraty

Professor of Computer Science
CRIP5 - Diadex (Dialogue and Indexing)

Université Paris Descartes
45, rue des Saints Pères - 75270 Paris cedex 06
Email: Marie-Jose.Caraty@ParisDescartes.fr

Tel: (33/0) 1 42 86 38 48

 

Back to Top

7-16 . (2009-03-14) Institut de linguistique et de phonetique Sorbonne Paris (french)

UNIVERSITE PARIS 3 (SORBONNE NOUVELLE) - Position no. 3743

Section 07 - Language sciences: general linguistics and phonetics ...
Computer Science and Natural Language Processing
PARIS 75005
Vacant

Address for sending the application file:
17, RUE DE LA SORBONNE
Bureau du personnel enseignant
PR - 7eme - 0743
75005 - PARIS

Administrative contact: MARTINE GRAFFAN (GESTION MCF)
Telephone: 01 40 46 28 96 / 01 40 46 28 92
Fax: 01 43 25 74 71
Email: Martine.Graffan@univ-paris3.fr

Starting date: 01/09/2009
Keywords:

Teaching profile:
Department or faculty (UFR): Institut de linguistique et phonetique generales et appliquees
UFR reference: 0751982X

Laboratories:
EA2290 - SYSTEMES LINGUISTIQUES, ENONCIATION ET DISCURSIVITE (SYLED)
UMR7018 - LABORATOIRE DE PHONETIQUE ET PHONOLOGIE
EA1483 - RECHERCHE SUR LE FRANCAIS CONTEMPORAIN
UMR7107 - LABORATOIRE DES LANGUES ET CIVILISATIONS A TRADITION ORALE (LACITO)

Additional information

Teaching:
Profile:
Teaching will range from the first year of the Bachelor's program (Licence) in Language Sciences up to the Doctorate in Language Sciences, NLP specialty. Training in Natural Language Processing can of course also find applications in the Master's in Language Sciences, specialty Language, Languages, Models, and in Doctorates in Language Sciences with other specialties.
The position will involve supervising courses combining Language Sciences and Natural Language Processing, oriented both towards further study at Master's and Doctoral level and towards professionalization, preparing students for careers in the language industries.
Teaching department: UFR de Linguistique et Phonétique Générales et Appliquées
Location(s): 19, rue des Bernardins, 75005 PARIS
Teaching team:
Department head: Madame Martine VERTALIER
Department head phone: 01 44 32 05 79
Department head email: Martine.Vertalier@univ-paris3.fr
Department URL: /

Research:
Profile:
Development and supervision of research in NLP, research on large spoken and/or written corpora in various languages, possibly data mining and grammar induction, but also, in synergy within the established research teams, contributions of theoretical and technological resources to groups working on other research areas. The Professor will carry out his or her research within Doctoral School 268 of Paris 3, in priority within the team that founded the training and research programs described above: SYLED, in particular its CLA2t component (Centre de Lexicométrie et d'Analyse Automatique des Textes), or within a team whose faculty members contribute to teaching and research at ILPAG (Laboratoire de Phonétique et Phonologie); UMR 7107 Laboratoire des Langues et Civilisations à Tradition Orale (LACITO).
Location(s):
1- EA 2290 SYLED, 19, rue des Bernardins, 75005 PARIS
2- UMR 7018 Laboratoire de phonétique et phonologie, 19, rue des Bernardins, 75005 PARIS
3- EA 1483 Recherche sur le Français Contemporain, 19, rue des Bernardins, 75005 PARIS
4- UMR 7107 LACITO CNRS, 7, rue G. Môquet, 94800 VILLEJUIF
Laboratory directors:
1- M. André SALEM, 01 44 32 05 84
2- Mme Jacqueline VAISSIERE and Mme Annie RIALLAND, 01 43 26 57 17
3- Mme Anne SALAZAR-ORVIG, 01 44 32 05 07
4- Mme Zlatka GUENTCHEVA, 01 49 58 37 78
Laboratory director emails: syled@univ-paris3.fr - jacqueline.vaissiere@univ-paris3.fr - anne.salazar-orvig@univ-paris3.fr - lacito@vjf.cnrs.fr

 

Back to Top

7-17 . (2009-03-15) Poste Maitre de conferences Nanterre Paris (french)

Maître de conférences position (MCF, 221): Linguistics: pathology of language acquisition
Université Paris X, Nanterre, Department of Language Sciences
Contact: Anne Lacheret, anne@lacheret.com

Preference will be given to candidates with a double profile:
linguistics and speech-language pathology or a related discipline.

Back to Top

7-18 . (2009-03-18) Ingenieur etude/developpement Semantique, TAL, traduction automatique (french)

Design & Development Engineer (M/F)

POSITION BASED IN NORD-PAS-DE-CALAIS (62)

 

Building on the continuous growth of its activities, supported by ongoing investment in R&D, our CLIENT, a European leader in information processing, is recruiting a Development Engineer (m/f) specialized in semantics, natural language processing, machine translation and cross-lingual information retrieval tools, and multilingual linguistic resource management systems (dictionaries, lexicons, translation memories, aligned corpora).

Passionate about applying the most advanced technologies to industrial information processing, you will design, develop and industrialize the document processing pipelines used by the production lines on behalf of the company's clients.

With a higher education degree in computer science (BAC+5 or equivalent), autonomous and creative, you will join a dynamic, human-scale structure where innovation is constantly at the service of production and of the client.

You ideally have 2-3 years of experience in object-oriented programming and software development processes. Practical experience with C++ and/or Java is essential.
Proficiency in English is required in order to operate within a group of international scope.
Your analytical and synthesis skills, your sense of service and your commitment to clients will enable you to meet the challenge we offer.

Back to Top

7-19 . (2009-04-02)The Johns Hopkins University: Post-docs, research staff, professors on sabbaticals

The Johns Hopkins University
The Human Language Technology Center of Excellence
Post-docs, research staff, professors on sabbaticals
The Human Language Technology Center of Excellence (COE) at the Johns Hopkins University is seeking to hire outstanding Ph.D. researchers in the field of speech and natural language processing. The COE seeks the most talented candidates for both junior and senior level positions including, but not limited to, full-time research staff, professors on sabbaticals, visiting scientists and post-docs. Candidates will be expected to work in a team setting with other researchers and graduate students at the Johns Hopkins University, the University of Maryland College Park and other affiliated institutions.
Candidates should have a strong background in speech processing:
Robust speech recognition across languages, channels, and formal vs. informal genres; speaker identification, language identification, speech retrieval, spoken term detection, etc.
The COE was founded in January 2007 and has a long-term research contract as an independent center within Johns Hopkins University. Located next to Johns Hopkins’ Homewood Campus in Baltimore, Maryland, the COE’s distinguished contract partners include the University of Maryland College Park, the Johns Hopkins University Applied Physics Lab, and BBN Technologies of Cambridge, Massachusetts. World-class researchers at the COE focus on fundamental challenge problems critical to finding solutions for real-world problems of importance to our government sponsor. The COE offers substantial computing capability for research that requires heavy computation and massive storage. In the summer of 2009, the COE will hold its first annual Summer Camp for Advanced Language Exploration (SCALE), inviting the best and brightest researchers to work on common areas in speech and NLP. Researchers are expected to publish in peer-reviewed venues. For more information about the COE, visit www.hltcoe.org.
Applicants should have earned a Ph.D. in Computer Science (CS), Electrical and Computer Engineering (ECE), or a closely related field. Applicants should submit a curriculum vitae, research statement, names and addresses of at least four references, and an optional teaching statement. Please send applications and inquiries about the position to hltcoe-hiring@jhu.edu.
Back to Top

8 . Journals

 

Back to Top

8-1 . Special issue of CSL on Emergent Artificial Intelligence Approaches for Pattern Recognition in Speech and Language Processing

Special Issue on "Emergent Artificial Intelligence Approaches for Pattern Recognition in Speech and Language Processing"
Computer Speech and Language, Elsevier
Deadline for paper submission: September 26, 2008.
http://speechlab.ifsc.usp.br/call/csl.pdf
Back to Top

8-2 . Special issue IEEE Trans. ASL Signal models and representation of musical and environmental sounds

Special Issue of IEEE Transactions on Audio, Speech and Language Processing
**SIGNAL MODELS AND REPRESENTATION OF MUSICAL AND ENVIRONMENTAL SOUNDS**
http://www.ewh.ieee.org/soc/sps/tap http://www.ewh.ieee.org/soc/sps/tap/sp_issue/audioCFP.pdf
-- Submission deadline: 15 December, 2008
-- Notification of acceptance: 15 June, 2009
--Final manuscript due: 1st July, 2009
--Tentative publication date: 1st September, 2009
Guest editors
Dr. Bertrand David (Telecom ParisTech, France) bertrand.david@telecom-paristech.fr
Dr. Laurent Daudet (UPMC University Paris 06, France) daudet@lam.jussieu.fr
Dr. Masataka Goto (National Institute of Advanced Industrial Science and Technology, Japan) m.goto@aist.go.jp
Dr. Paris Smaragdis (Adobe Systems, Inc, USA) paris@adobe.com
The non-stationary nature, the richness of the spectra and the mixing of diverse sources are common characteristics shared by musical and environmental audio scenes. These characteristics lead to specific challenges in audio processing tasks such as information retrieval, source separation, analysis-transformation-synthesis and coding. When seeking to extract information from musical or environmental audio signals, the time-varying waveform or spectrum is often further analysed and decomposed into sound elements. Two aims of this decomposition can be identified, and they are sometimes antagonistic: to be adapted both to the particular properties of the signal and to the targeted application.

This special issue is focused on how the choices of a low-level representation (typically a time-frequency distribution with or without a probabilistic framework, with or without perceptual considerations), a source model or a decomposition technique may influence the overall performance (a short illustrative sketch of one such decomposition follows the topic lists below). Specific topics of interest include but are not limited to:
* factorizations of time-frequency distribution
* sparse representations
* Bayesian frameworks
* parametric modeling
* subspace-based methods for audio signals
* representations based on instrument or/and environmental sources signal models
* sinusoidal modeling of non-stationary spectra (sinusoids, noise, transients)
Typical applications considered are (non-exclusively):
* source separation/recognition
* mid- or high-level feature extraction (metrics, onsets, pitches, …)
* sound effects
* audio coding
* information retrieval
* audio scene structuring, analysis or segmentation
* ...
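By way of illustration only (and not something prescribed by this call), the following minimal Python sketch shows one of the decompositions listed above: a non-negative matrix factorization of a magnitude spectrogram, using plain numpy and the standard multiplicative updates. The spectrogram here is synthetic; in practice it would come from an STFT of the audio.

# Minimal sketch: NMF of a magnitude "spectrogram" V (freq x time) into
# spectral templates W and activations H via multiplicative updates
# (Euclidean cost). Synthetic data stand in for a real STFT magnitude.
import numpy as np

rng = np.random.default_rng(0)
freqs, frames, rank = 64, 100, 2

true_W = np.abs(rng.normal(size=(freqs, rank)))
true_H = np.abs(rng.normal(size=(rank, frames)))
V = true_W @ true_H + 1e-3                      # non-negative "spectrogram"

W = np.abs(rng.normal(size=(freqs, rank))) + 1e-6
H = np.abs(rng.normal(size=(rank, frames))) + 1e-6

eps = 1e-12
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)        # updates keep H non-negative
    W *= (V @ H.T) / (W @ H @ H.T + eps)        # updates keep W non-negative

print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))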
Back to Top

8-3 . "Speech Communication" special issue on "Speech and Face to Face Communication

 "Speech Communication" special issue on "Speech and Face to Face Communication 
http://www.elsevier.com/wps/find/journaldescription.cws_home/505597/description

Speech communication is increasingly studied from a face-to-face perspective:
- It is interactive: the speaking partners build a complex communicative act together
involving linguistic, emotional, expressive, and more generally cognitive and social
dimensions;
- It involves multimodality to a large extent: the “listener” sees and hears the speaker who
produces sounds as well as facial and more generally bodily gestures;
- It involves not only linguistic but also psychological, affective and social aspects of
interaction. Gaze together with speech contribute to maintain mutual attention and to
regulate turn-taking for example. Moreover the true challenge of speech communication is
to take into account and integrate information not only from the speaker but also from the
entire physical environment in which the interaction takes place.

The present issue proposes to synthesize the most recent developments on
this topic, considering its various aspects from complementary perspectives: cognitive and
neurocognitive (multisensory and perceptuo-motor interactions), linguistic (dialogic face to
face interactions), paralinguistic (emotions and affects, turn-taking, mutual attention),
computational (animated conversational agents, multimodal interacting communication
systems).

There will be two stages in the submission procedure.

- First stage (by DECEMBER 1ST): submission of a one-to-two page abstract describing the
contents of the work and its relevance to the "Speech and Face to Face Communication" topic
by DECEMBER 1ST. The guest editors will then make a selection of the most relevant
proposals in December.

- Second stage (by MARCH 1ST): the selected contributors will be invited to submit a full
paper by MARCH 1ST. The submitted papers will then be peer reviewed through the regular
Speech Communication journal process (two independent reviews). Accepted papers will then
be published in the special issue.

Abstracts should be directly sent to the guest editors:
Marion.Dohen@gipsa-lab.inpg.fr, Gerard.Bailly@gipsa-lab.inpg.fr, Jean-Luc.Schwartz@gipsa-lab.inpg.fr

Back to Top

8-4 . SPECIAL ISSUE of the EURASIP Journal on Audio, Speech, and Music Processing. ON SCALABLE AUDIO-CONTENT ANALYSIS

SPECIAL ISSUE ON SCALABLE AUDIO-CONTENT ANALYSIS

The amount of easily-accessible audio, whether in the form of large
collections of audio or audio-video recordings, or in the form of
streaming media, has increased exponentially in recent times.
However this audio is not standardized: much of it is noisy,
recordings are frequently not clean, and most of it is not labelled.
The audio content covers a large range of categories including
sports, music and songs, speech, and natural sounds. There is
therefore a need for algorithms that allow us to make sense of these
data; to store, process, categorize, summarize, identify and
retrieve them quickly and accurately.

In this special issue we invite papers that present novel approaches
to problems such as (but not limited to):

Audio similarity
Audio categorization
Audio classification
Indexing and retrieval
Semantic tagging
Audio event detection
Summarization
Mining

We are especially interested in work that addresses real-world
issues such as:

Scalable and efficient algorithms
Audio analysis under noisy and real-world conditions
Classification with uncertain labeling
Invariance to recording conditions
On-line and real-time analysis of audio.
Algorithms for very large audio databases.

We encourage theoretical or application-oriented papers that
highlight exploitation of such techniques in practical systems/products.

Authors should follow the EURASIP Journal on Audio, Speech, and Music
Processing manuscript format described at the journal site
http://www.hindawi.com/journals/asmp/. Prospective authors should
submit an electronic copy of their complete manuscript through the
journal Manuscript Tracking System at http://mts.hindawi.com/,
according to the following timetable:

Manuscript Due: June 1st, 2009
First Round of Reviews: September 1, 2009
Publication Date: December 1st, 2009


Guest Editors:

1) Bhiksha Raj
Associate professor


School of computer science
Carnegie Mellon university


2) Paris Smaragdis
Senior Research Scientist
Advanced Technology Labs, Adobe Systems Inc.
Newton, MA, USA

3) Malcolm Slaney
Principal Scientist
Yahoo! Research
Santa Clara, CA
and
(Consulting) Professor
Stanford CCRMA

4) Chung-Hsien Wu
Distinguished Professor
Dept. of Computer Science & Infomation Engineering
National Cheng Kung University,
Tainan, TAIWAN

5) Liming Chen
Professor and head of the Dept. Mathematics & Informatics
Ecole Centrale de Lyon
University of Lyon
Lyon, France

6) Professor Hyoung-Gook Kim
Intelligent Multimedia Signal Processing Lab.
Kwangwoon University, Republic of Korea 

Back to Top

8-5 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing.on Atypical Speech

Atypical Speech
Call for Papers

Research in speech processing (e.g., speech coding, speech enhancement, speech recognition, speaker recognition, etc.) tends to concentrate on speech samples collected from normal adult talkers. Focusing only on these “typical speakers” limits the practical applications of automatic speech processing significantly. For instance, a spoken dialogue system should be able to understand any user, even if he or she is under stress or belongs to the elderly population. While there is some research effort in language and gender issues, there remains a critical need for exploring issues related to “atypical speech”. We broadly define atypical speech as speech from speakers with disabilities, children's speech, speech from the elderly, speech with emotional content, speech in a musical context, and speech recorded through unique, nontraditional transducers. The focus of the issue is on voice quality issues rather than unusual talking styles.

In this call for papers, we aim to concentrate on issues related to processing of atypical speech, issues that are commonly ignored by the mainstream speech processing research. In particular, we solicit original, previously unpublished research on:
• Identification of vocal effort, stress, and emotion in speech
• Identification and classification of speech and voice disorders
• Effects of ill health on speech
• Enhancement of disordered speech
• Processing of children's speech
• Processing of speech from elderly speakers
• Song and singer identification
• Whispered, screamed, and masked speech
• Novel transduction mechanisms for speech processing
• Computer-based diagnostic and training systems for speech dysfunctions
• Practical applications

Authors should follow the EURASIP Journal on Audio, Speech, and Music Processing manuscript format described at the journal site

http://www.hindawi.com/journals/asmp/. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at

http://mts.hindawi.com/, according to the following timetable:
Manuscript Due
April 1, 2009
First Round of Reviews
July 1, 2009
Publication Date
October 1, 2009

Guest Editors

Georg Stemmer, Siemens AG, Corporate Technology, 80333 Munich, Germany

Elmar Nöth, Department of Pattern Recognition, Friedrich-Alexander University of Erlangen-Nuremberg, 91058 Erlangen, Germany

Vijay Parsa, National Centre for Audiology, The University of Western Ontario, London, ON, Canada N6G 1H1 

Back to Top

8-6 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing on Animating virtual speakers or singers from audio: lip-synching facial animation

 

Special issue on 
Animating virtual speakers or singers from audio: lip-synching facial animation 

Call for Papers

Lip synchronization (lip-synch) is the term used to describe matching lip movements to a pre-recorded speaking or singing voice. This is often used in the production of films, cartoons, television programs, and computer games.

We focus here on technologies that are able to compute automatically the facial movements of animated characters given pre-recorded audio. Automating the lip-synch process, generally termed visual speech synthesis, has potential for use in a wide range of applications: from desktop agents on personal computers, to language translation tools, to providing a means for generating and displaying stimuli in speech perception experiments.

A visual speech synthesizer comprises at least three modules: a control model that computes articulatory trajectories from the input signal; a shape model that animates the facial geometry from the computed trajectories; and an appearance model for rendering the animation by varying the colors of pixels. There are numerous solutions proposed in the literature for each of these modules. Control models exploit either direct signal-to-articulation mappings, or more complex trajectory formation systems that utilize a phonetic segmentation of the acoustic signal. Shape models vary from ad-hoc parametric deformations of a 2D mesh to sophisticated 3D biomechanical models. Appearance models exploit morphing of natural images, texture blending or more sophisticated texture models.

The aim of this special issue is to provide a detailed description of state-of-the-art systems and identify new techniques that have recently emerged from both the audiovisual speech and computer graphics research communities.
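To make the data flow through these three modules concrete, here is a minimal Python sketch of the decomposition described above; the class names and the trivial internals are hypothetical placeholders, not an actual synthesizer.

# Minimal sketch of the control / shape / appearance pipeline described above.
# Class names and internals are hypothetical and deliberately trivial; a real
# system would use trained models at each stage.
from dataclasses import dataclass
from typing import List

@dataclass
class ControlModel:
    """Maps input audio frames to articulatory trajectories (e.g. lip opening)."""
    def trajectories(self, audio_frames: List[float]) -> List[float]:
        # Placeholder: pretend per-frame energy drives lip opening.
        return [min(1.0, abs(x)) for x in audio_frames]

@dataclass
class ShapeModel:
    """Deforms the facial geometry according to the articulatory trajectories."""
    def meshes(self, trajectories: List[float]) -> List[dict]:
        return [{"jaw_open": t, "lip_spread": 0.5 * t} for t in trajectories]

@dataclass
class AppearanceModel:
    """Renders each deformed mesh to an image (here, just a description string)."""
    def render(self, meshes: List[dict]) -> List[str]:
        return ["frame(jaw=%.2f, lips=%.2f)" % (m["jaw_open"], m["lip_spread"]) for m in meshes]

def synthesize(audio_frames: List[float]) -> List[str]:
    control, shape, appearance = ControlModel(), ShapeModel(), AppearanceModel()
    return appearance.render(shape.meshes(control.trajectories(audio_frames)))

if __name__ == "__main__":
    print(synthesize([0.1, 0.8, 0.3]))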
In particular, we solicit original, previously unpublished research on: 

 

Audiovisual synthesis from text

Facial animation from audio

Trajectory formation systems

Evaluation methods for audiovisual synthesis

Perception of audiovisual asynchrony in speech and music

Control of speech and facial expressions

 

 

This special issue follows the first visual speech synthesis challenge (LIPS’2008) that took place as a special session at INTERSPEECH 2008 in Brisbane, Australia. The aim of the challenge was to stimulate discussion about the subjective quality assessment of synthesized visual speech, with a view to developing standardized evaluation procedures. For this special issue, all papers selected for publication should include a description of a subjective evaluation experiment that outlines the impact of the proposed synthesis scheme on some subjective measure, such as audiovisual intelligibility, cognitive load or perceived naturalness. This evaluation metric could be assessed either by participation in the LIPS’2008 challenge, or by an independent perceptual experiment.

Technical organization
The issue is coordinated by three guest editors: G. Bailly, B.-J. Theobald & S. Fagel. These editors co-organized the LIPS’2008 challenge, and they cover a large spectrum of scientific backgrounds coherent with the theme: audiovisual speech processing, facial animation & computer graphics. They are assisted by a scientific committee. The members of the scientific committee are also invited to submit papers, and to promote papers by helping in the communication process around the issue. The special issue will be introduced by a paper written by the editors, with a critical review of the selected papers and a discussion of the results obtained by the systems participating in the LIPS’2008 challenge.

Schedule
Authors should follow the EURASIP Journal on Audio, Speech, and Music Processing manuscript format described at the journal site http://www.hindawi.com/journals/asmp/. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://mts.hindawi.com/, according to the following timetable:

 


One-page abstract: January 1, 2009
Preselection of papers: February 1, 2009
Manuscript due: March 1, 2009
First round of reviews: May 1, 2009
Camera-ready papers: July 1, 2009
Publication date: September 1, 2009

Guest Editors

 

Gérard Bailly, GIPSA-Lab, Speech & Cognition Dept., Grenoble-France; 

gerard.bailly@gipsa-lab.inpg.fr 

Sascha Fagel, Speech & Communication Institute, TU Berlin, Germany; 
sascha.fagel@tu-berlin.de 

Barry-John Theobald, School of Computing Sciences, University of East Anglia, UK; 
b.theobald@uea.ac.uk 

 

Back to Top

8-7 . CfP Special issue of Speech Comm: Non-native speech perception in adverse conditions: imperfect knowledge, imperfect signal

CALL FOR PAPERS: SPECIAL ISSUE OF SPEECH COMMUNICATION

NON-NATIVE SPEECH PERCEPTION IN ADVERSE CONDITIONS: IMPERFECT KNOWLEDGE, IMPERFECT SIGNAL

Much work in phonetics and speech perception has focused on doubly-optimal conditions, in which the signal reaching listeners is unaffected by distorting influences and in which listeners possess native competence in the sound system. However, in practice, these idealised conditions are rarely met. The processes of speech production and perception thus have to account for imperfections in the state of knowledge of the interlocutor as well as imperfections in the signal received. In noisy settings, these factors combine to create particularly adverse conditions for non-native listeners.

The purpose of the Special Issue is to assemble the latest research on perception in adverse conditions with special reference to non-native communication. The special issue will bring together, interpret and extend the results emerging from current research carried out by engineers, psychologists and phoneticians, such as the general frailty of some sounds for both native and non-native listeners and the strong non-native disadvantage experienced for categories which are apparently equivalent in the listeners’ native and target languages.

Papers describing novel research on non-native speech perception in adverse conditions are welcomed, from any perspective including the following. We especially welcome interdisciplinary contributions.

• models and theories of L2 processing in noise
• informational and energetic masking
• role of attention and processing load
• effect of noise type and reverberation
• inter-language phonetic distance
• audiovisual interactions in L2
• perception-production links
• the role of fine phonetic detail

GUEST EDITORS

Maria Luisa Garcia Lecumberri (Department of English, University of the Basque Country, Vitoria, Spain).
garcia.lecumberri@ehu.es

Martin Cooke (Ikerbasque and Department of Electrical & Electronic Engineering, University of the Basque Country, Bilbao, Spain).
m.cooke@ikerbasque.org

Anne Cutler (Max-Planck Institute for Psycholinguistics, Nijmegen, The Netherlands and MARCS Auditory Laboratories, Sydney, Australia).
anne.cutler@mpi.nl


DEADLINE

Full papers should be submitted by 31st July 2009

SUBMISSION PROCEDURE

Authors should consult the “guide for authors”, available online at http://www.elsevier.com/locate/specom, for information about the preparation of their manuscripts. Papers should be submitted via http://ees.elsevier.com/specom, choosing “Special Issue: non-native speech perception” as the article type. If you are a first time user of the system, please register yourself as an author. Prospective authors are welcome to contact the guest editors for more details of the Special Issue. 

Back to Top

8-8 . CfP IEEE Special Issue on Speech Processing for Natural Interaction with Intelligent Environments

Call for Papers
IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Speech Processing for Natural Interaction with Intelligent Environments

With the advances in microelectronics, communication technologies and smart materials, our environments are transformed to be increasingly intelligent by the presence of robots, bio-implants, mobile devices, advanced in-car systems, smart house appliances and other professional systems. As these environments are integral parts of our daily work and life, there is a great interest in natural interaction with them. Also, such interaction may further enhance the perception of intelligence. "Interaction between man and machine should be based on the very same concepts as that between humans, i.e. it should be intuitive, multi-modal and based on emotion," as envisioned by Reeves and Nass (1996) in their famous book "The Media Equation". Speech is the most natural means of interaction for human beings, and it offers the unique advantage that it does not require carrying a device, since we have our "device" with us all the time.

Speech processing techniques are developed for intelligent environments to support either explicit interaction through message communications, or implicit interaction by providing valuable information about the physical ("who speaks when and where") as well as the emotional and social context of an interaction. Challenges presented by intelligent environments include the use of distant microphone(s), resource constraints and large variations in acoustic condition, speaker, content and context. The two central pieces of techniques to cope with them are high-performing "low-level" signal processing algorithms and sophisticated "high-level" pattern recognition methods.

We are soliciting original, previously unpublished manuscripts directly targeting/related to natural interaction with intelligent environments. The scope of this special issue includes, but is not limited to:

* Multi-microphone front-end processing for distant-talking interaction (see the illustrative sketch following this call)
* Speech recognition in adverse acoustic environments and joint optimization with array processing
* Speech recognition for low-resource and/or distributed computing infrastructure
* Speaker recognition and affective computing for interaction with intelligent environments
* Context-awareness of speech systems with regard to their applied environments
* Cross-modal analysis of speech, gesture and facial expressions for robots and smart spaces
* Applications of speech processing in intelligent systems, such as robots, bio-implants and advanced driver assistance systems

Submission information is available at http://www.ece.byu.edu/jstsp. Prospective authors are required to follow the Author's Guide for manuscript preparation of the IEEE Transactions on Signal Processing at http://ewh.ieee.org/soc/sps/tsp. Manuscripts will be peer reviewed according to the standard IEEE process.

Manuscript submission due: Jul. 3, 2009
First review completed: Oct. 2, 2009
Revised manuscript due: Nov. 13, 2009
Second review completed: Jan. 29, 2010
Final manuscript due: Mar. 5, 2010

Lead guest editor:
Zheng-Hua Tan, Aalborg University, Denmark, zt@es.aau.dk

Guest editors:
Reinhold Haeb-Umbach, University of Paderborn, Germany, haeb@nt.uni-paderborn.de
Sadaoki Furui, Tokyo Institute of Technology, Japan, furui@cs.titech.ac.jp
James R. Glass, Massachusetts Institute of Technology, USA, glass@mit.edu
Maurizio Omologo, FBK-IRST, Italy, omologo@fbk.eu
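As a minimal illustration of the multi-microphone front-end processing listed among the topics above (and in no way part of the call itself), here is a sketch of delay-and-sum beamforming in Python, assuming numpy and assuming the per-channel delays towards the desired source are already known; real systems would estimate them, e.g. with GCC-PHAT.

# Minimal sketch: delay-and-sum beamforming over a small microphone array.
# The integer sample delays towards the source are assumed known here.
import numpy as np

def delay_and_sum(channels, delays):
    """channels: array of shape (num_mics, num_samples); delays: per-mic integer sample delays."""
    num_mics = channels.shape[0]
    aligned = [np.roll(channels[m], -delays[m]) for m in range(num_mics)]
    return np.mean(aligned, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fs = 16000
    clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s of a 440 Hz tone
    delays = [0, 3, 7]                                      # propagation delay per mic
    noisy = np.stack([np.roll(clean, d) + 0.3 * rng.normal(size=clean.size) for d in delays])
    enhanced = delay_and_sum(noisy, delays)
    # Averaging N channels of incoherent noise reduces its power by roughly N.
    print("noise power, single mic :", np.mean((noisy[0] - clean) ** 2))
    print("noise power, beamformed:", np.mean((enhanced - clean) ** 2))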
Back to Top

8-9 . CfP Special issue "Speech as a Human Biometric: I know who you are from your voice" Int. Jnl Biometrics

International Journal of Biometrics  (IJBM)
 
Call For papers
 
Special Edition on: "Speech as a Human Biometric: I Know Who You Are From Your Voice!"
 
Guest Editors: 
Dr. Waleed H. Abdulla, The University of Auckland, New Zealand
Professor Sadaoki Furui, Tokyo Institute of Technology, Japan
Professor Kuldip K. Paliwal, Griffith University, Australia
 
 
The 2001 MIT Technology Review indicated that biometrics is one of the emerging technologies that will change the world. Human biometrics is the automated recognition of a person using adherent distinctive physiological and/or involuntary behavioural features.
 
Human voice biometrics has gained significant attention in recent years. The ubiquity of cheap microphones, human identity information carried by voice, ease of deployment, natural use, telephony applications diffusion, and non-obtrusiveness have been significant motivations for developing biometrics based on speech signals. The robustness of speech biometrics is sufficiently good. However, there are significant challenges with respect to conditions that cannot be controlled easily. These issues include changes in acoustical environmental conditions, respiratory and vocal pathology, age, channel, etc. The goal of speech biometric research is to solve and/or mitigate these problems.
 
This special issue will bring together leading researchers and investigators in speech research for security applications to present their latest successes in this field. The presented work could be new techniques, review papers, challenges, tutorials or other relevant topics.
 
   Subject Coverage
 
Suggested topics include, but are not limited to:
 
Speech biometrics
Speaker recognition
Speech feature extraction for speech biometrics
Machine learning techniques for speech biometrics
Speech enhancement for speech biometrics
Speech recognition for speech biometrics
Speech changeability over age, health condition, emotional status, fatigue, and related factors
Accent, gender, age and ethnicity information extraction from speech signals
Speech watermarking
Speech database security management
Cancellable speech biometrics
Voice activity detection
Conversational speech biometrics
   Notes for Prospective Authors
 
Submitted papers should not have been previously published nor be currently under consideration for publication elsewhere
 
All papers are refereed through a peer review process. A guide for authors, sample copies and other relevant information for submitting papers are available on the Author Guidelines page
 
   Important Dates
 
Manuscript due: 15 June, 2009
 
Acceptance/rejection notification: 15 September, 2009
 
Final manuscript due: 15 October, 2009
 
For more information please go to Calls for Papers page (http://www.inderscience.com/callPapers.php) OR The IJBM home page (http://www.inderscience.com/ijbm).
 
 
Back to Top

8-10 . CfP Special on Voice transformation IEEE Trans ASLP

CALL FOR PAPERS
IEEE Signal Processing Society
IEEE Transactions on Audio, Speech and Language Processing
Special Issue on Voice Transformation
With the increasing demand for Voice Transformation in areas such as
speech synthesis for creating target or virtual voices, modeling various
effects (e.g., Lombard effect), synthesizing emotions, making more natural
dialog systems which use speech synthesis, as well as in areas like
entertainment, film and music industry, toys, chat rooms and games, dialog
systems, security and speaker individuality for interpreting telephony,
high-end hearing aids, vocal pathology and voice restoration, there is a
growing need for high-quality Voice Transformation algorithms and systems
processing synthetic or natural speech signals.
Voice Transformation aims at the control of non-linguistic information of
speech signals such as voice quality and voice individuality. A great deal
of interest and research in the area has been devoted to the design and
development of mapping functions and modifications for vocal tract
configuration and basic prosodic features.
However, high quality Voice Transformation systems that create effective
mapping functions for vocal tract, excitation signal, and speaking style
and whose modifications take into account the interaction of source and
filter during voice production, are still lacking.
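Purely as an illustration of the notion of a "mapping function" discussed above (and not a method prescribed by this call), the following toy sketch fits a linear spectral mapping between time-aligned source and target feature frames by least squares, using numpy; real voice transformation systems use far richer models (GMMs, neural networks, ...) and genuine acoustic features such as MFCCs or line spectral frequencies.

# Toy "mapping function" for voice transformation: a linear map from source to
# target spectral feature frames, fitted by least squares on aligned training
# frames. The data below are synthetic stand-ins for real aligned features.
import numpy as np

rng = np.random.default_rng(0)
dim, n_frames = 20, 500

X = rng.normal(size=(n_frames, dim))                        # source frames (training)
true_A = rng.normal(size=(dim, dim))
Y = X @ true_A + 0.05 * rng.normal(size=(n_frames, dim))    # aligned target frames

A, *_ = np.linalg.lstsq(X, Y, rcond=None)                   # fit Y ~ X A (the mapping)

X_new = rng.normal(size=(10, dim))                          # new source frames at conversion time
Y_converted = X_new @ A                                     # converted (target-like) frames
print("relative training error:", np.linalg.norm(X @ A - Y) / np.linalg.norm(Y))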
We invite researchers to submit original papers describing new approaches
in all areas related to Voice Transformation including, but not limited to,
the following topics:
* Preprocessing for Voice Transformation
(alignment, speaker selection, etc.)
* Speech models for Voice Transformation
(vocal tract, excitation, speaking style)
* Mapping functions
* Evaluation of Transformed Voices
* Detection of Voice Transformation
* Cross-lingual Voice Transformation
* Real-time issues and embedded Voice Transformation Systems
* Applications
The call for paper is also available at:
http://www.ewh.ieee.org/soc/sps/tap/sp_issue/VoiceTransformationCFP.pdf
Prospective authors are required to follow the Information for Authors for
manuscript preparation of the IEEE Transactions on Audio, Speech, and
Language Processing Signal Processing at
http://www.signalprocessingsociety.org/periodicals/journals/taslp-author-information/
Manuscripts will be peer reviewed according to the standard IEEE process.
Schedule:
Submission deadline: May 10, 2009
Notification of acceptance: September 30, 2009
Final manuscript due: October 30, 2009
Publication date: January 2010
Lead Guest Editor:
Yannis Stylianou, University of Crete, Crete, Greece
yannis@csd.uoc.gr
Guest Editors:
Tomoki Toda, Nara Inst. of Science and Technology, Nara, Japan
tomoki@is.naist.jp
Chung-Hsien Wu, National Cheng Kung University, Tainan, Taiwan
chwu@csie.ncku.edu.tw
Alexander Kain, Oregon Health & Science University, Portland Oregon, USA
kaina@ohsu.edu
Olivier Rosec, Orange-France Telecom R&D, Lannion, France
olivier.rosec@orange-ftgroup.com

Back to Top

9 . Future Speech Science and Technology Events

9-1 . (2009-04-2) Seminaires du GIPSA (french)


 Jeudi 2 avril 2009, 13h30 – Séminaire externe
========================================
Athanassios KATSAMANIS
Computer Vision, Speech Communication & Signal Processing Group
National Technical University of Athens

Titre à préciser

Résumé à venir

Salle de réunion du Département Parole et Cognition (B314)
3ème étage Bâtiment B ENSE3
961 rue de la Houille Blanche
Domaine Universitaire

Jeudi 23 avril 2009, 15h30 – Séminaire externe – Attention horaire inhabituel !!
========================================
Paolo ZEDDA
Conservatoire National Supérieur de Musique et de Danse de Paris
Conservatoire d'Alfortville/Paris

Langue chantée et spéculations phonétiques

La langue chantée nous aide à « comprendre » le fonctionnement articulatoire d'une langue. Elle peut éclaircir et parfois même rectifier quelques notions fondamentales désormais consacrées par la linguistique et la phonétique. La gymnastique articulatoire exercée par les variantes régionales et individuelles (sociolectes et idiolectes) influence directement la qualité de l'émission parlée (et éventuellement chantée !) et facilite ou entrave l'apprentissage phonétique d'une langue « seconde ». Ce dernier peut être l'occasion pour un « bilan de diction » où la langue chantée, grâce à sa capacité de ralentir et mettre en évidence le débit articulatoire, nous aide à trouver un système allophonique de bonne diction qui, loin du danger idéologique des approches puristes, facilite un apprentissage personnalisé de nuances phonétiques d'une langue, permettant aussi un fonctionnement optimal de l'appareil vocal.

web site: http://zeddap.club.fr/paolozsite
Recommended preliminary reading: http://acedle.u-strasbg.fr/article.php3?id_article=467

Meeting room of the Département Parole et Cognition (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire

Thursday 30 April 2009, 1:30 pm – External seminar
========================================
Franz CHOULY
INRIA, Rocquencourt

Numerical solution of the RNS/P equations and simulation of fluid flows

Meeting room of the Département Parole et Cognition (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire



Back to Top

9-2 . (2009-04-19) ICASSP 2009 Taipei, Taiwan

IEEE International Conference on Acoustics, Speech, and Signal Processing

http://icassp09.com

Sponsored by IEEE Signal Processing Society

April 19 - 24, 2009

Taipei International Convention Center

Taipei, Taiwan, R.O.C.

 

The 34th International Conference on Acoustics, Speech, and Signal Processing (ICASSP) will be held at the Taipei International Convention Center in Taipei, Taiwan, April 19 - 24, 2009. The ICASSP meeting is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 50 lecture and poster sessions on:

 

Audio and electroacoustics

 

Bio imaging and signal processing

 

Design and implementation of signal processing systems

 

Image and multidimensional signal processing

 

Industry technology tracks

 

Information forensics and security

 

Machine learning for signal processing

 

Multimedia signal processing

 

Sensor array and multichannel systems

 

Signal processing education

 

Signal processing for communications

 

Signal processing theory and methods

 

Speech and language processing

 

Taiwan: The Ideal Travel Destination. Taiwan, also referred to as Formosa – the Portuguese word for "graceful" – is situated on the western edge of the Pacific Ocean off the southeastern coast of mainland Asia, across the Taiwan Strait from Mainland China. To the north lie Okinawa and the main islands of Japan, and to the south is the Philippines. ICASSP 2009 will be held in Taipei, a city that blends traditional culture and cosmopolitan life. As the political, economic, educational, and recreational center of Taiwan, Taipei offers a dazzling array of cultural sights not seen elsewhere, including exquisite food from every corner of China and the world. You and your entire family will be able to fully experience and enjoy this unique city and island. Prepare yourself for the trip of your dreams, as Taiwan has it all: fantastic food, a beautiful ocean, stupendous mountains and lots of sunshine!

 

Submission of Papers: Prospective authors are invited to submit full-length, four-page papers, including figures and references, to the ICASSP Technical Committee. All ICASSP papers will be handled and reviewed electronically. The ICASSP 2009 website www.icassp09.com will provide you with further details. Please note that the submission dates for papers are strict deadlines.

 

Tutorial and Special Session Proposals: Tutorials will be held on April 19 and 20, 2009. Brief proposals should be submitted by August 4, 2008, to Tsuhan Chen at tutorials@icassp09.com and must include title, outline, contact information, biography and selected publications for the presenter, a description of the tutorial, and material to be distributed to participants. Special sessions proposals should be submitted by August 4, 2008, to Shih-Fu Chang at specialsessions@icassp09.com and must include a topical title, rationale, session outline, contact information, and a list of invited speakers. Tutorial and special session authors are referred to the ICASSP website for additional information regarding submissions.

 

Important Dates

Tutorial Proposals Due: August 4, 2008
Special Session Proposals Due: August 4, 2008
Notification of Special Session & Tutorial Acceptance: September 8, 2008
Submission of Regular Papers: September 29, 2008
Notification of Acceptance (by email): December 15, 2008
Author's Registration Deadline: February 2, 2009

 

 

 

Organizing Committee

General Chair: Lin-shan Lee, National Taiwan University
General Vice-Chair: Iee-Ray Wei, Chunghwa Telecom Co., Ltd.
Secretaries General: Tsungnan Lin, National Taiwan University; Fu-Hao Hsing, Chunghwa Telecom Co., Ltd.
Technical Program Chairs: Liang-Gee Chen, National Taiwan University; James R. Glass, Massachusetts Institute of Technology
Technical Program Members: Petar Djuric, Stony Brook University; Joern Ostermann, Leibniz University Hannover; Yoshinori Sagisaka, Waseda University
Plenary Sessions: Soo-Chang Pei (Chair), National Taiwan University; Hermann Ney (Co-chair), RWTH Aachen
Special Sessions: Shih-Fu Chang (Chair), Columbia University; Lee Swindlehurst (Co-chair), University of California, Irvine
Tutorial Chair: Tsuhan Chen, Carnegie Mellon University
Publications Chair: Homer Chen, National Taiwan University
Publicity Chair: Chin-Teng Lin, National Chiao Tung University
Finance Chair: Hsuan-Jung Su, National Taiwan University
Local Arrangements Chairs: Tzu-Han Huang, Chunghwa Telecom Co., Ltd.; Chong-Yung Chi, National Tsing Hua University; Jen-Tzung Chien, National Cheng Kung University
Conference Management: Conference Management Services

Back to Top

9-3 . (2009-04-22) MEDAR 2009

MEDAR 2009
http://www.medar.info/conference/index.php
2nd International Conference on Arabic Language Resources and Tools
Held under the auspices of the Egyptian Minister of CIT, His Excellency Dr. Tarek Kamel
Grand Hyatt Cairo
Cairo, Egypt
22 - 23 April, 2009

Online Registration is now OPEN
http://www.medar.info/conference/registration.php


Important dates

  • 9 March 2009: Notification of acceptance
  • 15 March 2009: Early registration deadline
  • 6 April 2009: Final versions for the proceedings


Conference Schedule

  • 21 April 2009: Tutorials at the Faculty of Computers and Information, Cairo University (FCI-CU)
  • 22-23 April 2009: Conference at the Grand Hyatt Cairo

Contact: medar@elda.org

Back to Top

9-4 . (2009-04-25) CfP 1st Young Researchers Workshop on Speech Technology

 

The 1st Young Researchers Workshop on Speech Technology, YRWST 2009 (http://muster.ucd.ie/YRWST/index.html), will be held at University College Dublin, Dublin, Ireland, on 25 April 2009. The workshop will be hosted by the UCD School of Computer Science and Informatics (CSI) and is held in conjunction with the CLUKI Research Colloquium taking place in Dublin on 23-24 April 2009 (http://www.cngl.ie/cluki/).

 

Important Dates

Full Paper Submission: 20 March 2009, 23:59 IST
Notification of Acceptance: 30 March 2009
Registration Deadline: 20 April 2009, 17:00 IST
Workshop Dates: 25 April 2009

 

1st Young Researchers Workshop on Speech Technology 

The aim of the workshop is for PhD students to present their current work, meet other PhD students in the same field and get feedback on their progress to date.

There are two keynote speakers, who will both talk about future trends in speech technology. One talk will focus on speech synthesis (Prof. Nick Campbell) and the other will focus on speech recognition (TBA).

To this end, we extend a special welcome to authors of papers on novel and emerging areas of research in speech technology.

Topics of interest include, but are not limited to:

  • Automatic Speech Recognition
  • Speech Synthesis
  • Corpus Construction and Annotation
  • Speaker Identification
  • Multilingual Speech Technology
  • Machine Learning Applied to Speech
  • Articulatory-Acoustic Feature Extraction and Applications
  • Speech Modelling
  • Irish and Hiberno Speech
  • Natural Language Processing

Submission Guidelines 

The workshop has two types of submission formats: full 4-page papers and short 1-page papers for work-in-progress. Prospective authors are invited to submit papers in either format using the template provided on the YRWST website (link provided above) to either of the contact emails provided here by 20 March 2009.

Submissions have to be sent by email to Dr. Peter Cahill (peter.cahill@ucd.ie) or Dr. Julie Mauclair (julie.mauclair@ucd.ie). 

All papers will be reviewed by at least three members of the Programme Committee. Following review, authors will be informed whether their paper has been accepted and will receive feedback from the reviews.
The Programme Committee will select papers containing original work and encourages work-in-progress submissions from authors.

Peter Cahill and Julie Mauclair
General Chairs


Back to Top

9-5 . (2009-05-18) 3rd Advanced Voice Function Assessment International Workshop (AVFA2009)

3rd Advanced Voice Function Assessment International Workshop (AVFA2009)

Madrid (Spain), 18th - 20th May 2009

http://www.avfa09.upm.es

     This is the first Call for Papers and Posters for the 3rd Advanced Voice Function Assessment International Workshop (AVFA2009), which will be held from May 18th to 20th at the Universidad Politécnica de Madrid, Spain.

Motivation

    Speech is the most important means of communication among humans, resulting from a complex interaction between vocal fold vibration at the larynx and voluntary movements of the articulators (i.e., mouth, tongue, velum, jaw, etc.). The function of voice, however, is not limited to speech communication. It also conveys emotions, expresses personality features and reflects situations of stress or pathology. Moreover, it has an aesthetic value in many different professional activities, affecting salesmen, managers, lawyers, singers, actors, etc.

     Although research in speech science has traditionally favoured areas such as synthesis, recognition or speaker verification, these facts motivate the current emergence of a new research area devoted to voice function assessment.

     AVFA2009 aims at fostering interdisciplinary collaboration and interactions among researchers in voice assessment beyond the framework of COST Action 2103, thus reaching the whole scientific community.

Topics

     Topics of interest include, but are not limited to: 

  • Automatic detection of voice disorders
  • Automatic assessment & rating of voice quality
  • New strategies for parameterization and modelling normal and pathological voices (biomechanical-based parameters, chaos modelling, etc.)
  • Databases of vocal disorders
  • Inverse filtering
  • Signal processing for remote diagnosis
  • Speech enhancement for pathological & oesophageal voices
  • Objective parameters extraction from vocal fold images using videolaryngoscopy, videokymography, fMRI and other emerging techniques
  • Multi-modal analysis of disordered speech
  • Robust pitch extraction algorithms for pathological & oesophageal voices
  • Emotions in speech
  • Speaker adaptation
  • Voice Physiology and Biomechanics
  • Modelling of Voice Production
  • Diagnosis and Evaluation Protocols
  • Substitution Voices
  • Evaluation of Clinical Treatments
  • Analysis of Oesophageal Voices

Submission

    Prospective authors are asked to submit electronically a preliminary version of their full paper, with a maximum length of 4 pages including figures and tables, in English. Preliminary papers should be submitted as PDF documents, formatted according to the linked template, by the 15th of January. The submitted documents should include the title and authors' names, affiliations and addresses. In addition, the e-mail address and phone number of the corresponding author should be given.

    Workshop proceedings will be published both on paper and on CD-ROM. Author registration to the conference is required for accepted papers to be included in the proceedings. The best papers presented at the workshop will be eligible for publication in a refereed journal.

Best student paper award

Based on the reviewers' comments and the presentation at the conference, the organizing committee will give a best student paper award. The winner will be announced at the closing ceremony of AVFA2009.

Schedule

·        Proposal due 15th January 2009

·        Notification of acceptance 15th February 2009

·        Final papers due 28th February 2009

·        Preliminary program 1st May 2009

·        Workshop 18th May – 20th May 2009

Registration and Information

Registration will be handled via the AVFA2009 web site (http://www.avfa09.upm.es). Please contact the secretariat (avfa09@ics.upm.es) for further information.

Program Committee

  • Juan Ignacio Godino Llorente, Universidad Politécnica de Madrid, Co-Chair
  • Pedro Gómez Vilda, Universidad Politécnica de Madrid, Co-Chair
  • Rubén Fraile, Universidad Politécnica de Madrid, Scientific Secretariat
  • Bartolomé Scola Yurrita, Gregorio Marañón Hospital
  • Philippe H. Dejonckere, University Medical Center Utrecht
  • Yannis Stylianou, University of Crete

Back to Top

9-6 . (2009-05-31) NAACL-HLT-09: Call for Tutorial Proposals

NAACL-HLT-09: Call for Tutorial Proposals

Proposals are invited for the Tutorial Program of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT) 2009 Conference. The conference is to be held from May 31 to June 5, 2009 in Boulder, Colorado. The tutorials will be held on Sunday, May 31.

Proposals for tutorials on all topics of computational linguistics and speech processing, such as processing for purposes of indexing and retrieval, processing for data mining, and so forth, are welcome. Especially encouraged are tutorials that educate the community about advancements in speech and natural language processing occurring in situ with contextual awareness, such as understanding speech, language or gesture in particular physical contexts.

Information on the tutorial instructor payment policy can be found at http://aclweb.org/aclwiki/index.php?title=Tutorial_teacher_payment_policy

PLEASE NOTE: Remuneration for Tutorial presenters is fixed according to the above policy and does not cover registration fees for the main conference.

SUBMISSION DETAILS

Proposals for tutorials should contain:

  1. A title and brief description of the tutorial content and its relevance to the NAACL-HLT community (not more than 2 pages).
  2. A brief outline of the tutorial structure showing that the tutorial's core content can be covered in a three-hour slot (including a coffee break). In exceptional cases six-hour tutorial slots are available as well.
  3. The names, postal addresses, phone numbers, and email addresses of the tutorial instructors, including a one-paragraph statement of their research interests and areas of expertise.
  4. A list of previous venues and approximate audience sizes, if the same or a similar tutorial has been given elsewhere; otherwise an estimate of the audience size.
  5. A description of special requirements for technical equipment (e.g., internet access).

Proposals should be submitted by electronic mail, in plain ASCII text no later than January 15, 2009 to tutorials.hlt09 "at" gmail "dot" com. The subject line should be: "NAACL HLT 2009: TUTORIAL PROPOSAL".

PLEASE NOTE:

  1. Proposals will not be accepted by regular mail or fax, only by email to: tutorials.hlt09 "at" gmail "dot" com.
  2. You will receive an email confirmation from us that your proposal has been received. If you do not receive this confirmation 24 hours after sending the proposal, please contact us personally using all the following emails: ciprianchelba "at" google "dot" com,
    kantor "at" scils "dot" rutgers "dot" edu, and
    roark "at" cslu "dot" ogi "dot" edu.

TUTORIAL SPEAKER RESPONSIBILITIES

Accepted tutorial speakers will be notified by February 1, 2009, and must then provide abstracts of their tutorials for inclusion in the conference registration material by March 1, 2009. The description should be in two formats: an ASCII version that can be included in email announcements and published on the conference web site, and a PDF version for inclusion in the electronic proceedings (detailed instructions will be given). Tutorial speakers must provide tutorial materials, at least containing copies of the course slides as well as a bibliography for the material covered in the tutorial, by April 15, 2009.

IMPORTANT DATES

  • Submission deadline for tutorial proposals: January 15, 2009
  • Notification of acceptance: February 1, 2009
  • Tutorial descriptions due: March 1, 2009
  • Tutorial course material due: April 15, 2009
  • Tutorial date: May 31, 2009

TUTORIALS CO-CHAIRS

  • Ciprian Chelba, Google
  • Paul Kantor, Rutgers
  • Brian Roark, Oregon Health & Science University
     

Please send inquiries concerning NAACL-HLT-09 tutorials to tutorials.hlt09 "at" gmail "dot" com  

Back to Top

9-7 . (2009-05-31) Call for Workshop proposals EACL 2009, NAACL HLT 2009, ACL-IJCNLP 2009

CALL FOR WORKSHOP PROPOSALS EACL 2009, NAACL HLT 2009, AND ACL-IJCNLP 2009

 

Joint site:  http://www.eacl2009.gr/conference/callforworkshops

The Association for Computational Linguistics invites proposals for
workshops to be held in conjunction with one of the three flagship
conferences sponsored in 2009 by the Association for Computational
Linguistics: ACL-IJCNLP 2009, EACL 2009, and NAACL HLT 2009.  We solicit
proposals on any topic of interest to the ACL community. Workshops will
be held at one of the following conference venues:

EACL 2009 is the annual meeting of the European chapter of the ACL. The
conference will be held in Athens, Greece, March 30-April 3 2009;
workshops March 30-31.

NAACL HLT 2009 is the annual meeting of the North American chapter of
the ACL.  It continues the inclusive tradition of encompassing relevant
work from the natural language processing, speech and information
retrieval communities.  The conference will be held in Boulder,
Colorado, USA, from May 31-June 5 2009; workshops will be held June 4-5.

ACL-IJCNLP 2009 combines the 47th Annual Meeting of the Association for
Computational Linguistics (ACL 2009) with the 4th International Joint
Conference on Natural Language Processing (IJCNLP).  The conference will
be held in Singapore, August 2-7 2009; workshops will be held August 6-7.


    SUBMISSION INFORMATION

In a departure from previous years, ACL-IJCNLP, EACL and NAACL HLT will
coordinate the submission and reviewing of workshop proposals for all
three ACL 2009 conferences.

Proposals for workshops should contain:

    * A title and brief (2-page max) description of the workshop topic
      and content.
    * The desired workshop length (one or two days), and an estimate
      of the audience size.
    * The names, postal addresses, phone numbers, and email addresses
      of the organizers, with one-paragraph statements of their
      research interests and areas of expertise.
    * A budget.
    * A list of potential members of the program committee, with an
      indication of which members have already agreed.
    * A description of any shared tasks associated with the workshop.
    * A description of special requirements for technical needs.
    * A venue preference specification.

The venue preference specification should list the venues at which the
organizers would be willing to present the workshop (EACL, NAACL HLT, or
ACL-IJCNLP).  A proposal may specify one, two, or three acceptable
workshop venues; if more than one venue is acceptable, the venues should
be preference-ordered.  There will be a single workshop committee,
coordinated by the three sets of workshop chairs.  This single committee
will review the quality of the workshop proposals.  Once the reviews are
complete, the workshop chairs will work together to assign workshops to
each of the three conferences, taking into account the location
preferences given by the proposers.

The ACL has a set of policies on workshops. You can find general
information on policies regarding attendance, publication, financing,
and sponsorship, as well as on financial support of SIG workshops, at
the following URL:
http://www.cis.udel.edu/~carberry/ACL/index-policies.html

Please submit proposals by electronic mail no later than September 1
2008, to acl09-workshops at acl09-workshops@uni-konstanz.de with the
subject line: "ACL 2009 WORKSHOP PROPOSAL."


    PRACTICAL ARRANGEMENTS

Notification of acceptance of workshop proposals will occur no later
than September 23, 2008.  Since the three ACL conferences will occur at
different times, the timescales for the submission and reviewing of
workshop papers, and the preparation of camera-ready copies, will be
different for the three conferences. Suggested timescales for each of
the conferences are given below.

ALL CONFERENCES
Sep 1, 2008     Workshop proposal deadline
Sep 23, 2008    Notification of acceptance of workshops

EACL 2009
Sep 30, 2008    Call for papers issued by this date
Dec 12, 2008    Deadline for paper submission
Jan 23, 2009    Notification of acceptance of papers
Feb  6, 2009    Camera-ready copies due
Mar 30-31, 2009 EACL 2009 workshops

NAACL HLT 2009
Dec 10, 2008    Call for papers issued by this date
Mar 6, 2009     Deadline for paper submissions
Mar 30, 2009    Notification of paper acceptances
Apr 12, 2009    Camera-ready copies due
June 4-5, 2009  NAACL HLT 2009 workshops

ACL-IJCNLP 2009
Feb 6, 2009     Call for papers issued by this date
May 1, 2009     Deadline for paper submissions
Jun 1, 2009     Notification of acceptances
Jun 14, 2009    Camera-ready copies due
Aug 6-7, 2009   ACL-IJCNLP 2009 Workshops

Workshop Co-Chairs:

    * Miriam Butt, EACL, University of Konstanz
    * Stephen Clark, EACL, Oxford University
    * Nizar Habash, NAACL HLT, Columbia University
    * Mark Hasegawa-Johnson, NAACL HLT, University of Illinois at
Urbana-Champaign
    * Jimmy Lin, ACL-IJCNLP, University of Maryland
    * Yuji Matsumoto, ACL-IJCNLP, Nara Institute of Science and Technology

For inquiries, send email to: acl09-workshops at
acl09-workshops@uni-konstanz.de

 

Back to Top

9-8 . (2009-05-31) CfP NAACL HLT 2009 Boulder CO, USA

Call for Papers for NAACL HLT 2009
http://www.naaclhlt2009.org
May 31 – June 5, 2009, Boulder, Colorado

Deadline for full paper submission – Monday, December 1, 2008
Deadline for short paper submission – Monday, February 9, 2009

NAACL HLT 2009 combines the Annual Meeting of the North American Association for Computational Linguistics (NAACL) with the Human Language Technology Conference (HLT) of NAACL. The conference covers a broad spectrum of disciplines working towards enabling intelligent systems to interact with humans using natural language, and towards enhancing human-human communication through services such as speech recognition, automatic translation, information retrieval, text summarization, and information extraction. NAACL HLT 2009 will feature full papers, short papers, posters, demonstrations, and a doctoral consortium, as well as pre- and post-conference tutorials and workshops.

The conference invites the submission of papers on substantial, original, and unpublished research in disciplines that could impact human language processing systems. We encourage the submission of short papers that can be characterized as a small, focused contribution, a work in progress, a negative result, an opinion piece or an interesting application note. A separate review form for short papers will be introduced this year.
NAACL HLT 2009 aims to hold two special sessions, Large Scale Language Processing and Speech Indexing and Retrieval.
Topics include, but are not limited to, the following areas, and are understood to be applied to speech and/or text:
- Large scale language processing
- Speech indexing and retrieval
- Information retrieval (including monolingual and CLIR)
- Information extraction
- Speech-centered applications (e.g., human-computer, human-robot interaction, education and learning systems, assistive technologies, digital entertainment)
- Machine translation
- Summarization
- Question answering
- Topic classification and information filtering
- Non-topical classification (e.g., sentiment/attribution/genre analysis)
- Topic clustering
- Text and speech mining
- Statistical and machine learning techniques for language processing
- Spoken term detection and spoken document indexing
- Language generation
- Speech synthesis
- Speech understanding
- Speech analysis and recognition
- Multilingual processing
- Phonology
- Morphology (including word segmentation)
- Part of speech tagging
- Syntax and parsing (e.g., grammar induction, formal grammar, algorithms)
- Word sense disambiguation
- Lexical semantics
- Formal semantics and logic
- Textual entailment and paraphrasing
- Discourse and pragmatics
- Dialog systems
- Knowledge acquisition and representation
- Evaluation (e.g., intrinsic, extrinsic, user studies)
- Development of language resources (e.g., lexicons, ontologies, annotated corpora)
- Rich transcription (automatic annotation of information structure and sources in speech)
- Multimodal representations and processing, including speech and gesture
Submission information will soon be available at: http://www.naaclhlt2009.org
General Conference Chair: Mari Ostendorf, University of Washington
Program Co-Chairs: Michael Collins, Massachusetts Institute of Technology; Shri Narayanan, University of Southern California; Douglas W. Oard, University of Maryland; Lucy Vanderwende, Microsoft Research
Local Arrangements: James Martin, University of Colorado at Boulder; Martha Palmer, University of Colorado at Boulder
Back to Top

9-9 . (2009-05-31) CfP Short papers NAACL HLT 2009

Call for Short Papers for NAACL HLT 2009
 
http://www.naaclhlt2009.org 
 
May 31 – June 5, 2009, Boulder, Colorado 
 
Deadline for short paper submission – Monday, February 9, 2009
Special sessions: Large Scale Language Processing, and Speech Indexing and Retrieval 
 
NAACL HLT 2009 combines the Annual Meeting of the North American Association for 
Computational Linguistics (NAACL) with the Human Language Technology Conference 
(HLT) of NAACL. The conference covers a broad spectrum of disciplines working 
towards enabling intelligent systems to interact with humans using natural language, and 
towards enhancing human-human communication through services such as speech 
recognition, automatic translation, information retrieval, text summarization, and 
information extraction. NAACL HLT 2009 will feature full papers, short papers, posters, 
demonstrations, and a doctoral consortium, as well as pre- and post-conference tutorials 
and workshops. 
 
The conference invites the submission of papers on substantial, original, and unpublished 
research in disciplines that could impact human language processing systems.  We 
encourage the submission of short papers that can be characterized as a small, focused 
contribution, a work in progress, a negative result, an opinion piece or an interesting 
application note. A separate review form for short papers will be introduced this year.
 
NAACL HLT 2009 aims to hold two special sessions, Large Scale Language Processing 
and Speech Indexing and Retrieval. 
 
Topics include, but are not limited to, the following areas, and are understood to be 
applied to speech and/or text: 
 
- Large scale language processing
- Speech indexing and retrieval
- Information retrieval (including monolingual and CLIR) 
- Information extraction 
- Speech-centered applications (e.g., human-computer, human-robot interaction, 
education and learning systems, assistive technologies, digital entertainment)
- Machine translation
- Summarization
- Question answering
- Topic classification and information filtering 
- Non-topical classification (e.g., sentiment/attribution/genre analysis) 
- Topic clustering 
- Text and speech mining
- Statistical and machine learning techniques for language processing
- Spoken term detection and spoken document indexing
- Language generation
- Speech synthesis
- Speech understanding
- Speech analysis and recognition
- Multilingual processing
- Phonology
- Morphology (including word segmentation)
- Part of speech tagging
- Syntax and parsing (e.g., grammar induction, formal grammar, algorithms)
- Word sense disambiguation
- Lexical semantics
- Formal semantics and logic
- Textual entailment and paraphrasing
- Discourse and pragmatics
- Dialog systems
- Knowledge acquisition and representation
- Evaluation (e.g., intrinsic, extrinsic, user studies)
- Development of language resources (e.g., lexicons, ontologies, annotated corpora) 
- Rich transcription (automatic annotation of information structure and sources in 
speech)  
- Multimodal representations and processing, including speech and gesture
 
 
 
Submission information is available at: http://www.naaclhlt2009.org 
 
 
 
General Conference Chair: 
 
Mari Ostendorf, University of Washington 
 
Program Co-Chairs: 
 
Michael Collins, Massachusetts Institute of Technology 
Shri Narayanan, University of Southern California
Douglas W. Oard, University of Maryland 
Lucy Vanderwende, Microsoft Research 
 
Local Arrangements: 
 
James Martin, University of Colorado at Boulder  
Martha Palmer, University of Colorado at Boulder 
 
Back to Top

9-10 . (2009-05-31) NAACL HLT 09 Call for Demonstrations

NAACL HLT 09 Call for Demonstrations

The NAACL HLT 2009 Program Committee invites proposals for the Demonstration Program to be held June 1-3, 2009 at the University of Colorado at Boulder. We encourage both the exhibition of early research prototypes and interesting mature systems. Commercial sales and marketing activities are not appropriate in the Demonstration Program, and should be arranged as part of the Exhibit Program. We invite proposals for two types of demonstrations:

·        Type I: theater-style, as part of the regular program

·        Type II: poster-style, where demos are to be presented on table-tops in sessions scheduled for a specific time slot.

 

Submission of a demonstration proposal on a particular topic does not preclude or require a separate submission of a paper on that topic; it is possible that some but not all of the demonstrations will illustrate concepts that are described in companion papers.

 

Areas of Interest

Areas of interest include, but are not limited to, the following types of systems, some of which have been demonstrated at recent ACL conferences:

·        End-to-end natural language processing systems

·        User interfaces for monolingual and multilingual information access systems, including retrieval, summarization, and QA engines

·        Voice search interfaces

·        Dialogue and conversational systems

·        Multimodal systems utilizing language technology

·        Language technology on mobile devices

·        Applications using embedded language technology components

·        Meeting capture and analysis systems utilizing language technology

·        Natural language processing systems for medical informatics

·        Assistive applications of language technology

·        Visualization tools

·        Software for evaluating natural language systems and components

·        Aids for teaching computational linguistics concepts

·        Software tools for facilitating computational linguistics research

·        Reusable components (parsers, generators, speech recognizers, etc.)

·        Tools that assist in the development of other NLP applications (e.g., error analysis)

      

Format for Submission

Demo proposals consist of the following parts, which should all be sent to the Demonstration Co-Chairs. Please use the main ACL paper formatting guidelines. Please note that no hardware or software will be provided by the local organizer.

·        An extended abstract of the technical content to be demonstrated, including title, authors, full contact information, references, and acknowledgements. Please indicate a Type I or Type II demo.

·        A "script outline" of the demo presentation, including accompanying narrative, and either a Web address for accessing the demo or visual aids (e.g., screenshots, snapshots, or diagrams).

The entire proposal must not be more than four pages.

Submissions Procedure

Proposals must be submitted by February 9, 2009 to the Demonstration Co-Chairs. Submissions must be received electronically. Please submit your proposals and any inquiries to:

Michael Johnston, AT&T, johnston “at” research “dot” att “dot” com
Fred Popowich, Simon Fraser University, popowich “at” sfu “dot” ca

Submissions will be evaluated on the basis of their relevance to computational linguistics, innovation, scientific contribution, presentation, as well as potential logistical constraints.

Accepted submissions will be allocated four pages in the Companion Volume to the Proceedings of the Conference.

 

Further Details

Further details on the date, time, and format of the demonstration session(s) will be determined and provided at a later date. Please send any inquiries to the demonstration co-chairs at the email addresses listed above.

 

Important Dates

February 9, 2009: Submission deadline
March 27, 2009: Notification of acceptance
April 6, 2009: Submission of final demo-related literature
June 1-3, 2009: Conference

 

All submissions or camera-ready copies are due by 11:59pm EST on the dates specified above.

Back to Top

9-11 . (2009-06-04) CfP NAACL Workshop on Computational Approaches to Linguistic Creativity

Second Call For Papers

NAACL Workshop on Computational Approaches to Linguistic Creativity (CALC 2009)
Boulder, Colorado, June 4, 2009
http://aclweb.org/aclwiki/index.php?title=CALC-09

It is generally agreed upon that "linguistic creativity" is a unique property of human language. Some claim that linguistic creativity is expressed in our ability to combine known words in a new sentence, others refer to our skill to express thoughts in figurative language, and yet others talk about syntactic recursion and lexical creativity. For the purpose of this workshop, we treat the term "linguistic creativity" to mean "creative language usage at different levels", from the lexicon to syntax to discourse and text (see also topics, below).

The recognition of instances of linguistic creativity and the computation of their meaning constitute one of the most challenging problems for a variety of Natural Language Processing tasks, such as machine translation, text summarization, information retrieval, question answering, and sentiment analysis. Computational systems incorporating models of linguistic creativity operate on different types of data (including written text, audio/speech/sound, and video/images/gestures). New approaches might combine information from different modalities. Creativity-aware systems will improve the contribution Computational Linguistics has to offer to many practical areas, including education, entertainment, and engineering.

Within the scope of the workshop, the event is intended to be interdisciplinary. Besides contributions from an NLP perspective, we also welcome the participation of researchers who deal with linguistic creativity from different perspectives, including psychology, neuroscience, or human-computer interaction.

Topics
======

We are particularly interested in work on the automatic detection, classification, understanding, or generation of:

* neologisms;
* figurative language, including metaphor, metonymy, personification, idioms;
* new or unconventional syntactic constructions ("May I serve who's next?") and constructions defying traditional parsers (e.g. gapping: "Many words were spoken, and sentiments expressed");
* indirect speech acts (such as curses, insults, sarcasm and irony);
* verbally expressed humor;
* poetry and fiction;
* and other phenomena illustrating linguistic creativity.

Depending on the state of the art of approaches to the various phenomena and languages, preference will be given to work on deeper processing (e.g., understanding, goal-driven generation) rather than shallow approaches (e.g., binary classification, random generation). We also welcome descriptions and discussions of:

* computational tools that support people in using language creatively (e.g. tools for computer-assisted creative writing, intelligent thesauri);
* computational and/or cognitive models of linguistic creativity;
* metrics and tools for evaluating the performance of creativity-aware systems;
* specific application scenarios of computational linguistic creativity;
* design and implementation of creativity-aware systems.

Related topics, including corpora collection, elicitation, and annotation of creative language usage, will also be considered, as long as their relevance to automatic systems is clearly pointed out.

Invited Speaker
===============

Nick Montfort, MIT

Submissions
===========

Submissions should describe original, unpublished work. Papers are limited to 8 pages. The style files can be found here: [http://clear.colorado.edu/NAACLHLT2009/stylefiles.html]. No author information should be included in the papers, since reviewing will be blind. Papers not conforming to these requirements are subject to rejection without review. Papers should be submitted via START [https://www.softconf.com/naacl-hlt09/CALC2009/] in PDF format.

We encourage submissions from everyone. For those who are new to ACL conferences and workshops, or with special needs, we are planning to set up a lunch mentoring program. Let us know if you are interested. Also, a limited number of student travel grants might become available, intended for individuals with minority background and current residents of countries where conference travel funding is usually hard to find.

Important Dates
===============

Submission Deadline: Feb 27, 2009
Notification Due: Mar 30, 2009
Final Version Due: Apr 12, 2009
Workshop: Jun 04, 2009

Organizers
==========

* Anna Feldman, Montclair State University (anna.feldman@montclair.edu)
* Birte Loenneker-Rodman, University of Hamburg, Germany (birte.loenneker@uni-hamburg.de)

Program Committee
=================

* Shlomo Argamon, Illinois Institute of Technology;
* Roberto Basili, University of Roma, Italy;
* Amilcar Cardoso, University of Coimbra, Portugal;
* Afsaneh Fazly, University of Toronto, Canada;
* Eileen Fitzpatrick, Montclair State University;
* Pablo Gervas, Universidad Complutense de Madrid, Spain;
* Sam Glucksberg, Princeton University;
* Jerry Hobbs, ISI, Marina del Rey;
* Sid Horton, Northwestern University;
* Diana Inkpen, University of Ottawa, Canada;
* Mark Lee, Birmingham, UK;
* Hugo Liu, MIT;
* Xiaofei Lu, Penn State;
* Ruli Manurung, University of Indonesia;
* Katja Markert, University of Leeds, UK;
* Rada Mihalcea, University of North Texas;
* Anton Nijholt, University of Twente, The Netherlands;
* Andrew Ortony, Northwestern University;
* Vasile Rus, The University of Memphis;
* Richard Sproat, Oregon Health and Science University;
* Gerard Steen, Vrije Universiteit, Amsterdam, The Netherlands;
* Carlo Strapparava, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy;
* Juergen Trouvain, Saarland University, Germany.
Back to Top

9-12 . (2009-06-03) 7th International Workshop on Content-Based Multimedia Indexing

7th International Workshop on Content-Based Multimedia Indexing

Following the six successful previous events (Toulouse 1999, Brescia 2001, Rennes 2003, Riga 2005, Bordeaux 2007, London 2008), the 2009 International Workshop on Content-Based Multimedia Indexing (CBMI) will be held on June 3-5, 2009 in the picturesque city of Chania, on the island of Crete, Greece. It will be organized by the Image, Video and Multimedia Laboratory of the National Technical University of Athens. CBMI 2009 aims at bringing together the various communities involved in the different aspects of content-based multimedia indexing, such as image processing and information retrieval, with current industrial trends and developments. CBMI 2009 is supported by IEEE, EURASIP and the University of Athens. The technical program of CBMI 2009 will include invited plenary talks, special sessions as well as regular sessions with contributed research papers.

Topics of interest include, but are not limited to:

Multimedia indexing and retrieval (image, audio, video, text)
Matching and similarity search
Construction of high level indices
Multimedia content extraction
Identification and tracking of semantic regions in scenes
Multi-modal and cross-modal indexing
Content-based search
Multimedia data mining
Metadata generation, coding and transformation
Large scale multimedia database management
Summarisation, browsing and organization of multimedia content
Presentation and visualization tools
User interaction and relevance feedback
Personalization and content adaptation
Evaluation and metrics

Paper Submission

Prospective authors are invited to submit full papers at the conference web site: http://www.cbmi2009.org/submission. Style files (Latex and Word) will be provided for the convenience of the authors.

Important Dates

Submission of full papers: January 8, 2009
Notification of acceptance: February 23, 2009
Submission of camera-ready papers: March 13, 2009
Early registration due: March 13, 2009
Main Workshop: June 3-5, 2009

Venue

CBMI 2009 will be hosted at KAM - Mediterranean Centre of Architecture, Chania, on the island of Crete, one of the most exciting Greek destinations. KAM was established by the Chania municipality in 1996 and has been located since 2002 in the Great Arsenali, at the old port of Chania.

 

Back to Top

9-13 . (2009-06-05) Nasal 2009: Nasality in phonetics and phonology (french)

**********************************************
NASAL2009
SECOND CALL FOR PAPERS
http://w3.umh.ac.be/~nasal/Workshop/appel.html
**********************************************


The Praxiling team (UMR 5267 CNRS, Université Paul Valéry, Montpellier 3) and
the Laboratoire des Sciences de la Parole of the Académie Universitaire
Wallonie-Bruxelles (Université de Mons-Hainaut) are organizing an international
workshop devoted to nasality in phonetics and phonology.

The workshop will take place on Friday 5 June 2009, from 9:00 am to 6:30 pm, in
the Grand Amphithéâtre of the Délégation régionale du CNRS, 1919 route de Mende,
F-34293 Montpellier cedex 5.

The aim of this international workshop is to allow researchers from all over
the world to meet and exchange views on their recent work on nasality. Any
proposal concerning nasality is welcome, in particular work on: production
(articulatory measurements, aerodynamic studies, acoustic analyses, etc.),
perception, phonological aspects, phonetic universals, modelling,
under-described languages, pathological and clinical aspects, language
acquisition and second language learning. Particular attention will be given to
papers addressing questions that cut across the disciplinary fields listed
above: multi-instrumentation, links between production and perception,
cross-language comparisons, relations between the organization of phonological
systems and phonetic constraints, similarities and differences between L1
acquisition and L2 learning, etc.


Invited speakers
Patrice S. Beddor, University of Michigan, USA
Didier Demolin, Université Libre de Bruxelles, Belgium
John Hajek, University of Melbourne, Australia
Ian Maddieson, University of Albuquerque, New Mexico, USA
Alain Marchal, Université d'Aix-en-Provence, France
Jacqueline Vaissière, Université de Paris III, France


Scientific committee
Pierre Badin, Gipsa-Lab, France
Nick Clements, Université de Paris III, France
Bernard Harmegnies, Université de Mons-Hainaut, Belgium
Sarah Hawkins, University of Cambridge, UK
Marie Huffman, State University of New York Stony Brook, USA
John Kingston, University of Massachusetts at Amherst, USA
Christine Matyear, University of Texas at Austin, USA
John Ohala, University of California at Berkeley, USA
Daniel Recasens, Universitat Autonoma de Barcelona, Spain
Ryan Shosted, University of Illinois at Urbana-Champaign, USA
Maria Josep Solé, Universitat Autonoma de Barcelona, Spain
Nathalie Vallée, Gipsa-Lab, France
Doug Whalen, Haskins Laboratories, USA


Submission deadline
15 February 2009

How to submit
Send a message with the complete contact details of the first author and the
names of any other authors to nasal2009@umh.ac.be, attaching an anonymous
paper of at most four A4 pages as a pdf file.
A Word template can be downloaded from our website.

Please also note that all presenters will be invited to submit a long version
of their paper (50,000 characters) for possible publication in a book to
appear with an international publisher. Deadline for submission of these long
papers: around 14 September 2009.

For more information, visit our website:
http://w3.umh.ac.be/~nasal/Workshop/appel.html


For the organizing committee,
V. Delvaux
FNRS Research Associate
Laboratoire de Phonétique
Service de Métrologie et Sciences du Langage
Université de Mons-Hainaut
18, Place du Parc
7000 Mons
Belgium
+3265373140





Back to Top

9-14 . (2009-06-15) TrebleCLEF Summer School Pisa Italy

TrebleCLEF Summer School on Multilingual Information Access
http://www.trebleclef.eu/summerschool.php
Santa Croce in Fossabanda, Pisa, Italy, 15-19 June 2009

Objectives

The aim of the Summer School is to give participants a grounding in the core topics that constitute the multidisciplinary area of Multilingual Information Access (MLIA). The School is intended for advanced undergraduate and post-graduate students, post-doctoral researchers plus academic and industrial researchers and system developers with backgrounds in Computer Science, Information Science, Language Technologies and related areas. The focus of the school will be on "How to build effective multilingual information retrieval systems and how to evaluate them".

Programme

The programme of the school will cover the following areas:

• Multilingual Text Processing
• Cross-Language Information Retrieval
• Content and Text-based Image Retrieval, including multilingual approaches
• Cross-language Speech and Video Retrieval
• System Architectures and Multilinguality
• Information Extraction in a Multilingual Context
• Machine Translation for Multilingual Information Processing
• Interactive Aspects of Cross-Language Information Retrieval
• Evaluation for Multilingual Systems and Components

An optional student mentoring session where students can present and discuss with lecturers their research ideas will also be organised.

Location and Dates

The Summer School will be held 15-19 June 2009 in the beautiful ex-convent Santa Croce in Fossabanda, Pisa (http://www.fossabanda.it/). Santa Croce provides the perfect setting for study and discussions in a peaceful, relaxed atmosphere and is just a short walk from the town centre and the famous Piazza dei Miracoli with its Leaning Tower.

Accommodation and Registration

A maximum of 40 registrations will be accepted. Tuition fees are set at 200 Euros up to 30 April and 350 Euros after this date. Tuition fees cover all courses and lectures, course material, lunch and coffee breaks during the School, the Welcome Reception on the evening of Sunday 14 June, and the Social Dinner on Monday 15 June. Accommodation will be on the School site at Santa Croce in Fossabanda.

Financial Support for Students

A number of grants will be made available by TrebleCLEF and by the DELOS Association covering accommodation costs. Students wishing to receive a grant must submit a brief application (maximum 1 page) explaining why attendance at the school would be important for them. The application must be supported by a letter of reference from the student's advisor / supervisor or equivalent.

More information

Further details, including the programme of lectures and information on how to register, can be found at http://www.trebleclef.eu/summerschool.php or contact Carol Peters (carol.peters@isti.cnr.it).
Back to Top

9-15 . (2009-06-21) CfP Specom 2009- St Petersburg Russia

SPECOM 2009 - FINAL CALL FOR PAPERS

    13-th International Conference "Speech and Computer"
                             21-25 June 2009
     Grand Duke Vladimir's palace, St. Petersburg, Russia
                      http://www.specom.nw.ru

(!) Due to many requests the submission deadline has been postponed to Monday, February 9, 2009 (!) 

Organized by St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)

Dear Colleagues, we are pleased to invite you to the 13th International Conference on Speech and Computer, SPECOM'2009, which will be held on June
21-25, 2009 in St. Petersburg. The global aim of the conference is to discuss state-of-the-art problems and recent achievements in Signal Processing and
Human-Computer Interaction related to speech technologies. Main topics of SPECOM'2009 are:
- Signal processing and feature extraction
- Multimodal analysis and synthesis
- Speech recognition and understanding
- Natural language processing
- Spoken dialogue systems
- Speaker and language identification
- Text-to-speech systems
- Speech perception and speech disorders
- Speech and language resources
- Applications for human-computer interaction

The official language of the event is English. Full papers up to 6 pages will be published in printed and electronic proceedings with ISBN.

Important Dates:
- Submission of full papers: February 1, 2009 (extended)
- Notification of acceptance: March 1, 2009
- Submission of final papers: March 20, 2009
- Early registration: March 20, 2009
- Conference dates: June 21-25, 2009

Scientific Committee:
Andrey Ronzhin, Russia (conference chairman)
Niels Ole Bernsen, Denmark
Denis Burnham, Australia
Jean Caelen, France
Christoph Draxler, Germany
Thierry Dutoit, Belgium
Hiroya Fujisaki, Japan
Sadaoki Furui, Japan
Jean-Paul Haton, France
Ruediger Hoffmann, Germany
Dimitri Kanevsky, USA
George Kokkinakis, Greece
Steven Krauwer, Netherlands
Lin-shan Lee, Taiwan
Boris Lobanov, Belarus
Benoit Macq, Belgium
Jury Marchuk, Russia
Roger Moore, UK
Heinrich Niemann, Germany
Rajmund Piotrowski, Russia
Louis Pols, Netherlands
Rodmonga Potapova, Russia
Josef Psutka, Czech Republic
Lawrence Rabiner, USA
Gerhard Rigoll, Germany
John Rubin, UK
Murat Saraclar, Turkey
Jesus Savage, Mexico
Pavel Skrelin, Russia
Viktor Sorokin, Russia
Yannis Stylianou, Greece
Jean E. Viallet, France
Taras Vintsiuk, Ukraine
Christian Wellekens, France

The invited speakers of SPECOM'2009 are:
- Prof. Walter Kellermann (University of Erlangen-Nuremberg, Germany), lecture "Towards Natural Acoustic Interfaces for Automatic Speech Recognition"
- Prof. Mikko Kurimo (Helsinki University of Technology, Finland), lecture "Unsupervised decomposition of words for speech recognition and retrieval"

The conference venue is the House of Scientists (the former Grand Duke Vladimir's palace), located in the very heart of the city, close to the Winter Palace
(Hermitage), the residence of the Russian emperors, and the Peter and Paul Fortress. Alongside the scientific programme, there will be ample opportunity
to become acquainted with the cultural and historical treasures of Saint Petersburg; the conference takes place
during a unique and wonderful period known as the White Nights.

Contact Information:
SPECOM'2009 Organizing Committee,
SPIIRAS, 39, 14-th line, St.Petersburg, 199178, RUSSIA
E-mail: specom@iias.spb.su
Web: http://www.specom.nw.ru 

 

 

 

Back to Top

9-16 . (2009-06-22) Summer workshop at Johns Hopkins University

                                            The Center for Language and Speech Processing

 

at Johns Hopkins University invites one page research proposals for a

NSF-sponsored, Six-week Summer Research Workshop on

Machine Learning for Language Engineering

to be held in Baltimore, MD, USA,

June 22 to July 31, 2009.

CALL FOR PROPOSALS

Deadline: Wednesday, October 15, 2008.

One-page proposals are invited for the 15th annual NSF sponsored JHU summer workshop.  Proposals should be suitable for a six-week team exploration, and should aim to advance the state of the art in any of the various fields of Human Language Technology (HLT) including speech recognition, machine translation, information retrieval, text summarization and question answering.  This year, proposals in related areas of Machine Intelligence, such as Computer Vision (CV), that share techniques with HLT are also being solicited.  Research topics selected for investigation by teams in previous workshops may serve as good examples for your proposal. (See http://www.clsp.jhu.edu/workshops.)

Proposals on all topics of scientific interest to HLT and technically related areas are encouraged.  Proposals that address one of the following long-term challenges are particularly encouraged.

Ø  ROBUST TECHNOLOGY FOR SPEECH:  Technologies like speech transcription, speaker identification, and language identification share a common weakness: accuracy degrades disproportionately with seemingly small changes in input conditions (microphone, genre, speaker, dialect, etc.), where humans are able to adapt quickly and effectively. The aim is to develop technology whose performance would be minimally degraded by input signal variations.

Ø  KNOWLEDGE DISCOVERY FROM LARGE UNSTRUCTURED TEXT COLLECTIONS: Scaling natural language processing (NLP) technologies—including parsing, information extraction, question answering, and machine translation—to very large collections of unstructured or informal text, and domain adaptation in NLP is of interest.

Ø  VISUAL SCENE INTERPRETATION: New strategies are needed to parse visual scenes or generic (novel) objects, analyzing an image as a set of spatially related components.  Such strategies may integrate global top-down knowledge of scene structure (e.g., generative models) with the kind of rich bottom-up, learned image features that have recently become popular for object detection.  They will support both learning and efficient search for the best analysis.

*  UNSUPERVISED AND SEMI-SUPERVISED LEARNING: Novel techniques that do not require extensive quantities of human annotated data to address any of the challenges above could potentially make large strides in machine performance as well as lead to greater robustness to changes in input conditions.  Semi-supervised and unsupervised learning techniques with applications to HLT and CV are therefore of considerable interest.

An independent panel of experts will screen all received proposals for suitability. Results of this screening will be communicated no later than October 22, 2008. Authors passing this initial screening will be invited to Baltimore to present their ideas to a peer-review panel on November 7-9, 2008.  It is expected that the proposals will be revised at this meeting to address any outstanding concerns or new ideas. Two or three research topics and the teams to tackle them will be selected for the 2009 workshop.

We attempt to bring the best researchers to the workshop to collaboratively pursue the selected topics for six weeks.  Authors of successful proposals typically become the team leaders.  Each topic brings together a diverse team of researchers and students.  The senior participants come from academia, industry and government.  Graduate student participants familiar with the field are selected in accordance with their demonstrated performance, usually by the senior researchers. Undergraduate participants, selected through a national search, will be rising seniors who are new to the field and have shown outstanding academic promise.

If you are interested in participating in the 2009 Summer Workshop we ask that you submit a one-page research proposal for consideration, detailing the problem to be addressed.  If your proposal passes the initial screening, we will invite you to join us for the organizational meeting in Baltimore (as our guest) for further discussions aimed at consensus.  If a topic in your area of interest is chosen as one of the two or three to be pursued next summer, we expect you to be available for participation in the six-week workshop. We are not asking for an ironclad commitment at this juncture, just a good faith understanding that if a project in your area of interest is chosen, you will actively pursue it.

Proposals should be submitted via e-mail to clsp@jhu.edu by 4PM EST on Wed, October 15, 2008.

Back to Top

9-17 . (2009-06-22) Third International Conference on Intelligent Technologies for Interactive Entertainment (Intetain 2009)

Intetain 2009, Amsterdam, 22-24th June 2009

Third International Conference on Intelligent Technologies for Interactive Entertainment

 http://intetain.org/

**********************************************************************

 

Call for Papers

 

==================

==== OVERVIEW ====

==================

The Human Media Interaction (HMI) department of the University of Twente in the Netherlands and the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (ICST) are pleased to announce the Third International Conference on Intelligent Technologies for Interactive Entertainment to be held on June 22-24, 2009 in Amsterdam, the Netherlands.

 

INTETAIN 09 intends to stimulate interaction among academic researchers and commercial developers of interactive entertainment systems. We are seeking long (full) and short (poster) papers as well as proposals for interactive demos. In addition, the conference organisers aim to hold an interactive hands-on session along the lines of the Design Garage held at INTETAIN 2005. Individuals who want to organise special sessions during INTETAIN 09 may contact the General Chair, Anton Nijholt (anijholt@cs.utwente.nl).

 

The global theme of this third edition of the international conference is “Playful interaction, with others and with the environment”.

 

Contributions may, for example, address this theme by focusing on the Supporting Device Technologies underlying interactive systems (mobile devices, home entertainment centers, haptic devices, wall screen displays, information kiosks, holographic displays, fog screens, distributed smart sensors, immersive screens and wearable devices), on the Intelligent Computational Technologies used to build the interactive systems, or by discussing the Interactive Applications for Entertainment themselves.

 

We seek novel, revolutionary, and exciting work in areas including but not limited to:

 

== Supporting Technology ==

 * New hardware technology for interaction and entertainment

 * Novel sensors and displays

 * Haptic devices

 * Wearable devices

 

== Intelligent Computational Technologies ==

 * Animation and Virtual Characters

 * Holographic Interfaces

 * Adaptive Multimodal Presentations

 * Creative language environments

 * Affective User Interfaces

 * Intelligent Speech Interfaces

 * Tele-presence in Entertainment

 * (Collaborative) User Models and Group Behavior

 * Collaborative and virtual Environments

 * Brain Computer Interaction

 * Cross Domain User Models

 * Augmented, Virtual and Mixed Reality

 * Computer Graphics & Multimedia

 * Pervasive Multimedia

 * Robots

 * Computational humor

 

== Interactive Applications for Entertainment ==

 * Intelligent Interactive Games

 * Emergent games

 * Human Music Interaction

 * Interactive Cinema

 * Edutainment

 * Urban Gaming

 * Interactive Art

 * Interactive Museum Guides

 * Evaluation

 * City and Tourism Explorers Assistants

 * Shopping Assistants

 * Interactive Real TV

 * Interactive Social Networks

 * Interactive Story Telling

 * Personal Diaries, Websites and Blogs

 * Comprehensive assisting environments for special populations

     (handicapped, children, elderly)

 * Exertion games

 

===========================

==== SUBMISSION FORMAT ====

===========================

INTETAIN 09 accepts long papers and short poster papers as well as demo proposals accompanied by a two-page extended abstract. Accepted long and short papers will be published in the new Springer series LNICST: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering. The INTETAIN 09 organisers are currently working to secure a special issue of a journal, as was done for the 2005 edition of the conference.

 

Submissions should adhere to the LNICST instructions for authors, available from the INTETAIN 09 web site.

 

== Long papers ==

Submissions of a maximum of 12 pages that describe original research work not submitted or published elsewhere. Long papers will be orally presented at the conference.

 

== Short papers ==

Submissions of a maximum of 6 pages that describe original research work not submitted or published elsewhere. Short papers will be presented with a poster during the demo and poster session at the conference.

 

== Demos ==

Researchers are invited to submit proposals for demonstrations to be held during a special demo and poster session at INTETAIN 09. For more information, see the Call for Demos below. Demo proposals may either be accompanied by a long or short paper submission, or by a two-page extended abstract describing the demo. The extended abstracts will be published in supplementary proceedings distributed at the conference.

 

=========================

==== IMPORTANT DATES ====

=========================

Submission deadline:

Monday, February 16, 2009

 

Notification:

Monday, March 16, 2009

 

Camera ready submission deadline:

Monday, March 30, 2009

 

Late demo submission deadline (extended abstract only!):

Monday, March 30, 2009

 

Conference:

June 22-24, 2009, Amsterdam, the Netherlands

 

===================

==== COMMITTEE ====

===================

General Program Chair:

Anton Nijholt, Human Media Interaction, University of Twente, the Netherlands

 

Local Chair:

Dennis Reidsma, Human Media Interaction, University of Twente, the Netherlands

 

Web Master and Publication Chair:

Hendri Hondorp, Human Media Interaction, University of Twente, the Netherlands

 

Steering Committee Chair:

Imrich Chlamtac, Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering

 

========================

==== CALL FOR DEMOS ====

========================

We actively seek proposals from both industry and academia for interactive demos to be held during a dedicated session at the conference. Demos may accompany a long or short paper. Alternatively, demos may be submitted at a later deadline with a short, two-page extended abstract explaining the demo and showing why it would be a worthwhile contribution to the INTETAIN 09 demo session.

 

== Format ==

Demo submissions should be accompanied by the following additional information:

 * A short description of the setup and demo (two paragraphs)

 * Requirements (hardware, power, network, space, sound conditions, etc., time needed for setup)

 * A sketch or photo of the setup

 

Videos showing the demonstration setup in action are very welcome.

 

== Review ==

Demo proposals will be reviewed by a review team that will take into account aspects such as novelty, relevance to the conference, coverage of topics and available resources.

 

== Topics ==

Topics for demo submissions include, but are not limited to:

 * New technology for interaction and entertainment

 * (serious) gaming

 * New entertainment applications

 * BCI

 * Human Music Interaction

 * Music technology

 * Edutainment

 * Exertion interfaces

 

============================

==== PROGRAM COMMITTEE ====

============================

Stefan Agamanolis Distance Lab, Forres, UK
Elisabeth Andre Augsburg University, Germany
Lora Aroyo Vrije Universiteit Amsterdam, the Netherlands
Regina Bernhaupt University of Salzburg, Austria
Kim Binsted University of Hawaii, USA
Andreas Butz University of Munich, Germany
Yang Cai Visual Intelligence Studio, CYLAB, Carnegie Mellon, USA
Antonio Camurri University of Genoa, Italy
Marc Cavazza University of Teesside, UK
Keith Cheverst University of Lancaster, UK
Drew Davidson CMU, Pittsburgh, USA
Barry Eggen University of Eindhoven, the Netherlands
Arjan Egges University of Utrecht, the Netherlands
Anton Eliens Vrije Universiteit Amsterdam, the Netherlands
Steven Feiner Columbia University, New York
Alois Ferscha University of Linz, Austria
Matthew Flagg Georgia Tech, USA
Jaap van den Herik University of Tilburg, the Netherlands
Dirk Heylen University of Twente, the Netherlands
Frank Kresin Waag Society, Amsterdam, the Netherlands
Antonio Krueger University of Muenster, Germany
Tsvi Kuflik University of Haifa, Israel
Markus Löckelt DFKI Saarbrücken, Germany
Henry Lowood Stanford University, USA
Mark Maybury MITRE, Boston, USA
Oscar Mayora Create-Net Research Consortium, Italy
John-Jules Meijer University of Utrecht, the Netherlands
Louis-Philippe Morency Institute for Creative Technologies, USC, USA
Florian 'Floyd' Mueller University of Melbourne, Australia
Patrick Olivier University of Newcastle, UK
Paolo Petta Medical University of Vienna, Austria
Fabio Pianesi ITC-irst, Trento, Italy
Helmut Prendinger National Institute of Informatics, Tokyo, Japan
Matthias Rauterberg University of Eindhoven, the Netherlands
Isaac Rudomin Monterrey Institute of Technology, Mexico
Pieter Spronck University of Tilburg, the Netherlands
Oliviero Stock ITC-irst, Trento, Italy
Carlo Strapparava ITC-irst, Trento, Italy
Mariet Theune University of Twente, the Netherlands
Thanos Vasilikos University of Western Macedonia, Greece
Sean White Columbia University, USA
Woontack Woo Gwangju Institute of Science and Technology, Korea
Wijnand IJsselstein University of Eindhoven, the Netherlands
Massimo Zancanaro ITC-irst, Trento, Italy


Back to Top

9-18 . (2009-06-24) DIAHOLMIA 2009: THE 13TH WORKSHOP ON THE SEMANTICS AND PRAGMATICS OF DIALOGUE

DIAHOLMIA 2009: THE 13TH WORKSHOP ON THE SEMANTICS AND PRAGMATICS OF DIALOGUE

KTH, Stockholm, Sweden, 24-26 June, 2009

The SemDial series of workshops aims to bring together researchers working on the semantics and pragmatics of dialogue in fields such as artificial intelligence, computational linguistics, formal semantics/pragmatics, philosophy, psychology, and neuroscience. DiaHolmia will be the 13th workshop in the SemDial series, and will be organized at the Department of Speech, Music and Hearing, KTH (Royal Institute of Technology). KTH is Scandinavia's largest institution of higher education in technology and is located in central Stockholm (Holmia in Latin).

WEBSITE: www.diaholmia.org

DATES AND DEADLINES:

Full 8-page papers:
Submission due: 22 March 2009
Notification of acceptance: 25 April 2009
Final version due: 7 May 2009

2-page poster or demo descriptions:
Submission due: 25 April 2009
Notification of acceptance: 7 May 2009

DiaHolmia 2009: 24-26 June 2009 (Wednesday-Friday)

SCOPE:

We invite papers on all topics related to the semantics and pragmatics of dialogues, including, but not limited to:

- common ground/mutual belief
- turn-taking and interaction control
- dialogue and discourse structure
- goals, intentions and commitments
- natural language understanding/semantic interpretation
- reference, anaphora and ellipsis
- collaborative and situated dialogue
- multimodal dialogue
- extra- and paralinguistic phenomena
- categorization of dialogue phenomena in corpora
- designing and evaluating dialogue systems
- incremental, context-dependent processing
- reasoning in dialogue systems
- dialogue management

Full papers will be in the usual 8-page, 2-column format. There will also be poster and demo presentations. The selection of posters and demos will be based on 2-page descriptions. Selected descriptions will be included in the proceedings.

Details on programme and local arrangements will be announced at a later date.

The best accepted papers will be invited to submit extended versions to Dialogue & Discourse, the new open-access journal dedicated exclusively to research on language 'beyond the single sentence' (www.dialogue-and-discourse.org).

KEYNOTE SPEAKERS:

Harry Bunt (Tilburg University, Netherlands)
Nick Campbell (ATR, Japan)
Julia Hirschberg (Columbia University, New York)
Sverre Sjölander (Linköping University, Sweden)

PROGRAMME COMMITTEE:

Jan Alexandersson, Srinivas Bangalore, Ellen Gurman Bard, Anton Benz, Johan Bos, Johan Boye, Harry Bunt, Donna Byron, Jean Carletta, Rolf Carlson, Robin Cooper, Paul Dekker, Giuseppe Di Fabbrizio, Raquel Fernández, Claire Gardent, Simon Garrod, Jonathan Ginzburg, Pat Healey, Peter Heeman, Mattias Heldner, Joris Hulstijn, Michael Johnston, Kristiina Jokinen, Arne Jönsson, Alistair Knott, Ivana Kruijff-Korbayova, Staffan Larsson, Oliver Lemon, Ian Lewin, Diane Litman, Susann Luperfoy, Colin Matheson, Nicolas Maudet, Michael McTear, Wolfgang Minker, Philippe Muller, Fabio Pianesi, Martin Pickering, Manfred Pinkal, Paul Piwek, Massimo Poesio, Alexandros Potamianos, Matthew Purver, Manny Rayner, Hannes Rieser, Laurent Romary, Alex Rudnicky, David Schlangen, Stephanie Seneff, Ronnie Smith, Mark Steedman, Amanda Stent, Matthew Stone, David Traum, Marilyn Walker and Mats Wirén

ORGANIZING COMMITTEE:

Jens Edlund
Joakim Gustafson
Anna Hjalmarsson
Gabriel Skantze 

 

 

Back to Top

9-19 . (2009-07) 6th IJCAI workshop on knowledge and reasoning in practical dialogue systems

6th WORKSHOP ON KNOWLEDGE AND REASONING IN PRACTICAL DIALOGUE SYSTEMS

The sixth IJCAI workshop on "Knowledge and Reasoning in Practical Dialogue Systems" will focus on the challenges of novel applications of practical dialogue systems. The venue for IJCAI 2009 is the Pasadena Conference Center, California, USA.

Topics addressed in the workshop include, but are not limited to, the following, with a particular focus on the challenges offered by these novel applications:

* What kinds of novel applications have a need for natural language dialogue interaction?
* How can authoring tools for dialogue systems be developed such that application designers who are not experts in natural language can make use of these systems?
* How can one easily adapt a dialogue system to a new application?
* Methods for design and development of dialogue systems.
* What are the extra constraints and resources of a dialogue system for these novel applications that might not be present in a speech- or text-only dialogue system or even traditional multi-modal interfaces?
* Representation of language resources for dialogue systems.
* The role of ontologies in dialogue systems.
* Evaluation of dialogue systems: what to evaluate and how.
* Techniques and algorithms for adaptivity in dialogue systems on various levels, e.g. interpretation, dialogue strategy, and generation.
* Robustness and how to handle unpredictability.
* Architectures and frameworks for adaptive dialogue systems.
* Requirements and methods for development related to the architecture.

This is the sixth IJCAI workshop on "Knowledge and Reasoning in Practical Dialogue Systems". The first workshop was held at IJCAI in Stockholm in 1999. The second workshop was held at IJCAI 2001 in Seattle, with a focus on multimodal interfaces. The third workshop was held in Acapulco in 2003 and focused on the role and use of ontologies in multi-modal dialogue systems. The fourth workshop was held in Edinburgh in 2005 and focused on adaptivity in dialogue systems. The fifth workshop was held in Hyderabad, India, in 2007 and focused on dialogue systems for robots and virtual humans.

Who should attend

This workshop aims at bringing together researchers and practitioners who work on the development of communication models that support robust and efficient interaction in natural language, both for commercial dialogue systems and in basic research.

It should also be of interest to anyone studying dialogue and multimodal interfaces and how to coordinate different information sources. This involves theoretical as well as practical research, e.g. empirical evaluations of usability, formalization of dialogue phenomena and development of intelligent interfaces for various applications, including such areas as robotics.

Workshop format

The workshop will be kept small, with a maximum of 40 participants. Preference will be given to active participants selected on the basis of their submitted papers.

Each paper will be given ample time for discussion, more than what is customary at a conference. We encourage contributions of a critical or comparative nature that provide fuel for discussion. We also invite people to share their experiences of implementing and coordinating knowledge modules in their dialogue systems, and of integrating dialogue components with other applications.

Important Dates

* Submission deadline: March 6, 2009
* Notification date: April 17, 2009
* Accepted paper submission deadline: May 8, 2009
* Workshop: July 2009

Submissions

Papers may be any of the following types:

* Regular papers of length 4-8 pages, for regular presentation.
* Short papers with brief results, or position papers, of up to 4 pages, for brief or panel presentation.
* Extended papers with extra details on system architecture, background theory or data presentation, of up to 12 pages, for regular presentation.

Papers should include authors' names, affiliations and full references (not anonymous submission). All papers should be formatted according to the AAAI formats: AAAI Press Author Instructions.

Submission procedure

Papers should be submitted via the web by registering at the following address: http://www.easychair.org/conferences/?conf=krpd09

Organizing Committee

Arne Jönsson (Chair)
Department of Computer and Information Science
Linköping University
S-581 83 Linköping, Sweden
tel: +46 13 281717
fax: +46 13 142231
email: arnjo@ida.liu.se

David Traum (Co-Chair)
Institute for Creative Technologies
University of Southern California
13274 Fiji Way
Marina del Rey, CA 90405 USA
tel: +1 (310) 574-5729
fax: +1 (310) 574-5725
email: traum@ict.usc.edu

Jan Alexandersson (Co-Chair)
German Research Center for Artificial Intelligence, DFKI GmbH
Stuhlsatzenhausweg 3
D-66 123 Saarbrücken
Germany
tel: +49-681-3025347
fax: +49-681-3025341
email: jan.alexandersson@dfki.de

Ingrid Zukerman (Co-Chair)
Faculty of Information Technology
Monash University
Clayton, Victoria 3800, Australia
tel: +61 3 9905-5202
fax: +61 3 9905-5146
email: ingrid@csse.monash.edu.au

Programme committee

Dan Bohus, USA
Johan Bos, Italy
Sandra Carberry, USA
Kallirroi Georgila, USA
Genevieve Gorrell, UK
Joakim Gustafson, Sweden
Yasuhiro Katagiri, Japan
Ali Knott, New Zealand
Kazunori Komatani, Japan
Staffan Larsson, Sweden
Anton Nijholt, Netherlands
Tim Paek, USA
Antoine Raux, USA
Candace Sidner, USA
Amanda Stent, USA
Marilyn Walker, UK
Jason Williams, USA

Web page: http://www.ida.liu.se/~arnjo/Ijcai09ws/

Arne Jönsson
Tel: +4613281717
Back to Top

9-20 . (2009-07-09) MULTIMOD 2009 Multimodality of communication in children: gestures, emotions, language and cognition

The Multimod 2009 conference - Multimodality of communication in children:
gestures, emotions, language and cognition is being organized jointly by
psychologists and linguists from the Universities of Toulouse (Toulouse II)
and Grenoble (Grenoble III) and will take place in Toulouse (France) from
Thursday 9th July to Saturday 11th July 2009.

The aim of the conference will be to assess research on theories, concepts
and methods relating to multimodality in children.

The invited speakers are:
- Susan Goldin-Meadow (University of Chicago, USA),
- Jana Iverson (University of Pittsburgh, USA),
- Paul Harris (Harvard University, USA),
- Judy Reilly (San Diego State University, USA),
- Gwyneth Doherty-Sneddon (University of Stirling, UK),
- Marianne Gullberg (MPI Nijmegen, The Netherlands).

We invite you to submit proposals for symposia, individual papers or posters
of original, previously unpublished research on all aspects of multimodal
communication in children, including:

- Gestures and language development, both typical and atypical
- Emotional development, both typical and atypical
- Multimodality of communication and bilingualism
- Gestural and/or emotional communication in non-human and human primates
- Multimodality of communication and didactics
- Multimodality of communication in the classroom
- Multimodality of communication and brain development
- Prosodic (emotional) aspects of language and communication development
- Pragmatic aspects of multimodality development

Please visit the conference website
http://w3.eccd.univ-tlse2.fr/multimod2009/ to find all useful Information
about submissions (individual papers, posters and symposia); the deadline
for submissions is December 15th, 2008. 

Back to Top

9-21 . (2009-08-02) ACL-IJCNLP 2009 1st Call for Papers

ACL-IJCNLP 2009 1st Call for Papers

Joint Conference of
the 47th Annual Meeting of the Association for Computational Linguistics
and
the 4th International Joint Conference on Natural Language Processing of
the Asian Federation of Natural Language Processing

August 2 - 7, 2009
Singapore

http://www.acl-ijcnlp-2009.org

Full Paper Submission Deadline:  February 22, 2009 (Sunday)
Short Paper Submission Deadline:  April 26, 2009 (Sunday)

For the first time, the flagship conferences of the Association for
Computational Linguistics (ACL) and the Asian Federation of Natural
Language Processing (AFNLP) -- the ACL and IJCNLP -- are jointly
organized as a single event. The conference will cover a broad
spectrum of technical areas related to natural language and
computation. ACL-IJCNLP 2009 will include full papers, short papers,
oral presentations, poster presentations, demonstrations, tutorials,
and workshops. The conference invites the submission of papers on
original and unpublished research on all aspects of computational
linguistics.

Important Dates:

* Feb 22, 2009    Full paper submissions due;
* Apr 12, 2009    Full paper notification of acceptance;
* Apr 26, 2009    Short paper submissions due;
* May 17, 2009    Camera-ready full papers due;
* May 31, 2009    Short Paper notification of acceptance;
* Jun 7, 2009       Camera-ready short papers due;
* Aug 2-7, 2009   ACL-IJCNLP 2009

Topics of interest:

Topics include, but are not limited to:

* Phonology/morphology, tagging and chunking, and word segmentation
* Grammar induction and development
* Parsing algorithms and implementations
* Mathematical linguistics and grammatical formalisms
* Lexical and ontological semantics
* Formal semantics and logic
* Word sense disambiguation
* Semantic role labeling
* Textual entailment and paraphrasing
* Discourse, dialogue, and pragmatics
* Language generation
* Summarization
* Machine translation
* Information retrieval
* Information extraction
* Sentiment analysis and opinion mining
* Question answering
* Text mining and natural language processing applications
* NLP in vertical domains, such as biomedical, chemical and legal text
* NLP on noisy unstructured text, such as email, blogs, and SMS
* Spoken language processing
* Speech recognition and synthesis
* Spoken language understanding and generation
* Language modeling for spoken language
* Multimodal representations and processing
* Rich transcription and spoken information retrieval
* Speech translation
* Statistical and machine learning methods
* Language modeling for text processing
* Lexicon and ontology development
* Treebank and corpus development
* Evaluation methods and user studies
* Science of annotation

Submissions:

Full Papers: Submissions must describe substantial, original,
completed and unpublished work. Wherever appropriate, concrete
evaluation and analysis should be included. Submissions will be judged
on correctness, originality, technical strength, significance,
relevance to the conference, and interest to the attendees. Each
submission will be reviewed by at least three program committee
members.

Full papers may consist of up to eight (8) pages of content, plus one
extra page for references, and will be presented orally or as a poster
presentation as determined by the program committee.  The decisions as
to which papers will be presented orally and which as poster
presentations will be based on the nature rather than on the quality
of the work. There will be no distinction in the proceedings between
full papers presented orally and those presented as poster
presentations.

The deadline for full papers is February 22, 2009 (GMT+8). Submission
is electronic using paper submission software at:
https://www.softconf.com/acl-ijcnlp09/papers

Short papers: ACL-IJCNLP 2009 solicits short papers as well. Short
paper submissions must describe original and unpublished work. The
short paper deadline is just about three months before the conference
to accommodate the following types of papers:

* A small, focused contribution
* Work in progress
* A negative result
* An opinion piece
* An interesting application nugget

Short papers will be presented in one or more oral or poster sessions,
and will be given four pages in the proceedings. While short papers
will be distinguished from full papers in the proceedings, there will
be no distinction in the proceedings between short papers presented
orally and those presented as poster presentations. Each short paper
submission will be reviewed by at least two program committee members.
The deadline for short papers is April 26, 2009 (GMT + 8).  Submission
is electronic using paper submission software at:
https://www.softconf.com/acl-ijcnlp09/shortpapers

Format:

Full paper submissions should follow the two-column format of
ACL-IJCNLP 2009 proceedings without exceeding eight (8) pages of
content plus one extra page for references.  Short paper submissions
should also follow the two-column format of ACL-IJCNLP 2009
proceedings, and should not exceed four (4) pages, including
references. We strongly recommend the use of ACL LaTeX style files or
Microsoft Word style files tailored for this year's conference, which
are available on the conference website under Information for Authors.
Submissions must conform to the official ACL-IJCNLP 2009 style
guidelines, which are contained in the style files, and they must be
electronic in PDF.

As the reviewing will be blind, the paper must not include the
authors' names and affiliations. Furthermore, self-references that
reveal the author's identity, e.g., "We previously showed (Smith,
1991) ...", must be avoided. Instead, use citations such as "Smith
previously showed (Smith, 1991) ...". Papers that do not conform to
these requirements will be rejected without review.

Multiple-submission policy:

Papers that have been or will be submitted to other meetings or
publications must provide this information at submission time. If
ACL-IJCNLP 2009 accepts a paper, authors must notify the program
chairs by April 19, 2009 (full papers) or June 7, 2009 (short papers),
indicating which meeting they choose for presentation of their work.
ACL-IJCNLP 2009 cannot accept for publication or presentation work
that will be (or has been) published elsewhere.

Mentoring Service:

ACL is providing a mentoring (coaching) service for authors from
regions of the world where English is less emphasized as a language of
scientific exchange. Many authors from these regions, although able to
read the scientific literature in English, have little or no
experience in writing papers in English for conferences such as the
ACL meetings. The service will be arranged as follows. A set of
potential mentors will be identified by Mentoring Service Chairs Ng,
Hwee Tou (NUS, Singapore) and Reeder, Florence (Mitre, USA), who will
organize this service for ACL-IJCNLP 2009. If you would like to take
advantage of the service, please upload your paper in PDF format by
January 14, 2009 for long papers and March 18 2009 for short papers
using the paper submission software for mentoring service which will
be available at conference website.

An appropriate mentor will be assigned to your paper and the mentor
will get back to you by February 8 for long papers or April 12 for
short papers, at least 2 weeks before the deadline for the submission
to the ACL-IJCNLP 2009 program committee.

Please note that this service is for the benefit of the authors as
described above. It is not a general mentoring service for authors to
improve the technical content of their papers.

If you have any questions about this service please feel free to send
a message to Ng, Hwee Tou (nght[at]comp.nus.edu.sg) or Reeder,
Florence (floreederacl[at]yahoo.com).

General Conference Chair:
Su, Keh-Yih (Behavior Design Corp., Taiwan; kysu[at]bdc.com.tw)

Program Committee Chairs:
Su, Jian (Institute for Infocomm Research, Singapore;
sujian[at]i2r.a-star.edu.sg)
Wiebe, Janyce (University of Pittsburgh, USA; janycewiebe[at]gmail.com)

Area Chairs:
Agirre, Eneko (University of Basque Country, Spain; e.agirre[at]ehu.es)
Ananiadou, Sophia (University of Manchester, UK;
      sophia.ananiadou[at]manchester.ac.uk)
Belz, Anja (University of Brighton, UK; a.s.belz[at]itri.brighton.ac.uk)
Carenini, Giuseppe (University of British Columbia, Canada;
carenini[at]cs.ubc.ca)
Chen, Hsin-Hsi (National Taiwan University, Taiwan; hh_chen[at]csie.ntu.edu.tw)
Chen, Keh-Jiann (Sinica, Taiwan; kchen[at]iis.sinica.edu.tw)
Curran, James (University of Sydney, Australia; james[at]it.usyd.edu.au)
Gao, Jian Feng (MSR, USA; jfgao[at]microsoft.com)
Harabagiu, Sanda (University of Texas at Dallas, USA, sanda[at]hlt.utdallas.edu)
Koehn, Philipp (University of Edinburgh, UK; pkoehn[at]inf.ed.ac.uk)
Kondrak, Grzegorz (University of Alberta, Canada; kondrak[at]cs.ualberta.ca)
Meng, Helen Mei-Ling (Chinese University of Hong Kong, Hong Kong;
      hmmeng[at]se.cuhk.edu.hk )
Mihalcea, Rada (University of North Texas, USA; rada[at]cs.unt.edu)
Poesio, Massimo(University of Trento, Italy; poesio[at]disi.unitn.it)
Riloff, Ellen (University of Utah, USA; riloff[at]cs.utah.edu)
Sekine, Satoshi (New York University, USA; sekine[at]cs.nyu.edu)
Smith, Noah (CMU, USA; nasmith[at]cs.cmu.edu)
Strube, Michael (EML Research, Germany; strube[at]eml-research.de)
Suzuki, Jun (NTT, Japan; jun[at]cslab.kecl.ntt.co.jp)
Wang, Hai Feng (Toshiba, China; wanghaifeng[at]rdc.toshiba.com.cn) 

Back to Top

9-22 . (2009-09) Emotion challenge INTERSPEECH 2009

Call for Papers
INTERSPEECH 2009 Emotion Challenge
Feature, Classifier, and Open Performance Comparison for
Non-Prototypical Spontaneous Emotion Recognition
Organisers:
Bjoern Schuller (Technische Universitaet Muenchen, Germany)
Stefan Steidl (FAU Erlangen-Nuremberg, Germany)
Anton Batliner (FAU Erlangen-Nuremberg, Germany)
Sponsored by:
HUMAINE Association
Deutsche Telekom Laboratories
The Challenge
The young field of emotion recognition from voice has recently gained considerable interest in Human-Machine Communication, Human-Robot Communication, and Multimedia Retrieval. Numerous studies over the last decade have tried to improve on features and classifiers. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test conditions exist for comparing performance under exactly the same conditions. Instead, a multiplicity of evaluation strategies, such as cross-validation or percentage splits without proper instance definition, prevents exact reproducibility. Further, to face more realistic use-cases, the community is in desperate need of more spontaneous and less prototypical data.
In these respects, the INTERSPEECH 2009 Emotion Challenge shall help bridge the gap between excellent research on human emotion recognition from speech and the low comparability of results: the FAU Aibo Emotion Corpus of spontaneous, emotionally coloured speech and benchmark results for the two most popular approaches will be provided by the organisers. Nine hours of speech (51 children) were recorded at two different schools. This allows for a distinct definition of test and training partitions incorporating speaker independence, as needed in most real-life settings. The corpus further provides a uniquely detailed transcription of spoken content with word boundaries, non-linguistic vocalisations, emotion labels, units of analysis, etc.
Three sub-challenges are addressed in two different degrees of difficulty by using non-prototypical five or two emotion classes (including a garbage model):
* The Open Performance Sub-Challenge allows contributors to find their own features with their own classification algorithm. However, they will have to stick to the definition of test and training sets.
* In the Feature Sub-Challenge, participants are encouraged to upload their individual best features per unit of analysis, with a maximum of 100 per contribution. These features will then be tested by the organisers with equivalent settings in one classification task, and pooled together in a feature selection process.
* In the Classifier Sub-Challenge, participants may use a large set of standard acoustic features provided by the organisers for classifier tuning.
The labels of the test set will be unknown, but each participant can upload instance predictions to receive the confusion matrix and results up to 25 times. As classes are unbalanced, the measure to optimise will be mean recall. The organisers will not take part in the sub-challenges but will provide baselines.
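For illustration, the short Python sketch below shows one way such a mean (unweighted average) recall can be computed from gold and predicted labels; the class names and the helper function are assumptions made for this example and are not part of the official evaluation tools.

    from collections import defaultdict

    def mean_recall(gold_labels, predicted_labels):
        """Unweighted average (mean) recall: every class counts equally,
        regardless of how many instances it has."""
        hits = defaultdict(int)    # correctly classified instances per class
        totals = defaultdict(int)  # all instances per class
        for gold, pred in zip(gold_labels, predicted_labels):
            totals[gold] += 1
            if gold == pred:
                hits[gold] += 1
        recalls = [hits[c] / totals[c] for c in totals]
        return sum(recalls) / len(recalls)

    # Illustrative five-class example; the label names are assumptions.
    gold = ["Anger", "Neutral", "Neutral", "Emphatic", "Positive", "Rest"]
    pred = ["Anger", "Neutral", "Emphatic", "Emphatic", "Neutral", "Rest"]
    print("Mean recall: %.3f" % mean_recall(gold, pred))

Because every class contributes equally to this average, a classifier that simply predicts the majority class scores poorly on it, which is why the measure suits the unbalanced challenge data.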
Overall, contributions using the provided or an equivalent database are sought in (but not limited to) the areas:
* Participation in any of the sub-challenges
* Speaker adaptation for emotion recognition
* Noise/coding/transmission robust emotion recognition
* Effects of prototyping on performance
* Confidences in emotion recognition
* Contextual knowledge exploitation
The results of the Challenge will be presented at a Special Session of Interspeech 2009 in Brighton, UK.
Prizes will be awarded to the sub-challenge winners and a best paper.
If you are interested and planning to participate in the Emotion Challenge, or if you want to be kept informed about the Challenge, please send the organisers an e-mail to indicate your interest and visit the homepage:
http://emotion-research.net/sigs/speech-sig/emotion-challenge
Back to Top

9-23 . (2009-09-06) Special session at Interspeech 2009: adaptivity in dialogue systems

 
Call for papers (submission deadline Friday 17 April 2009)
 
Special Session: "Machine Learning for Adaptivity in Spoken Dialogue Systems"
at Interspeech 2009, Brighton U.K., http://www.interspeech2009.org/
Session chairs: Oliver Lemon, Edinburgh University,
and Olivier Pietquin, Supélec - IMS Research Group
In the past decade, research in the field of Spoken Dialogue Systems
(SDS) has experienced increasing growth, and new applications include
interactive mobile search, tutoring, and troubleshooting systems
(e.g. fixing a broken internet connection). The design and
optimization of robust SDS for such tasks requires the development of
dialogue strategies which can automatically adapt to different types
of users (novice/expert, youth/senior) and noise conditions
(room/street). New statistical learning techniques are emerging for
training and optimizing adaptive speech recognition, spoken language
understanding, dialogue management, natural language generation, and
speech synthesis in spoken dialogue systems. Among machine learning
techniques for spoken dialogue strategy optimization, reinforcement
learning using Markov Decision Processes (MDPs) and Partially
Observable MDP (POMDPs) has become a particular focus.
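As a concrete, if deliberately tiny, illustration of this family of techniques, the Python sketch below optimises a two-slot form-filling strategy with tabular Q-learning against a crude simulated user; the state and action sets, reward values and recognition-noise level are all invented for this example and do not describe any particular published system.

    import random

    ACTIONS = ["ask_slot0", "ask_slot1", "confirm"]
    NOISE, GAMMA, ALPHA, EPSILON = 0.2, 0.95, 0.1, 0.1

    def step(state, action):
        """Simulated user and reward model (illustrative values only)."""
        slots = list(state)
        if action == "confirm":
            # Large reward for confirming a complete form, penalty otherwise.
            return tuple(slots), (20.0 if all(slots) else -10.0), True
        idx = 0 if action == "ask_slot0" else 1
        if random.random() > NOISE:        # the answer was recognised correctly
            slots[idx] = True
        return tuple(slots), -1.0, False   # small cost for every system turn

    def choose(q, state):
        """Epsilon-greedy action selection over the tabular Q-function."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

    def train(episodes=5000):
        q = {}
        for _ in range(episodes):
            state, done = (False, False), False
            while not done:
                action = choose(q, state)
                nxt, reward, done = step(state, action)
                target = reward if done else reward + GAMMA * max(
                    q.get((nxt, a), 0.0) for a in ACTIONS)
                old = q.get((state, action), 0.0)
                q[(state, action)] = old + ALPHA * (target - old)
                state = nxt
        return q

    q = train()
    for s in [(False, False), (True, False), (False, True), (True, True)]:
        print(s, "->", max(ACTIONS, key=lambda a: q.get((s, a), 0.0)))

Real systems replace this enumerable state with large feature-based or belief-state representations (the POMDP case) and replace the hand-written simulator with user models trained on dialogue corpora, but the underlying optimisation loop is the same.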
We therefore solicit papers on new research in the areas of:
- Adaptive dialogue strategies and adaptive multimodal interfaces
- User simulation techniques for adaptive strategy learning and testing
- Rapid adaptation methods
- Reinforcement Learning of dialogue strategies
- Partially Observable MDPs in dialogue strategy optimization
- Statistical spoken language understanding in dialogue systems
- Machine learning and context-sensitive speech recognition
- Learning for adaptive Natural Language Generation in dialogue
- Corpora and annotation for machine learning approaches to SDS
- Machine learning for adaptive multimodal interaction
- Evaluation of adaptivity in statistical approaches to SDS and user
simulation.
Important Dates--
Full paper submission deadline: Friday 17 April 2009
Notification of paper acceptance: Wednesday 17 June 2009
Conference dates: 6-10 September 2009
Back to Top

9-24 . (2009-09-07) CfP Information Retrieval and Information Extraction for Less Resourced Languages

CALL FOR PAPERS
Information Retrieval and Information Extraction for Less Resourced Languages (IE-IR-LRL)
SEPLN 2009 pre-conference workshop
University of the Basque Country
Donostia-San Sebastián. Monday 7th September 2009
Organised by the SALTMIL Special Interest Group of ISCA
SALTMIL: http://ixa2.si.ehu.es/saltmil/
SEPLN 2009: http://ixa2.si.ehu.es/sepln2009
Call For Papers: http://ixa2.si.ehu.es/saltmil/en/activities/lrec2008/sepln-2009-workshop-cfp.html
Paper submission: http://sepln.org/myreview-saltmil2009
Deadline for submission: 8 June 2009
Papers are invited for the above half-day workshop, in the format outlined below. Most submitted papers will be presented in poster form, though some authors may be invited to present in lecture format.
CONTEXT AND FOCUS
The phenomenal growth of the Internet has led to a situation where, by some estimates, more than one billion words of text are currently available. This is far more text than any given person can possibly process. Hence there is a need for automatic tools to access and process this mass of textual information. Emerging techniques of this kind include Information Retrieval (IR), Information Extraction (IE), and Question Answering (QA).
However, there is a growing concern among researchers about the situation of languages other than English. Although not all Internet text is in English, it is clear that non-English languages do not have the same degree of representation on the Internet. Simply counting the number of articles in Wikipedia, English is the only language with more than 20 percent of the available articles. There then follows a group of 17 languages with between one and ten percent of the articles. The remaining 245 languages each have less than one percent of the articles. Even these low-profile languages are relatively privileged, as the total number of languages in the world is estimated to be 6800.
Clearly there is a danger that the gap between high-profile and low-profile languages on the Internet will continue to increase, unless tools are developed for the low-profile languages to access textual information. Hence there is a pressing need to develop basic language technology software for less-resourced languages as well. In particular, the priority is to adapt the scope of recently-developed IE, IR and QA systems so that they can be used also for these languages. In doing so, several questions will naturally arise, such as:
* What problems emerge when faced with languages having different linguistic features from the major languages?
* Which techniques should be promoted in order to get the maximum yield from sparse training data?
* What standards will enable researchers to share tools and techniques across several different languages?
* Which tools are easily re-useable across several unrelated languages?
It is hoped that presentations will focus on real-world examples, rather than purely theoretical discussions of the questions. Researchers are encouraged to share examples of best practice -- and also examples where tools have not worked as well as expected. Also of interest will be
cases where the particular features of a less-resourced language raise a challenge to currently accepted linguistic models that were based on features of major languages.
TOPICS
Given the context of IR, IE and QA, topics for discussion may include, but are not limited to:
* Information retrieval;
* Text and web mining;
* Information extraction;
* Text summarization;
* Term recognition;
* Text categorization and clustering;
* Question answering;
* Re-use of existing IR, IE and QA data;
* Interoperability between tools and data.
* General speech and language resources for minority languages, with particular emphasis on resources for IR,IE and QA.
IMPORTANT DATES
* 8 June 2009: Deadline for submission
* 1 July 2009: Notification
* 15 July 2009: Final version
* 7 September 2009: Workshop
ORGANISERS
* Kepa Sarasola, University of the Basque Country
* Mikel Forcada, Universitat d'Alacant, Spain
* Iñaki Alegria. University of the Basque Country
* Xabier Arregi, University of the Basque Country
* Arantza Casillas. University of the Basque Country
* Briony Williams, Language Technologies Unit, Bangor University, Wales, UK
PROGRAMME COMMITTEE
* Iñaki Alegria. University of the Basque Country.
* Atelach Alemu Argaw: Stockholm University, Sweden
* Xabier Arregi, University of the Basque Country.
* Jordi Atserias, Barcelona Media (yahoo! research Barcelona)
* Shannon Bischoff, Universidad de Puerto Rico, Puerto Rico
* Arantza Casillas. University of the Basque Country.
* Mikel Forcada: Universitat d'Alacant, Spain
* Xavier Gomez Guinovart. University of Vigo.
* Lori Levin, Carnegie-Mellon University, USA
* Climent Nadeu, Universitat Politècnica de Catalunya
* Jon Patrick, University of Sydney, Australia
* Juan Antonio Pérez-Ortiz, Universitat d'Alacant, Spain
* Bojan Petek, University of Ljubljana, Slovenia
* Kepa Sarasola, University of the Basque Country
* Oliver Streiter, National University of Kaohsiung, Taiwan
* Vasudeva Varma, IIIT, Hyderabad, India
* Briony Williams: Bangor University, Wales, UK
SUBMISSION INFORMATION
We expect short papers of max 3500 words (about 4-6 pages) describing research addressing one of the above topics, to be submitted as PDF documents by uploading to the following URL:
http://sepln.org/myreview-saltmil2009
The final papers should not have more than 6 pages, adhering to the stylesheet that will be adopted for the SEPLN Proceedings (to be announced later on the Conference web site).
--
Mikel L. Forcada <mlf@dlsi.ua.es>
http://www.dlsi.ua.es/~mlf
Back to Top

9-25 . (2009-09-09) CfP IDP 09 Discourse-Prosody Interface

IDP 09: CALL FOR PAPERS

 

Discourse – Prosody Interface

 

Paris, September 9-10-11, 2009

 

The third round of the “Discourse – Prosody Interface” Conference will be hosted by the Laboratoire de Linguistique Formelle (UMR 7110 / LLF), the Equipe CLILLAC-ARP (EA 3967) and the Linguistic Department (UFRL) of the University of Paris-Diderot (Paris 7), on September 9-10-11, 2009 in Paris. The first round was organized by the Laboratoire Parole et Langage (UMR 6057 /LPL) in September 2005, in Aix-en-Provence. The second took place in Geneva in September 2007 and was organized by the Department of Linguistics at the University of Geneva, in collaboration with the École de Langue et Civilisation Françaises at the University of Geneva, and the VALIBEL research centre at the Catholic University of Louvain.

The third round will be held at the Paris Center of the University of Chicago, 6, rue Thomas Mann, in the XIIIth arrondissement, near the Bibliothèque François Mitterrand (BNF).

 

The Conference is addressed to researchers in prosody, phonology, phonetics, pragmatics, discourse analysis and also psycholinguistics, who are particularly interested in the relations between prosody and discourse. The participants may develop their research programmes within different theoretical paradigms (formal approaches to phonology and semantics/pragmatics, conversation analysis, descriptive linguistics, etc.). For this third edition, special attention will be given to research work that proposes a formal analysis of the Discourse-Prosody interface.

 

So as to favour convergence among contributions, the IDP09 conference will focus on:

* Prosody, its parts and discourse :

- How to analyze the interaction between the different prosodic subsystems (accentuation,

intonation, rhythm; register changes or voice quality)?

- How to model the contribution of each subsystem to the global interpretation of discourse?

- How to describe and analyze prosodic facts, and at which level (phonetic vs. phonological) ?

* Prosodic units & discourse units

- What are the relevant units for discourse or conversation analysis? What are their prosodic

properties?

- How is the embedding of utterances in discourse marked syntactically or prosodically?

What are the consequences for the modelling of syntax & prosody?

* Prosody and context(s)

- What is the contribution of the context in the analysis of prosody in discourse?

- How can the relations between prosody and context(s) be modelled?

* Acquisition of the relations between prosody & discourse in L1 and L2

- How are the relations between prosody & discourse acquired in L1, in L2?

- Which methodological tools could best describe and transcribe these processes?

 

 

Guest speakers:

* Diane Blakemore (School of Languages, University of Salford, United Kingdom)

* Piet Mertens (Department of Linguistics, K.U Leuven, Belgium)

* Hubert Truckenbrodt (ZAS, Zentrum für Allgemeine Sprachwissenschaft, Berlin,

Germany)

 

The conference will be held in English or French. Studies may concern any language.

 

 

Submissions should be made by uploading an anonymous two-page abstract (plus an extra page for references and figures), in A4 format with Times 12 font, written in either English or French, as a PDF file at the following address: http://www.easychair.org/conferences/?conf=idp09

 

Authors' names and affiliations should be given as requested, but not in the PDF file.

 

If you have any questions concerning the submission procedure or encounter any problem,

please send an email to the following address: idp09@linguist.jussieu.fr

 

Authors may submit as many proposals as they wish.

 

The proposals will be evaluated anonymously by the scientific committee.

 

Schedule

Submission deadline: April 26th, 2009

Notification of acceptance: June 8th, 2009

Conference (IDP 09): September 9th-11th, 2009.

 

Further information is available on the conference website: http://idp09.linguist.univ-paris-diderot.fr

 

Back to Top

9-26 . (2009-09-11) SIGDIAL 2009 CONFERENCE

 SIGDIAL 2009 CONFERENCE
     10th Annual Meeting of the Special Interest Group
     on Discourse and Dialogue

     Queen Mary University of London, UK September 11-12, 2009
     (right after Interspeech 2009)

     Submission Deadline: April 24, 2009


     PRELIMINARY CALL FOR PAPERS

The SIGDIAL venue provides a regular forum for the presentation of
cutting edge research in discourse and dialogue to both academic and
industry researchers. Due to the success of the nine previous SIGDIAL
workshops, SIGDIAL is now a conference. The conference is sponsored by
the SIGDIAL organization, which serves as the Special Interest Group in
discourse and dialogue for both ACL and ISCA. SIGDIAL 2009 will be
co-located with Interspeech 2009 as a satellite event.

In addition to presentations and system demonstrations, the program
includes an invited talk by Professor Janet Bavelas of the University of
Victoria, entitled "What's unique about dialogue?".


TOPICS OF INTEREST

We welcome formal, corpus-based, implementation, experimental, or
analytical work on discourse and dialogue including, but not restricted
to, the following themes:

1. Discourse Processing and Dialogue Systems

Discourse semantic and pragmatic issues in NLP applications such as text
summarization, question answering, and information retrieval, including
topics like:

- Discourse structure, temporal structure, information structure ;
- Discourse markers, cues and particles and their use;
- (Co-)Reference and anaphora resolution, metonymy and bridging resolution;
- Subjectivity, opinions and semantic orientation;

Spoken, multi-modal, and text/web based dialogue systems including
topics such as:

- Dialogue management models;
- Speech and gesture, text and graphics integration;
- Strategies for preventing, detecting or handling miscommunication
(repair and correction types, clarification and under-specificity,
grounding and feedback strategies);
- Utilizing prosodic information for understanding and for disambiguation;

2. Corpora, Tools and Methodology

Corpus-based and experimental work on discourse and spoken, text-based
and multi-modal dialogue including its support, in particular:

- Annotation tools and coding schemes;
- Data resources for discourse and dialogue studies;
- Corpus-based techniques and analysis (including machine learning);
- Evaluation of systems and components, including methodology, metrics
and case studies;

3. Pragmatic and/or Semantic Modeling

The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a
single sentence) including the following issues:

- The semantics/pragmatics of dialogue acts (including those which are
less studied in the semantics/pragmatics framework);
- Models of discourse/dialogue structure and their relation to
referential and relational structure;
- Prosody in discourse and dialogue;
- Models of presupposition and accommodation; operational models of
  conversational implicature.


SUBMISSIONS

The program committee welcomes the submission of long papers for full
plenary presentation as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations, followed by posters and demonstrations.

- Long papers must be no longer than 8 pages, including title, examples,
references, etc. In addition to this, two additional pages are allowed
as an appendix which may include extended example discourses or
dialogues, algorithms, graphical representations, etc.
- Short papers and demo descriptions should be 4 pages or less
(including title, examples, references, etc.).

Please use the official ACL style files:
http://ufal.mff.cuni.cz/acl2007/styles/

Papers that have been or will be submitted to other meetings or
publications must provide this information (see submission format).
SIGDIAL 2009 cannot accept for publication or presentation work that
will be (or has been) published elsewhere. Any questions regarding
submissions can be sent to the General Co-Chairs.

Authors are encouraged to make illustrative materials available, on the
web or otherwise. Examples might include excerpts of recorded
conversations, recordings of human-computer dialogues, interfaces to
working systems, and so on.


BEST PAPER AWARDS

In order to recognize significant advancements in dialog and discourse
science and technology, SIGDIAL will (for the first time) recognize a
BEST PAPER AWARD and a BEST STUDENT PAPER AWARD. A selection committee
consisting of prominent researchers in the fields of interest will
select the recipients of the awards.


IMPORTANT DATES (SUBJECT TO CHANGE)

Submission: April 24, 2009
Workshop: September 11-12, 2009


WEBSITES

SIGDIAL 2009 conference website:
http://www.sigdial.org/workshops/workshop10/
SIGDIAL organization website: http://www.sigdial.org/
Interspeech 2009 website: http://www.interspeech2009.org/


ORGANIZING COMMITTEE

For any questions, please contact the appropriate members of the
organizing committee:

GENERAL CO-CHAIRS
Pat Healey (Queen Mary University of London): ph@dcs.qmul.ac.uk
Roberto Pieraccini (SpeechCycle): roberto@speechcycle.com

TECHNICAL PROGRAM CO-CHAIRS
Donna Byron (Northeastern University): dbyron@ccs.neu.edu
Steve Young (University of Cambridge): sjy@eng.cam.ac.uk

LOCAL CHAIR
Matt Purver (Queen Mary University of London): mpurver@dcs.qmul.ac.uk

SIGDIAL PRESIDENT
Tim Paek (Microsoft Research): timpaek@microsoft.com

SIGDIAL VICE PRESIDENT
Amanda Stent (AT&T Labs - Research): amanda.stent@gmail.com


-- 
Matthew Purver - http://www.dcs.qmul.ac.uk/~mpurver/

Senior Research Fellow
Interaction, Media and Communication
Department of Computer Science
Queen Mary University of London, London E1 4NS, UK 
 
Back to Top

9-27 . (2009-09-11) Int. Workshop on spoken language technology for development: from promise to practice.

International Workshop on Spoken Language Technology for Development
- from promise to practice
 
Venue - The Abbey Hotel, Tintern, UK
Dates - 11-12 September 2009
  
Following on from a successful special session at SLT 2008 in Goa, this workshop invites participants with an interest in SLT4D and who have expertise and experience in any of the following areas:
- Development of speech technology for resource-scarce languages
- SLT deployments in the developing world
- HCI in a developing world context
- Successful ICT4D interventions
  
The aim of the workshop is to develop a "Best practice in developing and deploying speech systems for developmental applications". It is also hoped that the participants will form the core of an open community which shares tools, insights and methodologies for future SLT4D projects. 
  
If you are interested in participating in the workshop, please submit a 2-4 page position paper explaining how your expertise and experience might be applied to SLT4D, formatted according to the Interspeech 2009 guidelines, to Roger Tucker at roger@outsideecho.com by 30th April 2009. 
  
Important Dates:
Papers due: 30th April 2009
Acceptance Notification: 10th June 2009
Early Registration deadline: 3rd July 2009
Workshop: 11-12 September 2009
  
Further details can be found on the workshop website at www.llsti.org/SLT4D-09

Back to Top

9-28 . (2009-09-14) 7th International Conference on Recent Advances in Natural Language Processing

RANLP-09 Second Call for Papers and Submission Information

"RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING"
International Conference RANLP-2009
September 14-16, 2009
Borovets, Bulgaria
http://www.lml.bas.bg/ranlp2009

Further to the successful and highly competitive 1st, 2nd, 3rd, 4th, 5th and 6th conferences on 'Recent Advances in Natural Language Processing' (RANLP), we are pleased to announce the 7th RANLP conference, to be held in September 2009.

The conference will take the form of addresses from invited keynote speakers plus peer-reviewed individual papers. There will also be an exhibition area for poster and demo sessions.

We invite papers reporting on recent advances in all aspects of Natural Language Processing (NLP). The conference topics are announced on the RANLP-09 website. All accepted papers will be published in the full conference proceedings and included in the ACL Anthology. In addition, volumes of selected RANLP papers are traditionally published by John Benjamins; the volume of selected RANLP-07 papers is currently in press.

KEYNOTE SPEAKERS:
Kevin Bretonnel Cohen (University of Colorado School of Medicine)
Mirella Lapata (University of Edinburgh)
Shalom Lappin (King's College, London)
Massimo Poesio (University of Trento and University of Essex)

CHAIR OF THE PROGRAMME COMMITTEE:
Ruslan Mitkov (University of Wolverhampton)

CHAIR OF THE ORGANISING COMMITTEE:
Galia Angelova (Bulgarian Academy of Sciences)

The PROGRAMME COMMITTEE members are distinguished experts from all over the world. The list of PC members will be announced on the conference website. After the review process, the full list of reviewers will also be published there.

SUBMISSION
People interested in participating should submit a paper, poster or demo following the instructions provided on the conference website. Reviewing will be blind, so the article text should not reveal the authors' names; author identification should be provided on an additional page in the conference management system.

TUTORIALS, 12-13 September 2009:
Four half-day tutorials will be held on 12-13 September 2009. The list of tutorial lecturers includes:
Kevin Bretonnel Cohen (University of Colorado School of Medicine)
Constantin Orasan (University of Wolverhampton)

WORKSHOPS, 17-18 September 2009:
Post-conference workshops will be held on 17-18 September 2009. All workshops will publish hard-copy proceedings, which will be distributed at the event. Workshop papers may also be listed in the ACL Anthology (depending on the workshop organisers). The list of RANLP-09 workshops includes:
- Semantic Roles in Human Language Technology Applications, organised by Paloma Moreda, Rafael Muñoz and Manuel Palomar
- Partial Parsing 2: Between Chunking and Deep Parsing, organised by Adam Przepiorkowski, Jakub Piskorski and Sandra Kuebler
- 1st Workshop on Definition Extraction, organised by Gerardo Eugenio Sierra Martínez and Caroline Barriere
- Evaluation of Resources and Tools for Central and Eastern European Languages, organised by Cristina Vertan, Stelios Piperidis and Elena Paskaleva
- Adaptation of Language Resources and Technology to New Domains, organised by Nuria Bel, Erhard Hinrichs, Kiril Simov and Petya Osenova
- Natural Language Processing Methods and Corpora in Translation, Lexicography, and Language Learning, organised by Viktor Pekar, Iustina Narcisa Ilisei, and Silvia Bernardini
- Events in Emerging Text Types (eETTs), organised by Constantin Orasan, Laura Hasler, and Corina Forascu
- Biomedical Information Extraction, organised by Guergana Savova, Vangelis Karkaletsis, and Galia Angelova

IMPORTANT DATES:
Conference paper submission notification: 6 April 2009
Conference paper submission deadline: 13 April 2009
Conference paper acceptance notification: 1 June 2009
Final versions of conference papers: 13 July 2009
Workshop paper submission deadline (suggested): 5 June 2009
Workshop paper acceptance notification (suggested): 20 July 2009
Final versions of workshop papers (suggested): 24 August 2009
RANLP-09 tutorials: 12-13 September 2009 (Saturday-Sunday)
RANLP-09 conference: 14-16 September 2009 (Monday-Wednesday)
RANLP-09 workshops: 17-18 September 2009 (Thursday-Friday)

For further information about the conference, please visit the conference site http://www.lml.bas.bg/ranlp2009.

THE TEAM BEHIND RANLP-09
Galia Angelova, Bulgarian Academy of Sciences, Bulgaria (Chair of the Organising Committee)
Kalina Bontcheva, University of Sheffield, UK
Ruslan Mitkov, University of Wolverhampton, UK (Chair of the Programme Committee)
Nicolas Nicolov, Umbria Inc., USA (Editor of the volume of selected papers)
Nikolai Nikolov, INCOMA Ltd., Shoumen, Bulgaria
Kiril Simov, Bulgarian Academy of Sciences, Bulgaria (Workshop Coordinator)

E-mail: ranlp09 [AT] lml (dot) bas (dot) bg

Back to Top

9-29 . (2009-09-28) ELMAR 2009

51st International Symposium ELMAR-2009

28-30 September 2009 Zadar, CROATIA
Paper submission deadline: March 16, 2009
http://www.elmar-zadar.org/
CALL FOR PAPERS
TECHNICAL CO-SPONSORS:
- IEEE Region 8
- EURASIP - European Assoc. Signal, Speech and Image Processing
- IEEE Croatia Section
- IEEE Croatia Section Chapter of the Signal Processing Society
- IEEE Croatia Section Joint Chapter of the AP/MTT Societies
CONFERENCE PROCEEDINGS INDEXED BY: IEEE Xplore and INSPEC
TOPICS:
- Image and Video Processing
- Multimedia Communications
- Speech and Audio Processing
- Wireless Communications
- Telecommunications
- Antennas and Propagation
- e-Learning and m-Learning
- Navigation Systems
- Ship Electronic Systems
- Power Electronics and Automation
- Naval Architecture
- Sea Ecology
- Special Session Proposals (a special session consists of 5-6 papers presenting a unifying theme from a diversity of viewpoints)
KEYNOTE TALKS
* Prof. Gregor Rozinaj, Slovak University of Technology, Bratislava, SLOVAKIA: title to be announced soon.
* Mr. David Wood, European Broadcasting Union, Geneva, SWITZERLAND: What strategy and research agenda for Europe in 'new media'?
SUBMISSION
Accepted papers (each reviewed by two reviewers) will be published in the conference proceedings, available at the conference and abstracted/indexed in the IEEE Xplore and INSPEC databases. More information is available at: http://www.elmar-zadar.org/ IMPORTANT: Web-based (online) submission of papers in PDF format is required for all authors. No e-mail, fax, or postal submissions will be accepted. Authors should prepare their papers according to the ELMAR-2009 paper sample, convert them to PDF according to IEEE requirements, and submit them using the web-based submission system by March 16, 2009.
SCHEDULE OF IMPORTANT DATES
Deadline for submission of full papers: March 16, 2009
Notification of acceptance mailed out by: May 11, 2009
Submission of (final) camera-ready papers: May 21, 2009
Preliminary program available online by: June 11, 2009
Registration forms and payment deadline: June 18, 2009
Accommodation deadline: September 10, 2009
GENERAL CO-CHAIRS
Ive Mustac, Tankerska plovidba, Zadar, Croatia
Branka Zovko-Cihlar, University of Zagreb, Croatia
PROGRAM CHAIR
Mislav Grgic, University of Zagreb, Croatia
INTERNATIONAL PROGRAM COMMITTEE
Juraj Bartolic, Croatia; David Broughton, United Kingdom; Paul Dan Cristea, Romania; Kresimir Delac, Croatia; Zarko Cucej, Slovenia; Marek Domanski, Poland; Kalman Fazekas, Hungary; Janusz Filipiak, Poland; Renato Filjar, Croatia; Borko Furht, USA; Mohammed Ghanbari, United Kingdom; Mislav Grgic, Croatia; Sonja Grgic, Croatia; Yo-Sung Ho, Korea; Bernhard Hofmann-Wellenhof, Austria; Ismail Khalil Ibrahim, Austria; Bojan Ivancevic, Croatia; Ebroul Izquierdo, United Kingdom; Kristian Jambrosic, Croatia; Aggelos K. Katsaggelos, USA; Tomislav Kos, Croatia; Murat Kunt, Switzerland; Panos Liatsis, United Kingdom; Rastislav Lukac, Canada; Lidija Mandic, Croatia; Gabor Matay, Hungary; Branka Medved Rogina, Croatia; Borivoj Modlic, Croatia; Marta Mrak, United Kingdom; Fernando Pereira, Portugal; Pavol Podhradsky, Slovak Republic; Ramjee Prasad, Denmark; Kamisetty R. Rao, USA; Gregor Rozinaj, Slovak Republic; Gerald Schaefer, United Kingdom; Mubarak Shah, USA; Shiguang Shan, China; Thomas Sikora, Germany; Karolj Skala, Croatia; Marian S. Stachowicz, USA; Ryszard Stasinski, Poland; Luis Torres, Spain; Frantisek Vejrazka, Czech Republic; Stamatis Voliotis, Greece; Nick Ward, United Kingdom; Krzysztof Wajda, Poland; Branka Zovko-Cihlar, Croatia
CONTACT INFORMATION
Assoc. Prof. Mislav Grgic, Ph.D., FER, Unska 3/XII, HR-10000 Zagreb, CROATIA
Telephone: +385 1 6129 851; Fax: +385 1 6129 717
E-mail: elmar2009 (at) fer.hr
For further information please visit: http://www.elmar-zadar.org/
Back to Top

9-30 . (2009-10-05) 2009 APSIPA ASC

            APSIPA Annual Summit and Conference October 5 - 7, 2009

                       Sapporo Convention Center, Sapporo, Japan
The 2009 APSIPA Annual Summit and Conference is the inaugural event supported by the Asia-Pacific Signal and Information Processing Association (APSIPA). APSIPA is a new association that promotes all aspects of research and education in signal processing, information technology, and communications. Its field of interest covers all aspects of signals and information, including processing, recognition, classification, communications, networking, computing, system design, security, implementation, and technology, with applications to scientific, engineering, and social areas. Topics for regular sessions include, but are not limited to:
Signal Processing Track
1.1 Audio, speech, and language processing
1.2 Image, video, and multimedia signal processing
1.3 Information forensics and security
1.4 Signal processing for communications
1.5 Signal processing theory and methods
Sapporo and Conference Venue: Sapporo is widely regarded as one of Japan's most beautiful and well-organized cities. With a population of 1,800,000, Hokkaido's capital and largest city is fully served by a network of subway, streetcar, and bus lines connecting to its full complement of hotel accommodation. Sapporo has already hosted international meetings, sports events, and academic conferences, and there are many flights to and from Tokyo, Nagoya, Osaka and other domestic and overseas cities. With all the amenities of a major city in balance with its natural surroundings, this beautiful northern capital is well equipped to host a new generation of conventions.
Important Due Dates and Author's Schedule:
Proposals for Special Session: March 1, 2009
Proposals for Forum, Panel and Tutorial Sessions: March 20, 2009
Deadline for Submission of Full-Papers: March 31, 2009
Notification of Acceptance: July 1, 2009
Deadline for Submission of Camera Ready Papers: August 1, 2009
Conference dates: October 5 - 7, 2009
Submission of Papers: Prospective authors are invited to submit either long papers, up to 10 pages in length, or short papers, up to four pages in length; long papers are intended for single-track oral presentation, while short papers will mostly be presented as posters. The conference proceedings will be published, made available, and maintained on the APSIPA website.
Detailed Information: Web site: http://www.gcoe.ist.hokudai.ac.jp/apsipa2009/
Organizing Committee:
Honorary Chair: Sadaoki Furui, Tokyo Institute of Technology, Japan
General co-Chairs: Yoshikazu Miyanaga, Hokkaido University, Japan; K. J. Ray Liu, University of Maryland, USA
Technical Program co-Chairs: Hitoshi Kiya, Tokyo Metropolitan Univ., Japan; Tomoaki Ohtsuki, Keio University, Japan; Mark Liao, Academia Sinica, Taiwan; Takao Onoye, Osaka University, Japan

Back to Top

9-31 . (2009-10-18) 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics

Call for Papers

2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA'09)

Mohonk Mountain House
New Paltz, New York
October 18-21, 2009
http://www.waspaa2009.com

The 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA'09) will be held at the Mohonk Mountain House in New Paltz, New York, and is sponsored by the Audio & Electroacoustics committee of the IEEE Signal Processing Society. The objective of this workshop is to provide an informal environment for the discussion of problems in audio and acoustics and the signal processing techniques leading to novel solutions. Technical sessions will be scheduled throughout the day. Afternoons will be left free for informal meetings among workshop participants.

Papers describing original research and new concepts are solicited for technical sessions on, but not limited to, the following topics:

* Acoustic Scenes
- Scene Analysis: Source Localization, Source Separation, Room Acoustics
- Signal Enhancement: Echo Cancellation, Dereverberation, Noise Reduction, Restoration
- Multichannel Signal Processing for Audio Acquisition and Reproduction
- Microphone Arrays
- Eigenbeamforming
- Virtual Acoustics via Loudspeakers

* Hearing and Perception
- Auditory Perception, Spatial Hearing, Quality Assessment
- Hearing Aids

* Audio Coding
- Waveform Coding and Parameter Coding
- Spatial Audio Coding
- Internet Audio
- Musical Signal Analysis: Segmentation, Classification, Transcription
- Digital Rights
- Mobile Devices

* Music
- Signal Analysis and Synthesis Tools
- Creation of Musical Sounds: Waveforms, Instrument Models, Singing
- MEMS Technologies for Signal Pick-up

Important dates:
Submission of four-page paper: April 15, 2009
Notification of acceptance: June 26, 2009
Early registration until: September 1, 2009

Workshop Committee
General Co-Chair: Jacob Benesty, Université du Québec, INRS-EMT, Montréal, Québec, Canada (benesty@emt.inrs.ca)
General Co-Chair: Tomas Gaensler, mh acoustics, Summit, NJ, USA (tfg@mhacoustics.com)
Technical Program Chair: Yiteng (Arden) Huang, WeVoice Inc., Bridgewater, NJ, USA (arden_huang@ieee.org)
Technical Program Chair: Jingdong Chen, Bell Labs, Alcatel-Lucent, Murray Hill, NJ, USA (jingdong@research.bell-labs.com)
Finance Chair: Michael Brandstein, Information Systems Technology Group, MIT Lincoln Lab, Lexington, MA, USA (msb@ll.mit.edu)
Publications Chair: Eric J. Diethorn, Multimedia Technologies, Avaya Labs Research, Basking Ridge, NJ, USA (ejd@avaya.com)
Publicity Chair: Sofiène Affes, Université du Québec, INRS-EMT, Montréal, Québec, Canada (affes@emt.inrs.ca)
Local Arrangements Chair: Heinz Teutsch, Multimedia Technologies, Avaya Labs Research, Basking Ridge, NJ, USA (teutsch@avaya.com)
Far East Liaison: Shoji Makino, NTT Communication Science Laboratories, Japan (maki@cslab.kecl.ntt.co.jp)

Back to Top

9-32 . (2009-11-02) CALL FOR ICMI-MLMI 2009 WORKSHOPS

CALL FOR ICMI-MLMI 2009 WORKSHOPS

http://icmi2009.acm.org
Boston MA, USA

Main conference: 2-4 November 2009
Workshops: 5-6 November 2009
Proposal Deadline: 1 March 2009
Acceptance Notification: 22 March 2009

The ICMI and MLMI conferences will jointly take place in the Boston
area during November 2-6, 2009. The main aim of ICMI-MLMI 2009 is to
further scientific research within the broad field of multimodal
interaction, methods and systems. The joint conference will focus on
major trends and challenges in this area, and work to identify a
roadmap for future research and commercial success.  The main
conference will be followed by a number of workshops, for which we
invite proposals.

The format, style, and content of accepted workshops are under the control of the workshop organizers. Workshops will take place on 5-6 November 2009, and may be one or two days in duration.
Workshop organizers will be expected to manage the workshop content,
specify the workshop format, be present to moderate the discussion and
panels, invite experts in the domain, and maintain a website for the
workshop.

Proposals should clearly specify the workshop's title, motivation, impact, expected outcomes, potential invited speakers and the workshop URL. The proposal should also name the main workshop organizer and co-organizers, and should provide brief bios of the organizers.

Submit workshop proposals, as pdf, by email to
  workshops-icmi2009@acm.org

Back to Top

9-33 . (2009-12-04) CfP JPC3 Journées de phonétique clinique (in French)

JPC3
THIRD CLINICAL PHONETICS WORKSHOP (TROISIÈMES JOURNÉES DE PHONÉTIQUE CLINIQUE)
CALL FOR PAPERS
4-5 DECEMBER 2009, AIX-EN-PROVENCE, FRANCE
http://www.lpl-aix.fr/~jpc3/
This meeting follows on from the first and second clinical phonetics workshops, held in Paris in 2005 and in Grenoble in 2007. Clinical phonetics brings together researchers, lecturers, engineers, physicians and speech-language therapists: complementary professions pursuing the same objective, namely a better understanding of the processes of acquisition and dysfunction of speech and voice. This interdisciplinary approach aims to strengthen fundamental knowledge of spoken communication in healthy speakers and to better understand, assess, diagnose and treat speech and voice disorders in clinical populations.
Papers will address phonetic studies of pathological speech and voice in adults and children. Conference topics include, but are not limited to:
Disorders of the oro-pharyngo-laryngeal system
Disorders of the perceptual system
Cognitive and motor disorders
Instrumentation and resources for clinical phonetics
Modelling of pathological speech and voice
Assessment and treatment of speech and voice pathologies
Selected contributions will be presented in one of two formats:
Long talk: 20 minutes, for presenting completed work
Short talk: 8 minutes, for presenting clinical observations, preliminary work or emerging research questions, in order to encourage interdisciplinary exchange between phoneticians and clinicians.
Submission format:
Submissions to JPC take the form of abstracts written in French, no longer than one A4 page, in Times New Roman 12 pt, single-spaced. Abstracts should be submitted in PDF format to: soumission.jpc3@lpl-aix.fr
Submission deadline: 15 May 2009
Notification to authors: 1 July 2009
For any further information, contact the organisers: org.jpc3@lpl-aix.fr
Registration for JPC3 (opening 1 July 2009) will be open to everyone, whether presenting or not.
Back to Top

9-34 . (2010-05-17) 7th Language Resources and Evaluation Conference

The 7th edition of the Language Resources and Evaluation Conference will take place in Valletta (Malta) on May 17-23, 2010.
More information will be available soon on: http://www.lrec-conf.org/lrec2010/

Back to Top