Contents

1 . Editorial

 Dear members,

 Here is the summer version of ISCApad, with a lot of new information: conferences, job offers, and announcements of journal special issues.

I particularly draw the attention of students in speech processing to the possibility ISCA offers to post their résumés in our database: see below the invitation from Professor Helen Meng.

Interspeech 2009 is coming soon. Don't forget to register in time (early registration deadline is July 15th). The ISCA board and I will be happy to meet you there.

Prof. em. Chris Wellekens 

Institut Eurecom

Sophia Antipolis
France 

public@isca-speech.org


Back to Top

2 . ISCA News

 

Back to Top

2-1 . The Christian Benoit Award

The 5th Christian Benoit Award (7500 euros) has been granted by the Christian Benoit Association to Sascha Fagel from the Department of Language and Communication of the Technical University of Berlin.

Sascha Fagel has been working for several years on the development of an audiovisual speech synthesizer named MASSY (Modular Audiovisual Speech SYnthesizer).
He obtained his PhD in 2004. His current research mainly deals with expressive audiovisual synthesis and face-to-face interaction. The multimedia project (THEA, for Talking Heads for Elderly and Alzheimer patients in Ambient Assisted Living) that he will carry out in the framework of the funding from the Christian Benoit Award consists of multimedia software for creating personalized talking faces, aimed at easing the use of portable audio-visual dialogue systems for persons with degraded cognitive skills in a medically monitored environment.


On behalf of the Association Christian Benoît

Pascal Perrier

Back to Top

2-2 . Message to students

Dear students,

The International Speech Communication Association (ISCA) is now opening an online system to build a database of résumés of researchers/students working in the various fields of speech communication.
The goal of  this service is to build a centralized place where many interested employers/corporations can access and search for potential candidates.

Please be advised that the posting service will be updated at 4-month intervals. The next update will be in mid-October 2009.

We encourage all of you to upload an updated version of your résumé to: http://www.isca-speech.org/resumes/ and wish you good luck with a fruitful career.


Professor Helen Meng 

Back to Top

3 . Future ISCA Conferences and Workshops (ITRW)

 

Back to Top

3-1 . (2009-09-06) INTERSPEECH 2009 Brighton UK

Interspeech 2009 - Call for Papers
www.interspeech2009.org
Interspeech is the world's largest and most comprehensive
conference on Speech Science and Speech Technology. Interspeech
2009 will be held in Brighton, UK, 6-10 September 2009, and its
theme is Speech and Intelligence. We invite you to submit
original papers in any related area, including (but not limited
to):
Human Speech Production, Perception And Communication
* Human speech production
* Human speech perception
* Phonology and phonetics
* Discourse and dialogue
* Prosody (production, perception, prosodic structure)
* Emotion and Expression
* Paralinguistic and nonlinguistic cues (e.g. emotion and
expression)
* Physiology and pathology
* Spoken language acquisition, development and learning
Speech And Language Technology
* Automatic speech recognition
* Speech analysis and representation
* Audio segmentation and classification
* Speech enhancement
* Speech coding and transmission
* Speech synthesis and spoken language generation
* Spoken language understanding
* Accent and language identification
* Cross-lingual and multi-lingual processing
* Multimodal/multimedia signal processing
* Speaker characterisation and recognition
Spoken Language Systems And Applications
* Speech dialogue systems
* Systems for information retrieval from spoken documents
* Systems for speech translation
* Applications for aged and handicapped persons
* Applications for learning and education
* Hearing prostheses
* Other applications
Resources, Standardisation And Evaluation
* Spoken language resources and annotation
* Evaluation and standardisation
---------------------------------------------------
Paper Submission
Papers for the Interspeech 2009 proceedings are up to four pages
in length and should conform to the format given in the paper
preparation guidelines and author kits, which are now available
at www.interspeech2009.org
Authors are asked to categorize their submitted papers as being
one of:
N: Completed empirical studies reporting novel research findings
E: Exploratory studies
P: Position papers
Authors will also have to declare that their contribution is
original and not being submitted for publication elsewhere (e.g.
another conference, workshop, or journal).
Papers must be submitted via the on-line paper submission
system. The deadline for submitting a paper is 17th April 2009.
This date will not be extended.
Interspeech 2009 Organising Committee
 
Back to Top

3-2 . (2009-09-06) Satellite workshops Interspeech 2009

Interspeech 2009 satellite workshops
---------------------------------------------------

http://www.interspeech2009.org/conference/workshops.php


ACORNS Workshop on Computational Models of Language Evolution, Acquisition and Processing

The workshop brings together up to 50 scientists to discuss future research in language acquisition, processing and evolution. Deb Roy, Friedemann Pulvermüller, Rochelle Newman and Lou Boves will provide an overview of the state of the art, a number of discussants from different disciplines will widen the perspective, and all participants can contribute to a roadmap.




AVSP 2009 - Audio-Visual Speech Processing

The International Conference on Auditory-Visual Speech Processing (AVSP) attracts an interdisciplinary audience of psychologists, engineers, scientists and linguists, and considers a range of topics related to speech perception, production, recognition and synthesis. Recently the scope of AVSP has broadened to also include discussion of more general issues related to audiovisual communication, for example the interplay between speech and the expression of emotion, and the relationship between speech and manual gestures.




Blizzard Challenge Workshop

In order to better understand and compare research techniques in building corpus-based speech synthesizers on the same data, the Blizzard Challenge was devised. The basic challenge is to take the released speech database, build a synthetic voice from the data and synthesize a prescribed set of test sentences which are evaluated through listening tests. The results are presented at this workshop. Attendance at the 2009 workshop for the 4th Blizzard Challenge is open to all, not just participants in the challenge. Registration closes on 14th August 2009.




SIGDIAL - Special Interest Group on Dialogue

The SIGDIAL venue provides a regular forum for the presentation of cutting edge research in discourse and dialogue to both academic and industry researchers. The conference is sponsored by the SIGDIAL organization, which serves as the Special Interest Group in discourse and dialogue for both the Association for Computational Linguistics and the International Speech Communication Association.



SLaTE Workshop on Speech and Language Technology in Education

SLaTE 2009 follows SLaTE 2007, held in Farmington, Pennsylvania, USA, and the STiLL meeting organized by KTH in Marholmen, Sweden, in 1998. The workshop will address all topics which concern speech and language technology for education. Papers will discuss theories, applications, evaluation, limitations, persistent difficulties, general research tools and techniques. Papers that critically evaluate approaches or processing strategies will be especially welcome, as will prototype demonstrations of real-world applications.




Young Researchers' Roundtable on Spoken Dialogue Systems

The Young Researchers' Roundtable on Spoken Dialog Systems is an annual workshop designed for students, post docs, and junior researchers working in research related to spoken dialogue systems in both academia and industry. The roundtable provides an open forum where participants can discuss their research interests, current work and future plans. The workshop is meant to provide an interdisciplinary forum for creative thinking about current issues in spoken dialogue systems research, and help create a stronger international network of young researchers working in the field.

Back to Top

3-3 . (2010-09-26) INTERSPEECH 2010 Chiba Japan

Chiba, Japan
Conference Website
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."

Back to Top

3-4 . (2011-08-27) INTERSPEECH 2011 Florence Italy

Interspeech 2011

Palazzo dei Congressi, Florence, Italy, August 27-31, 2011.

Organizing committee

Piero Cosi (General Chair),

Renato De Mori (General Co-Chair),

Claudia Manfredi (Local Chair),

Roberto Pieraccini (Technical Program Chair),

Maurizio Omologo (Tutorials),

Giuseppe Riccardi (Plenary Sessions).

More information www.interspeech2011.org

Back to Top

4 . Workshops and conferences supported (but not organized) by ISCA

 

Back to Top

4-1 . (2009-09-04) 4th Blizzard Challenge Workshop

The 4th Blizzard Challenge Workshop, 4th September 2009, Edinburgh, U.K.

Immediately before Interspeech 2009.

Registration closes on 14th August 2009.

URL: http://www.synsig.org/index.php/Blizzard_Challenge_2009




 

Back to Top

4-2 . (2009-11-05) Workshop on Child, Computer and Interaction

Call for Papers

The Workshop on Child, Computer and Interaction (wocci2009.fbk.eu) will be held in Boston on November 5th, 2009. 
For registration visit
http://icmi2009.acm.org
The Workshop is a satellite event of the Eleventh International Conference on Multimodal Interfaces, held this year jointly with the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI 2009), which will take place at the same venue on November 2-4, 2009.
This Workshop aims to bring together researchers and practitioners from universities and industry working on all aspects of child-machine interaction, including computer, robotics and multi-modal interfaces. Children are special both at the acoustic/linguistic level and at the interaction level. The Workshop provides a unique opportunity to bring together different research communities from cognitive science, robotics, speech processing and linguistics, and application areas such as medicine and education. Various state-of-the-art components can be presented here as key components for next-generation child-centred computer interaction. Technological advances are increasingly necessary in a world where education and health pose growing challenges to the core well-being of our societies. Notable examples are remedial treatments for children with or without disabilities and individualised attention. The Workshop will serve for presenting recent advancements in core technologies as well as experimental systems and prototypes.
Technical Scope
The technical scope of the workshop includes, but is not limited to:
● Speech Interfaces: acoustic and linguistic analysis of children's speech, discourse analysis of spoken language in child-machine interaction, age-dependent characteristics of spoken language, automatic speech recognition for children and spoken dialogue systems
● Multi-modality and Robotics: multi-modal child-machine interaction, multi-modal input and output interfaces, including robotic interfaces, intrusive and non-intrusive devices for environmental data processing, pen or gesture/visual interfaces
● User Modelling: user modelling and adaptation, usability studies accounting for age preferences in child-machine interaction
● Cognitive Models: internal learning models, personality types, user-centred and participatory design
● Application Areas: training systems, educational software, gaming interfaces, medical conditions and diagnostic tools
Paper submission
Authors are invited to submit papers in any technical area relevant to the workshop. The technical committee will select papers for oral/poster presentation. Demonstrations are especially welcome. Instructions for paper submission are available at the Workshop website. An electronic version of the Workshop proceedings will be published by ACM.
 
Chairs
Kay Berkling (Inline Internet Online GmbH, Germany)
Diego Giuliani (FBK, Italy)
Shrikanth Narayanan (Univ. Southern California, USA)

 

Back to Top

4-3 . (2009-12-13) ASRU 2009

IEEE ASRU2009
Automatic Speech Recognition and Understanding Workshop
Merano, Italy December 13-17, 2009
http://www.asru2009.org/

The eleventh biennial IEEE workshop on Automatic Speech Recognition
and Understanding (ASRU) will be held on December 13-17, 2009.
The ASRU workshops have a tradition of bringing together
researchers from academia and industry in an intimate and
collegial setting to discuss problems of common interest in
automatic speech recognition and understanding.

Workshop topics

• automatic speech recognition and understanding
• human speech recognition and understanding
• speech to text systems
• spoken dialog systems
• multilingual language processing
• robustness in ASR
• spoken document retrieval
• speech-to-speech translation
• spontaneous speech processing
• speech summarization
• new applications of ASR.

The workshop program will consist of invited lectures, oral
and poster presentations,  and panel discussions. Prospective
 authors are invited to submit full-length, 4-6 page papers,
including figures and references, to the ASRU 2009 website
http://www.asru2009.org/.
All papers will be handled and reviewed electronically.
The website will provide you with further details. Please note
that the submission dates for papers are strict deadlines.

IMPORTANT DATES

Paper submission deadline         July 15, 2009
Paper notification of acceptance     September 3, 2009
Demo session proposal deadline        September 24, 2009
Early registration deadline        October 7, 2009
Workshop                 December 13-17, 2009


Please note that the number of attendees will be limited and
priority will be given to paper presenters. Registration will
be handled via the ASRU 2009 website,
http://www.asru2009.org/, where more information on the workshop
will be available.

General Chairs
    Giuseppe Riccardi, U. Trento, Italy
    Renato De Mori, U. Avignon, France

Technical Chairs
    Jeff Bilmes, U. Washington, USA
    Pascale Fung, HKUST, Hong Kong China
    Shri Narayanan, USC, USA
    Tanja Schultz, U. Karlsruhe, Germany

Panel Chairs
    Alex Acero, Microsoft, USA
    Mazin Gilbert, AT&T, USA

Demo Chairs
    Alan Black, CMU, USA
    Piero Cosi, CNR, Italy

Publicity Chairs
    Dilek Hakkani-Tür, ICSI, USA
    Isabel Trancoso, INESC -ID/IST, Portugal

Publication Chair
    Giuseppe di Fabbrizio, AT&T, USA

Local Chair
    Maurizio Omologo, FBK-irst, Italy 
 
Back to Top

4-4 . (2009-12-14) 6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications MAVEBA 2009

Università degli Studi di Firenze, Italy
Department of Electronics and Telecommunications
6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications
MAVEBA 2009
December 14-16, 2009
Firenze, Italy
http://maveba.det.unifi.it
Speech is the primary means of communication among humans, and results from a complex interaction between vocal fold vibration at the larynx and voluntary articulator movements (i.e. mouth, tongue, jaw, etc.). However, only recently has research focussed on biomedical applications. Since 1999, the MAVEBA Workshop has been organised every two years, aiming to stimulate contacts between specialists active in clinical, research and industrial developments in the area of voice signal and image analysis for biomedical applications. This sixth Workshop will offer participants an interdisciplinary platform for presenting and discussing new knowledge in the field of models, analysis and classification of voice signals and images, for adult, singing and child voices alike. Modelling of the normal and pathological voice source and analysis of healthy and pathological voices are among the main fields of research. The aim is to extract the main voice characteristics, together with their deviation from “healthy conditions”, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.
SCIENTIFIC PROGRAM
• linear and non-linear models of voice signals
• physical and mechanical models
• aids for disabled
• measurement devices (signal and image)
• prostheses
• robust techniques for voice and glottal analysis in time, frequency, cepstral and wavelet domains
• neural networks, artificial intelligence and other advanced methods for pathology classification
• linguistic and clinical phonetics
• new-born infant cry analysis
• neurological dysfunction
• multiparametric/multimodal analysis
• imaging techniques (laryngography, videokymography, fMRI)
• voice enhancement
• protocols and database design
• industrial applications in the biomedical field
• singing voice
• speech/hearing interactions
DEADLINES
30 May 2009: Submission of extended abstracts (1-2 pages, 1 column) / special session proposals
30 July 2009: Notification of paper acceptance
30 September 2009: Final full paper submission (4 pages, 2 columns, pdf format) and early registration
14-16 December 2009: Conference
SPONSORS
Ente CRF - Ente Cassa di Risparmio di Firenze
IEEE EMBS - IEEE Engineering in Medicine and Biology Society
Elsevier Eds. - Biomedical Signal Processing and Control
ISCA - International Speech Communication Association
A.I.I.M.B. - Associazione Italiana di Ingegneria Medica e Biologica
COST Action 2103 - European Cooperation in Science and Technology Research
FURTHER INFORMATION
Claudia Manfredi – Conference Chair
Department of Electronics and Telecommunications
Via S. Marta 3, 50139 Firenze, Italy
Phone: +39-055-4796410
Fax: +39-055-494569
E-mail: claudia.manfredi@unifi.it

Piero Bruscaglioni
Department of Physics
Polo Scientifico Sesto Fiorentino, 50019 Firenze, Italy
Phone: +39-055-4572038
Fax: +39-055-4572356
E-mail: piero.bruscaglioni@unifi.it
Back to Top

5 . Books, databases and software

 

Back to Top

5-1 . Books

This section shows recent books whose titles have been communicated by the authors or editors.
 
Some advertisements for recent books on speech are also included.
 
Book presentations are written by the authors and not by the newsletter editor or any volunteer reviewer.

Back to Top

5-1-1 . Computeranimierte Sprechbewegungen in realen Anwendungen

Computeranimierte Sprechbewegungen in realen Anwendungen
Authors: Sascha Fagel and Katja Madany
102 pages
Publisher: Berlin Institute of Technology
Year: 2008
Website http://www.ub.tu-berlin.de/index.php?id=1843
To learn more, please visit the corresponding IEEE Xplore site at
http://ieeexplore.ieee.org/xpl/tocresult.jsp?isYear=2008&isnumber=4472076&Submit32=Go+To+Issue
 
Back to Top

5-1-2 . Usability of Speech Dialog Systems Listening to the Target Audience

Usability of Speech Dialog Systems
Listening to the Target Audience
Series: Signals and Communication Technology
 
Hempel, Thomas (Ed.)
 
2008, X, 175 p. 14 illus., Hardcover
 
ISBN: 978-3-540-78342-8
Back to Top

5-1-3 . Speech and Language Processing, 2nd Edition

Speech and Language Processing, 2nd Edition
 
By Daniel Jurafsky, James H. Martin
 
Published May 16, 2008 by Prentice Hall.
Copyright 2009
Dimensions 7" x 9-1/4"
Pages: 1024
Edition: 2nd.
ISBN-10: 0-13-187321-0
ISBN-13: 978-0-13-187321-6
An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology – at all levels and with all modern technologies – this book takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. KEY TOPICS: Builds each chapter around one or more worked examples demonstrating the main idea of the chapter, using the examples to illustrate the relative strengths and weaknesses of various approaches. Adds coverage of statistical sequence labeling, information extraction, question answering and summarization, advanced topics in speech recognition, speech synthesis. Revises coverage of language modeling, formal grammars, statistical parsing, machine translation, and dialog processing. MARKET: A useful reference for professionals in any of the areas of speech and language processing.
  
 
 
Back to Top

5-1-4 . Advances in Digital Speech Transmission

Advances in Digital Speech Transmission
Editors: Rainer Martin, Ulrich Heute and Christiane Antweiler
Publisher: Wiley&Sons
Year: 2008
Back to Top

5-1-5 . Sprachverarbeitung -- Grundlagen und Methoden der Sprachsynthese und Spracherkennung

Title: Sprachverarbeitung -- Grundlagen und Methoden 
       der Sprachsynthese und Spracherkennung 
Authors: Beat Pfister, Tobias Kaufmann 
Publisher: Springer 
Year: 2008 
Website: http://www.springer.com/978-3-540-75909-6 
Back to Top

5-1-6 . Digital Speech Transmission

Digital Speech Transmission
Authors: Peter Vary and Rainer Martin
Publisher: Wiley&Sons
Year: 2006
Back to Top

5-1-7 . Distant Speech Recognition

Distant Speech Recognition, Matthias Wölfel and John McDonough (2009), J. Wiley & Sons.
 
 Website: http://www.distant-speech-recognition.com
 
In the very recent past, automatic speech recognition (ASR) systems have attained acceptable performance when used with speech captured with a head-mounted or close-talking microphone (CTM). The performance of conventional ASR systems, however, degrades dramatically as soon as the microphone is moved away from the mouth of the speaker. This degradation is due to a broad variety of effects that are not found in CTM speech, including background noise, overlapping speech from other speakers, and reverberation. While conventional ASR systems underperform for speech captured with far-field sensors, there are a number of techniques developed in other areas of signal processing that can mitigate the deleterious effects of noise and reverberation, as well as separating speech from overlapping speakers. Distant Speech Recognition presents a contemporary and comprehensive description of both theoretic abstraction and practical issues inherent in the distant ASR problem.
Back to Top

5-1-8 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods

Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
Joseph Keshet and Samy Bengio, Editors
John Wiley & Sons
March, 2009
Website:  Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
 
About the book:
This is the first book dedicated to uniting research related to speech and speaker recognition based on the recent advances in large margin and kernel methods. The first part of the book presents theoretical and practical foundations of large margin and kernel methods, from support vector machines to large margin methods for structured learning. The second part of the book is dedicated to acoustic modeling of continuous speech recognizers, where the grounds for practical large margin sequence learning are set. The third part introduces large margin methods for discriminative language modeling. The last part of the book is dedicated to the application of keyword-spotting, speaker
verification and spectral clustering. 
Contributors: Yasemin Altun, Francis Bach, Samy Bengio, Dan Chazan, Koby Crammer, Mark Gales, Yves Grandvalet, David Grangier, Michael I. Jordan, Joseph Keshet, Johnny Mariéthoz, Lawrence Saul, Brian Roark, Fei Sha, Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. 
 
 
 
Back to Top

5-1-9 . Some aspects of Speech and the Brain.

Some aspects of Speech and the Brain. 
Susanne Fuchs, Hélène Loevenbruck, Daniel Pape, Pascal Perrier
Editions Peter Lang, January 2009
 
What happens in the brain when humans are producing speech or when they are listening to it? This is the main focus of the book, which includes a collection of 13 articles written by researchers at some of the foremost European laboratories in the fields of linguistics, phonetics, psychology, cognitive sciences and neurosciences.
 
Back to Top

5-2 . Database providers

 

Back to Top

5-2-1 . LDC News

 


 

In this newsletter:
 
-  LDC at 2009 ALA Annual Conference  -
 
-  LDC at NAACL 2009  -
 
-  LDC Introduces its Standard Arabic Morphological Tagger  -
 
LDC2009T15
 
LDC2009T14
 
-  LDC Offices to Close for Independence Day Holiday  -

 


 

LDC at 2009 ALA Annual Conference


We are pleased to announce that the Linguistic Data Consortium will be exhibiting at the American Library Association’s (ALA) Annual Conference in Chicago from July 11-14, 2009. In accordance with ALA’s conference policies, LDC’s members, friends and associates are eligible to receive a FREE exhibition pass for the duration of the conference (a $25 savings). ALA’s Annual Conference typically attracts over 20,000 attendees and over 600 exhibitors, including publishing houses, universities, university presses and many related organizations. Please follow this link to take advantage of this offer:

http://registration.experient-inc.com/ShowALA092/defaultexhguest.aspx

You may forward this link to any student, coworker or colleague who you think would be interested in viewing the exhibits at ALA 2009. The main conference lasts from July 9-15 and covers a wide range of topics related to information science, traditional library science, digital cataloging and more! Please follow these links for additional information on the main conference:

ALA Annual Conference main page  |  Current exhibitors’ list  |  Main conference registration page


LDC will be exhibiting at small press table #1143. We hope to see you there!!

LDC at NAACL 2009


The North American Chapter of the ACL (Association for Computational Linguistics), NAACL, met at the University of Colorado at Boulder from May 31 - June 4. LDC is happy to report that we co-sponsored the entertainment at the festive gala dinner on June 2nd. NAACL featured a diverse collection of research papers and you may access the conference program here.

As a reminder, ACL’s annual meeting will be held in Singapore from August 2-7, 2009. Please click here to learn more about this conference and the ACL community.

LDC Introduces its Standard Arabic Morphological Tagger

 

At a recent LDC Institute seminar, Rushin Shah, a visiting scholar at LDC, presented a new tool for corpus annotation, the Standard Arabic Morphological Tagger (SAMT).  The current process of Arabic corpus annotation at LDC relies on using the Standard Arabic Morphological Analyzer (SAMA) to generate various morphology and lemma choices, and supplying these to manual annotators who then pick the correct choice. SAMA can generate dozens of choices for each word and does not provide any information about the likelihood of a particular choice being correct.  SAMT addresses these problems by ranking choices in order of their probabilities with a high degree of accuracy, thereby speeding up annotation.

You can view abstracts and presentation slides of this and other presentations in LDC's seminar series on data creation on our LDC Institute project page.

 

 

 
New Publications

 


(1)  GALE Phase 1 Chinese Newsgroup Parallel Text - Part 1 contains 240,000 characters (112 files) of Chinese newsgroup text and its translation selected from twenty-five sources. Newsgroups consist of posts to electronic bulletin boards, Usenet newsgroups, discussion groups and similar forums. This release was used as training data in Phase 1 (year 1) of the DARPA-funded GALE program.

Preparing the source data involved four stages of work: data scouting, data harvesting, formatting and data selection.

Data scouting involved manually searching the web for suitable newsgroup text. Data scouts were assigned particular topics and genres along with a production target in order to focus their web search. Formal annotation guidelines and a customized annotation toolkit helped data scouts to manage the search process and to track progress.

Data scouts logged their decisions about potential text of interest to a database. A nightly process queried the annotation database and harvested all designated URLs. Whenever possible, the entire site was downloaded, not just the individual thread or post located by the data scout. Once the text was downloaded, its format was standardized so that the data could be more easily integrated into downstream annotation processes. Typically, a new script was required for each new domain name that was identified. After scripts were run, an optional manual process corrected any remaining formatting problems.

The selected documents were then reviewed for content-suitability using a semi-automatic process. A statistical approach was used to rank a document's relevance to a set of already-selected documents labeled as "good." An annotator then reviewed the list of relevance-ranked documents and selected those which were suitable for a particular annotation task or for annotation in general. These newly-judged documents in turn provided additional input for the generation of new ranked lists.

Manual sentence units/segments (SU) annotation was also performed as part of the transcription task. Three types of end of sentence SU were identified: statement SU, question SU, and incomplete SU. After transcription and SU annotation, files were reformatted into a human-readable translation format and assigned to professional translators for careful translation. Translators followed LDC's GALE Translation guidelines which describe the makeup of the translation team, the source data format, the translation data format, best practices for translating certain linguistic features and quality control procedures applied to completed translations.

GALE Phase 1 Chinese Newsgroup Parallel Text - Part 1 is distributed via web download.

2009 Subscription Members will automatically receive two copies of this corpus on disc. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$1500.

(2)  Tagged Chinese Gigaword Version 2.0, created by scholars at Academia Sinica, Taipei, Taiwan, is a part-of-speech tagged version of LDC's Chinese Gigaword Second Edition (LDC2005T14). Like the original release, Version 2.0 contains all of the data in Chinese Gigaword Second Edition -- from Central News Agency, Xinhua News Agency and Lianhe Zaobao -- annotated with full part-of-speech tags. In addition, this new release removes residual noise present in the original and improves tagging accuracy by incorporating lexica of unknown words. The changes represented in Version 2.0 include the following:

  • A single-width space is used consistently between two segmented words.
  • The position of the newline character remains fixed, better reflecting the source files from Chinese Gigaword Second Edition (LDC2005T14).
  • The original coding of partial Latin letters or Arabic numerals is preserved.
  • 1,192 documents from Central News Agency (Taiwan) and 13 documents from Xinhua News Agency that were missing from the first publication are included.
  • A set of heuristics for building out-of-vocabulary dictionaries to improve annotation quality of very large corpora is incorporated.

Documents in the corpus were assigned one of the following categories:

  • story:   This type of DOC represents a coherent report on a particular topic or event, consisting of paragraphs and full sentences.
  • multi:   This type of DOC contains a series of unrelated "blurbs," each of which briefly describes a particular topic or event; examples include "summaries of today's news," "news briefs in ..." (some general area like finance or sports), and so on.
  • advis:   These are DOCs which the news service addresses to news editors; they are not intended for publication to the "end users."
  • other:   These DOCs clearly do not fall into any of the above types; they include items such as lists of sports scores, stock prices, temperatures around the world, and so on.

Since neither manual checking nor automatic checking against a gold standard is feasible for gigaword-size corpora, the authors proposed quality assurance of automatic annotation of very large corpora based on the heterogeneous CKIP and ICTCLAS tagging systems (Huang et al., 2008). By comparison with word lists generated from the ICTCLAS version of an automatically tagged Xinhua portion of Chinese Gigaword, a set of heuristics for building out-of-vocabulary dictionaries to improve quality was proposed. Randomly selected texts for evaluating the effects of these out-of-vocabulary dictionaries were manually checked. Experimental results indicate that 30,562 of the tested words (about 97.3%) were correct.

Tagged Chinese Gigaword Version 2.0 is distributed on one DVD-ROM.

2009 Subscription Members will automatically receive two copies of this corpus.  2009 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$4000.

LDC Offices to Close for Independence Day Holiday


LDC would like to inform our customers that we will be closed on Friday, July 3, 2009 in observance of Independence Day.  Our offices will reopen on Monday, July 6, 2009.

Back to Top

6 . Jobs openings

We invite all laboratories and industrial companies which have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free (also have a look at http://www.isca-speech.org/jobs.html as well as the jobs section of http://www.elsnet.org/).

The ads will be automatically removed from ISCApad after 6 months. Informing the ISCApad editor when a position is filled will avoid irrelevant mail between applicants and proposers.


Back to Top

6-1 . (2009-02-06) Position at ELDA

The Evaluation and Language Distribution Agency (ELDA) is offering a 6-month to 1-year internship in Human Language Technology for the Arabic language, with a special focus on Machine Translation (MT) and Multilingual Information Retrieval (MLIR). The internship is organised in the framework of the European project MEDAR (MEDiterranean ARabic language and speech technology). The intern will work in ELDA's offices in Paris, and the main work will consist of developing and adapting open-source MT and MLIR software for Arabic.

http://www.medar.info
http://www.elda.org

Qualifications:
---------------
The applicant should have a high-quality degree in Computer Science. Good programming skills in C, C++, Perl and Eclipse are required.
The applicant should have a good knowledge of Linux and open source software.

Interest in Speech/Text Processing, Machine Learning, Computational Linguistics, or Cognitive Science is a plus.
Proficiency in written English is required.


Starting date:
--------------
February 2009.


Applications
-------------
Applications in the first instance should be made by email to
Djamel Mostefa, Head of Production and Evaluation department, ELDA,  email: mostefa _AT_ elda.org

Please include a cover letter and  your CV 

Back to Top

6-2 . (2009-02-15) Research Grants for PhD Students and Postdoc Researchers-Bielefeld University

The Graduate School Cognitive Interaction Technology at Bielefeld University,
Germany offers
Research Grants for PhD Students and Postdoc Researchers
The Center of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University
has been established in the framework of the Excellence Initiative as a research center for
intelligent systems and cognitive interaction between humans and technical systems.
CITEC's focus is directed towards motion intelligence, attentive systems, situated
communication, and memory and learning. Research and development are directed towards
understanding the processes and functional constituents of cognitive interaction, and
establishing cognitive interfaces that facilitate the use of complex technical systems.
The Graduate School Cognitive Interaction Technology invites applications from outstanding
young scientists, in the fields of robotics, computer science, biology, physics, sports
sciences, linguistics or psychology, that are willing to contribute to the cross-disciplinary
research agenda of CITEC. The international profile of CITEC fosters the exchange of
researchers and students with related scientific institutions. For PhD students, a structured
program including taught courses and time for individual research is offered. The integration
and active participation in interdisciplinary research projects, which includes access to first
class lab facilities, is facilitated by CITEC. For more information, please see: www.cit-ec.de .
Successful candidates must hold an excellent academic degree (MSc/Diploma/PhD) in a
related discipline, have a strong interest in research, and be proficient in both written and
spoken English. Research grants will be given for the duration of three years for PhD
students, and one to three years for Postdocs.
All applications should include: a cover letter indicating the motivation and research interests
of the candidate, a CV including a list of publications, and relevant certificates of academic
qualification. PhD applicants are asked to provide the outline of a PhD project (2-3 pages)
and a short abstract. Postdoc researchers are asked to provide the outline of a research
project (4-5 pages) relevant to CITEC's research objectives and a short abstract. It is
obligatory for Postdoc applicants, and strongly recommended for PhD applicants, to provide
two letters of recommendation. In the absence of letters of recommendation, PhD candidates
should provide the names and contact details of two referees. All documentation should be
submitted in electronic form.
We strongly encourage candidates to contact our researchers, in advance of application, in
order to develop project ideas. For a list of CITEC researchers please visit: www.cit-ec.de .
Bielefeld University is an equal opportunity employer. Women are especially encouraged to
apply and in the case of comparable competences and qualification, will be given preference.
Bielefeld University explicitly encourages disabled people to apply.
Applications will be considered until all positions have been filled. For guaranteed
consideration, please submit your documents no later than March 22, 2009. Please address
your application to Prof. Thomas Schack, Head of Graduate School, Email: gradschool@citec.uni-bielefeld.de. Please direct any queries relating to your application to Claudia Muhl,
Graduate School Manager, phone: +49-(0)521-106-6566, cmuhl@cit-ec.uni-bielefeld.de
Back to Top

6-3 . (2009-03-09) 9 PhD positions in the Marie Curie International Training Network

Up to 9 PhD Positions available in

 

 the Marie Curie International Training Network on

 

Speech Communication with Adaptive LEarning (SCALE)

 

 

 

 

SCALE is a cooperative project between

 

·        IDIAP Research Institute in Martigny, Switzerland (Prof Herve Bourlard)

·        Radboud University Nijmegen, The Netherlands (Prof Lou Boves, Dr Louis ten Bosch, Dr-ir Bert Cranen, Dr O. Scharenborg)

·        RWTH Aachen, Germany (Prof Hermann Ney, Dr Ralf Schlüter)

·        Saarland University, Germany (Prof Dietrich Klakow, Dr John McDonough)

·        University of Edinburgh, UK (Prof Steve Renals, Dr Simon King, Dr Korin Richmond, Dr Joe Frankel)

·        University of Sheffield, UK (Prof Roger Moore, Prof Phil Green, Dr Thomas Hain, Dr Guido Sanguinetti) .

 

Companies like Toshiba or Philips Speech Recognition Systems/Nuance are associated partners of the program.

 

Each PhD position is funded for three years and degrees can be obtained from the participating academic institutions. 

 

Distinguishing features of the cooperation include:

 

·        Joint supervision of dissertations by lecturers from two partner institutions

·        While staying with one institution for most of the time, the program includes a stay at a second partner institution either from academic or industry for three to nine month 

·        An intensive research exchange program between all participating institutions

 

PhD projects will be in the area of

 

·        Automatic Speech Recognition

·        Machine learning

·        Speech Synthesis

·        Signal Processing

·        Human speech recognition

 

The salary of a PhD position is roughly 33,800 Euro per year. There are additional mobility allowances (up to 800 Euro/month) and travel allowances (yearly allowance). Applicants should hold a strong university degree which would entitle them to embark on a doctorate (Masters/diploma or equivalent) in a relevant discipline, and should be in the first four years of their research careers. As the project is funded by an EU mobility scheme, there are also certain mobility requirements.

 

Women are particularly encouraged to apply.

 

Deadlines for applications:

 

April 1, 2009

July 1, 2009

September 1, 2009.

 

After each deadline all submitted applications will be reviewed and positions awarded until all positions are filled.

 

Applications should be submitted at http://www.scale.uni-saarland.de/index.php?authorsInstructions=1 .

 

To be fully considered, please include:

 

- a curriculum vitae indicating degrees obtained, disciplines covered

(e.g. list of courses ), publications, and other relevant experience

 

- a sample of written work (e.g. research paper, or thesis,

preferably in English)

 

- copies of high school and university certificates, and transcripts

 

- two references (e-mailed directly to the SCALE office

(Diana.Schreyer@LSV.Uni-Saarland.De) before the deadline)

 

- a statement of research interests, previous knowledge and activities

in any of the relevant research areas.

 

In case an application can only be submitted by regular post, it should

be sent to:

 

SCALE office

Diana Schreyer

Spoken Language Systems, FR 7.4

C 71 Office 0.02

Saarland University

P.O. Box 15 11 50

D-66041 Saarbruecken

Germany

 

If you have any questions, please contact Prof. Dr. Dietrich Klakow

(Dietrich.Klakow@LSV.Uni-Saarland.De).

 

For more information see also http://www.scale.uni-saarland.de/

 

Back to Top

6-4 . (2009-03-10) Maître de conférences position at Université Paris Descartes, Paris (France)

 A position of maître de conférences (associate professor) in computer science (section 27), reference 27MCF0031, is open at Université Paris Descartes.
The aim of this recruitment is to strengthen the research theme of speech processing for the detection and remediation of voice disorders. Candidates are expected to have solid experience in automatic speech processing (recognition, synthesis, ...).
On the teaching side, all degrees of the UFR of Mathematics and Computer Science are concerned: the Licence MIA, the Master in Mathematics and Computer Science, and the Master MIAGE.

Contact: Marie-José Caraty

Professor of Computer Science
CRIP5 - Diadex (Dialogue and indexing)

Université Paris Descartes
45, rue des Saints Pères - 75270 Paris cedex 06
<mailto:Marie-Jose.Caraty@ParisDescartes.fr>

Tel.: (33/0) 1 42 86 38 48

 

Back to Top

6-5 . (2009-03-14) Professor position, Institut de linguistique et phonétique, Université Paris 3 (Sorbonne Nouvelle), Paris (France)

UNIVERSITE PARIS 3 (SORBONNE NOUVELLE) - Position no. 3743
Section 07 - Language sciences: general linguistics and phonetics; Computer Science and Natural Language Processing (Traitement Automatique des Langues)
Location: Paris 75005
Status: vacant

Address for applications:
17, rue de la Sorbonne
Bureau du personnel enseignant
PR - 7eme - 0743
75005 Paris

Administrative contact: Martine Graffan (Gestion MCF)
Telephone: 01 40 46 28 96 / 01 40 46 28 92
Fax: 01 43 25 74 71
Email: Martine.Graffan@univ-paris3.fr

Starting date: 01/09/2009

Department (UFR): Institut de linguistique et phonétique générales et appliquées (ILPGA), reference 0751982X

Associated laboratories:
EA2290 - Systèmes Linguistiques, Enonciation et Discursivité (SYLED)
UMR7018 - Laboratoire de Phonétique et Phonologie
EA1483 - Recherche sur le Français Contemporain
UMR7107 - Laboratoire des Langues et Civilisations à Tradition Orale (LACITO)

Additional information

Teaching profile:
Teaching will range from the first year of the Licence in Language Sciences up to the Doctorate in Language Sciences, speciality Natural Language Processing (TAL). The training in Natural Language Processing can of course also find applications in the Master in Language Sciences, speciality Language, Languages, Models, and in Doctorates in Language Sciences of other specialities. The position will involve supervising courses combining Language Sciences and Natural Language Processing, oriented both towards further study at Master and Doctorate level and towards professionalisation, preparing students for careers in the language industries.
Teaching department: UFR de Linguistique et Phonétique Générales et Appliquées
Location: 19, rue des Bernardins, 75005 Paris
Head of department: Mme Martine Vertalier, tel. 01 44 32 05 79, email Martine.Vertalier@univ-paris3.fr

Research profile:
Development and supervision of research in NLP; research on large oral and/or written corpora in various languages, possibly including data mining and grammar induction; and also, in synergy within the established research teams, collaboration with groups working on other research areas through a contribution of theoretical and technological resources. The Professor will carry out his or her research within Doctoral School 268 of Paris 3, in priority within the team that founded the training and research programmes described above, SYLED, in particular its CLA2T component (Centre de Lexicométrie et d'Analyse Automatique des Textes), or within a team whose faculty members contribute to teaching and research at the ILPGA: the Laboratoire de Phonétique et Phonologie (UMR 7018) or the Laboratoire des Langues et Civilisations à Tradition Orale (LACITO, UMR 7107).
Research locations and laboratory directors:
1- EA 2290 SYLED, 19, rue des Bernardins, 75005 Paris - M. André Salem, 01 44 32 05 84, syled@univ-paris3.fr
2- UMR 7018 Laboratoire de Phonétique et Phonologie, 19, rue des Bernardins, 75005 Paris - Mme Jacqueline Vaissière and Mme Annie Rialland, 01 43 26 57 17, jacqueline.vaissiere@univ-paris3.fr
3- EA 1483 Recherche sur le Français Contemporain, 19, rue des Bernardins, 75005 Paris - Mme Anne Salazar-Orvig, 01 44 32 05 07, anne.salazar-orvig@univ-paris3.fr
4- UMR 7107 LACITO CNRS, 7, rue G. Môquet, 94800 Villejuif - Mme Zlatka Guentcheva, 01 49 58 37 78, lacito@vjf.cnrs.fr

 

Back to Top

6-6 . (2009-03-15) Maître de conférences position, Université Paris X, Nanterre (France)

MCF position no. 221: Linguistics: pathology of language acquisition
Université Paris X, Nanterre, Département des Sciences du langage
Contact: Anne Lacheret, anne@lacheret.com

Preference will be given to candidates with a double profile: linguistics and speech-language therapy (orthophonie) or a related discipline.

Back to Top

6-7 . (2009-03-18) Development engineer: semantics, NLP, machine translation (France)

Ingénieur Etude & Développement (H/F)

POSTE BASE DANS LE NORD PAS DE CALAIS (62)

 

 

Fort d’une croissance continue de ses activités, soutenue par un investissement permanent en R&D, notre CLIENT, leader Européen du traitement de l’information recrute un Ingénieur Développement (h/f) spécialisé en sémantique, traitement automatique du langage naturel, outils de traduction automatique et de recherche d’informations cross-lingue et système de gestion de ressources linguistique multilingues (dictionnaires, lexiques, mémoires de traduction, corpus alignés).


Passionné(e) par l’application des technologies les plus avancées au traitement industriel de l’information, vos missions consistent à concevoir, développer et industrialiser les chaînes de traitement documentaire utilisées par les lignes de production pour le compte des clients de l’entreprise.

De formation supérieure en informatique (BAC+5 ou équivalent), autonome et créatif, nous vous proposons d’intégrer une structure dynamique et à taille humaine où l’innovation est permanente au service de la production et du client.

Vous justifiez idéalement de 2/3 ans d'expérience dans la programmation orientée objet et les processus de développement logiciel. La pratique de C++ et/ou Java est indispensable.
La maîtrise de l’anglais est exigée pour évoluer dans un groupe à envergure internationale.
Vos qualités d’analyse et de synthèse, votre sens du service et de l’engagement client vous permettront de relever le challenge que nous vous proposons.

Back to Top

6-8 . (2009-04-02)The Johns Hopkins University: Post-docs, research staff, professors on sabbaticals

The Johns Hopkins University
The Human Language Technology Center of Excellence
Post-docs, research staff, professors on sabbaticals
The Human Language Technology Center of Excellence (COE) at the Johns Hopkins University is seeking to hire outstanding Ph.D. researchers in the field of speech and natural language processing. The COE seeks the most talented candidates for both junior and senior level positions including, but not limited to, full-time research staff, professors on sabbaticals, visiting scientists and post-docs. Candidates will be expected to work in a team setting with other researchers and graduate students at the Johns Hopkins University, the University of Maryland College Park and other affiliated institutions.
Candidates should have a strong background in speech processing:
Robust speech recognition across languages, channels and formal vs. informal genres; speaker identification, language identification, speech retrieval, spoken term detection, etc.
The COE was founded in January 2007 and has a long-term research contract as an independent center within Johns Hopkins University. Located next to Johns Hopkins’ Homewood Campus in Baltimore, Maryland, the COE’s distinguished contract partners include the University of Maryland College Park, the Johns Hopkins University Applied Physics Lab, and BBN Technologies of Cambridge, Massachusetts. World-class researchers at the COE focus on fundamental challenge problems critical to finding solutions for real-world problems of importance to our government sponsor. The COE offers substantial computing capability for research that requires heavy computation and massive storage. In the summer of 2009, the COE will hold its first annual Summer Camp for Advanced Language Exploration (SCALE), inviting the best and brightest researchers to work on common areas in speech and NLP. Researchers are expected to publish in peer-reviewed venues. For more information about the COE, visit www.hltcoe.org.
Applicants should have earned a Ph.D. in Computer Science (CS), Electrical and Computer Engineering (ECE), or a closely related field. Applicants should submit a curriculum vitae, research statement, names and addresses of at least four references, and an optional teaching statement. Please send applications and inquiries about the position to hltcoe-hiring@jhu.edu.
Back to Top

6-9 . (2009-04-07) PhD Position at The University of Auckland - New Zealand

PhD Position at The University of Auckland, New Zealand: Speech recognition for healthcare robotics. Description: This project is the speech recognition component of a larger project for a speech-enabled command module with verbal feedback software to facilitate interaction between aged people and robots, including speech generation and empathetic speech expression by the robot, and speech recognition by the robot. For more details please refer to the link: https://wiki.auckland.ac.nz/display/csihealthbots/Speech+recognition+PhD
Back to Top

6-10 . (2009-04-23) R&D position in SPEECH RECOGNITION, PROCESSING AND SYNTHESIS IRCAM Paris

RESEARCH AND DEVELOPMENT POSITION IN SPEECH RECOGNITION, PROCESSING AND SYNTHESIS
=========================================================================

The position is available immediately in the Speech group of the Analysis/Synthesis team at Ircam.
The Analysis/Synthesis team undertakes research and development
centered on new and advanced algorithms for analysis, synthesis and
transformation of audio signals, and, in particular, speech.

JOB DESCRIPTION:
A full-time position is open for research and development of advanced statistics
and signal processing algorithms in the field of speech recognition,
transformation and synthesis.
http://www.ircam.fr/anasyn.html  (projects Rhapsodie, Respoken,
Affective Avatars, Vivos, among others)
The applications in view are, for example,
- Transformation of the identity, type and nature of a voice
- Text-to-Speech and expressive Speech Synthesis
- Synthesis from actor and character recordings.
The principal task is the design and the development of new algorithms
for some of the subjects above and in collaboration with the other
members of the Speech group. The research environment is Linux, Matlab
and various scripting languages like Perl. The development environment
is C/C++, for Windows in particular.

REQUIRED EXPERIENCE AND COMPETENCE:
O Excellent experience of research in statistics, speech and signal processing
O Experience in speech recognition, automatic segmentation (e.g. HTK)
O Experience of C++ development
O Good knowledge of UNIX and Windows environments
O High productivity, methodical work, and excellent programming style.

AVAILABILITY:
The position is available in the Analysis/Synthesis team of the Research
and Development department of Ircam, to start as soon as possible.

DURATION:
The initial contract is for 1 year, and could be prolonged.

EEC WORKING PAPERS:
In order to be able to begin immediately, the candidate SHALL HAVE valid EEC working papers.

SALARY:
According to formation and experience.

TO APPLY:
Please send your CV describing in a very detailed way the level of knowledge,
expertise and experience in the fields mentioned above (and any other
relevant information, recommendations in particular) preferably by email to:

 Xavier.Rodet@ircam.fr (Xavier Rodet, Head of the Analysis/Synthesis team)

Or by fax: (33 1) 44 78 15 40, attention of Xavier Rodet

Or by post to: Xavier Rodet, IRCAM, 1 Place Stravinsky, 75004 Paris, France 

Back to Top

6-11 . (2009-05-04) Several Ph.D. positions and Ph.D. or Postdoc scholarships, Universität Bielefeld

 Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld

 
Applications are invited for several Ph.D. positions and Ph.D. scholarships in experimental phonetics, speech technology and laboratory phonology at Universität Bielefeld (Fakultät für Linguistik und Literaturwissenschaft), Germany.

 

Successful candidates should hold a Master's degree (or equivalent) in phonetics, computational linguistics, linguistics, computer science or a related discipline. They will have a strong background in either

-       speech synthesis and/or recognition

-       discourse prosody

-       laboratory phonology

-       speech and language rhythm research

-       multimodal speech (technology)

 

Candidates should appreciate working in an interdisciplinary environment. Good knowledge in experimental design techniques and programming skills will be considered a plus. Strong interest in research and high proficiency in English is required.

 

The Ph.D. positions will be part-time (50%); salary and social benefits are determined by the German public service pay scale (TVL-E13). The Ph.D. scholarship is based on the DFG scale. There is no mandatory teaching load.

 

Bielefeld University is an equal opportunity employer. Women are therefore particularly encouraged to apply. Disabled applicants with equivalent qualification will be treated preferentially.

 

The positions are available for three years (with a potential extension for the Ph.D. positions), starting as soon as
possible. Please submit your documents (cover letter, CV including list of publications, statement of research interests, names of two referees) electronically to the address indicated below. Applications must be received by June 15, 2009.
 
Universität Bielefeld
Fakultät für Linguistik und Literaturwissenschaft
Prof. Dr. Petra Wagner
Postfach 10 01 31
33 501 Bielefeld
Germany
 

 

 

 

 
 
Back to Top

6-12 . (2009-05-07) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (FRANCE)

=============================================================================
PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting
09/09)
=============================================================================

The PORT-MEDIA project (ANR CONTINT 2008-2011), sponsored by the French National
Research Agency, is a cooperation between the University of Avignon, the University
of Grenoble, the University of Le Mans, CNRS at Nancy and ELRA (European Language
Resources Association). PORT-MEDIA
will address the multi-domain and multi-lingual robustness and
portability of spoken language understanding systems. More specifically,
the overall objectives of the project can be summarized as:
- robustness: integration/coupling of the automatic speech recognition
component in the spoken language understanding process.
- portability across domains and languages: evaluation of the genericity
and adaptability of the approaches implemented in the
understanding systems, and development of new techniques inspired by
machine translation approaches.
- representation: evaluation of new rich structures for high-level
semantic knowledge representation.

The PhD thesis will focus on the multilingual portability of speech
understanding systems. For example, the candidate will investigate
techniques to rapidly adapt an understanding system from one language to
another and to create low-cost resources with (semi-)automatic methods,
for instance by using automatic alignment techniques and lightly
supervised translations. The main contribution will be to fill the gap
between the techniques currently used in the statistical machine
translation and spoken language understanding fields.

The thesis will be co-supervised by Fabrice Lefèvre, Assistant Professor
at LIA (University of Avignon) and Laurent Besacier, Assistant Professor
at LIG (University of Grenoble). The candidate will spend 18 months at
LIG then 18 months at LIA.

The salary of a PhD position is roughly 1,300€ net per month. Applicants
should hold a strong university degree entitling them to start a
doctorate (Masters/diploma or equivalent) in a relevant discipline
(Computer Science, Human Language Technology, Machine Learning, etc).
The applicants should be fluent in English. Competence in French is
optional, though applicants will be encouraged to acquire this skill
during training. All applicants should have very good programming skills.

For further information, please contact Fabrice Lefèvre (Fabrice.Lefevre
at univ-avignon.fr) AND Laurent Besacier (Laurent.Besacier at imag.fr).


Back to Top

6-13 . (2009-05-07) Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld

 
Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld
 
Applications are invited for several Ph.D. positions and Ph.D. scholarships in experimental phonetics, speech technology and laboratory phonology at Universität Bielefeld (Fakultät für Linguistik und Literaturwissenschaft), Germany.

 

Successful candidates should hold a Master's degree (or equivalent) in phonetics, computational linguistics, linguistics, computer science or a related discipline. They will have a strong background in one or more of the following:

-       speech synthesis and/or recognition

-       discourse prosody

-       laboratory phonology

-       speech and language rhythm research

-       multimodal speech (technology)

 

Candidates should appreciate working in an interdisciplinary environment. Good knowledge of experimental design techniques and programming skills will be considered a plus. A strong interest in research and high proficiency in English are required.

 

The Ph.D. positions will be part-time (50%); salary and social benefits are determined by the German public service pay scale (TVL-E13). The Ph.D. scholarship is based on the DFG scale. There is no mandatory teaching load.

 

Bielefeld University is an equal opportunity employer. Women are therefore particularly encouraged to apply. Disabled applicants with equivalent qualifications will be given preference.

 

The positions are available for three years (with a potential extension for the Ph.D. positions), starting as soon as
possible. Please submit your documents (cover letter, CV including list of publications, statement of research interests, names of two referees) electronically to the address indicated below. Applications must be received by June 15, 2009.
 
Universität Bielefeld
Fakultät für Linguistik und Literaturwissenschaft
Prof. Dr. Petra Wagner
Postfach 10 01 31
33 501 Bielefeld
Germany
 
 
 
Back to Top

6-14 . (2009-05-08) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)

PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
=============================================================================

The PORT-MEDIA project (ANR CONTINT 2008-2011), sponsored by the French National Research Agency, is a cooperation between the University of Avignon, the University of Grenoble, the University of Le Mans, CNRS at Nancy and ELRA (European Language Resources Association). PORT-MEDIA will address the multi-domain and multi-lingual robustness and portability of spoken language understanding systems. More specifically, the overall objectives of the project can be summarized as:
- robustness: integration/coupling of the automatic speech recognition component in the spoken language understanding process.
- portability across domains and languages: evaluation of the genericity and adaptability of the approaches implemented in the
understanding systems, and development of new techniques inspired by machine translation approaches.
- representation: evaluation of new rich structures for high-level semantic knowledge representation.

The PhD thesis will focus on the multilingual portability of speech understanding systems. For example, the candidate will investigate techniques to rapidly adapt an understanding system from one language to another and to create low-cost resources with (semi-)automatic methods, for instance by using automatic alignment techniques and lightly supervised translations. The main contribution will be to fill the gap between the techniques currently used in the statistical machine translation and spoken language understanding fields.

The thesis will be co-supervised by Fabrice Lefèvre, Assistant Professor at LIA (University of Avignon) and Laurent Besacier, Assistant Professor at LIG (University of Grenoble). The candidate will spend 18 months at LIG then 18 months at LIA.

The salary of a PhD position is roughly 1,300€ net per month. Applicants should hold a strong university degree entitling them to start a doctorate (Masters/diploma or equivalent) in a relevant discipline (Computer Science, Human Language Technology, Machine Learning, etc). The applicants should be fluent in English. Competence in French is optional, though applicants will be encouraged to acquire this skill during training. All applicants should have very good programming skills.

For further information, please contact Fabrice Lefèvre (Fabrice.Lefevre at univ-avignon.fr) AND Laurent Besacier (Laurent.Besacier at imag.fr). 

Back to Top

6-15 . (2009-05-11) CIFRE PhD thesis on multimedia data indexing, Institut Eurecom

CIFRE PhD thesis on multimedia data indexing
PhD thesis
Deadline: 01/11/2009
merialdo@eurecom.fr
http://bmgroup.eurecom.fr/
The Multimedia Communications Department of EURECOM, in partnership with the travel services provider AMADEUS, invites applications for a PhD position on multimedia indexing. The goal of the thesis is to study new techniques to organize large quantities of multimedia information, specifically images and videos, in order to improve services to travelers. This includes managing images and videos from providers as well as from users, about places, locations, events, etc. The approach will be based on the most recent techniques in multimedia indexing, and will benefit from the strong research experience of EURECOM in this domain, combined with the industrial experience of AMADEUS.
We are looking for very good and motivated students, with a strong knowledge in image and video processing, statistical and probabilistic modeling, for the theoretical part, and a good C/C++ programming ability for the experimental part. English is required. The successful candidate will be employed by AMADEUS in Sophia Antipolis, and will strongly interact with the researchers at EURECOM.
Applicants should email a resume, letter of motivation, and all relevant information to:
Prof. Bernard Merialdo
merialdo@eurecom.fr
The project will be conducted within AMADEUS (http://www.amadeus.com/), a world leader in providing solutions to the travel industry for managing the distribution and selling of travel services. The company is the leading Global Distribution System (GDS) and the biggest processor of travel bookings in the world. Its main development center is located in Sophia Antipolis, France, and employs more than 1200 engineers. The research will be supervised by EURECOM (http://www.eurecom.fr), a graduate school and research center in communication systems, whose activity includes corporate, multimedia and mobile communications. EURECOM currently counts about 20 professors, 10 post-docs, 170 MS and 60 PhD students, and is involved in many European research projects and joint collaborations with industry. EURECOM is also located in Sophia Antipolis, a major European technology park for telecommunications research and development on the French Riviera.
 

Back to Top

6-16 . (2009-05-11) Senior Research Fellowship in Speech Perception and Language Development, MARCS Auditory Laboratories

Ref 147/09 Senior Research Fellowship in Speech Perception and Language Development, MARCS Auditory Laboratories, Australia

 

MARCS Auditory Laboratories is a multi-disciplinary research centre involved in research in auditory perception and cognition, particularly in the fields of speech and language, music, sound and action, and hearing and auditory processes.

MARCS is seeking a Senior Research Fellow with a background in psychology/behavioural science and specialisation in some or all of the following: speech perception, speech science, experimental phonetics, infant and child perception studies, speech production studies (e.g. with OPTOTRAK), psychophysics/psychoacoustics, cross-language studies; and experience in sophisticated methods of data analysis.
 
As this position is likely to involve working with children, 'Prohibited Persons' are not permitted to apply. The successful applicant will be required to authorise a screening check.

5 Year Fixed Term Contract , Bankstown Campus

Remuneration Package: Academic Level C $107,853 to $123,724 p.a. (comprising Salary $91,266 to $104,831 p.a., 17% Superannuation, and Leave Loading)

Position Enquiries: Professor Denis Burnham, (02) 9772 6677 or email d.burnham@uws.edu.au

Closing Date: The closing date for this position has been extended until 30 June 2009. 

 

 
Back to Top

6-17 . (2009-06-02) PhD thesis proposal 2009: Speech scene analysis, Grenoble, France

PhD thesis proposal 2009
Doctoral school EDISCE (http://www-sante.ujf-grenoble.fr/edisce/)
ANR funding (http://www.icp.inpg.fr/~schwartz/Multistap/Multistap.html)

Speech scene analysis: the audio-visuo-motor binding problem in the light of behavioural and neurophysiological data


Two important questions run through current research on the cognitive processing of speech: multisensoriality (how auditory and visual information are combined in the brain) and perceptuo-motor interactions.

In our view, a missing question is that of "binding": in these auditory or audiovisual processes, how does the brain manage to "put together" the relevant information, eliminate the "noise" and construct the relevant "speech streams" before a decision is taken? More precisely, the elementary objects of the speech scene are phonemes, and specialised auditory, visual and articulatory modules contribute to the phonetic identification process, but so far it has not been possible to isolate their respective contributions, nor the way these contributions are fused. Recent experiments suggest that the phonetic identification process is non-hierarchical in nature and essentially instantiated by associative operations. The thesis will consist in developing further original experimental paradigms, and also in setting up the neurophysiology and neuroimaging experiments (EEG, fMRI) available in the laboratory and its Grenoble environment, in order to determine the nature and functioning of the audiovisual grouping processes for speech scenes, in relation to production mechanisms.

This thesis will take place within the ANR project "Multistap" (Multistability and perceptual grouping in audition and in speech, http://www.icp.inpg.fr/~schwartz/Multistap/Multistap.html). This project will provide both the funding for the PhD grant and a stimulating environment for the research, in partnership with teams of specialists in audition and vision in Paris (DEC ENS), Lyon (LNSCC) and Toulouse (Cerco).

Supervisors
Jean-Luc Schwartz (DR CNRS, HDR): 04 76 57 47 12
Frédéric Berthommier (CR CNRS): 04 76 57 48 28
Jean-Luc.Schwartz, Frederic.Berthommier@gipsa-lab.grenoble-inp.fr

Back to Top

6-18 . (2009-06-10) PhD in ASR in Le Mans France

PhD position in Automatic Speech Recognition
=====================================

Starting in September-October 2009.

The ASH (Attelage de Systèmes Hétérogènes) project is a project funded by the ANR (French National Research Agency). Three French academic laboratories are involved: LIUM (University of Le Mans), LIA (University of Avignon) and IRISA (Rennes).

The main objective of the ASH project is to define and experiment with an original methodological framework for the integration of heterogeneous automatic speech recognition systems. Integrating heterogeneous systems, and hence heterogeneous sources of knowledge, is a key issue in ASR, but also in many other applicative fields concerned with knowledge integration and multimodality.

Clearly, the lack of a generic framework to integrate systems operating with different viewpoints, different knowledge sources and at different levels is a strong limitation which needs to be overcome: the definition of such a framework is the fundamental challenge of this work.

By defining a rigorous and generic framework to integrate systems, significant scientific progress is expected in automatic speech recognition. Another objective of this project is to enable the efficient and reliable processing of large data streams by combining systems on the fly.
Finally, we expect to develop an on-the-fly ASR system as a real-time demonstrator of this new approach.

The thesis will be co-supervised by Paul Deléglise, Professor at LIUM, Yannick Estève, Assistant Professor at LIUM, and Georges Linarès, Assistant Professor at LIA. The candidate will work at Le Mans (LIUM), but will regularly spend a few days in Avignon (LIA).

Applicants should hold a strong university degree entitling them to start a doctorate (Masters/diploma or equivalent) in a relevant discipline (Computer Science, Human Language Technology, Machine Learning, etc).

The applicants for this PhD position should be fluent in English or in French. Competence in French is optional, though applicants will be encouraged to acquire this skill during training. This position is funded by the ANR.

Strong software skills are required, especially Unix/linux, C, Java, and a scripting language such as Perl or Python.

Contacts:
Yannick Estève: yannick.esteve@lium.univ-lemans.fr
Georges Linarès: georges.linares@univ-avignon.fr 

Back to Top

6-19 . (2009-06-17) Two postdoc positions in the Carnegie Mellon University-Portugal program

Two post-doctoral positions in the framework of the Carnegie Mellon
University-Portugal program are available at the Spoken Language Systems
Lab (www.l2f.inesc-id.pt), INESC-ID, Lisbon, Portugal.
Positions are for a fixed-term contract of up to two and a half
years, renewable in one-year intervals, in the scope of the research
projects PT-STAR (Speech Translation Advanced Research to and from
Portuguese) and REAP.PT (Computer Aided Language Learning – Reading
Practice), both financed by FCT (Portuguese Foundation for Science and
Technology).
The starting date for these positions is September 2009, or as soon as
possible thereafter.
Candidates should send their CVs (in .pdf format) before July 15th, to
the email addresses given below, together with a motivation letter.
Questions or other clarification requests should be emailed to the same
addresses.
======== PT-STAR (project CMU-PT/HuMach/0039/2008) ========
Topic: Speech-to-Speech Machine Translation
Description: We seek candidates with excellent knowledge in statistical
approaches to machine translation (and if possible also speech
technologies) and strong programming skills. Familiarity with the
Portuguese language is not at all mandatory, although the main source
and target languages are Portuguese/English.
Email address for applications: lcoheur at l2f dot inesc-id dot pt
======== REAP.PT (project CMU-PT/HuMach/0053/2008) ========
Topic: Computer Aided Language Learning
Description: We seek candidates with excellent knowledge in automatic
question generation (multiple-choice synonym questions, related word
questions, and cloze questions) and/or measuring the reading difficulty
of a text (exploring the combination of lexical features, grammatical
features and statistical models). Familiarity with a romance language is
recommended, since the target language is Portuguese.
Email address for applications: nuno dot mamede at inesc-id dot pt
Back to Top

6-20 . (2009-06-19) POSTDOC POSITION in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES

POSTDOC POSITION in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES (18 months ; starting January 2010 or later) IN GRENOBLE (France)

=============================================================================

PI (ANR BLANC 2009-2012) is a cooperative project sponsored by the French National Research Agency, between the University of Grenoble (France), the University of Avignon (France), and the International Research Center MICA in Hanoï (Vietnam).


PI addresses spoken language processing (notably speech recognition) for under-resourced languages (or π-languages). From a scientific point of view, the interest and originality of this project consist in proposing viable innovative methods that go far beyond the simple retraining or adaptation of acoustic and linguistic models. From an operational point of view, this project aims at providing a free open-source ASR development kit for π-languages. We plan to distribute and evaluate such a development kit by deploying ASR systems for new under-resourced languages with very poor resources from Asia (Khmer, Lao) and Africa (Bantu languages).

The POSTDOC position focuses on the development of ASR for two low-resourced languages from Asia and Africa. This includes: supervising the resource collection (in cooperation with the language partners), proposing innovative methods to quickly develop ASR systems for these languages, evaluation, etc.

The salary of the POSTDOC position is roughly 2300€ net per month. Applicants should hold a PhD related to spoken language processing. The applicants should be fluent in English. Competence in French is optional, though applicants will be encouraged to acquire this skill during the postdoc.

For further information, please contact Laurent Besacier (Laurent.Besacier at imag.fr).

Back to Top

6-21 . (2009-06-22) PhD studentship in speech and machine learning ESPCI ParisTech

Thesis topic: Vocal Prosthesis Based on Machine Learning
The objective of the thesis is to design and implement a vocal prosthesis to restore the original voice of persons who have lost the ability to speak due to a partial or total laryngectomy or a neurological problem. Using a miniature ultrasound machine and a video camera to drive a speech synthesizer, the device is intended to restore the original voice of these patients with as much fidelity as possible, allowing speech handicapped individuals to interact with those around them in a more natural and familiar way. The thesis work will build upon promising results obtained in the Ouisper project (funded on contract number ANR-06-BLAN-0166, http://www.neurones.espci.fr/ouisper/index.htm, and also supported by the French Defense Department, DGA), which terminates at the end of 2009; however, final success will require addressing the following four key technological issues:
1) New data acquisition protocol: the current acquisition system requires the user’s head to be immobilized during speech. The candidate will need to design and implement an innovative new system to overcome this constraint, which would be unacceptable for a real world application.
2) New dictionaries: results obtained thus far show that a truly open domain vocabulary may not be realistic. The candidate will create new dictionaries of vocabularies which are constrained, yet rich enough to be of genuine utility for verbal communication in the targeted speech handicapped community.
3) New synthesis methods: concatenative synthesis, though conceptually simple, is not sufficiently flexible when the initial recognition step contains errors. The candidate will devise new synthesis methods which better model the spectral qualities of the speaker’s voice, perhaps using Bayesian networks. Innovative techniques of recovering an acceptable prosody for the synthesized speech will also need to be developed.
4) Real time execution: as the amount of calculation necessary to carry out the recognition and synthesis steps is significant, the candidate will need to pay particular attention to optimization of code and real time execution in the algorithms he or she develops.
The thesis will be carried out in partnership with the Laboratoire de Phonétique et de Phonologie of the Université de Paris III, specializing in speech production and pathologies, for which additional funding has been obtained from the Agence Nationale de la Recherche (Emergence-TEC 2009 call, REVOIX project).
Back to Top

6-22 . (2009-06-30) Postdoctoral Fellowships in machine learning/statistics/machine vision at Monash University, Australia


Back to Top

6-23 . (2009-06-30) PhD studentship at LIMSI France

Title:
Models of expressivity for the synthesis of short stories read aloud by a humanoid robot.
Description:
While current synthesis systems are generally adequate for reading sentences in a
neutral way, they quickly become tiresome to listen to, especially for fairly long texts
(several paragraphs). Synthesis systems are hardly capable of making a narration
expressive. Likewise, the motor capacities of humanoid robots are currently little
exploited and developed for expression through gesture and posture.
This research project concerns expressive audiovisual synthesis of short stories. The
project comprises two main aspects. In an analysis phase, texts (short stories of the
"children's tale" type) are to be processed automatically in order to extract their
pragmatic, semantic, dialogic, narrative and emotional content.
In a second phase, this content will serve, on the one hand, for the synthesis of expressive prosody and,
on the other hand, to feed a behavioural model in terms of postures, gestures and other
movements of the humanoid robot NAO.
Required skills:
This topic lies in the field of expressive human-machine interaction.
It requires possessing or acquiring skills in computational linguistics, for written as
well as spoken language, and if possible also from an audiovisual point of view.
The project contains a significant programming component for text analysis and
synthesis, but also a significant component of linguistic analysis (of the texts), phonetic
analysis (of prosody), and behavioural analysis (posture and gestures).
Profiles in computer science, cognitive science or linguistics will therefore be considered.
Context and host team:
This thesis is part of the ANR GV-LEX contract.
It will take place at LIMSI-CNRS (www.limsi.fr) in the Audio & Acoustics,
Spoken Language Processing, and Architecture and Models of Interaction groups.
The thesis will start in September, funded by the ANR for a duration of 3 years.
Supervision - contact:
The thesis will be supervised by Christophe d'Alessandro, CNRS research director.
Applications should be sent to the four researchers involved in this project:
Christophe d'Alessandro <cda@limsi.fr>
Jean-Claude Martin <martin@limsi.fr>
Sophie Rosset <sophie.rosset@limsi.fr>
Albert Rilliard <rilliard@limsi.fr>
Back to Top

6-24 . (2009-07-01) PhD thesis: Vocal Prosthesis Based on Machine Learning (France)

Vocal Prosthesis Based on Machine Learning (2)
PhD thesis
Deadline: 01/09/2009
denby@ieee.org
We are looking for an excellent candidate for a PhD studentship in speech and statistical learning at the Laboratoire d'Electronique at ESPCI ParisTech, Paris, France. Interested candidates should contact Prof. B. Denby by email at denby@ieee.org before 1 September 2009 at the latest (earlier application is strongly encouraged).
Working language: French or English
Thesis topic : Vocal Prosthesis Based on Machine Learning
The objective of the thesis is to design and implement a vocal prosthesis to restore the original voice of persons who have lost the ability to speak due to a partial or total laryngectomy or a neurological problem. Using a miniature ultrasound machine and a video camera to drive a speech synthesizer, the device is intended to restore the original voice of these patients with as much fidelity as possible, allowing speech handicapped individuals to interact with those around them in a more natural and familiar way. The thesis work will build upon promising results obtained in the Ouisper project (funded on contract number ANR-06-BLAN-0166,
http://www.neurones.espci.fr/ouisper/index.htm,
and also supported by the French Defense Department, DGA), which terminates at the end of 2009; however, final success will require addressing the following four key technological issues:
1) New data acquisition protocol: the current acquisition system requires the user's head to be immobilized during speech. The candidate will need to design and implement an innovative new system to overcome this constraint, which would be unacceptable for a real world application.
2) New dictionaries: results obtained thus far show that a truly open domain vocabulary may not be realistic. The candidate will create new dictionaries of vocabularies which are constrained, yet rich enough to be of genuine utility for verbal communication in the targeted speech handicapped community.
3) New synthesis methods: concatenative synthesis, though conceptually simple, is not sufficiently flexible when the initial recognition step contains errors. The candidate will devise new synthesis methods which better model the spectral qualities of the speaker's voice, perhaps using Bayesian networks. Innovative techniques of recovering an acceptable prosody for the synthesized speech will also need to be developed.
4) Real time execution: as the amount of calculation necessary to carry out the recognition and synthesis steps is significant, the candidate will need to pay particular attention to optimization of code and real time execution in the algorithms he or she develops.
The thesis will be carried out in partnership with the Laboratoire de Phonétique et de Phonologie of the Université de Paris III, specializing in speech production and pathologies, for which additional funding has been obtained from the Agence Nationale de la Recherche (Emergence-TEC 2009 call, REVOIX project).
http://gdr-isis.org/rilk/gdr/Kiosque/poste.php?jobid=3369
Back to Top

6-25 . (2009-07-06) Postdoc in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES (Grenoble, France)


POSTDOC POSITION in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES (18 months ; starting January 2010 or later) IN GRENOBLE (France)
=============================================================================

PI (ANR BLANC 2009-2012) is a cooperative project sponsored by the French National Research Agency, between the University of Grenoble (France), the University of Avignon (France), and the International Research Center MICA in Hanoï (Vietnam).

PI addresses spoken language processing (notably speech recognition) for under-resourced languages (or π-languages). From a scientific point of view, the interest and originality of this project consist in proposing viable innovative methods that go far beyond the simple retraining or adaptation of acoustic and linguistic models. From an operational point of view, this project aims at providing a free open-source ASR development kit for π-languages. We plan to distribute and evaluate such a development kit by deploying ASR systems for new under-resourced languages with very poor resources from Asia (Khmer, Lao) and Africa (Bantu languages).


The POSTDOC position focuses on the development of ASR for two low-resourced languages from Asia and Africa. This includes: supervising the resource collection (in cooperation with the language partners), proposing innovative methods to quickly develop ASR systems for these languages, evaluation, etc.

The salary of the POSTDOC position is roughly 2300€ net per month. Applicants should hold a PhD related to spoken language processing. The applicants should be fluent in English. Competence in French is optional, though applicants will be encouraged to acquire this skill during the postdoc.

For further information, please contact Laurent Besacier (Laurent.Besacier at imag.fr).
Back to Top

6-26 . (2009-07-08) Position at Deutsche Telekom R&D

Deutsche Telekom, one of the world's leading telecommunications and information
technology service providers, is expanding its corporate research and development
activities at Deutsche Telekom Inc., R&D Lab USA, Los Altos, California. Through close
collaboration with top-notch institutions, the laboratories offer an unprecedented
combination of academic and industrial research, with opportunities to have a direct
impact on the company's products and services.
There is a current opening for a highly qualified Senior Research Scientist in the
research field New Media, for the area of Multimedia Communications and Systems.
We are looking for a self-driven and motivated individual who is passionate about
conducting leading-edge research. Applicants should have recently completed a
doctoral degree in computer science, electrical engineering, or other related disciplines
and have expertise in different facets of multimedia communications such as media
coding, streaming, and compression, with hands on system building experience and
know-how of standards. Experience in industrial R&D will be valued.
Application material should include, in a single pdf file, the following in the stated order: (a)
cover letter, (b) one-page statement of research objectives, (c) curriculum vitae, (d) list
of publications, and (e) contact information of at least three individuals who may serve
as references. Short-listed candidates will be invited to give a talk and have interviews
with members of the recruiting committee.
Please submit your application by 22 July 2009.
Deutsche Telekom Inc. is an equal opportunity employer.
Applications should be submitted via email to:
Dr. Jatinder Pal Singh
Deutsche Telekom Inc., R&D Lab USA
Email: laboratories.researchscientist@telekom.de
Back to Top

7 . Journals

 

Back to Top

7-1 . Mathematics, Computing, Language, and the Life: Frontiers in Mathematical Linguistics and Language Theory (tentative)

A new book series is going to be announced in a few weeks by a major publisher under the (tentative) title of Mathematics, Computing, Language, and the Life: Frontiers in Mathematical Linguistics and Language Theory
SERIES DESCRIPTION: Language theory, which originated in Chomsky's seminal work of the 1950s, in parallel with Turing-inspired automata theory, was first applied to natural language syntax within the context of the first, unsuccessful attempts to achieve reliable machine translation prototypes. After this, the theory proved to be very valuable in the study of programming languages and the theory of computing. In the last 15-20 years, language and automata theory has experienced rapid theoretical development as a consequence of the emergence of new interdisciplinary domains and also as the result of demands for application to a number of disciplines, most notably: natural language processing, computational biology, natural computing, programming, and artificial intelligence. The series will collect recent research on either foundational or applied issues, and is addressed to graduate students as well as to post-docs and academics.
TOPIC CATEGORIES:
A. Theory: language and automata theory, combinatorics on words, descriptional and computational complexity, semigroups, graphs and graph transformation, trees, computability
B. Natural language processing: mathematics of natural language processing, finite-state technology, languages and logics, parsing, transducers, text algorithms, web text retrieval
C. Artificial intelligence, cognitive science, and programming: patterns, pattern matching and pattern recognition, models of concurrent systems, Petri nets, models of pictures, fuzzy languages, grammatical inference and algorithmic learning, language-based cryptography, data and image compression, automata for system analysis and program verification
D. Bio-inspired computing and natural computing: cellular automata, symbolic neural networks, evolutionary algorithms, genetic algorithms, DNA computing, molecular computing, biomolecular nanotechnology, circuit theory, quantum computing, chemical and optical computing, models of artificial life
E. Bioinformatics: mathematical biology, string and combinatorial issues in computational biology and bioinformatics, mathematical evolutionary genomics, language processing of biological sequences, digital libraries
The connections of this broad interdisciplinary field with other areas include: computational linguistics, knowledge engineering, theoretical computer science, software science, molecular biology, etc. The first volumes will be miscellaneous and will globally define the scope of the future series.
INVITATION TO CONTRIBUTE: Contributions are requested for the first five volumes. In principle, there will be no limit on length. All contributions will be subject to strict peer review. Collections of papers are also welcome. Potential contributors should express their interest in being considered for the volumes by April 25, 2009 to carlos.martinvide@gmail.com. They should specify: the tentative title of the contribution, the authors and affiliations, a 5-10 line abstract, and the most appropriate topic category (A to E above). A selection will be made immediately after, with invited authors submitting their contribution for peer review by July 25, 2009. The volumes are expected to appear in the first months of 2010.

Back to Top

7-2 . Announcement of a new journal: Dialogue and Discourse

We are happy to announce the launch of a new international journal, *Dialogue and Discourse* (http://www.dialogue-and-discourse.org/).

*Dialogue and Discourse* reflects the surge of interest in the analysis of language `beyond the single sentence', in discourse (i.e., text, monologue) and dialogue, from a formal, computational, or experimental perspective, as reflected in the wide range of work presented at the SEMDIAL and SIGDIAL conferences (http://www.illc.uva.nl/semdial/ ; http://www.sigdial.org/ ) and various other forums. *Dialogue and Discourse* will be the first journal devoted to a wide dissemination of such work.

Our aim is to publish:
* the best research in the area of dialogue and discourse (as specified in our Aims and Scope, http://www.dialogue-and-discourse.org/aims.html )
* in a timely fashion (we are committed to achieving a mean time between submission and decision of 3 months)
* open to interested readers everywhere (open access, online).

We are part of the ejournal initiative of the Linguistic Society of America ( http://elanguage.net/home.php ). Articles will be published online as soon as they have been accepted. Each year, a (hardcopy) volume, collecting all articles of the year, will be published by CSLI Publications, Stanford.

The journal can be found in the following two sites, each of which provides immediate access to a submission portal and to available articles:
* http://elanguage.net/journals/index.php/dad/index
* http://www.dialogue-and-discourse.org

As with any journal, the two most important resources are its contributors and its readers. The journal is open for submissions and we urge you to consider submitting your work on any topic relevant to Dialogue and Discourse. Our first articles should start appearing within the next two months.

David Schlangen, for The Managing Editors
http://www.dialogue-and-discourse.org/boards.html
Back to Top

7-3 . CfP Special issue of Speech Comm: Non-native speech perception in adverse conditions: imperfect knowledge, imperfect signal

CALL FOR PAPERS: SPECIAL ISSUE OF SPEECH COMMUNICATION

NON-NATIVE SPEECH PERCEPTION IN ADVERSE CONDITIONS: IMPERFECT KNOWLEDGE, IMPERFECT SIGNAL

Much work in phonetics and speech perception has focused on doubly-optimal conditions, in which the signal reaching listeners is unaffected by distorting influences and in which listeners possess native competence in the sound system. However, in practice, these idealised conditions are rarely met. The processes of speech production and perception thus have to account for imperfections in the state of knowledge of the interlocutor as well as imperfections in the signal received. In noisy settings, these factors combine to create particularly adverse conditions for non-native listeners.

The purpose of the Special Issue is to assemble the latest research on perception in adverse conditions with special reference to non-native communication. The special issue will bring together, interpret and extend the results emerging from current research carried out by engineers, psychologists and phoneticians, such as the general frailty of some sounds for both native and non-native listeners and the strong non-native disadvantage experienced for categories which are apparently equivalent in the listeners’ native and target languages.

Papers describing novel research on non-native speech perception in adverse conditions are welcomed, from any perspective including the following. We especially welcome interdisciplinary contributions.

• models and theories of L2 processing in noise
• informational and energetic masking
• role of attention and processing load
• effect of noise type and reverberation
• inter-language phonetic distance
• audiovisual interactions in L2
• perception-production links
• the role of fine phonetic detail

GUEST EDITORS

Maria Luisa Garcia Lecumberri (Department of English, University of the Basque Country, Vitoria, Spain).
garcia.lecumberri@ehu.es

Martin Cooke (Ikerbasque and Department of Electrical & Electronic Engineering, University of the Basque Country, Bilbao, Spain).
m.cooke@ikerbasque.org

Anne Cutler (Max-Planck Institute for Psycholinguistics, Nijmegen, The Netherlands and MARCS Auditory Laboratories, Sydney, Australia).
anne.cutler@mpi.nl


DEADLINE

Full papers should be submitted by 31st July 2009

SUBMISSION PROCEDURE

Authors should consult the “guide for authors”, available online at http://www.elsevier.com/locate/specom, for information about the preparation of their manuscripts. Papers should be submitted via http://ees.elsevier.com/specom, choosing “Special Issue: non-native speech perception” as the article type. If you are a first time user of the system, please register yourself as an author. Prospective authors are welcome to contact the guest editors for more details of the Special Issue. 

Back to Top

7-4 . IEEE Special Issue on Speech Processing for Natural Interaction with Intelligent Environments

Call for Papers
IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Speech Processing for Natural Interaction with Intelligent Environments

With the advances in microelectronics, communication technologies and smart materials, our environments are transformed to be increasingly intelligent by the presence of robots, bio-implants, mobile devices, advanced in-car systems, smart house appliances and other professional systems. As these environments are integral parts of our daily work and life, there is a great interest in a natural interaction with them. Also, such interaction may further enhance the perception of intelligence. "Interaction between man and machine should be based on the very same concepts as that between humans, i.e. it should be intuitive, multi-modal and based on emotion," as envisioned by Reeves and Nass (1996) in their famous book "The Media Equation". Speech is the most natural means of interaction for human beings and it offers the unique advantage that it does not require carrying a device for using it since we have our "device" with us all the time.

Speech processing techniques are developed for intelligent environments to support either explicit interaction through message communications, or implicit interaction by providing valuable information about the physical ("who speaks when and where") as well as the emotional and social context of an interaction. Challenges presented by intelligent environments include the use of distant microphone(s), resource constraints and large variations in acoustic condition, speaker, content and context. The two central pieces of techniques to cope with them are high-performing "low-level" signal processing algorithms and sophisticated "high-level" pattern recognition methods.

We are soliciting original, previously unpublished manuscripts directly targeting/related to natural interaction with intelligent environments. The scope of this special issue includes, but is not limited to:

* Multi-microphone front-end processing for distant-talking interaction
* Speech recognition in adverse acoustic environments and joint optimization with array processing
* Speech recognition for low-resource and/or distributed computing infrastructure
* Speaker recognition and affective computing for interaction with intelligent environments
* Context-awareness of speech systems with regard to their applied environments
* Cross-modal analysis of speech, gesture and facial expressions for robots and smart spaces
* Applications of speech processing in intelligent systems, such as robots, bio-implants and advanced driver assistance systems.

Submission information is available at http://www.ece.byu.edu/jstsp. Prospective authors are required to follow the Author's Guide for manuscript preparation of the IEEE Transactions on Signal Processing at http://ewh.ieee.org/soc/sps/tsp. Manuscripts will be peer reviewed according to the standard IEEE process.

Manuscript submission due: Jul. 3, 2009
First review completed: Oct. 2, 2009
Revised manuscript due: Nov. 13, 2009
Second review completed: Jan. 29, 2010
Final manuscript due: Mar. 5, 2010

Lead guest editor:
Zheng-Hua Tan, Aalborg University, Denmark, zt@es.aau.dk

Guest editors:
Reinhold Haeb-Umbach, University of Paderborn, Germany, haeb@nt.uni-paderborn.de
Sadaoki Furui, Tokyo Institute of Technology, Japan, furui@cs.titech.ac.jp
James R. Glass, Massachusetts Institute of Technology, USA, glass@mit.edu
Maurizio Omologo, FBK-IRST, Italy, omologo@fbk.eu
Back to Top

7-5 . Special issue "Speech as a Human Biometric: I know who you are from your voice" Int. Jnl Biometrics

International Journal of Biometrics  (IJBM)
 
Call For papers
 
Special Edition on: "Speech as a Human Biometric: I Know Who You Are From Your Voice!"
 
Guest Editors: 
Dr. Waleed H. Abdulla, The University of Auckland, New Zealand
Professor Sadaoki Furui, Tokyo Institute of Technology, Japan
Professor Kuldip K. Paliwal, Griffith University, Australia
 
 
The 2001 MIT Technology Review indicated that biometrics is one of the emerging technologies that will change the world. Human biometrics is the automated recognition of a person using inherent, distinctive physiological and/or involuntary behavioural features.
 
Human voice biometrics has gained significant attention in recent years. The ubiquity of cheap microphones, human identity information carried by voice, ease of deployment, natural use, telephony applications diffusion, and non-obtrusiveness have been significant motivations for developing biometrics based on speech signals. The robustness of speech biometrics is sufficiently good. However, there are significant challenges with respect to conditions that cannot be controlled easily. These issues include changes in acoustical environmental conditions, respiratory and vocal pathology, age, channel, etc. The goal of speech biometric research is to solve and/or mitigate these problems.
 
This special issue will bring together leading researchers and investigators in speech research for security applications to present their latest successes in this field. The presented work could be new techniques, review papers, challenges, tutorials or other relevant topics.
 
   Subject Coverage
 
Suggested topics include, but are not limited to:
 
Speech biometrics
Speaker recognition
Speech feature extraction for speech biometrics
Machine learning techniques for speech biometrics
Speech enhancement for speech biometrics
Speech recognition for speech biometrics
Speech changeability over age, health condition, emotional status, fatigue, and related factors
Accent, gender, age and ethnicity information extraction from speech signals
Speech watermarking
Speech database security management
Cancellable speech biometrics
Voice activity detection
Conversational speech biometrics
   Notes for Prospective Authors
 
Submitted papers should not have been previously published nor be currently under consideration for publication elsewhere
 
All papers are refereed through a peer review process. A guide for authors, sample copies and other relevant information for submitting papers are available on the Author Guidelines page
 
   Important Dates
 
Manuscript due: 15 June, 2009
 
Acceptance/rejection notification: 15 September, 2009
 
Final manuscript due: 15 October, 2009
 
For more information please go to Calls for Papers page (http://www.inderscience.com/callPapers.php) OR The IJBM home page (http://www.inderscience.com/ijbm).
 
 
Back to Top

7-6 . Special issue on Voice Transformation, IEEE Trans. ASLP

CALL FOR PAPERS
IEEE Signal Processing Society
IEEE Transactions on Audio, Speech and Language Processing
Special Issue on Voice Transformation
With the increasing demand for Voice Transformation in areas such as
speech synthesis for creating target or virtual voices, modeling various
effects (e.g., Lombard effect), synthesizing emotions, making more natural
dialog systems which use speech synthesis, as well as in areas like
entertainment, film and music industry, toys, chat rooms and games, dialog
systems, security and speaker individuality for interpreting telephony,
high-end hearing aids, vocal pathology and voice restoration, there is a
growing need for high-quality Voice Transformation algorithms and systems
processing synthetic or natural speech signals.
Voice Transformation aims at the control of non-linguistic information of
speech signals such as voice quality and voice individuality. A great deal
of interest and research in the area has been devoted to the design and
development of mapping functions and modifications for vocal tract
configuration and basic prosodic features.
However, high quality Voice Transformation systems that create effective
mapping functions for vocal tract, excitation signal, and speaking style
and whose modifications take into account the interaction of source and
filter during voice production, are still lacking.
We invite researchers to submit original papers describing new approaches
in all areas related to Voice Transformation including, but not limited to,
the following topics:
* Preprocessing for Voice Transformation
(alignment, speaker selection, etc.)
* Speech models for Voice Transformation
(vocal tract, excitation, speaking style)
* Mapping functions
* Evaluation of Transformed Voices
* Detection of Voice Transformation
* Cross-lingual Voice Transformation
* Real-time issues and embedded Voice Transformation Systems
* Applications
The call for paper is also available at:
http://www.ewh.ieee.org/soc/sps/tap/sp_issue/VoiceTransformationCFP.pdf
Prospective authors are required to follow the Information for Authors for
manuscript preparation of the IEEE Transactions on Audio, Speech, and
Language Processing Signal Processing at
http://www.signalprocessingsociety.org/periodicals/journals/taslp-author-information/
Manuscripts will be peer reviewed according to the standard IEEE process.
Schedule:
Submission deadline: May 10, 2009
Notification of acceptance: September 30, 2009
Final manuscript due: October 30, 2009
Publication date: January 2010
Lead Guest Editor:
Yannis Stylianou, University of Crete, Crete, Greece
yannis@csd.uoc.gr
Guest Editors:
Tomoki Toda, Nara Inst. of Science and Technology, Nara, Japan
tomoki@is.naist.jp
Chung-Hsien Wu, National Cheng Kung University, Tainan, Taiwan
chwu@csie.ncku.edu.tw
Alexander Kain, Oregon Health & Science University, Portland Oregon, USA
kaina@ohsu.edu
Olivier Rosec, Orange-France Telecom R&D, Lannion, France
olivier.rosec@orange-ftgroup.com

Back to Top

7-7 . CfP Special Issue on Statistical Learning Methods for Speech and Language Processing

Call for Papers
IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Statistical Learning Methods for Speech and
Language Processing
In the last few years, significant progress has been made in both
research and commercial applications of speech and language
processing. Despite the superior empirical results, however, there
remain important theoretical issues to be addressed. Theoretical
advancement is expected to drive greater system performance
improvement, which in turn generates the new need of in-depth
studies of emerging novel learning and modeling methodologies. The
main goal of this special issue is to fill in the above need, with
the main focus on the fundamental issues of new emerging approaches
and empirical applications in speech and language processing.
Another focus of this special issue is on the unification of
learning approaches to speech and language processing problems. Many
problems in speech processing and in language processing share a
wide range of similarities (despite conspicuous differences), and
techniques in speech and language processing fields can be
successfully cross-fertilized. It is of great interest to study
unifying modeling and learning approaches across these two fields.
The goal of this special issue is to bring together a diverse but
complementary set of contributions on emerging learning methods for
speech processing, language processing, as well as unifying
approaches to problems across the speech and language processing
fields.
We invite original and unpublished research contributions in all
areas relevant to statistical learning, speech processing and
natural language processing. The topics of interest include, but are
not limited to:
• Discriminative learning methods and applications to speech and language processing
• Unsupervised/semi-supervised learning algorithms for Speech and language processing
• Model adaptation to new/diverse conditions
• Multi-engine approaches for speech and language processing
• Unifying approaches to speech processing and/or language processing
• New modeling technologies for sequential pattern recognition
Prospective authors should visit http://www.signalprocessingsociety.org/publications/periodicals/jstsp/
for information on paper submission. Manuscripts should be submitted
using the Manuscript Central system at http://mc.manuscriptcentral.com/jstsp-ieee.
Manuscripts will be peer reviewed according to the standard IEEE process.
Manuscript submission due: Aug. 7, 2009
First review completed: Oct. 30, 2009
Revised manuscript due: Dec. 11, 2009
Second review completed: Feb. 19, 2010
Final manuscript due: Mar. 26, 2010
Lead guest editor:
Xiaodong He, Microsoft Research, Redmond (WA), USA, xiaohe@microsoft.com
Guest editors:
Li Deng, Microsoft Research, Redmond (WA), USA, deng@microsoft.com
Roland Kuhn, National Research Council of Canada, Gatineau (QC), Canada, roland.kuhn@cnrc-nrc.gc.ca
Helen Meng, The Chinese University of Hong Kong, Hong Kong, hmmeng@se.cuhk.edu.hk
Samy Bengio, Google Inc., Mountain View (CA), USA, bengio@google.com 
Back to Top

7-8 . CfP SPECIAL ISSUE OF SPEECH COMMUNICATION: Perceptual and Statistical Audition

Perceptual and Statistical Audition
 
To give authors a bit more time, we have extended the deadline for the call for papers for the special issue of Speech Communication to 27 July 2009.
 
See the call for papers below for more details.
 
Aims and Scope
Current trends in audio analysis are strongly founded on statistical principles, or on approaches that are influenced by empirically derived or perceptually motivated rules of auditory perception. These approaches are often perceived as orthogonal, but new ideas that draw upon both perceptual and statistical principles can often result in superior performance. The relationship between these two approaches, however, has not been thoroughly explored and is still a developing field of research.
In this special issue we invite researchers to submit papers on original and previously unpublished work on both approaches, and especially on hybrid techniques that combine perceptual and statistical principles, as applied to speech, music and audio analysis.  Recent advances in neurosciences have emphasized the important role of spectro-temporal modulations in human perception. We encourage submission of original and previously unpublished work on techniques that exploit the information in spectro-temporal modulations, particularly within a statistical framework.
Papers describing relevant research and new concepts are solicited on, but not limited to, the following topics:
 
 - Analysis of audio including speech and music
 - Audio classification
 - Speech recognition
 - Signal separation
 - Multi-channel analysis
 - Computational Auditory Scene Analysis  (CASA)
 - Spectro-temporal modulation methods
 - Perceptual aspects of statistical algorithms, such as Independent Component Analysis and Non-negative Matrix Factorization.
 - Hybrid methods that use CASA-like cues in a statistical framework
 
Guest Editors
Martin Heckmann, Honda Research Institute Europe, 63073 Offenbach a. M., Germany, martin.heckmann@honda-ri.de
Bhiksha Raj, Carnegie Mellon University, Pittsburgh, PA 15217, bhiksha@cs.cmu.edu
Paris Smaragdis, Adobe Advanced Technology Labs, Newton, MA 02446, paris@adobe.com
 
NEW DEADLINE
Papers due 27th July, 2009
 
Submission Guidelines
Authors should consult the "Guide for Authors", available online at http://www.elsevier.com/locate/specom, for information about the preparation of their manuscripts. Authors, please submit your paper via http://ees.elsevier.com/specom, choosing "Perceptual and Statistical Audition" as the Article Type. If you are a first-time user of the system, please register yourself as an author. 
 
Back to Top

8 . Future Speech Science and Technology Events

8-1 . (2009-07-17) Seminaires du GIPSA Grenoble France

Please note the change of date:
Friday 17 July 2009, 1:30 pm – External seminar
========================================
Olivier PASCALIS
Laboratoire de Psychologie et NeuroCognition, Grenoble
 
How and when do infants understand Ethnicity?
 
The ability to discriminate phonemic contrasts that are absent in the infant's native language declines towards the end of the first year of life (Werker & Tees, 1984). Infants' ability to recognize both own-race and other-race faces is found at 6 months of age but is limited to own-race faces from 9 months of age (Kelly et al., 2007). Infants are also sensitive to events which are bimodally specified, that is, to the integration of information from two sense modalities into a single percept. Using such information, young infants are able to correctly match sound and vision to identify the appropriate moving object (Spelke, 1979), the gender of the speaker (Poulin-Dubois et al., 1995; Patterson and Werker, 2002), as well as the age of the speaker (Bahrick et al., 1998), and also to discriminate between emotions (Walker-Andrews and Lennon, 1991). They are also able to make assumptions about categories by matching intermodal information from pictures and sounds with which they have little or no experience early in life. This ability will then be lost by the end of the first year of life. For example, Lewkowicz and Ghazanfar (2006) have shown that whereas 4- and 6-month-olds show inter-sensory matching for monkey faces and monkey calls, 8- and 10-month-old infants do not. Weikum et al. (2007) found that 4- and 6-month-olds visually discriminate French from English. This ability disappears in monolingual 8-month-old infants, who only discriminate the visual attributes of their own language. Bilingual 8-month-olds maintain the ability to discriminate between their native language and a non-native language. We can conclude that infants have a representation of, and expectations about, humans (Bonatti et al., 2002) that change rapidly with experience during the first years of life. How precise is this human representation? Does it extend to language and culture? Ethnicity and language are examples of naturally occurring categories: we differ in face morphology, skin tone and speech. Will infants expect an own-race face to speak their native language and an other-race face to speak a non-native language?
 
I will present a series of studies investigating the cross-modal representation of both own-race and unfamiliar-race faces in 3-, 6- and 9-month-old infants. Our results suggest an intermodal representation of other-race faces from 6 months of age.
 
Meeting room of the Département Parole et Cognition (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire
 
Thursday 30 July 2009, 1:30 pm – External seminar
========================================
James BONAIUTO
University of Southern California
 
Modeling the Mirror System Hypothesis
 
The Mirror System Hypothesis suggests that the language-ready brain evolved through a series of stages starting with a monkey-like mirror system for grasping and progressing through an ape-like mirror system for imitation, and then a human mirror system that supports complex imitation and language. I will present the MNS2 model of the monkey mirror system for action recognition, and augmented competitive queuing, a model of opportunistic action scheduling. These will form the basis for an analysis of modeling goals for "simple imitation" in great apes, and "complex imitation" in humans, as part of an ongoing research effort investigating the evolution of the language-ready brain.
 
Meeting room of the Département Parole et Cognition (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire

 

Back to Top

8-2 . (2009-08-02) ACL-IJCNLP 2009 1st Call for Papers

ACL-IJCNLP 2009 1st Call for Papers

Joint Conference of
the 47th Annual Meeting of the Association for Computational Linguistics
and
the 4th International Joint Conference on Natural Language Processing of
the Asian Federation of Natural Language Processing

August 2 - 7, 2009
Singapore

http://www.acl-ijcnlp-2009.org

Full Paper Submission Deadline:  February 22, 2009 (Sunday)
Short Paper Submission Deadline:  April 26, 2009 (Sunday)

For the first time, the flagship conferences of the Association for
Computational Linguistics (ACL) and the Asian Federation of Natural
Language Processing (AFNLP) -- the ACL and IJCNLP -- are jointly
organized as a single event. The conference will cover a broad
spectrum of technical areas related to natural language and
computation. ACL-IJCNLP 2009 will include full papers, short papers,
oral presentations, poster presentations, demonstrations, tutorials,
and workshops. The conference invites the submission of papers on
original and unpublished research on all aspects of computational
linguistics.

Important Dates:

* Feb 22, 2009    Full paper submissions due;
* Apr 12, 2009    Full paper notification of acceptance;
* Apr 26, 2009    Short paper submissions due;
* May 17, 2009    Camera-ready full papers due;
* May 31, 2009    Short Paper notification of acceptance;
* Jun 7, 2009       Camera-ready short papers due;
* Aug 2-7, 2009   ACL-IJCNLP 2009

Topics of interest:

Topics include, but are not limited to:

* Phonology/morphology, tagging and chunking, and word segmentation
* Grammar induction and development
* Parsing algorithms and implementations
* Mathematical linguistics and grammatical formalisms
* Lexical and ontological semantics
* Formal semantics and logic
* Word sense disambiguation
* Semantic role labeling
* Textual entailment and paraphrasing
* Discourse, dialogue, and pragmatics
* Language generation
* Summarization
* Machine translation
* Information retrieval
* Information extraction
* Sentiment analysis and opinion mining
* Question answering
* Text mining and natural language processing applications
* NLP in vertical domains, such as biomedical, chemical and legal text
* NLP on noisy unstructured text, such as email, blogs, and SMS
* Spoken language processing
* Speech recognition and synthesis
* Spoken language understanding and generation
* Language modeling for spoken language
* Multimodal representations and processing
* Rich transcription and spoken information retrieval
* Speech translation
* Statistical and machine learning methods
* Language modeling for text processing
* Lexicon and ontology development
* Treebank and corpus development
* Evaluation methods and user studies
* Science of annotation

Submissions:

Full Papers: Submissions must describe substantial, original,
completed and unpublished work. Wherever appropriate, concrete
evaluation and analysis should be included. Submissions will be judged
on correctness, originality, technical strength, significance,
relevance to the conference, and interest to the attendees. Each
submission will be reviewed by at least three program committee
members.

Full papers may consist of up to eight (8) pages of content, plus one
extra page for references, and will be presented orally or as a poster
presentation as determined by the program committee.  The decisions as
to which papers will be presented orally and which as poster
presentations will be based on the nature rather than on the quality
of the work. There will be no distinction in the proceedings between
full papers presented orally and those presented as poster
presentations.

The deadline for full papers is February 22, 2009 (GMT+8). Submission
is electronic using paper submission software at:
https://www.softconf.com/acl-ijcnlp09/papers

Short papers: ACL-IJCNLP 2009 solicits short papers as well. Short
paper submissions must describe original and unpublished work. The
short paper deadline is just about three months before the conference
to accommodate the following types of papers:

* A small, focused contribution
* Work in progress
* A negative result
* An opinion piece
* An interesting application nugget

Short papers will be presented in one or more oral or poster sessions,
and will be given four pages in the proceedings. While short papers
will be distinguished from full papers in the proceedings, there will
be no distinction in the proceedings between short papers presented
orally and those presented as poster presentations. Each short paper
submission will be reviewed by at least two program committee members.
The deadline for short papers is April 26, 2009 (GMT + 8).  Submission
is electronic using paper submission software at:
https://www.softconf.com/acl-ijcnlp09/shortpapers

Format:

Full paper submissions should follow the two-column format of
ACL-IJCNLP 2009 proceedings without exceeding eight (8) pages of
content plus one extra page for references.  Short paper submissions
should also follow the two-column format of ACL-IJCNLP 2009
proceedings, and should not exceed four (4) pages, including
references. We strongly recommend the use of ACL LaTeX style files or
Microsoft Word style files tailored for this year's conference, which
are available on the conference website under Information for Authors.
Submissions must conform to the official ACL-IJCNLP 2009 style
guidelines, which are contained in the style files, and they must be
electronic in PDF.

As the reviewing will be blind, the paper must not include the
authors' names and affiliations. Furthermore, self-references that
reveal the author's identity, e.g., "We previously showed (Smith,
1991) ...", must be avoided. Instead, use citations such as "Smith
previously showed (Smith, 1991) ...". Papers that do not conform to
these requirements will be rejected without review.

Multiple-submission policy:

Papers that have been or will be submitted to other meetings or
publications must provide this information at submission time. If
ACL-IJCNLP 2009 accepts a paper, authors must notify the program
chairs by April 19, 2009 (full papers) or June 7, 2009 (short papers),
indicating which meeting they choose for presentation of their work.
ACL-IJCNLP 2009 cannot accept for publication or presentation work
that will be (or has been) published elsewhere.

Mentoring Service:

ACL is providing a mentoring (coaching) service for authors from
regions of the world where English is less emphasized as a language of
scientific exchange. Many authors from these regions, although able to
read the scientific literature in English, have little or no
experience in writing papers in English for conferences such as the
ACL meetings. The service will be arranged as follows. A set of
potential mentors will be identified by Mentoring Service Chairs Ng,
Hwee Tou (NUS, Singapore) and Reeder, Florence (Mitre, USA), who will
organize this service for ACL-IJCNLP 2009. If you would like to take
advantage of the service, please upload your paper in PDF format by
January 14, 2009 for long papers and March 18 2009 for short papers
using the paper submission software for the mentoring service, which will
be available at the conference website.

An appropriate mentor will be assigned to your paper and the mentor
will get back to you by February 8 for long papers or April 12 for
short papers, at least 2 weeks before the deadline for the submission
to the ACL-IJCNLP 2009 program committee.

Please note that this service is for the benefit of the authors as
described above. It is not a general mentoring service for authors to
improve the technical content of their papers.

If you have any questions about this service please feel free to send
a message to Ng, Hwee Tou (nght[at]comp.nus.edu.sg) or Reeder,
Florence (floreederacl[at]yahoo.com).

General Conference Chair:
Su, Keh-Yih (Behavior Design Corp., Taiwan; kysu[at]bdc.com.tw)

Program Committee Chairs:
Su, Jian (Institute for Infocomm Research, Singapore;
sujian[at]i2r.a-star.edu.sg)
Wiebe, Janyce (University of Pittsburgh, USA; janycewiebe[at]gmail.com)

Area Chairs:
Agirre, Eneko (University of Basque Country, Spain; e.agirre[at]ehu.es)
Ananiodou, Sophia (University of Manchester, UK;
      sophia.ananiadou[at]manchester.ac.uk)
Belz, Anja (University of Brighton, UK; a.s.belz[at]itri.brighton.ac.uk)
Carenini, Giuseppe (University of British Columbia, Canada;
carenini[at]cs.ubc.ca)
Chen, Hsin-Hsi (National Taiwan University, Taiwan; hh_chen[at]csie.ntu.edu.tw)
Chen, Keh-Jiann (Academia Sinica, Taiwan; kchen[at]iis.sinica.edu.tw)
Curran, James (University of Sydney, Australia; james[at]it.usyd.edu.au)
Gao, Jian Feng (MSR, USA; jfgao[at]microsoft.com)
Harabagiu, Sanda (University of Texas at Dallas, USA, sanda[at]hlt.utdallas.edu)
Koehn, Philipp (University of Edinburgh, UK; pkoehn[at]inf.ed.ac.uk)
Kondrak, Grzegorz (University of Alberta, Canada; kondrak[at]cs.ualberta.ca)
Meng, Helen Mei-Ling (Chinese University of Hong Kong, Hong Kong;
      hmmeng[at]se.cuhk.edu.hk )
Mihalcea, Rada (University of North Texas, USA; rada[at]cs.unt.edu)
Poesio, Massimo (University of Trento, Italy; poesio[at]disi.unitn.it)
Riloff, Ellen (University of Utah, USA; riloff[at]cs.utah.edu)
Sekine, Satoshi (New York University, USA; sekine[at]cs.nyu.edu)
Smith, Noah (CMU, USA; nasmith[at]cs.cmu.edu)
Strube, Michael (EML Research, Germany; strube[at]eml-research.de)
Suzuki, Jun (NTT, Japan; jun[at]cslab.kecl.ntt.co.jp)
Wang, Hai Feng (Toshiba, China; wanghaifeng[at]rdc.toshiba.com.cn) 

Back to Top

8-3 . (2009-08-10) 16th International ECSE Summer School in Novel Computing (Joensuu, FINLAND)

Call for participation:
16th International ECSE Summer School in
Novel Computing (Joensuu, FINLAND)
=========================================

University of Joensuu, Finland, announces the 16th International ECSE Summer School in Novel Computing:

           http://cs.joensuu.fi/ecse/

The summer school includes three independent courses, one in June and two in August:

June 8-10
    Jean-Luc LeBrun: Scientific Writing Skills
    http://www.scientific-writing.com/
    "Publish or perish, reviewers decide.
    Be cited or not, readers decide"

    Registration deadline: May 20, 2009

August 10-14 -- two parallel courses:
    Douglas A. Reynolds (MIT Lincoln Lab)
    "Speaker and Language Recognition"

    Paul De Bra (Eindhoven Univ Technology)
    "Platforms for Stories-Based Learning
    in Future Schools"

    Early registration deadline: June 15, 2009

In addition to high-quality lectures, the summer school offers an inspiring learning environment and a relaxed social program, including the Finnish sauna, in the middle of the North Karelia region. Joensuu is located next to the Russian border, about 400 km north-east of the capital of the country. It is a lively student city with over 6000 students at the University of Joensuu and 3500 at North Karelia Polytechnic. The European Forest Institute, the University and many other institutes and export enterprises, such as Abloy, LiteonMobile and John Deere, give Joensuu an international flavour.

The summer school is organized by the Department of Computer Science and Statistics, University of Joensuu, Finland (http://cs.joensuu.fi). The research areas of the department include speech and image processing, educational technology, color research, and psychology of programming.

More information:

WWW:    http://cs.joensuu.fi/ecse/
e-mail: ecse09@cs.joensuu.fi 

Back to Top

8-4 . (2009-09) Emotion challenge INTERSPEECH 2009

Call for Papers
INTERSPEECH 2009 Emotion Challenge
Feature, Classifier, and Open Performance Comparison for
Non-Prototypical Spontaneous Emotion Recognition
Organisers:
Bjoern Schuller (Technische Universitaet Muenchen, Germany)
Stefan Steidl (FAU Erlangen-Nuremberg, Germany)
Anton Batliner (FAU Erlangen-Nuremberg, Germany)
Sponsored by:
HUMAINE Association
Deutsche Telekom Laboratories
The Challenge
The young field of emotion recognition from voice has recently gained considerable interest in Human-Machine Communication, Human-Robot Communication, and Multimedia Retrieval. Numerous studies in the last decade have tried to improve on features and classifiers. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test conditions exist to compare performances under exactly the same conditions. Instead, the multiplicity of evaluation strategies employed, such as cross-validation or percentage splits without proper instance definition, prevents exact reproducibility. Further, to face more realistic use-cases, the community is in desperate need of more spontaneous and less prototypical data.
In these respects, the INTERSPEECH 2009 Emotion Challenge shall help bridge the gap between excellent research on human emotion recognition from speech and low compatibility of results: the FAU Aibo Emotion Corpus of spontaneous, emotionally coloured speech, and benchmark results of the two most popular approaches, will be provided by the organisers. Nine hours of speech (51 children) were recorded at two different schools, which allows for a distinct definition of test and training partitions incorporating speaker independence, as needed in most real-life settings. The corpus further provides a uniquely detailed transcription of the spoken content with word boundaries, non-linguistic vocalisations, emotion labels, units of analysis, etc.
Three sub-challenges are addressed at two different degrees of difficulty, using non-prototypical five or two emotion classes (including a garbage model):
- The Open Performance Sub-Challenge allows contributors to find their own features with their own classification algorithm. However, they will have to stick to the definition of test and training sets.
- In the Feature Sub-Challenge, participants are encouraged to upload their individual best features per unit of analysis, with a maximum of 100 per contribution. These features will then be tested by the organisers with equivalent settings in one classification task, and pooled together in a feature selection process.
- In the Classifier Sub-Challenge, participants may use a large set of standard acoustic features provided by the organisers for classifier tuning.
The labels of the test set will be unknown, but each participant can upload instance predictions to receive the confusion matrix and results up to 25 times. As the classes are unbalanced, the measure to optimise will be mean recall. The organisers will not take part in the sub-challenges but will provide baselines.
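To make the evaluation measure concrete: mean recall here is the unweighted average of the per-class recalls read off the confusion matrix, so every emotion class counts equally regardless of how many instances it has. The minimal Python sketch below is purely illustrative and not part of the official challenge tools; the function name and the confusion-matrix counts are made up.

# Unweighted average ("mean") recall from a confusion matrix.
# Rows are true classes, columns are predicted classes; counts are hypothetical.
def mean_recall(confusion):
    recalls = []
    for i, row in enumerate(confusion):
        n_true = sum(row)                    # instances whose true class is i
        if n_true > 0:
            recalls.append(row[i] / n_true)  # recall of class i
    return sum(recalls) / len(recalls)       # average over classes, not instances

cm = [[90, 10],   # 100 instances of a frequent class
      [20, 20]]   #  40 instances of a rare class
print(mean_recall(cm))  # 0.7 (per-class recalls 0.9 and 0.5), while plain accuracy would be about 0.79

Because each class contributes equally to the average, a classifier cannot inflate this score simply by favouring the majority class, which is the point of using it on unbalanced data.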
Overall, contributions using the provided or an equivalent database are sought in (but not limited to) the areas:
- Participation in any of the sub-challenges
- Speaker adaptation for emotion recognition
- Noise/coding/transmission robust emotion recognition
- Effects of prototyping on performance
- Confidences in emotion recognition
- Contextual knowledge exploitation
The results of the Challenge will be presented at a Special Session of Interspeech 2009 in Brighton, UK.
Prizes will be awarded to the sub-challenge winners and a best paper.
If you are interested and planning to participate in the Emotion Challenge, or if you want to be kept informed about the Challenge, please send the organisers an e-mail to indicate your interest and visit the homepage:
http://emotion-research.net/sigs/speech-sig/emotion-challenge
Back to Top

8-5 . (2009-09-06) Special session at Interspeech 2009:adaptivity in dialog systems

 
Call for papers (submission deadline Friday 17 April 2009)
 
Special Session : "Machine Learning for Adaptivity in Spoken Dialogue Systems"
at Interspeech 2009, Brighton U.K., http://www.interspeech2009.org/
Session chairs: Oliver Lemon, Edinburgh University,
and Olivier Pietquin, Supélec - IMS Research Group
In the past decade, research in the field of Spoken Dialogue Systems
(SDS) has experienced increasing growth, and new applications include
interactive mobile search, tutoring, and troubleshooting systems
(e.g. fixing a broken internet connection). The design and
optimization of robust SDS for such tasks requires the development of
dialogue strategies which can automatically adapt to different types
of users (novice/expert, youth/senior) and noise conditions
(room/street). New statistical learning techniques are emerging for
training and optimizing adaptive speech recognition, spoken language
understanding, dialogue management, natural language generation, and
speech synthesis in spoken dialogue systems. Among machine learning
techniques for spoken dialogue strategy optimization, reinforcement
learning using Markov Decision Processes (MDPs) and Partially
Observable MDPs (POMDPs) has become a particular focus.
We therefore solicit papers on new research in the areas of:
- Adaptive dialogue strategies and adaptive multimodal interfaces
- User simulation techniques for adaptive strategy learning and testing
- Rapid adaptation methods
- Reinforcement Learning of dialogue strategies
- Partially Observable MDPs in dialogue strategy optimization
- Statistical spoken language understanding in dialogue systems
- Machine learning and context-sensitive speech recognition
- Learning for adaptive Natural Language Generation in dialogue
- Corpora and annotation for machine learning approaches to SDS
- Machine learning for adaptive multimodal interaction
- Evaluation of adaptivity in statistical approaches to SDS and user
simulation.
Important Dates--
Full paper submission deadline: Friday 17 April 2009
Notification of paper acceptance: Wednesday 17 June 2009
Conference dates: 6-10 September 2009
Back to Top

8-6 . (2009-09-07) Information Retrieval and Information Extraction for Less Resourced Languages

CALL FOR PAPERS
Information Retrieval and Information Extraction for Less Resourced Languages (IE-IR-LRL)
SEPLN 2009 pre-conference workshop
University of the Basque Country
Donostia-San Sebastián. Monday 7th September 2009
Organised by the SALTMIL Special Interest Group of ISCA
SALTMIL: http://ixa2.si.ehu.es/saltmil/
SEPLN 2009: http://ixa2.si.ehu.es/sepln2009
Call For Papers: http://ixa2.si.ehu.es/saltmil/en/activities/lrec2008/sepln-2009-workshop-cfp.html
Paper submission: http://sepln.org/myreview-saltmil2009
Deadline for submission: 8 June 2009
Papers are invited for the above half-day workshop, in the format outlined below. Most submitted papers will be presented in poster form, though some authors may be invited to present in lecture format.
CONTEXT AND FOCUS
The phenomenal growth of the Internet has led to a situation where, by some estimates, more than one billion words of text are currently available. This is far more text than any given person can possibly process. Hence there is a need for automatic tools to access and process this mass of textual information. Emerging techniques of this kind include Information Retrieval (IR), Information Extraction (IE), and Question Answering (QA).
However, there is a growing concern among researchers about the situation of languages other than English. Although not all Internet text is in English, it is clear that non-English languages do not have the same degree of representation on the Internet. Simply counting the number of articles in Wikipedia, English is the only language with more than 20 percent of the available articles. There then follows a group of 17 languages with between one and ten percent of the articles. The remaining 245 languages each have less than one percent of the articles. Even these low-profile languages are relatively privileged, as the total number of languages in the world is estimated to be 6800.
Clearly there is a danger that the gap between high-profile and low-profile languages on the Internet will continue to increase, unless tools are developed for the low-profile languages to access textual information. Hence there is a pressing need to develop basic language technology software for less-resourced languages as well. In particular, the priority is to adapt the scope of recently-developed IE, IR and QA systems so that they can be used also for these languages. In doing so, several questions will naturally arise, such as:
* What problems emerge when faced with languages having different linguistic features from the major languages?
* Which techniques should be promoted in order to get the maximum yield from sparse training data?
* What standards will enable researchers to share tools and techniques across several different languages?
* Which tools are easily re-useable across several unrelated languages?
It is hoped that presentations will focus on real-world examples, rather than purely theoretical discussions of the questions. Researchers are encouraged to share examples of best practice -- and also examples where tools have not worked as well as expected. Also of interest will be
cases where the particular features of a less-resourced language raise a challenge to currently accepted linguistic models that were based on features of major languages.
TOPICS
Given the context of IR, IE and QA, topics for discussion may include, but are not limited to:
* Information retrieval;
* Text and web mining;
* Information extraction;
* Text summarization;
* Term recognition;
* Text categorization and clustering;
* Question answering;
* Re-use of existing IR, IE and QA data;
* Interoperability between tools and data.
* General speech and language resources for minority languages, with particular emphasis on resources for IR,IE and QA.
IMPORTANT DATES
* 8 June 2009: Deadline for submission
* 1 July 2009: Notification
* 15 July 2009: Final version
* 7 September 2009: Workshop
ORGANISERS
* Kepa Sarasola, University of the Basque Country
* Mikel Forcada, Universitat d'Alacant, Spain
* Iñaki Alegria. University of the Basque Country
* Xabier Arregi, University of the Basque Country
* Arantza Casillas. University of the Basque Country
* Briony Williams, Language Technologies Unit, Bangor University, Wales, UK
PROGRAMME COMMITTEE
* Iñaki Alegria. University of the Basque Country.
* Atelach Alemu Argaw: Stockholm University, Sweden
* Xabier Arregi, University of the Basque Country.
* Jordi Atserias, Barcelona Media (yahoo! research Barcelona)
* Shannon Bischoff, Universidad de Puerto Rico, Puerto Rico
* Arantza Casillas. University of the Basque Country.
* Mikel Forcada: Universitat d'Alacant, Spain
* Xavier Gomez Guinovart. University of Vigo.
* Lori Levin, Carnegie-Mellon University, USA
* Climent Nadeu, Universitat Politècnica de Catalunya
* Jon Patrick, University of Sydney, Australia
* Juan Antonio Pérez-Ortiz, Universitat d'Alacant, Spain
* Bojan Petek, University of Ljubljana, Slovenia
* Kepa Sarasola, University of the Basque Country
* Oliver Streiter, National University of Kaohsiung, Taiwan
* Vasudeva Varma, IIIT, Hyderabad, India
* Briony Williams: Bangor University, Wales, UK
SUBMISSION INFORMATION
We expect short papers of max 3500 words (about 4-6 pages) describing research addressing one of the above topics, to be submitted as PDF documents by uploading to the following URL:
http://sepln.org/myreview-saltmil2009
The final papers should not have more than 6 pages, adhering to the stylesheet that will be adopted for the SEPLN Proceedings (to be announced later on the Conference web site).
--
Mikel L. Forcada <mlf@dlsi.ua.es>
http://www.dlsi.ua.es/~mlf
Back to Top

8-7 . (2009-09-09) CfP IDP 09 Discourse-Prosody Interface

IDP 09 : CALL FOR PAPERS

 

Discourse – Prosody Interface

 

Paris, September 9-10-11, 2009

 

The third round of the “Discourse – Prosody Interface” Conference will be hosted by the Laboratoire de Linguistique Formelle (UMR 7110 / LLF), the Equipe CLILLAC-ARP (EA 3967) and the Linguistic Department (UFRL) of the University of Paris-Diderot (Paris 7), on September 9-10-11, 2009 in Paris. The first round was organized by the Laboratoire Parole et Langage (UMR 6057 /LPL) in September 2005, in Aix-en-Provence. The second took place in Geneva in September 2007 and was organized by the Department of Linguistics at the University of Geneva, in collaboration with the École de Langue et Civilisation Françaises at the University of Geneva, and the VALIBEL research centre at the Catholic University of Louvain.

The third round will be held at the Paris Center of the University of Chicago, 6, rue Thomas Mann, in the XIIIth arrondissement, near the Bibliothèque François Mitterrand (BNF).

 

The Conference is addressed to researchers in prosody, phonology, phonetics, pragmatics, discourse analysis and also psycholinguistics, who are particularly interested in the relations between prosody and discourse. The participants may develop their research programmes within different theoretical paradigms (formal approaches to phonology and semantics/pragmatics, conversation analysis, descriptive linguistics, etc.). For this third edition, special attention will be given to research work that proposes a formal analysis of the Discourse – Prosody interface.

 

So as to favour convergence among contributions, the IDP09 conference will focus on:

* Prosody, its parts, and discourse:
- How to analyze the interaction between the different prosodic subsystems (accentuation, intonation, rhythm; register changes or voice quality)?
- How to model the contribution of each subsystem to the global interpretation of discourse?
- How to describe and analyze prosodic facts, and at which level (phonetic vs. phonological)?

* Prosodic units and discourse units:
- What are the relevant units for discourse or conversation analysis? What are their prosodic properties?
- How is the embedding of utterances in discourse marked syntactically or prosodically? What are the consequences for the modelling of syntax and prosody?

* Prosody and context(s):
- What is the contribution of the context in the analysis of prosody in discourse?
- How can the relations between prosody and context(s) be modelled?

* Acquisition of the relations between prosody and discourse in L1 and L2:
- How are the relations between prosody and discourse acquired in L1 and in L2?
- Which methodological tools could best describe and transcribe these processes?

 

 

Guest speakers:

* Diane Blakemore (School of Languages, University of Salford, United Kingdom)
* Piet Mertens (Department of Linguistics, K.U. Leuven, Belgium)
* Hubert Truckenbrodt (ZAS, Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany)

The conference will be held in English or French. Studies may concern any language.

 

 

Submissions should be made by uploading an anonymous two-page abstract (plus an extra page for references and figures), in A4 format and Times 12 font, written in either English or French, as a PDF file at the following address: http://www.easychair.org/conferences/?conf=idp09 .

 

Authors' names and affiliations should be given as requested, but not included in the PDF file.

 

If you have any questions concerning the submission procedure or encounter any problems, please send an email to the following address: idp09@linguist.jussieu.fr

 

Authors may submit as many proposals as they wish.

 

The proposals will be evaluated anonymously by the scientific committee.

 

Schedule

Submission deadline: April 26th, 2009

Notification of acceptance: June 8th, 2009

Conference (IDP 09): September 9th-11th, 2009.

 

Further information is available on the conference website: http://idp09.linguist.univ-paris-diderot.fr

 

Back to Top

8-8 . (2009-09-09) Interface Discours et prosodie

All information on this conference, to be held in Paris, can be found at

http://idp09.linguist.univ-paris-diderot.fr/pages/indexpag.html

 

Back to Top

8-9 . (2009-09-11) SIGDIAL 2009 CONFERENCE

 SIGDIAL 2009 CONFERENCE
     10th Annual Meeting of the Special Interest Group
     on Discourse and Dialogue

     Queen Mary University of London, UK September 11-12, 2009
     (right after Interspeech 2009)

     Submission Deadline: April 24, 2009


     PRELIMINARY CALL FOR PAPERS

The SIGDIAL venue provides a regular forum for the presentation of
cutting edge research in discourse and dialogue to both academic and
industry researchers. Due to the success of the nine previous SIGDIAL
workshops, SIGDIAL is now a conference. The conference is sponsored by
the SIGDIAL organization, which serves as the Special Interest Group in
discourse and dialogue for both ACL and ISCA. SIGDIAL 2009 will be
co-located with Interspeech 2009 as a satellite event.

In addition to presentations and system demonstrations, the program
includes an invited talk by Professor Janet Bavelas of the University of
Victoria, entitled "What's unique about dialogue?".


TOPICS OF INTEREST

We welcome formal, corpus-based, implementation, experimental, or
analytical work on discourse and dialogue including, but not restricted
to, the following themes:

1. Discourse Processing and Dialogue Systems

Discourse semantic and pragmatic issues in NLP applications such as text
summarization, question answering, information retrieval including
topics like:

- Discourse structure, temporal structure, information structure ;
- Discourse markers, cues and particles and their use;
- (Co-)Reference and anaphora resolution, metonymy and bridging resolution;
- Subjectivity, opinions and semantic orientation;

Spoken, multi-modal, and text/web based dialogue systems including
topics such as:

- Dialogue management models;
- Speech and gesture, text and graphics integration;
- Strategies for preventing, detecting or handling miscommunication
(repair and correction types, clarification and under-specificity,
grounding and feedback strategies);
- Utilizing prosodic information for understanding and for disambiguation;

2. Corpora, Tools and Methodology

Corpus-based and experimental work on discourse and spoken, text-based
and multi-modal dialogue including its support, in particular:

- Annotation tools and coding schemes;
- Data resources for discourse and dialogue studies;
- Corpus-based techniques and analysis (including machine learning);
- Evaluation of systems and components, including methodology, metrics
and case studies;

3. Pragmatic and/or Semantic Modeling

The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a
single sentence) including the following issues:

- The semantics/pragmatics of dialogue acts (including those which are
less studied in the semantics/pragmatics framework);
- Models of discourse/dialogue structure and their relation to
referential and relational structure;
- Prosody in discourse and dialogue;
- Models of presupposition and accommodation; operational models of
  conversational implicature.


SUBMISSIONS

The program committee welcomes the submission of long papers for full
plenary presentation as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations, followed by posters and demonstrations.

- Long papers must be no longer than 8 pages, including title, examples,
references, etc. In addition to this, two additional pages are allowed
as an appendix which may include extended example discourses or
dialogues, algorithms, graphical representations, etc.
- Short papers and demo descriptions should be 4 pages or less
(including title, examples, references, etc.).

Please use the official ACL style files:
http://ufal.mff.cuni.cz/acl2007/styles/

Papers that have been or will be submitted to other meetings or
publications must provide this information (see submission format).
SIGDIAL 2009 cannot accept for publication or presentation work that
will be (or has been) published elsewhere. Any questions regarding
submissions can be sent to the General Co-Chairs.

Authors are encouraged to make illustrative materials available, on the
web or otherwise. Examples might include excerpts of recorded
conversations, recordings of human-computer dialogues, interfaces to
working systems, and so on.


BEST PAPER AWARDS

In order to recognize significant advancements in dialog and discourse
science and technology, SIGDIAL will (for the first time) recognize a
BEST PAPER AWARD and a BEST STUDENT PAPER AWARD. A selection committee
consisting of prominent researchers in the fields of interest will
select the recipients of the awards.


IMPORTANT DATES (SUBJECT TO CHANGE)

Submission: April 24, 2009
Workshop: September 11-12, 2009


WEBSITES

SIGDIAL 2009 conference website:
http://www.sigdial.org/workshops/workshop10/
SIGDIAL organization website: http://www.sigdial.org/
Interspeech 2009 website: http://www.interspeech2009.org/


ORGANIZING COMMITTEE

For any questions, please contact the appropriate members of the
organizing committee:

GENERAL CO-CHAIRS
Pat Healey (Queen Mary University of London): ph@dcs.qmul.ac.uk
Roberto Pieraccini (SpeechCycle): roberto@speechcycle.com

TECHNICAL PROGRAM CO-CHAIRS
Donna Byron (Northeastern University): dbyron@ccs.neu.edu
Steve Young (University of Cambridge): sjy@eng.cam.ac.uk

LOCAL CHAIR
Matt Purver (Queen Mary University of London): mpurver@dcs.qmul.ac.uk

SIGDIAL PRESIDENT
Tim Paek (Microsoft Research): timpaek@microsoft.com

SIGDIAL VICE PRESIDENT
Amanda Stent (AT&T Labs - Research): amanda.stent@gmail.com


-- 
Matthew Purver - http://www.dcs.qmul.ac.uk/~mpurver/

Senior Research Fellow
Interaction, Media and Communication
Department of Computer Science
Queen Mary University of London, London E1 4NS, UK 
 
Back to Top

8-10 . (2009-09-11) Int. Workshop on spoken language technology for development: from promise to practice.

International Workshop on Spoken Language Technology for Development
- from promise to practice
 
Venue - The Abbey Hotel, Tintern, UK
Dates - 11-12 September 2009
  
Following on from a successful special session at SLT 2008 in Goa, this workshop invites participants who have an interest in SLT4D and expertise or experience in any of the following areas:
- Development of speech technology for resource-scarce languages
- SLT deployments in the developing world
- HCI in a developing world context
- Successful ICT4D interventions
  
The aim of the workshop is to develop a statement of best practice in developing and deploying speech systems for developmental applications. It is also hoped that the participants will form the core of an open community which shares tools, insights and methodologies for future SLT4D projects. 
  
If you are interested in participating in the workshop, please submit a 2-4 page position paper explaining how your expertise and experience might be applied to SLT4D, formatted according to the Interspeech 2009 guidelines, to Roger Tucker at roger@outsideecho.com by 30th April 2009. 
  
Important Dates:
Papers due: 30th April 2009
Acceptance Notification: 10th June 2009
Early Registration deadline: 3rd July 2009
Workshop: 11-12 September 2009
  
Further details can be found on the workshop website at www.llsti.org/SLT4D-09

Back to Top

8-11 . (2009-09-11) ACORNS Workshop Brighton UK

Call for Participation
ACORNS Workshop
Computational Models of Language Evolution, Acquisition and Processing
the workshop is a satellite of Interspeech-2009.
September 11, 2009
Brighton, UK
Old Ship Hotel, the oldest hotel in Brighton, offering the allure of history
As a follow-up to the successful ESF workshop with the same title (held in November 2007), we again
would like to bring together a group of outstanding invited speakers and discussants to explore
directions for future research in the multidisciplinary field of computational modeling of language
evolution, processing, and acquisition.
We envisage bringing together a group of at most 50 researchers from different disciplines who
take an interest in investigating language acquisition and processing. The focus is on computational
models that enhance our understanding of behavioural phenomena and results of experiments.
A pervasive problem in the multi-disciplinary field of language processing and acquisition is that
different disciplines not only favour and exploit different experimental paradigms, but also quite
different publication styles and different journals. As a result, information flow across the borders of
the disciplines is no more than a trickle, where broad streams would be desirable.
We have designed a programme in which there is room for four or five longer presentations by invited
speakers. A team of discussants from a range of disciplines will give comment.
Prospective participants are invited to send an expression of interest to the organizers before June 20.
In selecting participants priority will be given to scientists who submit a position statement with their
expression of interest. The statement should address the topics of the workshop. It should be in the
Interspeech-2009 format with a maximum of four pages (shorter statements are also welcome).
Workshop participants will receive key papers by the speakers and discussants, as well as the full set
of position statements on August 15.
During the workshop, there will be ample time and opportunity for all participants to contribute to the
discussions.
The workshop should result in a sketch of future research in language evolution, acquisition and
processing during the next five to ten years. To that end the workshop will explore the formation of
consortia that can prepare project proposals for EU-funded programmes such as FET, etc. For this
reason we have also invited representatives of the major European funding agencies. Finally, the
workshop will explore the feasibility of forming a couple of small interdisciplinary teams to prepare
papers for journals in a number of disciplines.
Costs: The price for participants is 70 UK pounds; this includes all preparatory materials and a lunch.
All payments must be made in cash at the workshop venue.
Programme:
9:00 – 9:30 Registration
9:30 – 10:30 Lou Boves, Scientific Manager of the ACORNS project (www.acorns-project.org)
10:30 – 10:45 Coffee
10:45 – 12:15 Deb Roy, Massachusetts Institute of Technology, Cambridge, Mass
12:15 – 13:15 Lunch
13:15 – 14:45 Friedemann Pulvermuller, MRC Cognition and Brain Sciences Unit, Cambridge, UK
14:45 – 15:00 Tea
15:00 – 16:30 Rochelle Newman, University of Maryland, MD
16:30 – 17:30 Conclusions
Discussants:
Discussants will include:
Roger Moore, Sheffield University, UK
Hugo Van hamme, Catholic University Leuven, Belgium
Odette Scharenborg, Radboud University, Netherlands
Additional discussants are being arranged at the moment. Scientists who would like to participate as
a discussant are invited to contact the organizers at the e-mail address shown at the bottom of this
message as soon as possible.
Workshop Organisers:
Lou Boves, Elisabeth den Os, Louis ten Bosch (Radboud University)
All questions regarding the workshop and requests for registration with full name, affiliation and email
address must be sent to:
e.denos@let.ru.nl
Back to Top

8-12 . (2009-09-13)Young Researchers' Roundtable on Spoken Dialogue Systems 2009 London

Young Researchers' Roundtable on Spoken Dialogue Systems 2009

13th-14th September, at Queen Mary University of London


*Overview and goals*

The Young Researchers' Roundtable on Spoken Dialogue Systems (YRRSDS) is an annual workshop designed for post-graduate students, post-docs and junior researchers working in research related to spoken dialogue systems in both academia and industry. The roundtable provides an open forum where participants can discuss their research interests, current work and future plans. The workshop has three main goals:
- to offer an interdisciplinary forum for creative thinking about current issues in spoken dialogue systems research
- to provide young researchers with career advice from senior researchers and professionals from both academic and industrial backgrounds
- to develop a stronger international network of young researchers working in the field.

(Important note: There is no age restriction to participating in the workshop; the word 'young' is meant to indicate that it is targeted towards researchers who are at a relatively early stage in their career.)


*Topics and sessions*
Potential roundtable discussion topics include: best practices for conducting and evaluating user studies of spoken dialogue systems, the prosody of conversation, methods of analysis for dialogue systems, conversational agents and virtual characters, cultural adaptation of dialogue strategies, and user modelling.

YRRSDS’09 will feature:

- a senior researcher panel (both academia and industry)
- a demo and poster session
- a special session on frameworks and grand challenges for dialogue system evaluation
- a special session on EU projects related to spoken dialogue systems.

Previous workshops were held in Columbus (ACL 2008), Antwerp (INTERSPEECH 2007), Pittsburgh (INTERSPEECH 2006) and Lisbon (INTERSPEECH 2005).


*Workshop date*

YRRSDS'09 will take place on September 13th and 14th, 2009 (immediately after Interspeech and SIGDial 2009).


*Workshop location*

The 2009 YRRSDS will be held at Queen Mary University of London, one of the UK's leading research-focused higher education institutions. Queen Mary’s Mile End campus began life in 1887 as the People's Palace, a philanthropic endeavour to provide east Londoners with education and social activities, and is located in the heart of London's vibrant East End.


*Grants*

YRRSDS 2009 will be supported this year by ISCA, the International Speech Communication Association. ISCA will consider applications for a limited number of travel grants. Applications should be sent directly to grants@isca-speech.org; details of the application process and forms are available from http://www.isca-speech.org/grants.html. We are also negotiating with other supporters the possibility of offering a limited number of travel grants to students.


*Endorsements*

SIGDial, ISCA, Dialogs on Dialogs


*Sponsors*

Orange, Microsoft Research, AT&T


*Submission process*

Participants will be asked to submit a 2-page position paper based on a template provided by the organising committee. In their papers, authors will include a short biographical sketch, a brief statement of research interests, a description of their research work, and a short discussion of what they believe to be the most significant and interesting issues in spoken dialogue systems today and in the near future. Participants will also provide three suggestions for discussion topics.
Workshop attendance will be limited to 50 participants. Submissions will be accepted on a first-come-first-served basis. Submissions will be collated and made available to participants. We also plan to publish the position papers and presentations from the workshop on the web, subject to any sponsor or publisher constraints.

*Important Dates*

- Submissions open: May 15, 2009
- Submissions deadline: June 30, 2009
- Final notification: July 31, 2009

- Registration begins: TBD
- Registration deadline: TBD

- Interspeech: 6-10 September 2009
- SIGDial: 11-12 September, 2009
- YRR:  13-14 September, 2009

*More information on related websites*

- Young Researchers' Roundtable website: http://www.yrrsds.org/
- SIGDIAL 2009 conference website: http://www.sigdial.org/workshops/workshop10/
- Interspeech 2009 website: http://www.interspeech2009.org/

*Organising Committee*

- David Díaz Pardo de Vera, Polytechnic University of Madrid, Spain
- Milica Gašić, Cambridge University, UK
- François Mairesse, Cambridge University, UK
- Matthew Marge, Carnegie Mellon University, USA
- Joana Paulo Pardal, Technical University Lisbon, Portugal
- Ricardo Ribeiro, ISCTE, Lisbon, Portugal

*Local Organisers*

- Arash Eshghi, Queen Mary University of London, UK
- Christine Howes, Queen Mary University of London, UK
- Gregory Mills, Queen Mary University of London, UK

*Scientific Advisory Committee*

- Hua Ai, University of Pittsburgh, USA
- James Allen, University of Rochester, USA
- Alan Black, Carnegie Mellon University, USA
- Dan Bohus, Microsoft Research, USA
- Philippe Bretier, Orange Labs, France
- Robert Dale, Macquarie University, Australia
- Maxine Eskenazi, Carnegie Mellon University, USA
- Sadaoki Furui, Tokyo Institute of Technology, Japan
- Luis Hernández Gómez, Polytechnic University of Madrid, Spain
- Carlos Gómez Gallo, University of Rochester, USA
- Kristiina Jokinen, University of Helsinki, Finland
- Nuno Mamede, Spoken Language Systems Lab, INESC-ID, Portugal
- David Martins de Matos, Spoken Language Systems Lab, INESC-ID, Portugal
- João Paulo Neto, Voice Interaction, Portugal
- Tim Paek, Microsoft Research
- Antoine Raux, Honda Research, USA
- Robert J. Ross, Universitat Bremen, Germany
- Alexander Rudnicky, Carnegie Mellon University, USA
- Mary Swift, University of Rochester, USA
- Isabel Trancoso, Spoken Language Systems Lab, INESC-ID, Portugal
- Tim Weale, The Ohio State University, USA
- Jason Williams, AT&T, USA
- Sabrina Wilske, Lang Tech and Cognitive Sys at Saarland University, Germany
- Andi Winterboer, Universiteit van Amsterdam, Netherlands
- Craig Wootton, University of Ulster, Belfast, Northern Ireland
- Steve Young, University of Cambridge, United Kingdom

Back to Top

8-13 . (2009-09-14) 7th International Conference on Recent Advances in Natural Language Processing

RANLP-09 Second Call for Papers and Submission Information

 

"RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING"

 

International Conference RANLP-2009

 

September 14-16, 2009

Borovets, Bulgaria

http://www.lml.bas.bg/ranlp2009

 

Further to the successful and highly competitive 1st, 2nd, 3rd, 4th, 5th and 6th conferences 'Recent Advances in Natural Language Processing' (RANLP), we are pleased to announce the 7th RANLP conference, to be held in September 2009.

The conference will take the form of addresses from invited keynote speakers plus peer-reviewed individual papers. There will also be an exhibition area for poster and demo sessions.

We invite papers reporting on recent advances in all aspects of Natural Language Processing (NLP). The conference topics are announced at the RANLP-09 website. All accepted papers will be published in the full conference proceedings and included in the ACL Anthology. In addition, volumes of RANLP selected papers are traditionally published by John Benjamins Publishers; the volume of selected RANLP-07 papers is currently in press.

 

KEYNOTE SPEAKERS:
       Kevin Bretonnel Cohen (University of Colorado School of Medicine),
       Mirella Lapata (University of Edinburgh),
       Shalom Lappin (King’s College, London),
       Massimo Poesio (University of Trento and University of Essex).

CHAIR OF THE PROGRAMME COMMITTEE:
Ruslan Mitkov (University of Wolverhampton)

CHAIR OF THE ORGANISING COMMITTEE:
Galia Angelova (Bulgarian Academy of Sciences)

The PROGRAMME COMMITTEE members are distinguished experts from all over the world. The list of PC members will be announced at the conference website. After the review, the list of all reviewers will be announced at the website as well.

 

SUBMISSION
People interested in participating should submit a paper, poster or demo following the instructions provided at the conference website. The review will be blind, so the article text should not reveal the authors' names. Author identification should be done on an additional page of the conference management system.

TUTORIALS 12-13 September 2009:
Four half-day tutorials will be organised on 12-13 September 2009. The list of tutorial lecturers includes:
       Kevin Bretonnel Cohen (University of Colorado School of Medicine),
       Constantin Orasan (University of Wolverhampton)

WORKSHOPS 17-18 September 2009:
Post-conference workshops will be organised on 17-18 September 2009. All workshops will publish hard-copy proceedings, which will be distributed at the event. Workshop papers might be listed in the ACL Anthology as well (depending on the workshop organisers). The list of RANLP-09 workshops includes:
       Semantic Roles on Human Language Technology Applications, organised by Paloma Moreda, Rafael Muñoz and Manuel Palomar,
       Partial Parsing 2: Between Chunking and Deep Parsing, organised by Adam Przepiorkowski, Jakub Piskorski and Sandra Kuebler,
       1st Workshop on Definition Extraction, organised by Gerardo Eugenio Sierra Martínez and Caroline Barriere,
       Evaluation of Resources and Tools for Central and Eastern European languages, organised by Cristina Vertan, Stelios Piperidis and Elena Paskaleva,
       Adaptation of Language Resources and Technology to New Domains, organised by Nuria Bel, Erhard Hinrichs, Kiril Simov and Petya Osenova,
       Natural Language Processing methods and corpora in translation, lexicography, and language learning, organised by Viktor Pekar, Iustina Narcisa Ilisei, and Silvia Bernardini,
       Events in Emerging Text Types (eETTs), organised by Constantin Orasan, Laura Hasler, and Corina Forascu,
       Biomedical Information Extraction, organised by Guergana Savova, Vangelis Karkaletsis, and Galia Angelova.

 

 

IMPORTANT DATES:

Conference paper submission notification: 6 April 2009
Conference paper submission deadline: 13 April 2009
Conference paper acceptance notification: 1 June 2009
Final versions of conference papers submission: 13 July 2009

Workshop paper submission deadline (suggested): 5 June 2009
Workshop paper acceptance notification (suggested): 20 July 2009
Final versions of workshop papers submission (suggested): 24 August 2009

RANLP-09 tutorials: 12-13 September 2009 (Saturday-Sunday)
RANLP-09 conference: 14-16 September 2009 (Monday-Wednesday)
RANLP-09 workshops: 17-18 September 2009 (Thursday-Friday)

For further information about the conference, please visit the conference site http://www.lml.bas.bg/ranlp2009.

THE TEAM BEHIND RANLP-09
Galia Angelova, Bulgarian Academy of Sciences, Bulgaria, Chair of the Org. Committee
Kalina Bontcheva, University of Sheffield, UK
Ruslan Mitkov, University of Wolverhampton, UK, Chair of the Programme Committee
Nicolas Nicolov, Umbria Inc, USA (Editor of volume with selected papers)
Nikolai Nikolov, INCOMA Ltd., Shoumen, Bulgaria
Kiril Simov, Bulgarian Academy of Sciences, Bulgaria (Workshop Coordinator)

e-mail: ranlp09 [AT] lml (dot) bas (dot) 

Back to Top

8-14 . (2009-09-14) Student Research Workshop at RANLP (Bulgaria)

First Call for Papers

Student Research Workshop

14-15 September 2009, Borovets, Bulgaria

associated with the International Conference RANLP-2009

/RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING/

http://lml.bas.bg/ranlp2009/stud-ranlp09

The International Conference RANLP 2009 would like to invite students at all levels (Bachelor-, Master-, and PhD-students) to present their ongoing work at the Student Research Workshop. This will provide an excellent opportunity to present and discuss your work in progress or completed projects to an international research audience and receive feedback from senior researchers. The research being presented can come from any topic area within natural language processing and computational linguistics including, but not limited to, the following topic areas:

Anaphora Resolution, Complexity, Corpus Linguistics, Discourse, Evaluation, Finite-State Technology, Formal Grammars and Languages, Information Extraction, Information Retrieval, Lexical Knowledge Acquisition, Lexicography, Machine Learning, Machine Translation, Morphology, Natural Language Generation, Natural Language in Multimodal and Multimedia Systems, Natural Language Interaction, Natural Language Processing in Computer-Assisted Language Learning, Natural Language Processing for Biomedical Texts, Ontologies, Opinion Mining, Parsing, Part-of-Speech Tagging, Phonology, Post-Editing, Pragmatics and Dialogue, Question Answering, Semantics, Speech Recognition, Statistical Methods, Sublanguages and Controlled Languages, Syntax, Temporal Processing, Term Extraction and Automatic Indexing, Text Data Mining, Text Segmentation, Text Simplification, Text Summarisation, Text-to-Speech Synthesis, Translation Technology, Tree-Adjoining Grammars, Word Sense Disambiguation.

All accepted papers will be presented at the Student Workshop sessions during the main conference days: 14-16 September 2009. The articles will be issued in special Student Session electronic proceedings.

            Important Dates

Submission deadline: 25 July
Acceptance notification: 20 August
Camera-ready deadline: 1 September

      Submission Requirements

All papers must be submitted in .doc or .pdf format and must be 4-8 pages long (including references). For formatting requirements, please refer to the main RANLP website at http://lml.bas.bg/ranlp2009, Submission Info section. Each submission will be reviewed by three members of the Programme Committee, which will include both experienced researchers and PhD students nearing the completion of their PhD studies. Final decisions will be made on the basis of these reviews. Submissions must specify the student's level (Bachelor's, Master's, or PhD).

            Programme Committee

To be announced in the Second Call for Papers.

            Organising Committee

 

Irina Temnikova (University of Wolverhampton, UK)

Ivelina Nikolova (Bulgarian Academy of Sciences, Bulgaria)

Natalia Konstantinova (University of Wolverhampton, UK)

 

            For More Information

 

 

http://lml.bas.bg/ranlp2009/stud-ranlp09          

stud-ranlp09@lml.bas.bg

 

Back to Top

8-15 . (2009-09-28) ELMAR 2009

51st International Symposium ELMAR-2009

28-30 September 2009 Zadar, CROATIA
Paper submission deadline: March 16, 2009
http://www.elmar-zadar.org/
CALL FOR PAPERS
TECHNICAL CO-SPONSORS: IEEE Region 8; EURASIP - European Association for Signal, Speech and Image Processing; IEEE Croatia Section; IEEE Croatia Section Chapter of the Signal Processing Society; IEEE Croatia Section Joint Chapter of the AP/MTT Societies
CONFERENCE PROCEEDINGS INDEXED BY: IEEE Xplore and INSPEC
TOPICS: Image and Video Processing; Multimedia Communications; Speech and Audio Processing; Wireless Communications; Telecommunications; Antennas and Propagation; e-Learning and m-Learning; Navigation Systems; Ship Electronic Systems; Power Electronics and Automation; Naval Architecture; Sea Ecology; Special Session Proposals (a special session consists of 5-6 papers which should present a unifying theme from a diversity of viewpoints)
KEYNOTE TALKS
* Prof. Gregor Rozinaj, Slovak University of Technology, Bratislava, SLOVAKIA: title to be announced soon.
* Mr. David Wood, European Broadcasting Union, Geneva, SWITZERLAND: What strategy and research agenda for Europe in 'new media'?
SUBMISSION
Papers accepted by two reviewers will be published in the conference proceedings, available at the conference and abstracted/indexed in the IEEE Xplore and INSPEC databases. More information is available at: http://www.elmar-zadar.org/ IMPORTANT: Web-based (online) submission of papers in PDF format is required for all authors. No e-mail, fax, or postal submissions will be accepted. Authors should prepare their papers according to the ELMAR-2009 paper sample, convert them to PDF according to IEEE requirements, and submit them using the web-based submission system by March 16, 2009.
SCHEDULE OF IMPORTANT DATES
Deadline for submission of full papers: March 16, 2009
Notification of acceptance mailed out by: May 11, 2009
Submission of (final) camera-ready papers: May 21, 2009
Preliminary program available online by: June 11, 2009
Registration forms and payment deadline: June 18, 2009
Accommodation deadline: September 10, 2009
GENERAL CO-CHAIRS
Ive Mustac, Tankerska plovidba, Zadar, Croatia; Branka Zovko-Cihlar, University of Zagreb, Croatia
PROGRAM CHAIR
Mislav Grgic, University of Zagreb, Croatia
INTERNATIONAL PROGRAM COMMITTEE: Juraj Bartolic, Croatia; David Broughton, United Kingdom; Paul Dan Cristea, Romania; Kresimir Delac, Croatia; Zarko Cucej, Slovenia; Marek Domanski, Poland; Kalman Fazekas, Hungary; Janusz Filipiak, Poland; Renato Filjar, Croatia; Borko Furht, USA; Mohammed Ghanbari, United Kingdom; Mislav Grgic, Croatia; Sonja Grgic, Croatia; Yo-Sung Ho, Korea; Bernhard Hofmann-Wellenhof, Austria; Ismail Khalil Ibrahim, Austria; Bojan Ivancevic, Croatia; Ebroul Izquierdo, United Kingdom; Kristian Jambrosic, Croatia; Aggelos K. Katsaggelos, USA; Tomislav Kos, Croatia; Murat Kunt, Switzerland; Panos Liatsis, United Kingdom; Rastislav Lukac, Canada; Lidija Mandic, Croatia; Gabor Matay, Hungary; Branka Medved Rogina, Croatia; Borivoj Modlic, Croatia; Marta Mrak, United Kingdom; Fernando Pereira, Portugal; Pavol Podhradsky, Slovak Republic; Ramjee Prasad, Denmark; Kamisetty R. Rao, USA; Gregor Rozinaj, Slovak Republic; Gerald Schaefer, United Kingdom; Mubarak Shah, USA; Shiguang Shan, China; Thomas Sikora, Germany; Karolj Skala, Croatia; Marian S. Stachowicz, USA; Ryszard Stasinski, Poland; Luis Torres, Spain; Frantisek Vejrazka, Czech Republic; Stamatis Voliotis, Greece; Nick Ward, United Kingdom; Krzysztof Wajda, Poland; Branka Zovko-Cihlar, Croatia
CONTACT INFORMATION: Assoc. Prof. Mislav Grgic, Ph.D., FER, Unska 3/XII, HR-10000 Zagreb, CROATIA; Telephone: +385 1 6129 851; Fax: +385 1 6129 717; E-mail: elmar2009 (at) fer.hr. For further information please visit: http://www.elmar-zadar.org/
Back to Top

8-16 . (2009-10-05) 2009 APSIPA ASC

            APSIPA Annual Summit and Conference October 5 - 7, 2009

                       Sapporo Convention Center, Sapporo, Japan
2009 APSIPA Annual Summit and Conference is the inaugural event supported by the Asia-Pacific Signal and Information Processing Association (APSIPA). The APSIPA is a new association and it promotes all aspects of research and education on signal processing, information technology, and communications. The field of interest of APSIPA concerns all aspects of signals and information including processing, recognition, classification, communications, networking, computing, system design, security, implementation, and technology with applications to scientific, engineering, and social areas. The topics for regular sessions include, but are not limited to:
Signal Processing Track
1.1 Audio, speech, and language processing
1.2 Image, video, and multimedia signal processing
1.3 Information forensics and security
1.4 Signal processing for communications
1.5 Signal processing theory and methods
Sapporo and Conference Venue: One of Japan's most attractive cities, Sapporo is widely recognized as a beautiful and well-organized city. With a population of 1,800,000, Hokkaido's largest city and capital is fully serviced by a network of subway, streetcar, and bus lines connecting to its full complement of hotel accommodations. Sapporo has already played host to international meetings, sports events, and academic societies. There are many flights to and from Tokyo, Nagoya, Osaka and other Japanese and overseas cities. With all the amenities of a major city yet in balance with its natural surroundings, this beautiful northern capital is well-equipped to host a new generation of conventions.
Important Due Dates and Author's Schedule:
Proposals for Special Session: March 1, 2009
Proposals for Forum, Panel and Tutorial Sessions: March 20, 2009
Deadline for Submission of Full-Papers: March 31, 2009
Notification of Acceptance: July 1, 2009
Deadline for Submission of Camera Ready Papers: August 1, 2009
Conference dates: October 5 - 7, 2009
Submission of Papers: Prospective authors are invited to submit either long papers, up to 10 pages in length, or short papers, up to four pages in length; long papers will be considered for single-track oral presentation and short papers mostly for poster presentation. The conference proceedings will be published, made available, and maintained on the APSIPA website.
Detailed Information: Web site: http://www.gcoe.ist.hokudai.ac.jp/apsipa2009/
Organizing Committee:
Honorary Chair: Sadaoki Furui, Tokyo Institute of Technology, Japan
General co-Chairs: Yoshikazu Miyanaga, Hokkaido University, Japan; K. J. Ray Liu, University of Maryland, USA
Technical Program co-Chairs: Hitoshi Kiya, Tokyo Metropolitan Univ., Japan; Tomoaki Ohtsuki, Keio University, Japan; Mark Liao, Academia Sinica, Taiwan; Takao Onoye, Osaka University, Japan

Back to Top

8-17 . (2009-10-05) IEEE International Workshop on Multimedia Signal Processing - MMSP'09

Call for Papers
2009 IEEE International Workshop on Multimedia Signal Processing - MMSP'09
October 5-7, 2009
Sheraton Rio Hotel & Resort, Rio de Janeiro, Brazil

We would like to invite you to submit your work to MMSP-09, the eleventh IEEE International Workshop on Multimedia Signal Processing. We also would like to advise you of the upcoming paper submission deadline on April 17th.

This year MMSP will introduce a new type of paper award: the "top 10%" paper award. While MMSP papers are already very well regarded and highly cited, there is a growing need among the scientific community for more immediate quality recognition. The objective of the top 10% award is to acknowledge outstanding quality papers, while at the same time keeping the wider participation and information exchange allowed by higher acceptance rates. MMSP will continue to accept as many high-quality papers as possible, with acceptance rates in line with other top events of the IEEE Signal Processing Society. This new award will be granted to as many as 10% of the total paper submissions, and is open to all accepted papers, whether presented in oral or poster form.

The workshop is organized by the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society. Organized in Rio de Janeiro, MMSP-09 provides excellent conditions for brainstorming on, and sharing, the latest advances in multimedia signal processing and technology in one of the most beautiful and exciting cities in the world.

Scope: Papers are solicited on the following topics (but not limited to):

Systems and applications
- Teleconferencing, telepresence, tele-immersion, immersive environments
- Virtual classrooms and distance learning
- Multimodal collaboration, online multiplayer gaming, social networking
- Telemedicine, human-human distance collaboration
- Multimodal storage and retrieval

Multimedia for communication and collaboration
- Ad hoc broadband sensor array processing
- Microphone and camera array processing
- Automatic sensor calibration, synchronization
- De-noising, enhancement, source separation
- Source localization, spatialization

Scene analysis for immersive telecommunication and human collaboration
- Audiovisual scene analysis
- Object detection, identification, and tracking
- Gesture, face, and human pose recognition
- Presence detection and activity classification
- Multimodal sensor fusion

Coding
- Distributed/centralized source coding for sensor arrays
- Scalable source coding for multiparty conferencing
- Error/loss resilient coding for telecommunications
- Channel coding, error protection and error concealment

Networking
- Voice/video over IP and wireless
- Quality monitoring and management
- Security
- Priority-based QoS control and scheduling
- Ad-hoc and real time communications
- Channel coding, packetization, synchronization, buffering

A thematic emphasis for MMSP-09 is on topics related to multimedia processing and interaction for immersive telecommunications and collaboration. Papers on these topics are encouraged.

Schedule
- Papers (full paper, 4 pages, to be received by): April 17, 2009
- Notification of acceptance by: June 13, 2009
- Camera-ready paper submission by: July 6, 2009

More information is available at http://www.mmsp09.org
Back to Top

8-18 . (2009-10-23) CfP ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)

 ACM Multimedia 2009 Workshop
Searching Spontaneous Conversational Speech (SSCS 2009)
***Submission Deadline Extended to Monday, June 15, 2009***
----------------------------
http://ict.ewi.tudelft.nl/SSCS2009/

Multimedia content often contains spoken audio as a key component. Although speech is generally acknowledged as the quintessential carrier of semantic information, spoken audio remains underexploited by multimedia retrieval systems. In particular, the potential of speech technology to improve information access has not yet been successfully extended beyond multimedia content containing scripted speech, such as broadcast news. The SSCS 2009 workshop is dedicated to fostering search research based on speech technology as it expands into spoken content domains involving non-scripted, less-highly conventionalized, conversational speech characterized by wide variability of speaking styles and recording conditions. Such domains include podcasts, video diaries, lifelogs, meetings, call center recordings, social video networks, Web TV, conversational broadcast, lectures, discussions, debates, interviews and cultural heritage archives. This year we are setting a particular focus on the user and the use of speech techniques and technology in real-life multimedia access systems and have chosen the theme "Speech technology in the multimedia access framework."

The development of robust, scalable, affordable approaches for accessing multimedia collections with a spoken component requires the sustained collaboration of researchers in the areas of speech recognition, audio processing, multimedia analysis and information retrieval. Motivated by the aim of providing a forum where these disciplines can engage in productive interaction and exchange, Searching Spontaneous Conversational Speech (SSCS) workshops were held in conjunction with SIGIR 2007 in Amsterdam and with SIGIR 2008 in Singapore. The SSCS workshop series continues with SSCS 2009 held in conjunction with ACM Multimedia 2009 in Beijing. This year the workshop will focus on addressing the research challenges that were identified during SSCS 2008: Integration, Interface/Interaction, Scale/Scope, and Community.

We welcome contributions on a range of trans-disciplinary issues related to these research challenges, including:

-Information retrieval techniques based on speech analysis (e.g., applied to speech recognition lattices)
-Search effectiveness (e.g., evidence combination, query/document expansion)
-Self-improving systems (e.g., unsupervised adaptation, recursive metadata refinement)
-Exploitation of audio analysis (e.g., speaker emotional state, speaker characteristics, speaking style)
-Integration of higher-level semantics, including cross-modal concept detection
-Combination of indexing features from video, text and speech
-Surrogates for representation or browsing of spoken content
-Intelligent playback: exploiting semantics in the media player
-Relevance intervals: determining the boundaries of query-related media segments
-Cross-media linking and link visualization deploying speech transcripts
-Large-scale speech indexing approaches (e.g., collection size, search speed)
-Dealing with collections containing multiple languages
-Affordable, light-weight solutions for small collections, i.e., for the long tail
-Stakeholder participation in design and realization of real world applications
-Exploiting user contributions (e.g., tags, ratings, comments, corrections, usage information, community structure)

Contributions for oral presentations (8-10 pages), poster presentations (2 pages), demonstration descriptions (2 pages) and position papers for selection of panel members (2 pages) will be accepted. Further information including submission guidelines is available on the workshop website: http://ict.ewi.tudelft.nl/SSCS2009/

Important Dates:
Monday, June 15, 2009 (Extended Deadline) Submission Deadline
Saturday, July 10, 2009 Author Notification
Friday, July 17, 2009 Camera Ready Deadline
Friday, October 23, 2009 Workshop in Beijing

For more information: m.a.larson@tudelft.nl
SSCS 2009 Website: http://ict.ewi.tudelft.nl/SSCS2009/
ACM Multimedia 2009 Website: http://www.acmmm09.org

On behalf of the SSCS2009 Organizing Committee:
Martha Larson, Delft University of Technology, The Netherlands
Franciska de Jong, University of Twente, The Netherlands
Joachim Kohler, Fraunhofer IAIS, Germany
Roeland Ordelman, Sound & Vision and University of Twente, The Netherlands
Wessel Kraaij, TNO and Radboud University, The Netherlands


Back to Top

8-19 . (2009-10-18) 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics

Call for Papers

2009 IEEE Workshop on Applications of Signal Processing to Audio and

Acoustics

 

Mohonk Mountain House

New Paltz, New York

October 18-21, 2009

http://www.waspaa2009.com

 

The 2009 IEEE Workshop on Applications of Signal Processing to Audio and

Acoustics (WASPAA'09) will be held at the Mohonk Mountain House in New

Paltz, New York, and is sponsored by the Audio & Electroacoustics committee

of the IEEE Signal Processing Society. The objective of this workshop is to

provide an informal environment for the discussion of problems in audio and

acoustics and the signal processing techniques leading to novel solutions.

Technical sessions will be scheduled throughout the day. Afternoons will be

left free for informal meetings among workshop participants.

 

Papers describing original research and new concepts are solicited for

technical sessions on, but not limited to, the following topics:

 

* Acoustic Scenes

- Scene Analysis: Source Localization, Source Separation, Room Acoustics

- Signal Enhancement: Echo Cancellation, Dereverberation, Noise Reduction,

Restoration

- Multichannel Signal Processing for Audio Acquisition and Reproduction

- Microphone Arrays

- Eigenbeamforming

- Virtual Acoustics via Loudspeakers

 

* Hearing and Perception

- Auditory Perception, Spatial Hearing, Quality Assessment

- Hearing Aids

 

* Audio Coding

- Waveform Coding and Parameter Coding

- Spatial Audio Coding

- Internet Audio

- Musical Signal Analysis: Segmentation, Classification, Transcription

- Digital Rights

- Mobile Devices

 

* Music

- Signal Analysis and Synthesis Tools

- Creation of Musical Sounds: Waveforms, Instrument Models, Singing

- MEMS Technologies for Signal Pick-up

 

 

Submission of four-page paper: April 15, 2009

Notification of acceptance: June 26, 2009

Early registration until:  September 1, 2009

 

Workshop Committee

 

General Co-Chair:

Jacob Benesty

Université du Québec

INRS-EMT

Montréal, Québec, Canada

benesty@emt.inrs.ca

 

General Co-Chair:

Tomas Gaensler

mh acoustics

Summit, NJ, USA

tfg@mhacoustics.com

 

Technical Program Chair:

Yiteng (Arden) Huang

WeVoice Inc.

Bridgewater, NJ, USA

arden_huang@ieee.org

 

Technical Program Chair:

Jingdong Chen

Bell Labs

Alcatel-Lucent

Murray Hill, NJ, USA

jingdong@research.bell-labs.com

 

Finance Chair:

Michael Brandstein

Information Systems

Technology Group

MIT Lincoln Lab

Lexington, MA, USA

msb@ll.mit.edu

 

Publications Chair:

Eric J. Diethorn

Multimedia Technologies

Avaya Labs Research

Basking Ridge, NJ, USA

ejd@avaya.com

 

Publicity Chair:

Sofiène Affes

Université du Québec

INRS-EMT

Montréal, Québec, Canada

affes@emt.inrs.ca

 

Local Arrangements Chair:

Heinz Teutsch

Multimedia Technologies

Avaya Labs Research

Basking Ridge, NJ, USA

teutsch@avaya.com

 

Far East Liaison:

Shoji Makino

NTT Communication Science

Laboratories, Japan

maki@cslab.kecl.ntt.co.jp

Back to Top


8-22 . (2009-11-01) NLP Approaches for Unmet Information Needs in Health Care

NLP Approaches for Unmet Information Needs in Health Care
(http://www.uwm.edu/~hongyu/files/BIBM.workshop.html)

A workshop of the IEEE International Conference on Bioinformatics and Biomedicine 2009, Washington DC

As the amount of literature and other information in the biomedical field continues to grow at a rapid rate, researchers in the health care community depend on computers to find the best answers for meeting their information needs. Traditionally, information needs have been represented simply as a set of queries. Recently, there have been growing research efforts addressing these needs with natural language approaches. Although great strides have been made in producing valuable biomedical databases, more work needs to be done to develop computational approaches that enable users to search multiple databases, which often comprise a variety of formats, including journal articles, clinical guidelines, and electronic health care records. Therefore, the task at hand is to develop natural language systems that can understand the queries or complex questions being asked, interpret the different resources that could be used to answer the question, extract relevant information, summarize this information to meet user needs, and mine the structured data for clinical decision support. This workshop will explore a broad range of traditional NLP approaches and emerging new methods, and the variety of challenges that need to be overcome with respect to these issues.

Some specific topics include:

   * Clinical information needs
   * Clinical terminology and coding clinical data
   * Annotation and machine learning
   * Healthcare domain-specific adaptation of open-domain NLP techniques
   * Information extraction from electronic health records
   * Data mining of electronic health records
   * NLP approaches involving image and video
   * Automatic speech recognition for the healthcare domain
   * Spoken clinical question answering

Paper submission: http://kis-lab.com/cyberchair/bibm09/cbc_index.html

Timeline:
 August 10, 2009: Due date for full workshop paper submissions
 September 10, 2009: Notification of paper acceptance to authors
 September 17, 2009: Camera-ready versions of accepted papers
 November 1-4, 2009: Workshops

Organizers:

Workshop co-chairs:
Hong Yu, PhD, University of Wisconsin-Milwaukee
Dilek Hakkani-Tür, PhD, International Computer Science Institute
John Ely, MD, University of Iowa
Lyle Ungar, PhD, University of Pennsylvania

Workshop PC members:
Eugene Agichtein, Emory University
Alan Aronson, NLM
James Cimino, NIH
Kevin Cohen, University of Colorado
Nigel Collier, National Institute of Informatics, Japan
Chris Chute, Mayo Clinic
Dina Demner-Fushman, NLM
Bob Futrelle, Northeastern University
Henk Harkema, University of Pittsburgh
Lynette Hirschman, MITRE
Susan McRoy, University of Wisconsin
Serguei Pakhomov, University of Minnesota
Tim Patrick, University of Wisconsin
Thomas Rindflesch, NLM
Pete White, Children's Hospital of Philadelphia
John Wilbur, NLM
Pierre Zweigenbaum, LIMSI

Back to Top

8-23 . (2009-11-02) CALL FOR ICMI-MLMI 2009 WORKSHOPS New dates !!

CALL FOR ICMI-MLMI 2009 WORKSHOPS   NEW DATES!!

http://icmi2009.acm.org
Boston MA, USA

Paper submission: May 22, 2009
Author notification: July 20, 2009
Camera-ready due: August 20, 2009
Conference: November 2-4, 2009
Workshops: November 5-6, 2009
 

The ICMI and MLMI conferences will jointly take place in the Boston
area during November 2-6, 2009. The main aim of ICMI-MLMI 2009 is to
further scientific research within the broad field of multimodal
interaction, methods and systems. The joint conference will focus on
major trends and challenges in this area, and work to identify a
roadmap for future research and commercial success.  The main
conference will be followed by a number of workshops, for which we
invite proposals.

The format, style, and content of accepted workshops are under the
control of the workshop organizers.  Workshops will take place on 5-6
November 2009, and may be of one or two days duration.
Workshop organizers will be expected to manage the workshop content,
specify the workshop format, be present to moderate the discussion and
panels, invite experts in the domain, and maintain a website for the
workshop.

Proposals should specify clearly the workshop's title, motivation,
impact, expected outcomes,  potential invited speakers and the workshop
URL. The proposal should also name the main workshop organizer and
co-organizers, and should provide brief bios of the organizers.

Submit workshop proposals, as pdf, by email to
  workshops-icmi2009@acm.org

Back to Top

8-24 . (2009-11-06) CfP 4th LANGUAGE AND TECHNOLOGY CONFERENCE: Human Language Technologies as a Challenge for Computer Science and Linguistics (LTC 2009)


 The 4th LANGUAGE AND TECHNOLOGY CONFERENCE: Human Language Technologies as a Challenge
for Computer Science and Linguistics (LTC 2009), a meeting organized by the Faculty of Mathematics and Computer Science of Adam Mickiewicz University, Poznań, Poland in cooperation with the Adam Mickiewicz University Foundation (co-organizer), will take place on November 6-8, 2009.

Human Language Technologies (HLT) continue to be a challenge for computer science, linguistics and related fields as these areas become an ever more essential element of our everyday technological environment. Since the very beginning of the Computer and Information  Age these fields have influenced and stimulated each other. The European Union strongly supports HLT under the 7th Framework Program. These efforts as well as technological, social and cultural globalization have created a favorable climate for the intensive exchange of novel ideas, concepts and solutions across initially distant disciplines. We aim at further contributing to this exchange and invite you to join us at LTC in November 2009, as well as at the FlaReNet workshop (LRL 2009) on the theme "Getting Less-Resourced Languages on-Board!".

Zygmunt Vetulani
LTC 2009 Chair
vetulani@amu.edu.pl


CONFERENCE TOPICS

The conference topics include the following (the ordering is not significant):
   - electronic language resources and tools,
   - formalisation of natural languages,
   - parsing and other forms of NL processing,
   - computer modelling of language competence,
   - NL user modelling,
   - NL understanding by computers,
   - knowledge representation,
   - man-machine NL interfaces,
   - Logic Programming in Natural Language Processing,
   - speech processing,
   - NL applications in robotics,
   - text-based information retrieval and extraction,
   - question answering,
   - tools and methodologies for developing multilingual systems,
   - translation enhancement tools,
   - corpora-based methods in language engineering,
   - WordNet-like ontologies,
   - methodological issues in HLT,
   - language-specific computational challenges for HLTs (especially for languages other than English),
   - HLT standards,
   - HLTs as a support for foreign language teaching,
   - communicative intelligence,
   - legal issues connected with HLTs (problems and challenges),
   - contribution of HLTs to the Homeland Security problems (technology applications and legal aspects),
   - visionary papers in the field of HLT,
   - HLT's for the Less-Resourced Languages
   - HLT related policies,
   - system prototype presentations.

This list is by no means closed and we are open to further proposals. Please do not hesitate to contact us with your suggestions and ideas on how to meet your expectations concerning the program. The Program Committee is also open to suggestions concerning accompanying events (workshops, exhibits, panels, etc.). Suggestions, ideas and observations may be addressed directly to the LTC Chair by email (vetulani@amu.edu.pl).


PROGRAM COMMITTEE

Zygmunt Vetulani (Adam Mickiewicz University, Poznań, Poland) - chair

Victoria Arranz (ELRA, France)
Anja Belz (University of Brighton, UK)
Janusz S. Bień (Warsaw University, Poland)
Christian Boitet (IMAG, France)
Leonard Bolc (IPI PAN, Poland)
Lynne Bowker (University of Ottawa, Canada)
Nicoletta Calzolari (ILC/CNR, Italy)
Nick Campbell (Trinity College Dublin, Ireland)
Julie Carson-Berndsen (University College Dublin, Ireland)
Khalid Choukri (ELRA, France)
Adam Dąbrowski (Poznań University of Technology, Poland)
Elżbieta Dura (University of Skovde, Sweden)
Katarzyna Dziubalska-Kołaczyk (Adam Mickiewicz University, Poland)
Tomaz Erjavec (Josef Stefan Institute, Slovenia)
Cedrick Fairon (University of Louvain, Belgium)
Christiane Fellbaum (Princeton University, USA)
Maria Gavrilidou (ILSP, Greece)
Dafydd Gibbon (University of Bielefeld, Germany)
Stefan Grocholewski (Poznań University of Technology, Poland)
Franz Guenthner (Ludwig-Maximilians-University München, Germany)
Hans Guesgen (Massey University, New Zealand)
Eva Hajičová (Charles University, Czech Republic)
Roland Hausser (Erlangen, Germany)
Eric Laporte (University Marne-la-Vallee, France)
Yves Lepage (University Caen Basse-Normandie, France)
Gerard Ligozat (LIMSI/CNRS, France)
Natalia Loukachevitch (Moscow State University, Russia)
Wiesław Lubaszewski (AGH/UJ, Poland)
Bente Maegaard (University of Copenhagen, Denmark)
Bernardo Magnini (ITC IRST, Italy)
Joseph Mariani (LIMSI-CNRS, France)
Jacek Martinek (Poznań University of Technology, Poland)
Gayrat Matlatipov (Urgench State University,Uzbekistan)
Keith J. Miller (MITRE, USA)
Nicholas Ostler (Linguacubun Ltd., UK)
Karel Pala (Masaryk University, Czech Republic)
Pavel S. Pankov (National Academy of Sciences, Kyrgyzstan)
Patrick Paroubek (LIMSI-CNRS, France)
Stelios Piperidis (ILSP, Greece)
Emil Pływaczewski (University of Bialystok, Poland)
Gabor Proszeky (Morphologic, Hungary)
Adam Przepiórkowski (IPI PAN, Poland)
Reinhard Rapp (University Mainz, Germany)
Zbigniew Rau (PPBW, Poland)
Mike Rosner (University of Malta)
Justus Roux (University of Stellenbosch, South Africa)
Vasile Rus (University of Memphis, Fedex Inst. of Technology, USA)
Rafał Rzepka (University of Hokkaido, Japan)
Frédérique Ségond (Xerox, France)
Zhongzhi Shi (Institute of Computing Technology / Chinese Academy of Sciences, China)
Włodzimierz Sobkowiak (Adam Mickiewicz University, Poland)
Hanna Szafrańska (Adam Mickiewicz Foundation, Poland)
Marek Świdziński (Warsaw University, Poland)
Ryszard Tadeusiewicz (AGH, Poland)
Dan Tufiş (RCAI, Romania)
Hans Uszkoreit (DFKI,Germany)
Piek Vossen (University of Amsterdam, Netherlands)
Tom Wachtel (Independent Consultant, Italy)
Jan Węglarz (Poznań University of Technology, Poland)
Richard Zuber (CNRS, France)


LANGUAGE

The conference language is English.


PAPER SUBMISSION

The conference accepts papers in English. Papers (5 formatted pages) are due by July 31, 2009 (midnight, any time zone) and should not identify the author(s) in any manner. In order to facilitate submission we have decided to reduce the formatting requirements as much as possible at this stage. Please, however, do observe the following (a tentative LaTeX sketch of these settings is given after the list):

1. Accepted fonts for text are Times Roman and Times New Roman. Courier is recommended for program listings. Character size for the main text should be 10 points, with 11 points leading (line spacing).

2. Text should be presented in 2 columns, 8.42 cm each, with 0.95 cm between columns (gutter).

3. The paper size is 5 pages formatted according to (1) and (2) above.

4. The use of PDF format is strongly recommended, although MS Word will also be accepted.
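For orientation only, the following is a minimal LaTeX sketch of a layout matching points 1-3 above (10 pt Times text with 11 pt leading, two 8.42 cm columns separated by a 0.95 cm gutter, 5 pages, PDF output). It is an editor's illustration under stated assumptions, not an official template; the class, packages and lengths used here are choices made for the example, and the detailed guidelines announced by the organisers take precedence.

% Minimal LaTeX sketch (an assumption, not an official LTC 2009 template)
\documentclass[10pt,twocolumn,a4paper]{article}
\usepackage{mathptmx}                               % Times fonts for text and math
\usepackage[a4paper,textwidth=17.79cm]{geometry}    % 2 x 8.42 cm columns + 0.95 cm gutter
\setlength{\columnsep}{0.95cm}                      % gutter between the two columns
\linespread{0.917}                                  % approx. 11 pt leading on 10 pt text (default is 12 pt)

\begin{document}
\title{Paper Title (5 pages maximum)}
\author{}   % anonymous submission: do not identify the author(s)
\date{}
\maketitle
Body text set in 10\,pt Times with 11\,pt leading, in two columns of 8.42\,cm.
\end{document}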

Detailed guidelines for the final submission of accepted papers will be
published on the conference Web site by September 10, 2009 (acceptance
notification date).


All submissions are to be made electronically via the LTC 2009 web submission system. Acceptance/rejection notification will be sent by September 10, 2009.

PUBLICATION POLICY


Acceptance will be based on the reviewers' assessments (anonymous submission model). The accepted papers will be published in the conference proceedings (hard copy, with ISBN number) and on CD-ROM. The abstracts of the accepted contributions will also be made available via the conference page (during its lifetime). Publication requires full electronic registration and payment of the conference fee (full registration) by at least one of the co-authors by October 1, 2009. (In case of more than one accepted paper a special regulation will be applied. This regulation will be announced later on.)

A post-conference volume with extended versions of selected papers will be published.

The LTC 2007 post-conference volume is going to appear in the Springer Verlag series LNAI (vol. 5603).

IMPORTANT DATES/DEADLINES

- Deadline for submission of papers for review:  July 31, 2009.
- Acceptance/Rejection notification: September 10, 2009.
- Deadline for submission of final versions of accepted papers: October 1, 2009.
- Conference: November 6-8, 2009.

REGISTRATION

Only electronic registration will be possible. Details will be provided later on www.ltc.amu.edu.pl.

CONFERENCE FEES

Non-student participants:
   - Regular registration (payment by October 4, 2009) 160 EURO
   - Late registration (payment after October 4, 2009) 190 EURO

Student participants:
   - Regular registration (payment before October 4, 2009)  100 EURO
   - Late registration (payment after October 4, 2009)  120 EURO

An extra 40 Euro will be charged for LRL Workshop participation (5.11.2009, cf. below).

Student registrations must be accompanied by a proof of full-time student status valid on the payment date. Registrants are requested to scan and e-mail their proof of student status to ltc@amu.edu.pl. The e-mail subject field must have the following format:
   LTC-09-StudentStatus-< Name_of_participant > 
   (e.g. LTC-09-StudentStatus-VETULANI)

The conference fee covers:
   - Participation in the scientific programme.
   - Conference materials.
   - Proceedings on CD and paper.
   - Social events (banquet,...).
   - Coffee breaks.

PAYMENT

The payment methods will be detailed shortly.
   


AWARDS

As at the 2nd and 3rd Language and Technology Conferences (2005, 2007), special awards will be granted to the best student papers. Regular or PhD students (on the date of paper submission) are eligible.

In 2005 the Jury, composed of the Program Committee members present at the conference, awarded this distinction to:
   - Ronny Melz (University of Leipzig),
   - Hartwig Holzapfel (University of Karlsruhe),
   - Marcin Woliński (IPI PAN, Warsaw).

In 2007 this distinction went to Daria Fišer (University of Ljubljana)

Other awards will be announced.


LRL WORKSHOP: Getting Less-Resourced Languages on-Board!

Name: Getting Less-Resourced Languages on-Board!

Date: 5.11.2009, half-day (13h30 – 18h00) + cocktail

Theme:
Language Technologies (LT) provide essential support for the challenge of multilingualism. In order to develop them, it is necessary to have access to Language Resources (LR) and to assess LT performance. In this regard, the situation differs greatly across languages: little or sparse data exist for languages in countries or regions where limited efforts have been devoted to such issues in the past, the so-called Less-Resourced Languages (LRL). The workshop aims at reporting needs, presenting achievements and proposing solutions for the future, both in terms of LR and of LT evaluation, especially in the European, Euro-Mediterranean and regional frameworks. This will make it possible to identify the factors that have an impact on a potential shared roadmap towards supplying LR and LT for all languages.

Topics:
-    Experience in the production, validation and distribution of LR for less-resourced languages
-    Experience in the evaluation of LT for less-resourced languages
-    Infrastructures for making available LR and LT in less-resourced languages
-    Alternative approaches (comparable corpora, pivot languages, language clustering…)
-    To be completed…

Co-Chairs: Joseph Mariani (LIMSI-CNRS & IMMI-CNRS), Khalid Choukri (ELRA & ELDA), Zygmunt Vetulani (Adam Mickiewicz University, Poznań)

LRL Workshop Program Committee:

Nuria Bel (Univ. Pompeu Fabra, Spain)
Gerhard Budin (Univ. Vienna, Austria)
Nicoletta Calzolari (ILC, Italy)
Dafydd Gibbon (Univ. Bielefeld, Germany)
Jan Hajic (Charles Univ., Czech Republic)
Alfred Majewicz (UAM, Poland)
Asunción Moreno (UPC, Spain)
Nicholas Ostler (Foundation for Endangered Languages, UK)
Stelios Piperidis (ILSP, Greece)
Mohsen Rashwan (Cairo Univ., Egypt)
Kepa Sarasola Gabiola (Univ. del País Vasco, Spain)
Marko Tadić (Croatian Academy of Sciences and Arts, Croatia)
Cristina Vertan (Univ. Hamburg, Germany)


Paper submission deadline: August 15.

Sponsors: FLaReNet, ELRA

Registration: as for the general LTC (+ cc to workshop chairs)

Fees: LTC registration fee + an extra 40 Euros, or 80 Euros for Workshop-only attendees.

Paper submission: as for the general LTC (EasyChair) + to the workshop chairs

Presentation: publication in the LTC proceedings (paper + CD)

Reviewing: up to the workshop chairs + scientific committee

Program: The workshop will comprise presentations (including keynote talks) and a panel session, including an EC representative (tentative). In addition, selected speakers will be invited to
present their papers to a larger audience at the main LTC conference.



E-mail: ltc@amu.edu.pl

WWW: http://www.ltc.amu.edu.pl

Back to Top

8-25 . (2009-11-15) CIARP 2009

CIARP 2009 Third Call for Papers
Chairs
Eduardo Bayro Corrochano, CINVESTAV, Mexico
Jan Olof Ecklundh
KTH, Sweden
November 15th-18th, 2009, Guadalajara, México. Venue: Hotel Misión Carlton
CIARP-IAPR Award for best papers. Special issue in the journal Pattern Recognition Letters.
The 14th Iberoamerican Congress on Pattern Recognition (CIARP 2009) will be held in Guadalajara, Jalisco, México. CIARP 2009 is organized by CINVESTAV, Unidad Guadalajara, México, supported by IAPR and sponsored by the Mexican Association for Computer Vision, Neural Computing and Robotics (MACVNR) and five other Iberoamerican PR societies. CIARP 2009, like all thirteen previous conferences, will be a fruitful forum for the exchange of scientific results and experiences, the sharing of new knowledge, and increased cooperation between research groups in pattern recognition and related areas.
Topics of interests
• Artificial Intelligence Techniques in PR
• Bioinformatics
• Clustering
• Computer Vision
• Data Mining
• DB, Knowledge Bases and Linguistic PR-Tools
• Discrete Geometry
• Clifford Algebra Applications in Perception Action
• Document Processing and Recognition
• Fuzzy and Hybrid Techniques in PR
• Image Coding, Processing and Analysis
• Kernel Machines
• Logical Combinatorial Pattern Recognition
• Mathematical Morphology
• Mathematical Theory of Pattern Recognition
• Natural Language Processing and Recognition
• Neural Networks for Pattern Recognition
• Parallel and Distributed Pattern Recognition
• Pattern Recognition Principles
• Petri Nets
• Robotics and humanoids
• Remote Sensing Applications of PR
• Satellite Image processing and radar
• Cognitive Humanoid Vision
• Shape and Texture Analysis
• Signal Processing and Analysis
• Special Hardware Architectures
• Statistical Pattern Recognition
• Syntactical and Structural Pattern Recognition
• Voice and Speech Recognition
Invited Speakers: Prof. M. Petrou (Imperial College, UK), Prof. I. Kakadiaris (University of Houston, TX, USA), Dr. P. Sturm (INRIA Grenoble, France), Prof. W. Kropatsch (TU Wien, Austria).
Paper Submission
Prospective authors are invited to contribute to the conference by electronically submitting a full paper in English of no more than 8 pages, including illustrations, results and references; papers must be presented at the conference in English. Papers should be submitted electronically before June 7th, 2009, through the CIARP 2009 webpage (http://www.gdl.cinvestav.mx/ciarp2009), and should be prepared following the instructions of the Springer LNCS series. At least one of the authors must have registered for the paper to be published.
Workshops/Tutorials: CASI'2009 (Intelligent Remote Satellite Imagery & Humanoid Robotics); 4 tutorials on Texture, CV, PR & Geometric Algebra Applications.
Important Dates
Submission of papers before June 7th, 2009
Notification of acceptance: August 1st, 2009. Camera-ready: August 21st, 2009
Registration fees (IAPR Members / Non-IAPR):
Before August 21st, 2009: 400 USD / 450 USD
After August 21st, 2009: 450 USD / 500 USD
Extra Conference Dinner: 50 USD
Registration fee includes: Proceedings, Ice-breaking Party, Coffee Breaks, Lunches, Conference Dinner, Tutorials and Cultural Program (1. tour of the colonial area by night, 2. Latin dance night, 3. folkloric dance spectacle, traditional mariachi concert with superb banquet in a romantic colonial garden). Extra: organized tours to Puerto Vallarta, Tequila, archaeological sites, artisan markets, museums and traditional colonial churches and towns. Contact: ciarp09@gdl.cinvestav.mx
Back to Top

8-26 . (2009-11-16) 8ème Rencontres Jeunes Chercheurs en Parole (french)

********************************************************************
               Call for Papers RJCP 2009:
            8th Rencontres Jeunes Chercheurs en Parole (Young Speech Researchers' Meeting)
********************************************************************

16-18 November 2009, Avignon

http://rjcp2009.univ-avignon.fr


PRESENTATION
____________________________________________________________________

This event, sponsored by the Association Francophone de la Communication Parlée (AFCP), gives (future) doctoral students and young doctors the opportunity to meet, present their work and exchange views on the various fields of speech research.

Young researchers from different disciplines will be invited to the meeting and will discuss ongoing work in their respective fields. Their advice and questions will allow you to take a fresh look at your own research.

Poster sessions as well as oral sessions will be offered to participants wishing to present their work. The meeting is of course also open to anyone who simply wishes to attend the presentations without proposing a contribution.


IMPORTANT DATES
____________________________________________________________________

Deadline for receipt of papers: 2 July 2009
Notification to authors: 27 September 2009
Conference: 16, 17 and 18 November 2009

To help us organise the meeting, please register as early as possible; your paper can be sent later.


SUBMISSIONS
____________________________________________________________________

Proposals for papers, in the form of a 4-6 page summary, must be submitted before 2 July 2009 via the conference website: http://rjcp2009.univ-avignon.fr.

A reading committee composed of scientists in the field will review the submitted papers and send each participant any comments.

Specific instructions and predefined style sheets are available on the website. A collection of the papers will be published and distributed at the end of the meeting.


CONFERENCE ORGANISATION
____________________________________________________________________

The conference will be held over three days on the premises of the Université d'Avignon et des Pays de Vaucluse. In addition to the participants' presentations and the poster sessions, invited figures from academia and industry will give plenary talks. A company forum will also be organised, bringing researchers and industry together.

All practical information concerning the conference will be available on the website.


CONTACT
____________________________________________________________________

For further information, please send an e-mail to: contact.rjcp2009@univ-avignon.fr


TOPICS
____________________________________________________________________

Topics addressed include (non-exhaustive list):

- Phonetics and phonology
- Automatic processing of spoken natural language
- Speech production/perception
- Speech pathologies
- Speech acoustics
- Speech recognition and understanding
- Speech and language acquisition
- Applications with a spoken component (dialogue, indexing, ...)
- Prosody
- Linguistic diversity
- Deafness
- Gesture
Back to Top

8-27 . (2009-11-20) Seminar FROM PERCEPTION TO COMPREHENSIONOF A FOREIGN LANGUAGE(Strasbourg-France)

SEMINAR
FROM PERCEPTION TO COMPREHENSION
OF A FOREIGN LANGUAGE
UNIVERSITY OF STRASBOURG
UdS
CALL FOR PAPERS
Equipe d’Accueil 1339, Linguistique, Langues et Parole (LiLPa),
Composantes: Fonctionnement Discursif & Parole et Cognition
Proof of the effectiveness of perception in the final phase of speech reception comes when the listener accesses meaning, which is what we call comprehension.
The obstacles that affect this comprehension in foreign language learning are multiple. Specialists highlight three points of view that must be correlated to explain the phenomenon of speech reception: the articulatory and acoustic signals (physical aspects), the phonological system (linguistic code) and the processing of relevant information by the listener (psycholinguistic aspect).
This seminar will be devoted to the role of perception in the comprehension of a foreign language (with a particular focus on the comprehension of English), and to the various dysfunctions related to data processing by the learner.
The transition from perception to comprehension involves a series of processing stages:
- peripheral (auditory) and central processing, where sensory information makes it possible for the listener to extract the acoustic and articulatory cues that are considered relevant;
- categorial perception (phonemic units, invariance and variability);
- matching the learner's phonetic information and phonological knowledge in the native language (phonological sieve), or in a foreign language;
- recognition of words, sentences, discourse, during various speech acts…
In the perception/comprehension process, difficulties may be related to several factors:
- intrinsic characteristics of a language (for example the duration of English vowels or nasality in French…);
- linguistic, situational or interactional contexts…;
- segmentation into erroneous perceptual units, coarticulation…;
- speaker-specific variability (speech rate, accent, intonation…).
EXPECTED CONTRIBUTIONS
The topics covered by this seminar, in the field of perception and
comprehension of English or of any other foreign language, will be the
following:
• speech perception
• prosody
• phonology
• foreign language learning / acquisition
• psycholinguistics
• neurophonetics/neurolinguistics
• etc.
ABSTRACT SUBMISSION
Please send your proposals (500 words maximum, in English or in French), in a
Word-compatible format, Times 12, to soumission.perception09@unistra.fr
by 11 September 2009 at the latest. The seminar will be held at the University of
Strasbourg, France, on Friday 20 November 2009.
For further information, please contact:
Ms Nuzha Moritz (PhD)
Université de Strasbourg (UdS)
Département des Langues Etrangères Appliquées
22, rue René Descartes
67084 Strasbourg cedex
France
moritz@umb.u-strasbg.fr
 
Back to Top

8-28 . (2009-12-04) CfP Troisièmes Journées de Phonétique Clinique, Aix-en-Provence, France (submissions in French)

JPC3

Troisièmes Journées de Phonétique Clinique (Third Clinical Phonetics Days)

Call for Papers
4-5 December 2009, Aix-en-Provence, France

http://www.lpl-aix.fr/~jpc3/


These meetings follow on from the first and second clinical phonetics study days, held in Paris in 2005 and in Grenoble in 2007 respectively. Clinical phonetics brings together researchers, academics, engineers, physicians and speech-language therapists, complementary professions pursuing the same goal: a better understanding of the processes of acquisition and dysfunction of speech and voice. This interdisciplinary approach aims to advance fundamental knowledge of spoken communication in healthy subjects and to better understand, assess, diagnose and treat speech and voice disorders in pathological subjects.

Papers will address phonetic studies of pathological speech and voice, in adults and in children. Conference topics include, but are not limited to:

   Disorders of the oro-pharyngo-laryngeal system
   Disorders of the perceptual system
   Cognitive and motor impairments
   Instrumentation and resources in clinical phonetics
   Modelling of pathological speech and voice
   Assessment and treatment of speech and voice pathologies

Selected contributions will be presented in one of the following two formats:

   Long talk: 20 minutes, for presenting completed work
   Short talk: 8 minutes, for presenting clinical observations, preliminary work
   or emerging issues, in order to foster interdisciplinary exchanges between
   phoneticians and clinicians.

Submission format:
Submissions to the JPC take the form of abstracts written in French, no longer than one A4 page, in Times New Roman, 12 pt, single-spaced. Abstracts must be submitted in PDF format to: soumission.jpc3@lpl-aix.fr

Submission deadline: 15 May 2009
Notification to authors: 1 July 2009

For any further information, contact the organizers: org.jpc3@lpl-aix.fr

Registration for JPC3 (opening 1 July 2009) will be open to everyone, whether presenting a paper or not.


Back to Top

8-29 . (2009-12-09) 1st EUROPE-ASIA SPOKEN DIALOGUE SYSTEMS TECHNOLOGY WORKSHOP

1st EUROPE-ASIA SPOKEN DIALOGUE SYSTEMS
TECHNOLOGY WORKSHOP
December 9 – 11, 2009
Kloster Irsee, Germany
Introduction
Dear Colleagues,
It is our pleasure to invite you to participate in this FIRST EUROPE-ASIA SPOKEN DIALOGUE
SYSTEMS TECHNOLOGY WORKSHOP, which will be held at the Kloster Irsee in southern
Germany from December 9 to December 11, 2009.
This annual workshop will bring together researchers from all over the world working in the field of
spoken dialogue systems. It will provide an international forum for the presentation of research and
applications and for lively discussions among researchers as well as industrialists. The workshops
will be held alternately in Europe and Asia.
The scientific focus of the Europe-Asia Spoken Dialogue Systems Technology Workshop is on
advanced speech-based human-computer interaction, in which contextual factors are modelled and
taken into account to a greater extent when users communicate with computers. Future interfaces
will be endowed with more human-like capabilities. For example, the emotional state of the user
will be analyzed so that the dialogue flow can automatically adapt to user preferences, state of
knowledge and learning success. Complex knowledge bases and reasoning capabilities will control
ambient devices that automatically adapt to user requirements and communication styles and, in
doing so, help reduce the user's mental load. Future interfaces will ultimately behave like real
partners or cognitive technical assistants to their users.
Topics of interest include mechanisms, architectures, design issues, applications, evaluation and
tools. Prototype and product demonstrations will be very welcome.
The workshop will be held as a Satellite Event of ASRU2009 - Automatic Speech Recognition and
Understanding Workshop; Merano (Italy), December 13-17, 2009.
We welcome you to the workshop.
Gary Geunbae Lee
POSTECH, Pohang
(Korea)
Joseph Mariani
LIMSI-CNRS and
IMMI, Orsay (France)
Wolfgang Minker
Ulm University
(Germany)
Satoshi Nakamura
NICT-ATR, Kyoto
(Japan)
Back to Top

8-30 . (2010-03-15) ICASSP 2010

IEEE ICASSP 2010
International Conference on Acoustics, Speech, and Signal Processing
March 15 – 19, 2010
Sheraton Dallas Hotel * Dallas, Texas, U.S.A.
http://www.icassp2010.com/
The 35th International Conference on Acoustics, Speech, and Signal Processing (ICASSP) will be held at the Sheraton Dallas Hotel, March 15 – 19, 2010. The ICASSP meeting is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 120 lecture and poster sessions on the following topics:
* Audio and electroacoustics
* Bio imaging and signal processing
* Design and implementation of signal processing systems
* Image and multidimensional signal processing
* Industry technology tracks
* Information forensics and security
* Machine learning for signal processing
* Multimedia signal processing
* Sensor array and multichannel systems
* Signal processing education
* Signal processing for communications
* Signal processing theory and methods
* Speech processing
* Spoken language processing
Welcome to Texas, Y'All! Dallas is known for living large and thinking big. As the nation's ninth-largest city, Dallas is exciting, diverse and friendly, factors that contribute to its success as a leading leisure and convention destination. There's a whole new, vibrant Dallas to enjoy: new entertainment districts, dining, shopping, hotels, and arts and cultural institutions, with more on the way. There's never been a more exciting time to visit Dallas than now.
Submission of Papers: Prospective authors are invited to submit full-length, four-page papers, including figures and references, to the ICASSP Technical Committee. All ICASSP papers will be handled and reviewed electronically. The ICASSP 2010 website www.icassp2010.com will provide you with further details. Please note that all submission deadlines are strict.
Tutorial and Special Session Proposals: Tutorials will be held on March 14 and 15, 2010. Brief proposals should be submitted by July 31, 2009, through the ICASSP 2010 website and must include title, outline, contact information for the presenter, and a description of the tutorial and material to be distributed to participants. Special sessions proposals should be submitted by July 31, 2009, through the ICASSP 2010 website and must include a topical title, rationale, session outline, contact information, and a list of invited papers. Tutorial and special session authors are referred to the ICASSP website for additional information regarding submissions.
* Important Deadlines *
Special Session & Tutorial Proposals Due
July 31, 2009
Notification of Special Session & Tutorial Acceptance
September 04, 2009
Submission of Camera-Ready Papers
September 14, 2009
Notification of Paper Acceptance
December 11, 2009
Revised Paper Upload Deadline
January 8, 2010
Author’s Registration Deadline
January 15, 2010
For more detailed information, please visit the ICASSP 2010 official website, http://www.icassp2010.com/.
 
Back to Top

8-31 . (2010-05-11) Speech prosody 2010 Chicago IL USA

SPEECH PROSODY 2010
===============================================================
Every Language, Every Style: Globalizing the Science of Prosody
===============================================================
Call For Papers
===============================================================

Prosody is, as far as we know, a universal characteristic of human speech, founded on the cognitive processes of speech production and perception.  Adequate modeling of prosody has been shown to improve human-computer interfaces, to aid clinical diagnosis, and to improve the quality of second language instruction, among many other applications.

Speech Prosody 2010, the fifth international conference on speech prosody, invites papers addressing any aspect of the science and technology of prosody.  Speech Prosody is the only recurring international conference focused on prosody as an organizing principle for the social, psychological, linguistic, and technological aspects of spoken language.  Speech Prosody 2010 seeks, in particular, to discuss the universality of prosody.  To what extent can the observed scientific and technological benefits of prosodic modeling be ported to new languages, and to new styles of spoken language?  Toward this end, Speech Prosody 2010 especially welcomes papers that create or adapt models of prosody to languages, dialects, sociolects, and/or communicative situations that are inadequately addressed by the current state of the art.

=======
TOPICS
=======

Speech Prosody 2010 will include keynote presentations, oral sessions, and poster sessions covering topics including:

* Prosody of under-resourced languages and dialects
* Communicative situation and speaking style
* Dynamics of prosody: structures that adapt to new situations
* Phonology and phonetics of prosody
* Rhythm and duration
* Syntax, semantics, and pragmatics
* Meta-linguistic and para-linguistic communication
* Signal processing
* Automatic speech synthesis, recognition and understanding
* Prosody of sign language
* Prosody in face-to-face interaction: audiovisual modeling and analysis
* Prosodic aspects of speech and language pathology
* Prosody in language contact and second language acquisition
* Prosody and psycholinguistics
* Prosody in computational linguistics
* Voice quality, phonation, and vocal dynamics

====================
SUBMISSION OF PAPERS
====================

Prospective authors are invited to submit full-length, four-page papers, including figures and references, at http://speechprosody2010.org. All Speech Prosody papers will be handled and reviewed electronically.

===================
VENUE
===================

The Doubletree Hotel Magnificent Mile is located two blocks from North Michigan Avenue, and three blocks from Navy Pier, at the cultural center of Chicago.  The Windy City has been the center of American innovation since the mid nineteenth century, when a railway link connected Chicago to the west coast, civil engineers reversed the direction of the Chicago river, Chicago financiers invented commodity corn (maize), and the Great Chicago Fire destroyed almost every building in the city. The Magnificent Mile hosts scores of galleries and museums, and hundreds of world-class restaurants and boutiques.

===================
IMPORTANT DATES
===================

Submission of Papers (http://speechprosody2010.org): October 15, 2009
Notification of Acceptance:                                           December 15, 2009
Conference:                                                                    May 11-14, 2010

Back to Top

8-32 . (2010-05-17) 7th Language Resources and Evaluation Conference

 The 7th edition of the Language Resources and Evaluation Conference (LREC) will take place in Valletta (Malta) on May 17-23, 2010.
More information will be available soon on: http://www.lrec-conf.org/lrec2010/

Back to Top

8-33 . (2010-05-25) JEP 2010

JEP 2010
         XXVIIIèmes Journées d'Étude sur la Parole

                    Université de Mons, Belgium
 
                         25-28 May 2010


                        http://w3.umh.ac.be/jep2010

=====================================================================

The Journées d'Études de la Parole (JEP) are devoted to the study of spoken communication and its applications. They aim to bring together the French-speaking scientific communities working in this field. The conference is also intended as a friendly forum for exchange between doctoral students and established researchers.

In 2010, the JEP are organized by the Laboratoire des Sciences de la Parole of the Académie Wallonie-Bruxelles, on the campus of the Université de Mons in Belgium, under the aegis of the AFCP
(Association Francophone de la Communication Parlée) and with the support of ISCA (International Speech Communication Association).
A second call for papers specifying the topics and the submission procedures will follow this first call.



IMPORTANT DATES
===============
Submission deadline:          11 January 2010
Notification to authors:      15 March 2010
Conference:                   25-28 May 2010


 
 
V. Delvaux
FNRS Research Associate
Laboratoire de Phonétique
Service de Métrologie et Sciences du Langage
Université de Mons-Hainaut
18, Place du Parc
7000 Mons
Belgium
+3265373140
 
Back to Top

8-34 . (2010-05-19) CfP LREC 2010 - 7th Conference on Language Resources and Evaluation

LREC 2010 - 7th Conference on Language Resources and Evaluation

FIRST ANNOUNCEMENT AND CALL FOR PAPERS

MEDITERRANEAN CONFERENCE CENTRE, VALLETTA - MALTA

MAIN CONFERENCE: 19-20-21 MAY 2010
WORKSHOPS and TUTORIALS: 17-18 MAY and 22-23 MAY 2010

Conference web site: http://www.lrec-conf.org/lrec2010/


The seventh international conference on Language Resources and Evaluation (LREC) will be organised in 2010 by ELRA in cooperation with a wide range of international associations and organisations.


CONFERENCE AIMS

In 12 years – the first LREC was held in Granada in 1998 – LREC has become the major event on Language Resources (LRs) and Evaluation for Human Language Technologies (HLT). The aim of LREC is to provide an overview of the state-of-the-art, explore new R&D directions and emerging trends, exchange information regarding LRs and their applications, evaluation methodologies and tools, ongoing and planned activities, industrial uses and needs, requirements coming from the e-society, both with respect to policy issues and to technological and organisational ones.

LREC provides a unique forum for researchers, industry and funding agencies from across a wide spectrum of areas to discuss problems and opportunities, find new synergies and promote initiatives for international cooperation, in support of investigations in language sciences, progress in language technologies, and development of corresponding products, services, applications and standards.


Special Highlight: Contribute to building the LREC2010 Map!

LREC2010 recognises that the time is ripe to launch an important initiative, the LREC2010 Map of Language Resources, Technologies and Evaluation. The Map will be a collective enterprise of the LREC community, a first step towards the creation of a very broad, community-built Open Resource Infrastructure. As the first in a series, it will become an essential instrument for monitoring the field and identifying shifts in the production, use and evaluation of LRs and LTs over the years.

When submitting a paper, you will be asked on the START page to fill in a very simple template providing essential information about resources (in a broad sense that includes technologies, standards, evaluation kits, etc.) that either have been used for the work described in the paper or are a new result of your research.

The Map will be disclosed at LREC, where some event(s) will be organised around this initiative.


CONFERENCE TOPICS

Issues in the design, construction and use of Language Resources (LRs): text, speech, other associated media and modalities
•    Guidelines, standards, specifications, models and best practices for LRs
•    Methodologies and tools for LRs construction and annotation
•    Methodologies and tools for the extraction and acquisition of knowledge
•    Ontologies and knowledge representation
•    Terminology
•    Integration between (multilingual) LRs, ontologies and Semantic Web technologies
•    Metadata descriptions of LRs and metadata for semantic/content markup
•    Validation, quality assurance, evaluation of LRs
Exploitation of LRs in different types of systems and applications
•    For: information extraction, information retrieval, speech dictation, mobile communication, machine translation, summarisation, semantic search, text mining, inferencing, reasoning, etc.
•    In different types of interfaces: (speech-based) dialogue systems, natural language and multimodal/multisensorial interactions, voice activated services, cognitive systems, etc.
•    Communication with neighbouring fields of applications, e.g. e-government, e-culture, e-health, e-participation, mobile applications, etc.
•    Industrial LRs requirements, user needs
Issues in Human Language Technologies evaluation
•    HLT Evaluation methodologies, protocols and measures
•    Benchmarking of systems and products
•    Usability evaluation of HLT-based user interfaces (speech-based, text-based, multimodal-based, etc.), interactions and dialogue systems
•    Usability and user satisfaction evaluation
General issues regarding LRs & Evaluation
•    National and international activities and projects
•    Priorities, perspectives, strategies in national and international policies for LRs
•    Open architectures
•    Organisational, economical and legal issues


PROGRAMME

The Scientific Programme will include invited talks, oral presentations, poster and demo presentations, and panels.
There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered.


SUBMISSIONS AND DATES

Submitted abstracts of papers for oral and poster or demo presentations should consist of about 1500-2000 words.
•    Submission of proposals for oral and poster/demo papers: 31 October 2009

Proposals for panels, workshops and tutorials will be reviewed by the Programme Committee.
•    Submission of proposals for panels, workshops and tutorials: 31 October 2009


PROCEEDINGS

The Proceedings on CD will include both oral and poster papers, in the same format. They will be added to the ELRA web archives before the conference.
A Book of Abstracts will be printed.


CONFERENCE PROGRAMME COMMITTEE

Nicoletta Calzolari, Istituto di Linguistica Computazionale del CNR - Pisa, Italy (Conference chair)
Khalid Choukri - ELRA, Paris, France
Bente Maegaard - CST, University of Copenhagen, Denmark
Joseph Mariani - LIMSI-CNRS and IMMI, Orsay, France
Jan Odijk - UIL-OTS, Utrecht, The Netherlands
Stelios Piperidis - Institute for Language and Speech Processing (ILSP), Athens, Greece
Mike Rosner – Department of Intelligent Computer Systems, University of Malta, Malta
Daniel Tapias - Sigma Technologies S.L., Madrid, Spain

Back to Top