Contents
- 1 . Editorial
- 2 . ISCA News
- 2-1 . Board elections
- 2-2 . SIG News: SpLC
- 3 . Future ISCA Conferences and Workshops (ITRW)
- 4 . Workshops and conferences supported (but not organized) by ISCA
- 5 . Books, databases and software
- 5-1 . Books
- 5-1-1 . Computeranimierte Sprechbewegungen in realen Anwendungen
- 5-1-2 . Usability of Speech Dialog Systems Listening to the Target Audience
- 5-1-3 . Speech and Language Processing, 2nd Edition
- 5-1-4 . Advances in Digital Speech Transmission
- 5-1-5 . Sprachverarbeitung -- Grundlagen und Methoden der Sprachsynthese und Spracherkennung
- 5-1-6 . Digital Speech Transmission
- 5-1-7 . Distant Speech Recognition
- 5-1-8 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
- 5-1-9 . Some aspects of Speech and the Brain.
- 5-2 . Database providers
- 5-2-1 . ELRA - Language Resources Catalogue - Update
- 5-2-2 . LDC News
- 6 . Job openings
- 6-1 . (2009-01-08) Assistant Professor Toyota Technological Institute at Chicago
- 6-2 . (2009-01-09) Poste d'ingénieur CDD : environnement intelligent
- 6-3 . (2009-01-13) 2009 PhD Research Fellowships at the University of Trento (Italy)
- 6-4 . (2009-02-06) Position at ELDA
- 6-5 . (2009-01-18) PhD position at Universität Karlsruhe
- 6-6 . (2009-01-16) Two post-docs at the University of Rennes (France)
- 6-7 . (2009-01-13) PhD Research Fellowships at University of Trento (Italy)
- 6-8 . (2009-02-15) Research Grants for PhD Students and Postdoc Researchers-Bielefeld University
- 6-9 . (2009-03-09) 9 PhD positions in the Marie Curie International Training Network
- 6-10 . (2009-03-10) Maître de conférences à l'Université Descartes Paris (french)
- 6-11 . (2009-03-14) Institut de linguistique et de phonetique Sorbonne Paris (french)
- 6-12 . (2009-03-15) Poste Maitre de conferences Nanterre Paris (french)
- 6-13 . (2009-03-18) Ingenieur etude/developpement Semantique, TAL, traduction automatique (french)
- 6-14 . (2009-04-02) The Johns Hopkins University: Post-docs, research staff, professors on sabbaticals
- 6-15 . (2009-04-07) PhD Position in The Auckland University - New Zealand
- 6-16 . (2009-04-23) R&D position in SPEECH RECOGNITION, PROCESSING AND SYNTHESIS IRCAM Paris
- 6-17 . (2009-05-04) Several Ph.D. positions and Ph.D. or Postdoc scholarships, Universität Bielefeld
- 6-18 . (2009-05-07) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (FRANCE)
- 6-19 . (2009-06-11) PhD at IRIT Toulouse France
- 6-20 . (2009-06-11) PhD at IRIT Toulouse France
- 6-21 . (2009-06-10) PhD in ASR in Le Mans France
- 6-22 . (2009-06-02) Proposition de sujet de thèse 2009 Analyse de scènes de parole Grenoble France
- 6-23 . (2009-05-11) Thèse Cifre indexation de données multimédia Institut Eurecom
- 6-24 . (2009-05-11) Senior Research Fellowship in Speech Perception and Language Development, MARCS Auditory Laboratories
- 6-25 . (2009-05-08) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
- 6-26 . (2009-05-07) Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld
- 6-27 . (2009-06-17) Two post-docs in the collaboration between CMU (USA) and University-Portugal program
- 7 . Journals
- 7-1 . Special issue IEEE Trans. ASL Signal models and representation of musical and environmental sounds
- 7-2 . "Speech Communication" special issue on "Speech and Face to Face Communication"
- 7-3 . SPECIAL ISSUE of the EURASIP Journal on Audio, Speech, and Music Processing. ON SCALABLE AUDIO-CONTENT ANALYSIS
- 7-4 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing on Atypical Speech
- 7-5 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing on Animating virtual speakers or singers from audio: lip-synching facial animation
- 7-6 . CfP Special issue of Speech Comm: Non-native speech perception in adverse conditions: imperfect knowledge, imperfect signal
- 7-7 . CfP IEEE Special Issue on Speech Processing for Natural Interaction with Intelligent Environments
- 7-8 . CfP Special issue "Speech as a Human Biometric: I know who you are from your voice" Int. Jnl Biometrics
- 7-9 . CfP Special issue on Voice Transformation, IEEE Trans. ASLP
- 7-10 . Mathematics, Computing, Language, and Life: Frontiers in Mathematical Linguistics and Language Theory (tentative)
- 7-11 . CfP Special Issue on Statistical Learning Methods for Speech and Language Processing
- 8 . Future Speech Science and Technology Events
- 8-1 . (2009-06-18) Conferences GIPSA Grenoble
- 8-2 . (2009-06-21) Specom 2009- St Petersburg Russia
- 8-3 . (2009-06-22) Summer workshop at Johns Hopkins University
- 8-4 . (2009-06-22) Third International Conference on Intelligent Technologies for Interactive Entertainment (Intetain 2009)
- 8-5 . (2009-06-24) DIAHOLMIA 2009: THE 13TH WORKSHOP ON THE SEMANTICS AND PRAGMATICS OF DIALOGUE
- 8-6 . (2009-06-24) Speaker Odyssey Brno
- 8-7 . (2009-07-01) Conference on Scalable audio-content analysis
- 8-8 . (2009-07) 6th IJCAI workshop on knowledge and reasoning in practical dialogue systems
- 8-9 . (2009-07-09) MULTIMOD 2009 Multimodality of communication in children: gestures, emotions, language and cognition
- 8-10 . (2009-08-02) ACL-IJCNLP 2009 1st Call for Papers
- 8-11 . (2009-08-10) 16th International ECSE Summer School in Novel Computing (Joensuu, FINLAND)
- 8-12 . (2009-09) Emotion challenge INTERSPEECH 2009
- 8-13 . (2009-09-06) Special session at Interspeech 2009: adaptivity in dialog systems
- 8-14 . (2009-09-07) CfP Information Retrieval and Information Extraction for Less Resourced Languages
- 8-15 . (2009-09-09) CfP IDP 09 Discourse-Prosody Interface
- 8-16 . (2009-09-11) SIGDIAL 2009 CONFERENCE
- 8-17 . (2009-09-11) Int. Workshop on spoken language technology for development: from promise to practice.
- 8-18 . (2009-09-11) ACORNS Workshop Brighton UK
- 8-19 . (2009-09-13) Young Researchers' Roundtable on Spoken Dialogue Systems 2009 London
- 8-20 . (2009-09-14) 7th International Conference on Recent Advances in Natural Language Processing
- 8-21 . (2009-09-14) Student Research Workshop at RANLP (Bulgaria)
- 8-22 . (2009-09-28) ELMAR 2009
- 8-23 . (2009-10-05) 2009 APSIPA ASC
- 8-24 . (2009-10-05) IEEE International Workshop on Multimedia Signal Processing - MMSP'09
- 8-25 . (2009-10-13) CfP ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)
- 8-26 . (2009-10-18) 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
- 8-27 . (2009-10-23) CfP Searching Spontaneous Conversational Speech (SSCS 2009) ACM Multimedia Workshop
- 8-28 . (2009-10-23)ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)
- 8-29 . (2009-11-02) CALL FOR ICMI-MLMI 2009 WORKSHOPS - New dates!
- 8-30 . (2009-11-15) CIARP 2009
- 8-31 . (2009-11-16) 8ème Rencontres Jeunes Chercheurs en Parole (french)
- 8-32 . (2009-12-04) CfP Troisièmes Journées de Phonétique Clinique Aix en Provence France (french)
- 8-33 . (2009-12-09) 1st EUROPE-ASIA SPOKEN DIALOGUE SYSTEMS TECHNOLOGY WORKSHOP
- 8-34 . (2010-05-11) Speech prosody 2010 Chicago IL USA
- 8-35 . (2010-05-17) 7th Language Resources and Evaluation Conference
1 . Editorial
Dear members,
I was editing this June issue when I was informed by our president Isabel Trancoso that Professor Gunnar Fant of KTH Stockholm passed away on June 6th. He was one of our greatest pioneers and was awarded the first ISCA Medal (1989), an honor he fully deserved for his scientific achievements and the great impact of his career on speech communication. His kindness and friendship will always be remembered by his close colleagues and students, and by all of us who were privileged to have met him.
The election of the new ISCA board is complete, and the list of members is published below. In the name of all ISCA members, I thank the new board members for their willingness to serve speech science and technology, and I congratulate them. This is also a good opportunity to thank all board members who are leaving, as well as those who stay for another term, for their excellent work.
Do not forget that registration for INTERSPEECH 2009 (Brighton, UK) is open and that the deadline for early registration is July 15th. I hope to meet you there.
Prof. em. Chris Wellekens
Institut Eurecom
Sophia Antipolis
France
public@isca-speech.org
2 . ISCA News
2-1 . Board elections
Dear ISCA members,

The election for nine new ISCA Board members is now complete. The results are as follows: 338 valid ballots were received during the election period, and the following nine candidates were elected as ISCA Board members for a period of four years from September 2009 (members are listed alphabetically):

Jean-Francois BONASTRE (France)
Nick CAMPBELL (Ireland)
Keikichi HIROSE (Japan)
David HOUSE (Sweden)
Haizhou LI (Singapore)
Douglas O'SHAUGHNESSY (Canada)
Michael PICHENY (USA)
Yannis STYLIANOU (Greece)
Isabel TRANCOSO (Portugal)

Many thanks to all of you who took part in the election. On behalf of the ISCA Board, I would also like to encourage you to attend the ISCA General Assembly, which will take place in Brighton in conjunction with INTERSPEECH 2009. The General Assembly is open to all members of ISCA and provides an excellent opportunity for you to actively participate in the meeting and propose suggestions to help make our association even better.

Looking forward to seeing you in Brighton.

Best regards,
Bernd Möbius
ISCA Treasurer
2-2 . SIG News: SpLC
As SpLC moves into its 9th year of existence, the ISCA Speaker and Language Characterization (SpLC) SIG plans to hold elections again during the next Odyssey Workshop to replace its current chairpersons, Joe Campbell and Doug Reynolds, whose long term has resulted in a very active and growing SIG.
The SpLC's current Board Members are: Lukáš Burget & Honza Černocký (Brno University of Technology), Jean-François Bonastre (University of Avignon, France), Niko Brümmer (Agnitio, South Africa), Joseph Campbell & Douglas Reynolds (MIT Lincoln Laboratory, USA), Alvin Martin (NIST, USA), and Kay Berkling (Inline Internet Online Dienste GmbH, Germany). The SpLC's Secretary and Liaison Representative is now Kay Berkling (Germany), who replaced Ivan Magrin-Chagnolleau (France).
The SpLC's main activity is to support the Odyssey Speaker and Language Recognition Workshop that takes place every other year and has been a steady and important asset to the field.
A successful Speaker Odyssey workshop was held in 2008 in Stellenbosch, South Africa, co-hosted by 'Spescom DataVoice' and 'Stellenbosch University Digital Signal Processing Group' and co-chaired by Niko Brümmer and Prof. Johan du Preez.
The Speaker and Language Characterization Workshop series now moves on to Brno University of Technology in Brno, Czech Republic, which will host the 7th workshop in the series on June 28 - July 1, 2010. Brno is the second largest city in the Czech Republic and the capital of Moravia. The city has a local airport and can easily be reached from the international airports of Prague (200 km) and Vienna (130 km). Odyssey will take place on the scenic campus of BUT's Faculty of Information Technology, featuring a medieval Carthusian monastery and modern lecture halls (see http://www.fit.vutbr.cz/).
While enjoying this setting, an intensive four-day program will await the participants. Judging by the 2008 workshop, the program will include lots of speaker recognition and classification with some language identification. As our website (http://www.speakerodyssey.com) states, topics of interest include speaker and language recognition (verification, identification, segmentation, and clustering): text-dependent and -independent speaker recognition; multispeaker training and detection; speaker characterization and adaptation; features for speaker recognition; robustness in channels; robust classification and fusion; speaker recognition corpora and evaluation; use of extended training data; speaker recognition with speech recognition; forensics; speaker and language confidence estimation.
In 2010, we also look forward to receiving submissions on multimodal and multimedia speaker recognition; dialect and accent recognition; speaker synthesis and transformation; biometrics; human recognition of speakers and languages; and commercial applications.
As usual, the NIST 2010 Speaker Recognition Evaluation (SRE) workshop will precede Odyssey and will also take place in Brno, 24-25 June 2010. For participants attending both the NIST workshop and Odyssey, social activities will be organized over the weekend of 26-27 June.
ISCA grants are available to enable students and young scientists to participate, and we expect large student participation thanks to the convenient location.
Kay Berkling
3 . Future ISCA Conferences and Workshops (ITRW)
3-1 . (2009-06-25) ISCA Tutorial and Research Workshop on NON-LINEAR SPEECH PROCESSING
An ISCA Tutorial and Research Workshop on NON-LINEAR SPEECH PROCESSING (NOLISP'09)
June 25-27, 2009 - Deadline: 2009-03-15
Vic, Catalonia, Spain
http://nolisp2009.uvic.cat

After the success of NOLISP'03 held in Le Croisic, NOLISP'05 in Barcelona and NOLISP'07 in Paris, we are pleased to present NOLISP'09, to be held at the University of Vic (Catalonia, Spain) on June 25-27, 2009. The workshop will feature invited lectures by leading researchers as well as contributed talks. The purpose of NOLISP'09 is to present and discuss novel ideas, work and results related to alternative techniques for speech processing that depart from mainstream approaches. Prospective authors are invited to submit a 3 to 4 page paper proposal in English, which will be evaluated by the Scientific Committee. Final papers will be due one month after the workshop to be included in the CD-ROM proceedings.

Contributions are expected in (but not restricted to) the following areas: non-linear approximation and estimation; non-linear oscillators and predictors; higher-order statistics; independent component analysis; nearest neighbours; neural networks; decision trees; non-parametric models; dynamics of non-linear systems; fractal methods; chaos modelling; non-linear differential equations.

All fields of speech processing are targeted by the workshop, namely: speech production, speech analysis and modelling, speech coding, speech synthesis, speech recognition, speaker identification/verification, speech enhancement/separation, speech perception, etc.

ADDITIONAL INFORMATION: Proceedings will be published in Springer-Verlag's Lecture Notes in Computer Science (LNCS) series. LNCS is published, in parallel to the printed books, in full-text electronic form. All contributions should be original, and must not have been previously published, nor be under review for presentation elsewhere.
A special issue of Speech Communication (Elsevier) on “Non-Linear and Non-Conventional Speech Processing” will also be published after the workshop. Detailed instructions for submission to NOLISP'09 and further information will be available at the conference Web site (http://nolisp2009.uvic.cat).

IMPORTANT DATES:
* March 15, 2009 - Submission (full papers)
* April 30, 2009 - Notification of acceptance
* September 30, 2009 - Final (revised) paper
3-2 . (2009-09-06) CfP INTERSPEECH 2009 Brighton UK
3-3 . (2010-09-26) INTERSPEECH 2010 Chiba Japan
Chiba, Japan
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."
3-4 . (2011-08-27) INTERSPEECH 2011 Florence Italy
Interspeech 2011
Palazzo dei Congressi, Italy, August 27-31, 2011.
Organizing committee
Piero Cosi (General Chair),
Renato De Mori (General Co-Chair),
Claudia Manfredi (Local Chair),
Roberto Pieraccini (Technical Program Chair),
Maurizio Omologo (Tutorials),
Giuseppe Riccardi (Plenary Sessions).
More information www.interspeech2011.org
4 . Workshops and conferences supported (but not organized) by ISCA
4-1 . (2009-12-13) ASRU 2009
4-2 . (2009-12-14) 6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications MAVEBA 2009
4-3 . (2009-11-05) Workshop on Child, Computer and Interaction
Call for Papers
5 . Books, databases and software
5-1 . Books
5-1-1 . Computeranimierte Sprechbewegungen in realen Anwendungen
5-1-2 . Usability of Speech Dialog Systems Listening to the Target Audience
5-1-3 . Speech and Language Processing, 2nd Edition
5-1-4 . Advances in Digital Speech Transmission
5-1-5 . Sprachverarbeitung -- Grundlagen und Methoden der Sprachsynthese und Spracherkennung
5-1-6 . Digital Speech Transmission
5-1-7 . Distant Speech Recognition
5-1-8 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
5-1-9 . Some aspects of Speech and the Brain.
5-2 . Database providers
5-2-1 . ELRA - Language Resources Catalogue - Update
*****************************************************************
ELRA - Language Resources Catalogue - Update
*****************************************************************
ELRA is happy to announce that 1 new Speech Corpus is now available in its catalogue:
ELRA-S0299 Alcohol Language Corpus (BAS ALC)
ALC contains recordings of 88 German speakers who are either intoxicated or sober. The type of speech ranges from read single digits to full conversational style. Recordings were made during drinking tests in which speakers drank beer or wine to reach a self-chosen level of alcoholic intoxication. Recordings were performed in two standing automobiles. In the intoxicated state, 30 items were sampled from each speaker, while in the sober state 60 items were recorded.
For more information, see: http://catalog.elra.info/product_info.php?products_id=1097
For more information on the catalogue, please contact Valérie Mapelli mailto:mapelli@elda.org
Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html
5-2-2 . LDC News
In this Newsletter:
- LDC at MEDAR Conference -
- Early Renewing LDC Members Saved Big! -
- Membership Mailbag: Navigating the LDC Intranet - Part 2 -
- LDC Offices to Close for Memorial Day -
LDC at MEDAR Conference
LDC was pleased to attend the 2nd International Conference on Arabic Language Resources and Tools recently held in
Cieri and other conference attendees were interviewed by Emmy Adul Alim, a staff reporter for IslamOnline.net, a MEDAR sponsor. The resulting article, “The Breakthrough of Arabic Language Technologies”, discusses the accomplishments and challenges of creating accessible Arabic human language technologies. Cieri highlighted LDC’s work with al-hakawati, the Arab Cultural Trust, to identify and digitize Arabic heritage texts. Al-hakawati makes the digitized materials immediately available on its website to end users, and LDC is developing a database of these texts that scholars can study for language change over time and across genres.
You can view LDC papers and poster presentations, including those from the MEDAR Conference, on our Papers page. Papers date from 1998 forward and most can be downloaded in pdf format. Presentation slides and posters are available for several papers as well.
Early Renewing LDC Members Saved Big!
The numbers are in and LDC's early renewal discount program was a success! Nearly 100 organizations who renewed membership or joined early received a discount on fees for Membership Year (MY) 2009. Taken together, these members saved over US$50,000! MY 2008 members are reminded that they are still eligible for a 5% discount when renewing. This discount will apply throughout 2009, regardless of time of renewal.
By joining for MY 2009, any organization can take advantage of membership benefits including free membership year data as well as deep discounts on older LDC corpora. Please visit our Members FAQ for further information.
Membership Mailbag: Navigating the LDC Intranet - Part 2
LDC's Membership office responds to a few thousand emailed queries a year, and, over time, we've noticed that some questions tend to crop up with regularity. To address the questions that you, our data users, have asked, we'd like to continue our Membership Mailbag series of newsletter articles. Last month we focused on a few features of the LDC Intranet including establishing an account and using that account to access information about your organization's history with LDC. This month, we'll take a look into using your account to access password-protected corpora and resources.
LDC's Intranet contains the following links:
User
Customer Profile
LDC Online
Corpora Available for Download
This month, we look at the LDC Online and Corpora Available for Download sections. After registering for an LDC Intranet account, users can access LDC Online both through the LDC Intranet and the LDC Online page on LDC's website. LDC Online contains an indexed collection of Arabic, Chinese and English newswire text, millions of words of English telephone speech from the Switchboard and Fisher collections and the American English Spoken Lexicon, as well as the full text of the Brown corpus.
To download corpora that your organization has licensed, visit the Corpora Available for Download section. This section contains all web-download corpora the organization has licensed, with the most recently invoiced requests listed first. Any registered user of an organization can utilize the web-download service at any time to view and access the corpora that have been invoiced for delivery over the web. This section will not contain all corpora that an organization has licensed, only those small enough for web-download.
Recently, LDC has made available for web-download some popular resources which were previously distributed only on disc. These resources include TIMIT Acoustic-Phonetic Continuous Speech Corpus (LDC93S1), CELEX2 (LDC96L14), and Treebank-3 (LDC99T42). If an organization has obtained a license to any of these resources, registered users can simply log in to download the data, thereby eliminating the need to locate the copy on disc or license a new copy.
Got a question? About LDC data? Forward it to ldc@ldc.upenn.edu. The answer may appear in a future Membership Mailbag article.
New Publications
(1) 2008 CoNLL Shared Task Data contains the trial corpus, training corpus, and development and test data for the 2008 CoNLL (Conference on Computational Natural Language Learning) Shared Task Evaluation. The 2008 Shared Task developed syntactic dependency annotations, including information such as named-entity boundaries, and semantic dependencies that model the roles of both verbal and nominal predicates. The materials in the Shared Task data consist of excerpts from the following corpora: Treebank-3 (LDC99T42), BBN Pronoun Coreference and Entity Type Corpus (LDC2005T33), Proposition Bank I (PropBank, LDC2004T14) and NomBank v 1.0 (LDC2008T23).
The Conference on Computational Natural Language Learning (CoNLL) is accompanied every year by a shared task intended to promote natural language processing applications and evaluate them in a standard setting. The 2008 shared task employed a unified dependency-based formalism and merged the task of syntactic dependency parsing and the task of identifying semantic arguments and labeling them with semantic roles.
The 2008 shared task was divided into three subtasks:
- parsing syntactic dependencies
- identification and disambiguation of semantic predicates
- identification of arguments and assignment of semantic roles for each predicate
Several objectives were addressed in this shared task:
- Semantic Role Labeling (SRL) was performed and evaluated using a dependency-based representation for both syntactic and semantic dependencies. While SRL on top of a dependency treebank has been addressed before, the approach of the 2008 Shared Task was characterized by the following novelties:
- The constituent-to-dependency conversion strategy transformed all annotated semantic arguments in PropBank and NomBank v 1.0, not just a subset;
- The annotations addressed propositions centered around both verbal (PropBank) and nominal (NomBank) predicates.
- Based on the observation that a richer set of syntactic dependencies improves semantic processing, the syntactic dependencies modeled are more complex than the ones used in the previous CoNLL shared tasks. For example, the corpus includes apposition links, dependencies derived from named entity (NE) structures, and better modeling of long-distance grammatical relations.
- A practical framework is provided for the joint learning of syntactic and semantic dependencies.
2008 CoNLL Shared Task Data is distributed via web-download.
2009 Subscription Members will automatically receive two copies of this corpus on disc. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$750.
(2) English Gigaword Fourth Edition. English Gigaword, now being released in its fourth edition, is a comprehensive archive of newswire text data that has been acquired over several years by the LDC at the University of Pennsylvania. The fourth edition includes all of the contents of English Gigaword Third Edition (LDC2007T07) plus new data covering the 24-month period from January 2007 through December 2008.
The six distinct international sources of English newswire included in this edition are the following:
- Agence France-Presse, English Service (afp_eng)
- Associated Press Worldstream, English Service (apw_eng)
- Central News Agency of Taiwan, English Service (cna_eng)
- Los Angeles Times/Washington Post Newswire Service (ltw_eng)
- New York Times Newswire Service (nyt_eng)
- Xinhua News Agency, English Service (xin_eng)
New in the Fourth Edition:
- Articles with significant Spanish language content have now been identified and documented.
- Markup has been simplified and made consistent throughout the corpus.
- Information structure has been simplified.
- Character entities have been simplified.
English Gigaword Fourth Edition is distributed on two DVD-ROMs.
2009 Subscription Members will automatically receive two copies of this corpus. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$5000.
(3) GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 contains a total of 145,000 words (263 files) of Arabic newsgroup text and its translation selected from thirty-five sources. Newsgroups consist of posts to electronic bulletin boards, Usenet newsgroups, discussion groups and similar forums. This release was used as training data in Phase 1 (year 1) of the DARPA-funded GALE program. This is the second of a two-part release. GALE Phase 1 Arabic Newsgroup Parallel Text - Part 1 was released in early 2009.
Preparing the source data involved four stages of work: data scouting, data harvesting, formatting and data selection.
Data scouting involved manually searching the web for suitable newsgroup text. Data scouts were assigned particular topics and genres along with a production target in order to focus their web search. Formal annotation guidelines and a customized annotation toolkit helped data scouts to manage the search process and to track progress.
Data scouts logged their decisions about potential text of interest to a database. A nightly process queried the annotation database and harvested all designated URLs. Whenever possible, the entire site was downloaded, not just the individual thread or post located by the data scout. Once the text was downloaded, its format was standardized so that the data could be more easily integrated into downstream annotation processes. Typically, a new script was required for each new domain name that was identified. After scripts were run, an optional manual process corrected any remaining formatting problems.
The selected documents were then reviewed for content-suitability using a semi-automatic process. A statistical approach was used to rank a document's relevance to a set of already-selected documents labeled as "good." An annotator then reviewed the list of relevance-ranked documents and selected those which were suitable for a particular annotation task or for annotation in general. These newly-judged documents in turn provided additional input for the generation of new ranked lists.
Manual sentence units/segments (SU) annotation was also performed as part of the transcription task. Three types of end of sentence SU were identified: statement SU, question SU, and incomplete SU. After transcription and SU annotation, files were reformatted into a human-readable translation format and assigned to professional translators for careful translation. Translators followed LDC's GALE Translation guidelines which describe the makeup of the translation team, the source data format, the translation data format, best practices for translating certain linguistic features and quality control procedures applied to completed translations.
GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 is distributed via web-download.
2009 Subscription Members will automatically receive two copies of this corpus on disc. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$1500.
6 . Job openings
We invite all laboratories and industrial companies that have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website free of charge (also have a look at http://www.isca-speech.org/jobs.html as well as http://www.elsnet.org/ Jobs).
The ads will be automatically removed from ISCApad after 6 months. Informing the ISCApad editor when positions are filled will avoid unnecessary mail between applicants and advertisers.
6-1 . (2009-01-08) Assistant Professor Toyota Technological Institute at Chicago
Assistant Professor, Toyota Technological Institute at Chicago

Toyota Technological Institute at Chicago (http://www.tti-c.org) is a philanthropically endowed academic computer science institute, dedicated to basic research and graduate education in computer science. TTI-C opened for operation in 2003 and by 2010 plans to have 12 tenured and tenure-track faculty and 18 research (3-year) faculty. Regular faculty will have a teaching load of at most one course per year, and research faculty will have no teaching responsibilities.

Applications are welcome in all areas of computer science, but TTI-C is currently focusing on a number of areas including speech and language processing. For all positions we require a Ph.D. degree or Ph.D. candidacy, with the degree conferred prior to date of hire. Applications received after December 31 may not get full consideration. Applications can be submitted online at http://www.tti-c.org/facultyapplication
6-2 . (2009-01-09) Poste d'ingénieur CDD : environnement intelligent
Fixed-term (CDD) engineer position: intelligent environments
Engineer - fixed-term contract (CDD)
Deadline: 15/02/2008
http://ims.metz.supelec.fr/spip.php?article99
An 18-month fixed-term engineer position is open on the Metz campus of Supélec. The candidate will join the "Information, Multimodality & Signal" team (http://ims.metz.supelec.fr). This team of 15 people is active in the fields of digital signal and information processing (statistical signal processing, machine learning, biologically inspired methods), knowledge representation (data mining, symbolic analysis and learning), and intensive and distributed computing. The position targets a profile suited to the integrated hardware implementation of the methods developed within the team in applications related to intelligent environments, as well as their maintenance. The Metz campus has equipped itself with a full-scale platform reproducing an intelligent room integrating cameras, microphones, infrared sensors, human-machine interfaces (voice interface, brain-computer interface), robots and information display facilities. The task will consist in building an integrated platform for rapidly deploying demonstrations in this environment and maintaining them.
Desired profile:
– an engineering degree in computer science, or an equivalent university degree,
– experience working in multidisciplinary teams,
– a good command of English is a plus.
More information is available on the team's website (http://ims.metz.supelec.fr).
Applications (CV + cover letter) should be sent to O. Pietquin: olivier.pietquin@supelec.fr.
6-3 . (2009-01-13) 2009 PhD Research Fellowships at the University of Trento (Italy)
2009 PhD Research Fellowships
=============================
The Adaptive Multimodal Information and Interface Research Lab
(casa.disi.unitn.it) at University of Trento (Italy) has several
PhD Research fellowships in the following areas:
Statistical Machine Translation
Natural Language Processing
Automatic Speech Recognition
Machine Learning
Spoken/Multimodal Conversational Systems
We are looking for students with _excellent_ academic records
and a relevant technical background. Students with a Master's degree
in EE or CS (or equivalent) are welcome; other related disciplines
will also be considered. Prospective students are encouraged to look
at the lab website for current and past research projects.
PhD research fellowships benefits are described in the graduate school
website (http://ict.unitn.it/disi/edu/ict).
Applicants should be fluent in _English_. Competence in Italian
is optional, and applicants are encouraged to acquire it during
training. All applicants should have very good programming skills.
The University of Trento is an equal opportunity employer.
The selection of candidates will be open until positions are filled.
Interested applicants should send their CV, together with a
statement of research interests, transcripts and three reference
letters, to:
Prof. Dr.-Ing. Giuseppe Riccardi
Email: riccardi@disi.unitn.it
-------------------
About University of Trento and Information Engineering and Computer
Science Department
The University of Trento is consistently ranked as a
premier Italian graduate institution (see www.disi.unitn.it).
Please visit the DISI Doctorate school website at http://ict.unitn.it/edu/ict
DISI Department
DISI has a strong focus on interdisciplinarity, with professors with
international backgrounds drawn from different faculties of the
University (Physical Science, Electrical Engineering, Economics,
Social Science, Cognitive Science, Computer Science).
DISI aims to exploit the complementary expertise present in its
various research areas in order to develop innovative methods,
technologies, applications and advanced services.
English is the official language.
--
Prof. Ing. Giuseppe Riccardi
Marie Curie Excellence Leader
Department of Information Engineering and Computer Science
University of Trento
Room D11, via Sommarive 14
38050 Povo di Trento, Italy
tel : +39-0461 882087
email: riccardi@dit.unitn.it
6-4 . (2009-02-06) Position at ELDA
The Evaluations and Language resources Distribution Agency (ELDA) is offering a 6-month to 1-year internship in Human Language Technology for the Arabic language, with a special focus on Machine Translation (MT) and Multilingual Information Retrieval (MLIR). The internship is organised in the framework of the European project MEDAR (MEDiterranean ARabic language and speech technology). The intern will work in ELDA's offices in Paris, and the main work will consist of developing and adapting open-source MT and MLIR software for Arabic.
http://www.medar.info
http://www.elda.org
Qualifications:
---------------
The applicant should have a high-quality degree in Computer Science. Good programming skills in C, C++ and Perl, and familiarity with the Eclipse environment, are required.
The applicant should have a good knowledge of Linux and open source software.
Interest in Speech/Text Processing, Machine Learning, Computational Linguistics, or Cognitive Science is a plus.
Proficiency in written English is required.
Starting date:
--------------
February 2009.
Applications
-------------
Applications should in the first instance be made by email to
Djamel Mostefa, Head of the Production and Evaluation department, ELDA, email: mostefa _AT_ elda.org
Please include a cover letter and your CV.
6-5 . (2009-01-18) Ph D position at Universitaet Karlsruhe
Ph.D. position
in the field of
Multimodal Dialog Systems
is to be filled immediately with a salary according to TV-L, E13.
The responsibilities include basic research in the area of multimodal dialog systems, especially multimodal human-robot interaction and learning robots, within application targeted research projects in the area of multimodal Human-machine interaction. Set in a framework of internationally and industry funded research programs, the successful candidate(s) are expected to contribute to the state-of-the art of modern spoken dialog systems, improving natural interaction with robots.
We are an internationally renowned research group with an excellent infrastructure. Current research projects for improving human-machine and human-to-human interaction focus on dialog management for human-robot interaction.
Applicants are expected to have:
- an excellent university degree (M.S., Diploma or Ph.D.) in Computer Science, Computational Linguistics, or a related field
- excellent programming skills
- advanced knowledge in at least one of the fields of Speech and Language Processing, Pattern Recognition, or Machine Learning
For candidates with Bachelor or Master’s degrees, the position offers the opportunity to work toward a Ph.D. degree.
In line with the university's policy of equal opportunities, applications from qualified women are particularly encouraged. Disabled applicants will be given preference in the case of equal qualifications.
Questions may be directed to: Hartwig Holzapfel, Tel. 0721 608 4057, E-Mail: hartwig@ira.uka.de, http://isl.ira.uka.de
Applications should be sent to Professor Waibel, Institut für Theoretische Informatik, Universität Karlsruhe (TH), Adenauerring 4, 76131 Karlsruhe, Germany.
6-6 . (2009-01-16) Two post-docs at the University of Rennes (France)
Two post-doc positions on sparse representations at IRISA, Rennes
Deadline: 28/02/2009
Contact: stephanie.lemaile@irisa.fr
Two postdoc positions are open in the METISS team at INRIA, Rennes, France, in the area of data analysis and signal processing for large-scale data. INRIA, the French National Institute for Research in Computer Science and Control, plays a leading role in the development of Information and Communication Science and Technology (ICST) in France. The METISS project team gathers more than 15 researchers and engineers for research in audio signal and speech modelling and processing.
The positions are open in the context of the European project SMALL (Sparse Models, Algorithms and Learning for Large-scale data), within the FET-Open program of FP7, and of the ECHANGE project (ECHantillonnage Acoustique Nouvelle GEnération), funded by the French ANR. The objective of the SMALL project is to build a theoretical framework with solid foundations, as well as efficient algorithms, to discover and exploit structure in large-scale multimodal or multichannel data, using sparse signal representations. The SMALL consortium is made up of 5 academic partners located in four countries (France, United Kingdom, Switzerland, and Israel). INRIA is the scientific coordinator of the SMALL project. INRIA is also the coordinator of the ECHANGE project, which gathers three academic partners (Institut Jean Le Rond d'Alembert and Institut Jacques Louis Lions from Université Paris 6, and INRIA). The objective of ECHANGE is to design a theoretical and experimental framework based on sparse representations and compressed sensing to measure and process large complex acoustic fields through a limited number of acoustic sensors.
DESCRIPTION: The postdocs will work on theoretical, algorithmic and practical aspects of sparse representations of large-dimensional data, with a particular emphasis on acoustic fields, for applications such as compressed sensing, source separation and localization, and signal classification.
REQUESTED PROFILE: Candidates should hold a Ph.D. in Signal/Image Processing, Machine Learning, or Applied Mathematics. Previous experience with sparse representations (time-frequency and time-scale transforms, pursuit algorithms, support vector machines and related approaches) is desirable, as well as a strong taste for the mathematical aspects of signal processing.
ADDITIONAL INFORMATION: For additional technical information, please contact remi.gribonval@inria.fr
DURATION OF THE CONTRACT: The positions, funded for at least two years (up to three), will be renewed on a yearly basis depending on scientific progress and achievement. The gross minimum salary will be 28,287 € annually (about 1,923 € net per month) and will be adjusted according to experience. The usual benefits of any French institution (medical insurance, etc.) will be provided.
TENTATIVE RECRUITING DATE: 01.03.2009, or as soon as possible.
PLACE OF EMPLOYMENT: INRIA Rennes – Bretagne Atlantique (France). Websites: http://www.irisa.fr/ and http://www.inria.fr
SCIENTIFIC COORDINATOR: Rémi GRIBONVAL, SMALL/ECHANGE project leader, METISS Project-Team, INRIA Bretagne Atlantique. Email: remi.gribonval@inria.fr, phone: +33 2 99 84 25 06
APPLICATIONS: Please send application files (a motivation letter, a full resume, a statement of research interests, a list of publications, and up to five reference letters) to Stéphanie Lemaile, SMALL/ECHANGE administrative assistant. Email: stephanie.lemaile@irisa.fr. Deadline: end of February 2009.
http://gdr-isis.org/rilk/gdr/Kiosque/poste.php?jobid=3051
6-7 . (2009-01-13) Ph D Research fellowships at University of Trento (Italy)
2009 PhD Research Fellowships — see section 6-3 above; this announcement is identical.
6-8 . (2009-02-15) Research Grants for PhD Students and Postdoc Researchers-Bielefeld University
6-9 . (2009-03-09) 9 PhD positions in the Marie Curie International Training Network
Up to 9 PhD Positions available in
the Marie Curie International Training Network on
Speech Communication with Adaptive LEarning (SCALE)
SCALE is a cooperative project between
· IDIAP Research Institute, Martigny, Switzerland
· Radboud University Nijmegen, The Netherlands (Prof Lou Boves, Dr Louis ten Bosch, Dr-ir Bert Cranen, Dr O. Scharenborg)
· RWTH Aachen University, Germany
Companies like Toshiba or Philips Speech Recognition Systems/Nuance are associated partners of the program.
Each PhD position is funded for three years and degrees can be obtained from the participating academic institutions.
Distinguishing features of the cooperation include:
· Joint supervision of dissertations by lecturers from two partner institutions
· While staying at one institution for most of the time, the program includes a stay of three to nine months at a second partner institution, either academic or industrial
· An intensive research exchange program between all participating institutions
PhD projects will be in the area of
· Automatic Speech Recognition
· Machine learning
· Speech Synthesis
· Signal Processing
· Human speech recognition
The salary of a PhD position is roughly 33,800 Euro per year. There are additional mobility allowances (up to 800 Euro/month) and a yearly travel allowance. Applicants should hold a strong university degree which would entitle them to embark on a doctorate (Masters/diploma or equivalent) in a relevant discipline, and should be in the first four years of their research careers. As the project is funded by an EU mobility scheme, certain mobility requirements also apply.
Women are particularly encouraged to apply.
Deadlines for applications:
After each deadline all submitted applications will be reviewed and positions awarded until all positions are filled.
Applications should be submitted at http://www.scale.uni-saarland.de/index.php?authorsInstructions=1 .
To be fully considered, please include:
- a curriculum vitae indicating degrees obtained, disciplines covered
(e.g. list of courses ), publications, and other relevant experience
- a sample of written work (e.g. research paper, or thesis,
preferably in English)
- copies of high school and university certificates, and transcripts
- two references (e-mailed directly to the SCALE office
(Diana.Schreyer@LSV.Uni-Saarland.De) before the deadline)
- a statement of research interests, previous knowledge and activities
in any of the relevant research areas.
In case an application can only be submitted by regular post, it should
be sent to:
SCALE office
Spoken Language Systems, FR 7.4
C 71 Office 0.02
D-66041 Saarbruecken, Germany
If you have any questions, please contact Prof. Dr. Dietrich Klakow
(Dietrich.Klakow@LSV.Uni-Saarland.De).
For more information see also http://www.scale.uni-saarland.de/
6-10 . (2009-03-10) Maître de conférences position at Université Paris Descartes
A maître de conférences (associate professor) position in computer science (CNU section 27, reference 27MCF0031) is open at Université Paris Descartes.
The aim of this recruitment is to strengthen the research theme of speech processing for the detection and remediation of voice disorders. The candidate is expected to have solid experience in automatic speech processing (recognition, synthesis, etc.).
On the teaching side, all degrees of the Mathematics and Computer Science department are concerned: the MIA Bachelor's degree, the Master's in Mathematics and Computer Science, and the MIAGE Master's.
Contact: Marie-José Caraty
Professor of Computer Science
CRIP5 - Diadex (Dialogue and indexing)
Université Paris Descartes
45, rue des Saints Pères - 75270 Paris cedex 06
<mailto:Marie-Jose.Caraty@ParisDescartes.fr>
Tel: (33/0) 1 42 86 38 48
6-11 . (2009-03-14) Professor position at the Institut de linguistique et de phonétique, Sorbonne Nouvelle, Paris
UNIVERSITE PARIS 3 (SORBONNE NOUVELLE) - Position no. 3743
Field: 07 - Language sciences: general linguistics and phonetics; Computer Science and Natural Language Processing
Location: PARIS 75005. Status: vacant.
Application address:
17, RUE DE LA SORBONNE
Bureau du personnel enseignant
PR - 7eme - 0743
75005 - PARIS
Administrative contact: MARTINE GRAFFAN (GESTION MCF)
Phone: 01 40 46 28 96 / 01 40 46 28 92
Fax: 01 43 25 74 71
Email: Martine.Graffan@univ-paris3.fr
Starting date: 01/09/2009
Keywords:
Teaching profile:
Department (UFR): Institut de linguistique et phonétique générales et appliquées
UFR reference: 0751982X
Laboratories:
EA2290 - SYSTEMES LINGUISTIQUES, ENONCIATION ET DISCURSIVITE (SYLED)
UMR7018 - LABORATOIRE DE PHONETIQUE ET PHONOLOGIE
EA1483 - RECHERCHE SUR LE FRANCAIS CONTEMPORAIN
UMR7107 - LABORATOIRE DES LANGUES ET CIVILISATIONS A TRADITION ORALE (LACITO)
Additional information
Teaching:
Profile:
Teaching will range from the first year of the Bachelor's programme in Language Sciences up to the Doctorate in Language Sciences, NLP specialisation. The training in Natural Language Processing can, of course, also find applications in the Master's in Language Sciences, specialisation "Langage, Langues, Modèles", and in Doctorates in Language Sciences with other specialisations.
The position will involve supervising courses combining Language Sciences and Natural Language Processing, oriented both towards further study at Master's and doctoral level and towards professionalisation, preparing students for careers in the language industries.
Teaching department: UFR de Linguistique et Phonétique Générales et Appliquées
Location: 19, rue des Bernardins, 75005 - PARIS
Teaching team:
Department head: Madame Martine VERTALIER
Department head phone: 01 44 32 05 79
Department head email: Martine.Vertalier@univ-paris3.fr
Department URL: /
Research:
Profile:
Development and supervision of research in NLP: research on large oral and/or written corpora in a variety of languages, possibly including data mining and grammar induction, but also carried out in synergy, within the established research teams, with groups working on other research areas, by contributing theoretical and technological resources. The professor will conduct his or her research within Doctoral School 268 of Paris 3, primarily in the team that founded the training and research programmes described above, SYLED, in particular its CLA2t component (Centre de Lexicométrie et d'Analyse Automatique des Textes), or in a team whose faculty contribute to teaching and research at ILPAG: the Laboratoire de Phonétique et Phonologie (UMR 7018) or the Laboratoire des Langues et Civilisations à Tradition Orale (LACITO, UMR 7107).
Locations:
1- EA 2290 SYLED, 19, rue des Bernardins, 75005 Paris
2- UMR 7018 Laboratoire de phonétique et phonologie, 19, rue des Bernardins, 75005 Paris
3- EA 1483 Recherche sur le Français Contemporain, 19, rue des Bernardins, 75005 Paris
4- UMR 7107 LACITO CNRS, 7, rue G. Môquet, 94800 Villejuif
Laboratory directors:
1- M. André SALEM, 01 44 32 05 84
2- Mme Jacqueline VAISSIERE and Mme Annie RIALLAND, 01 43 26 57 17
3- Mme Anne SALAZAR-ORVIG, 01 44 32 05 07
4- Mme Zlatka GUENTCHEVA, 01 49 58 37 78
Laboratory director emails: syled@univ-paris3.fr - jacqueline.vaissiere@univ-paris3.fr - anne.salazar-orvig@univ-paris3.fr - lacito@vjf.cnrs.fr
6-12 . (2009-03-15) Maître de conférences position, Université Paris X, Nanterre
MCF (maître de conférences) position, no. 221: Linguistics: pathologies of language acquisition
Université Paris X, Nanterre, Department of Language Sciences
Contact: Anne Lacheret, anne@lacheret.com
Preference will be given to candidates with a dual profile:
linguistics plus speech therapy or a related discipline.
6-13 . (2009-03-18) Design/development engineer: semantics, NLP, machine translation (France)
Design & Development Engineer (m/f)
POSITION BASED IN NORD-PAS-DE-CALAIS, FRANCE (62)
Backed by the continuous growth of its activities and sustained investment in R&D, our CLIENT, a European leader in information processing, is recruiting a development engineer (m/f) specialised in semantics, natural language processing, machine translation and cross-lingual information retrieval tools, and systems for managing multilingual linguistic resources (dictionaries, lexicons, translation memories, aligned corpora).
Passionate about applying the most advanced technologies to industrial information processing, you will design, develop and industrialise the document-processing chains used by the production lines on behalf of the company's clients.
With a higher degree in computer science (five years of higher education or equivalent), autonomous and creative, you will join a dynamic, human-scale organisation where constant innovation serves production and the client.
You ideally have 2-3 years of experience in object-oriented programming and software development processes. Command of C++ and/or Java is essential.
Proficiency in English is required in order to work in a group with an international reach.
Your analytical skills and your sense of service and customer commitment will enable you to meet the challenge we offer.
6-14 . (2009-04-02) The Johns Hopkins University: Post-docs, research staff, professors on sabbaticals
6-15 . (2009-04-07) PhD position at The University of Auckland, New Zealand
PhD Position at The University of Auckland, New Zealand
Speech recognition for healthcare robotics
Description: This project is the speech-recognition component of a larger project on a speech-enabled command module with verbal-feedback software, designed to facilitate interaction between elderly people and robots. It covers speech generation and empathetic speech expression by the robot, as well as speech recognition by the robot. For more details, please see: https://wiki.auckland.ac.nz/display/csihealthbots/Speech+recognition+PhD
6-16 . (2009-04-23) R&D position in SPEECH RECOGNITION, PROCESSING AND SYNTHESIS IRCAM Paris
RESEARCH AND DEVELOPMENT POSITION IN SPEECH RECOGNITION, PROCESSING AND SYNTHESIS
=========================================================================
The position is available immediately in the Speech group of the Analysis/Synthesis team at Ircam.
The Analysis/Synthesis team undertakes research and development
centered on new and advanced algorithms for analysis, synthesis and
transformation of audio signals, and, in particular, speech.
JOB DESCRIPTION:
A full-time position is open for research and development of advanced
statistical and signal processing algorithms in the fields of speech
recognition, transformation and synthesis.
http://www.ircam.fr/anasyn.html (projects Rhapsodie, Respoken,
Affective Avatars, Vivos, among others)
The applications in view are, for example,
- Transformation of the identity, type and nature of a voice
- Text-to-Speech and expressive Speech Synthesis
- Synthesis from actor and character recordings.
The principal task is the design and development of new algorithms
for some of the subjects above, in collaboration with the other
members of the Speech group. The research environment is Linux, Matlab
and various scripting languages such as Perl. The development environment
is C/C++, in particular for Windows.
REQUIRED EXPERIENCE AND COMPETENCE:
O Excellent research experience in statistics, speech and signal processing
O Experience in speech recognition, automatic segmentation (e.g. HTK)
O Experience of C++ development
O Good knowledge of UNIX and Windows environments
O High productivity, methodical work, and excellent programming style.
AVAILABILITY:
The position is available in the Analysis/Synthesis team of the Research
and Development department of Ircam, to start as soon as possible.
DURATION:
The initial contract is for one year and may be extended.
EEC WORKING PAPERS:
In order to begin work immediately, the candidate must have valid EEC working papers.
SALARY:
Commensurate with education and experience.
TO APPLY:
Please send your CV, describing in detail your level of knowledge,
expertise and experience in the fields mentioned above (together with
any other relevant information, recommendations in particular), preferably by email to:
Xavier.Rodet@ircam.fr (Xavier Rodet, Head of the Analysis/Synthesis team)
Or by fax: (33 1) 44 78 15 40, attention of Xavier Rodet
Or by post to: Xavier Rodet, IRCAM, 1 Place Stravinsky, 75004 Paris, France
6-17 . (2009-05-04) Several Ph.D. positions and Ph.D. or Postdoc scholarships, Universität Bielefeld
Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld
- speech synthesis and/or recognition
- discourse prosody
- laboratory phonology
- speech and language rhythm research
- multimodal speech (technology)
6-18 . (2009-05-07) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (FRANCE)
=============================================================================
PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting
09/09)
=============================================================================
PORT-MEDIA (ANR CONTINT 2008-2011) is a cooperative project
sponsored by the French National Research Agency, bringing together
the University of Avignon, the University of Grenoble, the University
of Le Mans, CNRS at Nancy and ELRA (the European Language Resources
Association). PORT-MEDIA will address the multi-domain and
multi-lingual robustness and portability of spoken language
understanding systems. More specifically, the overall objectives of
the project can be summarized as:
- robustness: integration/coupling of the automatic speech recognition
component in the spoken language understanding process.
- portability across domains and languages: evaluation of the genericity
and adaptability of the approaches implemented in the
understanding systems, and development of new techniques inspired by
machine translation approaches.
- representation: evaluation of new rich structures for high-level
semantic knowledge representation.
The PhD thesis will focus on the multilingual portability of speech
understanding systems. For example, the candidate will investigate
techniques for rapidly adapting an understanding system from one
language to another and for creating low-cost resources with
(semi-)automatic methods, for instance by using automatic alignment
techniques and lightly supervised translations. The main contribution
will be to fill the gap between the techniques currently used in the
statistical machine translation and spoken language understanding fields.
The thesis will be co-supervised by Fabrice Lefèvre, Assistant Professor
at LIA (University of Avignon) and Laurent Besacier, Assistant Professor
at LIG (University of Grenoble). The candidate will spend 18 months at
LIG then 18 months at LIA.
The salary of a PhD position is roughly 1,300€ net per month. Applicants
should hold a strong university degree entitling them to start a
doctorate (Masters/diploma or equivalent) in a relevant discipline
(Computer Science, Human Language Technology, Machine Learning, etc).
The applicants should be fluent in English. Competence in French is
optional, though applicants will be encouraged to acquire this skill
during training. All applicants should have very good programming skills.
For further information, please contact Fabrice Lefèvre (Fabrice.Lefevre
at univ-avignon.fr) AND Laurent Besacier (Laurent.Besacier at imag.fr).
6-19 . (2009-06-11) PhD at IRIT, Toulouse, France
Sujet de thèse : Caractérisation et identification automatique de dialectes
Directeur de thèse : Régine André-Obrecht
Encadrement : Jérôme Farinas
Lieu : IRIT (Toulouse)
Sélection : aura lieu devant l'Ecole Doctorale MITT lee 24 et 25 juin 2009
Connaissances et compétences requises : informatique (traitement automatique de la parole), linguistique (phonologe, prosodie)
Description :
Le recherche en traitement automatique de la parole s'intéresse de plus en plus au traitement de grandes collections de données, dans des conditions de parole spontanée et conversationnelle. Les performances sont dépendantes de toutes les variabilités de la parole. Une de ces variabilités concerne l'appartenance dialectale du locuteur, qui induit de la variabilité tant au niveau de la prononciation phonétique, mais également au niveau de la morphologie des mots et de la prosodie. Nous proposons de réaliser un sujet de recherche sur la caractérisation automatique dialectale des locuteurs, en vue de diriger l'adaptation des systèmes de reconnaissance de la parole : la sélection de modèles acoustiques et prosodiques adaptées permettront d'améliorer des performances dans des conditions de reconnaissance indépendante du locuteur. La réalisation d'un tel système devra s'appuyer sur les avancées récentes en identification de la langue au niveau de la modélisation acoustique par exploration de réseaux de phonèmes et proposer une modélisation fine basée sur la micro et macro prosodie. Les bases de données disponibles au sein du projet sur la phonologie du français contemporain (http://www.projet-pfc.net/) permettront de disposer d'un large éventail de données sur les variances de prononciation. Le système final sera évalué lors des campagnes internationales organisées par le NIST sur la vérification de la langue, qui prennent maintenant en compte les variances dialectales (mandarin, anglais, espagnol et hindi) : http://www.nist.gov/speech/tests/lre/.
Jerome Farinas jerome.farinas@irit.fr
Associate Professor (Maître de Conférences), UPS/IRIT, SAMOVA team
http://www.irit.fr/~Jerome.Farinas/
tel : +33 561557434 fax : +33 959280850 mob : +33 685229687
6-20 . (2009-06-11) Ph D at IRIT Toulouse France
Thesis: MENRT grant (with possible teaching assistantship)
Deadline: 15 June 2009 (candidates selected by the supervisors will give an oral presentation around 23 June before the doctoral school; contact senac@irit.fr for further details)
Start date: from September 2009
Duration: 3 years
Proposed thesis topic:
Methods for the analysis and structuring of audiovisual content for video-on-demand television services
Location: IRIT Laboratory, UPS UMR 5505 (Toulouse III)
Supervisors: Hervé Bredin bredin@irit.fr and Christine Sénac senac@irit.fr
Project description:
Television is currently undergoing a profound transformation, driven by the digitisation of broadcasting and the emergence of new distribution channels for audiovisual content. Notable examples are video-on-demand services and the spread of set-top boxes that can record several television channels simultaneously and over long periods (up to a week of continuous recording). This profusion of content (millions of hours of television are available) makes automatic search and browsing facilities for these digital catalogues indispensable.
The recurring (daily or weekly) recordings typically made on such devices form "collections". These collections can be managed as a homogeneous set of contents that all share the same structural, stylistic and other characteristics. Automatically identifying these shared characteristics should therefore make it possible to deploy tailored services, such as automatic summarisation allowing users to quickly apprehend an entire collection or a particular recording within it.
The novelty of this research programme lies in applying automatic summarisation techniques adapted to a collection, which delimits a precise application framework and hence allows a clear evaluation plan to be defined.
This question raises a threefold problem: designing an automatic procedure, without prior knowledge of the nature of the collection, that identifies the shared characteristics (building on the preliminary results described in [1]); defining a mechanism for automatically constructing summaries that exploit these shared characteristics; and qualitatively evaluating the resulting service.
The SAMoVA team has extensive experience in the structuring, analysis and modelling of audiovisual documents, as witnessed by its participation in international evaluation campaigns (such as TRECVid [2] and ESTER) and in the Quaero project [3]. The student will also be able to run experiments on the OSIRIM platform [4], which, once it enters production (planned for June 2009), will in particular allow more than 15 audiovisual streams to be captured simultaneously. It will thus be possible to publish results validated on amounts of data matched by only four or five other laboratories worldwide (INA and IRISA in France, Carnegie Mellon in the USA, etc.). Papers may be submitted to internationally renowned conferences such as ACM Multimedia, IEEE ICASSP, CBMI, etc.
[1] Siba Haidar, Philippe Joly, Bilal Chebaro. Mining for Video Production Invariants to Measure Style Similarity. In: International Journal of Intelligent Systems, Wiley, Vol. 21, No. 7, pp. 747-763, July 2006.
[2] H. Bredin, D. Byrne, H. Lee, N. O’Connor, and G. J. Jones.
[4] http://www.irit.fr/OSIRIM/
Required skills:
This thesis is aimed at students holding a Master's degree in computer science with a solid background in applied mathematics. A background in signal or image processing would also be desirable.
6-21 . (2009-06-10) PhD in ASR in Le Mans France
PhD position in Automatic Speech Recognition
=====================================
Starting in September-October 2009.
The ASH (Attelage de Systèmes Hétérogènes) project is a project funded by the ANR (French National Research Agency). Three French academic laboratories are involved: LIUM (University of Le Mans), LIA (University of Avignon) and IRISA (Rennes).
The main objective of the ASH project is to define and experiment with an original methodological framework for the integration of heterogeneous automatic speech recognition systems. Integrating heterogeneous systems, and hence heterogeneous sources of knowledge, is a key issue in ASR, but also in many other application fields concerned with knowledge integration and multimodality.
Clearly, the lack of a generic framework for integrating systems that operate with different viewpoints, different kinds of knowledge and at different levels is a strong limitation which needs to be overcome: the definition of such a framework is the fundamental challenge of this work.
By defining a rigorous and generic framework for integrating systems, significant scientific progress is expected in automatic speech recognition. Another objective of this project is to enable the efficient and reliable processing of large data streams by combining systems on the fly.
Finally, we expect to develop an on-the-fly ASR system as a real-time demonstrator of this new approach.
The thesis will be co-supervised by Paul Deléglise, Professor at LIUM, Yannick Estève, Assistant Professor at LIUM, and Georges Linarès, Assistant Professor at LIA. The candidate will work in Le Mans (LIUM), but will regularly spend a few days in Avignon (LIA).
Applicants should hold a strong university degree entitling them to start a doctorate (Masters/diploma or equivalent) in a relevant discipline (Computer Science, Human Language Technology, Machine Learning, etc).
The applicants for this PhD position should be fluent in English or in French. Competence in French is optional, though applicants will be encouraged to acquire this skill during training. This position is funded by the ANR.
Strong software skills are required, especially Unix/linux, C, Java, and a scripting language such as Perl or Python.
Contacts:
Yannick Estève: yannick.esteve@lium.univ-lemans.fr
Georges Linarès: georges.linares@univ-avignon.fr
6-22 . (2009-06-02) PhD thesis proposal 2009: Analysis of speech scenes, Grenoble France
PhD thesis proposal 2009
Doctoral School EDISCE (http://www-sante.ujf-grenoble.fr/edisce/)
Funding: ANR (http://www.icp.inpg.fr/~schwartz/Multistap/Multistap.html)
Analysis of speech scenes: the audio-visuo-motor binding problem in the light of behavioural and neurophysiological data
Two major questions run through current research on the cognitive processing of speech: multisensoriality (how auditory and visual information combine in the brain) and perceptuo-motor interactions.
A missing question, in our view, is that of "binding": in these auditory or audiovisual processes, how does the brain manage to "put together" the relevant information, eliminate the "noise", and construct the relevant "speech streams" before making a decision? More precisely, the elementary objects of a speech scene are phonemes, and specialised auditory, visual and articulatory modules contribute to the phonetic identification process, but so far it has not been possible to isolate their respective contributions, nor the way these contributions are fused. Recent experiments suggest that phonetic identification is non-hierarchical in nature and essentially instantiated by associative operations. The thesis will consist in developing further original experimental paradigms, and in setting up the neurophysiology and neuroimaging experiments (EEG, fMRI) available in the laboratory and its Grenoble environment, in order to determine the nature and functioning of the audiovisual grouping processes in speech scenes, in relation to production mechanisms.
This thesis will be carried out within the ANR project "Multistap" ("Multistability and perceptual grouping in audition and speech", http://www.icp.inpg.fr/~schwartz/Multistap/Multistap.html). The project will provide both the funding for the PhD grant and a stimulating environment for the research, in partnership with teams of audition and vision specialists in Paris (DEC ENS), Lyon (LNSCC) and Toulouse (Cerco).
Supervisors:
Jean-Luc Schwartz (DR CNRS, HDR) : 04 76 57 47 12,
Frédéric Berthommier (CR CNRS) : 04 76 57 48 28
Jean-Luc.Schwartz, Frederic.Berthommier@gipsa-lab.grenoble-inp.fr
6-23 . (2009-05-11) CIFRE PhD thesis on multimedia data indexing, Institut Eurecom
CIFRE PhD thesis on multimedia data indexing
Deadline: 01/11/2009
Contact: merialdo@eurecom.fr
http://bmgroup.eurecom.fr/
The Multimedia Communications Department of EURECOM, in partnership with the travel service provider AMADEUS, invites applications for a PhD position on multimedia indexing. The goal of the thesis is to study new techniques for organizing large quantities of multimedia information, specifically images and videos, in order to improve services to travelers. This includes managing images and videos from providers as well as from users, about places, locations, events, etc. The approach will be based on the most recent techniques in multimedia indexing, and will benefit from EURECOM's strong research experience in this domain, combined with the industrial experience of AMADEUS.
We are looking for very good, motivated students with strong knowledge of image and video processing and of statistical and probabilistic modeling for the theoretical part, and good C/C++ programming ability for the experimental part. English is required. The successful candidate will be employed by AMADEUS in Sophia Antipolis and will interact closely with the researchers at EURECOM.
Applicants should email a resume, letter of motivation, and all relevant information to Prof. Bernard Merialdo, merialdo@eurecom.fr.
The project will be conducted within AMADEUS (http://www.amadeus.com/), a world leader in providing solutions to the travel industry for managing the distribution and selling of travel services. The company is the leading Global Distribution System (GDS) and the biggest processor of travel bookings in the world. Its main development center is located in Sophia Antipolis, France, and employs more than 1200 engineers. The research will be supervised by EURECOM (http://www.eurecom.fr), a graduate school and research center in communication systems whose activity includes corporate, multimedia and mobile communications.
EURECOM currently counts about 20 professors, 10 post-docs, 170 MS and 60 PhD students, and is involved in many European research projects and joint collaborations with industry. EURECOM is also located in Sophia Antipolis, a major European technology park for telecommunications research and development on the French Riviera.
http://gdr-isis.org/rilk/gdr/Kiosque/poste.php?jobid=3267
6-24 . (2009-05-11)Senior Research Fellowship in Speech Perception and Language Development,MARCS Auditory Laboratories
Ref 147/09 Senior Research Fellowship in Speech Perception and Language Development, MARCS Auditory Laboratories
5-Year Fixed-Term Contract, Bankstown Campus
Remuneration Package: Academic Level C $107,853 to $123,724 p.a. (comprising Salary $91,266 to $104,831 p.a., 17% Superannuation, and Leave Loading)
Position Enquiries: Professor Denis Burnham, (02) 9772 6677 or email d.burnham@uws.edu.au
Closing Date: The closing date for this position has been extended until 30 June 2009.
6-25 . (2009-05-08) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
=============================================================================
The PORT-MEDIA project (ANR CONTINT 2008-2011) is a cooperative project sponsored by the French National Research Agency, between the University of Avignon, the University of Grenoble, the University of Le Mans, CNRS at Nancy and ELRA (European Language Resources Association). PORT-MEDIA will address the multi-domain and multi-lingual robustness and portability of spoken language understanding systems. More specifically, the overall objectives of the project can be summarized as:
- robustness: integration/coupling of the automatic speech recognition component in the spoken language understanding process.
- portability across domains and languages: evaluation of the genericity and adaptability of the approaches implemented in the
understanding systems, and development of new techniques inspired by machine translation approaches.
- representation: evaluation of new rich structures for high-level semantic knowledge representation.
The PhD thesis will focus on the multilingual portability of speech understanding systems. For example, the candidate will investigate techniques to rapidly adapt an understanding system from one language to another and to create low-cost resources with (semi-)automatic methods, for instance by using automatic alignment techniques and lightly supervised translations. The main contribution will be to fill the gap between the techniques currently used in the statistical machine translation and spoken language understanding fields.
The thesis will be co-supervised by Fabrice Lefèvre, Assistant Professor at LIA (University of Avignon) and Laurent Besacier, Assistant Professor at LIG (University of Grenoble). The candidate will spend 18 months at LIG then 18 months at LIA.
The salary of a PhD position is roughly 1,300€ net per month. Applicants should hold a strong university degree entitling them to start a doctorate (Masters/diploma or equivalent) in a relevant discipline (Computer Science, Human Language Technology, Machine Learning, etc). The applicants should be fluent in English. Competence in French is optional, though applicants will be encouraged to acquire this skill during training. All applicants should have very good programming skills.
For further information, please contact Fabrice Lefèvre (Fabrice.Lefevre at univ-avignon.fr) AND Laurent Besacier (Laurent.Besacier at imag.fr).
6-26 . (2009-05-07) Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld
Applications are invited for several Ph.D. positions and Ph.D. scholarships in experimental phonetics, speech technology and laboratory phonology at Universität Bielefeld (Fakultät für Linguistik und Literaturwissenschaft), Germany.
Successful candidates should hold a Master's degree (or equivalent) in phonetics, computational linguistics, linguistics, computer science or a related discipline. They will have a strong background in either
- speech synthesis and/or recognition
- discourse prosody
- laboratory phonology
- speech and language rhythm research
- multimodal speech (technology)
Candidates should appreciate working in an interdisciplinary environment. Good knowledge of experimental design techniques and programming skills will be considered a plus. Strong interest in research and high proficiency in English are required.
The Ph.D. positions will be part-time (50%); salary and social benefits are determined by the German public service pay scale (TVL-E13). The Ph.D. scholarship is based on the DFG scale. There is no mandatory teaching load.
Bielefeld University is an equal opportunity employer. Women are therefore particularly encouraged to apply. Disabled applicants with equivalent qualifications will be treated preferentially.
The positions are available for three years (with a potential extension for the Ph.D. positions), starting as soon as possible. Please submit your documents (cover letter, CV including list of publications, statement of research interests, names of two referees) electronically to the address below. Applications must be received by June 15, 2009.
Prof. Dr. Petra Wagner
Universität Bielefeld
Fakultät für Linguistik und Literaturwissenschaft
Postfach 10 01 31
33501 Bielefeld
Germany
Email: petra.wagner@uni-bielefeld.de
Tel.: +49(0)521-106-3683
6-27 . (2009-06-17) Two post-docs in the Carnegie Mellon University-Portugal program, INESC-ID, Lisbon
Two post-doctoral positions in the framework of the Carnegie Mellon University-Portugal program are available at the Spoken Language Systems Lab (www.l2f.inesc-id.pt), INESC-ID, Lisbon, Portugal.
Positions are for a fixed-term contract of up to two and a half years, renewable in one-year intervals, in the scope of the research projects PT-STAR (Speech Translation Advanced Research to and from Portuguese) and REAP.PT (Computer Aided Language Learning – Reading Practice), both financed by FCT (Portuguese Foundation for Science and Technology).
The starting date for these positions is September 2009, or as soon as possible thereafter.
Candidates should send their CVs (in .pdf format) before July 15th to the email addresses given below, together with a motivation letter. Questions or other clarification requests should be emailed to the same addresses.
======== PT-STAR (project CMU-PT/HuMach/0039/2008) ========
Topic: Speech-to-Speech Machine Translation
Description: We seek candidates with excellent knowledge of statistical approaches to machine translation (and if possible also speech technologies) and strong programming skills. Familiarity with the Portuguese language is not at all mandatory, although the main source and target languages are Portuguese/English.
Email address for applications: lcoheur at l2f dot inesc-id dot pt
======== REAP.PT (project CMU-PT/HuMach/0053/2008) ========
Topic: Computer Aided Language Learning
Description: We seek candidates with excellent knowledge of automatic question generation (multiple-choice synonym questions, related word questions, and cloze questions) and/or of measuring the reading difficulty of a text (exploring the combination of lexical features, grammatical features and statistical models). Familiarity with a Romance language is recommended, since the target language is Portuguese.
Email address for applications: nuno dot mamede at inesc-id dot pt
7 . Journals
7-1 . Special issue IEEE Trans. ASL Signal models and representation of musical and environmental sounds
Special Issue of IEEE Transactions on Audio, Speech and Language Processing
SIGNAL MODELS AND REPRESENTATION OF MUSICAL AND ENVIRONMENTAL SOUNDS
http://www.ewh.ieee.org/soc/sps/tap
http://www.ewh.ieee.org/soc/sps/tap/sp_issue/audioCFP.pdf
Submission deadline: 15 December 2008
Notification of acceptance: 15 June 2009
Final manuscript due: 1 July 2009
Tentative publication date: 1 September 2009
Guest editors:
Dr. Bertrand David (Telecom ParisTech, France) bertrand.david@telecom-paristech.fr
Dr. Laurent Daudet (UPMC University Paris 06, France) daudet@lam.jussieu.fr
Dr. Masataka Goto (National Institute of Advanced Industrial Science and Technology, Japan) m.goto@aist.go.jp
Dr. Paris Smaragdis (Adobe Systems, Inc., USA) paris@adobe.com
The non-stationary nature, the richness of the spectra and the mixing of diverse sources are characteristics common to musical and environmental audio scenes. They lead to specific challenges for audio processing tasks such as information retrieval, source separation, analysis-transformation-synthesis and coding. When seeking to extract information from musical or environmental audio signals, the time-varying waveform or spectrum is often further analysed and decomposed into sound elements. Two aims of this decomposition can be identified, which are sometimes antagonistic: to be adapted both to the particular properties of the signal and to the targeted application. This special issue focuses on how the choice of a low-level representation (typically a time-frequency distribution, with or without a probabilistic framework, with or without perceptual considerations), a source model or a decomposition technique may influence the overall performance.
Specific topics of interest include but are not limited to:
* factorizations of time-frequency distributions
* sparse representations
* Bayesian frameworks
* parametric modeling
* subspace-based methods for audio signals
* representations based on instrument and/or environmental source signal models
* sinusoidal modeling of non-stationary spectra (sinusoids, noise, transients)
Typical applications considered include (non-exclusively):
* source separation/recognition
* mid- or high-level feature extraction (metrics, onsets, pitches, ...)
* sound effects
* audio coding
* information retrieval
* audio scene structuring, analysis or segmentation
7-2 . "Speech Communication" special issue on "Speech and Face to Face Communication
"Speech Communication" special issue on "Speech and Face to Face Communication
http://www.elsevier.com/wps/find/journaldescription.cws_home/505597/description
Speech communication is increasingly studied in a face to face perspective:
- It is interactive: the speaking partners build a complex communicative act together
involving linguistic, emotional, expressive, and more generally cognitive and social
dimensions;
- It involves multimodality to a large extent: the “listener” sees and hears the speaker who
produces sounds as well as facial and more generally bodily gestures;
- It involves not only linguistic but also psychological, affective and social aspects of
interaction. Gaze together with speech contribute to maintain mutual attention and to
regulate turn-taking for example. Moreover the true challenge of speech communication is
to take into account and integrate information not only from the speaker but also from the
entire physical environment in which the interaction takes place.
The present issue proposes to synthesize the most recent developments on
this topic, considering its various aspects from complementary perspectives: cognitive and
neurocognitive (multisensory and perceptuo-motor interactions), linguistic (dialogic face to
face interactions), paralinguistic (emotions and affects, turn-taking, mutual attention),
computational (animated conversational agents, multimodal interacting communication
systems).
There will be two stages in the submission procedure.
- First stage (by DECEMBER 1ST): submission of a one-to-two-page abstract describing the
contents of the work and its relevance to the "Speech and Face to Face Communication" topic.
The guest editors will then select the most relevant proposals in December.
- Second stage (by MARCH 1ST): the selected contributors will be invited to submit a full
paper. The submitted papers will then be peer-reviewed through the regular Speech
Communication journal process (two independent reviews). Accepted papers will then be
published in the special issue.
Abstracts should be directly sent to the guest editors:
Marion.Dohen@gipsa-lab.inpg.fr, Gerard.Bailly@gipsa-lab.inpg.fr, Jean-Luc.Schwartz@gipsa-lab.inpg.fr
7-3 . SPECIAL ISSUE of the EURASIP Journal on Audio, Speech, and Music Processing. ON SCALABLE AUDIO-CONTENT ANALYSIS
SPECIAL ISSUE ON SCALABLE AUDIO-CONTENT ANALYSIS
The amount of easily-accessible audio, whether in the form of large
collections of audio or audio-video recordings, or in the form of
streaming media, has increased exponentially in recent times.
However, this audio is not standardized: much of it is noisy,
recordings are frequently not clean, and most of it is not labelled.
The audio content covers a large range of categories including
sports, music and songs, speech, and natural sounds. There is
therefore a need for algorithms that allow us to make sense of these
data, and to store, process, categorize, summarize, identify and
retrieve them quickly and accurately.
In this special issue we invite papers that present novel approaches
to problems such as (but not limited to):
Audio similarity
Audio categorization
Audio classification
Indexing and retrieval
Semantic tagging
Audio event detection
Summarization
Mining
We are especially interested in work that addresses real-world
issues such as:
Scalable and efficient algorithms
Audio analysis under noisy and real-world conditions
Classification with uncertain labeling
Invariance to recording conditions
On-line and real-time analysis of audio.
Algorithms for very large audio databases.
We encourage theoretical or application-oriented papers that
highlight exploitation of such techniques in practical systems/products.
Authors should follow the EURASIP Journal on Audio, Speech, and Music
Processing manuscript format described at the journal site
http://www.hindawi.com/journals/asmp/. Prospective authors should
submit an electronic copy of their complete manuscript through the
journal Manuscript Tracking System at http://mts.hindawi.com/,
according to the following timetable:
Manuscript Due: June 1st, 2009
First Round of Reviews: September 1, 2009
Publication Date: December 1st, 2009
Guest Editors:
1) Bhiksha Raj
Associate professor
School of computer science
Carnegie Mellon university
2) Paris Smaragdis
Senior Research Scientist
Advanced Technology Labs, Adobe Systems Inc.
Newton, MA, USA
3) Malcolm Slaney
Principal Scientist
Yahoo! Research
Santa Clara, CA
and
(Consulting) Professor
Stanford CCRMA
4) Chung-Hsien Wu
Distinguished Professor
Dept. of Computer Science & Information Engineering
National Cheng Kung University,
Tainan, TAIWAN
5) Liming Chen
Professor and head of the Dept. Mathematics & Informatics
Ecole Centrale de Lyon
University of Lyon
Lyon, France
6) Professor Hyoung-Gook Kim
Intelligent Multimedia Signal Processing Lab.
Kwangwoon University, Republic of Korea
7-4 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing.on Atypical Speech
Atypical Speech
Call for Papers
Research in speech processing (e.g., speech coding, speech enhancement, speech recognition, speaker recognition, etc.) tends to concentrate on speech samples collected from normal adult talkers. Focusing only on these “typical speakers” limits the practical applications of automatic speech processing significantly. For instance, a spoken dialogue system should be able to understand any user, even if he or she is under stress or belongs to the elderly population. While there is some research effort in language and gender issues, there remains a critical need for exploring issues related to “atypical speech”. We broadly define atypical speech as speech from speakers with disabilities, children's speech, speech from the elderly, speech with emotional content, speech in a musical context, and speech recorded through unique, nontraditional transducers. The focus of the issue is on voice quality issues rather than unusual talking styles.
In this call for papers, we aim to concentrate on issues related to processing of atypical speech, issues that are commonly ignored by the mainstream speech processing research. In particular, we solicit original, previously unpublished research on:
• Identification of vocal effort, stress, and emotion in speech
• Identification and classification of speech and voice disorders
• Effects of ill health on speech
• Enhancement of disordered speech
• Processing of children's speech
• Processing of speech from elderly speakers
• Song and singer identification
• Whispered, screamed, and masked speech
• Novel transduction mechanisms for speech processing
• Computer-based diagnostic and training systems for speech dysfunctions
• Practical applications
Authors should follow the EURASIP Journal on Audio, Speech, and Music Processing manuscript format described at the journal site
http://www.hindawi.com/journals/asmp/. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at
http://mts.hindawi.com/, according to the following timetable:
Manuscript Due: April 1, 2009
First Round of Reviews: July 1, 2009
Publication Date: October 1, 2009
Guest Editors:
• Georg Stemmer, Siemens AG, Corporate Technology, 80333 Munich, Germany
• Elmar Nöth, Department of Pattern Recognition, Friedrich-Alexander University of Erlangen-Nuremberg, 91058 Erlangen, Germany
• Vijay Parsa, National Centre for Audiology, The University of Western Ontario, London, ON, Canada N6G 1H1
7-5 . Special issue of the EURASIP Journal on Audio, Speech, and Music Processing on Animating virtual speakers or singers from audio: lip-synching facial animation
• Audiovisual synthesis from text
• Facial animation from audio
• Trajectory formation systems
• Evaluation methods for audiovisual synthesis
• Perception of audiovisual asynchrony in speech and music
• Control of speech and facial expressions
One page abstract: January 1, 2009
Preselection of papers: February 1, 2009
Manuscript due: March 1, 2009
First round of reviews: May 1, 2009
Camera-ready papers: July 1, 2009
Publication date: September 1, 2009
Guest Editors
7-6 . CfP Special issue of Speech Comm: Non-native speech perception in adverse conditions: imperfect knowledge, imperfect signal
CALL FOR PAPERS: SPECIAL ISSUE OF SPEECH COMMUNICATION
NON-NATIVE SPEECH PERCEPTION IN ADVERSE CONDITIONS: IMPERFECT KNOWLEDGE, IMPERFECT SIGNAL
Much work in phonetics and speech perception has focused on doubly-optimal conditions, in which the signal reaching listeners is unaffected by distorting influences and in which listeners possess native competence in the sound system. However, in practice, these idealised conditions are rarely met. The processes of speech production and perception thus have to account for imperfections in the state of knowledge of the interlocutor as well as imperfections in the signal received. In noisy settings, these factors combine to create particularly adverse conditions for non-native listeners.
The purpose of the Special Issue is to assemble the latest research on perception in adverse conditions with special reference to non-native communication. The special issue will bring together, interpret and extend the results emerging from current research carried out by engineers, psychologists and phoneticians, such as the general frailty of some sounds for both native and non-native listeners and the strong non-native disadvantage experienced for categories which are apparently equivalent in the listeners’ native and target languages.
We welcome papers describing novel research on non-native speech perception in adverse conditions from any perspective, including but not limited to the topics below. Interdisciplinary contributions are especially encouraged.
• models and theories of L2 processing in noise
• informational and energetic masking
• role of attention and processing load
• effect of noise type and reverberation
• inter-language phonetic distance
• audiovisual interactions in L2
• perception-production links
• the role of fine phonetic detail
GUEST EDITORS
Maria Luisa Garcia Lecumberri (Department of English, University of the Basque Country, Vitoria, Spain).
garcia.lecumberri@ehu.es
Martin Cooke (Ikerbasque and Department of Electrical & Electronic Engineering, University of the Basque Country, Bilbao, Spain).
m.cooke@ikerbasque.org
Anne Cutler (Max-Planck Institute for Psycholinguistics, Nijmegen, The Netherlands and MARCS Auditory Laboratories, Sydney, Australia).
anne.cutler@mpi.nl
DEADLINE
Full papers should be submitted by 31st July 2009
SUBMISSION PROCEDURE
Authors should consult the “guide for authors”, available online at http://www.elsevier.com/locate/specom, for information about the preparation of their manuscripts. Papers should be submitted via http://ees.elsevier.com/specom, choosing “Special Issue: non-native speech perception” as the article type. If you are a first time user of the system, please register yourself as an author. Prospective authors are welcome to contact the guest editors for more details of the Special Issue.
7-7 . CfP IEEE Special Issue on Speech Processing for Natural Interaction with Intelligent Environments
Call for Papers
IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Speech Processing for Natural Interaction with Intelligent Environments
With advances in microelectronics, communication technologies and smart materials, our environments are becoming increasingly intelligent through the presence of robots, bio-implants, mobile devices, advanced in-car systems, smart house appliances and other professional systems. As these environments are integral parts of our daily work and life, there is great interest in natural interaction with them, and such interaction may further enhance the perception of intelligence. "Interaction between man and machine should be based on the very same concepts as that between humans, i.e. it should be intuitive, multi-modal and based on emotion," as envisioned by Reeves and Nass (1996) in their famous book "The Media Equation".
Speech is the most natural means of interaction for human beings, and it offers the unique advantage of not requiring the user to carry a device, since we have our "device" with us at all times. Speech processing techniques are developed for intelligent environments to support either explicit interaction through message communication, or implicit interaction by providing valuable information about the physical ("who speaks when and where") as well as the emotional and social context of an interaction. Challenges presented by intelligent environments include the use of distant microphones, resource constraints, and large variations in acoustic conditions, speakers, content and context. Two central classes of techniques for coping with these challenges are high-performing "low-level" signal processing algorithms and sophisticated "high-level" pattern recognition methods. We are soliciting original, previously unpublished manuscripts directly targeting or related to natural interaction with intelligent environments.
The scope of this special issue includes, but is not limited to:
* Multi-microphone front-end processing for distant-talking interaction
* Speech recognition in adverse acoustic environments and joint optimization with array processing
* Speech recognition for low-resource and/or distributed computing infrastructure
* Speaker recognition and affective computing for interaction with intelligent environments
* Context-awareness of speech systems with regard to their applied environments
* Cross-modal analysis of speech, gesture and facial expressions for robots and smart spaces
* Applications of speech processing in intelligent systems, such as robots, bio-implants and advanced driver assistance systems
Submission information is available at http://www.ece.byu.edu/jstsp. Prospective authors are required to follow the Author's Guide for manuscript preparation of the IEEE Transactions on Signal Processing at http://ewh.ieee.org/soc/sps/tsp. Manuscripts will be peer reviewed according to the standard IEEE process.
Manuscript submission due: Jul. 3, 2009
First review completed: Oct. 2, 2009
Revised manuscript due: Nov. 13, 2009
Second review completed: Jan. 29, 2010
Final manuscript due: Mar. 5, 2010
Lead guest editor:
Zheng-Hua Tan, Aalborg University, Denmark, zt@es.aau.dk
Guest editors:
Reinhold Haeb-Umbach, University of Paderborn, Germany, haeb@nt.uni-paderborn.de
Sadaoki Furui, Tokyo Institute of Technology, Japan, furui@cs.titech.ac.jp
James R. Glass, Massachusetts Institute of Technology, USA, glass@mit.edu
Maurizio Omologo, FBK-IRST, Italy, omologo@fbk.eu
7-8 . CfP Special issue "Speech as a Human Biometric: I know who you are from your voice" Int. Jnl Biometrics
7-9 . CfP Special on Voice transformation IEEE Trans ASLP
CALL FOR PAPERS
IEEE Signal Processing Society
IEEE Transactions on Audio, Speech and Language Processing
Special Issue on Voice Transformation
With the increasing demand for Voice Transformation in areas such as speech synthesis for creating target or virtual voices, modeling various effects (e.g., the Lombard effect), synthesizing emotions, and making dialog systems that use speech synthesis more natural, as well as in areas like entertainment, the film and music industries, toys, chat rooms and games, dialog systems, security and speaker individuality for interpreting telephony, high-end hearing aids, vocal pathology and voice restoration, there is a growing need for high-quality Voice Transformation algorithms and systems processing synthetic or natural speech signals.
Voice Transformation aims at the control of non-linguistic information in speech signals, such as voice quality and voice individuality. A great deal of interest and research in the area has been devoted to the design and development of mapping functions and modifications for the vocal tract configuration and basic prosodic features. However, high-quality Voice Transformation systems that create effective mapping functions for the vocal tract, excitation signal and speaking style, and whose modifications take into account the interaction of source and filter during voice production, are still lacking.
We invite researchers to submit original papers describing new approaches in all areas related to Voice Transformation including, but not limited to, the following topics:
* Preprocessing for Voice Transformation (alignment, speaker selection, etc.)
* Speech models for Voice Transformation (vocal tract, excitation, speaking style)
* Mapping functions
* Evaluation of transformed voices
* Detection of Voice Transformation
* Cross-lingual Voice Transformation
* Real-time issues and embedded Voice Transformation systems
* Applications
The call for papers is also available at: http://www.ewh.ieee.org/soc/sps/tap/sp_issue/VoiceTransformationCFP.pdf
Prospective authors are required to follow the Information for Authors for manuscript preparation of the IEEE Transactions on Audio, Speech, and Language Processing at http://www.signalprocessingsociety.org/periodicals/journals/taslp-author-information/
Manuscripts will be peer reviewed according to the standard IEEE process.
Schedule:
Submission deadline: May 10, 2009
Notification of acceptance: September 30, 2009
Final manuscript due: October 30, 2009
Publication date: January 2010
Lead Guest Editor:
Yannis Stylianou, University of Crete, Crete, Greece, yannis@csd.uoc.gr
Guest Editors:
Tomoki Toda, Nara Inst. of Science and Technology, Nara, Japan, tomoki@is.naist.jp
Chung-Hsien Wu, National Cheng Kung University, Tainan, Taiwan, chwu@csie.ncku.edu.tw
Alexander Kain, Oregon Health & Science University, Portland, Oregon, USA, kaina@ohsu.edu
Olivier Rosec, Orange-France Telecom R&D, Lannion, France, olivier.rosec@orange-ftgroup.com
7-10 . Mathematics, Computing, Language, and the Life: Frontiers in Mathematical Linguistics and Language Theory (tentative)
A new book series will be announced in a few weeks by a major publisher under the (tentative) title of Mathematics, Computing, Language, and the Life: Frontiers in Mathematical Linguistics and Language Theory.
SERIES DESCRIPTION: Language theory, originating in Chomsky's seminal work of the 1950s and developing in parallel with Turing-inspired automata theory, was first applied to natural language syntax in the context of the early, unsuccessful attempts to build reliable machine translation prototypes. The theory subsequently proved very valuable in the study of programming languages and the theory of computing. In the last 15-20 years, language and automata theory has seen rapid theoretical development, driven by the emergence of new interdisciplinary domains and by demand for applications in a number of disciplines, most notably natural language processing, computational biology, natural computing, programming, and artificial intelligence. The series will collect recent research on both foundational and applied issues, and is addressed to graduate students as well as post-docs and academics.
TOPIC CATEGORIES:
A. Theory: language and automata theory, combinatorics on words, descriptional and computational complexity, semigroups, graphs and graph transformation, trees, computability
B. Natural language processing: mathematics of natural language processing, finite-state technology, languages and logics, parsing, transducers, text algorithms, web text retrieval
C. Artificial intelligence, cognitive science, and programming: patterns, pattern matching and pattern recognition, models of concurrent systems, Petri nets, models of pictures, fuzzy languages, grammatical inference and algorithmic learning, language-based cryptography, data and image compression, automata for system analysis and program verification
D. Bio-inspired computing and natural computing: cellular automata, symbolic neural networks, evolutionary algorithms, genetic algorithms, DNA computing, molecular computing, biomolecular nanotechnology, circuit theory, quantum computing, chemical and optical computing, models of artificial life
E. Bioinformatics: mathematical biology, string and combinatorial issues in computational biology and bioinformatics, mathematical evolutionary genomics, language processing of biological sequences, digital libraries
This broad interdisciplinary field connects with areas including computational linguistics, knowledge engineering, theoretical computer science, software science, molecular biology, etc. The first volumes will be miscellaneous and will globally define the scope of the future series.
INVITATION TO CONTRIBUTE: Contributions are requested for the first five volumes. In principle, there is no limit on length. All contributions will undergo strict peer review. Collections of papers are also welcome. Potential contributors should express their interest in being considered for the volumes by April 25, 2009 to carlos.martinvide@gmail.com. They should specify:
- the tentative title of the contribution,
- the authors and affiliations,
- a 5-10 line abstract,
- the most appropriate topic category (A to E above).
A selection will be made immediately afterwards, with invited authors submitting their contributions for peer review by July 25, 2009. The volumes are expected to appear in the first months of 2010.
7-11 . CfP Special Issue on Statistical Learning Methods for Speech and Language Processing
Samy Bengio, Google Inc., Mountain View (CA), USA, bengio@google.com
8 . Future Speech Science and Technology Events
8-1 . (2009-06-18) Conferences GIPSA Grenoble
Thursday, 18 June 2009, 1:30 pm – External seminar
========================================
Blaise POTARD
LORIA, Nancy
Acoustic-to-articulatory inversion and talking head animation
Acoustic-to-articulatory inversion (that is, determining the movements of the articulators – lips, tongue, jaw, larynx – from the acoustic speech signal) is a long-standing but still very active problem. The first approaches were based on analysis-by-synthesis, using simplified models of the acoustic behaviour of the human vocal apparatus. Since roughly the mid-1990s, thanks to improvements in medical imaging techniques that make it possible to record substantial corpora of articulatory data, most approaches now apply machine learning techniques – HMMs, neural networks, Bayesian networks, etc. – to large data corpora. These yield excellent results, but often only for a given speaker. The flexibility of analysis-by-synthesis methods, and in particular their ability to adapt to different speakers, nevertheless remains valuable.
In this talk I will present the inversion-by-tabulation technique developed during my PhD and explain how it can be adapted to inversion on a corpus of natural data. I will also present an alternative inversion method based on variational regularization, which does not require an initial trajectory close to the target solution. Finally, I will present my current work on HMM-based animation of the virtual talking head developed at LORIA.
Meeting room of the Speech and Cognition Department (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire
Thursday, 25 June 2009, 1:30 pm – External seminar
========================================
Michèle Olivieri
Laboratoire Bases, Corpus et Langage, Nice - Sophia Antipolis
The THESOC and its uses
The Thesaurus Occitan, or THESOC, is a multimedia database whose goal is to gather all the dialectal data collected in the Occitan-speaking area, in order to make it available to the public (1), to researchers and to educators. Based in Nice within CNRS unit UMR 6039 « Bases, Corpus, Langage » (BCL), under the direction of Jean-Philippe Dalbera, it is an inter-university programme involving several teams. Under development since 1992, the THESOC notably contains:
- linguistic and peri-linguistic data from fieldwork: maps and field notebooks from the linguistic atlases (2), monographs, audio recordings, iconographic documents;
- linguistic data resulting from existing analyses: lemmatization, morphology, etymology, microtoponymy;
- bibliographic data;
- analysis tools: cartographic representations, instruments for diachronic analysis, comparative cartography procedures, instruments for morphological analysis.
The raw data in the database share a common characteristic: they come from oral sources and are precisely localized, an essential condition for the study of diatopic variation. In addition, the THESOC makes it possible to listen to the sounds recorded during fieldwork, which guarantees the reality of the facts considered and transcribed.
Finally, it is an object of variable geometry that supports all kinds of uses through dedicated menus and integrates all kinds of documents, so that the THESOC offers at once (though always separately) near-raw linguistic data, data that have undergone analysis and processing, and investigation tools.
We will then discuss the research carried out by the BCL dialectology team, which draws on THESOC data and tools and renews viewpoints and perspectives in general linguistics. First, we will briefly present Jean-Philippe Dalbera's discoveries (3) in etymology and lexical reconstruction, with a few examples. Then we will show how the study of dialectal microvariation sheds light on morpho-syntactic change, by examining the question of subject clitics in the Romance languages (4).
1. The THESOC can be consulted online at http://thesaurus.unice.fr.
2. Atlas Linguistiques de la France par régions, CNRS editions.
3. Dalbera, Jean-Philippe, 2006, Des dialectes au langage : Une archéologie du sens, Paris, Champion.
4. Oliviéri, Michèle. 2009. “Syntactic Parameters and Reconstruction.” In: G.A. Kaiser & E.-M. Remberger (eds.): Proceedings of the Workshop on „Null-Subjects, Expletives, and Locatives in Romance“. Konstanz: Fachbereich Sprachwissenschaft der Universität Konstanz (= Arbeitspapier 123), 27-46 (http://ling.uni-konstanz.de/pages/publ/PDF/ap123.pdf).
Meeting room of the Speech and Cognition Department (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire
Thursday, 9 July 2009, 1:30 pm – External seminar
========================================
Iiro Jääskeläinen
Department of Biomedical Engineering and Computational Science, Helsinki
Title to be announced
Abstract to be announced
Meeting room of the Speech and Cognition Department (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire
Thursday, 30 July 2009, 1:30 pm – External seminar
========================================
James Bonaiuto
University of Southern California
Title to be announced
Abstract to be announced
Meeting room of the Speech and Cognition Department (B314)
3rd floor, Building B, ENSE3
961 rue de la Houille Blanche
Domaine Universitaire
8-2 . (2009-06-21) Specom 2009- St Petersburg Russia
SPECOM 2009 - FINAL CALL FOR PAPERS
13th International Conference "Speech and Computer"
21-25 June 2009
Grand Duke Vladimir's palace, St. Petersburg, Russia
http://www.specom.nw.ru
(!) Due to many requests the submission deadline has been postponed to Monday, February 9, 2009 (!)
Organized by St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)
Dear Colleagues, we are pleased to invite you to the 13th International Conference on Speech and Computer, SPECOM'2009, which will be held on June 21-25, 2009 in St. Petersburg. The global aim of the conference is to discuss state-of-the-art problems and recent achievements in signal processing and human-computer interaction related to speech technologies. The main topics of SPECOM'2009 are:
- Signal processing and feature extraction
- Multimodal analysis and synthesis
- Speech recognition and understanding
- Natural language processing
- Spoken dialogue systems
- Speaker and language identification
- Text-to-speech systems
- Speech perception and speech disorders
- Speech and language resources
- Applications for human-computer interaction
The official language of the event is English. Full papers up to 6 pages will be published in printed and electronic proceedings with ISBN.
Important Dates:
- Submission of full papers: February 1, 2009 (extended)
- Notification of acceptance: March 1, 2009
- Submission of final papers: March 20, 2009
- Early registration: March 20, 2009
- Conference dates: June 21-25, 2009
Scientific Committee:
Andrey Ronzhin, Russia (conference chairman)
Niels Ole Bernsen, Denmark
Denis Burnham, Australia
Jean Caelen, France
Christoph Draxler, Germany
Thierry Dutoit, Belgium
Hiroya Fujisaki, Japan
Sadaoki Furui, Japan
Jean-Paul Haton, France
Ruediger Hoffmann, Germany
Dimitri Kanevsky, USA
George Kokkinakis, Greece
Steven Krauwer, Netherlands
Lin-shan Lee, Taiwan
Boris Lobanov, Belarus
Benoit Macq, Belgium
Jury Marchuk, Russia
Roger Moore, UK
Heinrich Niemann, Germany
Rajmund Piotrowski, Russia
Louis Pols, Netherlands
Rodmonga Potapova, Russia
Josef Psutka, Czech Republic
Lawrence Rabiner, USA
Gerhard Rigoll, Germany
John Rubin, UK
Murat Saraclar, Turkey
Jesus Savage, Mexico
Pavel Skrelin, Russia
Viktor Sorokin, Russia
Yannis Stylianou, Greece
Jean E. Viallet, France
Taras Vintsiuk, Ukraine
Christian Wellekens, France
The invited speakers of SPECOM'2009 are:
- Prof. Walter Kellermann (University of Erlangen-Nuremberg, Germany), lecture "Towards Natural Acoustic Interfaces for Automatic Speech Recognition"
- Prof. Mikko Kurimo (Helsinki University of Technology, Finland), lecture "Unsupervised decomposition of words for speech recognition and retrieval"
The conference venue is the House of Scientists (the former Grand Duke Vladimir's palace), located in the very heart of the city, in the neighborhood of the Winter Palace (Hermitage), the residence of the Russian emperors, and the Peter and Paul Fortress. Alongside the scientific program, there will be ample opportunity to become acquainted with the cultural and historical treasures of Saint Petersburg; the conference will be hosted during the unique and wonderful period known as the White Nights.
Contact Information:
SPECOM'2009 Organizing Committee,
SPIIRAS, 39, 14-th line, St.Petersburg, 199178, RUSSIA
E-mail: specom@iias.spb.su
Web: http://www.specom.nw.ru
8-3 . (2009-06-22) Summer workshop at Johns Hopkins University
The Center for Language and Speech Processing (CLSP) at Johns Hopkins University invites one-page research proposals for a
NSF-sponsored, Six-week Summer Research Workshop on
Machine Learning for Language Engineering
to be held in Baltimore, MD, USA,
June 22 to July 31, 2009.
CALL FOR PROPOSALS
Deadline: Wednesday, October 15, 2008.
One-page proposals are invited for the 15th annual NSF sponsored JHU summer workshop. Proposals should be suitable for a six-week team exploration, and should aim to advance the state of the art in any of the various fields of Human Language Technology (HLT) including speech recognition, machine translation, information retrieval, text summarization and question answering. This year, proposals in related areas of Machine Intelligence, such as Computer Vision (CV), that share techniques with HLT are also being solicited. Research topics selected for investigation by teams in previous workshops may serve as good examples for your proposal. (See http://www.clsp.jhu.edu/workshops.)
Proposals on all topics of scientific interest to HLT and technically related areas are encouraged. Proposals that address one of the following long-term challenges are particularly encouraged.
• ROBUST TECHNOLOGY FOR SPEECH: Technologies like speech transcription, speaker identification, and language identification share a common weakness: accuracy degrades disproportionately with seemingly small changes in input conditions (microphone, genre, speaker, dialect, etc.), where humans are able to adapt quickly and effectively. The aim is to develop technology whose performance would be minimally degraded by input signal variations.
• KNOWLEDGE DISCOVERY FROM LARGE UNSTRUCTURED TEXT COLLECTIONS: Scaling natural language processing (NLP) technologies, including parsing, information extraction, question answering, and machine translation, to very large collections of unstructured or informal text, and domain adaptation in NLP, is of interest.
• VISUAL SCENE INTERPRETATION: New strategies are needed to parse visual scenes or generic (novel) objects, analyzing an image as a set of spatially related components. Such strategies may integrate global top-down knowledge of scene structure (e.g., generative models) with the kind of rich bottom-up, learned image features that have recently become popular for object detection. They will support both learning and efficient search for the best analysis.
• UNSUPERVISED AND SEMI-SUPERVISED LEARNING: Novel techniques that do not require extensive quantities of human annotated data to address any of the challenges above could potentially make large strides in machine performance as well as lead to greater robustness to changes in input conditions. Semi-supervised and unsupervised learning techniques with applications to HLT and CV are therefore of considerable interest.
An independent panel of experts will screen all received proposals for suitability. Results of this screening will be communicated no later than October 22, 2008. Authors passing this initial screening will be invited to Baltimore to present their ideas to a peer-review panel on November 7-9, 2008. It is expected that the proposals will be revised at this meeting to address any outstanding concerns or new ideas. Two or three research topics and the teams to tackle them will be selected for the 2009 workshop.
We attempt to bring the best researchers to the workshop to collaboratively pursue the selected topics for six weeks. Authors of successful proposals typically become the team leaders. Each topic brings together a diverse team of researchers and students. The senior participants come from academia, industry and government. Graduate student participants familiar with the field are selected in accordance with their demonstrated performance, usually by the senior researchers. Undergraduate participants, selected through a national search, will be rising seniors who are new to the field and have shown outstanding academic promise.
If you are interested in participating in the 2009 Summer Workshop we ask that you submit a one-page research proposal for consideration, detailing the problem to be addressed. If your proposal passes the initial screening, we will invite you to join us for the organizational meeting in Baltimore (as our guest) for further discussions aimed at consensus. If a topic in your area of interest is chosen as one of the two or three to be pursued next summer, we expect you to be available for participation in the six-week workshop. We are not asking for an ironclad commitment at this juncture, just a good faith understanding that if a project in your area of interest is chosen, you will actively pursue it.
8-4 . (2009-06-22) Third International Conference on Intelligent Technologies for Interactive Entertainment (Intetain 2009)
Intetain 2009,
Third International Conference on Intelligent Technologies for Interactive Entertainment
**********************************************************************
Call for Papers
==================
==== OVERVIEW ====
==================
The Human Media Interaction (HMI) department of the University of Twente organises INTETAIN 09, which intends to stimulate interaction among academic researchers and commercial developers of interactive entertainment systems. We are seeking long (full) and short (poster) papers as well as proposals for interactive demos. In addition, the conference organisation aims at an interactive hands-on session along the lines of the Design Garage held at INTETAIN 2005. Individuals who want to organise special sessions during INTETAIN 09 may contact the General Chair, Anton Nijholt (anijholt@cs.utwente.nl).
The global theme of this third edition of the international conference is “Playful interaction, with others and with the environment”.
Contributions may, for example, address this theme by focusing on the Supporting Device Technologies underlying interactive systems (mobile devices, home entertainment centers, haptic devices, wall screen displays, information kiosks, holographic displays, fog screens, distributed smart sensors, immersive screens and wearable devices), on the Intelligent Computational Technologies used to build the interactive systems, or by discussing the Interactive Applications for Entertainment themselves.
We seek novel, revolutionary, and exciting work in areas including but not limited to:
== Supporting Technology ==
* New hardware technology for interaction and entertainment
* Novel sensors and displays
* Haptic devices
* Wearable devices
== Intelligent Computational Technologies ==
* Animation and Virtual Characters
* Holographic Interfaces
* Adaptive Multimodal Presentations
* Creative language environments
* Affective User Interfaces
* Intelligent Speech Interfaces
* Tele-presence in Entertainment
* (Collaborative) User Models and Group Behavior
* Collaborative and virtual Environments
* Brain Computer Interaction
* Cross Domain User Models
* Augmented, Virtual and Mixed Reality
* Computer Graphics & Multimedia
* Pervasive Multimedia
* Robots
* Computational humor
== Interactive Applications for Entertainment ==
* Intelligent Interactive Games
* Emergent games
* Human Music Interaction
* Interactive Cinema
* Edutainment
* Urban Gaming
* Interactive Art
* Interactive Museum Guides
* Evaluation
* City and Tourism Explorers Assistants
* Shopping Assistants
* Interactive Real TV
* Interactive Social Networks
* Interactive Story Telling
* Personal Diaries, Websites and Blogs
* Comprehensive assisting environments for special populations (handicapped, children, elderly)
* Exertion games
===========================
==== SUBMISSION FORMAT ====
===========================
INTETAIN 09 accepts long papers and short poster papers as well as demo proposals accompanied by a two-page extended abstract. Accepted long and short papers will be published in the new Springer series LNICST: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering. The INTETAIN 09 organisation is currently working to secure a special issue of a journal, as was done for the 2005 edition of the conference.
Submissions should adhere to the LNICST instructions for authors, available from the INTETAIN 09 web site.
== Long papers ==
Submissions of a maximum of 12 pages that describe original research work not submitted or published elsewhere. Long papers will be orally presented at the conference.
== Short papers ==
Submissions of a maximum of 6 pages that describe original research work not submitted or published elsewhere. Short papers will be presented with a poster during the demo and poster session at the conference.
== Demos ==
Researchers are invited to submit proposals for demonstrations to be held during a special demo and poster session at the INTETAIN 09. For more information, see the Call for Demos below. Demo proposals may either be accompanied by a long or short paper submission, or by a two page extended abstract describing the demo. The extended abstracts will be published in a supplementary proceedings distributed during the conference.
=========================
==== IMPORTANT DATES ====
=========================
Submission deadline:
Monday, February 16, 2009
Notification:
Monday, March 16, 2009
Camera ready submission deadline:
Monday, March 30, 2009
Late demo submission deadline (extended abstract only!):
Monday, March 30, 2009
Conference:
June 22-24, 2009
===================
==== COMMITTEE ====
===================
General Program Chair:
Anton Nijholt, Human Media Interaction,
Local Chair:
Dennis Reidsma, Human Media Interaction,
Web Master and Publication Chair:
Hendri Hondorp, Human Media Interaction,
Steering Committee Chair:
Imrich Chlamtac, Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering
========================
==== CALL FOR DEMOS ====
========================
We actively seek proposals from both industry and academia for interactive demos to be held during a dedicated session at the conference. Demos may accompany a long or short paper. Alternatively, demos may be submitted at a later deadline with a short, two-page extended abstract explaining the demo and showing why it would be a worthwhile contribution to the INTETAIN 09 demo session.
== Format ==
Demo submissions should be accompanied by the following additional information:
* A short description of the setup and demo (two paragraphs)
* Requirements (hardware, power, network, space,
sound conditions, etc, time needed for setup)
* A sketch or photo of the setup
Videos showing the demonstration setup in action are very welcome.
== Review ==
Demo proposals will be reviewed by a review team that will take into account aspects such as novelty, relevance to the conference, coverage of topics and available resources.
== Topics ==
Topics for demo submissions include, but are not limited to:
* New technology for interaction and entertainment
* (serious) gaming
* New entertainment applications
* BCI
* Human Music Interaction
* Music technology
* Edutainment
* Exertion interfaces
============================
==== PROGRAM COMMITTEE ====
============================
Elisabeth Andre Augsburg University, Germany
Lora Aroyo Vrije Universiteit Amsterdam, the Netherlands
Regina Bernhaupt University of Salzburg, Austria
Kim Binsted University of Hawaii, USA
Andreas Butz University of Munich, Germany
Yang Cai Visual Intelligence Studio, CYLAB, Carnegie Mellon, USA
Antonio Camurri University of Genoa, Italy
Marc Cavazza University of Teesside, UK
Keith Cheverst University of Lancaster, UK
Drew Davidson CMU, Pittsburgh, USA
Barry Eggen University of Eindhoven, the Netherlands
Arjan Egges University of Utrecht, the Netherlands
Anton Eliens Vrije Universiteit Amsterdam, the Netherlands
Steven Feiner Columbia University, New York
Alois Ferscha University of Linz, Austria
Matthew Flagg Georgia Tech, USA
Jaap van den Herik University of Tilburg, the Netherlands
Dirk Heylen University of Twente, the Netherlands
Frank Kresin Waag Society, Amsterdam, the Netherlands
Antonio Krueger University of Muenster, Germany
Tsvi Kuflik University of Haifa, Israel
Markus Löckelt DFKI Saarbrücken, Germany
Henry Lowood University of Stanford, USA
Mark Maybury MITRE, Boston, USA
Oscar Mayora Create-Net Research Consortium, Italy
John-Jules Meijer University of Utrecht, the Netherlands
Louis-Philippe Morency Institute for Creative Technologies, USC, USA
Florian 'Floyd' Mueller University of Melbourne, Australia
Patrick Olivier University of Newcastle, UK
Paolo Petta Medical University of Vienna, Austria
Fabio Pianesi ITC-irst, Trento, Italy
Helmut Prendinger National Institute of Informatics, Tokyo, Japan
Matthias Rauterberg University of Eindhoven, the Netherlands
Isaac Rudomin Monterrey Institute of Technology, Mexico
Pieter Spronck University of Tilburg, the Netherlands
Oliviero Stock ITC-irst, Trento, Italy
Carlo Strapparava ITC-irst, Trento, Italy
Mariet Theune University of Twente, the Netherlands
Thanos Vasilikos University of Western Macedonia, Greece
Sean White Columbia University, USA
Woontack Woo Gwangju Institute of Science and Technology, Korea
Wijnand IJsselsteijn University of Eindhoven, the Netherlands
Massimo Zancanaro ITC-irst, Trento, Italy
8-5 . (2009-06-24) DIAHOLMIA 2009: THE 13TH WORKSHOP ON THE SEMANTICS AND PRAGMATICS OF DIALOGUE
**** NOTE: Deadline for 2-page submissions ****
**** (posters and demos) has been extended to May 7. ****
KTH, Stockholm, Sweden, 24-26 June, 2009
The SemDial series of workshops aims to bring together researchers working on the semantics and pragmatics of dialogue in fields such as artificial intelligence, computational linguistics, formal semantics/pragmatics, philosophy, psychology, and neuroscience. DiaHolmia will be the 13th workshop in the SemDial series, and will be organized at the Department of Speech, Music and Hearing, KTH (Royal Institute of Technology). KTH is Scandinavia's largest institution of higher education in technology and is located in central Stockholm (Holmia in Latin).
WEBSITE: www.diaholmia.org
DATES AND DEADLINES:
Full 8-page papers:
Submission due: 22 March 2009
Notification of acceptance: 25 April 2009
Final version due: 7 May 2009
2-page poster or demo descriptions:
Submission due: 25 April 2009
Notification of acceptance: 7 May 2009
DiaHolmia 2009: 24-26 June 2009 (Wednesday-Friday)
SCOPE:
We invite papers on all topics related to the semantics and pragmatics of dialogues, including, but not limited to:
- common ground/mutual belief
- turn-taking and interaction control
- dialogue and discourse structure
- goals, intentions and commitments
- natural language understanding/semantic interpretation
- reference, anaphora and ellipsis
- collaborative and situated dialogue
- multimodal dialogue
- extra- and paralinguistic phenomena
- categorization of dialogue phenomena in corpora
- designing and evaluating dialogue systems
- incremental, context-dependent processing
- reasoning in dialogue systems
- dialogue management
Full papers will be in the usual 8-page, 2-column format. There will also be poster and demo presentations. The selection of posters and demos will be based on 2-page descriptions. Selected descriptions will be included in the proceedings.
Details on programme and local arrangements will be announced at a later date.
The best accepted papers will be invited to submit extended versions to Dialogue & Discourse, the new open-access journal dedicated exclusively to research on language 'beyond the single sentence' (www.dialogue-and-discourse.org).
KEYNOTE SPEAKERS:
Harry Bunt (Tilburg University, Netherlands)
Nick Campbell (ATR, Japan)
Julia Hirschberg (Columbia University, New York)
Sverre Sjölander (Linköping University, Sweden)
PROGRAMME COMMITTEE:
Jan Alexandersson, Srinivas Bangalore, Ellen Gurman Bard, Anton Benz, Johan Bos, Johan Boye, Harry Bunt, Donna Byron, Jean Carletta, Rolf Carlson, Robin Cooper, Paul Dekker, Giuseppe Di Fabbrizio, Raquel Fernández, Claire Gardent, Simon Garrod, Jonathan Ginzburg, Pat Healey, Peter Heeman, Mattias Heldner, Joris Hulstijn, Michael Johnston, Kristiina Jokinen, Arne Jönsson, Alistair Knott, Ivana Kruijff-Korbayova, Staffan Larsson, Oliver Lemon, Ian Lewin, Diane Litman, Susann Luperfoy, Colin Matheson, Nicolas Maudet, Michael McTear, Wolfgang Minker, Philippe Muller, Fabio Pianesi, Martin Pickering, Manfred Pinkal, Paul Piwek, Massimo Poesio, Alexandros Potamianos, Matthew Purver, Manny Rayner, Hannes Rieser, Laurent Romary, Alex Rudnicky, David Schlangen, Stephanie Seneff, Ronnie Smith, Mark Steedman, Amanda Stent, Matthew Stone, David Traum, Marilyn Walker and Mats Wirén
ORGANIZING COMMITTEE:
Jens Edlund
Joakim Gustafson
Anna Hjalmarsson
Gabriel Skantze
8-6 . (2009-06-24) Speaker Odyssey Brno
Speaker Odyssey moving from South Africa to Czech Republic for 2010
A successful Speaker Odyssey workshop was held in 2008 in Stellenbosch, South Africa, co-hosted by ‘Spescom DataVoice’ and ‘Stellenbosch University Digital Signal Processing Group’ and co-chaired by Niko Brümmer and Prof. Johan du Preez.
The Speaker and Language Characterization workshop series now moves on to Brno University of Technology in Brno, Czech Republic, which will host the 7th workshop, taking place June 28 - July 1, 2010. Brno is the second largest city in the Czech Republic and the capital of Moravia. The city has a local airport and can easily be reached from the international airports of Prague (200 km) and Vienna (130 km). Odyssey will take place in the scenic campus of BUT's Faculty of Information Technology, featuring a medieval Carthusian monastery and modern lecture halls (see http://www.fit.vutbr.cz/).
While enjoying this setting, four days of intensive program await the participants. Judging by the 2008 workshop, the program will include lots of speaker recognition and classification with some language identification. As our website (http://www.speakerodyssey.com) states, topics of interest include speaker and language recognition (verification, identification, segmentation, and clustering): text-dependent and -independent speaker recognition; multispeaker training and detection; speaker characterization and adaptation; features for speaker recognition; robustness in channels; robust classification and fusion; speaker recognition corpora and evaluation; use of extended training data; speaker recognition with speech recognition; forensics; speaker and language confidence estimation.
In 2010, we also look forward to receiving submissions on multimodal and multimedia speaker recognition; dialect and accent recognition; speaker synthesis and transformation; biometrics; human recognition of speakers and languages; and commercial applications.
As usual, the NIST 2010 Speaker Recognition Evaluation (SRE) workshop will precede Odyssey and will also take place in Brno, 24-25 June 2010. For participants attending both the NIST workshop and Odyssey, social activities will be organized on the weekend of 26-27 June.
ISCA grants are available to enable students and young scientists to participate and we expect to have a large student participation due to the convenient location.
8-7 . (2009-07-01) Special issue on Scalable audio-content analysis (EURASIP Journal on Audio, Speech, and Music Processing)
Special issue on Scalable audio-content analysis
EURASIP Journal on Audio, Speech, and Music Processing
http://www.hindawi.com/journals/asmp/si/saca.html
The amount of easily-accessible audio, either in the form of large collections of audio or audio-video recordings or in the form of streaming media, has increased exponentially in recent times. However, this audio is not standardized: much of it is noisy, recordings are frequently not clean, and most of it is not labeled. The audio content covers a large range of categories including sports, music and songs, speech, and natural sounds. There is, therefore, a need for algorithms that allow us to make sense of these data, to store, process, categorize, summarize, identify, and retrieve them quickly and accurately.
In this special issue, we invite papers that present novel approaches to problems such as (but not limited to):
* Audio similarity
* Audio categorization
* Audio classification
* Indexing and retrieval
* Semantic tagging
* Audio event detection
* Summarization
* Mining
We are especially interested in work that addresses real-world issues such as:
* Scalable and efficient algorithms
* Audio analysis under noisy and real-world conditions
* Classification with uncertain labeling
* Invariance to recording conditions
* On-line and real-time analysis of audio
* Algorithms for very large audio databases
We encourage theoretical or application-oriented papers that highlight exploitation of such techniques in practical systems/products.
Before submission, authors should carefully read over the journal's Author Guidelines, which are located at http://www.hindawi.com/journals/asmp/guidelines.html. Authors should follow the EURASIP Journal on Audio, Speech, and Music Processing manuscript format described at the journal site http://www.hindawi.com/journals/asmp/. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://mts.hindawi.com/, according to the following timetable:
Manuscript Due: June 1, 2009
First Round of Reviews: September 1, 2009
Publication Date: December 1, 2009
Lead Guest Editor:
* Bhiksha Raj, Carnegie Mellon University, PA 15213, USA
Guest Editors:
* Paris Smaragdis, Advanced Technology Labs, Adobe Systems Inc., Newton, MA 02466, USA
* Malcolm Slaney, Yahoo! Research, Santa Clara, CA 95054; Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, CA 94305-8180, USA
* Chung-Hsien Wu, Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
* Liming Chen, Department of Mathematics and Informatics, Ecole Centrale de Lyon, University of Lyon, 69006 Lyon, France
* Hyoung-Gook Kim, Intelligent Multimedia Signal Processing Lab, Kwangwoon University, Seoul 139-701, South Korea
8-8 . (2009-07) 6th IJCAI workshop on knowledge and reasoning in practical dialogue systems
6th WORKSHOP ON KNOWLEDGE AND REASONING IN PRACTICAL DIALOGUE SYSTEMS
The sixth IJCAI workshop on "Knowledge and Reasoning in Practical Dialogue Systems" will focus on challenges of novel applications of practical dialogue systems. The venue for IJCAI 2009 is the Pasadena Conference Center, California, USA.
Topics addressed in the workshop include, but are not limited to, the following, particularly focusing on the challenges offered by these novel applications:
* What kinds of novel applications have a need for natural language dialogue interaction?
* How can authoring tools for dialogue systems be developed such that application designers who are not experts in natural language can make use of these systems?
* How can one easily adapt a dialogue system to a new application?
* Methods for design and development of dialogue systems.
* What are the extra constraints and resources of a dialogue system for these novel applications, that might not be present in a speech- or text-only dialogue system or even traditional multi-modal interfaces?
* Representation of language resources for dialogue systems.
* The role of ontologies in dialogue systems.
* Evaluation of dialogue systems: what to evaluate and how.
* Techniques and algorithms for adaptivity in dialogue systems on various levels, e.g. interpretation, dialogue strategy, and generation.
* Robustness and how to handle unpredictability.
* Architectures and frameworks for adaptive dialogue systems.
* Requirements and methods for development related to the architecture.
This is the sixth IJCAI workshop on "Knowledge and Reasoning in Practical Dialogue Systems". The first workshop was held at IJCAI in Stockholm in 1999. The second workshop was held at IJCAI 2001 in Seattle, with a focus on multimodal interfaces. The third workshop was held in Acapulco in 2003 and focused on the role and use of ontologies in multi-modal dialogue systems. The fourth workshop was held in Edinburgh in 2005 and focused on adaptivity in dialogue systems. The fifth workshop was held in Hyderabad, India, in 2007 and focused on dialogue systems for robots and virtual humans.
Who should attend
This workshop aims at bringing together researchers and practitioners who work on the development of communication models that support robust and efficient interaction in natural language, both for commercial dialogue systems and in basic research.
It should also be of interest to anyone studying dialogue and multimodal interfaces and how to coordinate different information sources. This involves theoretical as well as practical research, e.g. empirical evaluations of usability, formalization of dialogue phenomena, and development of intelligent interfaces for various applications, including such areas as robotics.
Workshop format
The workshop will be kept small, with a maximum of 40 participants. Preference will be given to active participants selected on the basis of their submitted papers.
Each paper will be given ample time for discussion, more than what is customary at a conference. As said above, we encourage contributions of a critical or comparative nature that provide fuel for discussion. We also invite people to share their experiences of implementing and coordinating knowledge modules in their dialogue systems, and integrating dialogue components into other applications.
Important Dates
* Submission deadline: March 6, 2009
* Notification date: April 17, 2009
* Accepted paper submission deadline: May 8, 2009
* Workshop: July 2009
Submissions
Papers may be any of the following types:
* Regular papers of length 4-8 pages, for regular presentation.
* Short papers, with brief results or position statements, of length up to 4 pages, for brief or panel presentation.
* Extended papers, with extra details on system architecture, background theory or data presentation, of up to 12 pages, for regular presentation.
Papers should include authors' names and affiliations and full references (not anonymous submission). All papers should be formatted according to the AAAI formats: AAAI Press Author Instructions.
Submission procedure
Papers should be submitted via the web by registering at the following address:
http://www.easychair.org/conferences/?conf=krpd09
Organizing Committee
Arne Jönsson (Chair), Department of Computer and Information Science, Linköping University, S-581 83 Linköping, Sweden; tel: +46 13 281717; fax: +46 13 142231; email: arnjo@ida.liu.se
David Traum (Co-Chair), Institute for Creative Technologies, University of Southern California, 13274 Fiji Way, Marina del Rey, CA 90405, USA; tel: +1 (310) 574-5729; fax: +1 (310) 574-5725; email: traum@ict.usc.edu
Jan Alexandersson (Co-Chair), German Research Center for Artificial Intelligence, DFKI GmbH, Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany; tel: +49-681-3025347; fax: +49-681-3025341; email: jan.alexandersson@dfki.de
Ingrid Zukerman (Co-Chair), Faculty of Information Technology, Monash University, Clayton, Victoria 3800, Australia; tel: +61 3 9905-5202; fax: +61 3 9905-5146; email: ingrid@csse.monash.edu.au
Programme committee
Dan Bohus, USA; Johan Bos, Italy; Sandra Carberry, USA; Kallirroi Georgila, USA; Genevieve Gorrell, UK; Joakim Gustafson, Sweden; Yasuhiro Katagiri, Japan; Ali Knott, New Zealand; Kazunori Komatani, Japan; Staffan Larsson, Sweden; Anton Nijholt, Netherlands; Tim Paek, USA; Antoine Raux, USA; Candace Sidner, USA; Amanda Stent, USA; Marilyn Walker, UK; Jason Williams, USA
Web page: http://www.ida.liu.se/~arnjo/Ijcai09ws/
Contact: Arne Jönsson, tel: +46 13 281717
8-9 . (2009-07-09) MULTIMOD 2009 Multimodality of communication in children: gestures, emotions, language and cognition
The Multimod 2009 conference - Multimodality of communication in children:
gestures, emotions, language and cognition is being organized jointly by
psychologists and linguists from the Universities of Toulouse (Toulouse II)
and Grenoble (Grenoble III) and will take place in Toulouse (France) from
Thursday 9th July to Saturday 11th July 2009.
The aim of the conference will be to assess research on theories, concepts
and methods relating to multimodality in children.
The invited speakers are :
- Susan Goldin-Meadow (University of Chicago, USA),
- Jana Iverson (University of Pittsburg, USA),
- Paul Harris (Harvard University, USA),
- Judy Reilly (San Diego State University, USA),
- Gwyneth Doherty-Sneddon (University of Stirling, UK),
- Marianne Gullberg (MPI Nijmegen, The Netherlands).
We invite you to submit proposals for symposia, individual papers or posters
of original, previously unpublished research on all aspects of multimodal
communication in children, including:
- Gestures and language development, both typical and atypical
- Emotional development, both typical and atypical
- Multimodality of communication and bilingualism
- Gestural and/or emotional communication in non-human and human primates
- Multimodality of communication and didactics
- Multimodality of communication in the classroom
- Multimodality of communication and brain development
- Prosodic (emotional) aspects of language and communication development
- Pragmatic aspects of multimodality development
Please visit the conference website
http://w3.eccd.univ-tlse2.fr/multimod2009/ to find all useful Information
about submissions (individual papers, posters and symposia); the deadline
for submissions is December 15th, 2008.
8-10 . (2009-08-02) ACL-IJCNLP 2009 1st Call for Papers
ACL-IJCNLP 2009 1st Call for Papers
Joint Conference of
the 47th Annual Meeting of the Association for Computational Linguistics
and
the 4th International Joint Conference on Natural Language Processing of
the Asian Federation of Natural Language Processing
August 2 - 7, 2009
Singapore
http://www.acl-ijcnlp-2009.org
Full Paper Submission Deadline: February 22, 2009 (Sunday)
Short Paper Submission Deadline: April 26, 2009 (Sunday)
For the first time, the flagship conferences of the Association for
Computational Linguistics (ACL) and the Asian Federation of Natural
Language Processing (AFNLP) -- the ACL and IJCNLP -- are jointly
organized as a single event. The conference will cover a broad
spectrum of technical areas related to natural language and
computation. ACL-IJCNLP 2009 will include full papers, short papers,
oral presentations, poster presentations, demonstrations, tutorials,
and workshops. The conference invites the submission of papers on
original and unpublished research on all aspects of computational
linguistics.
Important Dates:
* Feb 22, 2009 Full paper submissions due;
* Apr 12, 2009 Full paper notification of acceptance;
* Apr 26, 2009 Short paper submissions due;
* May 17, 2009 Camera-ready full papers due;
* May 31, 2009 Short Paper notification of acceptance;
* Jun 7, 2009 Camera-ready short papers due;
* Aug 2-7, 2009 ACL-IJCNLP 2009
Topics of interest:
Topics include, but are not limited to:
* Phonology/morphology, tagging and chunking, and word segmentation
* Grammar induction and development
* Parsing algorithms and implementations
* Mathematical linguistics and grammatical formalisms
* Lexical and ontological semantics
* Formal semantics and logic
* Word sense disambiguation
* Semantic role labeling
* Textual entailment and paraphrasing
* Discourse, dialogue, and pragmatics
* Language generation
* Summarization
* Machine translation
* Information retrieval
* Information extraction
* Sentiment analysis and opinion mining
* Question answering
* Text mining and natural language processing applications
* NLP in vertical domains, such as biomedical, chemical and legal text
* NLP on noisy unstructured text, such as email, blogs, and SMS
* Spoken language processing
* Speech recognition and synthesis
* Spoken language understanding and generation
* Language modeling for spoken language
* Multimodal representations and processing
* Rich transcription and spoken information retrieval
* Speech translation
* Statistical and machine learning methods
* Language modeling for text processing
* Lexicon and ontology development
* Treebank and corpus development
* Evaluation methods and user studies
* Science of annotation
Submissions:
Full Papers: Submissions must describe substantial, original,
completed and unpublished work. Wherever appropriate, concrete
evaluation and analysis should be included. Submissions will be judged
on correctness, originality, technical strength, significance,
relevance to the conference, and interest to the attendees. Each
submission will be reviewed by at least three program committee
members.
Full papers may consist of up to eight (8) pages of content, plus one
extra page for references, and will be presented orally or as a poster
presentation as determined by the program committee. The decisions as
to which papers will be presented orally and which as poster
presentations will be based on the nature rather than on the quality
of the work. There will be no distinction in the proceedings between
full papers presented orally and those presented as poster
presentations.
The deadline for full papers is February 22, 2009 (GMT+8). Submission
is electronic using paper submission software at:
https://www.softconf.com/acl-ijcnlp09/papers
Short papers: ACL-IJCNLP 2009 solicits short papers as well. Short
paper submissions must describe original and unpublished work. The
short paper deadline is just about three months before the conference
to accommodate the following types of papers:
* A small, focused contribution
* Work in progress
* A negative result
* An opinion piece
* An interesting application nugget
Short papers will be presented in one or more oral or poster sessions,
and will be given four pages in the proceedings. While short papers
will be distinguished from full papers in the proceedings, there will
be no distinction in the proceedings between short papers presented
orally and those presented as poster presentations. Each short paper
submission will be reviewed by at least two program committee members.
The deadline for short papers is April 26, 2009 (GMT + 8). Submission
is electronic using paper submission software at:
https://www.softconf.com/acl-ijcnlp09/shortpapers
Format:
Full paper submissions should follow the two-column format of
ACL-IJCNLP 2009 proceedings without exceeding eight (8) pages of
content plus one extra page for references. Short paper submissions
should also follow the two-column format of ACL-IJCNLP 2009
proceedings, and should not exceed four (4) pages, including
references. We strongly recommend the use of ACL LaTeX style files or
Microsoft Word style files tailored for this year's conference, which
are available on the conference website under Information for Authors.
Submissions must conform to the official ACL-IJCNLP 2009 style
guidelines, which are contained in the style files, and they must be
electronic in PDF.
As the reviewing will be blind, the paper must not include the
authors' names and affiliations. Furthermore, self-references that
reveal the author's identity, e.g., "We previously showed (Smith,
1991) ...", must be avoided. Instead, use citations such as "Smith
previously showed (Smith, 1991) ...". Papers that do not conform to
these requirements will be rejected without review.
Multiple-submission policy:
Papers that have been or will be submitted to other meetings or
publications must provide this information at submission time. If
ACL-IJCNLP 2009 accepts a paper, authors must notify the program
chairs by April 19, 2009 (full papers) or June 7, 2009 (short papers),
indicating which meeting they choose for presentation of their work.
ACL-IJCNLP 2009 cannot accept for publication or presentation work
that will be (or has been) published elsewhere.
Mentoring Service:
ACL is providing a mentoring (coaching) service for authors from
regions of the world where English is less emphasized as a language of
scientific exchange. Many authors from these regions, although able to
read the scientific literature in English, have little or no
experience in writing papers in English for conferences such as the
ACL meetings. The service will be arranged as follows. A set of
potential mentors will be identified by Mentoring Service Chairs Ng,
Hwee Tou (NUS, Singapore) and Reeder, Florence (Mitre, USA), who will
organize this service for ACL-IJCNLP 2009. If you would like to take
advantage of the service, please upload your paper in PDF format by
January 14, 2009 for long papers and March 18 2009 for short papers
using the paper submission software for mentoring service which will
be available at conference website.
An appropriate mentor will be assigned to your paper and the mentor
will get back to you by February 8 for long papers or April 12 for
short papers, at least 2 weeks before the deadline for the submission
to the ACL-IJCNLP 2009 program committee.
Please note that this service is for the benefit of the authors as
described above. It is not a general mentoring service for authors to
improve the technical content of their papers.
If you have any questions about this service please feel free to send
a message to Ng, Hwee Tou (nght[at]comp.nus.edu.sg) or Reeder,
Florence (floreederacl[at]yahoo.com).
General Conference Chair:
Su, Keh-Yih (Behavior Design Corp., Taiwan; kysu[at]bdc.com.tw)
Program Committee Chairs:
Su, Jian (Institute for Infocomm Research, Singapore;
sujian[at]i2r.a-star.edu.sg)
Wiebe, Janyce (University of Pittsburgh, USA; janycewiebe[at]gmail.com)
Area Chairs:
Agirre, Eneko (University of Basque Country, Spain; e.agirre[at]ehu.es)
Ananiadou, Sophia (University of Manchester, UK;
sophia.ananiadou[at]manchester.ac.uk)
Belz, Anja (University of Brighton, UK; a.s.belz[at]itri.brighton.ac.uk)
Carenini, Giuseppe (University of British Columbia, Canada;
carenini[at]cs.ubc.ca)
Chen, Hsin-Hsi (National Taiwan University, Taiwan; hh_chen[at]csie.ntu.edu.tw)
Chen, Keh-Jiann (Academia Sinica, Taiwan; kchen[at]iis.sinica.edu.tw)
Curran, James (University of Sydney, Australia; james[at]it.usyd.edu.au)
Gao, Jian Feng (MSR, USA; jfgao[at]microsoft.com)
Harabagiu, Sanda (University of Texas at Dallas, USA, sanda[at]hlt.utdallas.edu)
Koehn, Philipp (University of Edinburgh, UK; pkoehn[at]inf.ed.ac.uk)
Kondrak, Grzegorz (University of Alberta, Canada; kondrak[at]cs.ualberta.ca)
Meng, Helen Mei-Ling (Chinese University of Hong Kong, Hong Kong;
hmmeng[at]se.cuhk.edu.hk )
Mihalcea, Rada (University of North Texas, USA; rada[at]cs.unt.edu)
Poesio, Massimo (University of Trento, Italy; poesio[at]disi.unitn.it)
Riloff, Ellen (University of Utah, USA; riloff[at]cs.utah.edu)
Sekine, Satoshi (New York University, USA; sekine[at]cs.nyu.edu)
Smith, Noah (CMU, USA; nasmith[at]cs.cmu.edu)
Strube, Michael (EML Research, Germany; strube[at]eml-research.de)
Suzuki, Jun (NTT, Japan; jun[at]cslab.kecl.ntt.co.jp)
Wang, Hai Feng (Toshiba, China; wanghaifeng[at]rdc.toshiba.com.cn)
8-11 . (2009-08-10) 16th International ECSE Summer School in Novel Computing (Joensuu, FINLAND)
Call for participation:
16th International ECSE Summer School in
Novel Computing (Joensuu, FINLAND)
=========================================
University of Joensuu, Finland, announces the 16th International ECSE Summer School in Novel Computing:
http://cs.joensuu.fi/ecse/
The summer school includes three independent courses, one in June and two in August:
June 8-10
Jean-Luc LeBrun: Scientific Writing Skills
http://www.scientific-writing.com/
"Publish or perish, reviewers decide.
Be cited or not, readers decide"
Registration deadline: May 20, 2009
August 10-14 -- two parallel courses:
Douglas A. Reynolds (MIT Lincoln Lab)
"Speaker and Language Recognition"
Paul De Bra (Eindhoven Univ Technology)
"Platforms for Stories-Based Learning
in Future Schools"
Early registration deadline: June 15, 2009
In addition to high-quality lectures, the summer school offers an inspiring learning environment and a relaxed social program, including the Finnish sauna, in the middle of the North Karelia region. Joensuu is located next to the Russian border, about 400 km north-east of Helsinki, the capital of the country. It is a vibrant student city, with over 6000 students at the University of Joensuu and 3500 at North Karelia Polytechnic. The European Forest Institute, the University and many other institutes and export enterprises such as Abloy, LiteonMobile and John Deere give Joensuu an international flavour.
The summer school is organized by the Department of Computer Science and Statistics, University of Joensuu, Finland (http://cs.joensuu.fi). The research areas of the department include speech and image processing, educational technology, color research, and psychology of programming.
More information:
WWW: http://cs.joensuu.fi/ecse/
e-mail: ecse09@cs.joensuu.fi
8-12 . (2009-09) Emotion challenge INTERSPEECH 2009
8-13 . (2009-09-06) Special session at Interspeech 2009:adaptivity in dialog systems
Call for papers (submission deadline: Friday 17 April 2009)
Special Session: "Machine Learning for Adaptivity in Spoken Dialogue Systems"
at Interspeech 2009, Brighton, U.K., http://www.interspeech2009.org/
Session chairs: Oliver Lemon, Edinburgh University, and Olivier Pietquin, Supélec - IMS Research Group
In the past decade, research in the field of Spoken Dialogue Systems (SDS) has experienced increasing growth, and new applications include interactive mobile search, tutoring, and troubleshooting systems (e.g. fixing a broken internet connection). The design and optimization of robust SDS for such tasks requires the development of dialogue strategies which can automatically adapt to different types of users (novice/expert, youth/senior) and noise conditions (room/street). New statistical learning techniques are emerging for training and optimizing adaptive speech recognition, spoken language understanding, dialogue management, natural language generation, and speech synthesis in spoken dialogue systems.
Among machine learning techniques for spoken dialogue strategy optimization, reinforcement learning using Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs) has become a particular focus. We therefore solicit papers on new research in the areas of:
- Adaptive dialogue strategies and adaptive multimodal interfaces
- User simulation techniques for adaptive strategy learning and testing
- Rapid adaptation methods
- Reinforcement learning of dialogue strategies
- Partially Observable MDPs in dialogue strategy optimization
- Statistical spoken language understanding in dialogue systems
- Machine learning and context-sensitive speech recognition
- Learning for adaptive natural language generation in dialogue
- Corpora and annotation for machine learning approaches to SDS
- Machine learning for adaptive multimodal interaction
- Evaluation of adaptivity in statistical approaches to SDS and user simulation
Important Dates
Full paper submission deadline: Friday 17 April 2009
Notification of paper acceptance: Wednesday 17 June 2009
Conference dates: 6-10 September 2009
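To make the reinforcement-learning idea behind this call concrete, here is a minimal, hypothetical sketch: tabular Q-learning of a dialogue strategy for a toy two-slot form-filling MDP with a simulated user. All names, rewards and parameters are invented for illustration and do not come from the call or from any specific system.

```python
import random

# Toy slot-filling dialogue MDP: the state is the number of filled slots
# (0, 1, 2); the system either asks for the next slot or closes the
# dialogue. A simulated user answers an "ask" successfully with
# probability 0.8.
N_SLOTS, ACTIONS = 2, ("ask", "close")

def step(state, action, rng):
    """Return (next_state, reward, done) for the simulated environment."""
    if action == "close":
        # Closing with all slots filled succeeds; closing early fails.
        return state, (10.0 if state == N_SLOTS else -10.0), True
    filled = state + (1 if rng.random() < 0.8 else 0)
    return min(filled, N_SLOTS), -1.0, False  # -1 per turn: dialogue-length cost

def train(episodes=5000, alpha=0.2, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy exploration policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_SLOTS + 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)           # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            nxt, reward, done = step(state, action, rng)
            target = reward if done else reward + gamma * max(
                q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_SLOTS + 1)}
print(policy)
```

The learned greedy policy asks until both slots are filled and only then closes, i.e. the turn penalty and task reward together shape an adaptive strategy; papers in the session address far richer versions of this setup (POMDP belief tracking, user simulation, etc.).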
8-14 . (2009-09-07)CfP Information Retrieval and Information Extraction for Less Resourced Languages
8-15 . (2009-09-09) CfP IDP 09 Discourse-Prosody Interface
IDP 09 : CALL FOR PAPERS
Discourse – Prosody Interface
Paris, September 9-10-11, 2009
The third round of the “Discourse – Prosody Interface” Conference will be hosted by the Laboratoire de Linguistique Formelle (UMR 7110 / LLF), the Equipe CLILLAC-ARP (EA 3967) and the Linguistic Department (UFRL) of the University of Paris-Diderot (Paris 7), on September 9-10-11, 2009 in Paris. The first round was organized by the Laboratoire Parole et Langage (UMR 6057 /LPL) in September 2005, in Aix-en-Provence. The second took place in Geneva in September 2007 and was organized by the Department of Linguistics at the University of Geneva, in collaboration with the École de Langue et Civilisation Françaises at the University of Geneva, and the VALIBEL research centre at the Catholic University of Louvain.
The third round will be held at the Paris Center of the University of Chicago, 6, rue Thomas Mann, in the XIIIth arrondissement, near the Bibliothèque François Mitterrand (BNF).
The Conference is addressed to researchers in prosody, phonology, phonetics, pragmatics, discourse analysis and also psycholinguistics, who are particularly interested in the relations between prosody and discourse. The participants may develop their research programmes within different theoretical paradigms (formal approaches to phonology and semantics/pragmatics, conversation analysis, descriptive linguistics, etc.). For this third edition, special attention will be given to research work that proposes a formal analysis of the Discourse-Prosody interface.
So as to favour convergence among contributions, the IDP09 conference will focus on:
* Prosody, its parts and discourse:
- How to analyze the interaction between the different prosodic subsystems (accentuation, intonation, rhythm; register changes or voice quality)?
- How to model the contribution of each subsystem to the global interpretation of discourse?
- How to describe and analyze prosodic facts, and at which level (phonetic vs. phonological)?
* Prosodic units & discourse units
- What are the relevant units for discourse or conversation analysis? What are their prosodic properties?
- How is the embedding of utterances in discourse marked syntactically or prosodically? What are the consequences for the modelling of syntax & prosody?
* Prosody and context(s)
- What is the contribution of the context in the analysis of prosody in discourse?
- How can the relations between prosody and context(s) be modelled?
* Acquisition of the relations between prosody & discourse in L1 and L2
- How are the relations between prosody & discourse acquired in L1 and in L2?
- Which methodological tools could best describe and transcribe these processes?
Guest speakers :
* Diane Blakemore (School of Languages, University of Salford, United Kingdom)
* Piet Mertens (Department of Linguistics, K.U Leuven, Belgium)
* Hubert Truckenbrodt (ZAS, Zentrum für Allgemeine Sprachwissenschaft, Berlin,
Germany)
The conference languages are English and French. Studies may concern any language.
Submissions consist of an anonymous two-page abstract (plus an extra page for references and figures), in A4 format with Times 12 font, written in either English or French, uploaded as a PDF file at the following address: http://www.easychair.org/conferences/?conf=idp09 .
Authors' names and affiliations should be given as requested, but must not appear in the PDF file.
If you have any question concerning the submission procedure or encounter any problem, please send an email to the following address: idp09@linguist.jussieu.fr
Authors may submit as many proposals as they wish.
The proposals will be evaluated anonymously by the scientific committee.
Schedule
• Submission deadline: April 26, 2009
• Notification of acceptance: June 8, 2009
• Conference (IDP 09): September 9-11, 2009
Further information is available on the conference website: http://idp09.linguist.univ-paris-diderot.fr
8-16 . (2009-09-11) SIGDIAL 2009 CONFERENCE
10th Annual Meeting of the Special Interest Group
on Discourse and Dialogue
Queen Mary University of London, UK September 11-12, 2009
(right after Interspeech 2009)
Submission Deadline: April 24, 2009
PRELIMINARY CALL FOR PAPERS
The SIGDIAL venue provides a regular forum for the presentation of
cutting edge research in discourse and dialogue to both academic and
industry researchers. Due to the success of the nine previous SIGDIAL
workshops, SIGDIAL is now a conference. The conference is sponsored by
the SIGDIAL organization, which serves as the Special Interest Group in
discourse and dialogue for both ACL and ISCA. SIGDIAL 2009 will be
co-located with Interspeech 2009 as a satellite event.
In addition to presentations and system demonstrations, the program
includes an invited talk by Professor Janet Bavelas of the University of
Victoria, entitled "What's unique about dialogue?".
TOPICS OF INTEREST
We welcome formal, corpus-based, implementation, experimental, or
analytical work on discourse and dialogue including, but not restricted
to, the following themes:
1. Discourse Processing and Dialogue Systems
Discourse semantic and pragmatic issues in NLP applications such as text
summarization, question answering, information retrieval including
topics like:
- Discourse structure, temporal structure, information structure;
- Discourse markers, cues and particles and their use;
- (Co-)Reference and anaphora resolution, metonymy and bridging resolution;
- Subjectivity, opinions and semantic orientation;
Spoken, multi-modal, and text/web based dialogue systems including
topics such as:
- Dialogue management models;
- Speech and gesture, text and graphics integration;
- Strategies for preventing, detecting or handling miscommunication
(repair and correction types, clarification and under-specificity,
grounding and feedback strategies);
- Utilizing prosodic information for understanding and for disambiguation;
2. Corpora, Tools and Methodology
Corpus-based and experimental work on discourse and spoken, text-based
and multi-modal dialogue including its support, in particular:
- Annotation tools and coding schemes;
- Data resources for discourse and dialogue studies;
- Corpus-based techniques and analysis (including machine learning);
- Evaluation of systems and components, including methodology, metrics
and case studies;
3. Pragmatic and/or Semantic Modeling
The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a
single sentence) including the following issues:
- The semantics/pragmatics of dialogue acts (including those which are
less studied in the semantics/pragmatics framework);
- Models of discourse/dialogue structure and their relation to
referential and relational structure;
- Prosody in discourse and dialogue;
- Models of presupposition and accommodation; operational models of
conversational implicature.
SUBMISSIONS
The program committee welcomes the submission of long papers for full
plenary presentation as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations, followed by posters and demonstrations.
- Long papers must be no longer than 8 pages, including title, examples,
references, etc. In addition to this, two additional pages are allowed
as an appendix which may include extended example discourses or
dialogues, algorithms, graphical representations, etc.
- Short papers and demo descriptions should be 4 pages or less
(including title, examples, references, etc.).
Please use the official ACL style files:
http://ufal.mff.cuni.cz/acl2007/styles/
Papers that have been or will be submitted to other meetings or
publications must provide this information (see submission format).
SIGDIAL 2009 cannot accept for publication or presentation work that
will be (or has been) published elsewhere. Any questions regarding
submissions can be sent to the General Co-Chairs.
Authors are encouraged to make illustrative materials available, on the
web or otherwise. Examples might include excerpts of recorded
conversations, recordings of human-computer dialogues, interfaces to
working systems, and so on.
BEST PAPER AWARDS
In order to recognize significant advancements in dialog and discourse
science and technology, SIGDIAL will (for the first time) recognize a
BEST PAPER AWARD and a BEST STUDENT PAPER AWARD. A selection committee
consisting of prominent researchers in the fields of interest will
select the recipients of the awards.
IMPORTANT DATES (SUBJECT TO CHANGE)
Submission: April 24, 2009
Workshop: September 11-12, 2009
WEBSITES
SIGDIAL 2009 conference website:
http://www.sigdial.org/workshops/workshop10/
SIGDIAL organization website: http://www.sigdial.org/
Interspeech 2009 website: http://www.interspeech2009.org/
ORGANIZING COMMITTEE
For any questions, please contact the appropriate members of the
organizing committee:
GENERAL CO-CHAIRS
Pat Healey (Queen Mary University of London): ph@dcs.qmul.ac.uk
Roberto Pieraccini (SpeechCycle): roberto@speechcycle.com
TECHNICAL PROGRAM CO-CHAIRS
Donna Byron (Northeastern University): dbyron@ccs.neu.edu
Steve Young (University of Cambridge): sjy@eng.cam.ac.uk
LOCAL CHAIR
Matt Purver (Queen Mary University of London): mpurver@dcs.qmul.ac.uk
SIGDIAL PRESIDENT
Tim Paek (Microsoft Research): timpaek@microsoft.com
SIGDIAL VICE PRESIDENT
Amanda Stent (AT&T Labs - Research): amanda.stent@gmail.com
Matthew Purver - http://www.dcs.qmul.ac.uk/~mpurver/
Senior Research Fellow
Interaction, Media and Communication
Department of Computer Science
Queen Mary University of London, London E1 4NS, UK
8-17 . (2009-09-11) Int. Workshop on spoken language technology for development: from promise to practice.
International Workshop on Spoken Language Technology for Development
- from promise to practice
Venue - The Abbey Hotel, Tintern, UK
Dates - 11-12 September 2009
Following on from a successful special session at SLT 2008 in Goa, this workshop invites participants with an interest in SLT4D and with expertise and experience in any of the following areas:
- Development of speech technology for resource-scarce languages
- SLT deployments in the developing world
- HCI in a developing world context
- Successful ICT4D interventions
The aim of the workshop is to develop a "Best practice in developing and deploying speech systems for developmental applications". It is also hoped that the participants will form the core of an open community which shares tools, insights and methodologies for future SLT4D projects.
If you are interested in participating in the workshop, please submit a 2-4 page position paper explaining how your expertise and experience might be applied to SLT4D, formatted according to the Interspeech 2009 guidelines, to Roger Tucker at roger@outsideecho.com by 30th April 2009.
Important Dates:
Papers due: 30th April 2009
Acceptance Notification: 10th June 2009
Early Registration deadline: 3rd July 2009
Workshop: 11-12 September 2009
Further details can be found on the workshop website at www.llsti.org/SLT4D-09
8-18 . (2009-09-11) ACORNS Workshop Brighton UK
8-19 . (2009-09-13)Young Researchers' Roundtable on Spoken Dialogue Systems 2009 London
Young Researchers' Roundtable on Spoken Dialogue Systems 2009
13th-14th September, at Queen Mary University of London
*Overview and goals*
The Young Researchers' Roundtable on Spoken Dialogue Systems (YRRSDS) is an annual workshop designed for post-graduate students, post-docs and junior researchers working in research related to spoken dialogue systems in both academia and industry. The roundtable provides an open forum where participants can discuss their research interests, current work and future plans. The workshop has three main goals:
- to offer an interdisciplinary forum for creative thinking about current issues in spoken dialogue systems research
- to provide young researchers with career advice from senior researchers and professionals from both academic and industrial backgrounds
- to develop a stronger international network of young researchers working in the field.
(Important note: There is no age restriction on participation in the workshop; the word 'young' is meant to indicate that it is targeted towards researchers who are at a relatively early stage in their career.)
*Topics and sessions*
Potential roundtable discussion topics include: best practices for conducting and evaluating user studies of spoken dialogue systems, the prosody of conversation, methods of analysis for dialogue systems, conversational agents and virtual characters, cultural adaptation of dialogue strategies, and user modelling.
YRRSDS’09 will feature:
- a senior researcher panel (both academia and industry)
- a demo and poster session
- a special session on frameworks and grand challenges for dialogue system evaluation
- a special session on EU projects related to spoken dialogue systems.
Previous workshops were held in Columbus (ACL 2008), Antwerp (INTERSPEECH 2007), Pittsburgh (INTERSPEECH 2006) and Lisbon (INTERSPEECH 2005).
*Workshop date*
YRRSDS'09 will take place on September 13th and 14th, 2009 (immediately after Interspeech and SIGDial 2009).
*Workshop location*
The 2009 YRRSDS will be held at Queen Mary University of London, one of the UK's leading research-focused higher education institutions. Queen Mary’s Mile End campus began life in 1887 as the People's Palace, a philanthropic endeavour to provide east Londoners with education and social activities, and is located in the heart of London's vibrant East End.
*Grants*
YRRSDS 2009 will be supported this year by ISCA, the International Speech Communication Association. ISCA will consider applications for a limited number of travel grants. Applications should be sent directly to grants@isca-speech.org; details of the application process and forms are available from http://www.isca-speech.org/grants.html. We are also negotiating with other supporters the possibility of offering a limited number of additional travel grants to students.
*Endorsements*
SIGDial, ISCA, Dialogs on Dialogs
*Sponsors*
Orange, Microsoft Research, AT&T
*Submission process*
Participants will be asked to submit a 2-page position paper based on a template provided by the organising committee. In their papers, authors will include a short biographical sketch, a brief statement of research interests, a description of their research work, and a short discussion of what they believe to be the most significant and interesting issues in spoken dialogue systems today and in the near future. Participants will also provide three suggestions for discussion topics.
Workshop attendance will be limited to 50 participants. Submissions will be accepted on a first-come-first-served basis. Submissions will be collated and made available to participants. We also plan to publish the position papers and presentations from the workshop on the web, subject to any sponsor or publisher constraints.
*Important Dates*
- Submissions open: May 15, 2009
- Submissions deadline: June 30, 2009
- Final notification: July 31, 2009
- Registration begins: TBD
- Registration deadline: TBD
- Interspeech: 6-10 September 2009
- SIGDial: 11-12 September, 2009
- YRR: 13-14 September, 2009
*More information on related websites*
- Young Researchers' Roundtable website: http://www.yrrsds.org/
- SIGDIAL 2009 conference website: http://www.sigdial.org/workshops/workshop10/
- Interspeech 2009 website: http://www.interspeech2009.org/
*Organising Committee*
- David Díaz Pardo de Vera, Polytechnic University of Madrid, Spain
- Milica Gašić, Cambridge University, UK
- François Mairesse, Cambridge University, UK
- Matthew Marge, Carnegie Mellon University, USA
- Joana Paulo Pardal, Technical University Lisbon, Portugal
- Ricardo Ribeiro, ISCTE, Lisbon, Portugal
*Local Organisers*
- Arash Eshghi, Queen Mary University of London, UK
- Christine Howes, Queen Mary University of London, UK
- Gregory Mills, Queen Mary University of London, UK
*Scientific Advisory Committee*
- Hua Ai, University of Pittsburgh, USA
- James Allen, University of Rochester, USA
- Alan Black, Carnegie Mellon University, USA
- Dan Bohus, Microsoft Research, USA
- Philippe Bretier, Orange Labs, France
- Robert Dale, Macquarie University, Australia
- Maxine Eskenazi, Carnegie Mellon University, USA
- Sadaoki Furui, Tokyo Institute of Technology, Japan
- Luis Hernández Gómez, Polytechnic University of Madrid, Spain
- Carlos Gómez Gallo, University of Rochester, USA
- Kristiina Jokinen, University of Helsinki, Finland
- Nuno Mamede, Spoken Language Systems Lab, INESC-ID, Portugal
- David Martins de Matos, Spoken Language Systems Lab, INESC-ID, Portugal
- João Paulo Neto, Voice Interaction, Portugal
- Tim Paek, Microsoft Research
- Antoine Raux, Honda Research, USA
- Robert J. Ross, Universität Bremen, Germany
- Alexander Rudnicky, Carnegie Mellon University, USA
- Mary Swift, University of Rochester, USA
- Isabel Trancoso, Spoken Language Systems Lab, INESC-ID, Portugal
- Tim Weale, The Ohio State University, USA
- Jason Williams, AT&T, USA
- Sabrina Wilske, Lang Tech and Cognitive Sys at Saarland University, Germany
- Andi Winterboer, Universiteit van Amsterdam, Netherlands
- Craig Wootton, University of Ulster, Belfast, Northern Ireland
- Steve Young, University of Cambridge, United Kingdom
8-20 . (2009-09-14) 7th International Conference on Recent Advances in Natural Language Processing
RANLP-09 Second Call for Papers and Submission Information
"RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING"
International Conference RANLP-2009
September 14-16, 2009
Borovets, Bulgaria
http://www.lml.bas.bg/ranlp2009
Further to the successful and highly competitive 1st, 2nd, 3rd, 4th, 5th
and 6th conferences 'Recent Advances in Natural Language Processing'
(RANLP), we are pleased to announce the 7th RANLP conference to be held in
September 2009.
The conference will take the form of addresses from invited keynote
speakers plus peer-reviewed individual papers. There will also be an
exhibition area for poster and demo sessions.
We invite papers reporting on recent advances in all aspects of Natural
Language Processing (NLP). The conference topics are announced at the
RANLP-09 website. All accepted papers will be published in the full
conference proceedings and included in the ACL Anthology. In addition,
volumes of selected RANLP papers are traditionally published by John
Benjamins Publishers; the volume of selected RANLP-07 papers is
currently in press.
KEYNOTE SPEAKERS:
• Kevin Bretonnel Cohen (University of Colorado School of Medicine),
• Mirella Lapata (University of Edinburgh),
• Shalom Lappin (King’s College, London),
• Massimo Poesio (University of Trento and University of Essex).
CHAIR OF THE PROGRAMME COMMITTEE:
Ruslan Mitkov (University of Wolverhampton)
CHAIR OF THE ORGANISING COMMITTEE:
Galia Angelova (Bulgarian Academy of Sciences)
The PROGRAMME COMMITTEE members are distinguished experts from all over
the world. The list of PC members will be announced at the conference
website. After the review, the list of all reviewers will be announced at
the website as well.
SUBMISSION
People interested in participating should submit a paper, poster or demo
following the instructions provided at the conference website. Reviewing
will be blind, so the article text should not reveal the authors' names.
Author identification should be given on the additional page of the
conference management system.
TUTORIALS 12-13 September 2009:
Four half-day tutorials will be organised on 12-13 September 2009. The
list of tutorial lecturers includes:
• Kevin Bretonnel Cohen (University of Colorado School of Medicine),
• Constantin Orasan (University of Wolverhampton)
WORKSHOPS 17-18 September 2009:
Post-conference workshops will be organised on 17-18 September 2009. All
workshops will publish hard-copy proceedings, which will be distributed at
the event. Workshop papers might be listed in the ACL Anthology as well
(depending on the workshop organisers). The list of RANLP-09 workshops
includes:
• Semantic Roles on Human Language Technology Applications, organised by
Paloma Moreda, Rafael Muñoz and Manuel Palomar,
• Partial Parsing 2: Between Chunking and Deep Parsing, organised by Adam
Przepiorkowski, Jakub Piskorski and Sandra Kuebler,
• 1st Workshop on Definition Extraction, organised by Gerardo Eugenio
Sierra Martínez and Caroline Barriere,
• Evaluation of Resources and Tools for Central and Eastern European
languages, organised by Cristina Vertan, Stelios Piperidis and Elena
Paskaleva,
• Adaptation of Language Resources and Technology to New Domains,
organised by Nuria Bel, Erhard Hinrichs, Kiril Simov and Petya Osenova,
• Natural Language Processing methods and corpora in translation,
lexicography, and language learning, organised by Viktor Pekar, Iustina
Narcisa Ilisei, and Silvia Bernardini,
• Events in Emerging Text Types (eETTs), organised by Constantin Orasan,
Laura Hasler, and Corina Forascu,
• Biomedical Information Extraction, organised by Guergana Savova,
Vangelis Karkaletsis, and Galia Angelova.
IMPORTANT DATES:
Conference abstract submission deadline: 6 April 2009
Conference paper submission deadline: 13 April 2009
Conference paper acceptance notification: 1 June 2009
Final versions of conference papers submission: 13 July 2009
Workshop paper submission deadline (suggested): 5 June 2009
Workshop paper acceptance notification (suggested): 20 July 2009
Final versions of workshop papers submission (suggested): 24 August 2009
RANLP-09 tutorials: 12-13 September 2009 (Saturday-Sunday)
RANLP-09 conference: 14-16 September 2009 (Monday-Wednesday)
RANLP-09 workshops: 17-18 September 2009 (Thursday-Friday)
For further information about the conference, please visit the conference
site http://www.lml.bas.bg/ranlp2009.
THE TEAM BEHIND RANLP-09
Galia Angelova, Bulgarian Academy of Sciences, Bulgaria, Chair of the Org.
Committee
Kalina Bontcheva, University of Sheffield, UK
Ruslan Mitkov, University of Wolverhampton, UK, Chair of the Programme
Committee
Nicolas Nicolov, Umbria Inc, USA (Editor of volume with selected papers)
Nikolai Nikolov, INCOMA Ltd., Shoumen, Bulgaria
Kiril Simov, Bulgarian Academy of Sciences, Bulgaria (Workshop Coordinator)
e-mail: ranlp09 [AT] lml (dot) bas (dot) bg
8-21 . (2009-09-14) Student Research Workshop at RANLP (Bulgaria)
First Call for Papers
Student Research Workshop
14-15 September 2009,
associated with the International Conference RANLP-2009
/RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING/
http://lml.bas.bg/ranlp2009/stud-ranlp09
The International Conference RANLP 2009 would like to invite students at all levels (Bachelor-, Master-, and PhD-students) to present their ongoing work at the Student Research Workshop. This will provide an excellent opportunity to present and discuss work in progress or completed projects to an international research audience and to receive feedback from senior researchers. The research being presented can come from any topic area within natural language processing and computational linguistics including, but not limited to, the following topic areas:
Anaphora Resolution, Complexity, Corpus Linguistics, Discourse, Evaluation, Finite-State Technology, Formal Grammars and Languages, Information Extraction, Information Retrieval, Lexical Knowledge Acquisition, Lexicography, Machine Learning, Machine Translation, Morphology, Natural Language Generation, Natural Language in Multimodal and Multimedia Systems, Natural Language Interaction, Natural Language Processing in Computer-Assisted Language Learning, Natural Language Processing for Biomedical Texts, Ontologies, Opinion Mining, Parsing, Part-of-Speech Tagging, Phonology, Post-Editing, Pragmatics and Dialogue, Question Answering, Semantics, Speech Recognition, Statistical Methods, Sublanguages and Controlled Languages, Syntax, Temporal Processing, Term Extraction and Automatic Indexing, Text Data Mining, Text Segmentation, Text Simplification, Text Summarisation, Text-to-Speech Synthesis, Translation Technology, Tree-Adjoining Grammars, Word Sense Disambiguation.
All accepted papers will be presented at the Student Workshop sessions during the main conference days: 14-16 September 2009. The articles will be issued in special Student Session electronic proceedings.
Important Dates
Submission deadline: 25 July
Acceptance notification: 20 August
Camera-ready deadline: 1 September
Submission Requirements
All papers must be submitted in .doc or .pdf format and must be 4-8 pages long (including references). For formatting requirements, please refer to the Submission Info section of the main RANLP website at http://lml.bas.bg/ranlp2009. Each submission will be reviewed by 3 reviewers from the Programme Committee, which will feature both experienced researchers and PhD students nearing the completion of their PhD studies. The final decisions will be made based on these reviews. Submissions must specify the student's level (Bachelor-, Master-, or PhD).
Programme Committee
To be announced in the Second Call for Papers.
Organising Committee
Irina Temnikova
Ivelina Nikolova
Natalia Konstantinova
For more information, please refer to the workshop website: http://lml.bas.bg/ranlp2009/stud-ranlp09
8-22 . (2009-09-28) ELMAR 2009
51st International Symposium ELMAR-2009
28-30 September 2009 Zadar, CROATIAPaper submission deadline: March 16, 2009http://www.elmar-zadar.org/CALL FOR PAPERS TECHNICAL CO-SPONSORS IEEE Region 8 EURASIP - European Assoc. Signal, Speech and Image Processing IEEE Croatia Section IEEE Croatia Section Chapter of the Signal Processing Society IEEE Croatia Section Joint Chapter of the AP/MTT SocietiesCONFERENCE PROCEEDINGS INDEXED BY IEEE XploreINSPEC TOPICS --> Image and Video Processing --> Multimedia Communications --> Speech and Audio Processing --> Wireless Commununications --> Telecommunications --> Antennas and Propagation --> e-Learning and m-Learning --> Navigation Systems --> Ship Electronic Systems --> Power Electronics and Automation --> Naval Architecture --> Sea Ecology --> Special Sessions Proposals - A special session consist of 5-6 papers which should present a unifying theme from a diversity of viewpointsKEYNOTE TALKS* Prof. Gregor Rozinaj,Slovak University of Technology, Bratislava, SLOVAKIA: -Title to be announced soon.* Mr. David Wood, European Broadcasting Union, Geneva, SWITZERLAND: What strategy and research agenda for Europe in 'new media'?SUBMISSIONPapers accepted by two reviewers will be published in conference proceedings available at the conference and abstracted/indexed in the IEEE Xplore and INSPEC database. More info is available here: http://www.elmar-zadar.org/ IMPORTANT: Web-based (online) paper submission of papers in PDF format is required for all authors. No e-mail, fax, or postal submissions will be accepted. 
Authors should prepare their papers according to ELMAR-2009 paper sample, convert them to PDF based on IEEE requirements, and submit them using web-based submission system by March 16, 2009.SCHEDULE OF IMPORTANT DATESDeadline for submission of full papers: March 16, 2009Notification of acceptance mailed out by: May 11, 2009Submission of (final) camera-ready papers: May 21, 2009Preliminary program available online by: June 11, 2009Registration forms and payment deadline: June 18, 2009Accommodation deadline: September 10, 2009GENERAL CO-CHAIRSIve Mustac, Tankerska plovidba, Zadar, Croatia Branka Zovko-Cihlar, University of Zagreb, CroatiaPROGRAM CHAIRMislav Grgic, University of Zagreb, CroatiaINTERNATIONAL PROGRAM COMMITTEE Juraj Bartolic, Croatia David Broughton, United Kingdom Paul Dan Cristea, Romania Kresimir Delac, Croatia Zarko Cucej, Slovenia Marek Domanski, Poland Kalman Fazekas, Hungary Janusz Filipiak, Poland Renato Filjar, Croatia Borko Furht, USA Mohammed Ghanbari, United Kingdom Mislav Grgic, Croatia Sonja Grgic, Croatia Yo-Sung Ho, Korea Bernhard Hofmann-Wellenhof, Austria Ismail Khalil Ibrahim, Austria Bojan Ivancevic, Croatia Ebroul Izquierdo, United Kingdom Kristian Jambrosic, Croatia Aggelos K. Katsaggelos, USA Tomislav Kos, Croatia Murat Kunt, Switzerland Panos Liatsis, United Kingdom Rastislav Lukac, Canada Lidija Mandic, Croatia Gabor Matay, Hungary Branka Medved Rogina, Croatia Borivoj Modlic, Croatia Marta Mrak, United Kingdom Fernando Pereira, Portugal Pavol Podhradsky, Slovak Republic Ramjee Prasad, Denmark Kamisetty R. Rao, USA Gregor Rozinaj, Slovak Republic Gerald Schaefer, United Kingdom Mubarak Shah, USA Shiguang Shan, China Thomas Sikora, Germany Karolj Skala, Croatia Marian S. Stachowicz, USA Ryszard Stasinski, Poland Luis Torres, Spain Frantisek Vejrazka, Czech Republic Stamatis Voliotis, Greece Nick Ward, United Kingdom Krzysztof Wajda, Poland Branka Zovko-Cihlar, CroatiaCONTACT INFORMATION Assoc.Prof. Mislav Grgic, Ph.D. 
FER, Unska 3/XII HR-10000 Zagreb CROATIA Telephone: + 385 1 6129 851 Fax: + 385 1 6129 717 E-mail: elmar2009 (at) fer.hr For further information please visit: http://www.elmar-zadar.org/
8-23 . (2009-10-05) 2009 APSIPA ASC
APSIPA Annual Summit and Conference
October 5-7, 2009, Sapporo Convention Center, Sapporo, Japan
The 2009 APSIPA Annual Summit and Conference is the inaugural event of the Asia-Pacific Signal and Information Processing Association (APSIPA). APSIPA is a new association that promotes all aspects of research and education in signal processing, information technology, and communications. Its field of interest covers all aspects of signals and information, including processing, recognition, classification, communications, networking, computing, system design, security, implementation, and technology, with applications to scientific, engineering, and social areas.
The topics for regular sessions include, but are not limited to:
Signal Processing Track
1.1 Audio, speech, and language processing
1.2 Image, video, and multimedia signal processing
1.3 Information forensics and security
1.4 Signal processing for communications
1.5 Signal processing theory and methods
Sapporo and Conference Venue: Sapporo, the capital and largest city of Hokkaido (population 1,800,000), is widely recognized as a beautiful and well-organized city. It is fully served by a network of subway, streetcar, and bus lines connecting to its full complement of hotel accommodations. Sapporo has already played host to international meetings, sports events, and academic societies, and there are many flights to and from Tokyo, Nagoya, Osaka, and other domestic and overseas cities. With all the amenities of a major city, yet in balance with its natural surroundings, this beautiful northern capital is well equipped to host a new generation of conventions.
Important Dates and Author's Schedule:
Proposals for Special Sessions: March 1, 2009
Proposals for Forum, Panel and Tutorial Sessions: March 20, 2009
Deadline for Submission of Full Papers: March 31, 2009
Notification of Acceptance: July 1, 2009
Deadline for Submission of Camera-Ready Papers: August 1, 2009
Conference dates: October 5-7, 2009
Submission of Papers: Prospective authors are invited to submit either long papers, up to 10 pages in length, or short papers, up to four pages in length. Long papers will be for single-track oral presentation, while short papers will mostly be for poster presentation. The conference proceedings will be published, available, and maintained on the APSIPA website.
Further Information: http://www.gcoe.ist.hokudai.ac.jp/apsipa2009/
Organizing Committee:
Honorary Chair: Sadaoki Furui, Tokyo Institute of Technology, Japan
General Co-Chairs: Yoshikazu Miyanaga, Hokkaido University, Japan; K. J. Ray Liu, University of Maryland, USA
Technical Program Co-Chairs: Hitoshi Kiya, Tokyo Metropolitan Univ., Japan; Tomoaki Ohtsuki, Keio University, Japan; Mark Liao, Academia Sinica, Taiwan; Takao Onoye, Osaka University, Japan
8-24 . (2009-10-05) IEEE International Workshop on Multimedia Signal Processing - MMSP'09
Call for Papers
2009 IEEE International Workshop on Multimedia Signal Processing - MMSP'09
October 5-7, 2009
Sheraton Rio Hotel & Resort, Rio de Janeiro, Brazil
We would like to invite you to submit your work to MMSP-09, the eleventh IEEE International Workshop on Multimedia Signal Processing, and to remind you of the upcoming paper submission deadline of April 17th. This year MMSP will introduce a new type of paper award: the "top 10%" paper award. While MMSP papers are already very well regarded and highly cited, there is a growing need in the scientific community for more immediate quality recognition. The objective of the top 10% award is to acknowledge papers of outstanding quality while keeping the wider participation and information exchange allowed by higher acceptance rates. MMSP will continue to accept as many high-quality papers as possible, with acceptance rates in line with other top events of the IEEE Signal Processing Society. This new award will be granted to as many as 10% of the total paper submissions, and is open to all accepted papers, whether presented in oral or poster form.
The workshop is organized by the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society. Held in Rio de Janeiro, MMSP-09 provides excellent conditions for brainstorming on, and sharing, the latest advances in multimedia signal processing and technology in one of the most beautiful and exciting cities in the world.
Scope: Papers are solicited on the following topics (but not limited to):
Systems and applications
- Teleconferencing, telepresence, tele-immersion, immersive environments
- Virtual classrooms and distance learning
- Multimodal collaboration, online multiplayer gaming, social networking
- Telemedicine, human-human distance collaboration
- Multimodal storage and retrieval
Multimedia for communication and collaboration
- Ad hoc broadband sensor array processing
- Microphone and camera array processing
- Automatic sensor calibration, synchronization
- De-noising, enhancement, source separation
- Source localization, spatialization
Scene analysis for immersive telecommunication and human collaboration
- Audiovisual scene analysis
- Object detection, identification, and tracking
- Gesture, face, and human pose recognition
- Presence detection and activity classification
- Multimodal sensor fusion
Coding
- Distributed/centralized source coding for sensor arrays
- Scalable source coding for multiparty conferencing
- Error/loss resilient coding for telecommunications
- Channel coding, error protection and error concealment
Networking
- Voice/video over IP and wireless
- Quality monitoring and management
- Security
- Priority-based QoS control and scheduling
- Ad-hoc and real time communications
- Channel coding, packetization, synchronization, buffering
A thematic emphasis for MMSP-09 is on topics related to multimedia processing and interaction for immersive telecommunications and collaboration. Papers on these topics are encouraged.
Schedule
- Papers (full paper, 4 pages, to be received by): April 17, 2009
- Notification of acceptance by: June 13, 2009
- Camera-ready paper submission by: July 6, 2009
More information is available at http://www.mmsp09.org
8-25 . (2009-10-13) CfP ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)
Call for Papers
----------------------------
ACM Multimedia 2009 Workshop
Searching Spontaneous Conversational Speech (SSCS 2009)
October 23, 2009, Beijing, China
***Submission Deadline Extended to Monday, June 15, 2009***
----------------------------
http://ict.ewi.tudelft.nl/SSCS2009/
Multimedia content often contains spoken audio as a key component. Although speech is generally acknowledged as the quintessential carrier of semantic information, spoken audio remains underexploited by multimedia retrieval systems. In particular, the potential of speech technology to improve information access has not yet been successfully extended beyond multimedia content containing scripted speech, such as broadcast news. The SSCS 2009 workshop is dedicated to fostering search research based on speech technology as it expands into spoken content domains involving non-scripted, less-highly conventionalized, conversational speech characterized by wide variability of speaking styles and recording conditions. Such domains include podcasts, video diaries, lifelogs, meetings, call center recordings, social video networks, Web TV, conversational broadcast, lectures, discussions, debates, interviews and cultural heritage archives. This year we are setting a particular focus on the user and the use of speech techniques and technology in real-life multimedia access systems and have chosen the theme "Speech technology in the multimedia access framework."
The development of robust, scalable, affordable approaches for accessing multimedia collections with a spoken component requires the sustained collaboration of researchers in the areas of speech recognition, audio processing, multimedia analysis and information retrieval. Motivated by the aim of providing a forum where these disciplines can engage in productive interaction and exchange, Searching Spontaneous Conversational Speech (SSCS) workshops were held in conjunction with SIGIR 2007 in Amsterdam and with SIGIR 2008 in Singapore. The SSCS workshop series continues with SSCS 2009 held in conjunction with ACM Multimedia 2009 in Beijing. This year the workshop will focus on addressing the research challenges that were identified during SSCS 2008: Integration, Interface/Interaction, Scale/Scope, and Community.
We welcome contributions on a range of trans-disciplinary issues related to these research challenges, including:
***Integration***
-Information retrieval techniques based on speech analysis (e.g., applied to speech recognition lattices)
-Search effectiveness (e.g., evidence combination, query/document expansion)
-Self-improving systems (e.g., unsupervised adaptation, recursive metadata refinement)
-Exploitation of audio analysis (e.g., speaker emotional state, speaker characteristics, speaking style)
-Integration of higher-level semantics, including cross-modal concept detection
-Combination of indexing features from video, text and speech
***Interface/Interaction***
-Surrogates for representation or browsing of spoken content
-Intelligent playback: exploiting semantics in the media player
-Relevance intervals: determining the boundaries of query-related media segments
-Cross-media linking and link visualization deploying speech transcripts
***Scale/Scope***
-Large-scale speech indexing approaches (e.g., collection size, search speed)
-Dealing with collections containing multiple languages
-Affordable, light-weight solutions for small collections, i.e., for the long tail
***Community***
-Stakeholder participation in design and realization of real world applications
-Exploiting user contributions (e.g., tags, ratings, comments, corrections, usage information, community structure)
Contributions for oral presentations (8-10 pages), poster presentations (2 pages), demonstration descriptions (2 pages), and position papers for selection of panel members (2 pages) will be accepted. Further information, including submission guidelines, is available on the workshop website: http://ict.ewi.tudelft.nl/SSCS2009/
Important Dates:
Monday, June 15, 2009 (Extended Deadline) Submission Deadline
Saturday, July 10, 2009 Author Notification
Friday, July 17, 2009 Camera Ready Deadline
Friday, October 23, 2009 Workshop in Beijing
For more information: m.a.larson@tudelft.nl
SSCS 2009 Website: http://ict.ewi.tudelft.nl/SSCS2009/
ACM Multimedia 2009 Website: http://www.acmmm09.org
On behalf of the SSCS2009 Organizing Committee:
Martha Larson, Delft University of Technology, The Netherlands
Franciska de Jong, University of Twente, The Netherlands
Joachim Kohler, Fraunhofer IAIS, Germany
Roeland Ordelman, Sound & Vision and University of Twente, The Netherlands
Wessel Kraaij, TNO and Radboud University, The Netherlands
8-26 . (2009-10-18) 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
Call for Papers
2009 IEEE Workshop on Applications of Signal Processing to Audio and
Acoustics
Mohonk Mountain House
New Paltz, New York
October 18-21, 2009
The 2009 IEEE Workshop on Applications of Signal Processing to Audio and
Acoustics (WASPAA'09) will be held at the Mohonk Mountain House in New
Paltz, New York, and is sponsored by the Audio & Electroacoustics committee
of the IEEE Signal Processing Society. The objective of this workshop is to
provide an informal environment for the discussion of problems in audio and
acoustics and the signal processing techniques leading to novel solutions.
Technical sessions will be scheduled throughout the day. Afternoons will be
left free for informal meetings among workshop participants.
Papers describing original research and new concepts are solicited for
technical sessions on, but not limited to, the following topics:
* Acoustic Scenes
- Scene Analysis: Source Localization, Source Separation, Room Acoustics
- Signal Enhancement: Echo Cancellation, Dereverberation, Noise Reduction,
Restoration
- Multichannel Signal Processing for Audio Acquisition and Reproduction
- Microphone Arrays
- Eigenbeamforming
- Virtual Acoustics via Loudspeakers
* Hearing and Perception
- Auditory Perception, Spatial Hearing, Quality Assessment
- Hearing Aids
* Audio Coding
- Waveform Coding and Parameter Coding
- Spatial Audio Coding
- Internet Audio
- Musical Signal Analysis: Segmentation, Classification, Transcription
- Digital Rights
- Mobile Devices
* Music
- Signal Analysis and Synthesis Tools
- Creation of Musical Sounds: Waveforms, Instrument Models, Singing
- MEMS Technologies for Signal Pick-up
Submission of four-page paper: April 15, 2009
Notification of acceptance: June 26, 2009
Early registration until: September 1, 2009
Workshop Committee
General Co-Chair:
Jacob Benesty
Université du Québec
INRS-EMT
Montréal, Québec, Canada
General Co-Chair:
Tomas Gaensler
mh acoustics
Summit, NJ, USA
Technical Program Chair:
Yiteng (Arden) Huang
WeVoice Inc.
Bridgewater, NJ, USA
Technical Program Chair:
Jingdong Chen
Bell Labs
Alcatel-Lucent
Murray Hill, NJ, USA
jingdong@research.bell-labs.com
Finance Chair:
Michael Brandstein
Information Systems
Technology Group
MIT Lincoln Lab
Lexington, MA, USA
Publications Chair:
Eric J. Diethorn
Multimedia Technologies
Avaya Labs Research
Basking Ridge, NJ, USA
Publicity Chair:
Sofiène Affes
Université du Québec
INRS-EMT
Montréal, Québec, Canada
Local Arrangements Chair:
Heinz Teutsch
Multimedia Technologies
Avaya Labs Research
Basking Ridge, NJ, USA
Far East Liaison:
Shoji Makino
NTT Communication Science
Laboratories, Japan
8-29 . (2009-11-02) CALL FOR ICMI-MLMI 2009 WORKSHOPS New dates !!
CALL FOR ICMI-MLMI 2009 WORKSHOPS NEW DATES!!
http://icmi2009.acm.org
Boston MA, USA
Paper submission: May 22, 2009
Author notification: July 20, 2009
Camera-ready due: August 20, 2009
Conference: November 2-4, 2009
Workshops: November 5-6, 2009
The ICMI and MLMI conferences will jointly take place in the Boston
area during November 2-6, 2009. The main aim of ICMI-MLMI 2009 is to
further scientific research within the broad field of multimodal
interaction, methods and systems. The joint conference will focus on
major trends and challenges in this area, and work to identify a
roadmap for future research and commercial success. The main
conference will be followed by a number of workshops, for which we
invite proposals.
The format, style, and content of accepted workshops are under the
control of the workshop organizers. Workshops will take place on 5-6
November 2009, and may be one or two days in duration.
Workshop organizers will be expected to manage the workshop content,
specify the workshop format, be present to moderate the discussion and
panels, invite experts in the domain, and maintain a website for the
workshop.
Proposals should specify clearly the workshop's title, motivation,
impact, expected outcomes, potential invited speakers and the workshop
URL. The proposal should also name the main workshop organizer, and
co-organizers, and should provide brief bios of the organizers.
Submit workshop proposals, as pdf, by email to
workshops-icmi2009@acm.org
8-30 . (2009-11-15) CIARP 2009
8-31 . (2009-11-16) 8ème Rencontres Jeunes Chercheurs en Parole (french)
8-32 . (2009-12-04) CfP Troisièmes Journées de Phonétique Clinique Aix en Provence France (french)
JPC3
Troisièmes Journées de Phonétique Clinique (Third Clinical Phonetics Workshop)
Call for Papers
December 4-5, 2009, Aix-en-Provence, France
http://www.lpl-aix.fr/~jpc3/
This workshop follows the first and second clinical phonetics workshops, held in Paris in 2005 and Grenoble in 2007. Clinical phonetics brings together researchers, lecturer-researchers, engineers, physicians, and speech therapists: complementary professions pursuing the same objective, a better understanding of the processes of acquisition and dysfunction of speech and voice. This interdisciplinary approach aims to advance fundamental knowledge of spoken communication in healthy speakers, and to better understand, evaluate, diagnose, and treat speech and voice disorders in pathological speakers.
Papers will address phonetic studies of pathological speech and voice, in adults and in children. Workshop topics include, but are not limited to:
- Disorders of the oro-pharyngo-laryngeal system
- Disorders of the perceptual system
- Cognitive and motor disorders
- Instrumentation and resources in clinical phonetics
- Modeling of pathological speech and voice
- Evaluation and treatment of speech and voice pathologies
Selected contributions will be presented in one of two formats:
- Long talk: 20 minutes, for presenting completed work
- Short talk: 8 minutes, for presenting clinical observations, preliminary work, or emerging research questions, so as to best foster interdisciplinary exchange between phoneticians and clinicians
Submission format:
Submissions to JPC take the form of abstracts written in French, at most one A4 page, Times New Roman, 12 pt, single-spaced. Abstracts must be submitted in PDF format to: soumission.jpc3@lpl-aix.fr
Submission deadline: May 15, 2009
Notification to authors: July 1, 2009
For any further information, contact the organizers: org.jpc3@lpl-aix.fr
Registration for JPC3 (opening July 1, 2009) will be open to all, whether or not they are presenting.
8-33 . (2009-12-09) 1st EUROPE-ASIA SPOKEN DIALOGUE SYSTEMS TECHNOLOGY WORKSHOP
8-34 . (2010-05-11) Speech prosody 2010 Chicago IL USA
SPEECH PROSODY 2010
===============================================================
Every Language, Every Style: Globalizing the Science of Prosody
===============================================================
Call For Papers
===============================================================
Prosody is, as far as we know, a universal characteristic of human speech, founded on the cognitive processes of speech production and perception. Adequate modeling of prosody has been shown to improve human-computer interfaces, to aid clinical diagnosis, and to improve the quality of second language instruction, among many other applications.
Speech Prosody 2010, the fifth international conference on speech prosody, invites papers addressing any aspect of the science and technology of prosody. Speech Prosody is the only recurring international conference focused on prosody as an organizing principle for the social, psychological, linguistic, and technological aspects of spoken language. Speech Prosody 2010 seeks, in particular, to discuss the universality of prosody. To what extent can the observed scientific and technological benefits of prosodic modeling be ported to new languages, and to new styles of spoken language? Toward this end, Speech Prosody 2010 especially welcomes papers that create or adapt models of prosody to languages, dialects, sociolects, and/or communicative situations that are inadequately addressed by the current state of the art.
=======
TOPICS
=======
Speech Prosody 2010 will include keynote presentations, oral sessions, and poster sessions covering topics including:
* Prosody of under-resourced languages and dialects
* Communicative situation and speaking style
* Dynamics of prosody: structures that adapt to new situations
* Phonology and phonetics of prosody
* Rhythm and duration
* Syntax, semantics, and pragmatics
* Meta-linguistic and para-linguistic communication
* Signal processing
* Automatic speech synthesis, recognition and understanding
* Prosody of sign language
* Prosody in face-to-face interaction: audiovisual modeling and analysis
* Prosodic aspects of speech and language pathology
* Prosody in language contact and second language acquisition
* Prosody and psycholinguistics
* Prosody in computational linguistics
* Voice quality, phonation, and vocal dynamics
====================
SUBMISSION OF PAPERS
====================
Prospective authors are invited to submit full-length, four-page papers, including figures and references, at http://speechprosody2010.org. All Speech Prosody papers will be handled and reviewed electronically.
===================
VENUE
===================
The Doubletree Hotel Magnificent Mile is located two blocks from North Michigan Avenue, and three blocks from Navy Pier, at the cultural center of Chicago. The Windy City has been the center of American innovation since the mid nineteenth century, when a railway link connected Chicago to the west coast, civil engineers reversed the direction of the Chicago river, Chicago financiers invented commodity corn (maize), and the Great Chicago Fire destroyed almost every building in the city. The Magnificent Mile hosts scores of galleries and museums, and hundreds of world-class restaurants and boutiques.
===================
IMPORTANT DATES
===================
Submission of Papers (http://speechprosody2010.org): October 15, 2009
Notification of Acceptance: December 15, 2009
Conference: May 11-14, 2010
8-35 . (2010-05-17) 7th Language Resources and Evaluation Conference
The 7th edition of the Language Resources and Evaluation Conference will take place in Valletta (Malta) on May 17-23, 2010.
More information will be available soon on: http://www.lrec-conf.org/lrec2010/