Contents
- 1 . Editorial
- 2 . ISCA News
- 3 . Future ISCA Conferences and Workshops (ITRW)
- 4 . Workshops and conferences supported (but not organized) by ISCA
- 4-1 . (2009-11-05) Workshop on Child, Computer and Interaction
- 4-2 . (2009-12-13) ASRU 2009
- 4-3 . (2009-12-14) 6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications MAVEBA 2009
- 4-4 . (2010-05-19) CfP LREC 2010 - 7th Conference on Language Resources and Evaluation
- 4-5 . (2010-05-03) Workshop on Spoken Languages Technologies for Under-Resourced Languages (SLTU'10)
- 5 . Books, databases and software
- 5-1 . Books
- 5-1-1 . Advances in Digital Speech Transmission
- 5-1-2 . Sprachverarbeitung -- Grundlagen und Methoden der Sprachsynthese und Spracherkennung
- 5-1-3 . Digital Speech Transmission
- 5-1-4 . Distant Speech Recognition,
- 5-1-5 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
- 5-1-6 . Some aspects of Speech and the Brain.
- 5-1-7 . Spoken Language Processing,
- 5-2 . Database providers
- 5-2-1 . LDC News
- 5-2-2 . ELDA/ELRA press release
- 5-2-3 . MEDAR project/ELDA
- 6 . Jobs openings
- 6-1 . (2009-04-02) The Johns Hopkins University: Post-docs, research staff, professors on sabbaticals
- 6-2 . (2009-04-07) PhD Position in The Auckland University - New Zealand
- 6-3 . (2009-04-23) R&D position in SPEECH RECOGNITION, PROCESSING AND SYNTHESIS IRCAM Paris
- 6-4 . (2009-05-04) Several Ph.D. positions and Ph.D. or Postdoc scholarships, Universität Bielefeld
- 6-5 . (2009-05-07) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (FRANCE)
- 6-6 . (2009-05-07) Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld
- 6-7 . (2009-05-08) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
- 6-8 . (2009-05-11) Thèse Cifre indexation de données multimédia Institut Eurecom
- 6-9 . (2009-05-11) Senior Research Fellowship in Speech Perception and Language Development, MARCS Auditory Laboratories
- 6-10 . (2009-06-02) PhD thesis proposal 2009: Speech scene analysis, Grenoble, France
- 6-11 . (2009-06-10) PhD in ASR in Le Mans France
- 6-12 . (2009-06-17) Two post-docs in the Carnegie Mellon University-Portugal program (Lisbon, Portugal)
- 6-13 . (2009-06-19) POSTDOC POSITION in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES
- 6-14 . (2009-06-22) PhD studentship in speech and machine learning ESPCI ParisTech
- 6-15 . (2009-06-30) Postdoctoral Fellowships in machine learning/statistics/machine vision at Monash University, Australia
- 6-16 . (2009-06-30) PhD studentship at LIMSI France
- 6-17 . (2009-07-01) PhD thesis: Vocal Prosthesis Based on Machine Learning (France)
- 6-18 . (2009-07-06) PhD in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES (Grenoble France)
- 6-19 . (2009-07-08) Position at Deutsche Telekom R&D
- 6-20 . (2009-07-15) PhD at LIMSI Paris
- 6-21 . (2009-07-17) 2 PhD in Computational linguistics in Radboud University Nijmegen NL
- 6-22 . (2009-07-24) Acoustic signal detection engineer Oregon State University
- 6-23 . (2009-08-06) Post graduate Research positions at Marcs, Australia
- 6-24 . (2009-08-06) PhD position at the Queensland University of Technology, Brisbane, Australia
- 6-25 . (2009-08-26) PhD Positions at the University of Bielefeld, Germany
- 6-26 . (2009-09-03) Post-doc at the Laboratoire d'Informatique de Grenoble, France (in French)
- 7 . Journals
- 7-1 . Special issue of Speech Comm: Non-native speech perception in adverse conditions: imperfect knowledge, imperfect signal
- 7-2 . IEEE Special Issue on Speech Processing for Natural Interaction with Intelligent Environments
- 7-3 . Special issue "Speech as a Human Biometric: I know who you are from your voice" Int. Jnl Biometrics
- 7-4 . Special issue on Voice Transformation, IEEE Trans. ASLP
- 7-5 . Special Issue on Statistical Learning Methods for Speech and Language Processing
- 7-6 . SPECIAL ISSUE OF SPEECH COMMUNICATION: Perceptual and Statistical Audition
- 8 . Future Speech Science and Technology Events
- 8-1 . (2009-09) Emotion challenge INTERSPEECH 2009
- 8-2 . (2009-09-06) Special session at Interspeech 2009: adaptivity in dialog systems
- 8-3 . (2009-09-07) Information Retrieval and Information Extraction for Less Resourced Languages
- 8-4 . (2009-09-09) CfP IDP 09 Discourse-Prosody Interface
- 8-5 . (2009-09-09) Conference IDP (Interface Discours Prosodie), Paris, France
- 8-6 . (2009-09-11) SIGDIAL 2009 CONFERENCE
- 8-7 . (2009-09-11) Int. Workshop on spoken language technology for development: from promise to practice.
- 8-8 . (2009-09-11) ACORNS Workshop Brighton UK
- 8-9 . (2009-09-13) Young Researchers' Roundtable on Spoken Dialogue Systems 2009, London
- 8-10 . (2009-09-14) 7th International Conference on Recent Advances in Natural Language Processing
- 8-11 . (2009-09-14) Student Research Workshop at RANLP (Bulgaria)
- 8-12 . (2009-09-28) ELMAR 2009
- 8-13 . (2009-10-05) 2009 APSIPA ASC
- 8-14 . (2009-10-05) IEEE International Workshop on Multimedia Signal Processing - MMSP'09
- 8-15 . (2009-10-13) CfP ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)
- 8-16 . (2009-10-18) 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
- 8-17 . (2009-10-23) CfP Searching Spontaneous Conversational Speech (SSCS 2009) ACM Multimedia Workshop
- 8-18 . (2009-10-23) ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)
- 8-19 . (2009-11-01) NLP Approaches for Unmet Information Needs in Health Care
- 8-20 . (2009-11-02) Eleventh International Conference on Multimodal Interfaces and Workshop on Machine Learning for Multi-modal Interaction
- 8-21 . (2009-11-05) LRL WORKSHOP: Getting Less-Resourced Languages on-Board! Poznan, Poland
- 8-22 . (2009-11-06) 4th LANGUAGE AND TECHNOLOGY CONFERENCE: Human Language Technologies as a challenge, Poznan, Poland
- 8-23 . (2009-11-15) CIARP 2009
- 8-24 . (2009-11-15) Entertainment=Emotion (International workshop) Spain
- 8-25 . (2009-11-16) 8ème Rencontres Jeunes Chercheurs en Parole (french)
- 8-26 . (2009-11-20) Seminar FROM PERCEPTION TO COMPREHENSION OF A FOREIGN LANGUAGE (Strasbourg, France)
- 8-27 . (2009-12-04) Troisièmes Journées de Phonétique Clinique Aix en Provence France (french)
- 8-28 . (2009-12-09) 1st EUROPE-ASIA SPOKEN DIALOGUE SYSTEMS TECHNOLOGY WORKSHOP
- 8-29 . (2010-03-15) CfP IEEE ICASSP 2010 International Conference on Acoustics, Speech, and Signal Processing, March 15-19, 2010, Sheraton Dallas Hotel, Dallas, Texas, U.S.A.
- 8-30 . (2010-04-13) CfP Workshop: Positional phenomena in phonology and phonetics, Wroclaw
- 8-31 . (2010-05-10) Cfp Workshop on Prosodic Prominence: Perceptual and Automatic Identification
- 8-32 . (2010-05-11) CfP Speech prosody 2010 Chicago IL USA
- 8-33 . (2010-05-24)CfP 4th INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS (LATA 2010)
- 8-34 . (2010-05-25) CfP JEP 2010
1 . Editorial
Dear members,
This issue was prepared very quickly in order to be available in time for Interspeech 2009.
We had no time to include a presentation of one of our SIGs. Sorry, our presentations will resume in October.
Let us meet at Interspeech, and do not forget to participate in the ISCA events: your presence at the General Assembly is highly appreciated. Please come with new ideas for improving our actions in favor of the speech community... but don't ask for the moon; assume that you yourself will have to implement it!
Have a safe journey to Brighton.
Prof. em. Chris Wellekens
Institut Eurecom
Sophia Antipolis
France
public@isca-speech.org
2 . ISCA News
2-1 . (2009-09-06) The Loebner Contest at Interspeech 2009
INTERSPEECH 2009 - CALL FOR PARTICIPATION IN THE LOEBNER CONTEST 2009 "How can we tell if a machine can think?" This question is the inspiration for the Loebner contest, hosted by Interspeech, which will be held in the Brighton Centre from 10:45am on Sunday 6 September. It conducts a Turing test to determine whether a computer program can successfully give the illusion of being human. We are seeking volunteers to pit themselves against the entries -- and prove to the judges just how human they are! The test involves using a computer interface to chat (type messages) for 5 minutes with a judge, who does the same with the program, not knowing which is which. The judge has to determine which is the true human. The test involves multiple rounds and the whole competition is expected to last approximately 2 hours. Further details can be found on the conference website (http://www.interspeech2009.org/conference/exhibition.php). If you are interested in taking part or have any questions, please contact the organiser directly, Philip Jackson (p.jackson@surrey.ac.uk). Everyone is welcome to attend. We'll be in the Rainbow Room!
2-2 . Message to students
Dear students,
The International Speech Communication Association ISCA is now opening an online system to build a database of résumés of researchers/students working in the various fields of speech communication.
The goal of this service is to build a centralized place where many interested employers/corporations can access and search for potential candidates.
Please be advised that the posting service will be updated at 4-month intervals. The next update will be in mid-October 2009.
We encourage all of you to upload an updated version of your résumé to: http://www.isca-speech.org/resumes/ and wish you good luck with a fruitful career.
Professor Helen Meng
3 . Future ISCA Conferences and Workshops (ITRW)
3-1 . (2009-09-06) INTERSPEECH 2009 Brighton UK
3-2 . (2009-09-06) Satellite workshops Interspeech 2009
Interspeech 2009 satellite workshops
---------------------------------------------------
http://www.interspeech2009.org/conference/workshops.php
ACORNS Workshop on Computational Models of Language Evolution, Acquisition and Processing
The workshop brings together up to 50 scientists to discuss future research in language acquisition, processing and evolution. Deb Roy, Friedemann Pulvermüller, Rochelle Newman and Lou Boves will provide an overview of the state of the art, a number of discussants from different disciplines will widen the perspective, and all participants can contribute to a roadmap.
AVSP 2009 - Audio-Visual Speech Processing
The International Conference on Auditory-Visual Speech Processing (AVSP) attracts an interdisciplinary audience of psychologists, engineers, scientists and linguists, and considers a range of topics related to speech perception, production, recognition and synthesis. Recently the scope of AVSP has broadened to also include discussion of more general issues related to audiovisual communication, for example the interplay between speech and the expression of emotion, and the relationship between speech and manual gestures.
Blizzard Challenge Workshop
In order to better understand and compare research techniques in building corpus-based speech synthesizers on the same data, the Blizzard Challenge was devised. The basic challenge is to take the released speech database, build a synthetic voice from the data and synthesize a prescribed set of test sentences which are evaluated through listening tests. The results are presented at this workshop. Attendance at the 2009 workshop for the 4th Blizzard Challenge is open to all, not just participants in the challenge. Registration closes on 14th August 2009.
SIGDIAL - Special Interest Group on Dialogue
The SIGDIAL venue provides a regular forum for the presentation of cutting edge research in discourse and dialogue to both academic and industry researchers. The conference is sponsored by the SIGDIAL organization, which serves as the Special Interest Group in discourse and dialogue for both the Association for Computational Linguistics and the International Speech Communication Association.
SLaTE Workshop on Speech and Language Technology in Education
SLaTE 2009 follows SLaTE 2007, held in Farmington, Pennsylvania, USA, and the STiLL meeting organized by KTH in Marholmen, Sweden, in 1998. The workshop will address all topics which concern speech and language technology for education. Papers will discuss theories, applications, evaluation, limitations, persistent difficulties, general research tools and techniques. Papers that critically evaluate approaches or processing strategies will be especially welcome, as will prototype demonstrations of real-world applications.
Young Researchers' Roundtable on Spoken Dialogue Systems
The Young Researchers' Roundtable on Spoken Dialog Systems is an annual workshop designed for students, post docs, and junior researchers working in research related to spoken dialogue systems in both academia and industry. The roundtable provides an open forum where participants can discuss their research interests, current work and future plans. The workshop is meant to provide an interdisciplinary forum for creative thinking about current issues in spoken dialogue systems research, and help create a stronger international network of young researchers working in the field.
3-3 . (2010-09-26) INTERSPEECH 2010 Chiba Japan
Chiba, Japan
Conference Website
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."
3-4 . (2011-08-27) INTERSPEECH 2011 Florence Italy
Interspeech 2011
Palazzo dei Congressi, Italy, August 27-31, 2011.
Organizing committee
Piero Cosi (General Chair),
Renato De Mori (General Co-Chair),
Claudia Manfredi (Local Chair),
Roberto Pieraccini (Technical Program Chair),
Maurizio Omologo (Tutorials),
Giuseppe Riccardi (Plenary Sessions).
More information www.interspeech2011.org
4 . Workshops and conferences supported (but not organized) by ISCA
4-1 . (2009-11-05) Workshop on Child, Computer and Interaction
Call for Papers
4-2 . (2009-12-13) ASRU 2009
Automatic Speech Recognition and Understanding Workshop
Merano, Italy December 13-17, 2009
http://www.asru2009.org/
The eleventh biennial IEEE workshop on Automatic Speech Recognition
and Understanding (ASRU) will be held on December 13-17, 2009.
The ASRU workshops have a tradition of bringing together
researchers from academia and industry in an intimate and
collegial setting to discuss problems of common interest in
automatic speech recognition and understanding.
Workshop topics
• automatic speech recognition and understanding
• human speech recognition and understanding
• speech to text systems
• spoken dialog systems
• multilingual language processing
• robustness in ASR
• spoken document retrieval
• speech-to-speech translation
• spontaneous speech processing
• speech summarization
• new applications of ASR.
The workshop program will consist of invited lectures, oral
and poster presentations, and panel discussions. Prospective
authors are invited to submit full-length, 4-6 page papers,
including figures and references, to the ASRU 2009 website
http://www.asru2009.org/.
All papers will be handled and reviewed electronically.
The website will provide you with further details. Please note
that the submission dates for papers are strict deadlines.
IMPORTANT DATES
Paper submission deadline July 15, 2009
Paper notification of acceptance September 3, 2009
Demo session proposal deadline September 24, 2009
Early registration deadline October 7, 2009
Workshop December 13-17, 2009
Please note that the number of attendees will be limited and
priority will be given to paper presenters. Registration will
be handled via the ASRU 2009 website,
http://www.asru2009.org/, where more information on the workshop
will be available.
General Chairs
Giuseppe Riccardi, U. Trento, Italy
Renato De Mori, U. Avignon, France
Technical Chairs
Jeff Bilmes, U. Washington, USA
Pascale Fung, HKUST, Hong Kong China
Shri Narayanan, USC, USA
Tanja Schultz, U. Karlsruhe, Germany
Panel Chairs
Alex Acero, Microsoft, USA
Mazin Gilbert, AT&T, USA
Demo Chairs
Alan Black, CMU, USA
Piero Cosi, CNR, Italy
Publicity Chairs
Dilek Hakkani-Tür, ICSI, USA
Isabel Trancoso, INESC-ID/IST, Portugal
Publication Chair
Giuseppe di Fabbrizio, AT&T, USA
Local Chair
Maurizio Omologo, FBK-irst, Italy
4-3 . (2009-12-14) 6th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications MAVEBA 2009
4-4 . (2010-05-19) CfP LREC 2010 - 7th Conference on Language Resources and Evaluation
4-5 . (2010-05-03) Workshop on Spoken Languages Technologies for Under-Resourced Languages (SLTU'10)
5 . Books, databases and software
5-1 . Books
5-1-1 . Advances in Digital Speech Transmission
5-1-2 . Sprachverarbeitung -- Grundlagen und Methoden der Sprachsynthese und Spracherkennung
5-1-3 . Digital Speech Transmission
5-1-4 . Distant Speech Recognition,
5-1-5 . Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods
5-1-6 . Some aspects of Speech and the Brain.
5-1-7 . Spoken Language Processing,
Spoken Language Processing, edited by Joseph Mariani (IMMI and
LIMSI-CNRS, France). ISBN: 9781848210318. January 2009. Hardback 504 pp
Publisher ISTE-Wiley
Speech processing addresses various scientific and technological areas. It includes speech analysis and variable rate coding, in order to store or transmit speech. It also covers speech synthesis (especially from text), speech recognition (including speaker and language identification), and spoken language understanding. This book covers the following topics: how to realize speech production and perception systems, and how to synthesize and understand speech using state-of-the-art methods in signal processing, pattern recognition, stochastic modeling, computational linguistics and human factor studies.
More on its content can be found at
http://www.iste.co.uk/index.php?f=a&ACTION=View&id=150
5-2 . Database providers
5-2-1 . LDC News
- LDC Offices to close for Labor Day Holiday -
LDC is pleased to announce its participation at Interspeech 2009 in Brighton, UK, where it will present:
- XTrans: A Speech Annotation and Transcription Tool
- The Broadcast Narrow Band Speech Corpus: A New Resource Type for Large Scale Language Recognition
Visit our display in the exhibition hall at the Brighton Centre on Kings’ Road for a special giveaway or just to say hello.
More information on Interspeech 2009 is available at http://www.interspeech2009.org/.
LDC at ALA 2009
LDC is happy to report that our exhibit at ALA 2009, the American Library Association's annual conference, went off without a hitch!
New Publications
(1) The Arabic English Newswire Translation Collection consists of approximately 550,000 words of Arabic newswire text and its English translation, drawn from Agence France Presse (France), An Nahar (Lebanon) and Assabah.
The number of stories and their epochs for each source are as follows:
AFP: 734 stories (July 2000 - November 2000)
An Nahar: 600 stories (January 2002 - December 2002)
Assabah: 397 stories (September 2004 - November 2004)
Total: 1,731 stories
Word count of Arabic tokens by source is as follows:
AFP: 102,564
An Nahar: 299,681
Assabah: 149,259
Total: 551,504
The original source files used different encodings for the Arabic characters, including UTF-8 and ASMO. SGML tags were used for marking sentence and paragraph boundaries and for annotating other information about each story. All Arabic source data was converted to UTF-8, and most SGML tags were removed or replaced by "plain text" markers.
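As an illustration of the kind of normalization described above (decoding legacy-encoded source files and replacing SGML markup with plain-text markers), here is a minimal Python sketch; the function name, the tag names and the replacement markers are illustrative assumptions and do not reproduce the actual LDC conversion scripts:

    import re

    def normalize_story(raw_bytes, encoding="utf-8"):
        """Decode one source file and replace simple SGML tags with plain-text markers.

        The default encoding and the tag/marker mapping are illustrative only;
        legacy files (e.g. ASMO-encoded Arabic) would need the appropriate codec.
        """
        text = raw_bytes.decode(encoding, errors="replace")
        # Mark paragraph boundaries, then drop any remaining SGML-style tags.
        text = re.sub(r"</?P>", "\n\n", text, flags=re.IGNORECASE)
        text = re.sub(r"<[^>]+>", "", text)
        return text.strip()

    # Example: a story with minimal SGML markup, stored as UTF-8 bytes.
    sample = "<DOC><P>First sentence.</P><P>Second sentence.</P></DOC>".encode("utf-8")
    print(normalize_story(sample))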
Arabic English Newswire Translation Collection
2009 Subscription Members will automatically receive two copies of this corpus on disc. 2009 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1500.
(2) BioProp Version 1.0 was developed by researchers at Academia Sinica, Taiwan, and is built on the GENIA Treebank (GTB).
The purpose of the GENIA Project is to develop tools and resources for automatic information extraction of biomedical information. One result of that work is the GENIA corpus, a collection of 2000 biomedical journal abstracts containing semantic class annotation for biomedical terms, part-of-speech (POS) tags and coreferences. The GTB is a subset of that corpus. BioProp Version 1.0 adds a proposition bank to the GTB.
Proposition Bank (PropBank) contains annotations of predicate argument structures and semantic roles in a treebank schema in the newswire domain. To construct BioProp Version 1.0, a semantic role labeling (SRL) system trained on PropBank was used to annotate the GTB. SRL, also called shallow semantic parsing, is a popular semantic analysis technique. In SRL, sentences are represented by one or more predicate-argument structures (PAS), also known as propositions. Each PAS is composed of a predicate (e.g., a verb) and several arguments (e.g., noun phrases) that have different semantic roles, including main arguments such as agent and patient, and adjunct arguments, such as time, manner and location. The term "argument" refers to a syntactic constituent of the sentence related to the predicate, and the term "semantic role" refers to the semantic relationship between a sentence's predicate and argument.
BioProp Version 1.0 consists of approximately 150,000 words. Each line in the corpus provides a PAS annotation that can be mapped to a sentence in the GTB.
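To make the notion of a predicate-argument structure more concrete, the sketch below shows, in Python, one possible in-memory representation of a single proposition for an invented sentence; the sentence, the field names and the way the roles are stored are illustrative assumptions (the role labels ARG0, ARG1 and ARGM-LOC follow common PropBank conventions) and do not reproduce the actual BioProp release format:

    # Hypothetical proposition for the invented sentence: "IL-4 activates Stat6 in B cells."
    # Field names and the example are illustrative, not the BioProp file format.
    proposition = {
        "predicate": "activates",
        "arguments": [
            {"role": "ARG0", "text": "IL-4"},            # main argument: agent
            {"role": "ARG1", "text": "Stat6"},           # main argument: patient/theme
            {"role": "ARGM-LOC", "text": "in B cells"},  # adjunct argument: location
        ],
    }

    for argument in proposition["arguments"]:
        print(f'{argument["role"]}: {argument["text"]} (predicate: {proposition["predicate"]})')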
BioProp Version 1.0 is distributed via web download.
2009 Subscription Members will automatically receive two copies of this corpus on disc, provided that they have submitted a signed copy of the User License Agreement for BioProp Version 1.0 (LDC2009T04). 2009 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$300.
LDC Offices to close for Labor Day Holiday, September 7, 2009
5-2-2 . ELDA/ELRA press release
Press Release - Immediate
Paris, France, September 3rd, 2009
Distribution Agreement signed for BioLexicon
ELRA together with the European Bioinformatics Institute (EBI, Hinxton, UK), Istituto di Linguistica Computazionale-Consiglio Nazionale Ricerche (ILC-CNR, Pisa, Italy), and the National Centre for Text Mining (NaCTeM, University of Manchester, UK) has signed a Language Resources distribution agreement for a large-scale English language terminological resource in the biomedical domain: BioLexicon.
Biological terminology is a frequent cause of analysis errors when processing literature written in the biology domain, due largely to the high degree of variation in term forms, to the frequent mis-matches between labels of controlled vocabularies and ontologies on the one hand and the forms actually occurring in text on the other, and to the lack of detailed formal information on the linguistic behaviour of domain terms. For example, "retro-regulate" is a terminological verb often used in molecular biology but it is not included in conventional dictionaries. BioLexicon is a linguistic resource for the biology domain, tailored to cope with these problems. It contains information on:
- terminological nouns, including nominalised verbs and proper names (e.g., gene names)
- terminological adjectives
- terminological adverbs
- terminological verbs
- general English words frequently used in the biology domain
Existing information on terms was integrated, augmented, complemented and linked, through processing of massive amounts of biomedical text, to yield inter alia over 2.2M entries, and information on over 1.8M variants and on over 2M synonymy relations. Moreover, extensive information is provided on how verbs and nominalised verbs in the domain behave at both syntactic and semantic levels, supporting thus applications aiming at discovery of relations and events involving biological entities in text.
This comprehensive coverage of biological terms makes BioLexicon a unique linguistic resource within the domain. It is primarily intended to support text mining and information retrieval in the biomedical domain; however, its standards-based structure and rich content make it a valuable resource for many other kinds of application.
On behalf of ELRA, ELDA will act as the distribution agency, by incorporating the BioLexicon in the ELRA Language Resources catalogue.
With these resources, ELRA intends to extend its current catalogue by offering specialized resources and thus provide better language coverage.
For more information on BioLexicon (catalogue reference: ELRA-S0373): http://catalog.elra.info/product_info.php?products_id=1113
For more information on the ELRA catalogue, please contact:
Valérie Mapelli, mapelli@elda.org
For more information on ELRA & ELDA, please contact:
Khalid Choukri, choukri@elda.org
Hélène Mazo, mazo@elda.org
ELDA
55-57, rue Brillat Savarin
75013 Paris (France)
Tel.: +33 1 43 13 33 33
Fax: +33 1 43 13 33 30
5-2-3 . MEDAR project/ELDA
The goal of the MEDAR project, supported by the European Commission ICT programme, is to establish a network of partner centres of best practice in Arabic dedicated to promoting Arabic HLT (Human Language Technologies).
Within this framework, we are working on a global directory of players, experts, projects and language resources related to Arabic Human Language Technology. If you have not already answered the 1st survey and would like to be part of the community, please take a few minutes to complete the survey: http://survey.elda.org/index.php?sid=15471&lang=en.
The collected information will be made available through a Knowledge Base.
About MEDAR: http://www.medar.info
6 . Jobs openings
We invite all laboratories and industrial companies which have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (Also have a look at http://www.isca-speech.org/jobs.html as well as the job section of http://www.elsnet.org/.)
The ads will be automatically removed from ISCApad after 6 months. Informing the ISCApad editor when a position is filled will avoid irrelevant mail between applicants and advertisers.
6-1 . (2009-04-02) The Johns Hopkins University: Post-docs, research staff, professors on sabbaticals
6-2 . (2009-04-07) PhD Position in The Auckland University - New Zealand
PhD Position in The Auckland University - New Zealand
Speech recognition for Healthcare Robotics
Description: This project is the speech recognition component of a larger project for a speech-enabled command module with verbal feedback software to facilitate interaction between aged people and robots, including speech generation and empathetic speech expression by the robot, and speech recognition by the robot. For more details please refer to: https://wiki.auckland.ac.nz/display/csihealthbots/Speech+recognition+PhD
6-3 . (2009-04-23) R&D position in SPEECH RECOGNITION, PROCESSING AND SYNTHESIS IRCAM Paris
RESEARCH AND DEVELOPMENT POSITION IN SPEECH RECOGNITION, PROCESSING AND SYNTHESIS =========================================================================
The position is available immediately in the Speech group of the Analysis/Synthesis team at Ircam.
The Analysis/Synthesis team undertakes research and development
centered on new and advanced algorithms for analysis, synthesis and
transformation of audio signals, and, in particular, speech.
JOB DESCRIPTION:
A full-time position is open for research and development of advanced statistics
and signal processing algorithms in the field of speech recognition,
transformation and synthesis.
http://www.ircam.fr/anasyn.html (projects Rhapsodie, Respoken,
Affective Avatars, Vivos, among others)
The applications in view are, for example,
- Transformation of the identity, type and nature of a voice
- Text-to-Speech and expressive Speech Synthesis
- Synthesis from actor and character recordings.
The principal task is the design and the development of new algorithms
for some of the subjects above and in collaboration with the other
members of the Speech group. The research environment is Linux, Matlab
and various scripting languages like Perl. The development environment
is C/C++, for Windows in particular.
REQUIRED EXPERIENCE AND COMPETENCE:
O Excellent experience of research in statistics, speech and signal processing
O Experience in speech recognition, automatic segmentation (e.g. HTK)
O Experience of C++ development
O Good knowledge of UNIX and Windows environments
O High productivity, methodical work, and excellent programming style.
AVAILABILITY:
The position is available in the Analysis/Synthesis team of the Research
and Development department of Ircam, to start as soon as possible.
DURATION:
The initial contract is for 1 year, and could be prolonged.
EEC WORKING PAPERS:
In order to be able to begin immediately, the candidate SHALL HAVE valid EEC working papers.
SALARY:
According to qualifications and experience.
TO APPLY:
Please send your CV describing in a very detailed way the level of knowledge,
expertise and experience in the fields mentioned above (and any other
relevant information, recommendations in particular) preferably by email to:
Xavier.Rodet@ircam.fr (Xavier Rodet, Head of the Analysis/Synthesis team)
Or by fax: (33 1) 44 78 15 40, attention of Xavier Rodet
Or by post to: Xavier Rodet, IRCAM, 1 Place Stravinsky, 75004 Paris, France
6-4 . (2009-05-04) Several Ph.D. positions and Ph.D. or Postdoc scholarships, Universität Bielefeld
Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld
- speech synthesis and/or recognition
- discourse prosody
- laboratory phonology
- speech and language rhythm research
- multimodal speech (technology)
6-5 . (2009-05-07) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (FRANCE)
=============================================================================
PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
=============================================================================
The PORT-MEDIA (ANR CONTINT 2008-2011) is a cooperative project
sponsored by the French National Research Agency, between the University
of Avignon, the University of Grenoble, the University of Le Mans, CNRS
at Nancy and ELRA (European Language Resources Association). PORT-MEDIA
will address the multi-domain and multi-lingual robustness and
portability of spoken language understanding systems. More specifically,
the overall objectives of the project can be summarized as:
- robustness: integration/coupling of the automatic speech recognition
component in the spoken language understanding process.
- portability across domains and languages: evaluation of the genericity
and adaptability of the approaches implemented in the
understanding systems, and development of new techniques inspired by
machine translation approaches.
- representation: evaluation of new rich structures for high-level
semantic knowledge representation.
The PhD thesis will focus on the multilingual portability of speech
understanding systems. For example, the candidate will investigate
techniques to quickly adapt an understanding system from one language to
another and to create low-cost resources with (semi-)automatic methods,
for instance by using automatic alignment techniques and lightly
supervised translations. The main contribution will be to fill the gap
between the techniques currently used in the statistical machine
translation and spoken language understanding fields.
The thesis will be co-supervised by Fabrice Lefèvre, Assistant Professor
at LIA (University of Avignon) and Laurent Besacier, Assistant Professor
at LIG (University of Grenoble). The candidate will spend 18 months at
LIG then 18 months at LIA.
The salary of a PhD position is roughly 1,300€ net per month. Applicants
should hold a strong university degree entitling them to start a
doctorate (Masters/diploma or equivalent) in a relevant discipline
(Computer Science, Human Language Technology, Machine Learning, etc).
The applicants should be fluent in English. Competence in French is
optional, though applicants will be encouraged to acquire this skill
during training. All applicants should have very good programming skills.
For further information, please contact Fabrice Lefèvre (Fabrice.Lefevre
at univ-avignon.fr) AND Laurent Besacier (Laurent.Besacier at imag.fr).
====================================================================================
Sujet de thèse en Traduction Automatique et Compréhension de la Parole
(début 09/09)
====================================================================================
Le projet PORT-MEDIA (ANR CONTINT 2008-2011) concerne la robustesse et
la portabilité multidomaine et multilingue des systèmes de compréhension
de l'oral. Les partenaires sont le LIG, le LIA, le LORIA, le LIUM et
ELRA (European Language Ressources Association). Plus précisément, les
trois objectifs principaux du projet concernent :
-la robustesse et l'intégration/couplage du composant de reconnaissance
automatique de la parole dans le processus de compréhension.
-la portabilité vers un nouveau domaine ou langage : évaluation des
niveaux de généricité et d'adaptabilité des approches implémentées dans
les systèmes de compréhension.
-l’utilisation de représentations sémantiques de haut niveau pour
l’interaction langagière.
Ce sujet de thèse concerne essentiellement la portabilité multilingue
des différents composants d’un système de compréhension automatique ;
l’idée étant d’utiliser, par exemple, des techniques d’alignement
automatique et de traduction pour adapter rapidement un système de
compréhension d’une langue vers une autre, en créant des ressources à
faible coût de façon automatique ou semi-automatique. L'idée forte est
de rapprocher les techniques de traduction automatique et de
compréhension de la parole.
Cette thèse est un co-encadrement entre deux laboratoires (Fabrice
Lefevre, LIA & Laurent Besacier, LIG). Les 18 premiers mois auront lieu
au LIG, les 18 suivants au LIA.
Le salaire pour un etudiant en thèse est d'environ 1300€ net par mois.
Nous recherchons des étudiants ayant un Master (ou équivalent) mention
Recherche dans le domaine de l'Informatique, et des compétences dans les
domaines suivants : traitement des langues écrites et/ou parlées,
apprentissage automatique...
Pour de plus amples informations ou candidater, merci de contacter
Fabrice Lefèvre (Fabrice.Lefevre at univ-avignon.fr) ET Laurent Besacier
(Laurent.Besacier at imag.fr).
--------------------------
6-6 . (2009-05-07) Several Ph.D. Positions and Ph.D. or Postdoc Scholarships, Universität Bielefeld
- speech synthesis and/or recognition
- discourse prosody
- laboratory phonology
- speech and language rhythm research
- multimodal speech (technology)
6-7 . (2009-05-08) PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
PhD POSITION in MACHINE TRANSLATION AND SPEECH UNDERSTANDING (starting 09/09)
=============================================================================
The PORT-MEDIA (ANR CONTINT 2008-2011) is a cooperative project sponsored by the French National Research Agency, between the University of Avignon, the University of Grenoble, the University of Le Mans, CNRS at Nancy and ELRA (European Language Resources Association). PORT-MEDIA will address the multi-domain and multi-lingual robustness and portability of spoken language understanding systems. More specifically, the overall objectives of the project can be summarized as:
- robustness: integration/coupling of the automatic speech recognition component in the spoken language understanding process.
- portability across domains and languages: evaluation of the genericity and adaptability of the approaches implemented in the
understanding systems, and development of new techniques inspired by machine translation approaches.
- representation: evaluation of new rich structures for high-level semantic knowledge representation.
The PhD thesis will focus on the multilingual portability of speech understanding systems. For example, the candidate will investigate techniques to quickly adapt an understanding system from one language to another and to create low-cost resources with (semi-)automatic methods, for instance by using automatic alignment techniques and lightly supervised translations. The main contribution will be to fill the gap between the techniques currently used in the statistical machine translation and spoken language understanding fields.
The thesis will be co-supervised by Fabrice Lefèvre, Assistant Professor at LIA (University of Avignon) and Laurent Besacier, Assistant Professor at LIG (University of Grenoble). The candidate will spend 18 months at LIG then 18 months at LIA.
The salary of a PhD position is roughly 1,300€ net per month. Applicants should hold a strong university degree entitling them to start a doctorate (Masters/diploma or equivalent) in a relevant discipline (Computer Science, Human Language Technology, Machine Learning, etc). The applicants should be fluent in English. Competence in French is optional, though applicants will be encouraged to acquire this skill during training. All applicants should have very good programming skills.
For further information, please contact Fabrice Lefèvre (Fabrice.Lefevre at univ-avignon.fr) AND Laurent Besacier (Laurent.Besacier at imag.fr).
6-8 . (2009-05-11) Thèse Cifre indexation de données multimédia Institut Eurecom
Thèse CIFRE: indexation de données multimédia (CIFRE PhD thesis: multimedia data indexing), Institut Eurecom
Application deadline: 01/11/2009
Contact: merialdo@eurecom.fr - http://bmgroup.eurecom.fr/
The Multimedia Communications Department of EURECOM, in partnership with the travel service provider company AMADEUS, invites applications for a PhD position on multimedia indexing. The goal of the thesis is to study new techniques to organize large quantities of multimedia information, specifically images and videos, for improving services to travelers. This includes managing images and videos from providers as well as from users about places, locations, events, etc. The approach will be based on the most recent techniques in multimedia indexing, and will benefit from the strong research experience of EURECOM in this domain, combined with the industrial experience of AMADEUS.
We are looking for very good and motivated students, with a strong knowledge of image and video processing and of statistical and probabilistic modeling for the theoretical part, and good C/C++ programming ability for the experimental part. English is required. The successful candidate will be employed by AMADEUS in Sophia Antipolis, and will strongly interact with the researchers at EURECOM.
Applicants should email a resume, letter of motivation, and all relevant information to Prof. Bernard Merialdo (merialdo@eurecom.fr).
The project will be conducted within AMADEUS (http://www.amadeus.com/), a world leader in the provision of solutions to the travel industry to manage the distribution and selling of travel services. The company is the leading Global Distribution System (GDS) and the biggest processor of travel bookings in the world. Their main development center is located in Sophia Antipolis, France, and employs more than 1200 engineers. The research will be supervised by EURECOM (http://www.eurecom.fr), a graduate school and research center in communication systems, whose activity includes corporate, multimedia and mobile communications. EURECOM currently counts about 20 professors, 10 post-docs, 170 MS and 60 PhD students, and is involved in many European research projects and joint collaborations with industry. EURECOM is also located in Sophia Antipolis, a major European technology park for telecommunications research and development on the French Riviera.
6-9 . (2009-05-11) Senior Research Fellowship in Speech Perception and Language Development, MARCS Auditory Laboratories
Ref 147/09 Senior Research Fellowship in Speech Perception and Language Development, MARCS Auditory Laboratories, Australia
5 Year Fixed Term Contract , Bankstown Campus
Remuneration Package: Academic Level C $107,853 to $123,724 p.a. (comprising Salary $91,266 to $104,831 p.a., 17% Superannuation, and Leave Loading)
Position Enquiries: Professor Denis Burnham, (02) 9772 6677 or email d.burnham@uws.edu.au
Closing Date: The closing date for this position has been extended until 30 June 2009.
6-10 . (2009-06-02) PhD thesis proposal 2009: Speech scene analysis, Grenoble, France
PhD thesis proposal, 2009
Doctoral school: EDISCE (http://www-sante.ujf-grenoble.fr/edisce/)
Funding: ANR (http://www.icp.inpg.fr/~schwartz/Multistap/Multistap.html)
Speech scene analysis: the audio-visuo-motor binding problem in the light of behavioural and neurophysiological data
Two important questions run through current research on the cognitive processing of speech: multisensoriality (how auditory and visual information are combined in the brain) and perceptuo-motor interactions.
A question we consider missing is that of "binding": in these auditory or audiovisual processes, how does the brain manage to "put together" the relevant information, eliminate the "noise" and build the relevant "speech streams" before making a decision? More precisely, the elementary objects of the speech scene are phonemes, and specialised auditory, visual and articulatory modules contribute to the phonetic identification process, but it has so far not been possible to isolate their respective contributions, nor the way in which these contributions are fused. Recent experiments suggest that the phonetic identification process is non-hierarchical in nature and essentially instantiated by associative operations. The thesis will consist in developing further original experimental paradigms, and also in setting up the neurophysiology and neuroimaging experiments (EEG, fMRI) available in the laboratory and its Grenoble environment, in order to determine the nature and functioning of the audiovisual grouping processes applied to speech scenes, in relation to production mechanisms.
This thesis will be carried out within the ANR project "Multistap" (Multistability and perceptual grouping in audition and speech, http://www.icp.inpg.fr/~schwartz/Multistap/Multistap.html). The project provides both the funding for the PhD grant and a stimulating environment for the research, in partnership with teams of audition and vision specialists in Paris (DEC ENS), Lyon (LNSCC) and Toulouse (Cerco).
Supervisors
Jean-Luc Schwartz (DR CNRS, HDR): 04 76 57 47 12
Frédéric Berthommier (CR CNRS): 04 76 57 48 28
Jean-Luc.Schwartz, Frederic.Berthommier@gipsa-lab.grenoble-inp.fr
6-11 . (2009-06-10) PhD in ASR in Le Mans France
PhD position in Automatic Speech Recognition
=====================================
Starting in september-october 2009.
The ASH (Attelage de Systèmes Hétérogènes) project is a project funded by the ANR (French National Research Agency). Three French academic laboratories are involved: LIUM (University of Le Mans), LIA (University of Avignon) and IRISA (Rennes).
The main objective of the ASH project is to define and experiment an original methodological framework for the integration of heterogeneous automatic speech recognition systems. Integrating heterogeneous systems, and hence heterogeneous sources of knowledge, is a key issue in ASR but also in many other applicative fields concerned with knowledge integration and multimodality.
Clearly, the lack of a generic framework to integrate systems operating with different viewpoints, different knowledges and at different levels is a strong limitation which needs to be overcome: the definition of such a framework is the fundamental challenge of this work.
By defining a rigorous and generic framework to integrate systems, significant scientific progress is expected in automatic speech recognition. Another objective of this project is to enable the efficient and reliable processing of large data streams by combining systems on the fly.
Finally, we expect to develop an on-the-fly ASR system as a real-time demonstrator of this new approach.
The thesis will be co-supervised by Paul Deléglise, Professor at LIUM, Yannick Estève, Assistant Professor at LIUM, and Georges Linarès, Assistant Professor at LIA. The candidate will work in Le Mans (LIUM), but will regularly spend a few days in Avignon (LIA).
Applicants should hold a strong university degree entitling them to start a doctorate (Masters/diploma or equivalent) in a relevant discipline (Computer Science, Human Language Technology, Machine Learning, etc).
The applicants for this PhD position should be fluent in English or in French. Competence in French is optional, though applicants will be encouraged to acquire this skill during training. This position is funded by the ANR.
Strong software skills are required, especially Unix/linux, C, Java, and a scripting language such as Perl or Python.
Contacts:
Yannick Estève: yannick.esteve@lium.univ-lemans.fr
Georges Linarès: georges.linares@univ-avignon.fr
6-12 . (2009-06-17) Two post-docs in the Carnegie Mellon University-Portugal program (Lisbon, Portugal)
Two post-doctoral positions in the framework of the Carnegie Mellon University-Portugal program are available at the Spoken Language Systems Lab (www.l2f.inesc-id.pt), INESC-ID, Lisbon, Portugal.
Positions are for a fixed-term contract of up to two and a half years, renewable in one-year intervals, in the scope of the research projects PT-STAR (Speech Translation Advanced Research to and from Portuguese) and REAP.PT (Computer Aided Language Learning – Reading Practice), both financed by FCT (Portuguese Foundation for Science and Technology).
The starting date for these positions is September 2009, or as soon as possible thereafter.
Candidates should send their CVs (in .pdf format) before July 15th to the email addresses given below, together with a motivation letter. Questions or other clarification requests should be emailed to the same addresses.
======== PT-STAR (project CMU-PT/HuMach/0039/2008) ========
Topic: Speech-to-Speech Machine Translation
Description: We seek candidates with excellent knowledge of statistical approaches to machine translation (and, if possible, also speech technologies) and strong programming skills. Familiarity with the Portuguese language is not at all mandatory, although the main source and target languages are Portuguese/English.
Email address for applications: lcoheur at l2f dot inesc-id dot pt
======== REAP.PT (project CMU-PT/HuMach/0053/2008) ========
Topic: Computer Aided Language Learning
Description: We seek candidates with excellent knowledge of automatic question generation (multiple-choice synonym questions, related word questions, and cloze questions) and/or measuring the reading difficulty of a text (exploring the combination of lexical features, grammatical features and statistical models). Familiarity with a Romance language is recommended, since the target language is Portuguese.
Email address for applications: nuno dot mamede at inesc-id dot pt
6-13 . (2009-06-19) POSTDOC POSITION in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES
POSTDOC POSITION in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES (18 months ; starting January 2010 or later) IN GRENOBLE (France)
=============================================================================
PI (ANR BLANC 2009-2012) is a cooperative project sponsored by the French National Research Agency, between the University of Grenoble (France), the University of Avignon (France), and the International Research Center MICA in Hanoï (Vietnam).
PI addresses spoken language processing (notably speech recognition) for under-resourced languages (or π-languages). From a scientific point of view, the interest and originality of this project lie in proposing viable innovative methods that go far beyond the simple retraining or adaptation of acoustic and linguistic models. From an operational point of view, this project aims at providing a free open source ASR development kit for π-languages. We plan to distribute and evaluate such a development kit by deploying ASR systems for new under-resourced languages with very poor resources from Asia (Khmer, Lao) and Africa (Bantu languages).
The POSTDOC position focuses on the development of ASR for two under-resourced languages from Asia and Africa. This includes supervising the resource collection (in relation with the language partners), proposing innovative methods to quickly develop ASR systems for these languages, evaluation, etc.
The salary of the POSTDOC position is roughly 2300€ net per month. Applicants should hold a PhD related to spoken language processing. The applicants should be fluent in English. Competence in French is optional, though applicants will be encouraged to acquire this skill during the postdoc.
For further information, please contact Laurent Besacier (Laurent.Besacier at imag.fr).
6-14 . (2009-06-22) PhD studentship in speech and machine learning ESPCI ParisTech
6-15 . (2009-06-30) Postdoctoral Fellowships in machine learning/statistics/machine vision at Monash University, Australia
6-16 . (2009-06-30) PhD studentship at LIMSI France
6-17 . (2009-07-01) PhD thesis: Vocal Prosthesis Based on Machine Learning (France)
Vocal Prosthesis Based on Machine Learning (2)
6-18 . (2009-07-06) PhD in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES (Grenoble France)
POSTDOC POSITION in SPEECH RECOGNITION FOR UNDER-RESOURCED LANGUAGES (18 months ; starting January 2010 or later) IN GRENOBLE (France)
=============================================================================
PI (ANR BLANC 2009-2012) is a cooperative project sponsored by the French National Research Agency, between the University of Grenoble (France), the University of Avignon (France), and the International Research Center MICA in Hanoï (Vietnam).
PI addresses spoken language processing (notably speech recognition) for under-resourced languages (or π-languages). From a scientific point of view, the interest and originality of this project lie in proposing viable innovative methods that go far beyond the simple retraining or adaptation of acoustic and linguistic models. From an operational point of view, this project aims at providing a free open source ASR development kit for π-languages. We plan to distribute and evaluate such a development kit by deploying ASR systems for new under-resourced languages with very poor resources from Asia (Khmer, Lao) and Africa (Bantu languages).
The POSTDOC position focuses on the development of ASR for two under-resourced languages from Asia and Africa. This includes supervising the resource collection (in relation with the language partners), proposing innovative methods to quickly develop ASR systems for these languages, evaluation, etc.
The salary of the POSTDOC position is roughly 2300€ net per month. Applicants should hold a PhD related to spoken language processing. The applicants should be fluent in English. Competence in French is optional, though applicants will be encouraged to acquire this skill during the postdoc.
For further information, please contact Laurent Besacier (Laurent.Besacier at imag.fr).
6-19 . (2009-07-08) Position at Deutsche Telekom R&D
6-20 . (2009-07-15) PhD at LIMSI Paris
6-21 . (2009-07-17) 2 PhD in Computational linguistics in Radboud University Nijmegen NL
Two PhD students for Second Language Acquisition/Computational Linguistics (1,0 fte)
Faculty of Arts,
Vacancy number: 23.24.09
Closing date:
Job description
As a PhD student you will take part in the larger research project ‘Corrective feedback and the acquisition of syntax in oral proficiency’. The goal of this research project is to investigate the essential role of corrective feedback in L2 learning of syntax in oral proficiency. It will proceed from a granular level investigating the short-term effects of different types of feedback moves on different types of learners, to a global level by studying whether the granular, short-term effects also generalize to actual learning in the long term. Corrective feedback will be provided through a CALL system that makes use of automatic speech recognition. This will make it possible to assess the learner’s oral production online and to provide corrective feedback immediately under near-optimal conditions.
As a PhD student you will study which feedback moves lead to immediate uptake and acquisition in learners with a high level of education (PhD1) or learners with a low level of education (PhD2).
You are expected to start in November 2009. You will be part of an international and interdisciplinary team and will work in a motivating research environment.
For more information, see: http://lands.let.ru.nl/~strik/research/ASOP.html; http://www.ru.nl/cls/.
Requirements
You must have:
- a Master’s degree in (Applied) Linguistics, Computational Linguistics, Computer Science, Psycholinguistics, Artificial Intelligence, Cognitive Science or Education;
- programming skills (e.g. Matlab, Perl);
- an interest in second language acquisition;
- a working knowledge of Dutch and a good command of the English language.
Organization
The Faculty of Arts consists of eleven departments in the fields of language and culture, history, history of arts, linguistics and business communication, which together cater for about 2,800 students and collaborate closely in teaching and research. The project will be carried out at the Centre for Language Studies as part of the Linguistic Information Processing and Communicative Competences research programmes.
Website: http://www.ru.nl/cls/
Conditions of employment
Employment: 1,0 fte
Additional conditions of employment
The total duration of the contract is 3.5 years. The PhD students will receive an initial contract for 18 months with possible extension by 2 years.
The starting gross salary is €2,042 per month based on full-time employment.
The short-listed applicants will be interviewed in September 2009
Other Information
Please include:
- a copy of your university degree (in English or Dutch)
- a list of all your university marks (in English or Dutch)
- a motivation letter with details of research interests/experience, programming skills and knowledge of Linguistics and Psycholinguistics.
Additional Information
Prof. Roeland van Hout (r.vanhout@let.ru.nl)
Dr. Catia Cucchiarini (c.cucchiarini@let.ru.nl)
Dr. Helmer Strik (h.strik@let.ru.nl)
Application
You can apply for the job (mention the vacancy number 23.24.09) before
RU Nijmegen, Faculty of Arts, Personnel Department
P.O. Box 9103, 6500 HD, Nijmegen, The Netherlands
E-mail: vacatures@let.ru.nl
6-22 . (2009-07-24) Acoustic signal detection engineer Oregon State University
Acoustic Signal Detection Engineer
OSU’s Cooperative Institute for Marine Resources Studies offers one year of support, with the possibility of additional support, for a researcher on a project studying passive acoustic monitoring of large whales under the direction of Dr. David Mellinger. This is a full-time (1.0 FTE), 12-month-per-year, fixed-term Faculty Research Assistant position. Individuals with a Ph.D. may be appointed as a Research Associate (Postdoc). To see full details and to apply, please see http://oregonstate.edu/jobs, then search for posting #0004462 (the leading zeros are required). For questions, please email Jessica.Waddell@oregonstate.edu or David.Mellinger@oregonstate.edu. For full consideration, apply by September 21, 2009. OSU is an Affirmative Action/Office of Equal Opportunity employer.
6-23 . (2009-08-06) Post graduate Research positions at Marcs, Australia
MARCS Auditory Laboratories currently has 3 Postgraduate Research Awards available, offering a competitive tax free living allowance of $30,427 per annum and a funded place in the doctoral program.
Projects Available:
Thinking Head - Performance
Supervisor: Dr Garth Paine (ga.paine@uws.edu.au)
Thinking Head and Head-User Evaluation
Supervisor: Assoc Prof Kate Stevens (kj.stevens@uws.edu.au)
Sonification of Real-Time Data: Computational and Cognitive Approaches
Supervisor: Professor Roger Dean (roger.dean@uws.edu.au)
Thinking Head—Human-Human and Human-Head Interaction
Supervisor: Professor Chris Davis (chris.davis@uws.edu.au)
Learning Complex Temporal and Rhythmic Relations
Supervisor: Assoc Prof Kate Stevens (kj.stevens@uws.edu.au)
Tuning in to Native Speech and Perceiving Spoken Words
Supervisor: Professor Catherine Best (c.best@uws.edu.au)
Applications close 21 August 2009. For further information visit the scholarship website – www.uws.edu.au/research/scholarships.
MARCS Website - http://marcs.uws.edu.au/ Thinking Head Website - http://thinkinghead.edu.au//
6-24 . (2009-08-06) PhD position at the Queensland University of Technology, Brisbane, Australia
PHD OPPORTUNITY:
A full-time 3 year PhD position is available at the Queensland University of Technology, Brisbane, Australia.
The position is within the Speech and Audio Research Lab, part of the Smart Systems Theme of the Faculty of Built Environment and Engineering, and the Information Security Institute. The lab conducts world-class research and postgraduate training in a variety of speech and audio processing areas (speaker recognition, diarisation, speech detection, speech enhancement, multi-microphone speech technology, automatic language identification, keyword spotting).
Project title: Speaker Diarisation
Starting date: November/December 2009
Research fields: Speech and audio processing, pattern recognition, Bayesian theory, machine learning, biometrics and security.
Project Description:
Large volumes of spoken audio are being recorded on a daily basis and audio archives of these recordings around the world are expanding rapidly. It is becoming increasingly important to be able to efficiently and automatically search, index and access information from these audio information sources. Speaker diarisation is an important, fundamental task in this process which aims to annotate the audio stream with speaker identities for each temporal region—determining “who spoke when.”
Current diarisation systems are susceptible to a number of impediments including wide variability in the acoustic characteristics of recordings from different sources, differences in the number of speakers present in a recording, the dominance of speakers, and the style and structure of the speech. All can affect the diarisation performance dramatically.
The aim of this research is to develop a framework and methods for better exploiting the sources of prior information that are generally available in many applications with the view to making portable, robust speaker diarisation systems a reality. Examples of relevant prior information include identities of participating speakers, models describing the characteristics of speakers in general or of a specific known speaker, models of the effects that recording conditions and domains have on acoustic features and knowledge of the recording domain.
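For readers unfamiliar with the task, the toy sketch below illustrates the basic idea described above: labelling fixed-length audio segments with speaker identities ("who spoke when") by clustering per-segment feature vectors. Everything in it is an invented assumption for illustration (synthetic 12-dimensional pseudo-MFCC vectors, a greedy average-linkage clustering, a hand-picked distance threshold); it is not the system or method of the advertised project, which targets far richer models and prior information.

# Toy diarisation sketch (illustrative only, not the project's actual system).
import numpy as np

def toy_diarise(segment_features, threshold=2.0):
    """Greedily group fixed-length segments into speakers by feature distance."""
    clusters, centroids = [], []          # clusters: lists of segment indices
    for i, f in enumerate(segment_features):
        if centroids:
            d = [np.linalg.norm(f - c) for c in centroids]
            j = int(np.argmin(d))
            if d[j] < threshold:          # close enough: same speaker cluster
                clusters[j].append(i)
                centroids[j] = centroids[j] + (f - centroids[j]) / len(clusters[j])
                continue
        clusters.append([i])              # otherwise start a new speaker
        centroids.append(f.copy())
    labels = np.empty(len(segment_features), dtype=int)
    for spk, idxs in enumerate(clusters):
        labels[idxs] = spk
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "speakers" alternating over 1-second segments.
    spk_a = rng.normal(0.0, 0.1, size=(5, 12))   # 12-dim pseudo-MFCC means
    spk_b = rng.normal(1.0, 0.1, size=(5, 12))
    feats = np.vstack([spk_a[:3], spk_b[:3], spk_a[3:], spk_b[3:]])
    for i, spk in enumerate(toy_diarise(feats)):
        print(f"segment {i:02d} ({i}.0-{i + 1}.0 s): speaker {spk}")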
Information for Applicants:
Applicants should hold a strong university degree which would entitle them to embark on a doctorate (Masters/diploma or equivalent) in a relevant discipline (computer science, mathematics, computer systems engineering, etc.). International students are encouraged to apply. The project is part of an ARC Linkage between QUT, a commercial partner, and two partner speech/audio processing laboratories at European universities. Opportunities for exchanges/internships at these partner institutions exist. The opportunity also exists for a cotutelle PhD with the French partner university.
Information on Brisbane and Queensland University of Technology can be found at www.qut.edu.au
Funding for the PhD position is provided as a Linkage APAI scholarship ($26,669 in 2009, indexed annually) plus a top-up scholarship (approx. $5,000 p.a.). The salary is tax-exempt.
Funding is also available for conference/internship travel.
Interested students are encouraged to contact both the project leader Prof. Sridha Sridharan (s.sridharan@qut.edu.au), and Dr Brendan Baker (bj.baker@qut.edu.au).
Applicants are asked to provide:
- cover letter describing your interest in the project
- curriculum vitae indicating degrees obtained, disciplines covered (list of courses), publications, and other relevant experience.
- sample of written work (research papers in English) is also desirable.
- references along with contact details
As the start date is later this year, potential applicants are encouraged to contact the project coordinators as soon as possible to register their interest.
Deadline for applications: 10 September 2009 (international applicants should apply as soon as possible).
6-25 . (2009-08-26) PhD Positions at the University of Bielefeld Germany
PhD Positions
The Applied Informatics Group, Faculty of Technology, Bielefeld University is looking for PhD candidates for grants and project positions in the following areas:
* Dialog modeling for human-robot interaction
* Speech signal modelling and analysis for speech recognition and synthesis
* Modeling and combining bottom-up with top-down attentional processes
We invite applications from motivated young scientists with a background in computer science, linguistics, psychology, robotics, mathematics, cognitive science or similar areas who are willing to contribute to the cross-disciplinary research agenda of our research group. Research and development are directed towards understanding the processes and functional constituents of cognitive interaction, and establishing cognitive interfaces and robots that facilitate the use of complex technical systems. Bielefeld University provides a unique environment for research in cognitive and intelligent systems by bringing together researchers from all over the world in a variety of relevant disciplines under the roof of central institutions such as the Excellence Center of Cognitive Interaction Technology (CITEC) or the Research Institute for Cognition and Robotics (CoR-Lab).
Successful candidates should hold an academic degree (MSc/Diploma) in a related discipline and have a strong interest in research and social robotics.
All applications should include: a short cover letter indicating the motivation and research interests of the candidate, a CV including a list of publications, and relevant certificates of academic qualification.
Bielefeld University is an equal opportunity employer. Women are especially encouraged to apply and, in the case of comparable competence and qualifications, will be given preference. Bielefeld University explicitly encourages disabled people to apply. Bielefeld University offers a family-friendly environment, special arrangements for child care, and dual career opportunities.
Please send your application with reference to one of the three offered research areas no later than 15.9.2009 to Ms Susanne Hoeke (shoeke@techfak.uni-bielefeld.de).
Contact:
Susanne Hoeke
AG Applied Informatics
Faculty of Technology
Universitaetsstr. 21-23
33615 Bielefeld
Germany
Email: shoeke@techfak.uni-bielefeld.de
6-26 . (2009-09-03) Post-doc au laboratoire d'informatique de Grenoble France (french)
The LIG laboratory offers a research topic
for a post-doctoral researcher
12-month fixed-term contract
Grenoble, campus
academic year 2009-2010
Research topic
------------------
Parallel Learning for Semantic Multimedia Indexing
Keywords: Machine Learning, Parallelism, Multimedia Indexing.
Context
--------
The position is offered in the context of the APIMS project (Apprentissage
Parallèle pour l'Indexation Multimédia Sémantique, i.e. Parallel Learning for
Semantic Multimedia Indexing), supported by the MSTIC unit of Université Joseph Fourier.
The amount of digital image and video material has been growing exponentially
for many years, and this trend should continue for a long time thanks to
technological progress in this area. Concept-based indexing of image and video
documents is a necessity for managing the corresponding masses of data
efficiently. Indeed, the keywords needed for content-based search are not
explicitly present in such documents, unlike in textual documents. Search from
examples or from so-called "low-level" features also has serious limitations:
the required examples are generally not available, and low-level features are
not easily manipulated or interpreted by a user. Moreover, similarity at the
level of these features does not necessarily correspond to similarity at the
semantic level. Concept-based indexing is a major challenge because of the
"semantic gap" separating the raw content of these documents (pixels, audio
samples) from the concepts that are meaningful to a user.
Significant progress has been made in recent years, notably within the TRECVID
evaluation campaigns [1]. These annual campaigns, organised by the American
National Institute of Standards and Technology (NIST), provide large amounts of
data, well-defined tasks, ground truths, and associated metrics and evaluation
tools. They contribute greatly to federating research on content-based indexing
and retrieval of video documents.
The methods that currently work best are statistical methods based on
supervised learning from manually annotated examples. So-called low-level
features are extracted from the raw audio or image signal (colour histograms or
Gabor transforms, for instance) and are then fed to classifiers trained on
positive and negative examples of the concepts to be recognised. To obtain good
results, it is necessary to use many different features and to combine them
with appropriate fusion techniques. A further gain is obtained by exploiting
the relations between concepts, such as statistical relations (co-occurrences)
or logical ones (generic-specific, for instance).
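As a concrete, purely illustrative rendering of the generic pipeline just described (low-level features, per-feature supervised classifiers, late fusion of concept scores), the sketch below uses synthetic data and scikit-learn. The feature names, data sizes and score-averaging fusion are assumptions made for the example, not the MRIM team's actual tools.

# Illustrative concept-detection pipeline with late fusion (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 2, size=n)            # 1 = concept present in the shot
# Two synthetic low-level descriptors per shot (stand-ins for colour
# histograms and Gabor/texture features).
color_hist = rng.normal(labels[:, None] * 0.8, 1.0, size=(n, 64))
texture = rng.normal(labels[:, None] * 0.5, 1.0, size=(n, 32))

train, test = slice(0, 150), slice(150, n)
scores = []
for feat in (color_hist, texture):
    clf = LogisticRegression(max_iter=1000).fit(feat[train], labels[train])
    scores.append(clf.predict_proba(feat[test])[:, 1])   # per-feature concept score

fused = np.mean(scores, axis=0)                # late fusion: average the scores
accuracy = np.mean((fused > 0.5) == labels[test])
print(f"fused detection accuracy on held-out shots: {accuracy:.2f}")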
Since the general principles are the same, the differences between approaches
concern the choices made about features, about classification and/or fusion
tools, and about the way context is taken into account. The quality and
quantity of the positive and negative examples used also make an important
difference. The current state of the art is the joint extraction of several
hundred concepts defined in the LSCOM ontology [2]. However, despite the very
substantial effort invested by a large number of teams (more than 40 teams took
part in the concept extraction task on video shots in the TRECVID 2008
campaign), the average precision of the best systems does not exceed 20%.
The MRIM team at LIG has developed methods and tools for the automatic
extraction of concepts in video shots and obtained results slightly above
average in the TRECVID 2005 to 2007 campaigns [3]. The objective of this
project is to improve these methods substantially and to bring them up to, or
even define, the state of the art in the field. This requires, on the one hand,
optimising them by taking all the important factors into account and, on the
other hand, adding a number of innovations such as the use of intermediate-level
concepts, the combination of generic and specific methods, and active learning
to improve the quantity and quality of the annotation used to train the
systems.
One of the limiting factors is the computing power required. The systems must
indeed be trained and evaluated on several hundred concepts and on several tens
of thousands of images or video shots. This must moreover be done while
studying multiple combinations of low- and intermediate-level features,
classification methods and fusion methods. We plan to use the resources of the
GRID 5000 project [4] for this, in order to study the combined influence of
these different factors on a large scale. In its simple form, the problem
parallelises fairly easily (the training and evaluation of one concept can be
assigned to one processor), but when one wants to use context, that is, the
statistical or ontological relations between concepts, the different processes
have to cooperate, and this becomes a real parallel programming problem. The
MESCAL team at LIG has extensive expertise in this area and will take part in
the study and implementation of the parallel versions of the concept extraction
methods.
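The "simple" per-concept parallelisation mentioned above can be sketched as follows: each concept's classifier is trained and evaluated independently, so concepts can simply be distributed over worker processes with no communication. The data shapes, the trivial centroid classifier and the use of Python's multiprocessing pool are illustrative assumptions only, not the project's actual GRID 5000 implementation.

# Illustrative per-concept parallel training (one concept per worker task).
import numpy as np
from multiprocessing import Pool

N_SHOTS, N_DIM, N_CONCEPTS = 1000, 50, 8
rng = np.random.default_rng(2)
FEATURES = rng.normal(size=(N_SHOTS, N_DIM))
LABELS = rng.integers(0, 2, size=(N_SHOTS, N_CONCEPTS))   # one column per concept

def train_and_evaluate(concept_id):
    """Train a trivial centroid classifier for one concept and report accuracy."""
    y = LABELS[:, concept_id]
    pos = FEATURES[y == 1].mean(axis=0)
    neg = FEATURES[y == 0].mean(axis=0)
    pred = (np.linalg.norm(FEATURES - pos, axis=1)
            < np.linalg.norm(FEATURES - neg, axis=1)).astype(int)
    return concept_id, float((pred == y).mean())

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # concepts are independent tasks
        for cid, acc in pool.map(train_and_evaluate, range(N_CONCEPTS)):
            print(f"concept {cid}: training-set accuracy {acc:.2f}")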
Exploiting the multimodality naturally present in video documents is also
essential for the performance of concept-based indexing systems. The GETALP
team at LIG has expertise in audio and speech signal processing and will take
part in the definition and optimisation of the low- and intermediate-level
features used for concept indexing from the audio track. Likewise, the GPIG
team at GIPSA-Lab has expertise in the analysis and indexing of motion in video
documents and will take part in the definition and optimisation of the low- and
intermediate-level features used for concept indexing from motion in the image
track.
References
[1] Smeaton, A. F., Over, P., and Kraaij, W. TRECVID: evaluating
the effectiveness of information retrieval tasks on digital video.
In Proceedings of the 12th Annual ACM international Conference on
Multimedia, New York, NY, USA, October 10-16, 2004.
[2] M. Naphade, J.R. Smith, J. Tesic, S.-F. Chang, W. Hsu,
L. Kennedy, A. Hauptmann and J. Curtis, Large-Scale Concept
Ontology for Multimedia, IEEE Multimedia 13(3), pp. 86-91, 2006.
[3] Stéphane Ayache, Georges Quénot and Jérôme Gensel, CLIPS-LSR
Experiments at TRECVID 2006, TRECVID’2006 Workshop, Gaithersburg,
MD, USA, November 13-14, 2006.
[4] Bolze, R. et al., Grid'5000: a large scale and highly reconfigurable
experimental Grid testbed, International Journal of High Performance
Computing Applications, 20(4), pp. 481-494, 2006.
Position description
--------------------
The first part of the work will consist in implementing parallel versions of
the classification methods developed in the MRIM team and in using these
parallel versions to jointly optimise the different elements involved in them
(feature sets, classification operators and fusion operators). This
optimisation should be carried out as systematically as possible. Given its
highly combinatorial nature and its computational cost (even on a parallel
architecture), appropriate heuristic methods will have to be studied and
implemented in order to obtain the best possible result within a given time.
In a second part, integrated approaches will have to be implemented for the
simultaneous recognition of several hundred concepts, taking the correlations
between them into account from the earliest stages of learning.
As far as possible, this work will be scheduled according to the TRECVID
evaluations on concept detection in video shots. The experiments generally take
place during the summer (July-August) and the campaigns run from February to
November of the year in question. The objective is to be able to evaluate,
during the 2009 and 2010 campaigns, what is planned to be developed in the
first and second parts described above.
Type of position and location
-----------------------------
12-month fixed-term contract at the LIG laboratory.
The post-doc will join the MRIM team of LIG (research on multimedia information
retrieval and recommender systems) and collaborate with the GETALP and MESCAL
teams of LIG and the GPIG team of the GIPSA laboratory.
Location: Grenoble, Saint Martin d'Hères campus.
Salary: approximately 2,000 euros net per month.
Required education and skills
------------------------------------
Required profile
o Substantial experience and recognised skills in software design and
development.
o Knowledge of and significant experience with the C or C++ language.
o PhD in one of the following areas: parallel programming, information
retrieval systems, machine learning, statistical data processing, image
processing.
Additional skills of interest for the position
o Experience in optimising the performance of algorithms.
o Experience of teamwork.
Application deadline
--------------------------
Applications may be submitted until 30 September 2009.
The position is to be filled at the beginning of November, or December 2009 at
the latest.
As soon as an application is accepted, the position will be filled.
Contact
-------
Georges Quénot – Laboratoire LIG – Equipe MRIM – http://mrim.imag.fr/georges.quenot
Address: Bâtiment B – 385 avenue de la Bibliothèque – 38400 Saint Martin d'Hères
E-mail: Georges.Quenot@imag.fr – Tel: 04 76 63 58 55
7 . Journals
7-1 . Special issue of Speech Comm: Non-native speech perception in adverse conditions: imperfect knowledge, imperfect signal
SPECIAL ISSUE OF SPEECH COMMUNICATION
NON-NATIVE SPEECH PERCEPTION IN ADVERSE CONDITIONS: IMPERFECT KNOWLEDGE, IMPERFECT SIGNAL
Much work in phonetics and speech perception has focused on doubly-optimal conditions, in which the signal reaching listeners is unaffected by distorting influences and in which listeners possess native competence in the sound system. However, in practice, these idealised conditions are rarely met. The processes of speech production and perception thus have to account for imperfections in the state of knowledge of the interlocutor as well as imperfections in the signal received. In noisy settings, these factors combine to create particularly adverse conditions for non-native listeners.
The purpose of the Special Issue is to assemble the latest research on perception in adverse conditions with special reference to non-native communication. The special issue will bring together, interpret and extend the results emerging from current research carried out by engineers, psychologists and phoneticians, such as the general frailty of some sounds for both native and non-native listeners and the strong non-native disadvantage experienced for categories which are apparently equivalent in the listeners’ native and target languages.
Papers describing novel research on non-native speech perception in adverse conditions are welcomed, from any perspective including the following. We especially welcome interdisciplinary contributions.
• models and theories of L2 processing in noise
• informational and energetic masking
• role of attention and processing load
• effect of noise type and reverberation
• inter-language phonetic distance
• audiovisual interactions in L2
• perception-production links
• the role of fine phonetic detail
GUEST EDITORS
Maria Luisa Garcia Lecumberri (Department of English, University of the Basque Country, Vitoria, Spain).
garcia.lecumberri@ehu.es
Martin Cooke (Ikerbasque and Department of Electrical & Electronic Engineering, University of the Basque Country, Bilbao, Spain).
m.cooke@ikerbasque.org
Anne Cutler (Max-Planck Institute for Psycholinguistics, Nijmegen, The Netherlands and MARCS Auditory Laboratories, Sydney, Australia).
anne.cutler@mpi.nl
DEADLINE
Full papers should be submitted by 31st July 2009
SUBMISSION PROCEDURE
Authors should consult the “guide for authors”, available online at http://www.elsevier.com/locate/specom, for information about the preparation of their manuscripts. Papers should be submitted via http://ees.elsevier.com/specom, choosing “Special Issue: non-native speech perception” as the article type. If you are a first time user of the system, please register yourself as an author. Prospective authors are welcome to contact the guest editors for more details of the Special Issue.
7-2 . IEEE Special Issue on Speech Processing for Natural Interaction with Intelligent Environments
Call for Papers
IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Speech Processing for Natural Interaction with Intelligent Environments
With the advances in microelectronics, communication technologies and smart materials, our environments are transformed to be increasingly intelligent by the presence of robots, bio-implants, mobile devices, advanced in-car systems, smart house appliances and other professional systems. As these environments are integral parts of our daily work and life, there is a great interest in a natural interaction with them. Also, such interaction may further enhance the perception of intelligence. "Interaction between man and machine should be based on the very same concepts as that between humans, i.e. it should be intuitive, multi-modal and based on emotion," as envisioned by Reeves and Nass (1996) in their famous book "The Media Equation". Speech is the most natural means of interaction for human beings and it offers the unique advantage that it does not require carrying a device for using it since we have our "device" with us all the time.
Speech processing techniques are developed for intelligent environments to support either explicit interaction through message communications, or implicit interaction by providing valuable information about the physical ("who speaks when and where") as well as the emotional and social context of an interaction. Challenges presented by intelligent environments include the use of distant microphone(s), resource constraints and large variations in acoustic condition, speaker, content and context. The two central pieces of techniques to cope with them are high-performing "low-level" signal processing algorithms and sophisticated "high-level" pattern recognition methods.
We are soliciting original, previously unpublished manuscripts directly targeting/related to natural interaction with intelligent environments. The scope of this special issue includes, but is not limited to:
* Multi-microphone front-end processing for distant-talking interaction
* Speech recognition in adverse acoustic environments and joint optimization with array processing
* Speech recognition for low-resource and/or distributed computing infrastructure
* Speaker recognition and affective computing for interaction with intelligent environments
* Context-awareness of speech systems with regard to their applied environments
* Cross-modal analysis of speech, gesture and facial expressions for robots and smart spaces
* Applications of speech processing in intelligent systems, such as robots, bio-implants and advanced driver assistance systems.
Submission information is available at http://www.ece.byu.edu/jstsp. Prospective authors are required to follow the Author's Guide for manuscript preparation of the IEEE Transactions on Signal Processing at http://ewh.ieee.org/soc/sps/tsp. Manuscripts will be peer reviewed according to the standard IEEE process.
Manuscript submission due: Jul. 3, 2009
First review completed: Oct. 2, 2009
Revised manuscript due: Nov. 13, 2009
Second review completed: Jan. 29, 2010
Final manuscript due: Mar. 5, 2010
Lead guest editor:
Zheng-Hua Tan, Aalborg University, Denmark, zt@es.aau.dk
Guest editors:
Reinhold Haeb-Umbach, University of Paderborn, Germany, haeb@nt.uni-paderborn.de
Sadaoki Furui, Tokyo Institute of Technology, Japan, furui@cs.titech.ac.jp
James R. Glass, Massachusetts Institute of Technology, USA, glass@mit.edu
Maurizio Omologo, FBK-IRST, Italy, omologo@fbk.eu
7-3 . Special issue "Speech as a Human Biometric: I know who you are from your voice" Int. Jnl Biometrics
7-4 . Special on Voice transformation IEEE Trans ASLP
CALL FOR PAPERS
IEEE Signal Processing Society
IEEE Transactions on Audio, Speech and Language Processing
Special Issue on Voice Transformation
With the increasing demand for Voice Transformation in areas such as speech synthesis for creating target or virtual voices, modeling various effects (e.g., Lombard effect), synthesizing emotions, making more natural dialog systems which use speech synthesis, as well as in areas like entertainment, film and music industry, toys, chat rooms and games, dialog systems, security and speaker individuality for interpreting telephony, high-end hearing aids, vocal pathology and voice restoration, there is a growing need for high-quality Voice Transformation algorithms and systems processing synthetic or natural speech signals.
Voice Transformation aims at the control of non-linguistic information of speech signals such as voice quality and voice individuality. A great deal of interest and research in the area has been devoted to the design and development of mapping functions and modifications for vocal tract configuration and basic prosodic features. However, high quality Voice Transformation systems that create effective mapping functions for vocal tract, excitation signal, and speaking style, and whose modifications take into account the interaction of source and filter during voice production, are still lacking.
We invite researchers to submit original papers describing new approaches in all areas related to Voice Transformation including, but not limited to, the following topics:
* Preprocessing for Voice Transformation (alignment, speaker selection, etc.)
* Speech models for Voice Transformation (vocal tract, excitation, speaking style)
* Mapping functions
* Evaluation of Transformed Voices
* Detection of Voice Transformation
* Cross-lingual Voice Transformation
* Real-time issues and embedded Voice Transformation Systems
* Applications
The call for papers is also available at: http://www.ewh.ieee.org/soc/sps/tap/sp_issue/VoiceTransformationCFP.pdf
Prospective authors are required to follow the Information for Authors for manuscript preparation of the IEEE Transactions on Audio, Speech, and Language Processing at http://www.signalprocessingsociety.org/periodicals/journals/taslp-author-information/
Manuscripts will be peer reviewed according to the standard IEEE process.
Schedule:
Submission deadline: May 10, 2009
Notification of acceptance: September 30, 2009
Final manuscript due: October 30, 2009
Publication date: January 2010
Lead Guest Editor:
Yannis Stylianou, University of Crete, Crete, Greece, yannis@csd.uoc.gr
Guest Editors:
Tomoki Toda, Nara Inst. of Science and Technology, Nara, Japan, tomoki@is.naist.jp
Chung-Hsien Wu, National Cheng Kung University, Tainan, Taiwan, chwu@csie.ncku.edu.tw
Alexander Kain, Oregon Health & Science University, Portland, Oregon, USA, kaina@ohsu.edu
Olivier Rosec, Orange-France Telecom R&D, Lannion, France, olivier.rosec@orange-ftgroup.com
7-5 . Special Issue on Statistical Learning Methods for Speech and Language Processing
Samy Bengio, Google Inc., Mountain View (CA), USA, bengio@google.com
7-6 . SPECIAL ISSUE OF SPEECH COMMUNICATION: Perceptual and Statistical Audition
8 . Future Speech Science and Technology Events
8-1 . (2009-09) Emotion challenge INTERSPEECH 2009
8-2 . (2009-09-06) Special session at Interspeech 2009:adaptivity in dialog systems
Call for papers (submission deadline Friday 17 April 2009)
Special Session: "Machine Learning for Adaptivity in Spoken Dialogue Systems" at Interspeech 2009, Brighton U.K., http://www.interspeech2009.org/
Session chairs: Oliver Lemon, Edinburgh University, and Olivier Pietquin, Supélec - IMS Research Group
In the past decade, research in the field of Spoken Dialogue Systems (SDS) has experienced increasing growth, and new applications include interactive mobile search, tutoring, and troubleshooting systems (e.g. fixing a broken internet connection). The design and optimization of robust SDS for such tasks requires the development of dialogue strategies which can automatically adapt to different types of users (novice/expert, youth/senior) and noise conditions (room/street). New statistical learning techniques are emerging for training and optimizing adaptive speech recognition, spoken language understanding, dialogue management, natural language generation, and speech synthesis in spoken dialogue systems. Among machine learning techniques for spoken dialogue strategy optimization, reinforcement learning using Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs) has become a particular focus (a small illustrative sketch of such policy optimisation follows this call).
We therefore solicit papers on new research in the areas of:
- Adaptive dialogue strategies and adaptive multimodal interfaces
- User simulation techniques for adaptive strategy learning and testing
- Rapid adaptation methods
- Reinforcement Learning of dialogue strategies
- Partially Observable MDPs in dialogue strategy optimization
- Statistical spoken language understanding in dialogue systems
- Machine learning and context-sensitive speech recognition
- Learning for adaptive Natural Language Generation in dialogue
- Corpora and annotation for machine learning approaches to SDS
- Machine learning for adaptive multimodal interaction
- Evaluation of adaptivity in statistical approaches to SDS and user simulation.
Important Dates
Full paper submission deadline: Friday 17 April 2009
Notification of paper acceptance: Wednesday 17 June 2009
Conference dates: 6-10 September 2009
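To make the reinforcement-learning formulation mentioned in this call more concrete, here is a minimal, self-contained tabular Q-learning sketch on an invented two-slot form-filling dialogue with a crude simulated user. The states, actions, rewards and user model are all assumptions made for illustration; real SDS work would use far richer state spaces or POMDP belief tracking.

# Toy Q-learning for a two-slot form-filling dialogue (illustrative only).
import random

ACTIONS = ["ask_slot0", "ask_slot1", "confirm"]   # system dialogue acts

def simulate_turn(state, action):
    """Invented user simulator: asking an empty slot fills it with prob. 0.8."""
    filled = list(state)
    if action == "confirm":
        reward = 10 if all(filled) else -10       # confirming too early is penalised
        return tuple(filled), reward, True
    slot = int(action[-1])
    if not filled[slot] and random.random() < 0.8:
        filled[slot] = True
    return tuple(filled), -1, False               # small cost per extra turn

Q = {((a, b), act): 0.0 for a in (False, True) for b in (False, True) for act in ACTIONS}
alpha, gamma, epsilon = 0.2, 0.95, 0.1

for _ in range(5000):                             # tabular Q-learning episodes
    state, done = (False, False), False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)       # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
        nxt, reward, done = simulate_turn(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# Learned policy: ask for missing slots, confirm once both are filled.
for s in [(False, False), (True, False), (False, True), (True, True)]:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))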
8-3 . (2009-09-07) Information Retrieval and Information Extraction for Less Resourced Languages
8-4 . (2009-09-09) CfP IDP 09 Discourse-Prosody Interface
IDP 09 : CALL FOR PAPERS
Discourse – Prosody Interface
Paris, September 9-10-11, 2009
The third round of the “Discourse – Prosody Interface” Conference will be hosted by the Laboratoire de Linguistique Formelle (UMR 7110 / LLF), the Equipe CLILLAC-ARP (EA 3967) and the Linguistic Department (UFRL) of the University of Paris-Diderot (Paris 7), on September 9-10-11, 2009 in Paris. The first round was organized by the Laboratoire Parole et Langage (UMR 6057 /LPL) in September 2005, in Aix-en-Provence. The second took place in Geneva in September 2007 and was organized by the Department of Linguistics at the University of Geneva, in collaboration with the École de Langue et Civilisation Françaises at the University of Geneva, and the VALIBEL research centre at the Catholic University of Louvain.
The third round will be held at the Paris Center of the University of Chicago, 6, rue Thomas Mann, in the XIIIth arrondissement, near the Bibliothèque François Mitterrand (BNF).
The Conference is addressed to researchers in prosody, phonology, phonetics, pragmatics, discourse analysis and also psycholinguistics, who are particularly interested in the relations between prosody and discourse. The participants may develop their research programmes within different theoretical paradigms (formal approaches to phonology and semantics/pragmatics, conversation analysis, descriptive linguistics, etc.). For this third edition, special attention will be given to research work that proposes a formal analysis of the Discourse-Prosody interface.
So as to favour convergence among contributions, the IDP09 conference will focus on:
* Prosody, its parts and discourse :
- How to analyze the interaction between the different prosodic subsystems (accentuation,
intonation, rhythm; register changes or voice quality)?
- How to model the contribution of each subsystem to the global interpretation of discourse?
- How to describe and analyze prosodic facts, and at which level (phonetic vs. phonological)?
* Prosodic units & discourse units
- What are the relevant units for discourse or conversation analysis? What are their prosodic
properties?
- How is the embedding of utterances in discourse marked syntactically or prosodically?
What are the consequences for the modelling of syntax and prosody?
* Prosody and context(s)
- What is the contribution of the context in the analysis of prosody in discourse?
- How can the relations between prosody and context(s) be modelled?
* Acquisition of the relations between prosody & discourse in L1 and L2
- How are the relations between prosody & discourse acquired in L1, in L2?
- Which methodological tools could best describe and transcribe these processes?
Guest speakers :
* Diane Blakemore (School of Languages, University of Salford, United Kingdom)
* Piet Mertens (Department of Linguistics, K.U Leuven, Belgium)
* Hubert Truckenbrodt (ZAS, Zentrum für Allgemeine Sprachwissenschaft, Berlin,
Germany)
The conference will be held in English or French. Studies can be about any language.
Submissions are made by uploading an anonymous two-page abstract (plus an extra page for references and figures), A4 format, Times 12 font, written in either English or French, as a PDF file at the following address: http://www.easychair.org/conferences/?conf=idp09
Authors' names and affiliations should be given as requested, but not in the PDF file.
If you have any question concerning the submission procedure or encounter any problem,
please send an e-mail to the following address: idp09@linguist.jussieu.fr
Authors may submit as many proposals as they wish.
The proposals will be evaluated anonymously by the scientific committee.
Schedule
• Submission deadline: April 26th, 2009
• Notification of acceptance: June 8th, 2009
• Conference (IDP 09): September 9th-11th, 2009.
Further information is available on the conference website: http://idp09.linguist.univ-paris-diderot.fr
8-5 . (2009-09-09)Conference IDP (Interface Discours Prosodie) Paris France
8-6 . (2009-09-11) SIGDIAL 2009 CONFERENCE
10th Annual Meeting of the Special Interest Group
on Discourse and Dialogue
Queen Mary University of London, UK September 11-12, 2009
(right after Interspeech 2009)
Submission Deadline: April 24, 2009
PRELIMINARY CALL FOR PAPERS
The SIGDIAL venue provides a regular forum for the presentation of
cutting edge research in discourse and dialogue to both academic and
industry researchers. Due to the success of the nine previous SIGDIAL
workshops, SIGDIAL is now a conference. The conference is sponsored by
the SIGDIAL organization, which serves as the Special Interest Group in
discourse and dialogue for both ACL and ISCA. SIGDIAL 2009 will be
co-located with Interspeech 2009 as a satellite event.
In addition to presentations and system demonstrations, the program
includes an invited talk by Professor Janet Bavelas of the University of
Victoria, entitled "What's unique about dialogue?".
TOPICS OF INTEREST
We welcome formal, corpus-based, implementation, experimental, or
analytical work on discourse and dialogue including, but not restricted
to, the following themes:
1. Discourse Processing and Dialogue Systems
Discourse semantic and pragmatic issues in NLP applications such as text
summarization, question answering, information retrieval including
topics like:
- Discourse structure, temporal structure, information structure;
- Discourse markers, cues and particles and their use;
- (Co-)Reference and anaphora resolution, metonymy and bridging resolution;
- Subjectivity, opinions and semantic orientation;
Spoken, multi-modal, and text/web based dialogue systems including
topics such as:
- Dialogue management models;
- Speech and gesture, text and graphics integration;
- Strategies for preventing, detecting or handling miscommunication
(repair and correction types, clarification and under-specificity,
grounding and feedback strategies);
- Utilizing prosodic information for understanding and for disambiguation;
2. Corpora, Tools and Methodology
Corpus-based and experimental work on discourse and spoken, text-based
and multi-modal dialogue including its support, in particular:
- Annotation tools and coding schemes;
- Data resources for discourse and dialogue studies;
- Corpus-based techniques and analysis (including machine learning);
- Evaluation of systems and components, including methodology, metrics
and case studies;
3. Pragmatic and/or Semantic Modeling
The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a
single sentence) including the following issues:
- The semantics/pragmatics of dialogue acts (including those which are
less studied in the semantics/pragmatics framework);
- Models of discourse/dialogue structure and their relation to
referential and relational structure;
- Prosody in discourse and dialogue;
- Models of presupposition and accommodation; operational models of
conversational implicature.
SUBMISSIONS
The program committee welcomes the submission of long papers for full
plenary presentation as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations, followed by posters and demonstrations.
- Long papers must be no longer than 8 pages, including title, examples,
references, etc. In addition to this, two additional pages are allowed
as an appendix which may include extended example discourses or
dialogues, algorithms, graphical representations, etc.
- Short papers and demo descriptions should be 4 pages or less
(including title, examples, references, etc.).
Please use the official ACL style files:
http://ufal.mff.cuni.cz/acl2007/styles/
Papers that have been or will be submitted to other meetings or
publications must provide this information (see submission format).
SIGDIAL 2009 cannot accept for publication or presentation work that
will be (or has been) published elsewhere. Any questions regarding
submissions can be sent to the General Co-Chairs.
Authors are encouraged to make illustrative materials available, on the
web or otherwise. Examples might include excerpts of recorded
conversations, recordings of human-computer dialogues, interfaces to
working systems, and so on.
BEST PAPER AWARDS
In order to recognize significant advancements in dialog and discourse
science and technology, SIGDIAL will (for the first time) recognize a
BEST PAPER AWARD and a BEST STUDENT PAPER AWARD. A selection committee
consisting of prominent researchers in the fields of interest will
select the recipients of the awards.
IMPORTANT DATES (SUBJECT TO CHANGE)
Submission: April 24, 2009
Workshop: September 11-12, 2009
WEBSITES
SIGDIAL 2009 conference website:
http://www.sigdial.org/workshops/workshop10/
SIGDIAL organization website: http://www.sigdial.org/
Interspeech 2009 website: http://www.interspeech2009.org/
ORGANIZING COMMITTEE
For any questions, please contact the appropriate members of the
organizing committee:
GENERAL CO-CHAIRS
Pat Healey (Queen Mary University of London): ph@dcs.qmul.ac.uk
Roberto Pieraccini (SpeechCycle): roberto@speechcycle.com
TECHNICAL PROGRAM CO-CHAIRS
Donna Byron (Northeastern University): dbyron@ccs.neu.edu
Steve Young (University of Cambridge): sjy@eng.cam.ac.uk
LOCAL CHAIR
Matt Purver (Queen Mary University of London): mpurver@dcs.qmul.ac.uk
SIGDIAL PRESIDENT
Tim Paek (Microsoft Research): timpaek@microsoft.com
SIGDIAL VICE PRESIDENT
Amanda Stent (AT&T Labs - Research): amanda.stent@gmail.com
Matthew Purver - http://www.dcs.qmul.ac.uk/~mpurver/
Senior Research Fellow
Interaction, Media and Communication
Department of Computer Science
Queen Mary University of London, London E1 4NS, UK
8-7 . (2009-09-11) Int. Workshop on spoken language technology for development: from promise to practice.
International Workshop on Spoken Language Technology for Development
- from promise to practice
Venue - The Abbey Hotel, Tintern, UK
Dates - 11-12 September 2009
Following on from a successful special session at SLT 2008 in Goa, this workshop invites participants with an interest in SLT4D and who have expertise and experience in any of the following areas:
- Development of speech technology for resource-scarce languages
- SLT deployments in the developing world
- HCI in a developing world context
- Successful ICT4D interventions
The aim of the workshop is to develop a "Best practice in developing and deploying speech systems for developmental applications". It is also hoped that the participants will form the core of an open community which shares tools, insights and methodologies for future SLT4D projects.
If you are interested in participating in the workshop, please submit a 2-4 page position paper explaining how your expertise and experience might be applied to SLT4D, formatted according to the Interspeech 2009 guidelines, to Roger Tucker at roger@outsideecho.com by 30th April 2009.
Important Dates:
Papers due: 30th April 2009
Acceptance Notification: 10th June 2009
Early Registration deadline: 3rd July 2009
Workshop: 11-12 September 2009
Further details can be found on the workshop website at www.llsti.org/SLT4D-09
8-8 . (2009-09-11) ACORNS Workshop Brighton UK
8-9 . (2009-09-13)Young Researchers' Roundtable on Spoken Dialogue Systems 2009 London
Young Researchers' Roundtable on Spoken Dialogue Systems 2009
13th-14th September, at Queen Mary University of London
*Overview and goals*
The Young Researchers' Roundtable on Spoken Dialogue Systems (YRRSDS) is an annual workshop designed for post-graduate students, post-docs and junior researchers working in research related to spoken dialogue systems in both academia and industry. The roundtable provides an open forum where participants can discuss their research interests, current work and future plans. The workshop has three main goals:
- to offer an interdisciplinary forum for creative thinking about current issues in spoken dialogue systems research
- to provide young researchers with career advice from senior researchers and professionals from both academic and industrial backgrounds
- to develop a stronger international network of young researchers working in the field.
(Important note: There is no age restriction to participating in the workshop; the word 'young' is meant to indicate that it is targeted towards researchers who are at a relatively early stage in their career.)
*Topics and sessions*
Potential roundtable discussion topics include: best practices for conducting and evaluating user studies of spoken dialogue systems, the prosody of conversation, methods of analysis for dialogue systems, conversational agents and virtual characters, cultural adaptation of dialogue strategies, and user modelling.
YRRSDS’09 will feature:
- a senior researcher panel (both academia and industry)
- a demo and poster session
- a special session on frameworks and grand challenges for dialogue system evaluation
- a special session on EU projects related to spoken dialogue systems.
Previous workshops were held in Columbus (ACL 2008), Antwerp (INTERSPEECH 2007), Pittsburgh (INTERSPEECH 2006) and Lisbon (INTERSPEECH 2005).
*Workshop date*
YRRSDS'09 will take place on September 13th and 14th, 2009 (immediately after Interspeech and SIGDial 2009).
*Workshop location*
The 2009 YRRSDS will be held at Queen Mary University of London, one of the UK's leading research-focused higher education institutions. Queen Mary’s Mile End campus began life in 1887 as the People's Palace, a philanthropic endeavour to provide east Londoners with education and social activities, and is located in the heart of London's vibrant East End.
*Grants*
YRRSDS 2009 will be supported this year by ISCA, the International Speech Communication Association. ISCA will consider applications for a limited number of travel grants. Applications should be sent directly to grants@isca-speech.org; details of the application process and forms are available from http://www.isca-speech.org/grants.html. We are also negotiating with other supporters the possibility of offering a limited number of travel grants to students.
*Endorsements*
SIGDial, ISCA, Dialogs on Dialogs
*Sponsors*
Orange, Microsoft Research, AT&T
*Submission process*
Participants will be asked to submit a 2-page position paper based on a template provided by the organising committee. In their papers, authors will include a short biographical sketch, a brief statement of research interests, a description of their research work, and a short discussion of what they believe to be the most significant and interesting issues in spoken dialogue systems today and in the near future. Participants will also provide three suggestions for discussion topics.
Workshop attendance will be limited to 50 participants. Submissions will be accepted on a first-come-first-served basis. Submissions will be collated and made available to participants. We also plan to publish the position papers and presentations from the workshop on the web, subject to any sponsor or publisher constraints.
*Important Dates*
- Submissions open: May 15, 2009
- Submissions deadline: June 30, 2009
- Final notification: July 31, 2009
- Registration begins: TBD
- Registration deadline: TBD
- Interspeech: 6-10 September 2009
- SIGDial: 11-12 September, 2009
- YRR: 13-14 September, 2009
*More information on related websites*
- Young Researchers' Roundtable website: http://www.yrrsds.org/
- SIGDIAL 2009 conference website: http://www.sigdial.org/workshops/workshop10/
- Interspeech 2009 website: http://www.interspeech2009.org/
*Organising Committee*
- David Díaz Pardo de Vera, Polytechnic University of Madrid, Spain
- Milica Gašić, Cambridge University, UK
- François Mairesse, Cambridge University, UK
- Matthew Marge, Carnegie Mellon University, USA
- Joana Paulo Pardal, Technical University Lisbon, Portugal
- Ricardo Ribeiro, ISCTE, Lisbon, Portugal
*Local Organisers*
- Arash Eshghi, Queen Mary University of London, UK
- Christine Howes, Queen Mary University of London, UK
- Gregory Mills, Queen Mary University of London, UK
*Scientific Advisory Committee*
- Hua Ai, University of Pittsburgh, USA
- James Allen, University of Rochester, USA
- Alan Black, Carnegie Mellon University, USA
- Dan Bohus, Microsoft Research, USA
- Philippe Bretier, Orange Labs, France
- Robert Dale, Macquarie University, Australia
- Maxine Eskenazi, Carnegie Mellon University, USA
- Sadaoki Furui, Tokyo Institute of Technology, Japan
- Luis Hernández Gómez, Polytechnic University of Madrid, Spain
- Carlos Gómez Gallo, University of Rochester, USA
- Kristiina Jokinen, University of Helsinki, Finland
- Nuno Mamede, Spoken Language Systems Lab, INESC-ID, Portugal
- David Martins de Matos, Spoken Language Systems Lab, INESC-ID, Portugal
- João Paulo Neto, Voice Interaction, Portugal
- Tim Paek, Microsoft Research
- Antoine Raux, Honda Research, USA
- Robert J. Ross, Universität Bremen, Germany
- Alexander Rudnicky, Carnegie Mellon University, USA
- Mary Swift, University of Rochester, USA
- Isabel Trancoso, Spoken Language Systems Lab, INESC-ID, Portugal
- Tim Weale, The Ohio State University, USA
- Jason Williams, AT&T, USA
- Sabrina Wilske, Lang Tech and Cognitive Sys at Saarland University, Germany
- Andi Winterboer, Universiteit van Amsterdam, Netherlands
- Craig Wootton, University of Ulster, Belfast, Northern Ireland
- Steve Young, University of Cambridge, United Kingdom
8-10 . (2009-09-14) 7th International Conference on Recent Advances in Natural Language Processing
RANLP-09 Second Call for Papers and Submission Information
"RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING"
International Conference RANLP-2009
September 14-16, 2009
Borovets, Bulgaria
http://www.lml.bas.bg/ranlp2009
Further to the successful and highly competitive 1st, 2nd, 3rd, 4th, 5th
and 6th conferences 'Recent Advances in Natural Language Processing'
(RANLP), we are pleased to announce the 7th RANLP conference to be held in
September 2009.
The conference will take the form of addresses from invited keynote
speakers plus peer-reviewed individual papers. There will also be an
exhibition area for poster and demo sessions.
We invite papers reporting on recent advances in all aspects of Natural
Language Processing (NLP). The conference topics are announced at the
RANLP-09 website. All accepted papers will be published in the full
conference proceedings and included in the ACL Anthology. In addition,
volumes of RANLP selected papers are traditionally published by John
Benjamins Publishers; currently the volume of Selected RANLP-07 papers is
under print.
KEYNOTE SPEAKERS:
• Kevin Bretonnel Cohen (University of Colorado School of Medicine),
• Mirella Lapata (University of Edinburgh),
• Shalom Lappin (King’s College, London),
• Massimo Poesio (University of Trento and University of Essex).
CHAIR OF THE PROGRAMME COMMITTEE:
Ruslan Mitkov (University of Wolverhampton)
CHAIR OF THE ORGANISING COMMITTEE:
Galia Angelova (Bulgarian Academy of Sciences)
The PROGRAMME COMMITTEE members are distinguished experts from all over
the world. The list of PC members will be announced at the conference
website. After the review, the list of all reviewers will be announced at
the website as well.
SUBMISSION
People interested in participating should submit a paper, poster or demo
following the instructions provided at the conference website. The review
will be blind, so the article text should not reveal the authors' names.
Author identification should be done in additional page of the conference
management system.
TUTORIALS 12-13 September 2009:
Four half-day tutorials will be organised at 12-13 September 2009. The
list of tutorial lecturers includes:
• Kevin Bretonnel Cohen (University of Colorado School of Medicine),
• Constantin Orasan (University of Wolverhampton)
WORKSHOPS 17-18 September 2009:
Post-conference workshops will be organised at 17-18 September 2009. All
workshops will publish hard-copy proceedings, which will be distributed at
the event. Workshop papers might be listed in the ACL Anthology as well
(depending on the workshop organisers). The list of RANLP-09 workshops
includes:
• Semantic Roles on Human Language Technology Applications, organised by
Paloma Moreda, Rafael Muñoz and Manuel Palomar,
• Partial Parsing 2: Between Chunking and Deep Parsing, organised by Adam
Przepiorkowski, Jakub Piskorski and Sandra Kuebler,
• 1st Workshop on Definition Extraction, organised by Gerardo Eugenio
Sierra Martínez and Caroline Barriere,
• Evaluation of Resources and Tools for Central and Eastern European
languages, organised by Cristina Vertan, Stelios Piperidis and Elena
Paskaleva,
• Adaptation of Language Resources and Technology to New Domains,
organised by Nuria Bel, Erhard Hinrichs, Kiril Simov and Petya Osenova,
• Natural Language Processing methods and corpora in translation,
lexicography, and language learning, organised by Viktor Pekar, Iustina
Narcisa Ilisei, and Silvia Bernardini,
• Events in Emerging Text Types (eETTs), organised by Constantin Orasan,
Laura Hasler, and Corina Forascu,
• Biomedical Information Extraction, organised by Guergana Savova,
Vangelis Karkaletsis, and Galia Angelova.
IMPORTANT DATES:
Conference paper submission notification: 6 April 2009
Conference paper submission deadline: 13 April 2009
Conference paper acceptance notification: 1 June 2009
Final versions of conference papers submission: 13 July 2009
Workshop paper submission deadline (suggested): 5 June 2009
Workshop paper acceptance notification (suggested): 20 July 2009
Final versions of workshop papers submission (suggested): 24 August 2009
RANLP-09 tutorials: 12-13 September 2009 (Saturday-Sunday)
RANLP-09 conference: 14-16 September 2009 (Monday-Wednesday)
RANLP-09 workshops: 17-18 September 2009 (Thursday-Friday)
For further information about the conference, please visit the conference
site http://www.lml.bas.bg/ranlp2009.
THE TEAM BEHIND RANLP-09
Galia Angelova, Bulgarian Academy of Sciences, Bulgaria, Chair of the Org.
Committee
Kalina Bontcheva, University of Sheffield, UK
Ruslan Mitkov, University of Wolverhampton, UK, Chair of the Programme
Committee
Nicolas Nicolov, Umbria Inc, USA (Editor of volume with selected papers)
Nikolai Nikolov, INCOMA Ltd., Shoumen, Bulgaria
Kiril Simov, Bulgarian Academy of Sciences, Bulgaria (Workshop Coordinator)
e-mail: ranlp09 [AT] lml (dot) bas (dot)
8-11 . (2009-09-14) Student Research Workshop at RANLP (Bulgaria)
First Call for Papers
Student Research Workshop
14-15 September 2009,
associated with the International Conference RANLP-2009
/RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING/
http://lml.bas.bg/ranlp2009/stud-ranlp09
The International Conference RANLP 2009 would like to invite students at all levels (Bachelor-, Master-, and PhD-students) to present their ongoing work at the Student Research Workshop. This will provide an excellent opportunity to present and discuss your work in progress or completed projects to an international research audience and receive feedback from senior researchers. The research being presented can come from any topic area within natural language processing and computational linguistics including, but not limited to, the following topic areas:
Anaphora Resolution, Complexity, Corpus Linguistics, Discourse, Evaluation, Finite-State Technology, Formal Grammars and Languages, Information Extraction, Information Retrieval, Lexical Knowledge Acquisition, Lexicography, Machine Learning, Machine Translation, Morphology, Natural Language Generation, Natural Language in Multimodal and Multimedia Systems, Natural Language Interraction, Natural Language Processing in Computer-Assisted Language Learning, Natural Language Processing for Biomedical Texts, Ontologies, Opinion Mining, Parsing, Part-of-Speech Tagging, Phonology, Post-Editing, Pragmatics and Dialogue, Question Answering, Semantics, Speech Recognition, Statistical Methods, Sublanguages and Controlled Languages, Syntax, Temporal Processing, Term Extraction and Automatic Indexing, Text Data Mining, Text Segmentation, Text Simplification, Text Summarisation, Text-to-Speech Synthesis, Translation Technology, Tree-Adjoining Grammars, Word Sense Disambiguation.
All accepted papers will be presented at the Student Workshop sessions during the main conference days: 14-16 September 2009. The articles will be issued in special Student Session electronic proceedings.
Important Dates
Submission deadline: 25 July
Acceptance notification: 20 August
Camera-ready deadline: 1 September
Submission Requirements
All papers must be submitted in .doc or .pdf format and must be 4-8 pages long (including references). For format requirements please refer to the main RANLP website at http://lml.bas.bg/ranlp2009, Submission Info Section. Each submission will be reviewed by 3 reviewers from the Programme Committee, which will include both experienced researchers and PhD students nearing the completion of their PhD studies. The final decisions will be made based on these reviews. The submissions will have to specify the student's level (Bachelor-, Master-, or PhD).
Programme Committee
To be announced in the Second Call for Papers.
Organising Committee
Irina Temnikova (
Ivelina Nikolova (Bulgarian
Natalia Konstantinova (
For More Information
8-12 . (2009-09-28) ELMAR 2009
51st International Symposium ELMAR-2009
28-30 September 2009, Zadar, CROATIA
Paper submission deadline: March 16, 2009
http://www.elmar-zadar.org/
CALL FOR PAPERS
TECHNICAL CO-SPONSORS
IEEE Region 8
EURASIP - European Assoc. Signal, Speech and Image Processing
IEEE Croatia Section
IEEE Croatia Section Chapter of the Signal Processing Society
IEEE Croatia Section Joint Chapter of the AP/MTT Societies
CONFERENCE PROCEEDINGS INDEXED BY
IEEE Xplore, INSPEC
TOPICS
--> Image and Video Processing
--> Multimedia Communications
--> Speech and Audio Processing
--> Wireless Communications
--> Telecommunications
--> Antennas and Propagation
--> e-Learning and m-Learning
--> Navigation Systems
--> Ship Electronic Systems
--> Power Electronics and Automation
--> Naval Architecture
--> Sea Ecology
--> Special Sessions Proposals - a special session consists of 5-6 papers which should present a unifying theme from a diversity of viewpoints
KEYNOTE TALKS
* Prof. Gregor Rozinaj, Slovak University of Technology, Bratislava, SLOVAKIA: title to be announced soon.
* Mr. David Wood, European Broadcasting Union, Geneva, SWITZERLAND: What strategy and research agenda for Europe in 'new media'?
SUBMISSION
Papers accepted by two reviewers will be published in the conference proceedings, available at the conference and abstracted/indexed in the IEEE Xplore and INSPEC databases. More info is available here: http://www.elmar-zadar.org/
IMPORTANT: Web-based (online) submission of papers in PDF format is required for all authors. No e-mail, fax, or postal submissions will be accepted. Authors should prepare their papers according to the ELMAR-2009 paper sample, convert them to PDF based on IEEE requirements, and submit them using the web-based submission system by March 16, 2009.
SCHEDULE OF IMPORTANT DATES
Deadline for submission of full papers: March 16, 2009
Notification of acceptance mailed out by: May 11, 2009
Submission of (final) camera-ready papers: May 21, 2009
Preliminary program available online by: June 11, 2009
Registration forms and payment deadline: June 18, 2009
Accommodation deadline: September 10, 2009
GENERAL CO-CHAIRS
Ive Mustac, Tankerska plovidba, Zadar, Croatia
Branka Zovko-Cihlar, University of Zagreb, Croatia
PROGRAM CHAIR
Mislav Grgic, University of Zagreb, Croatia
INTERNATIONAL PROGRAM COMMITTEE
Juraj Bartolic, Croatia; David Broughton, United Kingdom; Paul Dan Cristea, Romania; Kresimir Delac, Croatia; Zarko Cucej, Slovenia; Marek Domanski, Poland; Kalman Fazekas, Hungary; Janusz Filipiak, Poland; Renato Filjar, Croatia; Borko Furht, USA; Mohammed Ghanbari, United Kingdom; Mislav Grgic, Croatia; Sonja Grgic, Croatia; Yo-Sung Ho, Korea; Bernhard Hofmann-Wellenhof, Austria; Ismail Khalil Ibrahim, Austria; Bojan Ivancevic, Croatia; Ebroul Izquierdo, United Kingdom; Kristian Jambrosic, Croatia; Aggelos K. Katsaggelos, USA; Tomislav Kos, Croatia; Murat Kunt, Switzerland; Panos Liatsis, United Kingdom; Rastislav Lukac, Canada; Lidija Mandic, Croatia; Gabor Matay, Hungary; Branka Medved Rogina, Croatia; Borivoj Modlic, Croatia; Marta Mrak, United Kingdom; Fernando Pereira, Portugal; Pavol Podhradsky, Slovak Republic; Ramjee Prasad, Denmark; Kamisetty R. Rao, USA; Gregor Rozinaj, Slovak Republic; Gerald Schaefer, United Kingdom; Mubarak Shah, USA; Shiguang Shan, China; Thomas Sikora, Germany; Karolj Skala, Croatia; Marian S. Stachowicz, USA; Ryszard Stasinski, Poland; Luis Torres, Spain; Frantisek Vejrazka, Czech Republic; Stamatis Voliotis, Greece; Nick Ward, United Kingdom; Krzysztof Wajda, Poland; Branka Zovko-Cihlar, Croatia
CONTACT INFORMATION
Assoc. Prof. Mislav Grgic, Ph.D.
FER, Unska 3/XII, HR-10000 Zagreb, CROATIA
Telephone: +385 1 6129 851, Fax: +385 1 6129 717
E-mail: elmar2009 (at) fer.hr
For further information please visit: http://www.elmar-zadar.org/
8-13 . (2009-10-05) 2009 APSIPA ASC
APSIPA Annual Summit and Conference
October 5 - 7, 2009, Sapporo Convention Center, Sapporo, Japan
The 2009 APSIPA Annual Summit and Conference is the inaugural event supported by the Asia-Pacific Signal and Information Processing Association (APSIPA). APSIPA is a new association that promotes all aspects of research and education on signal processing, information technology, and communications. The field of interest of APSIPA concerns all aspects of signals and information, including processing, recognition, classification, communications, networking, computing, system design, security, implementation, and technology, with applications to scientific, engineering, and social areas.
The topics for regular sessions include, but are not limited to:
Signal Processing Track
1.1 Audio, speech, and language processing
1.2 Image, video, and multimedia signal processing
1.3 Information forensics and security
1.4 Signal processing for communications
1.5 Signal processing theory and methods
Sapporo and Conference Venue: One of the most attractive cities in Japan, Sapporo is widely recognized as a beautiful and well-organized city. With a population of 1,800,000, Hokkaido's largest and capital city is fully serviced by a network of subway, streetcar, and bus lines connecting to its full complement of hotel accommodations. Sapporo has already played host to international meetings, sports events, and academic societies. There are many flights to and from Tokyo, Nagoya, Osaka and other cities, as well as overseas destinations. With all the amenities of a major city yet in balance with its natural surroundings, this beautiful northern capital is well equipped to host a new generation of conventions.
Important Due Dates and Author's Schedule:
Proposals for Special Sessions: March 1, 2009
Proposals for Forum, Panel and Tutorial Sessions: March 20, 2009
Deadline for Submission of Full Papers: March 31, 2009
Notification of Acceptance: July 1, 2009
Deadline for Submission of Camera-Ready Papers: August 1, 2009
Conference dates: October 5 - 7, 2009
Submission of Papers: Prospective authors are invited to submit either long papers, up to 10 pages in length, or short papers, up to four pages in length; long papers will be for single-track oral presentation and short papers will mostly be for poster presentation. The conference proceedings will be published, available, and maintained at the APSIPA website.
Detailed Information: http://www.gcoe.ist.hokudai.ac.jp/apsipa2009/
Organizing Committee:
Honorary Chair: Sadaoki Furui, Tokyo Institute of Technology, Japan
General Co-Chairs: Yoshikazu Miyanaga, Hokkaido University, Japan; K. J. Ray Liu, University of Maryland, USA
Technical Program Co-Chairs: Hitoshi Kiya, Tokyo Metropolitan Univ., Japan; Tomoaki Ohtsuki, Keio University, Japan; Mark Liao, Academia Sinica, Taiwan; Takao Onoye, Osaka University, Japan
8-14 . (2009-10-05) IEEE International Workshop on Multimedia Signal Processing - MMSP'09
Call for Papers
2009 IEEE International Workshop on Multimedia Signal Processing - MMSP'09
October 5-7, 2009, Sheraton Rio Hotel & Resort, Rio de Janeiro, Brazil
We would like to invite you to submit your work to MMSP-09, the eleventh IEEE International Workshop on Multimedia Signal Processing, and to remind you of the upcoming paper submission deadline on April 17th.
This year MMSP will introduce a new type of paper award: the "top 10%" paper award. While MMSP papers are already very well regarded and highly cited, there is a growing need among the scientific community for more immediate quality recognition. The objective of the top 10% award is to acknowledge outstanding-quality papers, while at the same time keeping the wider participation and information exchange allowed by higher acceptance rates. MMSP will continue to accept as many high-quality papers as possible, with acceptance rates in line with other top events of the IEEE Signal Processing Society. This new award will be granted to as many as 10% of the total paper submissions, and is open to all accepted papers, whether presented in oral or poster form.
The workshop is organized by the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society. Organized in Rio de Janeiro, MMSP-09 provides excellent conditions for brainstorming on, and sharing, the latest advances in multimedia signal processing and technology in one of the most beautiful and exciting cities in the world.
Scope: Papers are solicited on the following topics (but not limited to):
Systems and applications
- Teleconferencing, telepresence, tele-immersion, immersive environments
- Virtual classrooms and distance learning
- Multimodal collaboration, online multiplayer gaming, social networking
- Telemedicine, human-human distance collaboration
- Multimodal storage and retrieval
Multimedia for communication and collaboration
- Ad hoc broadband sensor array processing
- Microphone and camera array processing
- Automatic sensor calibration, synchronization
- De-noising, enhancement, source separation
- Source localization, spatialization
Scene analysis for immersive telecommunication and human collaboration
- Audiovisual scene analysis
- Object detection, identification, and tracking
- Gesture, face, and human pose recognition
- Presence detection and activity classification
- Multimodal sensor fusion
Coding
- Distributed/centralized source coding for sensor arrays
- Scalable source coding for multiparty conferencing
- Error/loss resilient coding for telecommunications
- Channel coding, error protection and error concealment
Networking
- Voice/video over IP and wireless
- Quality monitoring and management
- Security
- Priority-based QoS control and scheduling
- Ad-hoc and real time communications
- Channel coding, packetization, synchronization, buffering
A thematic emphasis for MMSP-09 is on topics related to multimedia processing and interaction for immersive telecommunications and collaboration. Papers on these topics are encouraged.
Schedule
- Papers (full paper, 4 pages, to be received by): April 17, 2009
- Notification of acceptance by: June 13, 2009
- Camera-ready paper submission by: July 6, 2009
More information is available at http://www.mmsp09.org
8-15 . (2009-10-13) CfP ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009)
ACM Multimedia 2009 Workshop
Searching Spontaneous Conversational Speech (SSCS 2009)
***Submission Deadline Extended to Monday, June 15, 2009***
----------------------------
http://ict.ewi.tudelft.nl/SSCS2009/
Multimedia content often contains spoken audio as a key component. Although speech is generally acknowledged as the quintessential carrier of semantic information, spoken audio remains underexploited by multimedia retrieval systems. In particular, the potential of speech technology to improve information access has not yet been successfully extended beyond multimedia content containing scripted speech, such as broadcast news. The SSCS 2009 workshop is dedicated to fostering search research based on speech technology as it expands into spoken content domains involving non-scripted, less-highly conventionalized, conversational speech characterized by wide variability of speaking styles and recording conditions. Such domains include podcasts, video diaries, lifelogs, meetings, call center recordings, social video networks, Web TV, conversational broadcast, lectures, discussions, debates, interviews and cultural heritage archives. This year we are setting a particular focus on the user and the use of speech techniques and technology in real-life multimedia access systems and have chosen the theme "Speech technology in the multimedia access framework."
The development of robust, scalable, affordable approaches for accessing multimedia collections with a spoken component requires the sustained collaboration of researchers in the areas of speech recognition, audio processing, multimedia analysis and information retrieval. Motivated by the aim of providing a forum where these disciplines can engage in productive interaction and exchange, Searching Spontaneous Conversational Speech (SSCS) workshops were held in conjunction with SIGIR 2007 in Amsterdam and with SIGIR 2008 in Singapore. The SSCS workshop series continues with SSCS 2009 held in conjunction with ACM Multimedia 2009 in Beijing. This year the workshop will focus on addressing the research challenges that were identified during SSCS 2008: Integration, Interface/Interaction, Scale/Scope, and Community.
We welcome contributions on a range of trans-disciplinary issues related to these research challenges, including:
***Integration***
-Information retrieval techniques based on speech analysis (e.g., applied to speech recognition lattices)
-Search effectiveness (e.g., evidence combination, query/document expansion)
-Self-improving systems (e.g., unsupervised adaptation, recursive metadata refinement)
-Exploitation of audio analysis (e.g., speaker emotional state, speaker characteristics, speaking style)
-Integration of higher-level semantics, including cross-modal concept detection
-Combination of indexing features from video, text and speech
***Interface/Interaction***
-Surrogates for representation or browsing of spoken content
-Intelligent playback: exploiting semantics in the media player
-Relevance intervals: determining the boundaries of query-related media segments
-Cross-media linking and link visualization deploying speech transcripts
***Scale/Scope***
-Large-scale speech indexing approaches (e.g., collection size, search speed)
-Dealing with collections containing multiple languages
-Affordable, light-weight solutions for small collections, i.e., for the long tail
***Community***
-Stakeholder participation in design and realization of real world applications
-Exploiting user contributions (e.g., tags, ratings, comments, corrections, usage information, community structure)
Contributions for oral presentations (8-10 pages), poster presentations (2 pages), demonstration descriptions (2 pages) and position papers for selection of panel members (2 pages) will be accepted. Further information including submission guidelines is available on the workshop website: http://ict.ewi.tudelft.nl/SSCS2009/
Important Dates:
Monday, June 15, 2009 (Extended Deadline) Submission Deadline
Saturday, July 10, 2009 Author Notification
Friday, July 17, 2009 Camera Ready Deadline
Friday, October 23, 2009 Workshop in Beijing
For more information: m.a.larson@tudelft.nl
SSCS 2009 Website: http://ict.ewi.tudelft.nl/SSCS2009/
ACM Multimedia 2009 Website: http://www.acmmm09.org
On behalf of the SSCS2009 Organizing Committee:
Martha Larson, Delft University of Technology, The Netherlands
Franciska de Jong, University of Twente, The Netherlands
Joachim Kohler, Fraunhofer IAIS, Germany
Roeland Ordelman, Sound & Vision and University of Twente, The Netherlands
Wessel Kraaij, TNO and Radboud University, The Netherlands
8-16 . (2009-10-18) 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
Call for Papers
2009 IEEE Workshop on Applications of Signal Processing to Audio and
Acoustics
Mohonk Mountain House
New Paltz, New York
October 18-21, 2009
The 2009 IEEE Workshop on Applications of Signal Processing to Audio and
Acoustics (WASPAA'09) will be held at the Mohonk Mountain House in New
Paltz, New York, and is sponsored by the Audio & Electroacoustics committee
of the IEEE Signal Processing Society. The objective of this workshop is to
provide an informal environment for the discussion of problems in audio and
acoustics and the signal processing techniques leading to novel solutions.
Technical sessions will be scheduled throughout the day. Afternoons will be
left free for informal meetings among workshop participants.
Papers describing original research and new concepts are solicited for
technical sessions on, but not limited to, the following topics:
* Acoustic Scenes
- Scene Analysis: Source Localization, Source Separation, Room Acoustics
- Signal Enhancement: Echo Cancellation, Dereverberation, Noise Reduction,
Restoration
- Multichannel Signal Processing for Audio Acquisition and Reproduction
- Microphone Arrays
- Eigenbeamforming
- Virtual Acoustics via Loudspeakers
* Hearing and Perception
- Auditory Perception, Spatial Hearing, Quality Assessment
- Hearing Aids
* Audio Coding
- Waveform Coding and Parameter Coding
- Spatial Audio Coding
- Internet Audio
- Musical Signal Analysis: Segmentation, Classification, Transcription
- Digital Rights
- Mobile Devices
* Music
- Signal Analysis and Synthesis Tools
- Creation of Musical Sounds: Waveforms, Instrument Models, Singing
- MEMS Technologies for Signal Pick-up
Submission of four-page paper: April 15, 2009
Notification of acceptance: June 26, 2009
Early registration until: September 1, 2009
Workshop Committee
General Co-Chair:
Jacob Benesty
Université du Québec
INRS-EMT
Montréal, Québec, Canada
General Co-Chair:
Tomas Gaensler
mh acoustics
Summit, NJ, USA
Technical Program Chair:
Yiteng (Arden) Huang
WeVoice Inc.
Bridgewater, NJ, USA
Technical Program Chair:
Jingdong Chen
Bell Labs
Alcatel-Lucent
Murray Hill, NJ, USA
jingdong@research.bell-labs.com
Finance Chair:
Michael Brandstein
Information Systems
Technology Group
MIT Lincoln Lab
Lexington, MA, USA
Publications Chair:
Eric J. Diethorn
Multimedia Technologies
Avaya Labs Research
Basking Ridge, NJ, USA
Publicity Chair:
Sofiène Affes
Université du Québec
INRS-EMT
Montréal, Québec, Canada
Local Arrangements Chair:
Heinz Teutsch
Multimedia Technologies
Avaya Labs Research
Basking Ridge, NJ, USA
Far East Liaison:
Shoji Makino
NTT Communication Science
Laboratories, Japan
8-17 . (2009-10-23) CfP Searching Spontaneous Conversational Speech (SSCS 2009) ACM Multimedia Workshop (see the full call under section 8-15 above)
8-18 . (2009-10-23) ACM Multimedia 2009 Workshop Searching Spontaneous Conversational Speech (SSCS 2009) (see the full call under section 8-15 above)
8-19 . (2009-11-01) NLP Approaches for Unmet Information Needs in Health Care
NLP Approaches for Unmet Information Needs in Health Care
(http://www.uwm.edu/~hongyu/files/BIBM.workshop.html)
A workshop of IEEE International Conference on Bioinformatics and
Biomedicine 2009, Washington DC
As the amount of literature and other information in the biomedical
field continues to grow at a rapid rate, researchers in the health
care community depend on computers to find the best answers for
meeting their information needs. Traditionally, information needs have
been simply represented as a set of queries. Recently, there have been
growing research efforts addressing these needs with natural language
processing. Although there are valuable biomedical databases, more work needs to be done to develop
computational approaches that enable users to search multiple
databases, which often comprise a variety of formats, including
journal articles, clinical guidelines, and electronic health care
records. Therefore, the task at hand is to develop natural language
systems that can understand the queries or complex questions being
asked, interpret the different resources that could be used to answer
the question, extract relevant information, and summarize this
information to meet user needs, and data mine the structured data for
clinical decision support. This workshop will explore a broad range of
traditional NLP approaches and emerging new methods, and the variety
of challenges that need to be overcome with respect to these issues.
Some specific topics include:
* Clinical information needs
* Clinical terminology and coding clinical data
* Annotation and machine learning
* Healthcare, domain-specific adaption of open-domain NLP techniques
* Information extraction from electronic health records
* Data mining of electronic health records
* NLP approaches that involve image and video
* Automatic speech recognition for the healthcare domain
* Spoken clinical question answering
Paper submission: http://kis-lab.com/cyberchair/bibm09/cbc_index.html
Timeline:
August 10, 2009: Due date for full workshop papers submission
September 10, 2009: Notification of paper acceptance to authors
September 17, 2009: Camera-ready of accepted papers
November 1-4, 2009: Workshops
Organizers:
Workshop co-chairs:
Hong Yu, PhD, University of Wisconsin-Milwaukee
Dilek Hakkani-Tür, PhD, International Computer Science Institute
John Ely, MD University of Iowa
Lyle Ungar, PhD, University of Pennsylvania
Workshop PC members:
Eugene Agichtein, Emory University
Alan Aronson, NLM
James Cimino, NIH
Kevin Cohen, University of Colorado
Nigel Collier, National Institute of Informatics, Japan
Chris Chute, Mayo Clinic
Dina Demner Fushman, NLM
Bob Futrelle, Northeastern University
Henk Harkema, University of Pittsburgh
Lynette Hirschman, MITRE
Susan McRoy, University of Wisconsin
Serguei Pakhomov, University of Minnesota
Tim Patrick, University of Wisconsin
Thomas Rindflesch, NLM
Pete White, Children's Hospital of Philadelphia
John Wilbur, NLM
Pierre Zweigenbaum, LIMSI
8-20 . (2009-11-02) Eleventh International Conference on Multimodal Interfaces and Workshop on Machine Learning for Multi-modal Interaction
The Eleventh International Conference on Multimodal Interfaces and Workshop
on Machine Learning for Multi-modal Interaction will jointly take place in the Boston area during November 2-6, 2009.
The main aim of ICMI-MLMI 2009 is to further scientific research within the broad field of multimodal interaction, methods and systems. The joint conference will focus on major trends and challenges in this area, and work to identify a roadmap for future research and commercial success. ICMI-MLMI 2009 will feature a single-track main conference with keynote speakers, panel discussions, technical paper presentations, poster sessions, and demonstrations of state of the art multimodal systems and concepts. It will be followed by workshops.
The conference will take place at the MIT Media Lab, widely known for its innovative spirit. Held in Cambridge, Massachusetts, USA, ICMI-MLMI 09 provides an excellent setting for brainstorming and sharing the latest advances in multimodal interaction, systems and methods, in a city known as one of the top historical, technological and scientific centers of the US.
Program committees:
James Crowley, INRIA
Yuri Ivanov, MERL
Christopher Wren, Google
Daniel Gatica-Perez, Idiap Research Institute
Michael Johnston, AT&T Research
Rainer Stiefelhagen, University of Karlsruhe
Janet McAndless, MERL
Hervé Bourlard, Idiap Research Institute
Rana el Kaliouby, MIT Media Lab
Matthew Berlin, MIT Media Lab
Clifton Forlines, MERL
Deb Roy, MIT Media Lab
Thanks to Cole Krumbholz, MITRE
Sonya Allin, University of Toronto
Yang Liu, University of Texas at Dallas
Louis-Philippe Morency, University of Southern California
Xilin Chen, JDL
Steve Renals, University of Edinburgh
Denis Lalanne, University of Fribourg
Enrique Vidal, Polytechnic University of Valencia
Kenji Mase, University of Nagoya
ICMI Advisory Board
Matthew Turk, Chair, UC Santa Barbara (USA)
Jim Crowley, INRIA-Rhone Alpes (France)
Trevor Darrell, MIT (USA)
Kenji Mase, University of Nagoya (Japan)
Eric Horvitz, Microsoft Research (USA)
Sharon Oviatt, Adapx (USA)
Fabio Pianesi, ITC-irst (Italy)
Wolfgang Wahlster, DFKI (Germany)
Jie Yang, Carnegie Mellon University (USA)
MLMI Advisory Board
Hervé Bourlard, Idiap Research Institute (Switzerland)
Steve Renals, University of Edinburgh (UK)
Sharon Oviatt, Adapx (USA)
Rainer Stiefelhagen, Universitaet Karlsruhe (Germany)
Jean Carletta, University of Edinburgh (UK)
Catherine Pelachaud, CNRS (France)
Sadaoki Furui, Tokyo Institute of Technology (Japan)
Samy Bengio, Google (USA)
Andrei Popescu-Belis, Idiap Research Institute (Switzerland)
See http://icmi2009.acm.org/ for more information.
The following is a list of co-located workshops.
2nd Workshop on Child, Computer and Interaction
Thursday, 5 November 2009 (Full Day)
More Information: http://wocci2009.fbk.eu/
Workshop on Use of Context in Vision Processing (UCVP)
Thursday, 5 November 2009 (Full Day)
More Information: http://hmi.ewi.utwente.nl/ucvp09
Affect-Aware Virtual Agents and Social Robots (AFFINE)
Friday, 6 November 2009 (Full Day)
More Information: http://homepages.feis.herts.ac.uk/~comqjm/affine/index.html
Multimodal Computing with Mobile Phones: Sensing, Modeling and Sharing
Friday, 6 November 2009 (Morning)
Workshop on Multimodal Sensor-Based Systems for Social Computing
Friday, 6 November 2009 (Afternoon)
More Information: http://web.media.mit.edu
8-21 . (2009-11-05)LRL WORKSHOP: Getting Less-Resourced Languages on-Board! Poznan Poland
LRL WORKSHOP: Getting Less-Resourced Languages on-Board!
Name: Getting Less-Resourced Languages on-Board!
Date: 5.11.2009, half-day (13h30 – 18h00) + cocktail
Theme:
Language Technologies (LT) provide essential support for the challenge of multilingualism. In order to develop them, it is necessary to have access to Language Resources (LR) and to assess LT performance. In this regard, the situation differs greatly across languages. Little or sparse data exist for languages in countries or regions where limited effort has been devoted to such issues in the past, also known as Less-Resourced Languages (LRL). The workshop aims at reporting the needs, presenting achievements and proposing solutions for the future, both in terms of LR and of LT evaluation, especially in the European, Euro-Mediterranean and regional frameworks. This will make it possible to identify the factors that have an impact on a potential shared roadmap towards supplying LR and LT for all languages.
Topics:
- Experience in the production, validation and distribution of LR for less-resourced languages
- Experience in the evaluation of LT for less-resourced languages
- Infrastructures for making available LR and LT in less-resourced languages
- Alternative approaches (comparable corpora, pivot languages, language clustering…)
- To be completed…
Co-Chairs: Joseph Mariani (LIMSI-CNRS & IMMI-CNRS), Khalid Choukri (ELRA & ELDA), Zygmunt Vetulani (Adam Mickiewicz University)
Paper submission deadline: August 15.
Sponsors: FLaReNet, ELRA
Registration: as for the general LTC (+ cc to the workshop chairs)
Fees: LTC registration fee + an extra 40 Euros, or 80 Euros for workshop-only attendees.
Paper submission: as for the general LTC (EasyChair) + to the workshop chairs
Presentation: publication in the LTC proceedings (paper + CD)
Reviewing: up to the workshop chairs + scientific committee
Program: The workshop will comprise presentations (including keynote talks) and a panel session, including an EC representative (tentative). In addition, selected speakers will be invited to
present their papers to a larger audience at the main LTC conference.
E-mail: ltc@amu.edu.pl
WWW: http://www.ltc.amu.edu.pl/
8-22 . (2009-11-06)4th LANGUAGE AND TECHNOLOGY CONFERENCE: Human Language Technologies as a challenge Poznan Poland
LTC2009 FlaReNet-LRL2009 Workshop - 1 week reminder for LTC
Call for papers and participation
The 4th LANGUAGE AND TECHNOLOGY CONFERENCE: Human Language Technologies as a Challenge
for Computer Science and Linguistics (LTC 2009), a meeting organized by the Faculty of Mathematics and Computer Science of Adam Mickiewicz University, will be held in Poznan, Poland, on November 6-8, 2009.
Human Language Technologies (HLT) continue to be a challenge for computer science, linguistics and related fields as these areas become an ever more essential element of our everyday technological environment. Since the very beginning of the Computer and Information Age these fields have influenced and stimulated each other. The European Union strongly supports HLT under the 7th Framework Program. These efforts as well as technological, social and cultural globalization have created a favorable climate for the intensive exchange of novel ideas, concepts and solutions across initially distant disciplines. We aim at further contributing to this exchange and invite you to join us at LTC in November 2009, as well as at the FlaReNet workshop (LRL 2009) on the theme "Getting Less-Resourced Languages on-Board!".
Zygmunt Vetulani
LTC 2009 Chair
CONFERENCE TOPICS
The conference topics include the following (the ordering is not significant):
- electronic language resources and tools,
- formalisation of natural languages,
- parsing and other forms of NL processing,
- computer modelling of language competence,
- NL user modelling,
- NL understanding by computers,
- knowledge representation,
- man-machine NL interfaces,
- Logic Programming in Natural Language Processing,
- speech processing,
- NL applications in robotics,
- text-based information retrieval and extraction,
- question answering,
- tools and methodologies for developing multilingual systems,
- translation enhancement tools,
- corpora-based methods in language engineering,
- WordNet-like ontologies,
- methodological issues in HLT,
- language-specific computational challenges for HLTs (especially for languages other than English),
- HLT standards,
- HLTs as a support for foreign language teaching,
- communicative intelligence,
- legal issues connected with HLTs (problems and challenges),
- contribution of HLTs to the Homeland Security problems (technology applications and legal aspects),
- visionary papers in the field of HLT,
- HLT's for the Less-Resourced Languages
- HLT related policies,
- system prototype presentations.
This list is by no means closed and we are open to further proposals. Please do not hesitate to contact us in order to send us your suggestions and ideas on how to satisfy your expectations concerning the program. The Program Committee is also open to suggestions concerning accompanying events (workshops, exhibits, panels, etc.). Suggestions, ideas and observations may be addressed directly to the LTC Chair by email (ltc@amu.edu.pl).
PAPER SUBMISSION
The conference accepts papers in English. Papers (5 formatted pages) are due by July 31, 2009 (midnight, any time zone) and should not identify the author(s) in any manner. In order to facilitate submission we have decided to reduce the formatting requirements as much as possible at this stage. Please, however, do observe the following:
1. Accepted fonts for texts are Times Roman, Times New Roman. Courier is recommended for program listings. Character size for the main text should be 10 points, with 11 points leading (line spacing).
2. Text should be presented in 2 columns,
3. The paper size is 5 pages formatted according to (1) and (2) above.
4. The use of PDF format is strongly recommended, although MS Word will also be accepted.
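For authors who happen to typeset in LaTeX (an assumption on our part: the call itself only specifies the resulting layout and also accepts MS Word), a minimal preamble along the following lines should reproduce the required format; the 0.917 line-spread factor is an approximation chosen to give roughly 11pt leading on the default 12pt baseline.
\documentclass[10pt,twocolumn]{article}
\usepackage[T1]{fontenc}
\usepackage{mathptmx}   % Times Roman for text and math
\linespread{0.917}      % 0.917 x 12pt default baseline gives roughly 11pt leading
\begin{document}
\section*{Paper Title}
Body text in two columns; at most five formatted pages, with nothing that
identifies the author(s).
\end{document}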
Detailed guidelines for the final submission of accepted papers will be
published on the conference Web site by September 10, 2009 (acceptance
notification date).
All submissions are to be made electronically via the LTC 2009 web submission system. Acceptance/rejection notification will be sent by September 10, 2009.
IMPORTANT DATES/DEADLINES
- Deadline for submission of papers for review: July 31, 2009.
- Acceptance/Rejection notification: September 10, 2009.
- Deadline for submission of final versions of accepted papers: October 1, 2009.
- Conference: November 6-8, 2009.
REGISTRATION
Only electronic registration will be possible. Details will be provided later on www.ltc.amu.edu.pl.
CONFERENCE FEES
Non-student participants:
- Regular registration (payment by October 4, 2009) 160 EURO
- Late registration (payment after October 4, 2009) 190 EURO
Student participants:
- Regular registration (payment before October 4, 2009) 100 EURO
- Late registration (payment after October 4, 2009) 120 EURO
An extra 40 Euros will be charged for participation in the LRL Workshop (5.11.2009, cf. section 8-21 above).
Student registrations must be accompanied by a proof of full-time student status valid on the payment date. Registrants are requested to scan and e-mail their proof of student status to ltc@amu.edu.pl. The e-mail subject field must have the following format:
LTC-09-StudentStatus-< Name_of_participant >
(e.g. LTC-09-StudentStatus-VETULANI)
The conference fee covers:
- Participation in the scientific programme.
- Conference materials.
- Proceedings on CD and paper.
- Social events (banquet,...).
- Coffee breaks.
PAYMENT
The payment methods will be detailed shortly.
8-23 . (2009-11-15) CIARP 2009
8-24 . (2009-11-15) Entertainment=Emotion (International workshop) Spain
Entertainment=Emotion (International workshop) – From November 15 to 21, 2009 - Centro de Ciencias de Benasque Pedro Pascual (Spanish Pyrenees)
What is the relationship between entertainment and emotions in the consumption of new forms of media?
How does said relationship affect the attitudes, behaviors and thoughts of audiences?
What new emotions are generated by the new forms of interactive entertainment?
How does interactivity affect the emotional experience of entertainment?
What is the importance of morality or aesthetic appreciation in the experience of emotions during the consumption of media entertainment?
What emotions do we consume through new interactive products?
How do the intensity and valence of emotions change our aesthetic perception of entertainment products?
How are we influenced both by the emotions experienced during the processes of media entertainment, and by the perception of entertainment we obtain from experiencing these emotions?
Where are we taken by the emotions that entertain us?
Are there other ways of entertaining ourselves that make us freer?
In what products, and with what characteristics, are emotions stimulated or presented nowadays?
What are the cultural, economic, ideological, sociological, or artistic consequences of experiencing media emotions nowadays?
Does entertaining ourselves essentially mean generating emotions?
These are the kind of questions that will be answered at Entertainment=Emotion (E=E), the first edition of a very special workshop that will be held at the Centro de Ciencias de Benasque Pedro Pascual (CCBPP) from November 15 to 21, co-managed by María Teresa Soto Sanfiel (Department of Audiovisual Communication and Advertising, at the Universitat Autònoma de Barcelona) and Peter Vorderer (Center for Advanced Media Research -CAMeRA-, Free University Amsterdam).
E=E is an international workshop to which researchers, professionals and students of media entertainment are invited. The event, which follows a very special format, far removed from the traditional meetings held in the area, seeks to create the right atmosphere for prominent international researchers, media professionals, creators of content and students to think together about the phenomenon of emotions in the audiovisual consumption of entertainment. Of special interest to E=E are the new forms of interactive entertainment that provoke new emotional experiences among audiences.
The organizers hope that such a meeting, in the magnificent setting of the CCBPP, surrounded by beautiful mountains and delightful scenery, will encourage relaxed exchange between researchers and professionals from different traditions and with differing levels of experience, and that this exchange will produce new visions, new problems to be investigated, and the inspiration to create content. They also hope to create a permanent meeting point for media entertainment researchers that sets a first-class international standard.
The organizers also hope to help generate aesthetic, philosophical, sociological and political discourse on the benefit to cultural progress represented by the different forms of emotional experience obtained during media entertainment processes. In this sense, the ultimate aim of E=E is to generate knowledge that encourages positive, free and responsible attitudes towards the use, and experience, of emotions in relation to the consumption of audiovisual entertainment.
Similarly, the organizers of E=E seek to promote the creation, at the CCBPP, of a high level academic space where professionals and researchers can exchange and experience in situ the emotions produced through exposure to products that can generate emotion in order to entertain.
Finally, the organizers aim to investigate new forms of interactive entertainment and the emotional experiences they generate, and we therefore invite any developments to be exhibited or presented.
The application period for the presentation of communications, reports, presentations, exhibitions, audiovisual performances and poster session ends on September 7. The candidatures will be evaluated by a scientific committee, after which a list of those accepted will be published. For more information about the requirements for participation, please visit the website (http://www.benasque.org/2009emotion/) or write to Maria Teresa Soto (mariateresa.soto@uab.es <mailto:mariateresa.soto@uab.es>).
The best selected full papers from Entertainment=Emotion (International workshop) will be considered to be published in a *special issue of the International Journal of Arts and Technology (IJART)*, the leading journal in the area (http://www.inderscience.com/browse/index.php?journalCODE=ijart).
A limited number of people can attend the workshop, so we advise you to send in your application as early as possible.
The CCBPP is managed by Physics Professors José Ignacio Latorre (UB) and Manuel Asorey (UZAR) and is supported by the Spanish Ministry for Education and Science, the Benasque Town Council, the Government of Aragon, the University of Zaragoza and the BBV. The CCBPP is a centre of renowned international prestige and is used to hold high level scientific meetings and, as well as hosting this meeting, seeks to stimulate the production of significant advances both in the study of science as in the professional creation of content, associated to the experience of emotions in media entertainment.
8-25 . (2009-11-16) 8ème Rencontres Jeunes Chercheurs en Parole (french)
8-26 . (2009-11-20) Seminar FROM PERCEPTION TO COMPREHENSIONOF A FOREIGN LANGUAGE(Strasbourg-France)
8-27 . (2009-12-04) Troisièmes Journées de Phonétique Clinique Aix en Provence France (french)
JPC3
Troisièmes Journées de Phonétique Clinique
Call for Papers
4-5 December 2009, Aix-en-Provence, France
http://www.lpl-aix.fr/~jpc3/
This meeting follows on from the first and second clinical phonetics workshops, held in Paris in 2005 and in Grenoble in 2007. Clinical phonetics brings together researchers, academics, engineers, physicians and speech therapists, complementary professions pursuing the same goal: a better understanding of the processes of acquisition and dysfunction of speech and voice. This interdisciplinary approach aims to consolidate fundamental knowledge of spoken communication in healthy subjects and to better understand, assess, diagnose and treat speech and voice disorders in patients.
Papers will address phonetic studies of pathological speech and voice in adults and children. Conference topics include, but are not limited to:
Disorders of the oro-pharyngo-laryngeal system
Disorders of the perceptual system
Cognitive and motor disorders
Instrumentation and resources in clinical phonetics
Modelling of pathological speech and voice
Assessment and treatment of speech and voice pathologies
Selected contributions will be presented in one of two formats:
Long presentations: 20 minutes, for reporting completed work
Short presentations: 8 minutes, for reporting clinical observations, preliminary work or emerging research questions, so as to best foster interdisciplinary exchange between phoneticians and clinicians.
Submission format:
Submissions to JPC take the form of abstracts written in French, at most one A4 page, Times New Roman 12pt, single-spaced. Abstracts must be submitted in PDF format to the following address: soumission.jpc3@lpl-aix.fr
Submission deadline: 15 May 2009
Notification to authors: 1 July 2009
For any further information, please contact the organizers: org.jpc3@lpl-aix.fr
Registration for JPC3 (opening 1 July 2009) will be open to all, whether presenting a paper or not.
8-28 . (2009-12-09)1st EUROPE-ASIA SPOKEN DIALOGUE SYSTEMS TECHNOLOGY WORKSHOP
8-29 . (2010-03-15)CfP IEEE ICASSP 2010 International Conference on Acoustics, Speech, and Signal Processing March 15 – 19, 2010 Sheraton Dallas Hotel * Dallas, Texas, U.S.A.
IEEE ICASSP 2010 - International Conference on Acoustics, Speech, and Signal Processing
March 15 - 19, 2010, Sheraton Dallas Hotel, Dallas, Texas, U.S.A.
http://www.icassp2010.com/
The 35th International Conference on Acoustics, Speech, and Signal Processing (ICASSP) will be held at the Sheraton Dallas Hotel, March 15 - 19, 2010. The ICASSP meeting is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 120 lecture and poster sessions on the following topics:
* Audio and electroacoustics
* Bio imaging and signal processing
* Design and implementation of signal processing systems
* Image and multidimensional signal processing
* Industry technology tracks
* Information forensics and security
* Machine learning for signal processing
* Multimedia signal processing
* Sensor array and multichannel systems
* Signal processing education
* Signal processing for communications
* Signal processing theory and methods
* Speech processing
* Spoken language processing
Welcome to Texas, Y'All! Dallas is known for living large and thinking big. As the nation's ninth-largest city, Dallas is exciting, diverse and friendly, factors that contribute to its success as a leading leisure and convention destination. There's a whole new, vibrant Dallas to enjoy: new entertainment districts, dining, shopping, hotels, arts and cultural institutions, with more on the way. There's never been a more exciting time to visit Dallas than now.
Submission of Papers: Prospective authors are invited to submit full-length, four-page papers, including figures and references, to the ICASSP Technical Committee. All ICASSP papers will be handled and reviewed electronically. The ICASSP 2010 website www.icassp2010.com will provide you with further details. Please note that all submission deadlines are strict.
Tutorial and Special Session Proposals: Tutorials will be held on March 14 and 15, 2010. Brief proposals should be submitted by July 31, 2009, through the ICASSP 2010 website and must include title, outline, contact information for the presenter, and a description of the tutorial and material to be distributed to participants. Special session proposals should be submitted by July 31, 2009, through the ICASSP 2010 website and must include a topical title, rationale, session outline, contact information, and a list of invited papers. Tutorial and special session authors are referred to the ICASSP website for additional information regarding submissions.
Important Deadlines:
Submission of Camera-Ready Papers: September 14, 2009
Notification of Paper Acceptance: December 11, 2009
Revised Paper Upload Deadline: January 8, 2010
Author's Registration Deadline: January 15, 2010
For more detailed information, please visit the ICASSP 2010 official website, http://www.icassp2010.com/.
8-30 . (2010-04-13) CfP Workshop: Positional phenomena in phonology and phonetics Wroclaw-
http://www.ifa.uni.wroc.pl/~glow33/phon.html
Workshop: Positional phenomena in phonology and phonetics
(Organised by Zentrum für Allgemeine Sprachwissenschaft, Berlin)
*Date:* 13 April 2010
*Organisers:* Marzena Zygis, Stefanie Jannedy, Susanne Fuchs
*Deadline for abstract submission:* 1st November 2009
*Abstracts submitted to:* zygis@zas.gwz-berlin.de
*Invited speakers:*
* Taehong Cho (Hanyang University, Seoul) confirmed
* Grzegorz Dogil (University of Stuttgart) confirmed
*Venue:* Instytut Filologii Angielskiej, ul. Kuźnicza 22, 50-138 Wrocław
Positional effects found cross-linguistically at the edges of prosodic
constituents (e.g. final lengthening, final lowering, strengthening
effects, or final devoicing) have increasingly received attention in
phonetic-phonological research. Recent empirical investigations of such
positional effects and their variability pose, however, a great number
of questions challenging e.g. the idea of perceptual invariance. It has
been claimed that acoustic variability is a necessary prerequisite for
the perceptual system to parse segmental strings into words, phrases or
larger prosodic units.
This workshop will provide a forum for discussing controversies and
recent developments regarding positional phenomena. We invite abstracts
bearing on positional effects from various perspectives.The following
questions can be addressed, but are not limited to:
1. What kind of variability is found in the data, and how does such
variability need to be accounted for? What positional effects are
common cross-linguistically and how can they be attributed to
perceptual, articulatory or aerodynamic principles?
2. How does positional prominence (lexical stress; accent) interact
with acoustic and articulatory realizations of prosodic
boundaries? What are the positional (a)symmetries in the
realizations of boundaries, and what are the mechanisms underlying
them?
3. How does left- and right-edge phrasal marking interact with the
acoustic and articulatory realizations at these prosodic
boundaries? How are these interpreted in phonetics and in phonology?
4. What are the necessary prerequisites for the interpretation of
prosodic constituents? Which auditory cues are essential for the
perception of boundaries and positional effects? Are such cues
language-specific?
5. To what extent do lexical frequency, phonotactic probability, and
neighbourhood density contribute to the production and recognition
of prosodic boundaries in (fluent/spontaneous) speech?
6. How are positional characteristics exploited during the process of
language acquisition? How are they learned during the process of
language acquisition? Are positional effects salient enough for L2
learners?
Abstracts are invited for a 20-min. presentation (excluding discussion).
Abstracts should be sent in two copies: one with a name and one without
as attached files (the name(s) should also be clearly mentioned in the
e-mail) to: zygis@zas.gwz-berlin.de in .pdf format. Only electronic
submissions will be considered. Abstracts may not exceed two pages of
text with at least a one-inch margin on all four sides (measured on A4
paper) and must employ a font not smaller than 12 point. Each page may
include a maximum of 50 lines of text. An additional page with
references may be included.
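For those preparing their abstract in LaTeX (again an assumption; the call only asks for PDF files with the stated layout), a minimal set-up such as the following meets the margin and font-size constraints:
\documentclass[12pt,a4paper]{article}
\usepackage[margin=1in]{geometry}   % at least a one-inch margin on all four sides
\pagestyle{empty}                   % no page numbers in the two-page abstract
\begin{document}
\begin{center}\textbf{Abstract title}\end{center}
Abstract body: at most two pages of text, no more than 50 lines per page;
prepare one copy with the author name(s) and one anonymized copy.
\end{document}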
Deadline for submissions: November 1, 2009.
Contact person: Marzena Zygis
*************************
Susanne Fuchs, PhD
ZAS/Phonetik
Schützenstrasse 18
10117 Berlin
phone: 030 20192 569
fax: 030 20192 402
webpage: http://susannefuchs.org
*************************
8-31 . (2010-05-10) Cfp Workshop on Prosodic Prominence: Perceptual and Automatic Identification
Speech Prosody 2010 Satellite Workshop, May 10th, 2010, Chicago, Illinois
Description of the workshop: Efficient tools for (semi-)automatic prosodic annotation are becoming more and more important for the speech community, as most systems of prosodic annotation rely on the identification of syllabic prominence in spoken corpora (whether they lead to a phonological interpretation or not). The use of automatic and semi-automatic annotation has also facilitated multilingual research; many experiments on prosodic prominence identification have been conducted for European and non-European languages, and protocols have been written in order to build large databases of prosodically annotated spoken languages all around the world. The aim of this workshop is to bring together specialists of automatic prosodic annotation interested in the development of robust algorithms for prominence detection, and linguists who have developed methodologies for the identification of prosodic prominence in natural languages on perceptual bases. The conference will include oral and poster sessions, and a final round table.
Scientific topics:
1. Annotation of prominence
2. Perceptual processing of prominences: gestalt theories' background
3. Acoustic correlates of prominence
4. Prominence and its relations with prosodic structure
5. Prominence and its relations with accent, stress, tone and boundary
6. The use of syntactic/pragmatic information in prominence identification
7. Perception of prominence by naive/expert listeners
8. Statistical methods for prominence detection
9. Number of relevant prominence degrees: categorical or continuous scale
10. Prosodic prominence and visual perception
Submission of papers: Anonymous four-page papers (including figures and references) must be written in English and be uploaded as pdf files here: https://www.easychair.org/login.cgi?conf=prom2010. All papers will be reviewed by at least three members of the scientific committee. Accepted four-page papers will be included in the online proceedings of the workshop published on the workshop website. The publication of extended selected papers after the workshop in a special issue of a journal is being considered.
Organizing Committee: Mathieu Avanzi (Université de Neuchâtel, CH), Anne Lacheret-Dujour (Université de Paris Ouest Nanterre), Anne-Catherine Simon (Université catholique de Louvain-la-Neuve)
Scientific committee: the names of the scientific committee will be announced in the second circular.
Venue: The workshop will take place in the Doubletree Hotel Magnificent Mile, in Chicago. See the Speech Prosody 2010 website (http://www.speechprosody2010.illinois.edu/index.html) for further information.
Important deadlines:
Submission of four-page papers: November 15, 2009
Notification of acceptance: January 15, 2010
Author's Registration Deadline: March 2, 2010
Workshop: May 10, 2010
Website of the workshop: http://www2.unine.ch/speechprosody-prominence
8-32 . (2010-05-11) CfP Speech prosody 2010 Chicago IL USA
SPEECH PROSODY 2010
===============================================================
Every Language, Every Style: Globalizing the Science of Prosody
===============================================================
Call For Papers
===============================================================
Prosody is, as far as we know, a universal characteristic of human speech, founded on the cognitive processes of speech production and perception. Adequate modeling of prosody has been shown to improve human-computer interface, to aid clinical diagnosis, and to improve the quality of second language instruction, among many other applications.
Speech Prosody 2010, the fifth international conference on speech prosody, invites papers addressing any aspect of the science and technology of prosody. Speech Prosody is the only recurring international conference focused on prosody as an organizing principle for the social, psychological, linguistic, and technological aspects of spoken language. Speech Prosody 2010 seeks, in particular, to discuss the universality of prosody. To what extent can the observed scientific and technological benefits of prosodic modeling be ported to new languages, and to new styles of spoken language? Toward this end, Speech Prosody 2010 especially welcomes papers that create or adapt models of prosody to languages, dialects, sociolects, and/or communicative situations that are inadequately addressed by the current state of the art.
=======
TOPICS
=======
Speech Prosody 2010 will include keynote presentations, oral sessions, and poster sessions covering topics including:
* Prosody of under-resourced languages and dialects
* Communicative situation and speaking style
* Dynamics of prosody: structures that adapt to new situations
* Phonology and phonetics of prosody
* Rhythm and duration
* Syntax, semantics, and pragmatics
* Meta-linguistic and para-linguistic communication
* Signal processing
* Automatic speech synthesis, recognition and understanding
* Prosody of sign language
* Prosody in face-to-face interaction: audiovisual modeling and analysis
* Prosodic aspects of speech and language pathology
* Prosody in language contact and second language acquisition
* Prosody and psycholinguistics
* Prosody in computational linguistics
* Voice quality, phonation, and vocal dynamics
====================
SUBMISSION OF PAPERS
====================
Prospective authors are invited to submit full-length, four-page papers, including figures and references, at http://speechprosody2010.org. All Speech Prosody papers will be handled and reviewed electronically.
===================
VENUE
===================
The Doubletree Hotel Magnificent Mile is located two blocks from North Michigan Avenue, and three blocks from Navy Pier, at the cultural center of Chicago. The Windy City has been the center of American innovation since the mid-nineteenth century, when a railway link connected Chicago to the west coast, civil engineers reversed the direction of the Chicago river, Chicago financiers invented commodity corn (maize), and the Great Chicago Fire destroyed almost every building in the city. The Magnificent Mile hosts scores of galleries and museums, and hundreds of world-class restaurants and boutiques.
===================
IMPORTANT DATES
===================
Submission of Papers (http://speechprosody2010.org): October 15, 2009
Notification of Acceptance: December 15, 2009
Conference: May 11-14, 2010
8-33 . (2010-05-24) CfP 4th INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS (LATA 2010)
1st Call for Papers
4th INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS (LATA 2010)
Trier, Germany, May 24-28, 2010
http://grammars.grlmc.com/LATA2010/
*********************************************************************
AIMS:
LATA is a yearly conference in theoretical computer science and its applications. As a conference linked to the International PhD School in Formal Languages and Applications developed at Rovira i Virgili University (the host of the previous three editions and co-organizer of this one) in the period 2002-2006, LATA 2010 will reserve significant room for young scholars at the beginning of their careers. It will aim at attracting contributions from both classical theory fields and application areas (bioinformatics, systems biology, language technology, artificial intelligence, etc.).
SCOPE:
Topics of either theoretical or applied interest include, but are not limited to:
- algebraic language theory
- algorithms on automata and words
- automata and logic
- automata for system analysis and programme verification
- automata, concurrency and Petri nets
- cellular automata
- combinatorics on words
- computability
- computational complexity
- computer linguistics
- data and image compression
- decidability questions on words and languages
- descriptional complexity
- DNA and other models of bio-inspired computing
- document engineering
- foundations of finite state technology
- fuzzy and rough languages
- grammars (Chomsky hierarchy, contextual, multidimensional, unification, categorial, etc.)
- grammars and automata architectures
- grammatical inference and algorithmic learning
- graphs and graph transformation
- language varieties and semigroups
- language-based cryptography
- language-theoretic foundations of artificial intelligence and artificial life
- neural networks
- parallel and regulated rewriting
- parsing
- pattern matching and pattern recognition
- patterns and codes
- power series
- quantum, chemical and optical computing
- semantics
- string and combinatorial issues in computational biology and bioinformatics
- symbolic dynamics
- term rewriting
- text algorithms
- text retrieval
- transducers
- trees, tree languages and tree machines
- weighted machines
STRUCTURE:
LATA 2010 will consist of:
- 3 invited talks
- 2 invited tutorials
- refereed contributions
- open sessions for discussion in specific subfields, on open problems, or on professional issues (if requested by the participants)
Invited speakers to be announced.
PROGRAMME COMMITTEE:
Alberto Apostolico (Atlanta), Thomas Bäck (Leiden), Stefania Bandini (Milano), Wolfgang Banzhaf (St. John's), Henning Bordihn (Potsdam), Kwang-Moo Choe (Daejeon), Andrea Corradini (Pisa), Christophe Costa Florencio (Leuven), Maxime Crochemore (Marne-la-Vallée), W. Bruce Croft (Amherst), Erzsébet Csuhaj-Varjú (Budapest), Jürgen Dassow (Magdeburg), Volker Diekert (Stuttgart), Rodney G. Downey (Wellington), Frank Drewes (Umea), Henning Fernau (Trier, co-chair), Rusins Freivalds (Riga), Rudolf Freund (Wien), Paul Gastin (Cachan), Edwin Hancock (York, UK), Markus Holzer (Giessen), Helmut Jürgensen (London, Canada), Juhani Karhumäki (Turku), Efim Kinber (Fairfield), Claude Kirchner (Bordeaux), Carlos Martín-Vide (Brussels, co-chair), Risto Miikkulainen (Austin), Victor Mitrana (Bucharest), Claudio Moraga (Mieres), Sven Naumann (Trier), Chrystopher Nehaniv (Hatfield), Maurice Nivat (Paris), Friedrich Otto (Kassel), Daniel Reidenbach (Loughborough), Klaus Reinhardt (Tübingen), Antonio Restivo (Palermo), Christophe Reutenauer (Montréal), Kai Salomaa (Kingston, Canada), Jeffrey Shallit (Waterloo), Eljas Soisalon-Soininen (Helsinki), Bernhard Steffen (Dortmund), Frank Stephan (Singapore), Wolfgang Thomas (Aachen), Marc Tommasi (Lille), Esko Ukkonen (Helsinki), Todd Wareham (St. John's), Osamu Watanabe (Tokyo), Bruce Watson (Pretoria), Thomas Wilke (Kiel), Slawomir Zadrozny (Warsaw), Binhai Zhu (Bozeman)
ORGANIZING COMMITTEE:
Adrian Horia Dediu (Tarragona), Henning Fernau (Trier, co-chair), Maria Gindorf (Trier), Stefan Gulan (Trier), Anna Kasprzik (Trier), Carlos Martín-Vide (Brussels, co-chair), Norbert Müller (Trier), Bianca Truthe (Magdeburg)
SUBMISSIONS:
Authors are invited to submit papers presenting original and unpublished research. Papers should not exceed 12 single-spaced pages and should be formatted according to the standard format for Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs/lncs+authors?SGWID=0-40209-0-0-0). Submissions have to be uploaded at: http://www.easychair.org/conferences/?conf=lata2010
PUBLICATIONS:
A volume of proceedings published by Springer in the LNCS series will be available by the time of the conference. At least one special issue of a major journal will later be published containing extended versions of papers contributed to the conference. Submissions to the post-conference publications will be by invitation only.
REGISTRATION:
The registration period will be open from September 1, 2009 until May 24, 2010. The registration form can be found at the website of the conference: http://grammars.grlmc.com/LATA2010/
Early registration fees: 500 Euro
Early registration fees (PhD students): 400 Euro
Late registration fees: 530 Euro
Late registration fees (PhD students): 430 Euro
On-site registration fees: 550 Euro
On-site registration fees (PhD students): 450 Euro
At least one author per paper should register. Papers that do not have a registered author by February 15, 2010 will be excluded from the proceedings. Fees comprise access to all sessions, one copy of the proceedings volume, coffee breaks, lunches, excursion, and conference dinner.
PAYMENT:
Early (resp. late) registration fees must be paid by bank transfer before February 15, 2010 (resp. May 14, 2010) to the conference series account at Open Bank (Plaza Manuel Gomez Moreno 2, 28020 Madrid, Spain): IBAN: ES1300730100510403506598 - Swift code: OPENESMMXXX (account holder: Carlos Martin-Vide & URV – LATA 2010). Please write the participant's name in the subject of the bank transfer. Transfers should not involve any expense for the conference. On-site registration fees can be paid only in cash, and a receipt for the payment will be provided on site. Besides paying the registration fees, participants are required to fill in the registration form at the website of the conference.
BEST PAPER AWARDS:
An award will be presented to the authors of the two best papers accepted to the conference. Only papers fully authored by PhD students are eligible. The award is intended to cover their travel expenses.
IMPORTANT DATES:
Paper submission: December 3, 2009
Notification of paper acceptance or rejection: January 21, 2010
Final version of the paper for the LNCS proceedings: February 3, 2010
Early registration: February 15, 2010
Late registration: May 14, 2010
Start of the conference: May 24, 2010
Submission to the post-conference publications: August 27, 2010
FURTHER INFORMATION:
gindorf-ti@informatik.uni-trier.de
CONTACT:
LATA 2010
Universität Trier
Fachbereich IV – Informatik
Campus II, Behringstraße
D-54286 Trier
Phone: +49-(0)651-201-2836
Fax: +49-(0)651-201-3954
8-34 . (2010-05-25) CfP JEP 2010
Université de Mons, Belgium
May 25-28, 2010
http://w3.umh.ac.be/jep2010
=====================================================================
The Journées d'Études de la Parole (JEP) are devoted to the study of spoken communication and its applications. Their purpose is to bring together the French-speaking scientific communities working in this field. The conference is also intended as a friendly venue for exchange between doctoral students and established researchers.
In 2010, JEP is organized by the Laboratoire des Sciences de la Parole of the Académie Wallonie-Bruxelles, on the Université de Mons campus in Belgium, under the auspices of the AFCP
(Association Francophone de la Communication Parlée) and with the support of ISCA (International Speech Communication Association).
CALENDAR
===========
Notification to authors: March 15, 2010
Conference: May 25-28, 2010
FNRS Research Associate (Chargée de Recherches)
Laboratoire de Phonétique
Service de Métrologie et Sciences du Langage
Université de Mons-Hainaut
18, Place du Parc
7000 Mons
Belgium
+3265373140