ISCApad number 96

June 9th, 2006

Dear Members,
It was a pleasure to meet many of you in Toulouse for ICASSP 2006, which showed the long-lasting importance of speech processing. Please let others know about ISCA's role and interest in supporting international collaboration between the academic world, research labs and the speech and language industries. Don't hesitate to send any relevant information for publication in ISCApad.
We are all preparing for our next major event, INTERSPEECH 2006 in Pittsburgh.
I remind you of two important requests:
First, SIG leaders are urged to submit brief activity reports to ISCApad.
Second, if you are aware of new books devoted to speech science and/or technology, please draw my attention to them, so that I can advertise these books in ISCApad.

Christian Wellekens

TABLE OF CONTENTS

  1. ISCA News
  2. SIGs' activities
  3. Courses, internships
  4. Books, databases, software
  5. Job openings
  6. Journals
  7. Future Interspeech Conferences
  8. Future ISCA Tutorial and Research Workshops (ITRW)
  9. Forthcoming Events supported (but not organized) by ISCA
  10. Future Speech Science and Technology events

ISCA NEWS


From the ISCA Student Advisory Committee (SAC)
ISCA Speech Labs Listing?
ISCA-SAC is in the process of updating ISCA databases. An important part of this process is to have an extensive list of speech labs and groups from all over the world. Right now, there are 102 labs from 24 countries. Please check the listing, and enter your group's information at http://www.isca-students.org/new-speech-lab.php if your group is not listed.
Do you want to become a board member in ISCA Student Advisory Committee?
ISCA-SAC is looking for new motivated members (PhD students early in their degrees are preferred). There are positions available on the ISCA-SAC board. If you want to volunteer for ISCA and contribute to ISCA-SAC efforts (to get an idea, please visit our website), get in contact with us by email. There are exciting projects that current board members and volunteering students are working on. Join us!
Murat Akbacak
ISCA-SAC Student Coordinator
PhD Student, University of Colorado at Boulder
Research Intern, University of Texas at Dallas

ISCA GRANTS
are available for students and young scientists attending meetings.
For more information: http://www.isca-speech.org/grants


SIGs' activities


A list of Special Interest Groups (SIGs) can be found on our website.


COURSES, INTERNSHIPS


Call for NATO Advanced Study Institute
International NATO Summer School "E.R.Caianiello" XI Course on
The Fundamentals of Verbal and Non-verbal Communication and the Biometrical Issue
September 2-12, 2006, Vietri sul Mare, Italy
Website for details

Studentships available for 2006/7 at the Department of Computer Science
The University of Sheffield - UK

One-Year MSc in HUMAN LANGUAGE TECHNOLOGY
The Sheffield MSc in Human Language Technology has been carefully tailored to meet the demand for graduates with the highly-specialised multi-disciplinary skills that are required in HLT, both as practitioners in the development of HLT applications and as researchers into the advanced capabilities required for next-generation HLT systems. The course provides a balanced programme of instruction across a range of relevant disciplines including speech technology, natural language processing and dialogue systems.
The programme is taught in a research-led environment. This means that you will study the most advanced theories and techniques in the field, and also have the opportunity to use state-of-the-art software tools. You will also have opportunities to engage in research-level activity through in-depth exploration of chosen topics and through your dissertation.
Graduates from this course are highly valued in industry, commerce and academia. The programme is also an excellent introduction to the substantial research opportunities for doctoral-level study in HLT.
A number of studentships are available, on a competitive basis, to suitably qualified applicants. These awards pay a stipend in addition to the course fees.
See further details of the course
Information on how to apply

CNRS thematic school on DIALOGUE and INTERACTION
July 2-8, 2006, Autrans (Isère, France)
Website for more information
A registration form is available on the website. Registration deadline: June 2.

The 12th ELSNET European Summer School on Language and Speech Communication
INFORMATION FUSION IN NATURAL LANGUAGE SYSTEMS

hosted by the University of Hamburg, Hamburg, Germany
3 - 14 July 2006
Website
The summer school will start with a survey of phenomena and mechanisms for information fusion. It continues with the study of various approaches to sensor-data fusion in technical systems such as robots. Finally, it will investigate the issue of information fusion from the perspective of a range of speech and language processing tasks, namely:
speech recognition and spoken language systems
machine translation
distributed and multilingual information systems
parsing
multimodal speech and language systems
COURSES
Information fusion for command and control, Pontus Svenson (FOI Stockholm, Sweden)
Audio visual speech recognition, Rainer Stiefelhagen (University Karlsruhe, Germany)
XML integration of natural language processing components, Ulrich Schaefer (DFKI, Germany)
Hybrid Parsing, Kilian Foth and Wolfgang Menzel (University of Hamburg, Germany)
Ontologies for information fusion, Luciano Serafini (ITC-IRST Trento, Italy)
Syntax semantics integration in HPSG, Valia Kordoni (DFKI, Germany)
Hybrid approaches in machine translation, Stephan Oepen (University of Oslo, Norway)
Ensemble based architectures, to be announced
Information fusion in multi-document summarization, to be announced
Courses will last one week. Some of them will include practical exercises.
IMPORTANT DATES
Pre-registration deadline 30.05.2006
Notification of acceptance 10.06.2006
Payment Deadline 30.06.2006
Summer school 3.07 - 14.07.2006
To pre-register, candidates are required to send, by e-mail, a statement of interest in participating in the summer school, a curriculum vitae, a title for a contribution to the "student" session, and the courses they are interested in.
ORGANIZING COMMITTEE
Walther v. Hahn
Wolfgang Menzel
Cristina Vertan
University of Hamburg, Dept. of Computer Science
Natural Language Systems Division
Vogt-Koelln Str. 30
D-22527 Hamburg, Germany
Tel: +49 40 428832533
Fax: +49 40 428832515

Call for participation: Ecole Recherche Multimedia d'Information: Techniques & Sciences (ERMITES)
Presqu'île de Giens, Var (France), September 4-6, 2006. website
email
ERMITES is organized with the support of LSIS, the computer science department of UFRST USTV, and the Association Francophone de la Communication Parlée (AFCP).
Objectives:
The distribution of audiovisual information, notably over the web, is increasingly anarchic, which makes searching it very haphazard. The "Ecole Recherche Multimedia d'Information: Techniques & Sciences" (ERMITES) gathers, in a convivial setting, about ten specialists and some twenty PhD students, postdocs, engineers and faculty researchers, who will analyse the latest theoretical and practical advances in robust multimodal information retrieval systems (SRIM) coupling text, images, sound and video. Researchers and inventors should no longer see these different domains as antagonistic, but should synthesize them to strengthen the originality of their work. ERMITES opens up the broad scientific field needed to build such systems and to address the problem of their reliability. The school will therefore cover, in particular, information theory, signal theory, random processes and machine learning; computational (audio and video) scene analysis, artificial intelligence and query languages for XML data; natural language and speech processing; and cognitive science and the neurophysiology of perception. Several sessions will be devoted to demonstrations of prototypes and of high-quality freeware toolboxes (some of them built by the speakers), notably:
- machine learning and modelling: TORCH, HTK
- speech and image processing: Sirocco and Spro, Speeral, OCTAVE and an image library
- language processing, named entities, XML: PYTHON, UNITEX, GALATEX for IR in semi-structured documents.
ERMITES is a privileged meeting place intended to strengthen the links between researchers working on each of the modalities and theories covered. The school is organized in sessions of 2 to 3 hours, in which each specialist gives a tutorial talk that is then taken up in round tables with all participants, where the research projects the participants are involved in can be presented and discussed.
Speakers (see abstracts on the website)
Organizers
Hervé Glotin & Jacques Le Maitre
LSIS / Univ. Sud Toulon Var
BP20132, 83957 La Garde cedex 20
Tel: 04 94 14 20 06; 04 94 14 28 24
Fax: 04 94 14 28 97


BOOKS, DATABASES, SOFTWARE

Multilingual Speech Processing
Editors: Tanja Schultz & Katrin Kirchhoff
Elsevier Academic Press, April 2006
Website

Reconnaissance automatique de la parole: Du signal à l'interprétation
Authors: Jean-Paul Haton
Christophe Cerisara
Dominique Fohr
Yves Laprie
Kamel Smaili
392 Pages
Publisher: Dunod

IEEE Transactions on Audio, Speech, and Language Processing
Special Issues (Call for Papers)
Blind Signal Processing for Speech and Audio Applications
Guest Editors: Shoji Makino, Te-Won Lee, and Guy Brown
Manuscript Submission Deadline: 1 July 2006

CFP Special Issue on Multimodal Audiovisual Content Abstraction
website
International Journal of Image and Video Processing
The accurate management of large volumes of digital multimodal audiovisual content calls for a proper mapping of this content onto representation spaces with a high level of interpretation. This operation, referred to as content abstraction, may be supervised, unsupervised, automatic, or semiautomatic. Content abstraction here is considered in the broad sense and includes (semi-)automatic annotation, content (e.g., keyframe) selection, or summarization. Typical problems are fusion of heterogeneous streams, learning (structured) semantics from low-level features, interrelating document content parts, and extracting salient multimodal content. Tools used in this context arise from signal processing, machine learning, data mining, and knowledge engineering.
Specifically, this special issue will gather high-quality original contributions on all aspects of audiovisual content abstraction. Topics of interest include (but are not limited to):
* “Key” feature extraction/characterization (frames, transitions, shots, story, etc.)
* Summarization of video content
* Similarity measures for video content
* Video content processing for indexing
* Multistream processing/fusion
* Interactive video content characterization
* Mosaicing for content representation
Authors should follow the IJIVP manuscript format described on the website. Prospective authors should submit an electronic copy of their complete manuscripts through the IJIVP manuscript tracking system, according to the following timetable:
Manuscript Due October 1, 2006
Acceptance Notification February 1, 2007
Final Manuscript Due April 1, 2007
Publication Date 2nd Quarter, 2007
Guest Editors:
Stéphane Marchand-Maillet, Viper Group, Computer Vision and Multimedia Laboratory, Department of Computer Science, University of Geneva, CH-1211 Geneva 4, Switzerland
Bernard Mérialdo, Department of Multimedia Communications, Institut Eurécom, 06904 Sophia Antipolis Cedex, France
Marcel Worring, Intelligent Sensory Information Systems, Computer Science Institute, Faculty of Science, University of Amsterdam, 1098 SJ Amsterdam, The Netherlands
Milind R. Naphade, Pervasive Media Management Group, IBM T.J. Watson Research Center, White Plains, NY 10604, USA


JOB OPENINGS

We invite all laboratories and industrial companies which have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (Also have a look at http://www.isca-speech.org/jobs as well as http://www.elsnet.org/Jobs)

Open positions at the Adaptive Multimodal Interface Research Lab at University of Trento (Italy)

Areas
Automatic Speech Recognition (PhD Research Fellowship)
Natural Language Processing (PhD Research Fellowship)
Machine Learning (PhD Research Fellowship/Senior Researcher)
HCI/User Interface (Junior Researcher)
Multimodal/Spoken Dialog (Senior Researcher)
The Adaptive Multimodal Interface research lab pursues excellent research in next-generation interfaces for human-machine and human-human communication. The research positions will be funded by the prestigious Marie Curie Excellence grant awarded by the European Commission for cutting-edge and interdisciplinary research.
The candidates for PhD research fellowships should have a background in speech, natural language processing or machine learning. The successful applicants should have an EE or CS degree with strong academic records. The students will be part of an interdisciplinary research team working on speech recognition, language understanding, spoken dialog, machine learning and adaptive user interfaces.
Deadline for application submission is July 11, 2006.
The candidates for the junior/senior researcher positions should have a PhD degree in computer science, cognitive science or related disciplines. They should have an established international research track record in their field of expertise, as well as leadership skills. Deadline for application submission is November 1, 2006.
The applicants should be fluent in English. Italian language competence is optional, and applicants are encouraged to acquire this skill on the job. The applicants should have good programming skills in most of the following: C++/Java/JavaScript/Perl/Python. The University of Trento is an equal opportunity employer. Interested applicants should send their CV along with their statement of research interest and three reference letters to: Prof. Ing. Giuseppe Riccardi
The University of Trento is consistently ranked as a premier Italian graduate institution.
DIT Department
- DIT has a strong focus on interdisciplinarity, with professors from different faculties of the University (Physical Science, Electrical Engineering, Economics, Social Science, Cognitive Science, Computer Science) and international backgrounds.
- DIT aims to exploit the complementary expertise present in its various research areas in order to develop innovative methods and technologies, applications and advanced services.
- English is the official language.

Nuance #1365: Software Engineer (Burlington, MA)

Overview
Nuance Communications, Inc., a worldwide leader in speech and imaging solutions, has an opening for a senior software engineer to maintain and improve acoustic model training and testing toolkits in the Dragon R&D department.
The candidate will join a group of talented speech scientists and research engineers to advance acoustic modeling techniques for Dragon dictation solutions and other Nuance speech recognition products. We are looking for a self-motivated, goal-driven individual who has strong programming and software architecture skills.
Responsibilities
• Maintain and improve acoustic modeling toolkit
o Improve efficiency, flexibility and, when appropriate, architecture of training algorithms
o Improve resource utilization of the toolkit in a large grid computing environment
o Implement new training algorithms in cooperation with speech scientists
o Handle toolkit bug reports and feature requests
o Clean up legacy code, improve code quality and maintainability
o Perform regression tests and release toolkits
o Improve toolkit documentation
• Improve the software implementation of our research testing framework
• Update acoustic modeling and testing toolkits to work with new versions of the speech recognizer
Qualifications
• Bachelor’s or Master’s degree in computer science or electrical engineering
• Strong programming skills in C/C++ and scripting languages (Perl/Python) in a UNIX environment
• Significant experience in creating and maintaining a software toolkit. This includes version control, bug reporting, testing, and releasing code to a user community.
• Ability to work with a large existing code base
• Good software design and architecture skills
• Attention to detail: ability and interest in getting lots of details right on a work task
• Desire and ability to be a team player
• Experience with building acoustic models for speech recognition
• Experience with CVS
• Experience coming up to speed on a large existing code base in a short period of time
• Knowledge of speech recognition algorithms, including model training algorithms
Preference will be given to candidates who have experience in maintaining a speech recognition toolkit. Previous experience in computer administration and grid software management is a plus.
Please apply on-line

Sr. Research Scientist at Nuance

Nuance, a worldwide leader in imaging, speech and language solutions, has an opening for a research scientist in speech recognition. The candidate will work on improving the recognition performance of the speech recognition engine and its main applications in Nuance's award-winning dictation products. Dragon NaturallySpeaking® is our market-leading desktop dictation product. The recently released version 8 showed substantial accuracy improvements over previous versions. DragonMT is our new medical transcription server, which brings the benefit of ScanSoft's dictation technology to the problem of machine-assisted medical transcription. We are looking for an individual who wants to solve difficult speech recognition problems, and help get those solutions into our products, so that our customers can work more effectively.
Responsibilities
Main responsibilities of the candidate will include:
provide experimental and theoretical analysis of speech recognition problems
formulate new algorithms, create research tools, design and carry out experiments to verify new algorithms
work with other members in the team to improve the performance of our products and add new product features to meet business requirements
work with other team members to deliver acoustic models for products
work with development engineers to insure a high quality implementation of algorithms and models in company products
follow developments in speech recognition to keep our research work state-of-the-art
patent new algorithms and write scientific papers when appropriate
Qualifications
Requirements: Ph.D. or Master's degree in computer science or electrical engineering
good analytical and diagnostic skills
experience with C/C++, scripting using Perl, Python and csh in a UNIX environment
ability to work with a large existing code base
desire and ability to be a team player
strong desire and demonstrated ability to work on and solve engineering problems.
Preference will be given to candidates who have a strong speech recognition background. Previous involvement in the DARPA EARS project is a plus. New graduates with a good GPA from top universities are encouraged to apply.
The position will be located in our new headquarters in Burlington, MA, which is approximately 15 miles west of Boston. Information about ScanSoft and its products can be found online.
Please apply on-line

Research Engineer - Natural Language Understanding- Nuance

Overview
Nuance, a worldwide leader in imaging, speech and language solutions, has an opening for a research engineer in natural language understanding. The Core Technology group in NetASR at Nuance builds the technology behind telephone speech applications. The focus is on call routing and other forms of statistical semantics. We currently automate 7 billion phone calls a year and have been moving fairly aggressively towards more open grammars using a combination of SLMs for recognition followed by statistical call routing. This is used both for call center applications and for directory assistance, where there may be millions of destinations with limited training data. The Nuance NLU Group is doing exciting research and product development in C/C++ and we are looking for top talent to join our team.
Responsibilities
The candidate will work in the Network NL group, which develops technology, tools and runtime software to enable our customers to build speech applications using natural language. Some of the current problems include: generating language models for new applications with little application-specific training data; statistical semantics, e.g. training classifiers for call routing; and robust parsing and other techniques to extract richer semantics than a routing destination.
Responsibilities:
The candidate will work on the full product cycle: speak with professional service engineers or customers to identify NL needs and help with solutions, develop new algorithms, conduct experiments, and write product quality software to deliver these new algorithms in the product release cycle.
Qualifications
Strong software skills. C++ required. Perl/Python desirable. Both are needed for research code and for the product-quality, unit-tested code that we ship. Advanced degree in computer science or related field. Experience in natural language processing, especially call routing, language modeling and related areas. Ability to take initiative, but also follow a plan and work well in a group environment. A strong desire to make things “really work” in practice.
Please apply on-line

Principal Engineer - Clinical Language Understanding

Overview
Nuance, a worldwide leader in imaging, speech and language solutions, has an opening for a research engineer in clinical language understanding.
The Clinical Language Understanding group at Nuance is a multi-disciplinary team developing a cutting-edge medical fact extraction engine in Java. Important facts about medications, problems, and procedures are identified in clinical reports, classified, and normalized to standard medical vocabularies.
Responsibilities
The person will be responsible for contributing to the on-going engineering of the Medical Fact Extraction engine. The person will research methods and technologies for improving engine functionality, as well as improving accuracy, performance, and reliability. They will have good software architecture/design skills, to balance API requirements for both research and deployment configurations. They will design, code and test new functionality, and will analyze existing code to extend, optimize and refactor it. The person will also help maintain and enhance systems used for corpus management, document annotation, machine learning algorithm development, and accuracy and performance measurement. They will also work closely with colleagues in Research and in Development.
Qualifications
* Bachelor's degree in computer science or equivalent -- advanced degree preferred.
* Minimum 5 years of experience in software development, preferably in the areas of information extraction and retrieval, knowledge management, document management, or natural language processing.
* Excellent software design, development and diagnostic skills, preferably in Java.
* Excellent scripting and prototyping skills, preferably in Perl.
* Excellent knowledge and understanding of XML, XSLT, and related technologies.
* Significant experience with relational databases.
* Demonstrated ability and desire to learn new technologies rapidly.
* Ability to work well in a multi-disciplinary team.
* Good written and verbal communication skills.
In addition, the applicant must have several of the following:
* Experience in designing and implementing complex commercial applications.
* Experience with computational linguistic research and development, especially as applied to Information Extraction and Retrieval.
* Experience with ontologies and controlled medical vocabularies (e.g. SNOMED).
* Familiarity with clinical documentation standards and medical terminology.
* Experience with applying machine learning approaches
* Experience in conducting computational and/or technical research is a plus.
* Experience developing user interfaces is a plus.
* Experience with Eclipse and Perforce is a plus.
* Experience with Tomcat, Web services, Servlets is a plus.
* Knowledge of Windows and UNIX is a plus.
Please apply on-line

Computational Linguist, Text-to-Speech Synthesis, Boston area

Location: Boston area (Position AXG-1005)
The Computational Linguist will work with the company's technical team to develop and integrate linguistic resources and applications for the company's TTS engine.
Areas of Competence
* Computational Linguistics
* Semantics
* Linguistics
* Speech corpus
* Text corpus
Primary Duties
* Produce and maintain speech corpora, audio data, transcripts, phonetic dictionaries, data annotation, and component/model configuration management
* Verify existing corpora
* Develop utilities, lexicons, and other language resources for the company's unique TTS
* Adapt text language parsing and analysis software for new TTS needs
Required skills/experience
* Thorough grounding in phonology, phonetics, lexicography, orthography, semantics, morphology, syntax, and other branches of linguistics
* Experience with language parsing and analysis software, such as part-of-speech (POS) and syntactic taggers, semantics, and discourse analysis
* Experience with formant-based or concatenative speech synthesis
* Experience working on medium-scale, multi-developer software projects
* Experience with the development of speech corpora, transcripts, data annotation, and phonetic dictionaries
* Programming experience in C/C++/Matlab/Perl
* Self-motivation and ability to work independently
* Familiarity with concepts and techniques from DSP theory, machine learning and statistical modeling is a plus
Must have a Master's or PhD degree in Engineering, Computer Science or Linguistics, with development or research experience in speech synthesis/recognition/technology.
Direct your confidential response to:
Arnold L. Garlick III
President
Pacific Search Consultants
(949) 366-9000 Ext. 2#
Please refer to Position AXG-1005
Website

Doctoral (PhD) Positions in the field of Content-based Multimedia Information Retrieval and Management
Department of Computer Science - Faculty of Sciences - University of Geneva - Switzerland

Context:
The Viper group, part of the Computer Vision and Multimedia Laboratory, has long research experience in Content-based Multimedia Information Retrieval (image, video, text, ...). Its activities have led, amongst other results, to the development of interactive demo systems for content-based video (ViCode) and image (GIFT) retrieval and multimedia management. We wish to continue these activities.
Description of posts:
Several doctoral positions are open in relation to international and national project funds awarded on the basis of our research activities in the broad field of content-based multimedia information search, retrieval and management. The research performed will form direct contributions to our current and upcoming projects, including ViCode and the Collection Guide (see our main website for details).
The successful applicants should show knowledge and interest in one or more of the following domains:
* Data mining, statistical data analysis
* Statistical learning
* Signal, image, audio processing
* Knowledge engineering
* Indexing, Databases
* Operations research
Starting date: No later than September 2006.
Salary: 48'000 CHF per annum (1st year)
Supervision: Dr. S. Marchand-Maillet and Dr. E. Bruno
Application: Applications by email are welcome to:
Dr. Eric Bruno
Computer Vision and Multimedia Laboratory
Department of Computer Science, University of Geneva
24, rue du General Dufour, CH-1211 Geneva 4
SWITZERLAND
e-mail. This announcement (with more info).


JOURNALS

Papers accepted for FUTURE PUBLICATION in Speech Communication

Full text available on http://www.sciencedirect.com/ for Speech Communication subscribers and subscribing institutions. Click on Publications, then on Speech Communication and on Articles in press. The list of papers in press is displayed and a .pdf file for each paper is available.

Makoto Hirohata, Yosuke Shinnaka, Koji Iwano and Sadaoki Furui, Sentence-extractive automatic speech summarization and evaluation techniques, Speech Communication, In Press, Uncorrected Proof, Available online 5 June 2006. (Website) Keywords: Automatic speech summarization; Sentence extraction; Evaluation metrics; Spontaneous presentations

Frederik Stouten, Jacques Duchateau, Jean-Pierre Martens and Patrick Wambacq, Coping with disfluencies in spontaneous speech recognition: Acoustic detection and linguistic context manipulation, Speech Communication, In Press, Uncorrected Proof, Available online 26 May 2006. (Website) Keywords: Disfluency handling; Spontaneous speech recognition; Disfluency detection

Dimitrios Ververidis and Constantine Kotropoulos, Emotional speech recognition: Resources, features, and methods, Speech Communication, In Press, Uncorrected Proof, Available online 24 May 2006. (Website) Keywords: Emotions; Emotional speech data collections; Emotional speech classification; Stress; Interfaces; Acoustic features

Vivek Tyagi, Hervé Bourlard and Christian Wellekens, On variable-scale piecewise stationary spectral analysis of speech signals for ASR, Speech Communication, In Press, Uncorrected Proof, Available online 24 May 2006. (Website) Keywords: Variable-scale quasi-stationary analysis; Speech spectral analysis

Valentin Ion and Reinhold Haeb-Umbach, Uncertainty decoding for distributed speech recognition over error-prone networks, Speech Communication, In Press, Uncorrected Proof, Available online 17 May 2006. (Website) Keywords: Distributed speech recognition; Channel error robustness; Soft features; Uncertainty decoding

Teruhisa Misu and Tatsuya Kawahara, Dialogue strategy to clarify user's queries for document retrieval system with speech interface, Speech Communication, In Press, Uncorrected Proof, Available online 17 May 2006. (Website) Keywords: Spoken dialogue system; Information retrieval; Document retrieval; Dialogue strategy

Abhinav Sethy, Shrikanth Narayanan and S. Parthasarthy, A split lexicon approach for improved recognition of spoken names, Speech Communication, In Press, Uncorrected Proof, Available online 5 May 2006. (Website) Keywords: Syllable; Spoken name recognition; Reverse lookup; Split lexicon

Akira Sasou, Futoshi Asano, Satoshi Nakamura and Kazuyo Tanaka, HMM-based noise-robust feature compensation, Speech Communication, In Press, Corrected Proof, Available online 4 May 2006. (Website) Keywords: Noise robust; Hidden Markov model; AURORA2

Alejandro Bassi, Nestor Becerra Yoma and Patricio Loncomilla, Estimating tonal prosodic discontinuities in Spanish using HMM, Speech Communication, In Press, Corrected Proof, Available online 2 May 2006. (Website)

Esfandiar Zavarehei, Saeed Vaseghi and Qin Yan, Inter-frame modeling of DFT trajectories of speech and noise for speech enhancement using Kalman filters, Speech Communication, In Press, Corrected Proof, Available online 25 April 2006. (Website) Keywords: Speech enhancement; Kalman filter; AR modeling of DFT; DFT distributions

SungHee Kim, Robert D. Frisina and D. Robert Frisina, Effects of age on speech understanding in normal hearing listeners: Relationship between the auditory efferent system and speech intelligibility in noise, Speech Communication, In Press, Corrected Proof, Available online 7 April 2006. (Website) Keywords: Aging; Presbycusis; Medial efferent system; Release from masking; Cocktail party effect

Fatih Ögüt, Mehmet Akif Kiliç, Erkan Zeki Engin and Rasit Midilli, Voice onset times for Turkish stop consonants, Speech Communication, In Press, Uncorrected Proof, Available online 3 April 2006. (Website) Keywords: Articulation; Consonant; Acoustics; Speech; Stop consonants; Voice onset time

Frédéric Bimbot, Marcos Faundez-Zanuy and Renato de Mori, Editorial, Speech Communication, In Press, Corrected Proof, Available online 10 March 2006. (Website)

Jan Stadermann and Gerhard Rigoll, Hybrid NN/HMM acoustic modeling techniques for distributed speech recognition, Speech Communication, In Press, Corrected Proof, Available online 3 March 2006. (Website) Keywords: Distributed speech recognition; Tied-posteriors; Hybrid speech recognition

Gerasimos Xydas and Georgios Kouroupetroglou, Tone-Group F0 selection for modeling focus prominence in small-footprint speech synthesis, Speech Communication, In Press, Corrected Proof, Available online 2 March 2006. (Website) Keywords: Text-to-speech synthesis; Tone-Group unit-selection; Intonation and emphasis in speech synthesis

Antonio Cardenal-López, Carmen García-Mateo and Laura Docío-Fernández, Weighted Viterbi decoding strategies for distributed speech recognition over IP networks, Speech Communication, In Press, Corrected Proof, Available online 28 February 2006. (Website) Keywords: Distributed speech recognition; Weighted Viterbi decoding; Missing data

Felicia Roberts, Alexander L. Francis and Melanie Morgan, The interaction of inter-turn silence with prosodic cues in listener perceptions of "trouble" in conversation, Speech Communication, In Press, Corrected Proof, Available online 28 February 2006. (Website) Keywords: Silence; Prosody; Pausing; Human conversation; Word duration

Ismail Shahin, Enhancing speaker identification performance under the shouted talking condition using second-order circular hidden Markov models, Speech Communication, In Press, Corrected Proof, Available online 14 February 2006. (Website) Keywords: First-order left-to-right hidden Markov models; Neutral talking condition; Second-order circular hidden Markov models; Shouted talking condition

A. Borowicz, M. Parfieniuk and A.A. Petrovsky, An application of the warped discrete Fourier transform in the perceptual speech enhancement, Speech Communication, In Press, Corrected Proof, Available online 10 February 2006. (Website) Keywords: Speech enhancement; Warped discrete Fourier transform; Perceptual processing

Pushkar Patwardhan and Preeti Rao, Effect of voice quality on frequency-warped modeling of vowel spectra, Speech Communication, In Press, Corrected Proof, Available online 3 February 2006. (Website) Keywords: Voice quality; Spectral envelope modeling; Frequency warping; All-pole modeling; Partial loudness

Veronique Stouten, Hugo Van hamme and Patrick Wambacq, Model-based feature enhancement with uncertainty decoding for noise robust ASR, Speech Communication, In Press, Corrected Proof, Available online 3 February 2006. (Website) Keywords: Noise robust speech recognition; Model-based feature enhancement; Additive noise; Convolutional noise; Uncertainty decoding

Jinfu Ni and Keikichi Hirose, Quantitative and structural modeling of voice fundamental frequency contours of speech in Mandarin, Speech Communication, In Press, Corrected Proof, Available online 26 January 2006. (Website) Keywords: Prosody modeling; F0 contours; Tone; Intonation; Tone modulation; Resonance principle; Analysis-by-synthesis; Tonal languages

Francisco Campillo Díaz and Eduardo Rodríguez Banga, A method for combining intonation modelling and speech unit selection in corpus-based speech synthesis systems, Speech Communication, In Press, Corrected Proof, Available online 24 January 2006. (Website) Keywords: Speech synthesis; Unit selection; Corpus-based; Intonation

Jean-Baptiste Maj, Liesbeth Royackers, Jan Wouters and Marc Moonen, Comparison of adaptive noise reduction algorithms in dual microphone hearing aids, Speech Communication, In Press, Corrected Proof, Available online 24 January 2006. (Website) Keywords: Adaptive beamformer; Adaptive directional microphone; Calibration; Noise reduction algorithms; Hearing aids

Roberto Togneri and Li Deng, A state-space model with neural-network prediction for recovering vocal tract resonances in fluent speech from Mel-cepstral coefficients, Speech Communication, In Press, Corrected Proof, Available online 24 January 2006. (Website) Keywords: Vocal tract resonance; Tracking; Cepstra; Neural network; Multi-layer perceptron; EM algorithm; Hidden dynamics; State-space model

T. Nagarajan and H.A. Murthy, Language identification using acoustic log-likelihoods of syllable-like units, Speech Communication, In Press, Corrected Proof, Available online 19 January 2006. (Website) Keywords: Language identification; Syllable; Incremental training

Yasser Ghanbari and Mohammad Reza Karami-Mollaei, A new approach for speech enhancement based on the adaptive thresholding of the wavelet packets, Speech Communication, In Press, Corrected Proof, Available online 19 January 2006. (Website) Keywords: Speech processing; Speech enhancement; Wavelet thresholding; Noisy speech recognition

Mohammad Ali Salmani-Nodoushan, A comparative sociopragmatic study of ostensible invitations in English and Farsi, Speech Communication, In Press, Corrected Proof, Available online 11 January 2006. (Website) Keywords: Ostensible invitations; Politeness; Speech act theory; Pragmatics; Face threatening acts

Laurent Benaroya, Frédéric Bimbot, Guillaume Gravier and Rémi Gribonval, Experiments in audio source separation with one sensor for robust speech recognition, Speech Communication, In Press, Corrected Proof, Available online 19 December 2005. (Website) Keywords: Noise suppression; Source separation; Speech enhancement; Speech recognition

Naveen Srinivasamurthy, Antonio Ortega and Shrikanth Narayanan, Efficient scalable encoding for distributed speech recognition, Speech Communication, In Press, Corrected Proof, Available online 19 December 2005. (Website) Keywords: Distributed speech recognition; Scalable encoding; Multi-pass recognition; Joint coding-classification

Luis Fernando D'Haro, Ricardo de Córdoba, Javier Ferreiros, Stefan W. Hamerich, Volker Schless, Basilis Kladis, Volker Schubert, Otilia Kocsis, Stefan Igel and José M. Pardo, An advanced platform to speed up the design of multilingual dialog applications for multiple modalities, Speech Communication, In Press, Corrected Proof, Available online 5 December 2005. (Website) Keywords: Automatic dialog systems generation; Dialog management tools; Multiple modalities; Multilinguality; XML; VoiceXML

Dimitrios Dimitriadis and Petros Maragos, Continuous energy demodulation methods and application to speech analysis, Speech Communication, In Press, Corrected Proof, Available online 25 October 2005. (Website) Keywords: Nonstationary speech analysis; Energy operators; AM-FM modulations; Demodulation; Gabor filterbanks; Feature distributions; ASR; Robust features; Nonlinear speech analysis

Marcos Faundez-Zanuy, Speech coding through adaptive combined nonlinear prediction, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005. (Website) Keywords: Speech coding; Nonlinear prediction; Neural networks; Data fusion

Giampiero Salvi, Dynamic behaviour of connectionist speech recognition with strong latency constraints, Speech Communication, In Press, Corrected Proof, Available online 14 June 2005. (Website) Keywords: Speech recognition; Neural network; Low latency; Non-linear dynamics

Erhard Rank and Gernot Kubin, An oscillator-plus-noise model for speech synthesis, Speech Communication, In Press, Corrected Proof, Available online 21 April 2005. (Website) Keywords: Non-linear time-series; Oscillator model; Speech production; Noise modulation

Kevin M. Indrebo, Richard J. Povinelli and Michael T. Johnson, Sub-banded reconstructed phase spaces for speech recognition, Speech Communication, In Press, Corrected Proof, Available online 24 February 2005. (Website) Keywords: Speech recognition; Dynamical systems; Nonlinear signal processing; Sub-bands


FUTURE CONFERENCES

Publication policy: Below you will find very short announcements of future events. The full calls for participation can be accessed on the conference websites.
See also our Web pages (http://www.isca-speech.org/) on conferences and workshops.

FUTURE INTERSPEECH CONFERENCES

INTERSPEECH 2006-ICSLP
INTERSPEECH 2006 - ICSLP, the Ninth International Conference on Spoken Language Processing dedicated to the interdisciplinary study of speech science and language technology, will be held in Pittsburgh, Pennsylvania, September 17-21, 2006, under the sponsorship of the International Speech Communication Association (ISCA).
The INTERSPEECH meetings are considered to be the top international conference in speech and language technology, with more than 1000 attendees from universities, industry, and government agencies. They are unique in that they bring together faculty and students from universities with researchers and developers from government and industry to discuss the latest research advances, technological innovations, and products. The conference offers the prospect of meeting the future leaders of our field, exchanging ideas, and exploring opportunities for collaboration, employment, and sales through keynote talks, tutorials, technical sessions, exhibits, and poster sessions. In recent years the INTERSPEECH meetings have taken place in a number of exciting venues including most recently Lisbon, Jeju Island (Korea), Geneva, Denver, Aalborg (Denmark), and Beijing.
ISCA, together with the INTERSPEECH 2006 - ICSLP organizing committee, would like to encourage submission of papers for the upcoming conference in the following areas.
TOPICS OF INTEREST
Linguistics, Phonetics, and Phonology
Prosody
Discourse and Dialog
Speech Production
Speech Perception
Physiology and Pathology
Paralinguistic and Nonlinguistic Information (e.g. Emotional Speech)
Signal Analysis and Processing
Speech Coding and Transmission
Spoken Language Generation and Synthesis
Speech Recognition and Understanding
Spoken Dialog Systems
Single-channel and Multi-channel Speech Enhancement
Language Modeling
Language and Dialect Identification
Speaker Characterization and Recognition
Acoustic Signal Segmentation and Classification
Spoken Language Acquisition, Development and Learning
Multi-Modal Processing
Multi-Lingual Processing
Spoken Language Information Retrieval
Spoken Language Translation
Resources and Annotation
Assessment and Standards
Education
Spoken Language Processing for the Challenged and Aged
Other Applications
Other Relevant Topics
SPECIAL SESSIONS
In addition to the regular sessions, a series of special sessions has been planned for the meeting. Potential authors are invited to submit papers for special sessions as well as for regular sessions, and all papers in special sessions will undergo the same review process as papers in regular sessions. Confirmed special sessions and their organizers include:
* The Speech Separation Challenge, Martin Cooke (Sheffield) and Te-Won Lee (UCSD)
* Speech Summarization, Jean Carletta (Edinburgh) and Julia Hirschberg (Columbia)
* Articulatory Modeling, Eric Bateson (University of British Columbia)
* Visual Intonation, Marc Swerts (Tilburg)
* Spoken Dialog Technology R&D, Roberto Pieraccini (Tell-Eureka)
* The Prosody of Turn-Taking and Dialog Acts, Nigel Ward (UTEP) and Elizabeth Shriberg (SRI and ICSI)
* Speech and Language in Education, Patti Price (pprice.com) and Abeer Alwan (UCLA)
* From Ideas to Companies, Janet Baker (formerly of Dragon Systems)
IMPORTANT DATES
Notification of paper status: June 9, 2006
Early registration deadline: June 23, 2006
Tutorial Day: September 17, 2006
Main Conference: September 18-21, 2006
Further information via Website or send email
Organizer
Professor Richard M. Stern (General Chair)
Carnegie Mellon University
Electrical Engineering and Computer Science
5000 Forbes Avenue
Pittsburgh, PA 15213-3890
Fax: +1 412 268-3890
Email

INTERSPEECH 2007-EUROSPEECH
August 27-31, 2007, Antwerp, Belgium
Chairs: Dirk van Compernolle, K.U.Leuven, and Lou Boves, K.U.Nijmegen
Website

INTERSPEECH 2008-ICSLP
September 22-26, 2008, Brisbane, Queensland, Australia
Chairman: Denis Burnham, MARCS, University of Western Sydney.

INTERSPEECH 2009-EUROSPEECH
Brighton, UK
Chairman: Prof. Roger Moore, University of Sheffield.


FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOPS (ITRW)

ITRW on Experimental Linguistics

28-30 August 2006, Athens, Greece
CALL FOR PAPERS
AIMS
The general aims of the Workshop are to bring together researchers of linguistics and related disciplines in a unified context as well as to discuss the development of experimental methodologies in linguistic research with reference to linguistic theory, linguistic models and language applications.
SUBJECTS AND RELATED DISCIPLINES
1. Theory of language
2. Cognitive linguistics
3. Neurolinguistics
4. Speech production
5. Speech acoustics
6. Phonology
7. Morphology
8. Syntax
9. Prosody
10. Speech perception
11. Psycholinguistics
12. Pragmatics
13. Semantics
14. Discourse linguistics
15. Computational linguistics
16. Language technology
MAJOR TOPICS
I. Lexicon
II. Sentence
III. Discourse
IMPORTANT DATES
1 February 2006, deadline of abstract submission
1 March 2006, notification of acceptance
1 April 2006, registration
1 May 2006, camera ready paper submission
28-30 August 2006, Workshop
CHAIRS
Antonis Botinis, University of Athens, Greece
Marios Fourakis, University of Wisconsin-Madison, USA
Barbara Gawronska, University of Skövde, Sweden
ORGANIZING COMMITTEE
Aikaterini Bakakou-Orphanou, University of Athens
Antonis Botinis, University of Athens
Christoforos Charalambakis, University of Athens
SECRETARIAT
ISCA Workshop on Experimental Linguistics
Department of Linguistics
University of Athens
GR-15784, Athens GREECE
Tel.: +302107277668
Fax: +302107277029
e-mail
Workshop site address

2nd ITRW on PERCEPTUAL QUALITY OF SYSTEMS

Berlin, Germany, 4 - 6 September 2006
WORKSHOP AIMS
The quality of systems which address human perception is difficult to describe. Since quality is not an inherent property of a system, users have to decide on what is good or bad in a specific situation. An engineering approach to quality includes the consideration of how a system is perceived by its users, and how the needs and expectations of the users develop. Thus, quality assessment and prediction have to take the relevant human perception and judgement factors into account. Although significant progress has been made in several areas affecting quality within the last two decades, there is still no consensus on the definition of quality and its contributing components, as well as on assessment, evaluation and prediction methods.
Perceptual quality is attributed to all systems and services which involve human perception. Telecommunication services directly provoke such perceptions: Speech communication services (telephone, Voice over IP), speech technology (synthesis, spoken dialogue systems), as well as multimodal services and interfaces (teleconference, multimedia on demand, mobile phones, PDAs). However, the situation is similar for the perception of other products, like machines, domestic devices, or cars. An integrated view on system quality makes use of knowledge gained in different disciplines and may therefore help to find general underlying principles. This will assist the increase of usability and perceived quality of systems and services, and finally yield better acceptance.
The workshop is intended to provide an interdisciplinary exchange of ideas between both academic and industrial researchers working on different aspects of perceptual quality of systems. Papers are invited which refer to methodological aspects of quality and usability assessment and evaluation, the underlying perception and judgment processes, as well as to particular technologies, systems or services. Perception-based as well as instrumental approaches will complement each other in giving a broader picture of perceptual quality. It is expected that this will help technology providers to develop successful, high-quality systems and services.
WORKSHOP TOPICS
The following non-exhaustive list gives examples of topics which are relevant for the workshop, and for which papers are invited:
- Methodologies and Methods of Quality Assessment and Evaluation
- Metrology: Test Design and Scaling
- Quality of Speech and Music
- Quality of Multimodal Perception
- Perceptual Quality vs. Usability
- Semio-Acoustics and -Perception
- Quality and Usability of Speech Technology Devices
- Telecommunication Systems and Services
- Multi-Modal User Interfaces
- Virtual Reality
- Product-Sound Quality
IMPORTANT DATES
April 15, 2006 (updated): Abstract submission (approx. 800 words)
May 15, 2006: Notification of acceptance
June 15, 2006: Submission of the camera-ready paper (max. 6 pages)
September 4-6, 2006: Workshop
WORKSHOP VENUE
The workshop will take place in the "Harnack-Haus", a villa-like conference center located in the quiet western part of Berlin, near the Free University. As long as space permits, all participants will be accommodated in this center. Accommodation and meals are included in the workshop fees. The center is run by the Max-Planck-Gesellschaft and can easily be reached from all three airports of Berlin (Tegel/TXL, Schönefeld/SXF and Tempelhof/THF). Details on the venue, accommodation and transportation will be announced soon.
PROCEEDINGS
CD workshop proceedings will be available upon registration at the conference venue and subsequently on the workshop web site.
LANGUAGE
The official language of the workshop will be English.
LOCAL WORKSHOP ORGANIZATION
Ute Jekosch (IAS, Technical University of Dresden)
Sebastian Möller (Deutsche Telekom Labs, Technical University of Berlin)
Alexander Raake (Deutsche Telekom Labs, Technical University of Berlin)
CONTACT INFORMATION
Sebastian Möller, Deutsche Telekom Labs, Ernst-Reuter-Platz 7,
D-10587 Berlin, Germany
phone +49 30 8353 58465, fax +49 30 8353 58409
Website

ITRW on Statistical and Perceptual Audition (2006)

A satellite workshop of INTERSPEECH 2006 - ICSLP
September 16, 2006, Pittsburgh, PA, USA
Website
This will be a one-day workshop with a limited number of oral presentations, chosen for breadth and provocation, and an informal atmosphere to promote discussion. We hope that the participants in the workshop will be exposed to a broader perspective, and that this will help foster new research and interesting variants on current approaches.
Topics
Generalized audio analysis
Speech analysis
Music analysis
Audio classification
Scene analysis
Signal separation
Speech recognition
Multi-channel analysis
In all cases, preference will be given to papers that clearly involve both perceptually-defined or perceptually-related problems, and statistical or machine-learning based solutions.
Important dates
Submission deadline for a 4-6 page paper (double column): April 21, 2006
Notification of acceptance: June 9, 2006

NOLISP'07: Non-linear Speech Processing

May 22-25, 2007, Paris, France

6th ISCA Speech Synthesis Research Workshop (SSW-6)

Bonn (Germany), August 22-24, 2007
A satellite of INTERSPEECH 2007 (Antwerp) in collaboration with SynSIG
Details will be posted by early 2007
Contact
Prof. Wolfgang Hess

ITRW on Robustness

November 2007, Santiago, Chile


FORTHCOMING EVENTS SUPPORTED (but not organized) by ISCA

7th SIGdial workshop on discourse and dialogue

Sydney (co-located with COLING/ACL)
June 15-16, 2006 (tentative dates)
Website
Contact: Dr Jan Alexandersson

11th International Conference on SPEECH AND COMPUTER (SPECOM'2006)

25-29 June 2006
St. Petersburg, Russia
Conference website
Organized by the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)
Supported by SIMILAR NoE, INTAS association, ELSNET and ISCA.
Topics
- Signal processing and feature extraction;
- Multimodal analysis and synthesis;
- Speech recognition and understanding;
- Natural language processing;
- Speaker and language identification;
- Speech synthesis;
- Speech perception and speech disorders;
- Speech and language resources;
- Applied systems for Human-Computer Interaction;
IMPORTANT DATES
- Early registration deadline: 15 April 2006
- Conference SPECOM: 25-29 June 2006
The conference venue and dates were selected so that attendees can experience St. Petersburg's unique and wonderful phenomenon known as the White Nights, for our city is the world's only metropolis where such a phenomenon occurs every summer.
CONTACT INFORMATION
SPECOM'2006, SPIIRAS, 39, 14th line, St-Petersburg, 199178, RUSSIA
Tel.: +7 812 3287081 Fax: +7 812 3284450
E-mail
Web

IEEE Odyssey 2006: The Speaker and Language Recognition Workshop

28 - 30 June 2006
Ritz Carlton Hotel, Spa & Casino
San Juan, Puerto Rico
The IEEE Odyssey 2006 Workshop on Speaker and Language Recognition will be held in scenic San Juan, Puerto Rico at the Ritz Carlton Hotel. This Odyssey is sponsored by the IEEE, is an ISCA Tutorial and Research Workshop of the ISCA Speaker and Language Characterization SIG, and is hosted by The Polytechnic University of Puerto Rico.
Topics
Topics of interest include speaker recognition (verification, identification, segmentation, and clustering); text-dependent and -independent speaker recognition; multispeaker training and detection; speaker characterization and adaptation; features for speaker recognition; robustness in channels; robust classification and fusion; speaker recognition corpora and evaluation; use of extended training data; speaker recognition with speech recognition; forensics, multimodality, and multimedia speaker recognition; speaker and language confidence estimation; language, dialect, and accent recognition; speaker synthesis and transformation; biometrics; human recognition; and commercial applications.
Paper Submission
Prospective authors are invited to submit papers written in English via the Odyssey website. The style guide, templates, and submission form can be downloaded from the Odyssey website. Two members of the Scientific Committee will review each paper. At least one author of each paper is required to register. The workshop proceedings will be published on CD-ROM.
Schedule
Preliminary program 21 April 2006
Workshop 28-30 June 2006
Registration and Information
Registration will be handled via the Odyssey website.
NIST SRE '06 Workshop
The NIST Speaker Recognition Evaluation 2006 Workshop will be held immediately before Odyssey ‘06 at the same location on 25-27 June. Everyone is invited to evaluate their systems via the NIST SRE. The NIST Workshop is only for participants and by prearrangement. Please contact Dr. Alvin Martin to participate and see the NIST website for details.
Chairs
Kay Berkling, Co-Chair Polytechnic University of Puerto Rico
Pedro A. Torres-Carrasquillo, Co-Chair MIT Lincoln Laboratory, USA

IV Jornadas en Tecnologia del Habla

Zaragoza, Spain
November 8-10, 2006
Website

Call for papers-International Workshop on Spoken Language Translation (IWSLT 2006)

Evaluation campaign for language translation
Palulu Plaza Kyoto (right in front of Kyoto Station) (Japan)
November 30-December 1 2006
Website
Spoken language translation technologies attempt to cross the language barriers between people with different native languages who want to engage in conversation using their mother tongue. Spoken language translation has to deal with problems of automatic speech recognition (ASR) and machine translation (MT).
One of the prominent research activities in spoken language translation is the work being conducted by the Consortium for Speech Translation Advanced Research (C-STAR III), which is an international partnership of research laboratories engaged in automatic translation of spoken language. Current members include ATR (Japan), CAS (China), CLIPS (France), CMU (USA), ETRI (Korea), ITC-irst (Italy), and UKA (Germany).
A multilingual speech corpus comprising tourism-related sentences (BTEC*) has been created by the C-STAR members, and parts of this corpus were already used for previous IWSLT workshops focusing on the evaluation of MT results using text input and on the translation of ASR output (word lattice, NBEST list) using read speech as input. The full BTEC* corpus consists of 160K of sentence-aligned text data, and parts of the corpus will be provided to the participants for training purposes.
In this workshop, we focus on the translation of spontaneous speech, which includes ill-formed utterances due to grammatical incorrectness, incomplete sentences, and redundant expressions. The impact of spontaneity on ASR and MT system performance, as well as the robustness of state-of-the-art MT engines towards speech recognition errors, will be investigated in detail.
Two types of submissions are invited:
1) participants in the evaluation campaign of spoken language translation technologies,
2) technical papers on related issues.
Evaluation campaign (see details on our website)
Each participant in the evaluation campaign is requested to submit a paper describing the utilized ASR and MT systems and to report results using the provided test data.
Technical Paper Session
The workshop also invites technical papers related to spoken language translation. Possible topics include, but are not limited to:
+ Spontaneous speech translation
+ Domain and language portability
+ MT using comparable and non-parallel corpora
+ Phrase alignment algorithms
+ MT decoding algorithms
+ MT evaluation measures
Important Dates
+ Evaluation Campaign
May 12, 2006 -- Training Corpus Release
August 1, 2006 -- Test Corpus Release [00:01 JST]
August 3, 2006 -- Result Submission Due [23:59 JST]
September 15, 2006 -- Result Feedback to Participants
September 29, 2006 -- Paper Submission Due
October 14, 2006 -- Notification of Acceptance
October 27, 2006 -- Camera-ready Submission Due
- system registrations will be accepted until release of test corpus
- late result submissions will be treated as unofficial result submissions
+ Technical Papers
July 21, 2006 -- Paper Submission Due [23:59 JST]
September 29, 2006 -- Notification of Acceptance
October 27, 2006 -- Camera-ready Submission Due
Contact
Michael Paul
ATR Spoken Language Communication Research Laboratories
2-2-2 Hikaridai, Keihanna Science City, Kyoto 619-0288 Japan

Call for papers International Symposium on Chinese Spoken Language Processing (ISCSLP'2006)

Singapore Dec. 13-16, 2006
Conference website
Topics
ISCSLP'06 will feature world-renowned plenary speakers, tutorials, exhibits, and a number of lecture and poster sessions on the following topics:
* Speech Production and Perception
* Phonetics and Phonology
* Speech Analysis
* Speech Coding
* Speech Enhancement
* Speech Recognition
* Speech Synthesis
* Language Modeling and Spoken Language Understanding
* Spoken Dialog Systems
* Spoken Language Translation
* Speaker and Language Recognition
* Indexing, Retrieval and Authoring of Speech Signals
* Multi-Modal Interface including Spoken Language Processing
* Spoken Language Resources and Technology Evaluation
* Applications of Spoken Language Processing Technology
* Others
The official language of ISCSLP is English. The regular papers will be published as a volume in the Springer LNAI series, and the poster papers will be published in a companion volume. Authors are invited to submit original, unpublished work on all the aspects of Chinese spoken language processing.
The conference will also organize four special sessions:
* Special Session on Rich Information Annotation and Spoken Language Processing
* Special Session on Robust Techniques for Organizing and Retrieving Spoken Documents
* Special Session on Speaker Recognition
* Special Panel Session on Multilingual Corpus Development
Schedule
* Full paper submission by Jun. 15, 2006
* Notification of acceptance by Jul. 25, 2006
* Camera ready papers by Aug. 15, 2006
* Early registration by Nov. 1, 2006
Please visit the conference website for more details.

ISCSLP 2006-Special session on speaker recognition

Singapore, Dec 13-16, 2006
Website
Chair:
Dr Thomas Fang Zheng, Tsinghua Univ., Beijing.
Speaker recognition (or voiceprint recognition, VPR) is one of the most important branches of speech processing. Its applications are spreading to ever more fields, such as public security, anti-terrorism, justice, telephone banking, and personal services. However, many fundamental and theoretical problems remain to be solved, such as background noise, cross-channel conditions, multiple speakers, and short speech segments for training and testing.
The purpose of this special session is to invite researchers in this field to present their state-of-the-art technical achievements. Papers are invited on, but not limited to, the following topics:
* Text-dependent and text-independent speaker identification
* Text-dependent and text-independent speaker verification
* Speaker detection
* Speaker segmentation
* Speaker tracking
* Speaker recognition systems and applications
* Resource creation for speaker recognition
This special session also provides a platform for developers in this field to evaluate their speaker recognition systems on a common database provided by the session. The evaluation will cover the following tasks (an illustrative scoring sketch, for orientation only, appears below):
* Text-independent speaker identification
* Text-dependent and text-independent speaker verification
* Text-independent cross-channel speaker identification
* Text-dependent and text-independent cross-channel speaker verification
Final details on these tasks (including evaluation criteria) will be made available in due course. The development and testing data will be provided by the Chinese Corpus Consortium (CCC). The data sets will be extracted from two CCC databases, which are CCC-VPR3C2005 and CCC-VPR2C2005-1000. Participants are required to submit a full paper to the conference describing their algorithms, systems and results.
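As noted above, the official evaluation criteria for these tasks have not yet been announced. For orientation only, a metric commonly reported for speaker verification elsewhere is the equal error rate (EER). The minimal sketch below assumes one detection score per trial with target/non-target labels; the names and data are illustrative and unrelated to the CCC evaluation's eventual protocol.

```python
# Illustrative sketch only: equal error rate (EER) from per-trial scores.
# Assumes target trials should score higher than non-target trials.
import numpy as np

def equal_error_rate(target_scores, nontarget_scores):
    """Approximate EER by sweeping every observed score as a threshold."""
    target_scores = np.asarray(target_scores, dtype=float)
    nontarget_scores = np.asarray(nontarget_scores, dtype=float)
    thresholds = np.unique(np.concatenate([target_scores, nontarget_scores]))
    miss = np.array([(target_scores < t).mean() for t in thresholds])    # false rejections
    fa = np.array([(nontarget_scores >= t).mean() for t in thresholds])  # false acceptances
    i = np.argmin(np.abs(miss - fa))   # threshold where the two error rates cross
    return 0.5 * (miss[i] + fa[i]), thresholds[i]

# Toy scores: targets tend to score higher than non-targets
rng = np.random.default_rng(1)
eer, thr = equal_error_rate(rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 500))
print(f"EER ~ {eer:.3f} at threshold {thr:.2f}")
```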
Schedule for this special session
* Feb. 01, 2006: On-line registration open, CLOSED on May 1st, 2006
* May. 01, 2006: Development data made available to participants
* May. 21, 2006 (revised): Test data made available to participants
* Jun. 7, 2006 (revised): Test results due at CCC
* Jun. 10, 2006: Results released to participants
* Jun. 15, 2006: Papers due (using ISCSLP standard format)
* Jul. 25, 2006: The full set of the two databases made available to the participants of this special session upon request
* Dec. 16, 2006: Conference presentation
This special session is organized by the CCC.
Please address your enquiries to Dr. Thomas Fang Zheng.
Download the Speaker Recognition Evaluation Registration Form

top

FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS

XXVIèmes Journées d'Étude sur la Parole

12-16 June 2006
Brittany, France
Website
OBJECTIVES
Themes
The main topics selected for the conference are:
1 Speech production
2 Speech acoustics
3 Speech perception
4 Phonetics and phonology
5 Prosody
6 Speech recognition and understanding
7 Language and speaker recognition
8 Language models
9 Speech synthesis
10 Speech analysis, coding and compression
11 Applications with a spoken component (dialogue, indexing...)
12 Evaluation, corpora and resources
13 Psycholinguistics
14 Speech and language acquisition
15 Second language learning
16 Speech pathologies
17 Others ...
IMPORTANT DATES
Final paper submission: 1 May 2006
Conference dates: 12-16 June 2006
CONTACTS
For scientific questions, contact Pascal Perrier, President of AFCP.
For practical information: jep2006@irisa.fr.

PERCEPTION AND INTERACTIVE TECHNOLOGIES (PIT'06)

Kloster Irsee in southern Germany from June 19 to June 21, 2006.
Website.
Submissions will be short/demo or full papers of 4-10 pages.
Important dates
April 18, 2006: Deadline for advance registration
June 7, 2006: Final programme available on the web
It is envisioned to publish the proceedings in the LNCS/LNAI Series by Springer.
PIT'06 Organising Committee:
Elisabeth André, Laila Dybkjaer, Wolfgang Minker, Heiko Neumann, Michael Weber, Marcus Hennecke, Gregory Baratoff

LABPHON 10 (10th Conference on Laboratory Phonology): Variation, detail and representation

Paris, 29 June - 1 July 2006
Website. Places are limited.
Contact: Cécile Fougeron
Laboratoire de Phonétique et Phonologie
UMR 7018, CNRS- Université Paris 3
ILPGA, 19 rue des Bernardins, 75005 Paris
phone: +33 (0)1 43 26 57 17, fax: +33 (0)1 44 32 05 73


9th Western Pacific Acoustics Conference (WESPAC IX 2006)

June 26-28, 2006
Seoul, Korea
Program Highlights of WESPAC IX 2006
(by Session Topics)
* Human Related Topics
* Aeroacoustics
* Product Oriented Topics
* Speech Communication
* Analysis: Through Software and Hardware
* Underwater Acoustics
* Physics: Fundamentals and Applications
* Other Hot Topics in Acoustics
WESPAC IX 2006 Secretariat
SungKyunKwan University, Acoustics Research Laboratory
300 Chunchun-dong, Jangan-ku, Suwon 440-746, Republic of Korea
Tel: +82-31-290-5957 Fax: +82-31-290-7055
E-mail
Website

Colloque international / International colloquium
Bases phonétiques des traits distinctifs - Phonetic bases of distinctive features

Paris, 3 July 2006
Objective: To bring researchers in the fields of linguistics, phonetics and cognitive sciences together to discuss a common theme: the phonetic bases of the ultimate units of spoken language, the distinctive features.
Place: Carré des Sciences, Ministère de la Recherche, 1 rue Descartes, Paris 5e (metro Maubert-Mutualité)
Preregistration is free but obligatory before June 25 at the website.
Organiser: Nick Clements, Laboratoire de Phonétique et Phonologie (UMR 7018, CNRS/Sorbonne-nouvelle), 19 rue des Bernardins, 75005 Paris, France
Funded by the Ministère Délégué à la Recherche, under the Action Concertée Incitative PROSODIE programme
Organizing Committee
Nick Clements, LPP, Paris
Jean-Marc Beltzung, LPP, Paris
Rajesh Khatiwada, LPP, Paris
Cédric Patin, LPP, Paris
Alexis Michaud, LPP, Paris
Rachid Ridouane, LPP, Paris
Martine Toda, LPP, Paris
For more information, see the website.

Journée Nasalité

Wednesday 5 July 2006, from 9:00 to 18:30.
Auditoire Hotyat (1st floor), Université de Mons-Hainaut, 17 Place Warocqué, 7000 Mons.
Website
Invited speakers
Pierre Badin (Institut de la Communication Parlée, Grenoble, France)
Abigail Cohn (Cornell University, New York, USA)
Didier Demolin (Universidade de Sao Paulo, Brazil & Université Libre de Bruxelles, Belgium)
Dates
Notification of acceptance: Wednesday 26 April 2006
Workshop date: Wednesday 5 July 2006
Publications
* A book containing the abstracts of the presentations will be distributed to all registered participants.
* Participants are invited to submit a written version of their presentation for possible publication in the special issue of the journal Parole that will be devoted to the workshop.
Paper submission deadline: Wednesday 9 August 2006.
Registration
Register by simply sending an email to: nasal@umh.ac.be.
Information
Website
Contact: Véronique Delvaux
Laboratoire de Phonétique
Université de Mons-Hainaut
18 place du Parc, 7000 Mons, Belgium
+3265373140

AAAI Workshop on Statistical and Empirical Approaches for Spoken Dialogue Systems

Boston, Massachusetts, USA
16 or 17 July 2006
Workshop website
OVERVIEW
This workshop seeks to attract new work on statistical and empirical approaches for spoken dialogue systems. We welcome both theoretical and applied work, addressing issues such as:
* Representations and data structures suitable for automated learning of dialogue models
* Machine learning techniques for automatic generation and improvement of dialogue managers
* Machine learning techniques for ontology construction and integration
* Techniques to accurately simulate human-computer dialogue
* Creation, use, and evaluation of user models
* Methods for automatic evaluation of dialogue systems
* Integration of spoken dialogue systems into larger intelligent agents, such as robots
* Investigations into appropriate optimization criteria for spoken dialogue systems
* Applications and real-world examples of spoken dialogue systems incorporating statistical or empirical techniques
* Use of statistical or empirical techniques within multi-modal dialogue systems
* Application of statistical or empirical techniques to multi-lingual spoken dialogue systems
* Rapid development of spoken dialogue systems from database content and corpora
* Adaptation of dialogue systems to new domains and languages
* The use and application of techniques and methods from related areas, such as cognitive science, operations research, emergence models, etc.
* Any other aspect of the application of statistical or empirical techniques to Spoken Dialogue Systems.
WORKSHOP FORMAT
This will be a one-day workshop immediately before the main AAAI conference and will consist mainly of presentations of new work by participants.
The day will also feature a keynote talk from Satinder Singh (University of Michigan), who will speak about using Reinforcement Learning in the spoken dialogue domain.
Interaction will be encouraged and sufficient time will be left for discussion of the work presented. To facilitate a collaborative environment, the workshop size will be limited to authors, presenters, and a small number of other participants.
Proceedings of the workshop will be published as an AAAI technical report.
SUBMISSION AND REVIEW PROCESS
Prospective authors are invited to submit full-length, 6-page, camera-ready papers via email. Authors are requested to use the AAAI paper template and follow the AAAI formatting guidelines.
Authors are asked to email papers to Jason Williams.
All papers will be reviewed electronically by three reviewers. Comments will be provided and time will be given for incorporation of comments into accepted papers.
For accepted papers, at least one author from each paper is expected to register and attend. If no authors of an accepted paper register for the workshop, the paper may be removed from the workshop proceedings. Finally, authors of accepted papers will be expected to sign a standard AAAI-06 "Permission to distribute" form.
IMPORTANT DATES
* Monday 24 April 2006 : Acceptance notification
* Friday 5 May 2006 : AAAI-06 and workshop registration opens
* Friday 12 May 2006 : Final camera-ready papers and "AAAI Permission to distribute" forms due
* Friday 19 May 2006 : AAAI-06 Early registration deadline
* Friday 16 June 2006 : AAAI-06 Late registration deadline
* Sunday 16 or Monday 17 July 2006 : Workshop
* Tuesday-Thursday 18-20 July 2006 : Main AAAI-06 Conference
ORGANIZERS
Pascal Poupart, University of Waterloo
Stephanie Seneff, Massachusetts Institute of Technology
Jason D. Williams, University of Cambridge
Steve Young, University of Cambridge
ADDITIONAL INFORMATION
For additional information please contact: Jason D. Williams
Phone: +44 7786 683 013
Fax: +44 1223 332662
Cambridge University
Department of Engineering
Trumpington Street
Cambridge
CB2 1PZ
United Kingdom

2006 IEEE International Workshop on Machine Learning for Signal Processing

(Formerly the IEEE Workshop on Neural Networks for Signal Processing)
September 6 - 8, 2006, Maynooth, Ireland
MLSP'2006 webpage
The sixteenth in a series of IEEE workshops on Machine Learning for Signal Processing (MLSP) will be held in Maynooth, Ireland, September 6-8, 2006. Maynooth is located 15 miles west of Dublin in Co. Kildare, Ireland's equestrian and golfing heartland (and home to the 2006 Ryder Cup). It is a pleasant 18th-century planned town, best known for its seminary, St. Patrick's College, where Catholic priests have been trained since 1795.
The workshop, formerly known as Neural Networks for Signal Processing (NNSP), is sponsored by the IEEE Signal Processing Society (SPS) and organized by the MLSP technical committee of the IEEE SPS. The name of the NNSP technical committee, and hence the workshop, was changed to Machine Learning for Signal Processing in September 2003 to better reflect the areas represented by the technical committee.
Topics
The workshop will feature keynote addresses, technical presentations, special sessions and tutorials, all of which will be included in the registration. Papers are solicited for, but not limited to, the following areas:
Learning Theory and Modeling; Bayesian Learning and Modeling; Sequential Learning; Sequential Decision Methods; Information-theoretic Learning; Neural Network Learning; Graphical and Kernel Models; Bounds on performance; Blind Signal Separation and Independent Component Analysis; Signal Detection; Pattern Recognition and Classification, Bioinformatics Applications; Biomedical Applications and Neural Engineering; Intelligent Multimedia and Web Processing; Communications Applications; Speech and Audio Processing Applications; Image and Video Processing Applications.
A data analysis and signal processing competition is being organized in conjunction with the workshop. This competition is envisioned to become an annual event in which problems relevant to the mission and interests of the MLSP community are presented, with the goal of advancing the current state of the art in both theoretical and practical aspects. The problems are selected to reflect current trends, to evaluate existing approaches on common benchmarks, and to highlight areas where crucial developments are thought to be necessary. Details of the competition can be found on the workshop website.
Selected papers from MLSP 2006 will be considered for a special issue of Neurocomputing to appear in 2007. The winners of the data analysis and signal processing competition will also be invited to contribute to the special issue.
Paper Submission Procedure
Prospective authors are invited to submit a double column paper of up to six pages using the electronic submission procedure described at the workshop homepage. Accepted papers will be published in a bound volume by the IEEE after the workshop and a CDROM volume will be distributed at the workshop.
Chairs
General Chair: Seán MCLOONE, NUI Maynooth
Technical Chair: Tülay ADALI, University of Maryland, Baltimore County

Workshop on Multimedia Content Representation, Classification and Security (MRCS)

September 11 - 13, 2006
Istanbul, Turkey
Workshop website
In cooperation with
The International Association for Pattern Recognition (IAPR)
The European Association for Signal-Image Processing (EURASIP)
GENERAL CHAIRS
Bilge Gunsel, Istanbul Technical Univ., Turkey
Anil K. Jain, Michigan State University, USA
TECHNICAL PROGRAM CHAIR
Murat Tekalp, Koc University, Turkey
SPECIAL SESSIONS CHAIR
Kivanc Mihcak, Microsoft Research, USA
Prospective authors are invited to submit extended summaries of not more than six (6) pages including results, figures and references. Submitted papers will be reviewed by at least two members of the program committee. Conference Proceedings will be available on site. Please check the website for further information.
IMPORTANT DATES
Notification of Acceptance: June 10, 2006
Camera-ready Paper Submission Due: July 10, 2006
Topics
The areas of interest include but are not limited to:
- Feature extraction, multimedia content representation and classification techniques
- Multimedia signal processing
- Authentication, content protection and digital rights management
- Audio/Video/Image Watermarking/Fingerprinting
- Information hiding, steganography, steganalysis
- Audio/Video/Image hashing and clustering techniques
- Evolutionary algorithms in content based multimedia data representation, indexing and retrieval
- Transform domain representations
- Multimedia mining
- Benchmarking and comparative studies
- Multimedia applications (broadcasting, medical, biometrics, content aware networks, CBIR)

Ninth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2006)

Brno, Czech Republic, 11-15 September 2006
Website
The conference is organized by the Faculty of Informatics, Masaryk University, Brno, and the Faculty of Applied Sciences, University of West Bohemia, Pilsen. The conference is supported by the International Speech Communication Association (ISCA).
TSD SERIES
The TSD series has evolved into a prime forum for interaction between researchers in both spoken and written language processing from the former Eastern Bloc countries and their Western colleagues. The proceedings of TSD form a book published by Springer-Verlag in its Lecture Notes in Artificial Intelligence (LNAI) series.
TOPICS
Topics of the conference will include (but are not limited to):
text corpora and tagging
transcription problems in spoken corpora
sense disambiguation
links between text and speech oriented systems
parsing issues, especially parsing problems in spoken texts
multi-lingual issues, especially multi-lingual dialogue systems
information retrieval and information extraction
text/topic summarization
machine translation
semantic networks and ontologies
semantic web
speech modeling
speech segmentation
speech recognition
search in speech for IR and IE
text-to-speech synthesis
dialogue systems
development of dialogue strategies
prosody in dialogues
emotions and personality modeling
user modeling
knowledge representation in relation to dialogue systems
assistive technologies based on speech and dialogue
applied systems and software
facial animation
visual speech synthesis
Papers on processing of languages other than English are strongly encouraged.
ORGANIZERS
Frederick Jelinek, USA (general chair)
Hynek Hermansky, USA (executive chair)
KEYNOTE SPEAKERS
Eduard Hovy, USA
Louise Guthrie, GB
James Pustejovsky, USA
FORMAT OF THE CONFERENCE
The conference program will include presentations of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic-oriented sessions.
Social events including a trip in the vicinity of Brno will allow for additional informal interactions.
CONFERENCE PROGRAM
The conference program will include oral presentations and poster/demonstration sessions with sufficient time for discussions of the issues raised. The conference will welcome three keynote speakers - Eduard Hovy, Louise Guthrie and James Pustejovsky, and it will offer two special panels devoted to Emotions and Search in Speech.
IMPORTANT DATES
May 15 2006 .............. Notification of acceptance
May 31 2006 .............. Final papers (camera ready) and registration
July 23 2006 ............. Submission of demonstration abstracts
July 30 2006 ............. Notification of acceptance for demonstrations sent to the authors
September 11-15 2006 ..... Conference date
The contributions to the conference will be published in proceedings that will be made available to participants at the time of the conference.
OFFICIAL LANGUAGE
of the conference will be English.
ADDRESS
All correspondence regarding the conference should be addressed to
Dana Hlavackova, TSD 2006
Faculty of Informatics, Masaryk University
Botanicka 68a, 602 00 Brno, Czech Republic
phone: +420-5-49 49 33 29
fax: +420-5-49 49 18 20
email
LOCATION
Brno is the second largest city in the Czech Republic, with a population of almost 400,000, and is the country's judiciary and trade-fair center. Brno is the capital of Moravia, in the south-east part of the Czech Republic. It has been a royal city since 1347, and with its six universities it forms the cultural center of the region.
Brno can be reached easily by direct flights from London and Munich and by trains or buses from Prague (200 km) or Vienna (130 km).

MMSP-06

IEEE Signal Processing Society 2006 International Workshop on Multimedia Signal Processing (MMSP06),
October 3-6, 2006,
Fairmount Empress Hotel, Victoria, BC, Canada
Website
MMSP-06 will feature:
- A Student Paper Contest with awards sponsored by Microsoft Research. To enter the contest, a paper submission must have a student as the first author
- Overview sessions that consist of papers presenting the state-of-the-art in methods and applications for selected topics of interest in multimedia signal processing
- Wrap-up presentations that summarize the main contributions of the papers accepted at the workshop, hot topics and current trends in multimedia signal processing
- New content requirements for the submitted papers
- New review guidelines for the submitted papers
SCOPE
Papers are solicited for, but not limited to, the general areas:
- Multimedia Processing (modalities: audio, speech, visual, graphics, other; processing: pre- and post- processing of multimodal data, joint audio/visual and multimodal processing, joint source/channel coding, 2-D and 3-D graphics/geometry coding and animation, multimedia streaming)
- Multimedia Databases (content analysis, representation, indexing, recognition, and retrieval)
- Multimedia Security (data hiding, authentication, and access control)
- Multimedia Networking (priority-based QoS control and scheduling, traffic engineering, soft IP multicast support, home networking technologies, wireless technologies)
- Multimedia Systems Design, Implementation and Applications (design: distributed multimedia systems, real-time and non real-time systems; implementation: multimedia hardware and software; applications: entertainment and games, IP video/web conferencing, wireless web, wireless video phone, distance learning over the Internet, telemedicine over the Internet, distributed virtual reality)
- Human-Machine Interfaces and Interaction using multiple modalities
- Human Perception (including integration of art and technology)
- Standards
SCHEDULE
- Notification of acceptance by: June 8, 2006
- Camera-ready paper submission by: July 8, 2006 (Instructions for Authors)
Check the workshop website for updates.

CFP Fifth Slovenian and First International LANGUAGE TECHNOLOGIES CONFERENCE IS-LTC 2006

Slovenian Language Technologies Society
Information Society - IS 2006
Ljubljana, Slovenia/October 9 - 10, 2006
conference website
The Slovenian Language Technologies Society invites contributions to its biennial conference to be held in the scope of the Information Society - IS 2006, taking place October 9 - 13, 2006 at the Jožef Stefan Institute in Ljubljana, Slovenia.
The official languages of the conference are English and Slovene. The conference will be organised in two tracks, one for contributions in English, and the other for those in Slovenian. The accepted papers will be published in printed proceedings, as well as on-line, on the conference Web site http://nl.ijs.si/is-ltc06/.
Conference Topics
We invite papers from academia, government, and industry on all areas of traditional interest to the HLT community, as well as related fields, including but not limited to:
* development, standardisation and use of language resources
* speech technologies
* machine translation and other multi- and cross-lingual processing
* semantic web and knowledge representation related HLT
* statistical and machine learning of language models
* information retrieval and extraction, question answering
* HLT applications
* presentations of HLT related projects
Invited speakers
Nick Campbell, Chief Researcher, Media Information Science Laboratories
ATR, Japan
Steven Krauwer, Coordinator of ELSNET (European Network of Excellence in Human Language Technologies)
Utrecht University, Netherlands
Title of talk: Strengthening the smaller languages in Europe
Guidelines for Submissions
Submitted papers should present original research relevant to the field of human language technologies. Overview papers on HLT research and applications are also welcome.
The contributions should be written in English or Slovene. They should be 4 or 6 pages long and formatted according to the conference style guidelines, which are available from the conference Web site.
The papers will be published in printed proceedings, as well as on-line, on the conference Web site. Some papers will be chosen for re-submission to the journal Informatica.
Important Dates
June 25th paper submission deadline
September 15th camera ready submission
October 9 - 10 conference
Organising Committee
Tomaž Erjavec, Jožef Stefan Institute
Vojko Gorjanc, University of Ljubljana
Jerneja Žganec Gros, Alpineon
Information
Up to date information is available at http://nl.ijs.si/is-ltc06/ or email.

Call for papers- 9th DIMACS Implementation Challenge Workshop: Shortest Paths

WEBSITE
Goals
Shortest path problems are among the most fundamental combinatorial optimization problems, with many applications, both direct and as subroutines in other combinatorial optimization algorithms. Algorithms for these problems have been studied since the 1950s and still remain an active area of research. One goal of this Challenge is to create a reproducible picture of the state of the art in the area of shortest path algorithms. To this end, we are identifying a standard set of benchmark instances and generators, as well as benchmark implementations of well-known shortest path algorithms. Another goal is to enable current researchers to compare their codes with each other, in the hope of identifying the more effective of the recent algorithmic innovations that have been proposed. The final goal is to publish proceedings containing results presented at the Challenge Workshop, and a book containing the best of the proceedings papers.
Scope
The Challenge addresses a wide range of shortest path problems, including all sensible combinations of the following (a minimal single-source example is sketched after this list):
* Point-to-point, single-source, all-pairs.
* Non-negative arc lengths and arbitrary arc lengths (including negative cycle detection).
* Directed and undirected graphs.
* Static and dynamic problems. The latter include those dynamic in CS sense (arc additions, deletions, length changes) and those dynamic in OR sense (arc transit times depending on arrival times).
* Exact and approximate shortest paths.
* Compact routing tables and shortest path oracles.
Implementations on any platform of interest, for example desktop machines, supercomputers, and handheld devices, are encouraged.
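For readers outside the area, here is a small illustration of the simplest setting in the list above (single-source, non-negative arc lengths), using a textbook formulation of Dijkstra's algorithm. The graph representation and names below are assumptions made for this sketch only and have no connection to the Challenge's official benchmark codes or file formats.

```python
# Illustrative sketch only: textbook single-source shortest paths (Dijkstra)
# for the non-negative arc-length case; not a Challenge benchmark code.
import heapq

def dijkstra(adj, source):
    """adj: dict mapping node -> list of (neighbor, non-negative length)."""
    dist = {source: 0}
    heap = [(0, source)]                     # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                         # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                 # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist                              # unreachable nodes are absent

# Tiny directed example graph
graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 6)], "b": [("t", 2)]}
print(dijkstra(graph, "s"))                  # {'s': 0, 'a': 2, 'b': 3, 't': 5}
```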
How to participate
People interested in submitting papers to the Challenge Workshop can find benchmark instances, generators, and code for the problems they address at the Challenge website, along with detailed information on file formats. Your work can take two different directions.
1. Defining instances for algorithm evaluation. The instances should be natural and interesting. By the latter we mean instances that cause good algorithms to behave differently than they do on other instances. Interesting real-life application data are especially welcome.
2. Algorithm evaluation. Descriptions of implementations of algorithms, with experimental data that support conclusions about practical performance. Common benchmark instances and codes should be used so that there is common ground for comparison. The most obvious way for such a paper to be interesting (and selected for the proceedings) is if the implementation improves the state of the art. However, there may be other ways to produce an interesting paper, for example by showing that an approach that looks good in theory does not work well in practice, and explaining why this is the case.
Challenge Book
The best papers presented at the Challenge Workshop will be selected for publication in a book published in the DIMACS Book Series.
Important dates
- August 25, 2006: Paper submission deadline
- September 25, 2006: Author notification
- November 13-14, 2006: Challenge Workshop, DIMACS Center, Rutgers University, Piscataway, NJ
Organizing Committee
Camil Demetrescu, University of Rome "La Sapienza"
Andrew Goldberg, Microsoft Research
David Johnson, AT&T Labs - Research
Advisory Committee
Paolo Dell'Olmo, University of Rome "La Sapienza"
Irina Dumitrescu, University of New South Wales
Mikkel Thorup, AT&T Labs-Research
Dorothea Wagner, Universitaet Karlsruhe

Call for papers 8th International Conference on Signal Processing

Nov. 16-20, 2006, Guilin, China
website
The 8th International Conference on Signal Processing will be held in Guilin, China on Nov. 16-20, 2006. It will include sessions on all aspects of theory, design and applications of signal processing. Prospective authors are invited to propose papers in any of the following areas, but not limited to:
A. Digital Signal Processing (DSP)
B. Spectrum Estimation & Modeling
C. TF Spectrum Analysis & Wavelet
D. Higher Order Spectral Analysis
E. Adaptive Filtering & SP
F. Array Signal Processing
G. Hardware Implementation for Signal Processing
H. Speech and Audio Coding
I. Speech Synthesis & Recognition
J. Image Processing & Understanding
K. PDE for Image Processing
L. Video Compression & Streaming
M. Computer Vision & VR
N. Multimedia & Human-computer Interaction
O. Statistical Learning & Pattern Recognition
P. AI & Neural Networks
Q. Communication Signal processing
R. SP for Internet and Wireless Communications
S. Biometrics & Authentication
T. SP for Bio-medical & Cognitive Science
U. SP for Bio-informatics
V. Signal Processing for Security
W. Radar Signal Processing
X. Sonar Signal Processing and Localization
Y. SP for Sensor Networks
Z. Application & Others

CFP CI 2006 Special Session on Natural Language Processing for Real Life Applications

November 20-22, 2006 San Francisco, California, USA
Website
Topics
The Special Session on Natural Language Processing for Real Life Applications will cover the following topics (but is not limited to):
1. speech recognition, in particular
* multilingual speech recognition
* large vocabulary continuous speech recognition with focus on the application
2. real life dialog systems
* natural language dialog systems
* multimodal dialog systems
3. speech-based classification
* speaker classification, i.e. exploiting paralinguistic features of the speech to gather information about the speaker (for example age, gender, cognitive load, and emotions)
* language and accent identification
Paper Submission
Please submit papers for the special session directly to the session chair (christian.mueller@dfki.de). DO NOT submit the papers through the IASTED website. Otherwise, the papers will be handled as general papers for the conference. Each submission will be reviewed by at least two independent reviewers. The final selection of papers for the session will be done by the session chair; acceptance/rejection letters and review comments along with registration information will be provided by IASTED by the general Notification deadline.
Formatting instructions
Please follow the formatting instructions provided by IASTED. Website.
Important Dates
Submissions due June 15, 2006
Notification of acceptance August 1, 2006
Camera-ready manuscripts due September 1, 2006
Registration Deadline September 15, 2006
Conference November 20 - 22, 2006
Registration
All papers accepted for the special session are required to register before the general conference registration deadline.

CFP - IEEE/ACL 2006 Workshop on Spoken Language Technology

Aruba Marriott
Palm Beach, Aruba
December 10 -- December 13, 2006
Workshop website
Workshop Topics
Spoken language understanding; Spoken document summarization; Machine translation for speech; Spoken dialog systems; Spoken language generation; Spoken document retrieval; Human/Computer Interactions (HCI); Speech data mining; Information extraction from speech; Question/Answering from speech; Multimodal processing; Spoken language systems, applications and standards.
Submissions for the Technical Program
The workshop program will consist of tutorials, oral and poster presentations, and panel discussions. Attendance will be limited with priority for those who will present technical papers; registration is required of at least one author for each paper. Submissions are encouraged on any of the topics listed above. The style guide, templates, and submission form will follow the IEEE ICASSP style. Three members of the Scientific Committee will review each paper. The workshop proceedings will be published on a CD-ROM.
Schedule
Camera-ready paper submission deadline July 15, 2006
Hotel Reservation and Workshop registration opens July 30, 2006
Paper Acceptance / Rejection September 1, 2006
Hotel Reservation and Workshop Registration closes October 15, 2006
Workshop December 10-13, 2006
Registration and Information
Registration and paper submission, as well as other workshop information, can be found on the SLT website.

Organizing Committee
General Chair: Mazin Gilbert, AT&T, USA
Co-Chair: Hermann Ney, RWTH Aachen, Germany
Finance Chair: Gokhan Tur, SRI, USA
Publication Chair: Brian Roark, OGI/OHSU, USA
Publicity Chair: Eric Fosler-Lussier, Ohio State U., USA
Industrial Chair: Roberto Pieraccini, Tell-Eureka, USA

IEEE International Symposium on Multimedia - ISM 2006

Conference website
Special track: Remote Sensors for Audio Processing
organizer website
In recent decades, the cost of acoustic technologies has declined dramatically. Advances in networks, storage devices, and power management have made it practical to consider remotely located sensors that either transmit data to a central processing facility or store the data for later retrieval. Nonetheless, many challenges remain for the fabrication, deployment and use of remote sensors. In locations with limited infrastructure, power management and the ability for the user to access or retrieve the data are paramount. In some situations, the need for localization or an improved signal-to-noise ratio may dictate the use of microphone arrays or other signal enhancement techniques (a minimal illustration of such array processing is sketched below). Deployment in hostile environments such as arctic or deep-sea conditions requires additional considerations. Remote sensors are capable of generating large acoustic or mixed-media datasets. With these large corpora, the need for automated processing becomes critical, as the staffing requirements for human analysis are both cost- and labor-prohibitive. Automated analysis can yield valuable information such as seasonal or diel activity patterns of animals, and can support perimeter intrusion detection, access control, and a myriad of other applications. This special session invites researchers to submit high-quality papers describing either preliminary or mature results on topics related to audio for remote sensors.
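As a small illustration of the array processing mentioned above, the sketch below shows a basic delay-and-sum beamformer. The integer-sample delay model, the toy signals, and all names are simplifying assumptions for this example only; they are not part of the workshop or its submission materials.

```python
# Illustrative sketch only: minimal delay-and-sum beamforming for a microphone
# array, assuming per-channel delays (in whole samples) are already known,
# e.g. from the array geometry and a chosen look direction.
import numpy as np

def delay_and_sum(signals, delays):
    """signals: (n_mics, n_samples) array; delays: integer sample delays
    that time-align each channel toward the desired source."""
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        d = delays[m]
        aligned = np.roll(signals[m], -d)    # advance channel m by d samples
        if d > 0:
            aligned[-d:] = 0.0               # zero the wrapped-around tail
        out += aligned
    return out / n_mics                      # coherent signal adds, noise averages down

# Toy example: the same short pulse arrives 0, 2 and 4 samples late at 3 mics
rng = np.random.default_rng(0)
pulse = np.zeros(64)
pulse[10:14] = 1.0
mics = np.stack([np.roll(pulse, d) + 0.3 * rng.standard_normal(64) for d in (0, 2, 4)])
enhanced = delay_and_sum(mics, delays=[0, 2, 4])
```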
Topics of interest
· Audio classification and detection tasks for remote sensors (speech, bioacoustics, auditory scene analysis, etc.)
· Deployment issues
· Power management
· Networking/Storage/Data Management
· Array processing
· Remote audio sensors in challenging environments
· Applications of remote sensors with a significant audio component
Submissions and deadlines
The written and spoken language of ISM2006 is English. Authors should electronically submit an 8-page technical paper manuscript in double-column IEEE format, including authors' names and affiliations, and a short abstract. Submissions should be directed to Prof. Marie Roch (mroch@sciences.sdsu.edu), following the formatting instructions available in the submission guidelines for regular papers. Note that papers should not be submitted directly to the ISM web site. Only electronic submissions will be accepted. All papers should be in Adobe Portable Document Format (PDF). The paper should have a cover page, which includes a 200-word abstract, a list of keywords, and the author's phone number and e-mail address. The Conference Proceedings will be published by the IEEE Computer Society Press.
Important dates:
· August 8 - submission of papers
· September 10 - Notification of acceptance of papers
· September 25 - Camera-ready papers due
· December 11-13 - Conference at Paradise Point Resort & Spa in San Diego , California

16th International Congress of Phonetic Sciences

Saarland University, Saarbrücken
6-10 August 2007.
The first call for papers will be made in April 2006. The deadline for *full-paper submission* to ICPhS 2007 Germany will be February 2007. Further information is available under conference website

RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING (RANLP-07)

SAMOKOV hotel, Borovets, Bulgaria
conference website
RANLP-07 tutorials: September 23-25, 2007 (Sunday-Tuesday)
RANLP-07 workshops: September 26, 2007 (Wednesday)
6th Int. Conference RANLP-07: September 27-29, 2007 (Thursday-Saturday)
We are pleased to announce that the dates for RANLP’07 have been finalised (see above). Building on both the successful international summer schools organised for more than 17 years, and previous conferences held in 1995, 1997, 2001, 2003 and 2005, RANLP has become one of the most influential, competitive and far-reaching conferences, with wide international participation from all over the world. Featuring leading lights in the area as keynote speakers or tutorial speakers, RANLP has now grown into a larger-scale meeting with accompanying workshops and other events. In addition to the 6 keynote speeches and tutorials on hot NLP topics, RANLP07 will be accompanied by workshops and shared task competitions.
Volumes of selected papers are traditionally published by John Benjamins Publishers and previous conferences have enjoyed support from the European Commission.
Important dates
Conference 1st Call for Papers: October 2006;
Call for Workshop proposals: November 2006, with proposal deadline at the end of January 2007;
Workshop selection: early March 2007;
Conference Submission deadline: March 2007 with notification 30 May 2007;
Workshop Submission deadline: 15 June 2007 with notification in July 2007;
RANLP-07 tutorials, workshops and conference: 23-30 September 2007
The conference will be held in the picturesque resort of Borovets. It is located in the Rila mountains and is one of the best known ski and tourist resorts in South-East Europe. The conference venue Samokov hotel offers excellent working and leisure facilities. Borovets is only 1 hour away from Sofia international airport.
THE TEAM BEHIND RANLP-07
Galia Angelova, Bulgarian Academy of Sciences, Bulgaria (Chair of the Organising Committee)
Kalina Bontcheva, University of Sheffield, UK
Ruslan Mitkov, University of Wolverhampton, UK (Chair of the Programme Committee)
Nicolas Nicolov, Umbria Communications, Boulder, USA
Nikolai Nikolov, INCOMA Ltd., Shoumen, Bulgaria
Kiril Simov, Bulgarian Academy of Sciences, Bulgaria (Workshop Coordinator)
E-mail

top