MESSAGE from Jean-François Bonastre, Board Member in charge of Student Relations, Grants and Awards.
Dear Colleagues,
I joined the ISCA Board last September. My work mainly concerns the relations between ISCA and students. To this end,
I work closely with the ISCA Student Advisory Committee in order to hear their needs and to help them
organize
both the student web services and special student events. Please visit their
pages. I also ask for
their advice on all student-related questions, such as the evolution of the grant system.
I am also the “ISCA Grant Manager”, which means that I handle the grant requests. This work is
done in collaboration with
the ISCA secretary, Emmanuelle, who takes care of the administrative side
(such as money transfers). The grant system is an important
ISCA activity: we are receiving an increasing number of requests, and the ISCA Board allocated a larger budget for grants in 2006.
I try to manage this budget so as to keep a good balance between countries, topics and events. We have
also extended the grant system: it accepts
requests from students and young researchers but also, exceptionally, from researchers
in special situations, such as unemployment or coming
from low-income countries. The grant web pages will also change very soon, moving from a paper-based system to a fully online one.
This will help us reduce the delay in processing grant requests.
Finally, my last specific role on the ISCA Board concerns the awards, such as the Student
Best Paper Award at Interspeech conferences.
I hope to see you in Pittsburgh.
Best,
Jean-François Bonastre
Editorial
Dear Members,
This month, after the presentation by our president in the previous issue (#97), the focus is on the
activities of Jean-François Bonastre, ISCA Board member in charge of Grants and Awards and student liaison.
His message appears at the top of this issue.
An important and urgent message from our secretariat, in the ISCA News section, requires
your personal participation in a survey on critical ISCA activities. Please contribute to our
continuous development by answering this questionnaire.
You will also find below a call for bids for Interspeech 2010.
I remind you of two important and permanent requests:
First, SIG leaders are urged to submit brief activity reports
to ISCApad.
Second, if you are aware of new books devoted to speech science and/or technology,
please draw my attention
to them, so that I can advertise these books in ISCApad.
I wish you refreshing holidays!
Christian Wellekens
TABLE OF CONTENTS
- ISCA News
- SIGs' activities
- Courses, internships
- Books, databases, software
- Job openings
- Journals
- Future Interspeech Conferences
- Future ISCA Tutorial and Research Workshops (ITRW)
- Forthcoming Events supported (but not organized) by ISCA
- Future speech science and technology events
ISCA NEWS
IMPORTANT and URGENT message from our secretary
Dear ISCA members,
We are currently conducting an online survey on the activities and services
offered by ISCA.
We value your feedback, and we would greatly appreciate it if you took a few
moments to respond to some questions. Your answers will help us in our efforts to
improve membership services, the ISCA and student websites, and the ISCA conferences
and workshops.
You can find the survey on the ISCA homepage, or click directly here.
We thank you very much for your time.
Best Regards,
David HOUSE & Emmanuelle FOXONET for the ISCA Board
secretariat.
Call for Bids for Interspeech 2010
Organization of INTERSPEECH 2010
CALL FOR PROPOSALS
Individuals or organizations interested in organizing:
INTERSPEECH 2010
should submit by 15 December 2006 a brief preliminary proposal,
including:
* The name and position of the proposed general chair and other
principal organizers.
* The proposed period in September/October 2010 when the conference
would be held
* The institution assuming financial responsibility for the conference
and any other cooperating institutions
* The city and conference center proposed (with information on that
center's capacity)
* Information on transportation and housing for conference participants
* Likely support from local bodies (e.g. governmental)
* The commercial conference organizer (if any)
* A preliminary budget
Interspeech conferences may be held in any country, although they
generally should not occur in the same continent in two consecutive
years. (IS2009 will be held in Brighton, UK.)
Guidelines for the preparation of the proposal are available on our
website.
Additional information can be provided by Isabel Trancoso.
Those who plan to put in a bid are asked to inform ISCA of their
intentions as soon as possible. They should also consider attending
Interspeech 2006 in Pittsburgh to discuss their bids, if possible.
Proposals should be submitted by email to the above address.
Candidates fulfilling basic requirements will be asked to submit a
detailed proposal by 28 February 2007.
ISCA Archive
ISCA is proud to inform its members that, thanks to the continuous efforts of our archivist,
Professor Wolfgang Hess, all ISCA publications,
including all conference and workshop proceedings, are now accessible on our website. Abstracts can be accessed
by everybody; members have access to the full papers via a password.
We heartily thank Professor Wolfgang Hess for this invaluable contribution.
From ISCA Student activity committee (SAC)
ISCA Speech Labs Listing
ISCA-SAC is in the process of updating ISCA databases. An important part of
this process is to have an extensive list of speech labs and groups from all
over the world. Right now, there are 102 labs from 24 countries. Please check
the listing, and enter your
group's information at http://www.isca-students.org/new-speech-lab.php
if your group is not listed.
Student Panel Discussion during Interspeech 2006: "How to get your dream job?
What are they looking for?"
ISCA-SAC (Student Advisory Committee) is organizing a panel discussion on
Sunday, September 17th, 2006, at 6:00 p.m. Panelists will be research group
managers and professors from industry and academia, and they will answer
students' questions about how to find jobs in industry or academia. Please
make your travel arrangements accordingly if you don't want to miss this event.
This is a great opportunity to learn what big research companies are
looking for in their hiring process. Check the ISCA-SAC website later for more
information about this event.
New version of the ISCA-SAC website launching before Interspeech 2006:
A new version of the ISCA-SAC website will be launched right before Interspeech
2006. As you might know, the new website will contain tools to help students
access information in their area easily from a personalized environment.
A demo of the website will be shown during the conference. Interested
students can take a look at the current version of the website; if you
want to volunteer for this effort, send us an email with your resume.
Do you want to become a board member of the ISCA Student Advisory Committee?
ISCA-SAC is looking for new motivated members (PhD students early in their
degrees are preferred). There are positions available on the ISCA-SAC board. If
you want to volunteer for ISCA and contribute to ISCA-SAC efforts (to get an
idea, please visit our website), get in
contact with us by email.
There are
exciting projects that current board members and volunteering students are
working on. Join us!
Murat Akbacak
ISCA-SAC President
PhD Student, University of Colorado at Boulder
Research Intern, University of Texas at Dallas
ISCA student branch
ISCA GRANTS are available for students and young scientists
attending meetings. For more information: http://www.isca-speech.org/grants
SIGs' activities
A list of ISCA Special Interest Groups can be found on
our website.
COURSES, INTERNSHIPS
Call for NATO Advanced Study Institute
International NATO Summer School "E.R.Caianiello" XI Course on
The Fundamentals of Verbal and Non-verbal Communication and the Biometrical Issue
September 2-12, 2006, Vietri sul Mare, Italy
Website for details
Studentships available for 2006/7 at the Department of Computer Science
The University of Sheffield - UK
One-Year MSc
in
HUMAN LANGUAGE TECHNOLOGY
The Sheffield MSc in Human Language Technology has been carefully tailored
to meet the demand for graduates with the highly specialised,
multi-disciplinary skills required in HLT, both as practitioners in
the development of HLT applications and as researchers into the advanced
capabilities required for next-generation HLT systems. The course provides
a balanced programme of instruction across a range of relevant disciplines
including speech technology, natural language processing and dialogue
systems.
The programme is taught in a research-led environment. This means that you
will study the most advanced theories and techniques in the field, and also
have the opportunity to use state-of-the-art software tools. You will also
have opportunities to engage in research-level activity through in-depth
exploration of chosen topics and through your dissertation.
Graduates from this course are highly valued in industry, commerce and
academia. The programme is also an excellent introduction to the
substantial research opportunities for doctoral-level study in HLT.
A number of studentships are available, on a competitive basis, to
suitably qualified applicants. These awards pay a stipend in addition to
the course fees.
See further details of
the course
Information on how to apply
Call for participation:
l'Ecole Recherche Multimedia d'Information
Techniques & Sciences
(ERMITES)
Presqu'île de Giens, Var (France), September 4-6, 2006
website
email
ERMITES is organized with the support of LSIS, the computer science
department of UFRST USTV, and the Association Francophone de la
Communication Parlée (AFCP).
Objectives:
The dissemination of audiovisual information, notably over the web, is
increasingly anarchic, which makes searching it very haphazard. "L'Ecole
Recherche Multimedia d'Information: Techniques & Sciences" (ERMITES)
brings together, in a convivial setting, about ten specialists and some
twenty PhD students, postdocs, engineers and faculty researchers, who will
analyse the latest theoretical and practical advances in robust multimodal
information retrieval systems (SRIM), combining text, images, audio and
video. Researchers and inventors should no longer feel an antagonism
between these different domains, but should synthesize them to strengthen
the originality of their work. ERMITES opens up the vast scientific field
needed to build SRIM and addresses the problem of their reliability. The
topics covered will therefore include the theories of information, signal
processing, stochastic processes and machine learning; computational
(audio and video) scene analysis; artificial intelligence; query languages
for XML data; natural language and speech processing; and the cognitive
sciences and neurophysiology of perception. Several sessions will be
devoted to demonstrations of high-quality freeware prototypes and
toolboxes (some of them built by the speakers), notably:
- machine learning, modelling:
TORCH, HTK
- speech and image processing:
Sirocco and Spro, Speeral, OCTAVE and image libraries
- language processing, named entities, XML:
PYTHON, UNITEX, GALATEX for IR in semi-structured
documents.
ERMITES is a privileged meeting place whose aim is to strengthen the
links between researchers working on each of the modalities and theories
covered. The school is organized in sessions of 2 to 3 hours, in which
each specialist gives a tutorial talk, followed by round tables with all
participants, where the research projects in which the participants are
involved can be presented and discussed.
Speakers (see abstracts on the
website)
Organizers
Hervé Glotin & Jacques Le Maitre
LSIS / Univ. Sud Toulon Var
BP20132, 83957 La Garde cedex 20
Tel: 04 94 14 20 06; 04 94 14 28 24
Fax: 04 94 14 28 97
BOOKS, DATABASES, SOFTWARE
Multilingual Speech Processing
Editors: Tanja Schultz & Katrin Kirchhoff ,
Elsevier Academic Press, April 2006
Website
Reconnaissance automatique de la parole: Du signal a
l'interpretation
Authors: Jean-Paul Haton, Christophe Cerisara, Dominique Fohr, Yves Laprie, Kamel Smaili
392 pages
Publisher: Dunod
CFP Special Issue on
Multimodal Audiovisual Content Abstraction
International Journal of
Image and Video Processing
The accurate management of large volumes of digital multimodal
audiovisual content calls for a proper mapping of this content onto
representation spaces with a high level of interpretation. This
operation, referred to as content abstraction, may be supervised,
unsupervised, automatic, or semiautomatic. Content abstraction is here
considered in the broad sense and includes (semi-)automatic annotation,
content (e.g., keyframe) selection, or summarization. Typical problems
are fusion of heterogeneous streams, learning (structured) semantics
from low-level features, interrelating document content parts, and
extracting salient multimodal content. Tools used in this context arise
from signal processing, machine learning, data mining, and knowledge
engineering.
Specifically, this special issue will gather high-quality original
contributions on all aspects of audiovisual content abstraction. Topics
of interest include (but are not limited to):
* “Key” feature extraction/characterization (frames, transitions,
shots, story, etc.)
* Summarization of video content
* Similarity measures for video content
* Video content processing for indexing
* Multistream processing/fusion
* Interactive video content characterization
* Mosaicing for content representation
Authors should follow the IJIVP manuscript format described on the
website. Prospective authors should
submit an electronic copy of their complete manuscripts through the
IJIVP manuscript tracking system,
according to the following timetable:
Manuscript due: October 1, 2006
Acceptance notification: February 1, 2007
Final manuscript due: April 1, 2007
Publication date: 2nd quarter 2007
Guest Editors:
Stéphane Marchand-Maillet, Viper Group, Computer Vision and Multimedia
Laboratory, Department of Computer Science, University of Geneva,
CH-1211 Geneva 4, Switzerland
Bernard Mérialdo, Department of Multimedia Communications, Institut
Eurécom, 06904 Sophia Antipolis Cedex, France
Marcel Worring, Intelligent Sensory Information Systems, Computer
Science Institute, Faculty of Science, University of Amsterdam, 1098 SJ
Amsterdam, The Netherlands
Milind R. Naphade, Pervasive Media Management Group, IBM T.J. Watson
Research Center, White Plains, NY 10604, USA
JOB OPENINGS
We invite all laboratories and industrial companies that have job
offers to send them to the ISCApad
editor: they will appear in the newsletter and on our website for
free. (Also have a look at http://www.isca-speech.org/jobs.html as
well as the ELSNET Jobs pages at http://www.elsnet.org/.)
Saarland University, Saarbruecken, Germany, 3 doctoral scholarships
Saarland University anticipates the availability of up to three doctoral scholarships within the
Partnership International for Research and Education (PIRE)
Meaning Representations in Language Understanding
The PIRE programme, established in 2005,
is a collaborative PhD programme between
* Saarland University, Germany
* the Brown Laboratory for Linguistic Information Processing headed by
Eugene Charniak
* The Johns Hopkins University Center for Language and Speech Processing
(CLSP) headed by Frederick Jelinek
* Charles University (Jan Hajic), Czech Republic.
PIRE is also affiliated with our existing International Graduate College
(IGK) co-operation with Edinburgh University.
Each scholarship is funded for two years in the first instance, normally
extendable for a third year. Doctoral degrees may be obtained in
computational linguistics, phonetics, engineering or informatics, from
Saarland University. The official language of the programme is English,
and dissertations may be written in English or German.
The nature of the cooperation includes:
* Joint supervision of dissertations by lecturers from Saarbruecken and
the US
* A six- to twelve-month research stay at Brown or Johns Hopkins
University
* An intensive research exchange programme between all four
participating sites (including, for example, an annual two-week forum
attended by college members and lecturers from all four centres)
PhD projects will be in the area of meaning representation for natural
language processing and suitable applications (Speech Reconstruction,
Machine Translation Systems, ...).
Academic staff in Saarbruecken are William Barry, Matthew Crocker,
Martin Kay, Dietrich Klakow, Valia Kordoni, Jonas Kuhn, Manfred Pinkal,
Hans Uszkoreit, and Wolfgang Wahlster
In Prague, Jan Hajic is the coordinator.
At Brown University, Eugene Charniak and Mark Johnson participate.
Academic staff at Johns Hopkins University are Frederick Jelinek, Jason
Eisner, Bob Frank, Keith Hall, Sanjeev Khudanpur and Paul Smolensky.
The scholarship currently provides EURO 1468 per month (approximately
USD 1835). Additional compensation includes family allowance (where
applicable), travel funding, support for carrying out experiments, and
an additional monthly allowance for the duration of the stay in the US.
Applicants should hold a strong university degree equivalent to the
German Diplom or Magister (e.g. Master's level), in a relevant
discipline. Applicants should not be more than 28 years of age. Female
scientists and international students are particularly encouraged to
apply.
Applications should include:
* a curriculum vitae indicating degrees obtained, disciplines covered
(e.g. list of courses or transcript), publications, and other relevant
experience
* a sample of written work (e.g. research paper, or dissertation,
preferably in English)
* copies of high school and university certificates
* two references (to be sent directly to the college speaker
by the deadline)
* an informal cover letter specifying interests, previous knowledge and
activities in any of the relevant research areas. Where possible it
should include a brief outline of research interests to be pursued
within the scholarship.
Up to three scholarships will be available from October 2006. Your
application should be sent to:
PIRE office
Claudia Verburg
Department of Computational Linguistics
Saarland University
P.O. Box 15 11 50
D-66041 Saarbruecken
Germany
by 15 August 2006. Later applications may be considered subject to
availability of scholarships.
For additional information please contact
Prof. Dr. Dietrich Klakow
PD Dr. Valia Kordoni or
Prof. Dr. Matthew Crocker
See also:
http://www.coli.uni-saarland.de/projects/igk/
http://www.clsp.jhu.edu/research/pire/
http://www.coli.uni-saarland.de/.
Research positions (PhDs, Postdocs) in spoken language processing at
INESC ID, Lisbon Portugal
The Spoken Language Systems Lab (L2F) of INESC ID currently has several
open positions for PhD students and postdocs in spoken language
processing, covering the following topics:
- audio event detection
- voice morphing
- spontaneous speech recognition
- recognition of different varieties of Portuguese
- use of GRID computing for NLP
Most of these research activities take place in the framework of
national projects (e.g. "Rich Transcription of Lectures for E-Learning
Applications", "Natural Language Engineering on a Computational GRID")
and/or European projects ("Education through Characters with Emotional
Intelligence and Role-playing Capabilities that Understand Social
Interaction", "Audiovisual Search Engines") or bilateral cooperation
programs.
Applicants are expected to have several years' experience in the above
areas, with good practical knowledge of C/C++/Java.
Interested candidates should send a letter of motivation by October
1st, along with their detailed CV and the names of 3 references, by
email.
For more information, please contact Isabel Trancoso at the same address
or visit the website.
Positions available at Acapela Group, Mons, Belgium
R&D engineer TTS
R&D engineer ASR
Computational linguist TTS
Details can be found on our website
Position Available:
SRI International
Speech Technology and Research Laboratory
The Speech Technology and Research (STAR) Laboratory at SRI International seeks a self-motivated,
team-oriented researcher
in machine translation. Highly qualified postdoctoral fellows may also apply.
The STAR Laboratory is engaged in leading-edge research in speech recognition, automatic spoken language
translation,
speaker recognition and verification, human-machine interfaces, and other areas of speech/language
technology, and offers
opportunities for basic research as well as prototyping and collaborative productization. For further details about the SRI STAR Lab
please see our Website.
The successful candidate will have the opportunity to work on multiple government-funded research projects and to
collaborate with other researchers at SRI and partner institutions. S/he will work on high-performance, deployable machine
translation systems for multiple language pairs with varying levels of resources. A PhD with machine translation background is desired.
The candidate must have strong engineering capability, with skills in C/C++ and scripting languages in a Unix/Linux environment. Strong
oral and written communication skills are expected. Experience in previous NIST MT evaluations and knowledge of speech recognition
are highly desirable.
Candidates must be able to work both independently and cooperatively across multiple projects with dynamically forming teams.
Characteristics of STAR staff are enthusiasm, self-motivation, initiative, and passion for learning.
Please apply via Internet
Open positions at
the Adaptive Multimodal Interface Research Lab at University of Trento (Italy)
Areas
Automatic Speech Recognition (PhD Research Fellowship)
Natural Language Processing (PhD Research Fellowship)
Machine Learning (PhD Research Fellowship/Senior Researcher)
HCI/User Interface (Junior Researcher)
Multimodal/Spoken Dialog (Senior Researcher)
The Adaptive Multimodal Interface research lab pursues research excellence in next-generation interfaces
for human-machine and human-human communication. The research positions will be funded by the prestigious
Marie Curie Excellence Grant awarded by the European Commission for cutting-edge, interdisciplinary
research.
Candidates for the PhD research fellowships should have a background in speech, natural
language processing
or machine learning. Successful applicants should have an EE or CS degree with a strong
academic record.
The students will be part of an interdisciplinary research team working on speech
recognition, language
understanding, spoken dialog, machine learning and adaptive user interfaces.
The deadline for application
submission is July 11, 2006.
Candidates for the junior/senior researcher positions should have a PhD degree
in computer
science, cognitive science or a related discipline. They should have an established
international research
track record in their field of expertise and leadership skills. The deadline for application submission is
November 1, 2006.
Applicants should be fluent in English. Italian language competence is optional, and applicants
are encouraged to acquire this skill on the job. Applicants should have good programming skills in
most of the following: C++, Java, JavaScript, Perl, Python.
University of Trento is an equal opportunity employer. Interested applicants should send their CV along
with their statement of research interest and three reference letters to:
Prof. Ing. Giuseppe Riccardi
The University of Trento is consistently ranked as a premier Italian graduate institution.
DIT Department
- DIT has a strong focus on interdisciplinarity, with professors of international background
from different faculties of the University (Physical Science, Electrical Engineering,
Economics, Social Science, Cognitive Science, Computer Science).
- DIT aims to exploit the complementary expertise present in the various research areas
in order to develop innovative methods and technologies, applications and advanced services.
- English is the official language.
Nuance #1365: Software Engineer (Burlington, MA)
Overview
Nuance Communications, Inc, a worldwide leader in speech and imaging solutions,
has an opening for a senior software engineer to maintain and improve acoustic model
training and testing toolkits in the Dragon R&D department.
The candidate will join a group of talented speech scientists and research engineers
to advance acoustic modeling techniques for Dragon dictation solutions and other
Nuance speech recognition products. We are looking for a self-motivated, goal-driven
individual who has strong programming and software architecture skills.
Responsibilities
• Maintain and improve acoustic modeling toolkit
o Improve efficiency, flexibility and, when appropriate, architecture of training
algorithms
o Improve resource utilization of the toolkit in a large grid computing environment
o Implement new training algorithms in cooperation with speech scientists
o Handle toolkit bug reports and feature requests
o Clean up legacy code, improve code quality and maintainability
o Perform regression tests and release toolkits
o Improve toolkit documentation
• Improve the software implementation of our research testing framework
• Update acoustic modeling and testing toolkits to work with new versions of
speech recognizer
Qualifications
• Bachelor’s or Master’s degree in computer science or electrical engineering
• Strong programming skills in C/C++ and scripting languages (Perl/Python) in a UNIX environment
• Significant experience in creating and maintaining a software toolkit. This includes
version control, bug reporting, testing, and releasing code to a user community.
• Ability to work with a large existing code base
• Good software design and architecture skills
• Attention to detail: ability and interest in getting lots of details right on a work
task
• Desire and ability to be a team player
• Experience with building acoustic models for speech recognition
• Experience with CVS
• Experience coming up to speed on a large existing code base in a short period of time
• Knowledge of speech recognition algorithms, including model training algorithms
Preference will be given to candidates who have experience in maintaining a speech recognition
toolkit. Previous experience in computer administration and grid software management is a
plus.
Please apply on-line
Sr. Research Scientist at Nuance
Nuance, a worldwide leader in imaging, speech and language solutions, has an opening for a research scientist in speech recognition.
The candidate will work on improving the recognition performance of the speech recognition engine and its main applications in Nuance's award-winning dictation products. Dragon NaturallySpeaking® is our market-leading desktop dictation product; the recently released version 8 showed substantial accuracy improvements over previous versions. DragonMT is our new medical transcription server, which brings the benefit of ScanSoft's dictation technology to the problem of machine-assisted medical transcription. We are looking for an individual who wants to solve difficult speech recognition problems and help get those solutions into our products, so that our customers can work more effectively.
Responsibilities
Main responsibilities of the candidate will include:
provide experimental and theoretical analysis of speech recognition problems
formulate new algorithms, create research tools, design and carry out experiments
to verify new algorithms
work with other members in the team to improve the performance of our products and
add new product features to meet business requirements
work with other team members to deliver acoustic models for products
work with development engineers to ensure a high-quality implementation of algorithms
and models in company products
follow developments in speech recognition to keep our research work state-of-the-art
patent new algorithms and write scientific papers when appropriate
Qualifications
Requirements:
Ph.D. or Master's degree in computer science or electrical engineering
good analytical and diagnostic skills
experience with C/C++ and scripting using Perl, Python and csh in a UNIX environment
ability to work with a large existing code base
desire and ability to be a team player
strong desire and demonstrated ability to work on and solve engineering problems.
Preference will be given to candidates with a strong speech recognition background.
Previous involvement in the DARPA EARS project is a plus. New graduates with a good GPA from
top universities are encouraged to apply.
The position will be located in our new headquarters in Burlington, MA,
approximately 15 miles west of Boston. Information about
ScanSoft and its products
can be found online.
Please apply on-line
Research Engineer - Natural Language Understanding- Nuance
Overview
Nuance, a worldwide leader in imaging, speech and language solutions, has an
opening for a research engineer in natural language understanding.
The Core Technology group in NetASR at Nuance builds the technology behind telephone
speech applications. The focus is on call routing and other forms of statistical
semantics. We currently automate 7 billion phone calls a year and have been moving
fairly aggressively towards more open grammars, using a combination of an SLM for
recognition followed by statistical call routing. This is used both for call-center
applications and for directory assistance, where there may be millions
of destinations with limited training data.
The Nuance NLU Group is doing exciting research and product development in C/C++
and we are looking for top talent to join our team.
Responsibilities
The candidate will work in the Network NL group, which develops technology, tools
and runtime software to enable our customers to build speech applications using
natural language. Some of the current problems include:
Generating language models for new applications with little application-specific
training data.
Statistical semantics, e.g. training classifiers for call routing.
Robust parsing and other techniques to extract richer semantics than a routing
destination.
Responsibilities:
The candidate will work on the full product cycle: speak with professional service
engineers or customers to identify NL needs and help with solutions, develop
new algorithms, conduct experiments, and write product quality software to
deliver these new algorithms in the product release cycle.
Qualifications
Strong software skills. C++ required; Perl/Python desirable. Needed both for research code and for product-quality, unit-tested code that we ship.
Advanced degree in computer science or related field.
Experience in natural language processing, especially call routing, language modeling and related areas.
Ability to take initiative, but also follow a plan and work well in a group environment.
A strong desire to make things “really work” in practice.
Please apply on-line
Principal Engineer - Clinical Language Understanding
Overview
Nuance, a worldwide leader in imaging, speech and language solutions,
has an opening for a research engineer in clinical language
understanding.
The Clinical Language Understanding group at Nuance is a
multi-disciplinary team developing a cutting-edge medical fact
extraction engine in Java. Important facts about medications,
problems, and procedures are identified in clinical reports,
classified, and normalized to standard medical vocabularies.
Responsibilities
The person will be responsible for contributing to the on-going
engineering of the Medical Fact Extraction engine. The person will
research methods and technologies for improving engine functionality,
as well as improving accuracy, performance, and reliability. They
will have good software architecture/design skills, to balance API
requirements for both research and deployment configurations. They
will design, code and test new functionality, and will analyze
existing code to extend, optimize and refactor it. The person will
also help maintain and enhance systems used for corpus management,
document annotation, machine learning algorithm development, and
accuracy and performance measurement. They will also work closely
with colleagues in Research and in Development.
Qualifications
* Bachelor's degree in computer science or equivalent -- advanced degree preferred.
* Minimum 5 years of experience in software development, preferably in the areas of information extraction and retrieval, knowledge
management, document management, or natural language processing.
* Excellent software design, development and diagnostic skills, preferably in Java.
* Excellent scripting and prototyping skills, preferably in Perl.
* Excellent knowledge and understanding of XML, XSLT, and related technologies.
* Significant experience with relational databases.
* Demonstrated ability and desire to learn new technologies rapidly.
* Ability to work well in a multi-disciplinary team.
* Good written and verbal communication skills.
In addition, the applicant must have several of the following:
* Experience in designing and implementing complex commercial applications.
* Experience with computational linguistic research and development, especially as
applied to Information Extraction and Retrieval.
* Experience with ontologies and controlled medical vocabularies (e.g. SNOMED).
* Familiarity with clinical documentation standards and medical terminology.
* Experience with applying machine learning approaches.
* Experience in conducting computational and/or technical research is a plus.
* Experience developing user interfaces is a plus.
* Experience with Eclipse and Perforce is a plus.
* Experience with Tomcat, Web services, Servlets is a plus.
* Knowledge of Windows and UNIX is a plus.
Please apply on-line
Computational Linguist, Text-to-Speech Synthesis, Boston area
Location: Boston area (Position AXG-1005)
The Computational Linguist will work with the company's technical team
to develop and integrate linguistic resources and applications for the
company's TTS engine.
Areas of Competence
* Computational Linguistics
* Semantics
* Linguistics
* Speech corpus
* Text corpus
Primary Duties
* Produce and maintain speech corpus, audio data, transcripts and
phonetic dictionary, data annotation, and component/model configuration
management
* Verify existing corpus
* Develop utilities, lexicons, and other language resources for the
company's unique TTS
* Adapt text language parsing and analysis software for new TTS needs
Required skills/experience
* Thorough grounding in phonology, phonetics, lexicography, orthography,
semantics, morphology, syntax, and other branches of linguistics
* Experience with language parsing and analysis software, such as
part-of-speech (POS) and syntactic taggers, semantics, and discourse
analysis
* Experience with formant-based or concatenative speech synthesis
* Experience working on medium-scale, multi-developer software projects
* Experience with development of speech corpus, transcripts, data
annotation, and phonetic dictionary
* Programming experience in C/C++/Matlab/Perl
* Self-motivation and ability to work independently
* Familiarity with concepts and techniques from DSP theory, machine
learning and statistical modeling is a plus
Must have a Master's or PhD in Engineering, Computer Science or Linguistics,
with development or research experience in speech
synthesis/recognition/technology.
Direct your confidential response to:
Arnold L. Garlick III
President
Pacific Search Consultants
(949) 366-9000 Ext. 2#
Please refer to Position AXG-1005
Website
Doctoral (PhD) Positions in the field of Content-based Multimedia Information Retrieval and Management
Department of Computer Science - Faculty of Sciences - University of Geneva - Switzerland
Context:
The Viper group, part of the Computer Vision and Multimedia
Laboratory, has long research experience in content-based multimedia information retrieval
(image, video, text, ...). Its activities have led, among other results, to the development of
interactive demo systems for content-based video (ViCode)
and image (GIFT) retrieval and
multimedia
management. We wish to continue these activities.
Description of posts:
Several doctoral positions are open in connection with international and national project funds
awarded on the basis of our research activities in the broad field of content-based multimedia
information search, retrieval and management. The research performed will contribute directly
to our current and upcoming projects, including ViCode and the Collection Guide (see our main
website for details).
The successful applicants should show knowledge and interest in one or more of the following domains:
* Data mining, statistical data analysis
* Statistical learning
* Signal, image, audio processing
* Knowledge engineering
* Indexing, Databases
* Operations research
Starting date: No later than September 2006.
Salary: CHF 48,000 per annum (1st year)
Supervision: Dr. S. Marchand-Maillet and Dr. E. Bruno
Application: Applications by email are welcome to:
Dr. Eric Bruno
Computer Vision and Multimedia Laboratory
Department of Computer Science, University of Geneva
24, rue du General Dufour, CH-1211 Geneva 4
SWITZERLAND
e-mail.
This announcement (with more information).
JOURNALS
Papers accepted for FUTURE PUBLICATION in Speech Communication
Full text is available at http://www.sciencedirect.com/ for
Speech Communication subscribers and subscribing institutions. Click on
Publications, then on Speech Communication, and then on Articles in Press. The
list of papers in press is displayed, and a .pdf file for each paper is
available.
Sacha Krstulovic, Frédéric Bimbot, Olivier Boëffard, Delphine Charlet, Dominique Fohr and Odile Mella, Optimizing the coverage of a speech database through a selection of representative speaker recordings, Speech Communication, in press (uncorrected proof), available online 28 July 2006.
(Website)
Keywords: Speech database; Cost minimization; Speaker selection; Speaker clustering; Optimal coverage; Multi-models; Speech and speaker recognition; Speech synthesis
Wen Jin and Michael S. Scordilis, Speech enhancement by residual domain constrained optimization, Speech Communication, in press (uncorrected proof), available online 28 July 2006.
(Website)
Keywords: Speech enhancement; Linear prediction; Constrained optimization
Marcos Faundez-Zanuy, Martin Hagmüller and Gernot Kubin, Speaker verification security improvement by means of speech watermarking, Speech Communication, in press (uncorrected proof), available online 25 July 2006.
(Website)
Keywords: Biometric; Speech watermarking; Speaker verification
Geng-xin Ning, Shu-hung Leung, Kam-keung Chu and Gang Wei, A dynamic parameter compensation method for noisy speech recognition, Speech Communication, in press (uncorrected proof), available online 21 July 2006.
(Website)
Keywords: Noisy speech recognition; Model compensation; Dynamic parameter combination
Zekeriya Tufekci, John N. Gowdy, Sabri Gurbuz and Eric Patterson, Applied mel-frequency discrete wavelet coefficients and parallel model compensation for noise-robust speech recognition, Speech Communication, in press (uncorrected proof), available online 21 July 2006.
(Website)
Keywords: Noise robust ASR; Wavelet; Local feature; Feature weighting
Rupal Patel and Maria I. Grigos, Acoustic characterization of the question-statement contrast in 4, 7 and 11 year-old children, Speech Communication, in press (uncorrected proof), available online 21 July 2006.
(Website)
Keywords: Prosody; Children; Acoustics; Speech; Development; Acquisition; Questions; Statements; Intonation
Kentaro Ishizuka and Tomohiro Nakatani, A feature extraction method using subband based periodicity and aperiodicity decomposition with noise robust frontend processing for automatic speech recognition, Speech Communication, in press (uncorrected proof), available online 21 July 2006.
(Website)
Keywords: Speech feature; Noise robust frontend; Subband; Periodicity; Aperiodicity
Tran Huy Dat, Kazuya Takeda and Fumitada Itakura, On-line Gaussian mixture modeling in the log-power domain for signal-to-noise ratio estimation and speech enhancement, Speech Communication, in press (uncorrected proof), available online 21 July 2006.
(Website)
Keywords: Gaussian mixture modeling; Segmental SNR; Log-normal distributions; Cumulative distribution function equalization; Speech enhancement
Mohammed Bahoura and Jean Rouat, Wavelet speech enhancement based on time-scale adaptation, Speech Communication, in press (uncorrected proof), available online 17 July 2006.
(Website)
Keywords: Speech enhancement; Wavelet transform; Teager energy operator; Speech recognition; Adaptive thresholds
S.R. Mahadeva Prasanna, Cheedella S. Gupta and B. Yegnanarayana, Extraction of speaker-specific excitation information from linear prediction residual of speech, Speech Communication, in press (corrected proof), available online 17 July 2006.
(Website)
Keywords: Speaker recognition; Excitation information; LP residual; AANN model; Vocal tract information
Özgül Salor and Mübeccel Demirekler, Dynamic programming approach to voice transformation, Speech Communication, in press (corrected proof), available online 17 July 2006.
(Website)
Keywords: Voice transformation; Speaker transformation; Codebook; Line spectral frequencies; Dynamic programming
Jonathan Darch, Ben Milner and Saeed Vaseghi, MAP prediction of formant frequencies and voicing class from MFCC vectors in noise, Speech Communication, in press (corrected proof), available online 7 July 2006.
(Website)
Keywords: Formant prediction; Formant estimation; MAP prediction; GMM; HMM; DSR
Javier Latorre, Koji Iwano and Sadaoki Furui, New approach to the polyglot speech generation by means of an HMM-based speaker adaptable synthesizer, Speech Communication, in press (corrected proof), available online 23 June 2006.
(Website)
Keywords: Multilingual; Polyglot synthesis; Voice adaptation; Cross-language synthesis; Phone mapping
Frederik Stouten, Jacques Duchateau, Jean-Pierre Martens and Patrick Wambacq, Coping with disfluencies in spontaneous speech recognition: Acoustic detection and linguistic context manipulation, Speech Communication, in press (corrected proof), available online 26 May 2006.
(Website)
Keywords: Disfluency handling; Spontaneous speech recognition; Disfluency detection
Valentin Ion and Reinhold Haeb-Umbach, Uncertainty decoding for distributed speech recognition over error-prone networks, Speech Communication, in press (corrected proof), available online 17 May 2006.
(Website)
Keywords: Distributed speech recognition; Channel error robustness; Soft features; Uncertainty decoding
Esfandiar Zavarehei, Saeed Vaseghi and Qin Yan, Inter-frame modeling of DFT trajectories of speech and noise for speech enhancement using Kalman filters, Speech Communication, in press (corrected proof), available online 25 April 2006.
(Website)
Keywords: Speech enhancement; Kalman filter; AR modeling of DFT; DFT distributions
Antonio Cardenal-López, Carmen García-Mateo and Laura Docío-Fernández, Weighted Viterbi decoding strategies for distributed speech recognition over IP networks, Speech Communication, in press (corrected proof), available online 28 February 2006.
(Website)
Keywords: Distributed speech recognition; Weighted Viterbi decoding; Missing data
Veronique Stouten, Hugo Van hamme and Patrick Wambacq, Model-based feature enhancement with uncertainty decoding for noise robust ASR, Speech Communication, in press (corrected proof), available online 3 February 2006.
(Website)
Keywords: Noise robust speech recognition; Model-based feature enhancement; Additive noise; Convolutional noise; Uncertainty decoding
FUTURE CONFERENCES
Publication policy: Below, you will find very short announcements
of future events. The full calls for participation can be accessed on the
conference websites. See also our web pages (http://www.isca-speech.org/) on
conferences and workshops.
FUTURE INTERSPEECH CONFERENCES
INTERSPEECH 2006-ICSLP
INTERSPEECH 2006 - ICSLP, the Ninth International Conference on
Spoken Language Processing dedicated to the interdisciplinary study
of speech science and language technology, will be held in
Pittsburgh, Pennsylvania, September 17-21, 2006, under the
sponsorship of the International Speech Communication Association
(ISCA).
The INTERSPEECH meetings are considered to be the top international
conferences in speech and language technology, with more than 1000
attendees from universities, industry, and government agencies. They
are unique in that they bring together faculty and students from
universities with researchers and developers from government and
industry to discuss the latest research advances, technological
innovations, and products. The conference offers the prospect of
meeting the future leaders of our field, exchanging ideas, and
exploring opportunities for collaboration, employment, and sales
through keynote talks, tutorials, technical sessions, exhibits, and
poster sessions. In recent years the INTERSPEECH meetings have taken
place in a number of exciting venues including most recently Lisbon,
Jeju Island (Korea), Geneva, Denver, Aalborg (Denmark), and Beijing.
ISCA, together with the INTERSPEECH 2006 - ICSLP organizing
committee, would like to encourage submission of papers for the
upcoming conference in the following areas.
TOPICS OF INTEREST
Linguistics, Phonetics, and Phonology
Prosody
Discourse and Dialog
Speech Production
Speech Perception
Physiology and Pathology
Paralinguistic and Nonlinguistic Information (e.g. Emotional Speech)
Signal Analysis and Processing
Speech Coding and Transmission
Spoken Language Generation and Synthesis
Speech Recognition and Understanding
Spoken Dialog Systems
Single-channel and Multi-channel Speech Enhancement
Language Modeling
Language and Dialect Identification
Speaker Characterization and Recognition
Acoustic Signal Segmentation and Classification
Spoken Language Acquisition, Development and Learning
Multi-Modal Processing
Multi-Lingual Processing
Spoken Language Information Retrieval
Spoken Language Translation
Resources and Annotation
Assessment and Standards
Education
Spoken Language Processing for the Challenged and Aged
Other Applications
Other Relevant Topics
SPECIAL SESSIONS
In addition to the regular sessions, a series of special sessions has
been planned for the meeting. Potential authors are invited to
submit papers for special sessions as well as for regular sessions,
and all papers in special sessions will undergo the same review
process as papers in regular sessions. Confirmed special sessions
and their organizers include:
* The Speech Separation Challenge, Martin Cooke (Sheffield) and Te-Won
Lee (UCSD)
* Speech Summarization, Jean Carletta (Edinburgh) and Julia Hirschberg
(Columbia)
* Articulatory Modeling, Eric Bateson (University of British Columbia)
* Visual Intonation, Marc Swerts (Tilburg)
* Spoken Dialog Technology R&D, Roberto Pieraccini (Tell-Eureka)
* The Prosody of Turn-Taking and Dialog Acts, Nigel Ward (UTEP) and
Elizabeth Shriberg (SRI and ICSI)
* Speech and Language in Education, Patti Price (pprice.com) and Abeer
Alwan (UCLA)
* From Ideas to Companies, Janet Baker (formerly of Dragon Systems)
IMPORTANT DATES
Notification of paper status: June 9, 2006
Early registration deadline: June 23, 2006
Tutorial Day: September 17, 2006
Main Conference: September 18-21, 2006
Further information via Website or
send email
Organizer
Professor Richard M. Stern (General Chair)
Carnegie Mellon University
Electrical Engineering and Computer Science
5000 Forbes Avenue
Pittsburgh, PA 15213-3890
Fax: +1 412 268-3890
Email
INTERSPEECH 2007 - EUROSPEECH, August 27-31, 2007, Antwerp, Belgium
Chairs: Dirk van Compernolle, K.U.Leuven, and Lou Boves, K.U.Nijmegen
Website
Important dates
Proposals for special sessions: November 1, 2006
Proposals for tutorials: January 8, 2007
Four-page paper deadline: March 23, 2007
Notification of paper acceptance: May 25, 2007
Early registration deadline: June 22, 2007
Tutorial Day: August 27, 2007
Main conference: August 28-31, 2007
INTERSPEECH 2008 - ICSLP, September 22-26, 2008, Brisbane, Queensland, Australia
Chairman: Denis Burnham, MARCS, University of Western Sydney.
INTERSPEECH 2009 - EUROSPEECH, Brighton, UK
Chairman: Prof. Roger Moore, University of Sheffield.
FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOPS (ITRW)
ITRW on Experimental Linguistics
28-30 August 2006, Athens, Greece
CALL FOR PAPERS
AIMS
The general aims of the Workshop are to bring together researchers of
linguistics and related disciplines in a unified context, as well as to
discuss the development of experimental methodologies in linguistic
research with reference to linguistic theory, linguistic models and
language applications.
SUBJECTS AND RELATED DISCIPLINES
1. Theory of language 2. Cognitive linguistics 3. Neurolinguistics
4. Speech production 5. Speech acoustics 6. Phonology 7. Morphology
8. Syntax 9. Prosody 10. Speech perception 11. Psycholinguistics
12. Pragmatics 13. Semantics 14. Discourse linguistics
15. Computational linguistics 16. Language technology
MAJOR TOPICS
I. Lexicon II. Sentence III. Discourse
IMPORTANT DATES
1 February 2006: deadline for abstract submission
1 March 2006: notification of acceptance
1 April 2006: registration
1 May 2006: camera-ready paper submission
28-30 August 2006: Workshop
CHAIRS
Antonis Botinis, University of Athens, Greece
Marios Fourakis, University of Wisconsin-Madison, USA
Barbara Gawronska, University of Skövde, Sweden
ORGANIZING COMMITTEE
Aikaterini Bakakou-Orphanou, University of Athens
Antonis Botinis, University of Athens
Christoforos Charalambakis, University of Athens
SECRETARIAT
ISCA Workshop on Experimental Linguistics
Department of Linguistics, University of Athens
GR-15784 Athens, GREECE
Tel.: +302107277668, Fax: +302107277029
e-mail
Workshop site address
2nd ITRW on PERCEPTUAL
QUALITY OF SYSTEMS Berlin, Germany, 4 - 6 September 2006
WORKSHOP AIMS
The quality of systems which address human perception is difficult to describe.
Since quality is not an inherent property of a system, users have to decide on
what is good or bad in a specific situation. An engineering approach to quality
includes the consideration of how a system is perceived by its users, and how
the needs and expectations of the users develop. Thus, quality assessment
and prediction have to take the relevant human perception and judgement
factors into account. Although significant progress has been made in several
areas affecting quality within the last two decades, there is still no
consensus on the definition of quality and its contributing components,
nor on assessment, evaluation and prediction methods.
Perceptual quality is attributed to all systems and services which involve
human perception. Telecommunication services directly provoke such perceptions:
Speech communication services (telephone, Voice over IP), speech technology
(synthesis, spoken dialogue systems), as well as multimodal services and
interfaces (teleconference, multimedia on demand, mobile phones, PDAs).
However, the situation is similar for the perception of other products,
like machines, domestic devices, or cars. An integrated view of system quality
makes use of knowledge gained in different disciplines and may therefore help
to find general underlying principles. This will help increase the usability
and perceived quality of systems and services, and ultimately yield better acceptance.
The workshop is intended to provide an interdisciplinary exchange of ideas between
both academic and industrial researchers working on different aspects of perceptual
quality of systems. Papers are invited which refer to methodological aspects of
quality and usability assessment and evaluation, the underlying perception and
judgment processes, as well as to particular technologies, systems or services.
Perception-based as well as instrumental approaches will complement each other
in giving a broader picture of perceptual quality. It is expected that this will
help technology providers to develop successful, high-quality systems and services.
WORKSHOP TOPICS
The following non-exhaustive list gives examples of topics which are
relevant for the workshop, and for which papers are invited:
- Methodologies and Methods of Quality Assessment and Evaluation
- Metrology: Test Design and Scaling
- Quality of Speech and Music
- Quality of Multimodal Perception
- Perceptual Quality vs. Usability
- Semio-Acoustics and -Perception
- Quality and Usability of Speech Technology Devices
- Telecommunication Systems and Services
- Multi-Modal User Interfaces
- Virtual Reality
- Product-Sound Quality
IMPORTANT DATES
April 15, 2006 (updated): Abstract submission (approx. 800 words)
May 15, 2006: Notification of acceptance
June 15, 2006: Submission of the camera-ready paper (max. 6 pages)
September 4-6, 2006: Workshop
WORKSHOP VENUE
The workshop will take place in the "Harnack-Haus", a villa-like
conference center located in the quiet western part of Berlin, near the
Free University. As long as space permits, all participants will be
accommodated in this center. Accommodation and meals are included in
the workshop fees. The center is run by the Max-Planck-Gesellschaft and
can easily be reached from all three airports of Berlin (Tegel/TXL,
Schönefeld/SXF and Tempelhof/THF). Details on the venue,
accommodation and transportation will be announced soon.
PROCEEDINGS
CD workshop proceedings will be available upon registration at the
conference venue and subsequently on the workshop web site.
LANGUAGE
The official language of the workshop will be English.
LOCAL WORKSHOP ORGANIZATION
Ute Jekosch (IAS, Technical University of Dresden)
Sebastian Möller (Deutsche Telekom Labs, Technical University of Berlin)
Alexander Raake (Deutsche Telekom Labs, Technical University of Berlin)
CONTACT INFORMATION
Sebastian Möller, Deutsche Telekom Labs, Ernst-Reuter-Platz 7,
D-10587 Berlin, Germany
phone +49 30 8353 58465, fax +49 30 8353 58409
Website
ITRW on Statistical and Perceptual Audition (SAPA 2006)
A satellite workshop of INTERSPEECH 2006 - ICSLP
September 16, 2006, Pittsburgh, PA, USA
Website
This will be a one-day workshop with a limited number of oral presentations,
chosen for breadth and provocation, and an informal atmosphere to promote
discussion. We hope that the participants in the workshop will be exposed
to a broader perspective, and that this will help foster new research and
interesting variants on current approaches.
Topics: generalized audio analysis, speech analysis, music analysis,
audio classification, scene analysis, signal separation, speech
recognition, multi-channel analysis.
In all cases, preference will be given to papers that clearly involve
both perceptually-defined or perceptually-related problems, and statistical
or machine-learning based solutions.
Important dates
Paper submission deadline (4-6 pages, double column): April 21, 2006
Notification of acceptance: June 9, 2006
NOLISP'07: Non-Linear Speech Processing
May 22-25, 2007 , Paris, France
6th ISCA Speech Synthesis Research Workshop (SSW-6)
Bonn, Germany, August 22-24, 2007
A satellite of INTERSPEECH 2007 (Antwerp), in collaboration with SynSIG.
Details will be posted by early 2007. Contact: Prof. Wolfgang Hess
ITRW on Robustness
November 2007, Santiago, Chile
FORTHCOMING EVENTS SUPPORTED (but not organized) by ISCA
IV Jornadas en Tecnologia del Habla
Zaragoza, Spain
November 8-10, 2006
Website
Call for papers: International Workshop on Spoken Language Translation (IWSLT 2006)
Evaluation campaign for language translation
Palulu Plaza Kyoto (right in front of Kyoto Station), Japan
November 30-December 1, 2006 Website
Spoken language translation technologies attempt to cross the language
barrier between people who have different native languages and want
to converse, each using their mother tongue. Spoken language
translation has to deal with the problems of automatic speech recognition
(ASR) and machine translation (MT).
One of the prominent research activities in spoken language translation
is the work being conducted by the Consortium for Speech Translation
Advanced Research (C-STAR III), which is an international partnership of
research laboratories engaged in automatic translation of spoken language.
Current members include ATR (Japan), CAS (China), CLIPS (France), CMU (USA),
ETRI (Korea), ITC-irst (Italy), and UKA (Germany).
A multilingual speech corpus comprising tourism-related sentences (BTEC*)
has been created by the C-STAR members, and parts of this corpus were already
used for previous IWSLT workshops, which focused on the evaluation of MT
results using text input and on the translation of ASR output (word lattices,
N-best lists) using read speech as input. The full BTEC* corpus consists of
160K sentence-aligned text data, and parts of the corpus will be provided
to the participants for training purposes.
In this workshop, we focus on the translation of spontaneous speech, which
includes ill-formed utterances due to grammatical errors, incomplete
sentences, and redundant expressions. The impact of these spontaneity aspects
on ASR and MT system performance, as well as the robustness of state-of-
the-art MT engines to speech recognition errors, will be investigated
in detail.
Two types of submissions are invited:
1) papers from participants in the evaluation campaign of spoken language
translation technologies,
2) technical papers on related issues.
Evaluation campaign (see details on our website)
Each participant in the evaluation campaign is requested to submit a paper
describing the utilized ASR and MT systems and to report results using
the provided test data.
Technical Paper Session
The workshop also invites technical papers related to spoken language
translation. Possible topics include, but are not limited to:
+ Spontaneous speech translation
+ Domain and language portability
+ MT using comparable and non-parallel corpora
+ Phrase alignment algorithms
+ MT decoding algorithms
+ MT evaluation measures
Important Dates
+ Evaluation Campaign
May 12, 2006 -- Training Corpus Release
August 1, 2006 -- Test Corpus Release [00:01 JST]
August 3, 2006 -- Result Submission Due [23:59 JST]
September 15, 2006 -- Result Feedback to Participants
September 29, 2006 -- Paper Submission Due
October 14, 2006 -- Notification of Acceptance
October 27, 2006 -- Camera-ready Submission Due
- system registrations will be accepted until release of
test corpus
- late result submissions will be treated as unofficial
result submissions
+ Technical Papers
July 21, 2006 -- Paper Submission Due [23:59 JST]
September 29, 2006 -- Notification of Acceptance
October 27, 2006 -- Camera-ready Submission Due
Contact
Michael Paul
ATR Spoken Language Communication Research Laboratories
2-2-2 Hikaridai, Keihanna Science City, Kyoto 619-0288 Japan
Call for papers - International Symposium on Chinese Spoken Language Processing (ISCSLP'2006)
Singapore, Dec. 13-16, 2006
Conference website
Topics
ISCSLP'06 will feature world-renowned plenary speakers, tutorials, exhibits,
and a number of lecture and poster sessions on the following topics:
* Speech Production and Perception
* Phonetics and Phonology
* Speech Analysis
* Speech Coding
* Speech Enhancement
* Speech Recognition
* Speech Synthesis
* Language Modeling and Spoken Language Understanding
* Spoken Dialog Systems
* Spoken Language Translation
* Speaker and Language Recognition
* Indexing, Retrieval and Authoring of Speech Signals
* Multi-Modal Interface including Spoken Language Processing
* Spoken Language Resources and Technology Evaluation
* Applications of Spoken Language Processing Technology
* Others
The official language of ISCSLP is English. The regular papers will be
published as a volume in the Springer LNAI series, and the poster papers
will be published in a companion volume. Authors are invited to submit
original, unpublished work on all the aspects of Chinese spoken language
processing.
The conference will also organize four special sessions:
* Special Session on Rich Information Annotation and Spoken Language
Processing
* Special Session on Robust Techniques for Organizing and Retrieving
Spoken Documents
* Special Session on Speaker Recognition
* Special Panel Session on Multilingual Corpus Development
Schedule
* Full paper submission by Jun. 15, 2006
* Notification of acceptance by Jul. 25, 2006
* Camera ready papers by Aug. 15, 2006
* Early registration by Nov. 1, 2006
Please visit the conference website for
more details.
ISCSLP 2006-Special session on speaker recognition
Singapore, Dec 13-16, 2006
Website
Chair:
Dr Thomas Fang Zheng, Tsinghua Univ., Beijing.
Speaker recognition (or voiceprint recognition, VPR) is one of the most
important branches of speech processing. Its applications are spreading into
ever more fields, such as public security, anti-terrorism, justice,
telephone banking, and personal services. However, many fundamental and
theoretical problems remain to be solved, such as background noise,
cross-channel conditions, multiple speakers, and short speech segments for
training and testing.
The purpose of this special session is to invite researchers in this field
to present their state-of-the-art technical achievements. Papers are invited
on, but not limited to, the following topics:
* Text-dependent and text-independent speaker identification
* Text-dependent and text-independent speaker verification
* Speaker detection
* Speaker segmentation
* Speaker tracking
* Speaker recognition systems and applications
* Resource creation for speaker recognition
This special session also provides a platform for developers in this field
to evaluate their speaker recognition systems using the same database
provided by this special session. Evaluation of speaker recognition systems
will cover the following tasks:
* Text-independent speaker identification
* Text-dependent and text-independent speaker verification
* Text-independent cross-channel speaker identification
* Text-dependent and text-independent cross-channel speaker
verification
Final details on these tasks (including evaluation criteria) will be made
available in due course. The development and testing data will be provided
by the Chinese Corpus Consortium (CCC). The data sets will be extracted from
two CCC databases, which are CCC-VPR3C2005 and CCC-VPR2C2005-1000.
Participants are required to submit a full paper to the conference
describing their algorithms, systems and results.
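For orientation, a single verification trial in such an evaluation amounts to
scoring a test utterance against an enrolled speaker model and comparing the
score to a threshold. The sketch below is a generic illustration of that trial
structure only, not the CCC evaluation protocol; the made-up feature vectors,
the averaging-based enrollment and the cosine scoring are all assumptions
chosen for the example.

# Generic sketch of a speaker-verification trial: enroll a speaker from a few
# utterances, then accept/reject a test utterance by thresholding a cosine score.
# Feature extraction is assumed to happen elsewhere; the vectors here are made up.
import math
from typing import List

def enroll(utterance_vectors: List[List[float]]) -> List[float]:
    """Average per-utterance feature vectors into a simple speaker model."""
    dim = len(utterance_vectors[0])
    return [sum(v[i] for v in utterance_vectors) / len(utterance_vectors)
            for i in range(dim)]

def cosine_score(model: List[float], test: List[float]) -> float:
    dot = sum(m * t for m, t in zip(model, test))
    norm = math.sqrt(sum(m * m for m in model)) * math.sqrt(sum(t * t for t in test))
    return dot / norm if norm else 0.0

def verify(model: List[float], test: List[float], threshold: float = 0.6) -> bool:
    return cosine_score(model, test) >= threshold

if __name__ == "__main__":
    speaker_model = enroll([[0.9, 0.1, 0.3], [1.1, 0.2, 0.2]])
    print(verify(speaker_model, [1.0, 0.15, 0.25]))   # same-speaker trial -> True
    print(verify(speaker_model, [-0.2, 0.9, -0.5]))   # impostor trial -> False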
Schedule for this special session
* Feb. 01, 2006: On-line registration open, CLOSED on May 1st, 2006
* May. 01, 2006: Development data made available to participants
* May. 21, 2006 (revised): Test data made available to participants
* Jun. 7, 2006 (revised): Test results due at CCC
* Jun. 10, 2006: Results released to participants
* Jun. 15, 2006: Papers due (using ISCSLP standard format)
* Jul. 25, 2006: The full set of the two databases made available to
the participants of this special session upon request
* Dec. 16, 2006: Conference presentation
This special session is organized by the CCC.
Please address your enquiries to Dr. Thomas Fang Zheng.
Download the Speaker Recognition Evaluation Registration Form
top
FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS
2006 IEEE International Workshop on Machine Learning for Signal Processing
(formerly the IEEE Workshop on Neural Networks for Signal Processing)
September 6-8, 2006, Maynooth, Ireland
MLSP'2006 webpage
The sixteenth in a series of IEEE workshops on Machine Learning for Signal
Processing (MLSP) will be held in Maynooth, Ireland, September 6-8, 2006.
Maynooth is located 15 miles west of Dublin in Co. Kildare, Ireland's
equestrian and golfing heartland (and home to the 2006 Ryder Cup). It is a
pleasant 18th-century planned town, best known for its seminary, St. Patrick's
College, where Catholic priests have been trained since 1795. The workshop,
formerly known as Neural Networks for Signal Processing (NNSP), is sponsored
by the IEEE Signal Processing Society (SPS) and organized by the MLSP
technical committee of the IEEE SPS. The name of the NNSP technical committee,
and hence the workshop, was changed to Machine Learning for Signal Processing
in September 2003 to better reflect the areas represented by the technical
committee.
Topics
The workshop will feature keynote addresses, technical presentations, special
sessions and tutorials, all of which are included in the registration. Papers
are solicited for, but not limited to, the following areas: Learning Theory
and Modeling; Bayesian Learning and Modeling; Sequential Learning; Sequential
Decision Methods; Information-theoretic Learning; Neural Network Learning;
Graphical and Kernel Models; Bounds on Performance; Blind Signal Separation
and Independent Component Analysis; Signal Detection; Pattern Recognition and
Classification; Bioinformatics Applications; Biomedical Applications and
Neural Engineering; Intelligent Multimedia and Web Processing; Communications
Applications; Speech and Audio Processing Applications; Image and Video
Processing Applications.
A data analysis and signal processing competition is being organized in
conjunction with the workshop. The competition is envisioned to become an
annual event in which problems relevant to the mission and interests of the
MLSP community are presented, with the goal of advancing the current state of
the art in both theoretical and practical aspects. The problems are selected
to reflect current trends, to evaluate existing approaches on common
benchmarks, and to highlight areas where crucial developments are thought to
be necessary. Details of the competition can be found on the workshop
website. Selected papers from MLSP 2006 will be considered for a special
issue of Neurocomputing to appear in 2007. The winners of the data analysis
and signal processing competition will also be invited to contribute to the
special issue.
Paper Submission Procedure
Prospective authors are invited to submit a double-column paper of up to six
pages using the electronic submission procedure described at the workshop
homepage. Accepted papers will be published in a bound volume by the IEEE
after the workshop, and a CD-ROM volume will be distributed at the workshop.
Chairs
General Chair: Seán MCLOONE, NUI Maynooth
Technical Chair: Tülay ADALI, University of Maryland, Baltimore County
Workshop on Multimedia Content Representation, Classification and Security (MRCS)
September 11-13, 2006, Istanbul, Turkey
Workshop website
In cooperation with the International Association for Pattern Recognition
(IAPR) and the European Association for Signal-Image Processing (EURASIP)
GENERAL CHAIRS
Bilge Gunsel, Istanbul Technical Univ., Turkey
Anil K. Jain, Michigan State University, USA
TECHNICAL PROGRAM CHAIR
Murat Tekalp, Koc University, Turkey
SPECIAL SESSIONS CHAIR
Kivanc Mihcak, Microsoft Research, USA
Prospective authors are invited to submit extended summaries of not more than
six (6) pages including results, figures and references. Submitted papers
will be reviewed by at least two members of the program committee. Conference
proceedings will be available on site. Please check the website for further
information.
IMPORTANT DATES
Notification of Acceptance: June 10, 2006
Camera-ready Paper Submission Due: July 10, 2006
Topics
The areas of interest include but are not limited to:
- Feature extraction, multimedia content representation and classification techniques
- Multimedia signal processing
- Authentication, content protection and digital rights management
- Audio/Video/Image Watermarking/Fingerprinting
- Information hiding, steganography, steganalysis
- Audio/Video/Image hashing and clustering techniques
- Evolutionary algorithms in content-based multimedia data representation, indexing and retrieval
- Transform domain representations
- Multimedia mining
- Benchmarking and comparative studies
- Multimedia applications (broadcasting, medical, biometrics, content-aware networks, CBIR)
Workshop on Speech in Mobile and Pervasive Environments
(in conjunction with ACM Mobile HCI '06)
Espoo, Finland
September 12, 2006
Workshop website
Organisers
* Amit A. Nanavati, IBM India Research Laboratory.
* Nitendra Rajput, IBM India Research Laboratory.
* Alexander I. Rudnicky, Carnegie Mellon University.
* Roberto Sicconi, IBM T.J. Watson Research Center.
Programme Committee
* Abeer Alwan, UCLA, USA.
* Peter Boda, Nokia Research Center, Finland.
* Shrikanth S. Narayanan, USC, USA.
* David Pearce, Motorola, UK.
* Harry Printz, Promptu, USA.
* Markku Turunen, University of Tampere, Finland.
Theme
Traditionally, voice-based applications have been accessed from dumb
telephone devices through Voice Browsers that reside on the server. With the
proliferation of pervasive devices and the increase in their processing
capabilities, client-side speech processing is emerging as a viable
alternative. This workshop will explore the various issues that arise
when doing speech processing on resource-constrained, possibly mobile
devices. The workshop will highlight the many open areas that require
research attention, identify key problems that need to be addressed, and
discuss a few approaches to solving some of them, in order to enable
the next generation of conversational systems.
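A central design question here is how to split the work between device and
server. The sketch below illustrates one commonly discussed option, sending
compact frame-level features instead of raw audio; the frame size, the
log-energy feature and the 8-bit quantiser are arbitrary choices made for the
illustration and do not correspond to any particular standard.

# Illustrative split of a speech front-end between a mobile client and a server:
# the client computes compact per-frame features and sends those instead of raw
# audio. Parameters (frame size, 8-bit quantisation) are arbitrary for the sketch.
import math
from typing import List

FRAME_LEN = 160  # samples per frame (e.g. 20 ms at 8 kHz)

def client_front_end(samples: List[float]) -> bytes:
    """Compute log-energy per frame and quantise to one byte per frame."""
    features = []
    for start in range(0, len(samples) - FRAME_LEN + 1, FRAME_LEN):
        frame = samples[start:start + FRAME_LEN]
        energy = sum(s * s for s in frame) / FRAME_LEN
        log_e = math.log10(energy + 1e-10)              # roughly in [-10, 0]
        q = max(0, min(255, int((log_e + 10.0) * 25)))  # crude 8-bit quantiser
        features.append(q)
    return bytes(features)  # payload sent over the network

def server_back_end(payload: bytes) -> List[float]:
    """Dequantise the features; a real server would run recognition on them."""
    return [q / 25.0 - 10.0 for q in payload]

if __name__ == "__main__":
    audio = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(1600)]
    payload = client_front_end(audio)
    print(f"{len(audio)} samples reduced to {len(payload)} feature bytes")
    print(server_back_end(payload)[:3])

The payload in this toy example is far smaller than the raw waveform, which
is the usual argument for running the front-end on the device and leaving the
heavier decoding to the server.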
Topics of Interest
All areas that enable, optimise or enhance Speech in mobile and pervasive
environments and devices. Possible areas include, but are not restricted
to:
* Robust Speech Recognition in Noisy and Resource-constrained Environments
* Memory/Energy Efficient Algorithms
* Multimodal User Interfaces for Mobile Devices
* Protocols and Standards for Speech Applications
* Distributed Speech Processing
* Mobile Application Adaptation and Learning
* Prototypical System Architectures
* User Modelling
Intended Audience
This cross-disciplinary burgeoning area invites researchers interested in
any aspect of the intersection of Speech processing and Mobile computing
-- speech recognition, speech synthesis, multimodal interfaces, mobile
HCI, distributed speech processing, mobile applications, voice user
interface design, memory/energy efficient algorithms -- to meet and pave
the way forward. We anticipate a good mix of industrial and academic
participation which should lead to lively discussions. There appear to be
many isolated groups of people working in these areas who need to come
together as a community.
Submissions
We invite position papers (up to 8 pages; shorter papers are also
welcome). Electronic submission is required. Submissions should be
formatted according to the
ACM SIG style.
All submissions should be in PDF (preferred) or Postscript format.
If any of these requirements is a problem for you, please feel free to
contact
the workshop organisers.
We also welcome participation without paper submission.
Please email your submissions and participation requests to:
Amit A. Nanavati
Nitendra Rajput
Note: Registration for this workshop is included in the MobileHCI'06
conference registration fee. Alternatively, participants can register only
for the workshop at the one-day registration fee.
Seed Questions
* How do we construct speech systems with small footprints of memory and
power consumption?
* How to do speech recognition in noisy environments?
* How can we distribute processing more efficiently given the increased
available computing power on handhelds?
* How do we make such devices adapt automatically to the user, task and
environment?
* What novel applications and services can be deployed on such devices?
* What are the acoustic and linguistic implications?
* Evaluation, benchmarks and performance modelling of mobile speech
systems.
Key Dates
* Position Paper Submission Deadline: June 30, 2006
* Notification of Acceptance: July 10, 2006
* Early Registration Deadline: July 20, 2006
* Workshop Timing: 8:30AM -- 6:00PM, September 12, 2006.
Websites
* SiMPE Workshop
* ACM Mobile HCI '06
Ninth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2006)
Brno, Czech Republic, 11-15 September 2006
Website
The conference is organized by the Faculty of Informatics, Masaryk University,
Brno, and the Faculty of Applied Sciences, University of West Bohemia, Pilsen.
The conference is supported by the International Speech Communication
Association.
TSD SERIES
The TSD series evolved as a prime forum for interaction between researchers in
both spoken and written language processing from the former Eastern Bloc
countries and their Western colleagues. Proceedings of TSD form a book
published by Springer-Verlag in their Lecture Notes in Artificial Intelligence
(LNAI) series.
TOPICS
Topics of the conference will include (but are not limited to):
- text corpora and tagging
- transcription problems in spoken corpora
- sense disambiguation
- links between text and speech oriented systems
- parsing issues, especially parsing problems in spoken texts
- multi-lingual issues, especially multi-lingual dialogue systems
- information retrieval and information extraction
- text/topic summarization
- machine translation
- semantic networks and ontologies
- semantic web
- speech modeling
- speech segmentation
- speech recognition
- search in speech for IR and IE
- text-to-speech synthesis
- dialogue systems
- development of dialogue strategies
- prosody in dialogues
- emotions and personality modeling
- user modeling
- knowledge representation in relation to dialogue systems
- assistive technologies based on speech and dialogue
- applied systems and software
- facial animation
- visual speech synthesis
Papers on the processing of languages other than English are strongly
encouraged.
ORGANIZERS
Frederick Jelinek, USA (general chair)
Hynek Hermansky, USA (executive chair)
KEYNOTE SPEAKERS
Eduard Hovy, USA
Louise Guthrie, GB
James Pustejovsky, USA
FORMAT OF THE CONFERENCE
The conference program will include presentations of invited papers, oral
presentations, and poster/demonstration sessions. Papers will be presented in
plenary or topic-oriented sessions. Social events, including a trip in the
vicinity of Brno, will allow for additional informal interactions.
CONFERENCE PROGRAM
The conference program will include oral presentations and
poster/demonstration sessions with sufficient time for discussion of the
issues raised. The conference will welcome three keynote speakers - Eduard
Hovy, Louise Guthrie and James Pustejovsky - and it will offer two special
panels devoted to Emotions and Search in Speech.
IMPORTANT DATES
May 15, 2006 .............. Notification of acceptance
May 31, 2006 .............. Final papers (camera-ready) and registration
July 23, 2006 ............. Submission of demonstration abstracts
July 30, 2006 ............. Notification of acceptance for demonstrations sent to the authors
September 11-15, 2006 ..... Conference date
The contributions to the conference will be published in proceedings that
will be made available to participants at the time of the conference.
OFFICIAL LANGUAGE
The official language of the conference will be English.
ADDRESS
All correspondence regarding the conference should be addressed to:
Dana Hlavackova, TSD 2006
Faculty of Informatics, Masaryk University
Botanicka 68a, 602 00 Brno, Czech Republic
phone: +420-5-49 49 33 29
fax: +420-5-49 49 18 20
email
LOCATION
Brno is the second largest city in the Czech Republic, with a population of
almost 400,000, and is the country's judiciary and trade-fair center. Brno is
the capital of Moravia, in the south-east part of the Czech Republic. It has
been a royal city since 1347, and with its six universities it forms a
cultural center of the region. Brno can be reached easily by direct flights
from London and Munich and by train or bus from Prague (200 km) or Vienna
(130 km).
MMSP-06
IEEE Signal Processing Society 2006 International Workshop
on Multimedia Signal Processing (MMSP06),
October 3-6, 2006,
Fairmount Empress Hotel, Victoria, BC, Canada
Website
MMSP-06 will feature:
- A Student Paper Contest with awards sponsored by Microsoft Research. To
enter the contest, a paper submission must have a student as the first
author
- Overview sessions that consist of papers presenting the state-of-the-art
in methods and applications for selected topics of interest in multimedia
signal processing
- Wrap-up presentations that summarize the main contributions of the papers
accepted at the workshop, hot topics and current trends in multimedia
signal processing
- New content requirements for the submitted papers
- New review guidelines for the submitted papers
SCOPE
Papers are solicited for, but not limited to, the general areas:
- Multimedia Processing (modalities: audio, speech, visual, graphics,
other; processing: pre- and post- processing of multimodal data, joint
audio/visual and multimodal processing, joint source/channel coding, 2-D
and 3-D graphics/geometry coding and animation, multimedia streaming)
- Multimedia Databases (content analysis, representation, indexing,
recognition, and retrieval)
- Multimedia Security (data hiding, authentication, and access control)
- Multimedia Networking (priority-based QoS control and scheduling, traffic
engineering, soft IP multicast support, home networking technologies,
wireless technologies)
- Multimedia Systems Design, Implementation and Applications (design:
distributed multimedia systems, real-time and non real-time systems;
implementation: multimedia hardware and software; applications:
entertainment and games, IP video/web conferencing, wireless web, wireless
video phone, distance learning over the Internet, telemedicine over the
Internet, distributed virtual reality)
- Human-Machine Interfaces and Interaction using multiple modalities
- Human Perception (including integration of art and technology)
- Standards
SCHEDULE
- Notification of acceptance by: June 8,
2006
- Camera-ready paper submission by: July 8, 2006
(Instructions for Authors)
Check the workshop website
for updates.
CFP Fifth Slovenian and First International
LANGUAGE TECHNOLOGIES CONFERENCE
IS-LTC 2006
Slovenian Language Technologies Society
Information Society - IS 2006
Ljubljana, Slovenia/October 9 - 10, 2006
conference website
The Slovenian Language Technologies Society invites contributions to its
biennial conference to be held in the scope of the Information Society -
IS 2006, taking place October 9 - 13, 2006 at the Jožef Stefan Institute
in Ljubljana, Slovenia.
The official languages of the conference are English and Slovene. The
conference will be organised in two tracks, one for contributions in
English, and the other for those in Slovenian. The accepted papers will
be published in printed proceedings, as well as on-line, on the conference
Web site http://nl.ijs.si/is-ltc06/.
Conference Topics
We invite papers from academia, government, and industry on all areas of
traditional interest to the HLT community, as well as related fields,
including but not limited to:
* development, standardisation and use of language resources
* speech technologies
* machine translation and other multi- and cross-lingual processing
* semantic web and knowledge representation related HLT
* statistical and machine learning of language models
* information retrieval and extraction, question answering
* HLT applications
* presentations of HLT related projects
Invited speakers
Nick Campbell,
Chief Researcher, Media Information Science Laboratories
ATR, Japan
Steven Krauwer,
Coordinator of ELSNET (European Network of Excellence in Human
Language Technologies)
Utrecht University, Netherlands
Title of talk: Strengthening the smaller languages in Europe
Guidelines for Submissions
Submitted papers should present original research relevant to the field
of human language technologies. Overview papers on HLT research and
applications are also welcome.
The contributions should be written in English or Slovene. They should
be 4 or 6 pages long and formatted according to the conference style
guidelines, which are available from the conference Web site.
The papers will be published in printed proceedings, as well as on-line,
on the conference Web site. Some papers will be chosen for re-submission
to the journal Informatica.
Important Dates
June 25th paper submission deadline
September 15th camera ready submission
October 9 - 10 conference
Organising Committee
Tomaž Erjavec, Jožef Stefan Institute
Vojko Gorjanc, University of Ljubljana
Jerneja Žganec Gros, Alpineon
Information
Up to date information is available at http://nl.ijs.si/is-ltc06/
or email.
Call for papers - 9th DIMACS Implementation Challenge Workshop: Shortest Paths
WEBSITE
Goals
Shortest path problems are among the most fundamental combinatorial
optimization problems, with many applications, both direct and as
subroutines in other combinatorial optimization algorithms. Algorithms for
these problems have been studied since the 1950s and still remain an active
area of research. One goal of this Challenge is to create a reproducible
picture of the state of the art in the area of shortest path algorithms.
To this end we are identifying a standard set of benchmark instances and
generators, as well as benchmark implementations of well-known shortest
path algorithms. Another goal is to enable current researchers to compare
their codes with each other, in the hope of identifying the most effective of
the recent algorithmic innovations that have been proposed. The final goal
is to publish proceedings containing results presented at the Challenge
Workshop, and a book containing the best of the proceedings papers.
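As a point of reference for the non-negative, single-source case listed in
the scope below, a plain textbook implementation of Dijkstra's algorithm with
a binary heap looks as follows; this is only an illustrative baseline, not
one of the Challenge's tuned benchmark codes.

# Dijkstra's algorithm for single-source shortest paths with non-negative arc
# lengths, using a binary heap. A plain textbook version, not a benchmark code.
import heapq
from typing import Dict, List, Tuple

Graph = Dict[int, List[Tuple[int, float]]]  # node -> list of (neighbor, length)

def dijkstra(graph: Graph, source: int) -> Dict[int, float]:
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, length in graph.get(u, []):
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

if __name__ == "__main__":
    g: Graph = {1: [(2, 7.0), (3, 9.0), (6, 14.0)],
                2: [(3, 10.0), (4, 15.0)],
                3: [(4, 11.0), (6, 2.0)],
                4: [(5, 6.0)],
                6: [(5, 9.0)]}
    print(dijkstra(g, 1))  # shortest distances from node 1

For arbitrary arc lengths with possible negative cycles, a label-correcting
method such as Bellman-Ford would be used instead.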
Scope
The Challenge addresses a wide range of shortest path problems, including
all sensible combinations of the following:
* Point-to-point, single-source, all-pairs.
* Non-negative arc lengths and arbitrary arc lengths (including negative
cycle detection).
* Directed and undirected graphs.
* Static and dynamic problems. The latter include those dynamic in the CS sense
(arc additions, deletions, length changes) and those dynamic in the OR sense
(arc transit times depending on arrival times).
* Exact and approximate shortest paths.
* Compact routing tables and shortest path oracles.
Implementations on any platform of interest, for example desktop machines,
supercomputers, and handheld devices, are encouraged.
How to participate
People interested in submitting papers to the Challenge Workshop can find
benchmark instances, generators, and code for the problems they address at
the Challenge website, along with detailed information on file formats.
Your work can take two different directions.
1. Defining instances for algorithm evaluation. The instances should be
natural and interesting. By the latter we mean instances on which good
algorithms behave differently than they do on other instances. Interesting
real-life application data are especially welcome.
2. Algorithm evaluation. Descriptions of implementations of algorithms
with experimental data that support conclusions about practical
performance. Common benchmark instances and codes should be used so that
there is common ground for comparison. The most obvious way for such a
paper to be interesting (and selected for the proceedings) is if the
implementation improves on the state of the art. However, there may be other
ways to produce an interesting paper, for example by showing that an
approach that looks good in theory does not work well in practice, and
explaining why this is the case.
Challenge Book
The best papers presented at the Challenge Workshop will be selected for
publication in a book published in the DIMACS Book Series.
Important dates
- August 25, 2006:
Paper submission deadline
- September 25, 2006:
Author notification
- November 13-14, 2006:
Challenge Workshop, DIMACS Center, Rutgers University, Piscataway, NJ
Organizing Committee
Camil Demetrescu, University of Rome "La Sapienza"
Andrew Goldberg, Microsoft Research
David Johnson, AT&T Labs - Research
Advisory Committee
Paolo Dell'Olmo, University of Rome "La Sapienza"
Irina Dumitrescu, University of New South Wales
Mikkel Thorup, AT&T Labs-Research
Dorothea Wagner, Universitaet Karlsruhe
Call for papers 8th International Conference on Signal Processing
Nov. 16-20, 2006, Guilin, China
website
The 8th International Conference on Signal Processing will be held in Guilin,
China on Nov. 16-20, 2006. It will include sessions on all aspects of theory,
design and applications of signal processing. Prospective authors are invited to
submit papers in, but not limited to, the following areas:
A. Digital Signal Processing (DSP)
B. Spectrum Estimation & Modeling
C. TF Spectrum Analysis & Wavelet
D. Higher Order Spectral Analysis
E. Adaptive Filtering & SP
F. Array Signal Processing
G. Hardware Implementation for Signal Processing
H. Speech and Audio Coding
I. Speech Synthesis & Recognition
J. Image Processing & Understanding
K. PDE for Image Processing
L. Video Compression & Streaming
M. Computer Vision & VR
N. Multimedia & Human-computer Interaction
O. Statistical Learning & Pattern Recognition
P. AI & Neural Networks
Q. Communication Signal processing
R. SP for Internet and Wireless Communications
S. Biometrics & Authentication
T. SP for Bio-medical & Cognitive Science
U. SP for Bio-informatics
V. Signal Processing for Security
W. Radar Signal Processing
X. Sonar Signal Processing and Localization
Y. SP for Sensor Networks
Z. Application & Others
CFP CI 2006 Special Session on
Natural Language Processing for Real Life Applications
November 20-22, 2006 San Francisco, California, USA
Website
Topics
The Special Session on Natural Language Processing for Real Life Applications
will cover the following topics (but is not limited to them):
1. speech recognition, in particular
* multilingual speech recognition
* large vocabulary continuous speech recognition with focus on the
application
2. real life dialog systems
* natural language dialog systems
* multimodal dialog systems
3. speech-based classification
* speaker classification, i.e. exploiting paralinguistic features of
the speech to gather information about the speaker (for example age, gender,
cognitive load, and emotions)
* language and accent identification
Paper Submission
Please submit papers for the special session directly to the session chair
(christian.mueller@dfki.de). DO NOT submit the papers through the IASTED
website. Otherwise, the papers will be handled as general papers for the
conference. Each submission will be reviewed by at least two independent
reviewers. The final selection of papers for the session will be done by the
session chair; acceptance/rejection letters and review comments along with
registration information will be provided by IASTED by the general Notification
deadline.
Formatting instructions
Please follow the formatting instructions provided by IASTED.
Website.
Important Dates
Submissions due June 15, 2006
Notification of acceptance August 1, 2006
Camera-ready manuscripts due September 1, 2006
Registration Deadline September 15, 2006
Conference November 20 - 22, 2006
Registration
All papers accepted for the special session are required to register before the
general conference registration deadline.
ELEVENTH AUSTRALASIAN INTERNATIONAL CONFERENCE ON
SPEECH SCIENCE AND TECHNOLOGY
AUCKLAND, NEW ZEALAND, 6-8 DECEMBER 2006
Conference Website
The Australasian Speech Science and Technology Association (ASSTA) is a
scientific association that aims to advance the
understanding of speech science and its application to speech technology.
ASSTA and the University of Auckland are pleased to announce the Eleventh
International Conference on Speech Science and Technology (SST2006).
Conference Themes
Submissions are invited for oral and poster presentations. Submissions
should describe original contributions to spoken language, speech science
and/or technology that will be of interest to an audience including
scientists, engineers, linguists, psychologists, speech and language
therapists, audiologists and other professionals. Submissions are invited
in all areas of speech science and technology, but particularly in the
following areas:
Speech production
Acoustic phonetics
Acoustics of accent change
Music and speech processing
Emotional speech, voice, intonation and prosody
Applications of speech science and technology
Speech Processing for Forensic Applications
Speech recognition and understanding
Speaker recognition and classification
Speech enhancement and noise cancellation
Pedagogical technologies for speech and singing
Corpus management and speech tools
Contributions of speech science and technology to audiology and speech language therapy
Phonetics and Phonology of Australian and New Zealand English (PANZE)
Combined session with Australasian Conference on Robotics and Automation
Keynote Speakers
Prof. Joseph Perkell, Massachusetts Institute of Technology
Prof. Pat Keating, University of California Los Angeles
Prof. Michael Corballis, University of Auckland.
Important Dates
. Abstract submission closing date - Monday, 28 August 2006
. Acceptance notice date - Monday, 25 September 2006
. Manuscript closing date - Monday, 6 November 2006
. Early registration date for conference and pre-conference workshop -
Sunday, 29 October 2006
. Presenter/Author registration Deadline - Sunday, 29 October 2006
. Pre-conference tutorials and workshops - 5 December 2006
. SST 2006 Conference, 6-8 December
Important Contacts:
Abstract and Manuscript Submission: these should be submitted
online. Click on the "Submission" link and follow
the guidelines posted. Word and Latex templates, and a comprehensive
author's guide for submissions, are available on the website.
Registration: An online registration form can be found on the conference
website. Any queries regarding your registration should be directed either
to the University Conference Management or to the Conference Chair Dr Catherine Watson.
Pre-Conference Workshops: Any enquiries regarding the Pre-Conference
workshops should be sent to Assoc. Prof. Paul Warren.
Conference Organising Committee: Dr Catherine Watson (chair), Assoc. Prof.
Paul Warren, Dr Waleed Abdulla, Dr Elaine Ballard, Helen Charters, Dr.
Claire Fletcher Flynn, Dr Bernard Guillemin, Dr William Thorpe, Assoc. Prof.
Suzanne Purdy, Dr Peter Keegan
Conference Advisory Committee
Prof. Cathy Best, Prof. Bob Bogner, Prof. Herve Bourlard, Prof. Anne Cutler,
Prof. Hiroya Fujisaki, Prof. Jonathan Harrington, Prof. Hynek Hermansky, Prof.
Louis Pols, Prof. Peter Thorne, Prof. Roger Wales, Assoc. Prof. Paul Warren,
Assoc. Prof. Thomas Fang Zheng
Pre-Conference Workshops:
Morning
1. Speech Processing
Waleed Abdulla, University of Auckland
2. Intonation and Prosody in AuE and NZ
Janet Fletcher, University of Melbourne and Paul Warren, Victoria University
of Wellington
Afternoon
3. Speech database management and access
Jen Hay, University of Canterbury
4. The phonetics of Maori
Peter Keegan, University of Auckland
Accommodation:
A variety of accommodation options have been arranged at special conference
rates. An accommodation reservation form can be downloaded from the website
http://www.assta.org/sst/2006/.
Other hotels within walking distance of the University include The
Copthorne, Duxton, Rydges and Quest on Mount. Information regarding these
hotels can be found on the www.nz.com website
CFP -
IEEE/ACL 2006 Workshop on Spoken Language Technology
Aruba Marriott
Palm Beach, Aruba
December 10 -- December 13, 2006
Workshop website
Workshop Topics
Spoken language understanding; Spoken document summarization; Machine
translation for speech; Spoken dialog systems; Spoken language
generation; Spoken document retrieval; Human/Computer Interactions
(HCI); Speech data mining; Information extraction from speech;
Question/Answering from speech; Multimodal processing; Spoken language
systems, applications and standards.
Submissions for the Technical Program
The workshop program will consist of tutorials, oral and poster
presentations, and panel discussions. Attendance will be limited, with
priority given to those presenting technical papers; registration is
required for at least one author of each paper. Submissions are
encouraged on any of the topics listed above. The style guide,
templates, and submission form will follow the IEEE ICASSP
style. Three members of the Scientific Committee will review each
paper. The workshop proceedings will be published on a CD-ROM.
Schedule
Camera-ready paper submission deadline July 15, 2006
Hotel Reservation and Workshop registration opens July 30, 2006
Paper Acceptance / Rejection September 1, 2006
Hotel Reservation and Workshop Registration closes October 15, 2006
Workshop December 10-13, 2006
Registration and Information
Registration and paper submission, as well as other workshop
information, can be found on the SLT website.
Organizing Committee
General Chair: Mazin Gilbert, AT&T, USA
Co-Chair: Hermann Ney, RWTH Aachen, Germany
Finance Chair: Gokhan Tur, SRI, USA
Publication Chair: Brian Roark, OGI/OHSU, USA
Publicity Chair: Eric Fosler-Lussier, Ohio State U., USA
Industrial Chair: Roberto Pieraccini, Tell-Eureka, USA
IEEE International Symposium on Multimedia - ISM 2006
Conference website
Special track: Remote Sensors for Audio
Processing
In recent decades, the cost of acoustic technologies has declined dramatically. Advances in networks, storage devices, and power management have made it practical to consider the remote location of sensors that either transmit data to a central processing facility or
store the data for later retrieval.
Nonetheless, many challenges remain for the fabrication, deployment and use of remote sensors.
In locations with limited infrastructure, power management and the ability for the user to access
or retrieve the data are paramount. In some situations, the need for localization or improved
signal to noise ratio may dictate the use of microphone arrays or other signal enhancement techniques.
Deployment in hostile environments such as arctic or deep sea conditions requires additional
considerations.
Remote sensors are capable of generating large acoustic or mixed-media datasets. With these large corpora, the need for automated processing becomes critical, as the staffing requirements for human analysis are prohibitive in both cost and labor. Automated analysis can yield valuable data such as seasonal or diel activity patterns of animals, and it supports perimeter intrusion detection, access control, and a myriad of other applications.
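As a toy example of the kind of automated first-pass processing meant here,
the sketch below flags audio frames whose energy rises well above a slowly
adapting noise-floor estimate, the sort of simple detector that might precede
classification on a remote sensor; the frame length, smoothing constant and
threshold are arbitrary illustrative values.

# Toy first-pass acoustic event detector for a remote sensor: flag frames whose
# energy exceeds a slowly adapting noise-floor estimate by a fixed factor.
# Frame length, smoothing constant and threshold are arbitrary illustrative values.
from typing import List, Tuple

FRAME_LEN = 400        # samples per frame
ALPHA = 0.95           # noise-floor smoothing
THRESHOLD_RATIO = 4.0  # event if frame energy > ratio * noise floor

def detect_events(samples: List[float]) -> List[Tuple[int, float]]:
    noise_floor = None
    events = []
    for idx, start in enumerate(range(0, len(samples) - FRAME_LEN + 1, FRAME_LEN)):
        frame = samples[start:start + FRAME_LEN]
        energy = sum(s * s for s in frame) / FRAME_LEN
        if noise_floor is None:
            noise_floor = energy                    # initialise on the first frame
        elif energy > THRESHOLD_RATIO * noise_floor:
            events.append((idx, energy))            # frame index flagged as an event
        else:
            noise_floor = ALPHA * noise_floor + (1 - ALPHA) * energy  # adapt on quiet frames
    return events

if __name__ == "__main__":
    quiet = [0.01] * 4000
    burst = [0.5] * 400                             # a loud "call" in the middle
    print(detect_events(quiet[:2000] + burst + quiet[2000:]))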
This special session invites researchers to submit high quality papers
describing either preliminary or mature results on topics related to
audio for remote sensors.
Topics of interest
· Audio classification and detection tasks for remote sensors (speech,
bioacoustics, auditory scene analysis, etc.)
· Deployment issues
· Power management
· Networking/Storage/Data Management
· Array processing
· Remote audio sensors in challenging environments
· Applications of remote sensors with a significant audio component
Submissions and deadlines
The written and spoken language of ISM2006 is English. Authors should electronically
submit an 8-page technical paper manuscript in double-column IEEE format, including
authors' names and affiliations, and a short abstract. Submissions should be
directed to Prof. Marie Roch,
following the formatting instructions available
in the submission guidelines for regular papers. Note that papers should not be
submitted directly to the ISM web site. Only electronic submissions will be accepted.
All papers should be in Adobe portable document format (PDF). The paper should have a
cover page, which includes a 200-word abstract, a list of keywords, and the author's
phone number and e-mail address. The Conference Proceedings will be published by the
IEEE Computer Society Press.
Important dates:
· August 8 - submission of papers
· September 10 - Notification of acceptance of papers
· September 25 - Camera-ready papers due
· December 11-13 - Conference at
Paradise Point Resort & Spa in
San Diego ,
California
CFP ICASSP 2007
2007 IEEE International Conference on
Acoustics, Speech and Signal Processing
April 15-20, 2007
Honolulu, Hawaii, U.S.A.
conference website
TOPICS
* Audio and electroacoustics
* Bio imaging and signal processing
* Design and implementation of signal processing systems
* Image and multidimensional signal processing
* Industry technology tracks
* Information forensics and security
* Machine learning for signal processing
* Multimedia signal processing
* Sensor array and multichannel systems
* Signal processing education
* Signal processing for communications
* Signal processing theory and methods
* Speech processing
* Spoken language processing
Submission of Papers
Prospective authors are invited to submit full-length, four-page papers,
including figures and references, to the ICASSP Technical Committee.
All ICASSP papers will be handled and reviewed electronically. Please note that
the submission dates for papers are strict deadlines.
Tutorial, Special Session, and Panel Proposals
Tutorials will be held on April 15 and 16, 2007. Brief proposals should
be submitted by August 4, 2006, to Hideaki Sakai by email
and must include title, outline, contact information for the presenter, and a
description of the tutorial and material to be distributed to participants
together with a short biography of the presenter and
a list of publications related to the proposal. Special session and panel proposals
should be submitted by August 4, 2006, to Phil Chou through the ICASSP 2007 website
and must include a topical title, rationale, session outline, contact information,
and a list of invited speakers.
Important Deadlines
Tutorial Proposals Due: August 4, 2006
Special Session and Panel Proposals Due: August 4, 2006
Notification of Special Session & Tutorial Acceptance: September 8, 2006
Submission of Camera-Ready Papers: September 29, 2006
Notification of Acceptance (by email): December 15, 2006
Author's Registration Deadline: February 2, 2007
Chairs
General Chairs
K. J. Ray Liu, University of Maryland, College Park
Todd Reed, University of Hawaii
Technical Program Chairs
Anthony Kuh, University of Hawaii
Yih-Fang Huang, University of Notre Dame
16th International Congress of Phonetic Sciences
Saarland University, Saarbrücken,
6-10 August 2007.
The first call for papers will be made in April 2006. The deadline for
*full-paper submission* to ICPhS 2007 Germany will be February 2007.
Further information is available on the
conference website
RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING (RANLP-07)
SAMOKOV hotel, Borovets, Bulgaria
conference website
RANLP-07 tutorials: September 23-25, 2007 (Sunday-Tuesday)
RANLP-07 workshops: September 26, 2007 (Wednesday)
6th Int. Conference RANLP-07: September 27-29, 2007 (Thursday-Saturday)
We are pleased to announce that the dates for RANLP’07 have been finalised
(see above). Building on both the successful international summer schools
organised for more than 17 years, and previous conferences held in 1995,
1997, 2001, 2003 and 2005, RANLP has become one of the most influential,
competitive and far-reaching conferences in the field, with wide
international participation. Featuring leading lights in the
area as keynote speakers or tutorial speakers, RANLP has now grown into a
larger-scale meeting with accompanying workshops and other events. In
addition to the 6 keynote speeches and tutorials on hot NLP topics,
RANLP07 will be accompanied by workshops and shared task competitions.
Volumes of selected papers are traditionally published by John Benjamins
Publishers and previous conferences have enjoyed support from the European
Commission.
Important dates :
Conference 1st Call for Papers: October 2006;
Call for Workshop proposals: November 2006,
deadline of proposals end of January 2007;
Workshop selection: early March 2007;
Conference Submission deadline: March 2007 with notification 30 May 2007;
Workshop Submission deadline: 15 June 2007 with notification in July 2007;
RANLP-07 tutorials, workshops and conference: 23-30 September 2007
The conference will be held in the picturesque resort of Borovets. It is
located in the Rila mountains and is one of the best known ski and tourist
resorts in South-East Europe. The conference venue Samokov hotel offers
excellent working and leisure facilities. Borovets is only 1 hour away
from Sofia international airport.
THE TEAM BEHIND RANLP-07
Galia Angelova, Bulgarian Academy of Sciences, Bulgaria
(Chair of the Organising Committee)
Kalina Bontcheva, University of Sheffield, UK
Ruslan Mitkov, University of Wolverhampton, UK
(Chair of the Programme Committee)
Nicolas Nicolov, Umbria Communications, Boulder, USA
Nikolai Nikolov, INCOMA Ltd., Shoumen, Bulgaria
Kiril Simov, Bulgarian Academy of Sciences, Bulgaria
(Workshop Coordinator)
E-mail
Multimedia Content Access: Algorithms and Systems (EI121)
Part of the IS&T/SPIE International Symposium on Electronic Imaging
28 January - 1 February 2007, San Jose, California, USA
Conference Chairs:
Alan Hanjalic, Technische Univ. Delft (Netherlands);
Raimondo Schettini, DISCo/Univ. degli Studi di Milano-Bicocca (Italy);
Nicu Sebe, Univ. van Amsterdam (Netherlands)
Topics
Content Analysis:
* image, audio and video characterization (feature extraction)
* fusion of text, image, video and audio data
* content parsing, clustering and classification
* semantic modeling
* image, video and audio similarity measures
* object and event detection and recognition
* benchmarking of content analysis methods and algorithms
* generic methods and algorithms for content analysis
* affective content analysis.
Content Management and Delivery:
* (Internet) multimedia databases
* multimedia standards (e.g. SVG, SMIL, MPEG-7)
* efficient peer-to-peer storage and search techniques
* indexing and data organization
* system optimization for search and retrieval
* storage hierarchies, scalable storage
* personalized content delivery.
Content Search/Browsing/Retrieval:
* multimedia data mining
* active learning and relevance feedback
* query models
* browsing and visualization
* search issues in distributed and heterogeneous systems
* benchmarking search, browsing, and retrieval algorithms and systems
* generation of video summaries and abstracts
* cognitive aspects of human/machine systems.
Internet Imaging and Multimedia:
* peer-to-peer imaging systems for the Internet
* content creation and presentation for the Internet
* web cameras: impact on content analysis techniques
* interactive multimedia creation for the Internet
* content rating, authentication, non-repudiation,
and cultural differences in content perception
* XML applications
* web crawling, caching, and security
* semantic web
* (adaptable) user interfaces.
Applications:
* commerce
* medicine
* news
* entertainment
* wearable and ubiquitous computing
* management of meetings
* biometrics
* cultural heritage and education
* collaborative systems and multi-device applications
* life log applications
* military and civilian security applications.
The conference program will include invited keynote presentations,
invited special sessions, and a panel of experts who will be
discussing the remaining research challenges related to multimedia
content analysis, management and retrieval.
Important Dates
Paper Proposals (5,000 words): 04 August 2006 (last extension)
Final Manuscript Due Date: 13 November 2006
200-word Final Summary: 20 November 2006
CFP International Conference on Information Sciences, Signal Processing and their Applications (ISSPA 2007)
ISSPA 2007 marks the 20th anniversary of launching the first ISSPA in 1987 in Brisbane,
Australia. Since its inception, ISSPA has provided, through a series of 8 symposia,
a high quality forum for engineers and scientists engaged in research and development of
Signal and Image Processing theory and applications. Effective 2007, ISSPA will extend its
scope to add a new track on information sciences. Accordingly, from 2007 the previous
full name of ISSPA is replaced by the following new full name:
International Conference on Information Sciences, Signal Processing and their Applications.
ISSPA is an IEEE-indexed conference.
ISSPA 2007 will be organized from February 12 to 15, 2007 in Sharjah, United Arab Emirates (UAE)
by three prominent institutions located in Sharjah: the University of Sharjah, the American
University of Sharjah, and Etisalat University College.
The regular technical program will run for three days along with an exhibition of signal processing
and information sciences products. In addition, tutorial sessions will be held on the first day
of the symposium.
Topics
Papers are invited in, but not limited to, the following topics:
1. Filter Design Theory and Methods
2. Multirate Filtering & Wavelets
3. Adaptive Signal Processing
4. Time-Frequency/Time-Scale Analysis
5. Statistical Signal & Array Processing
6. Radar & Sonar Processing
7. Speech Processing & Recognition
8. Fractals and Chaos Signal Processing
9. Signal Processing in Communications
10. Signal Processing in Networking
11. Multimedia Signal Processing
12. Nonlinear Signal Processing
13. Biomedical Signal and Image Processing
14. Image and Video Processing
15. Image Segmentation and Scene Analysis
16. VLSI for Signal and Image Processing
17. Cryptology, Steganography, and Digital Watermarking
18. Image Indexing & Retrieval
19. Soft Computing & Pattern Recognition
20. Natural Language Processing
21. Signal Processing for Bioinformatics
22. Signal Processing for Geoinformatics
23. Biometric Systems and Security
24. Machine Vision
25. Data Visualization
26. Data Mining
27. Sensor Networks and Sensor Fusion
28. Signal Processing and Information Sciences Education
29. Others
How to submit?
Prospective authors are invited to submit full-length (four-page) papers for presentation in
any of the areas listed above (indicate the area in your submission).
We also encourage the submission of proposals for student sessions,
tutorials and sessions on special topics. All articles submitted to ISSPA 2007 will be
peer-reviewed using a blind review process.
For more details and submission of papers, please see the conference website
Important Dates
Full Paper Submission: September 15, 2006
Tutorials/Special Sessions Proposals: September 15, 2006
Notification of Paper Acceptance: November 15, 2006
Final Accepted Paper Submission: December 1, 2006
Conference: February 12 to 15, 2007
Contact person:
Dr Mohammed Al-Mualla, ISSPA07 Publicity Chair
top