MESSAGE from Michael Wagner, member of the board, currently responsible for Industry Relations
Dear ISCA Member,
I am happy to report for ISCApad on my seven years of serving the speech science and technology community on the ISCA Board.
When I joined the Board in 2000, I was drafted by then President Roger Moore into the discussions and preparations for the “internationalisation”
of ESCA, the European Speech Communication Association, into ISCA as it is now. Subsequently, I was a member of the ISCA Committee that
negotiated the merger of ISCA and PC-ICSLP, the then Permanent Council for the organisation of the International Conferences on Spoken
Language Processing. I can only guess that the international diplomacy leading to treaties like nuclear non-proliferation were a cinch compared
with the complexities of the merger between ISCA and PC-ICSLP, but eventually we achieved our common goal, and we now have the unified
Interspeech Conferences (this year in Antwerp, next year in Brisbane!), which emerged from the preceding alternating Eurospeech and ICSLP
Conferences. Subsequently, my portfolio on the Board shifted from Internationalisation to Industry Relations, which has included the enjoyable
task of organising an “Industry Lunch” each year at the Interspeech Conference and initiating valuable contacts and discussions on strengthening
the relationship between academic research, both fundamental and applied, and the research & development undertaken in the speech technology
industry. ISCA is now facilitating a vigorous job market for speech science and technology graduates as well as for senior academics and engineers,
partly at the Interspeech Conferences and partly through ISCApad. In addition, I stepped into the ISCA Treasurer portfolio when it became vacant in 2005.
Initially, I found ISCA’s financial status a little difficult to understand, which was mainly due to the fact that a large proportion of ISCA’s assets are
continually rolled over as loans to Interspeech Conferences and ISCA Workshops, only to be repaid one or two years later, a situation not well
captured by ISCA’s previous cash accounting system. In 2006, the system was converted, with the invaluable help of ISCA’s Administrator Manu
Foxonet, to an accrual accounting system, reflecting ISCA’s financial status more clearly, both for the Board and for you, the ISCA Members. I had
to step down as Treasurer at the end of 2006 due to acute work overload – thank you to Christian Wellekens for stepping into the breach! – and,
having served two full terms on the Board, I am now looking forward to my last Board meeting in Antwerp in August. After that? Interspeech 2008
in Brisbane is not much more than a year away and the Organising Committee of the Australasian Speech Science and Technology Association
(ASSTA) is busily preparing to offer all of you yet another most enjoyable Interspeech Conference – see you in Antwerp in a few months and then
again in Brisbane next year!
Michael Wagner
Professor, University of Canberra, Australia
Editorial
Dear Members,
Speech research and development are very active: a growing number of summer schools and master programs
in speech processing are being organized. Have a look also at our updated list of job offers, which
shows how many speech experts are needed in our universities and industries.
Note the large share of speech contributions at the last ICASSP.
Take note of the upcoming ITRWs organized worldwide, and do not hesitate to inform our colleague
Professor Sadaoki Furui about any new ITRW you are volunteering to organize.
Great news: the ISCA Board has now selected Chiba, Japan as the venue for Interspeech 2010. That is a long
road for the organizers to walk, but
a very short time for preparing interesting results along the guidelines of this conference. Meanwhile we will meet
in Antwerp (2007), Brisbane (2008) and in Brighton (2009). The Technical Program Committee is working hard to prepare
the next Interspeech (Antwerp 2007).
I also draw your attention to the extended deadline for applying for the Christian Benoit Award.
Christian Wellekens
TABLE OF CONTENTS
- ISCA News
- SIGs' activities
- Courses, internships
- Books, databases, software
- Job openings
- Journals
- Future Interspeech Conferences
- Future ISCA Tutorial and Research Workshops (ITRW)
- Forthcoming Events supported (but not organized) by ISCA
- Future speech science and technology events
ISCA NEWS
Call for applications to the Christian Benoit Award
Extended deadline!
The Christian Benoit Award is delivered periodically by the Association
Christian Benoit (**). It is given to promising young scientists in the
domain of Speech Communication. The Award provides financial support for
the development of a multi-media project promoting the work of these
young scientists, and is valued at 7,622 Euros.
The first award was delivered to Tony Ezzat from MIT in June 2000,
for his research in Audiovisual Speech Synthesis, the second award to
Johanna Barry from University of Melbourne in September 2002 for
her work on the acquisition of lexical tones in profoundly
hearing-impaired speakers using a cochlear implant, and the third award
to Olov Engwall from KTH in Stockholm in October 2004 for the
elaboration of ARTUR, a multi-modal articulation tutor able to give
automatic feedback to real users.
The fourth award will be delivered this year to ANY PROJECT IN THE FIELD
OF SPEECH COMMUNICATION. Candidates should be in the final stages of
their doctoral research or within the five years following the award
of their PhD.
The Christian Benoit award will offer financial support to develop a
multi-media project which (a) demonstrates the candidate's research in a
way that helps launch that candidate's career, and (b) leverages
electronic publishing technologies intelligently so as to facilitate the
widest possible dissemination of this content.
In the application, the candidate should provide
-- a statement of research interest,
-- a detailed curriculum vitae, and
-- a description of the proposed multi-media project.
If the project already exists, a copy or link should be provided along
with the application.
Applications should be sent to Pascal Perrier and
received by Friday May 11th, 2007. Electronic submissions are
mandatory.
The successful candidate will be notified by June 1st and invited to make
a brief presentation of his/her work at the Interspeech 2007 Conference
in Antwerp (Belgium).
Travel expenses for attendance at the Award ceremony will be provided by
the Christian Benoit Association. For further information, please contact
Pascal Perrier.
** The Christian Benoit Association is a nonprofit organization, whose
purpose is to facilitate the development of research projects in the
field of speech communication. Established in honor of Christian Benoit,
French CNRS researcher in the field of speech communication who died on
the 26th of April, 1998, at the age of 41, the Award places special
emphasis on multimedia representations of ongoing research.
SIGs' ACTIVITIES
A list of ISCA Special Interest Groups can be found on our
website.
COURSES, INTERNSHIPS
ELSNET Summer School Belfast 2007
Advanced Dialogue Systems: Affectivity, Adaptability and Multimodality
16-27 July 2007
This year's ELSNET Summer School will be held in Belfast, Northern Ireland. It is being hosted by Queen's University at its pleasant
tree-lined campus near Belfast's buzzing city centre.
ELSNET Summer School Belfast is being organised by Computer Science at Queen's University in association with Computing
and Mathematics at the University of Ulster.
The 2007 Summer School focuses on dialogue systems - covering everything from basic prompt and response systems
to systems that adapt to the user's level of experience and even the user's emotional state. The school also includes guidance
on assessing how well implemented systems are actually working.
Bringing together a teaching team of world experts, the school will cover industry-standard technologies, hosting
environments and markup languages for building robust speech-based and multimodal dialogue systems. Alongside
practical strands that teach attendees how to set about building both simple and more complex dialogue systems, the school
includes extensive coverage of the latest trends in dialogue development as viewed by academic and industrial dialogue specialists.
At the leading edge of dialogue system development the school considers approaches to emotion-enablement - from analysis
of real-world emotionally coloured interactions to ways of conveying affect through the use of computer-generated embodied
conversational agents.
In addition to the 2-week schedule of lectures and practicals the summer school will be complemented by a social
programme of events and recommended excursions.
Summer School Web Page.
Master in Human Language Technologies and Interfaces at the University of Trento
Website
organized by: University of Trento and Fondazione Bruno Kessler Irst
Call for applications, Academic Year 2007/08
Goal
Human language technology gives people the possibility of using speech
and/or natural language to access a variety of automated services, such as
airline reservation systems or voicemail, to access and communicate
information across different languages, and to keep under control the
increasing amount of information available by automatically extracting
useful content and summarizing it. This master aims at providing skills in
the basic theories, techniques, and applications of this technology through
courses taught by internationally recognized researchers from the
university, research centers and supporting industry partners. Students
enrolled in the master will gain in depth knowledge from graduate courses
and from substantial practical projects carried out in research and industry
labs.
Courses: Speech Processing, Machine Learning for NLP, Human Language, Text
Processing, Spoken Dialog Systems, Human Computer Interaction, Language
Resources, Multilingual Technology
Requisites
Master's degree (min. 4 years) in the area of computer science,
electrical engineering, computational linguistics, cognitive science, or
other related disciplines.
Proficiency in English (the official language of the program).
Student Grants
A limited number of fellowships will be available.
Application Deadline
Non-EU students: June 15
EU students: end of July
Info
E-mail
University of Trento-Department of Information and Communication
Technologies Via Sommarive, 14-38100 Povo (Trento), Italy
Summer school: Cognitive and physical models of speech production, perception, and perception-production interaction.
Part II : Brain and Speech
Autrans, France
September 16-21, 2007
After the success of the previous summer school held in Lubmin (Germany) in 2004, we are happy to announce the second international summer school on Cognitive
and physical models of speech production, perception, and perception-production interaction. This year we will pay special attention to the brain.
The aim of this summer school is to relate fundamental knowledge on speech production and perception to insights about the organization and function of the brain.
Tutorials will be presented by specialists in these domains.
This summer school is intended mainly for graduate students, postdoctoral fellows, and researchers who work in the fields of speech production, perception,
perception-production interaction, and the brain (neurolinguistics). Potential topics are:
Speech and language acquisition
Speech and language disorders
Neural basis of speech production
Speech production control
Neural basis of speech perception
Audio-visual speech perception
Plasticity of speech perception
It is intended to provide a platform for interchanges between students, junior and senior researchers, and hence, we would like each participant to feel free to contribute
to any of these topics.
Submission
For abstract submission, please include the name(s) of the author(s), affiliations, and a contact e-mail address in the first lines of the body of the message.
Texts should be written in English. Since the number of participants is limited to 40, registration will be restricted
and based on the scientific quality of the submitted abstract.
Authors are invited to present their work in discussion groups or poster sessions at the summer school.
All details can be viewed at the summer school website
Important dates
Deadline for the application is the 2nd of May, 2007!
Notification of acceptance May 21st, 2007
Summer school: September 16th-21st, 2007
Registration
The number of participants is limited to 40.
There will be no registration fee. Participants will have to pay for lodging and board.
We are currently trying to get further funding for participants.
Invited speakers are:
Monica Baciu (LPNC, UPMF, Grenoble)
Grzegorz Dogil (University of Stuttgart)
Hélène Loevenbruck (ICP/Gipsa-lab, CNRS, Grenoble)
Marc Sato (CRLMB, McGill University, Montréal)
Jean-Luc Schwartz (ICP/Gipsa-lab, CNRS, Grenoble)
Christophe Pallier (INSERM U562, Gif-sur-Yvette)
Georg Meyer (School of Psychology, University of Liverpool)
Bernd Kröger (UK Aachen)
Organizers
Susanne Fuchs (ZAS, Berlin)
Hélène Loevenbruck (ICP, GIPSA-lab, Grenoble)
Daniel Pape (ZAS, Berlin)
Pascal Perrier (ICP, GIPSA-lab, Grenoble)
Studentships available for 2006/7 at the Department of Computer Science, The University of Sheffield, UK
One-Year MSc in HUMAN LANGUAGE TECHNOLOGY
The Sheffield MSc in Human Language Technology has
been carefully tailored to meet the demand for graduates with the
highly-specialised multi-disciplinary skills that are required in HLT,
both as practitioners in the development of HLT applications and as
researchers into the advanced capabilities required for next-generation
HLT systems. The course provides a balanced programme of instruction
across a range of relevant disciplines including speech technology,
natural language processing and dialogue systems. The programme is
taught in a research-led environment. This means that you will study the
most advanced theories and techniques in the field, and also have the
opportunity to use state-of-the-art software tools. You will also have
opportunities to engage in research-level activity through in-depth
exploration of chosen topics and through your dissertation. Graduates
from this course are highly valued in industry, commerce and academia. The
programme is also an excellent introduction to the substantial research
opportunities for doctoral-level study in HLT. A number of studentships
are available, on a competitive basis, to suitably qualified applicants.
These awards pay a stipend in addition to the course fees. See further details
of the course and information on
how to apply.
BOOKS, DATABASES, SOFTWARE
Databases
HIWIRE database
We would like to draw your attention to the Interspeech 2007 special
session
"Novel techniques for the NATO non-native Air Traffic Control
and HIWIRE cockpit databases"
http://www.interspeech2007.org/Technical/nato_atc.php
that we are co-organizing. For this special session we make available
(free of charge) the
cockpit database, along with training
and testing HTK scripts. Our goal is to investigate feature extraction,
acoustic modelling and adaptation algorithms for the problem of
(hands-free) speech recognition in the cockpit. A description of the task,
database and ordering information can be found at the
website of the project
We hope that you will be able to participate in this special session.
Alex Potamianos, TUC
Thibaut Ehrette, Thales Research
Dominique Fohr, LORIA
Petros Maragos, NTUA
Marco Matassoni, ITC-IRST
Jose Segura, UGR
- Language Resources Catalogue - Update
ELRA is happy to announce that 3 new Speech Related Resources are now
available in its catalogue.
Moreover, we are pleased to announce that years 2005 and 2006 from the
Text Corpus of "Le Monde" (ELRA-W0015) are now available.
ELRA-S0235 LC-STAR Hebrew (Israel) phonetic lexicon
The LC-STAR Hebrew (Israel) phonetic lexicon comprises 109,580 words,
including a set of 62,431 common words, a set of 47,149 proper names
(including person names, family names, cities, streets, companies and
brand names) and a list of 8,677 special application words. The lexicon
is provided in XML format and includes phonetic transcriptions in SAMPA.
More information
ELRA-S0236 LC-STAR English-Hebrew (Israel) Bilingual Aligned Phrasal
lexicon
The LC-STAR English-Hebrew (Israel) Bilingual Aligned Phrasal lexicon
comprises 10,520 phrases from the tourist domain. It is based on a list
of short sentences obtained by translation from a US English corpus of
10,449 phrases. The lexicon is provided in XML format.
More information
ELRA-S0237 LC-STAR US English phonetic lexicon
The LC-STAR US English phonetic lexicon comprises 102,310 words,
including a set of 51,119 common words, a set of 51,111 proper names
(including person names, family names, cities, streets, companies and
brand names) and a list of 6,807 special application words. The lexicon
is provided in XML format and includes phonetic transcriptions in SAMPA.
More information
ELRA-W0015 Text corpus of "Le Monde"
Corpus from "Le Monde" newspaper. Years 1987 to 2002 are available in
an ASCII text format. Years 2003 to 2006 are available in .XML format.
Each month consists of some 10 MB of data (circa 120 MB per year).
More information
For more information on the catalogue, please contact
Valérie Mapelli
Our on-line catalogue has moved to
the following address. Please update your bookmarks.
Books
Human Communication Disorders / Speech Therapy
This interesting series is listed on the Wiley website.
Incursões em torno do ritmo da fala
Author: Plinio A. Barbosa
Publisher: Pontes Editores (city: Campinas)
Year: 2006 (released 11/24/2006)
(In Portuguese, abstract attached.)
Website
Speech Quality of VoIP: Assessment and Prediction
Author: Alexander Raake
Publisher: John Wiley & Sons, UK-Chichester, September 2006
Website
Self-Organization in the Evolution of Speech, Studies in the Evolution of Language
Author: Pierre-Yves Oudeyer
Publisher: Oxford University Press
Website
Speech Recognition Over Digital Channels
Authors: Antonio M. Peinado and Jose C. Segura
Publisher: Wiley, July 2006
Website
Multilingual Speech Processing
Editors: Tanja Schultz and Katrin Kirchhoff
Publisher: Elsevier Academic Press, April 2006
Website
Reconnaissance automatique de la parole: Du signal a l'interpretation (Automatic speech recognition: from the signal to its interpretation)
Authors: Jean-Paul Haton, Christophe Cerisara, Dominique Fohr, Yves Laprie, Kamel Smaili
392 pages
Publisher: Dunod
JOB OPENINGS
We invite all laboratories and industrial companies which have job
offers to send them to the ISCApad
editor: they will appear in the newsletter and on our website for
free. (also have a look at http://www.isca-speech.org/jobs.html
as well as http://www.elsnet.org/
Jobs)
Position for a PhD-student in Nijmegen, The Netherlands
The European project "Acoustic reduction in European
languages" investigates how speakers and listeners
process acoustically reduced words, such as the
pronunciation "yesay" for "yesterday" and "onry" for
ordinary", in six European languages (Dutch, English,
Estonian, Finnish, French, and Spanish). Essential to
this research program are corpora of highly spontaneous
speech, which exist for English and Dutch, but which will
have to be compiled for Estonian, Finnish, French, and
Spanish in the course of this project. Complementary to
corpus based research, the processing of acoustic
reduction will be addressed by means of series of
psycholinguistic experiments.
The project has as its principal investigator Dr M.
Ernestus. It is funded by a European Young Investigator
award, as well as by the Max Planck Institute for
Psycholinguistics and by the Radboud University. The
research group is located in the building of the Max
Planck Institute, on the campus of the Radboud
University, in Nijmegen, The Netherlands. This location
guarantees a stimulating research environment with
excellent experimental facilities. It offers researchers
the possibility to develop interdisciplinary skills and
to discuss their work with many internationally renowned
scholars.
The project is now offering a position for a PhD-student
who will investigate acoustic reduction in French and
Spanish. The PhD-student will explore which types of
reduction occur in these languages, and how the
production and comprehension of reduced words are
affected by the morphological and phonological properties
of these languages. At a more general level, the PhD
student will compare synchronic reduction in modern
Spanish with the diachronic reduction that has occurred in
French.
The PhD student will collaborate closely with the principal
investigator. In addition, the PhD-student will be
supported in his/her research (including the compilation of
the speech corpora) by a team of research assistants.
Applicants should be (near-)native in French and Spanish
and also be fluent in English. They should have a
master's degree in linguistics or phonetics, or receive
one within a few months. Moreover, applicants should have
a basic knowledge of the phonology, phonetics, and
morphology of French and Spanish. The successful
candidate will receive a contract for three and a half
years at the Radboud University Nijmegen (www.ru.nl),
under the conditions for PhD-students at this university.
For further information, including a description of the
complete project, please contact Prof. Mirjam Ernestus
(phone: +31-24-3612970).
Application letters, including an extensive CV, should
arrive by 31 May 2007 at the latest, addressed to
Mirjam Ernestus
P.O. Box 310
NL-6500 AH Nijmegen
The Netherlands
or emailed.
Research Positions at Stanford: Robust Dialogue Understanding
The Center for the Study of Language and Information (CSLI) at
Stanford University is seeking Research Scientists to work on
multimodal spoken-language dialogue systems, starting approximately
1 June 2007.
The ideal candidate is a Computational Linguist with an interest in
the computational semantics and pragmatics of dialogue, with
experience in formally-inspired and/or statistical/machine-learning
approaches to dialogue modeling. The position requires a Ph.D. in
computational linguistics, natural language processing, artificial
intelligence, cognitive science, or a related field. Applicants should
have a demonstrated capacity to define and implement a research plan,
and to conduct individual and collaborative research consonant with
the dialogue-systems projects underway at CSLI.
Current research into dialogue at CSLI includes both human-human and
human-computer dialogue modeling, and employs a variety of techniques,
including symbolic and stochastic, theory and data-driven. Proficiency
in multiple approaches relevant to current CSLI application areas will
be highly valued, as will an ability to participate in implementation.
Research topics of particular interest include:
- robust semantic interpretation from noisy data (e.g. fragment
parsing, role detection);
- robust context-based pragmatic interpretation (e.g. anaphora/
ellipsis/fragment resolution);
- multi-party discourse modeling (esp. group decision-making);
- topic/issue detection and tracking;
- probabilistic dialogue state/activity modeling and tracking.
The successful candidate will work with faculty, postdoctoral, and
student researchers in the Dialogue Systems Group at CSLI, performing
novel research and developing core infrastructure for natural
multimodal conversational systems for a range of interactions and
applications. Responsibilities will include supervising student
research assistants and participating in proposal-preparation to
attract new funding.
The CSLI Dialogue Systems Group consists of about thirteen people, and
is involved in a number of projects involving close collaboration with
other Stanford departments, numerous other academic institutions,
government agencies such as NASA, not-for-profit research
organizations such as SRI, and various commercial enterprises. Current
projects include: understanding multi-party conversational
interaction; collaborative control of teams of robotic devices;
conversational control of in-vehicle devices; and speech-enabled
intelligent tutoring systems.
The position is initially for 1 year, renewable for up to 3 years
(contingent on continued funding). Salary is dependent on
qualifications and experience, but is expected to be in a range
starting from $70,000 for a junior appointee, to $90,000 for a senior
appointee. Candidates who may be more appropriate for a more senior
appointment are encouraged to contact us directly and may be
considered at a higher salary.
Applicants should submit a letter of application and a full resume or
curriculum vitae with names and email addresses of at least three
references. Please contact Stanley Peters
and Matt Purver
for further information.
Stanford University is an equal opportunity, affirmative action
employer.
Recently graduated engineer at IRISA (INRIA Rennes)
Context
Irisa (INRIA Rennes) seeks to recruit a recently graduated R&D
engineer in spoken document processing to package, develop and improve
its rich transcription platform.
The Metiss team at Irisa has developed a recognized know-how in the
field of audio processing with emphasis on speaker and speech
recognition. Over the last few years, we have developed a spoken
document rich transcription platform, Irene, in collaboration with
ENST Paris. This platform, mostly built on top of speech processing
software developed at Irisa (SPro, AudioSeg, Sirocco), is at the
centre of our research activities, enabling the validation and
experimentation of new ideas in a complete application setup. Research
and development activities on the Irene platform are in part related
to the development of a video content server, Telemex. The latter
acts as a key element of our research activities in multimedia
document analysis and indexing, carried out at Irisa in collaboration
with our industrial partners.
The current version of the Irene platform is an experimental version
based on in-house software complemented with public tools (HTK, SRI
LM). These tools provide basic functionalities which are linked
via a set of scripts. Dedicated to research activities, the current
platform does not provide an integrated and easy-to-use solution, thus
limiting its distribution as well as its use within a complete
application setup.
Job description
The R&D engineer will be in charge of the integration and the
development of our rich transcription platform, with the following
missions:
- define and implement an integrated architecture to enable the
distribution of a versatile spoken document rich transcription
platform;
- implement new functionalities in our proprietary toolkits to
enable a fully proprietary platform, in particular concerning our
speech recognition software Sirocco;
- integrate and validate the latest technology to improve our rich
transcription platform in the framework of the Telemex video server
for TV stream transcription.
The work will be carried out in close relation with our academic and
industrial partners who have expressed an interest in a spoken
document transcription platform.
Skills required
Prospective candidates should have a strong theoretical and practical
background in computer science applied to Communication and
Information Technology. In particular, strong knowledge in at least
one of the following domains is necessary: pattern recognition, graph
theory, dynamic programming, statistical modeling (HMM). Fluency in C
and in Perl in a Unix/Linux environment is also
required. Knowledge of the French language is welcome but not
required.
Practical information
The appointment is for a one year period, renewable for an additional
one year, starting fall 2007. The engineer will work primarily in the
Metiss team at Irisa, Rennes (France). Net salary is 2020 EUR per
month, including social security benefits.
As this job opportunity is reserved for recently graduated students,
candidates should have held a Master's degree for less than two years.
Interested candidates should contact Guillaume Gravier by mail
(guillaume.gravier@irisa.fr) for further information. Application
deadline is May 23, 2007.
Related links:
Job offers
Project SPRO
Project Audioseg
Project Sirocco
Facial and Vocal Expression: Two Postdoctoral Fellowships available in Geneva
The Swiss Center for the Affective Sciences and the Geneva Emotion Research Group at the University of Geneva invite applications for two postdoctoral fellowships in the areas of facial
and vocal
emotion expression.
The Swiss Center for Affective Sciences (www.affective-sciences.org) is an interdisciplinary research institute affiliated with the University of Geneva. Its mandate is to study all aspects of human emotion,
including psychological and physiological determinants of emotional communication. Together with the Geneva Emotion Research Group, the Center has developed a large corpus of emotional portrayals
called GEMEP (GEneva Multimodal Emotion Portrayals). The analysis of the corpus and its subsequent use in experimentation and clinical testing will result in a better understanding of the processes
involved in the communication of emotion.
As part of the ongoing work on the GEMEP corpus and the general research program in the Center, the two postdoctoral researchers will participate in either 1) the development of novel methods to
digitally analyze the pertinent acoustic parameters in the vocal channel or 2) research on facial expression patterns on the basis of the Component Process Model and collaborative efforts to automatically
extract and synthesize facial features.
Position 1) Physiology of voice and speech production, Acoustic measurement, Phonetics.
This fellowship requires a strong interest and research competence in phonetics and acoustics as well as solid knowledge in the physiological bases of voice and speech. As this research is hypothesis driven, candidates with a solid background of linking voice production processes to the measurement of the associated acoustic parameters will be given preference.
Position 2) Facial expression, Emotion theory, Nonverbal behavior, FACS.
This fellowship requires a strong interest and research competence in facial expression and nonverbal behavior as well as solid knowledge in the theory of emotions.
The candidate must be a certified FACS coder having used FACS in his/her doctoral thesis. Candidates familiar with the timing aspect of the facial expression will be given preference.
As this research could include collaboration with other institutions about testing and development of automatic features-extraction software, experience in using different coding and analysis software
would be desirable.
The salary corresponds to the position of a post-doctoral researcher as fixed by the University of Geneva (CHF 64’000-72’000 per year, depending on experience). This is a full-time position,
to be filled as soon as possible.
Please submit your application electronically, before May 31, 2007, including Curriculum Vitae, a letter of intention and supporting documentation to
Sylvie Staehli
Maître de Conférences in Speech Recognition and Understanding - Université René Descartes Paris 5
A Maître de Conférences position (27 MCF 1616) in speech recognition and
understanding is open at Université René Descartes-Paris 5 (UFR de
Mathématiques et Informatique), with the following research and teaching
profile:
RESEARCH
CRIP5 is a computer science laboratory with specific research directions
and output of international standing. It is also a laboratory of applied
research, resolutely oriented towards the domains that make Université
Paris 5 distinctive (life sciences and human sciences). The Dialogue and
Indexing team (Diadex) is interested in all research areas of speech
recognition and understanding (acoustic and linguistic models, decoding
strategies and optimization, genre-specific language models, dialogue
planning, formal grammars for natural language, ...). The new Maître de
Conférences will be expected to join the Diadex team and to have solid
experience in one or more of the areas listed above. He or she will be
involved in the team's various projects and will help supervise the
students of the research master.
TEACHING
The new Maître de Conférences will take charge of the programming
courses and participate actively in their organization throughout the
three years of the Licence in Mathematics, Computer Science and
Applications. He or she will also take part in developing the "Speech
and Human-Machine Communication" track of the MISV Research Master,
Computer Science speciality.
Contact: Prof. Marie-Jose Caraty
Website
Opening on Speech recognition at Telefonica, Barcelona (Spain)
The Speech Technology Group at Telefonica Investigacion y Desarrollo
(TID) is looking for a highly qualified candidate for an
engineering position on speech recognition and related technologies.
The selected person will become a part of a multidisciplinary team of
young, highly motivated people in an objective-driven, friendly
atmosphere, located in a central area of Barcelona (Spain).
Minimum requirements are:
Degree in Computer Science /Electrical Engineering/Computational
Linguistics or similar with 2+ years of experience (Ph.D. preferred)
in speech technology.
Good knowledge of speech recognition and speech synthesis.
Proven programming expertise in C++ and Java
Good level of English (required) and some knowledge of Spanish
(preferred)
High motivation and teamwork spirit
Salary depending on the experience and qualifications of the applicant
Starting date as soon as possible
The speech technology group is a well-established group within TID
with more than 15 years of experience in research and development of
technology for internal use by the Telefonica group as well as for outside
organizations. It is also a very active partner in many National and
European projects. TID is the research and development company inside
the Telefonica group, currently one of the biggest Telecom companies.
It is the biggest private research center in Spain in number of
employees and available resources.
Please send your resume and contact information to
Sonia Tejero
Tlf: +34 93 365 3024
Sound to Sense:
18 Fellowships in speech research
Sound to Sense (S2S) is a Marie Curie Research Training Network involving collaborative speech research amongst 13 universities in 10 countries.
18 Training Fellowships are available, of which 12 are predoctoral and 6 postdoctoral (or equivalent experience). Most but not all are planned to start in September or October 2007.
A research training network’s primary aim is to support and train young researchers in professional and inter-disciplinary scientific skills that will equip them for
careers in research. S2S’s scientific focus is on cross-disciplinary methods for modelling speech recognition by humans and machines. Distinctive aspects of our
approach include emphasis on richly-informed phonetic models that emphasize communicative function of utterances, multilingual databases, multiple time domain analyses,
hybrid episodic-abstract computational models, and applications and testing in adverse listening conditions and foreign language learning.
Eleven projects are planned. Each can be flexibly tailored to match the Fellows’ backgrounds, research interests, and professional development needs, and will fall into one of four broad themes.
1: Multilinguistic and comparative research on Fine Phonetic Detail (4 projects)
2: Imperfect knowledge/imperfect signal (2 projects)
3: Beyond short units of speech (2 projects)
4: Exemplars and abstraction (3 projects)
The institutions and senior scientists involved with S2S are as follows:
* University of Cambridge, UK (S. Hawkins (Coordinator), M. Ford, M. Miozzo, D. Norris, B. Post)
* Katholieke Universiteit, Leuven, Belgium (D. Van Compernolle, H. Van Hamme, K. Demuynck)
* Charles University, Prague, Czech Republic (Z. Palková, T. Duběda, J. Volín)
* University of Provence, Aix-en-Provence, France (N. Nguyen, M. d’Imperio, C. Meunier)
* University Federico II, Naples, Italy (F. Cutugno, A. Corazza)
* Radboud University, Nijmegen, The Netherlands (L. ten Bosch, H. Baayen, M. Ernestus, C. Gussenhoven, H. Strik)
* Norwegian University of Science and Technology (NTNU), Trondheim, Norway (W. van Dommelen, M. Johnsen, J. Koreman, T. Svendsen)
* Technical University of Cluj-Napoca, Romania (M. Giurgiu)
* University of the Basque Country, Vitoria, Spain (M-L. Garcia Lecumberri, J. Cenoz)
* University of Geneva, Switzerland (U. Frauenfelder)
* University of Bristol, UK (S. Mattys, J. Bowers)
* University of Sheffield, UK (M. Cooke, J. Barker, G. Brown, S. Howard, R. Moore, B. Wells)
* University of York, UK. (R. Ogden, G. Gaskell, J. Local)
Successful applicants will normally have a degree in psychology, computer science, engineering, linguistics, phonetics, or related disciplines, and want to acquire expertise in one or more of the others.
Positions are open until filled, although applications before 1 May 2007 are recommended for starting in October 2007.
Further details are available from the web about:
+ the research network and how to apply: http://www.ling.cam.ac.uk/s2s/s2sJobAd.pdf (92 kB)
+ the research projects: http://www.ling.cam.ac.uk/s2s/s2sProjects.pdf (328 kB).
Post-doc in Japan - Machine learning, kernel machines, computational statistics, Bayesian statistics, or multimodal processing
Deadline: 01/05/2007
Website
The Institute of Statistical Mathematics (ISM)
Research Organization of Information and Systems (ROIS)
Postdoctoral position:
Applicants are invited to apply to the Transdisciplinary Research Integration Center, ISM/ROIS. ISM is one of the member institutes of ROIS,
along with the National Institute of Informatics, the National Institute of Genetics and the National Institute of Polar Research.
The ISM mission includes promoting statistical science and developing an innovative methodology for approaching complex problems related
to life science, earth science, environmental science and human sciences from the viewpoint of information and systems
(http://www.ism.ac.jp/index_e.html). The position will start as soon as possible after 1 April 2007. The postdoctoral researcher
will work on the following project “Discovery of Invariants in Multimodal Data” (http://www.ism.ac.jp/~tmatsui/kinou2_p4/index-en.html).
The initial contract is for one year but could be extended up to three years.
Field of work:
Machine learning, kernel machines, computational statistics, Bayesian statistics, or multimodal processing
Project description:
Multimodal data available to us through the Internet and other electronic media are increasing explosively, both in number and in variety.
To handle such massive data for various purposes, new technologies need to be developed. With this in mind, we have started investigating a
new methodology that allows us to discover from multimodal data the information relevant to the purpose at hand (referred to as "invariants").
To achieve this goal, we will study several qualitatively different problems from different research areas in which multimodal data play a
central role (e.g., visual/audio/text processing, cognitive science, auditory perception and robotics). The problems are to be tackled with
some of the recently developed inductive learning machines that include an automatic model selection mechanism (e.g., Penalized Logistic
Regression Machines and Support Vector Machines). The results will be analyzed in order to establish a new methodology for the discovery of
invariants, which will be applicable to problems across different areas of study.
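As a concrete (and purely illustrative) glimpse of the kind of penalized learning machine mentioned above, the following Python/NumPy sketch fits an L2-penalized logistic regression by gradient descent. It is not the project's software; all data and parameter values are made up for the example.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit_penalized_logistic(X, y, lam=0.5, lr=0.5, n_iter=2000):
        """L2-penalized logistic regression fitted by gradient descent.
        Minimizes the average negative log-likelihood plus a lam*||w||^2
        penalty that shrinks the weights (the 'penalized' part)."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            p = sigmoid(X @ w)                       # predicted probabilities
            grad = X.T @ (p - y) / n + 2.0 * lam * w / n
            w -= lr * grad
        return w

    # Made-up data standing in for concatenated multimodal features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
    y = (sigmoid(X @ true_w) > rng.uniform(size=200)).astype(float)
    w_hat = fit_penalized_logistic(X, y)
    print("estimated weights:", np.round(w_hat, 2))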
Job description:
The successful candidate will support and coordinate our efforts in the area of investigation of methods for discovery of invariants
with multimodal data.
Requirements:
Applicants should have a PhD and some knowledge of machine learning and statistics. Applicants must be able to program (C/C++ and Matlab
knowledge is an advantage but not a requirement) and must also have experience with statistical data analysis.
Payment:
The salary will be in the range of 4,500,000 to 6,000,000 yen (before tax and insurance).
Application:
Applicants should send their CV, including a list of publications and the names of two potential referees.
Contact:
Prof. Tomoko Matsui
Tel: +1 604 822 9662 (until 9 March 2007, Vancouver, Canada)
+81 3 5421 8769 (from 10 March 2007, Japan)
Postdoctoral position- INRIA-LORIA Nancy France
Objective
Despite recent progress in speech synthesis, it is still very difficult to
modify the characteristics linked to the speaker, since signals are synthesized
by concatenating sounds uttered by a given speaker. It is thus almost
impossible to modify the acoustic cues of sounds as well as the characteristics
linked to the speaker.
The objective of the postdoc is to elaborate copy synthesis algorithms that
enable a speech signal to be reproduced as faithfully as possible while
offering the possibility of modifying acoustic cues. For this reason this
postdoctoral work will rely on a formant synthesizer derived from that
proposed by Klatt [1]. Synthesis thus rests on the filtering of a sound
source by a system of resonators (representing formants); the source is
periodic for voiced sounds such as vowels, and aperiodic (a noise) for
unvoiced sounds such as fricatives.
Work
The work will consist of adapting the synthesizer so that it lends itself to
copy synthesis as well as possible, and of developing algorithms to optimize
source and formant parameters.
In order to copy speech sufficiently finely it is necessary to adjust formant
and source parameters precisely. The LF source model proposed by Fant and
Liljencrants [2] is sufficiently versatile to approximate a natural speech
source. The optimization of its four parameters has been the subject of a
number of works, both in the case where the vocal tract filter and source are
estimated jointly [3,4] and in the case where the source signal is known [5].
The specificity of copy synthesis is that the vocal tract filter is only
roughly approximated by the hypothesized formants, and that the ratio of noise
in the source also has to be adjusted for each of the formants.
The resonators of a formant synthesizer can be organized in cascade or in
parallel. Only the second solution is usable for copy synthesis, because it
enables formants to be adjusted independently [6]. The frequency, amplitude
and bandwidth of each formant have to be specified. One important advantage of
the parallel architecture is that it is possible to adjust the amplitude
alone, by setting the bandwidth to a default value once the formant frequency
is known. The second aspect of the work will be the elaboration of an
algorithm to adjust amplitudes and frequencies. The adjustment of amplitudes
must be synchronized with source periods in order to capture fast variations
of amplitude, and that of formant frequencies will rest upon the automatic
formant tracking developed previously [7]. Improvements will concern the
choice of the number of formants, so as to increase the closeness of the
copied speech to the original signal.
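To make the resonator description concrete, here is a minimal Python/NumPy sketch of a second-order digital resonator in the spirit of Klatt [1], with three resonators driven in parallel by a crude pulse source. The formant frequencies, bandwidths and amplitudes are illustrative assumptions only, and the real synthesizer's source model and sign conventions are simplified away.

    import numpy as np

    def klatt_resonator(x, f, bw, fs):
        """Second-order digital resonator after Klatt [1]:
        y[n] = A*x[n] + B*y[n-1] + C*y[n-2], with coefficients derived
        from the formant frequency f (Hz) and bandwidth bw (Hz)."""
        T = 1.0 / fs
        C = -np.exp(-2.0 * np.pi * bw * T)
        B = 2.0 * np.exp(-np.pi * bw * T) * np.cos(2.0 * np.pi * f * T)
        A = 1.0 - B - C                    # normalizes the gain at 0 Hz
        y = np.zeros_like(x)
        y1 = y2 = 0.0
        for n in range(len(x)):
            y[n] = A * x[n] + B * y1 + C * y2
            y2, y1 = y1, y[n]
        return y

    fs = 16000
    t = np.arange(int(0.05 * fs)) / fs
    source = ((t * 100.0) % 1.0 < 0.02).astype(float)  # crude 100 Hz pulse train
    # Parallel bank: each resonator filters the source independently and the
    # outputs are summed, so each formant amplitude can be set on its own,
    # which is precisely the property exploited for copy synthesis above.
    formants = [(500.0, 60.0, 1.0), (1500.0, 90.0, 0.5), (2500.0, 120.0, 0.25)]
    speech = sum(amp * klatt_resonator(source, f, bw, fs)
                 for f, bw, amp in formants)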
The two aspects have been presented independently to simplify the
presentation of the work, and to a certain extent they can also be addressed
independently. However, it is clear that the improvement in synthesis quality
will be all the greater if the interactions between these two aspects are
considered together.
The Parole team mainly works on automatic speech recognition and speech
analysis. In the domain of analysis a number of algorithms have been developed
(F0 detection, formant tracking, pitch marking, copy synthesis...) and are
available in the WinSnoori software, which already contains a series of tools
for copy synthesis and which has been developed by the team for several years.
Skills and profile
A good knowledge of speech analysis or of signal processing is required.
References
The copy synthesis tools of WinSnoori are presented here.
[1] D.H. Klatt, “Software for a cascade/parallel formant synthesizer”, J. Acoust. Soc. Amer., 67(3), p. 971-995, March 1980.
[2] G. Fant and J. Liljencrants, “A four parameter model of glottal flow”, STL, QPSR, 4, p. 1-13, 1985.
[3] M. Frölich, D. Michaelis and H.W. Strube, “SIM - simultaneous inverse filtering and matching of a glottal flow model for acoustic speech signals”, J. Acoust. Soc. Amer., 115(1), p. 337-351, 2003.
[4] D. Vincent, O. Rosec and T. Chonavel, “Estimation of LF glottal source parameters based on an ARX model”, Proc. of Interspeech, p. 333-336, Lisboa, Sep. 2005.
[5] J. Pérez and A. Bonafonte, “Automatic Voice-Source Parametrization of Natural Speech”, Proc. of Interspeech, Lisboa, Sep. 2005.
[6] W. J. Holmes, “Copy synthesis of female speech using the JSRU parallel formant synthesiser”, Proceedings of the European Conference on Speech Technology, p. 513-516, Paris, France, Sep. 1989.
[7] Y. Laprie, “A concurrent curve strategy for formant tracking”, Proc. of ICSLP, Jeju, Korea, Oct. 2004.
Contact
Interested candidates are invited to contact Yves Laprie.
Important information
This position is advertised in the framework of the national INRIA campaign
for recruiting post-docs. It is a one-year position, renewable, beginning fall
2007. The salary is 2,320€ gross per month.
Selection of candidates will be a two-step process. A first selection will be
carried out internally by the PAROLE group. The selected candidate's
application will then be further processed for approval and funding by an
INRIA committee.
Candidates must hold a doctoral thesis less than one year old (defended after
May 2006) or to be defended before the end of 2007. If the defence has not yet
taken place, candidates must specify its tentative date and the jury.
Useful links
Presentation of INRIA postdoctoral positions
To apply (be patient, loading this link takes time...)
Research scientist- Speech Technology- Princeton, NJ, USA
Company Profile: Headquartered in Princeton, NJ, ETS (Educational Testing
Service) is the world's premier educational measurement institution and a
leader in educational research. As an innovator in developing achievement and
occupational tests for clients in business, education, and government, we are
determined to advance educational excellence for the communities we serve.
Job Description: ETS Research & Development has a Research Scientist opening in the Automated Scoring and
Natural Language Processing Group. This group conducts research focusing on the development of new capabilities
in automated scoring and NLP-based analysis and evaluation systems, which are used to improve assessments, learning tools
and test development practices for diverse groups of users that include K-12 students, college students,
English Language Learners and lifelong learners. The Research Scientist position involves applying scientific,
technical and software engineering skills to designing and conducting research studies and developing capabilities
in support of educational products and services. This is a full-time position.
Required qualifications
· A Ph.D. in Natural Language Processing, Computational Linguistics, Computer Science, or Electrical Engineering with a
focus on speech technology, particularly speech recognition. Knowledge of linguistics is a plus.
· Evidence of at least three years of independent substantive research experience and/or experience in developing
and deploying speech technology capabilities, preferably in educational environments.
· Demonstrable contributions to new and/or modified theories of speech processing and their implementation in
automated systems.
· Practical expertise with speech recognition systems and fluency in at least one major programming language (e.g.,
Java, Perl, C/C++, Python).
How to apply: Please send a copy of your resume, along with a cover letter stating salary requirements and job #2965,
by e-mail.
ETS offers competitive salaries, outstanding benefits, a stimulating work environment, and attractive growth potential.
ETS is an Equal Opportunity, Affirmative Action Employer.
Web site
Research Fellow in Speech Synthesis-
Centre for Speech Technology Research/
University of Edinburgh
The Centre for Speech Technology Research at the University of Edinburgh
is seeking a research fellow to work on the speech synthesis project
"Automatically-determined inventories for speech synthesis". This
project uses machine learning techniques to automatically discover, from
speech data, a set of units for speech synthesis - that is, an
alternative to manually-specified phoneme-based units such as diphones.
This research is currently being conducted within a concatenative (i.e.
unit selection) framework, but we now seek to extend this to the other
major synthesis technique: statistical parametric synthesis, based on
Hidden Markov Models (i.e., trajectory HMMs). The successful candidate
will be expected to contribute, plan and execute new research, as well
as extend our existing techniques.
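For readers new to trajectory HMMs, the central operation is maximum-likelihood parameter generation (MLPG): finding the static parameter track that best explains the model's static and delta means and variances. The sketch below shows that step for a single feature stream with one central-difference delta window; the numbers are illustrative and this is not CSTR's code.

    import numpy as np

    def mlpg(mu, prec, T):
        """Maximum-likelihood parameter generation for one feature stream.
        mu and prec hold the T static means/precisions followed by the
        T delta means/precisions.  W stacks an identity block (statics)
        and a central-difference block (deltas); the most likely static
        trajectory is c* = (W'DW)^{-1} W'D mu, with D = diag(prec)."""
        W = np.zeros((2 * T, T))
        W[:T, :] = np.eye(T)
        for t in range(T):
            if t > 0:
                W[T + t, t - 1] = -0.5     # delta_t = 0.5*(c[t+1] - c[t-1])
            if t < T - 1:
                W[T + t, t + 1] = 0.5
        D = np.diag(prec)
        return np.linalg.solve(W.T @ D @ W, W.T @ D @ mu)

    T = 10
    mu_static = np.linspace(100.0, 200.0, T)     # e.g. an f0 ramp, in Hz
    mu_delta = np.full(T, 100.0 / (T - 1))       # the matching per-frame slope
    mu = np.concatenate([mu_static, mu_delta])
    prec = np.concatenate([np.full(T, 0.01), np.full(T, 1.0)])  # trust deltas more
    c = mlpg(mu, prec, T)                        # a smooth static trajectory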
Ideally you will have a PhD in speech synthesis and experience of
trajectory Hidden Markov Models. You will have very good programming
skills, preferably in C++, and experience with one or more of:
concatenative speech synthesis techniques; statistical models of speech;
perceptual evaluations; Festival. An automatic speech recognition
background is also appropriate for this position. This post is fixed-term
for 15 months.
For more information and application instructions, consult our website and enter vacancy number 3006866.
Software Engineer Position at Be Vocal, Mountain View, CA,USA
We are currently looking for a Software Engineer with previous exposure to Speech, to work in our Speech and Natural Language Technology group.
This group’s mission is to be the center of excellence for speech and natural language technologies within BeVocal. Responsibilities include assisting in the
development of internal tools and processes for building Natural Language based speech applications, as well as working on ongoing infrastructure/product improvements.
The successful candidate must be able to take direction from senior members of the team and will also be given the opportunity to make original contributions to
new and existing technologies during the application development process. As such, you must be highly motivated and have the ability to work well independently
in addition to working as part of a team.
Responsibilities
* Develop and maintain speech recognition/NLP tools and supporting infrastructure
* Develop and enhance component speech grammars
* Work on innovative solutions to improve overall Speech/NL performance across BeVocal’s deployments.
Requirements
* BS in Computer Science, Electrical Engineering or Linguistics; an MS is preferred.
* 2-5 years of software development experience in Perl, Java, C/C++. A willingness and ability to pick up additional software languages as needed is essential.
* Exposure or experience with speech recognition/pattern recognition either from an academic environment or directly related work experience.
* Experience working as part of a world-class speech and language group is highly desirable.
* Experience building natural language applications is preferred.
* Experience building LVCSR speech recognition systems is a plus.
For immediate consideration, please send your resume by email and include
"Software Engineer, Speech" in the subject
line of your email. Principals only please (no 3rd parties or agencies). Contact
for details
BeVocal's policy is to comply with all applicable laws and to provide equal employment opportunity for all applicants and employees without regard to
non-job-related factors such as race, color, religion, sex, national origin, ancestry, age, disability, veteran status, marital status or sexual orientation.
This policy applies to all areas of employment, including recruitment, hiring, training, promotion, compensation, benefits, transfer, and social and recreational
programs.
Postdoctoral Fellow -- Speech Synthesis- Alfred I. Dupont Hospital for Children, Wilmington, DE
The Alfred I. duPont Hospital for Children in Wilmington, DE has an
immediate opening for a Postdoctoral Fellow in Speech Synthesis in the
Speech Research Laboratory, within the Department of Biomedical
Research. The ideal candidate will have a Ph.D. in Computer Science,
Linguistics, Psychology, or a related field, demonstrated experience in
data-based speech synthesis techniques, and an interest in modeling
prosody, particularly intonation, in speech synthesis systems. The
primary responsibilities for this position include: Developing a model
for intonation that can be trained on and capture the important
talker-specific features of an individual's speech while also
representing phonologically motivated f0 characteristics; implementing
the intonation model for the ModelTalker TTS system; and assisting in
the creation of unit concatenation voices for the ModelTalker TTS
system. A Ph.D. in Linguistics, Computer Science, Psychology, or closely
related field with demonstrated knowledge of and experience in
concatenative speech synthesis techniques, speech analysis techniques,
and acoustic phonetics is required. Computer programming experience
with C or C++, knowledge of additional languages is a plus. Experience
with Unix/Linux and Windows operating systems is essential.
This is a two-year grant-funded position. For more information, email
Dr. Timothy Bunnell or call (302) 651-6835. Applicants may also
post their resume on-line at www.nemours.org or send resume with salary
requirements to Dr. Timothy Bunnell, Department of Biomedical Research,
Alfred I. duPont Hospital for Children, P.O. Box 269, Wilmington, DE 19899.
Position at Saybot in China
Job title: Speech Scientist
Location: China (Beijing or Shanghai)
Saybot develops software technology and curricula for learning spoken English. Since 2005, we have been
building software which features state-of-the-art speech technologies and innovative interactive lessons to
help users practice speaking English. We are currently looking for talented speech scientists to help strengthen
our R&D team and to develop our next-generation products. Successful candidates would have proven excellence
and good work ethics in academic or industry context and demonstrated creativity in building speech systems
with revolutionary designs.
* MS/PhD degree in speech technology (or related).
* Expertise in at least one of the following areas and basic knowledge of the others:
o acoustic model training,
o speaker adaptation,
o natural language understanding,
o prosody analysis,
o embedded recognizers.
* Excellent programming skills in both object-oriented languages (C++, C# or Java) and scripting (Perl or Python).
* Good knowledge and experience in at least one commonly used recognizer (HTK, Sphinx, Nuance...).
* Excellent communication skills in written and oral English.
* Experience in machine translation is a plus.
* Experience in VoIP integration is a plus.
* Experience in language teaching is a plus.
Contact: Sylvain Chevalier
2 Positions in Research and Development in "Audio description and indexing" at IRCAM-Paris
PRESENTATION OF THE SAMPLE ORCHESTRATOR PROJECT:
The goal of the Sample Orchestrator project is to develop and test new applications
for managing and manipulating sound samples based on audio content.
On the one hand, large databases of sound samples are commercially available
on various media (CD, DVD, online), but their applications are currently
limited (sampling synthesizers). On the other hand, recent scientific and
technological developments in audio indexing and database management allow the development
of new musical functions: database management based on audio content, audio processing
driven by audio content, and the development of orchestration tools.
TASKS:
Two positions are available from April 15th 2007 within the "Equipe Analyse/Synthese" of Ircam,
each for a total duration of 12 months (with the possibility of extending the contracts).
The main tasks to be done for the research and development positions are:
- Research and development of new audio features and algorithms
for the description of instrumental, percussive and FX sounds.
- Research and development of new audio features and algorithms
for the morphological description of sounds
- Research and development of new audio features and algorithms
for sounds containing "loops"
- Research and development of algorithms for automatic audio indexing
- Research and development of algorithms for fast search by similarity in large databases
- Participation in the definition of the specification
- Participation in user evaluation and feedback
- Integration into the final application
RESEARCH POSITION:
REQUIRED EXPERIENCE AND COMPETENCE:
- High skills in Audio indexing and signal processing
- High skills in Matlab programming
- High productivity, methodical work, excellent programming style.
- Good knowledge of UNIX, Mac and Windows environments
SALARY:
According to background and experience.
DEVELOPMENT POSITION:
REQUIRED EXPERIENCE AND COMPETENCE:
- Skills in Audio indexing and signal processing
- High skills in C/C++ programming
- High productivity, methodical work, excellent programming style.
- Good knowledge of UNIX, Mac and Windows environments
SALARY:
According to background and experience.
EEC WORKING PAPERS:
In order to start immediately, the candidate should preferably have EEC citizenship
or already hold valid EEC working papers.
AVAILABILITY:
The positions are available in the "Analysis/Synthesis" team in the R&D department
from April 15th 2007 for (each) a duration of 12 months (possibility of extending the contracts).
TO APPLY:
Please send your resume with qualifications and information addressing the above issues,
preferably by email to
Xavier Rodet, Analyse/Synthese team manager,
or by fax at:
(33 1) 44 78 15 40, care of Xavier Rodet
or by surface mail to:
Xavier Rodet, IRCAM, 1 Place Stravinsky, 75004 Paris.
IRCAM:
IRCAM is a leading non-profit organization dedicated to musical production,
R&D and education in acoustics and music, located in the center of Paris (France),
next to the Pompidou Center. It hosts composers, researchers and students from many countries
cooperating in contemporary music production, scientific and applied research.
The main topics addressed in its R&D department are acoustics, psychoacoustics, audio synthesis
and processing, computer aided composition, user interfaces, real time systems.
Detailed activities of IRCAM and its groups are presented on our
WWW server.
RESEARCH AND DEVELOPMENT POSITION IN "AUDIO CONTENT ACCESS" at IRCAM (Paris)
PRESENTATION OF THE MUSICDISCOVER PROJECT :
The goal of the MusicDiscover project is to give access to the content of
musical audio recordings (as is already the case, for example, for texts),
i.e. to a structured description, as complete as possible, of the recordings:
melody, genre/style, tempo/rhythm, instrumentation, musical structure,
harmony, etc. The principal objective is thus to develop and evaluate
content-oriented techniques and tools for analysis, indexing, representation
and information retrieval. These means will make it possible to build and
use such a structured description.
This project of the ACI "Masses of Data" has been carried out in collaboration
between Ircam (Paris), GET-Telecom (Paris) and LIRIS (Lyon) since
October 2004. The principal lines of research are:
- Rhythmic analysis and detection of abrupt changes
- Recognition of musical instruments and indexing
- Source Separation
- Structured Description
- Search for music by similarity
- Recognition of musical titles
- Classification of musical titles in genre and emotion.
The available position relates to the construction and the use of the
Structured Description in collaboration with the other lines of research.
DEVELOPMENT TASKS:
A position is available from December 1st, 2006 within the "Equipe
Analyse/Synthese" of Ircam for a total duration of 9 months.
The work includes:
- Participation in the design of a Structured Description
- Software development for construction and use of Structured Descriptions
- Participation in the definition and development of the graphic interface
- Participation in the evaluations
REQUIRED EXPERIENCE AND COMPETENCE:
- Experience of research in Audio Indexing and signal processing
- Experience in Flash, C and C++ and Matlab programming.
- High productivity, methodical work, excellent programming style.
- Good knowledge of UNIX and Windows environments.
AVAILABILITY:
- The position is available in the "Analysis/Synthesis" team in the R&D
department from November 1st 2006 for a duration of 9 months.
EEC WORKING PAPERS:
- In order to start immediately, the candidate should preferably have EEC
citizenship or already hold valid EEC working papers.
SALARY:
- According to background and experience.
TO APPLY:
- Please send your resume with qualifications and information addressing the
above issues, preferably by email to
Xavier Rodet, Analyse/Synthese team manager,
or by fax at:
(33 1) 44 78 15 40, care of Xavier Rodet
or by surface mail to:
Xavier Rodet, IRCAM, 1 Place Stravinsky, 75004 Paris.
Introducing IRCAM
IRCAM is a leading non-profit organization dedicated to musical
production, R&D and education in acoustics and music, located in the center
of Paris (France), next to the Pompidou Center. It hosts composers,
researchers and students from many countries cooperating in contemporary
music production, scientific and applied research. The main topics
addressed in its R&D department are acoustics, psychoacoustics, audio
synthesis and processing, computer aided composition, user interfaces, real
time systems.
Detailed activities of IRCAM and its groups are presented on our
WWW server.
JOURNALS
CfP: Speech Communication Journal - Special Issue on “Evaluating new methods and models for advanced speech-based interactive systems”
The aim of this special issue is to explore new evaluation techniques and strategies as applied to advanced dialogue systems, including new models and methods.
Original, previously unpublished submissions addressing some (or all) of the following questions are encouraged:
1. What characteristics of spoken language interaction can and should be incorporated into advanced spoken dialogue systems?
2. What are the best methods for designing such systems? To what extent are automatic design methods appropriate or possible?
3. What criteria can be defined for the evaluation of the performance of advanced spoken dialogue systems?
4. Under what circumstances should these criteria be used?
5. How effective are these criteria in isolating problems with a dialogue strategy and in measuring how correction of the problems improves the dialogue?
6. How can these criteria be used to compare and evaluate alternative dialogue strategies and methods for the design and implementation of dialogue systems?
7. How can these criteria be used to compare the pre-modification version and the post-modification version of a dialogue strategy as developers attempt to
improve the dialogue strategy?
8. How can the evaluation process be streamlined so that it can be frequently and effectively applied to the improvement and comparison of dialogue strategies?
Guest Editors
Michael McTear, University of Ulster, UK
Kristiina Jokinen, University of Helsinki, Finland
James Larson, Oregon Graduate Institute, Oregon, USA
Important Dates
Submission Deadline: 30th June 2007
Notification of Acceptance: 30th November 2007
Final manuscript due: 31 January 2008
Tentative Publication Date: June 2008
Submission Procedure
Prospective authors should follow the regular guidelines of the Speech Communication journal for electronic submission (http://ees.elsevier.com/specom).
During submission authors must select the Article Type "Special Issue: Spoken Dialogue Technology", not "Regular Paper", and also select
Professor Marc Swerts as the handling Editor-in-Chief.
Full text of CFP
CfP:
IEEE SIGNAL PROCESSING MAGAZINE-
Special Issue on Spoken Language Technology
The evolution of speech and language technologies over the past decade
has spawned an exciting new research area known as Spoken Language
Technology (SLT). Technological advances in SLT promise to provide
ubiquitous and personalized access to information, communication, and
entertainment services. For example, advances in natural language
understanding and large vocabulary continuous speech recognition have
resulted in a new generation of automated contact center services that
offer callers the flexibility to speak their request naturally using
their own words as opposed to the words dictated to them by the machine.
Advances in machine translation technology have resulted in
speech-to-speech translation products that offer multi-party
multi-lingual communication. Advances in information search and data
mining are providing the means to extract intelligence information from
large corpora of speech data (e.g., TV programs, call center data) to
help improve business operations and search for information rapidly
without having to listen to conversations.
This special issue on Spoken Language Technology is motivated by the
first SLT workshop, Aruba, December 2006, jointly sponsored by IEEE and
ACL (www.slt2006.org). The goal is to solicit tutorial articles with
comprehensive surveys of important theories, algorithms, tools, and
applications of SLT on existing and new commercial, academic and
government applications. Prospective authors should submit a white paper
summarizing the motivation, the significance of the topic, a brief
history, and an outline of the content. Authors with accepted proposals
will be invited to write a full manuscript.
Scope of topics:
Publications in the following areas are strongly encouraged
Spoken language understanding
Dialog management
Spoken language generation
Spoken document retrieval
Information extraction from speech
Question answering from speech
Spoken document summarization
Machine translation of spoken language
Speech data mining and search
Voice-based human computer interfaces
Spoken dialog systems, applications and standards
Multimodal processing, systems and standards
Machine learning for spoken language processing
Speech and language processing in the world wide web
Submission Procedure:
Prospective authors should submit their white papers to the web
submission system at http://www.ee.columbia.edu/spm according to the
following timetable. The white papers should be three pages maximum.
Important dates
White paper due: June 1, 2007
Invitation notification: July 1, 2007
Manuscript due: October 1, 2007
Acceptance Notification: December 1, 2007
Final Manuscript due: January 15, 2008
Publication date: May, 2008
Guest Editors:
Mazin Gilbert
AT&T Labs - Research
180 Park Avenue
Florham Park, NJ, 07932
Kevin Knight
University of Southern California
4676 Admiralty Way
Marina del Rey, CA 90292
Steve Young
Cambridge University
Trumpington Street
Cambridge, CB2 1PZ
Call for Papers- Special Issue of the
IEEE Transactions on Audio, Speech and Language Processing
on New Approaches to Statistical Speech and Text Processing
Dramatic advances in automatic speech recognition (ASR) technology in
recent years have enabled serious growth in spoken language processing
research, both for human-computer interaction and spoken document
processing. The challenges of working with spoken language, including
ASR errors and disfluencies, were major factors in the adoption of
statistical techniques in the language processing community. Statistical
methods now dominate many areas of text processing as well, enabled by
growing collections of linguistic data resources and developments in
machine learning. While transfer of methods from spoken- to written-
language processing continues, advances in written-language processing
also now have a significant impact on spoken-language processing.
This issue seeks to highlight the cross-fertilization in speech and text
processing by publishing novel statistical modeling and learning methods
that span a variety of language processing applications.
We invite papers describing new approaches to statistical language
processing of both spoken and written language. Submissions must not
have been previously published, with the exception that substantial
extensions of conference papers will be considered. Of particular
interest are methods that transfer recent developments from text
processing to speech processing and vice versa, but new methods in one
domain are also welcome. Papers describing new strategies for
integrating acoustic and linguistic cues in spoken language processing
are also encouraged.
Topics of interest include:
- Unsupervised and semi-supervised learning
- Discriminative learning
- Transfer or adaptation to new domains
- Active learning
- Reinforcement learning
- Memory-based learning and neighborhood methods
- Novel statistical models
- Statistical methods for feature selection or transformation
Specific applications of interest include information extraction,
question answering, text segmentation and classification, summarization,
translation, language generation and spoken language dialogs. Papers
that address component problems of these larger applications are also
encouraged, including parsing, discourse analysis, and talker
interaction analysis. The issue aims to cover a variety of applications
as well as different statistical methods.
Submission procedure:
Prospective authors should prepare manuscripts according to the
Information for Authors as published in any recent issue of the
Transactions.
Note that all rules will apply with regard to submission lengths,
mandatory overlength page charges, and color charges. Manuscripts should
be submitted electronically through the online
IEEE manuscript
submission system.
When selecting a manuscript type, authors must click on "Special Issue
of TASLP on New Approaches to Statistical Speech and Text Processing".
Authors should follow the instructions for the IEEE Transactions Audio,
Speech and Language Processing and indicate in the Comments to the
Editor-in-Chief that the manuscript is submitted for publication in the
Special Issue on New Approaches to Statistical Speech and Text
Processing. We require a completed copyright form to be signed and faxed
to +1-732-562-8905 at the time of submission. Please indicate the
manuscript number on the top of the page.
Schedule:
Submission deadline: 15 June 2007
Notification of final acceptance: 15 December 2007
Final manuscript due: 1 February 2008
Publication date: May 2008
Guest Editors:
Dr. Bill Byrne Cambridge University, UK
Dr. Mark Johnson Brown University, USA
Dr. Lillian Lee Cornell University, USA
Dr. Steve Renals University of Edinburgh, UK
Call for papers for a special issue of Speech Communication on
Iberian Languages
Iberian languages (henceforth IL) are amongst the most widely spoken languages in the world.
Nowadays, 628 million people on virtually all continents have Spanish, Portuguese, Catalan, Basque,
Galician, etc. as their official language. Consequently, important speech research centers and companies,
both public and private, are focusing their interest on those languages. This effort has resulted in novel
and generic approaches applicable to any language, as well as in the optimization of existing techniques
or systems. It is worth highlighting that the community working on speech science and technology in
IL-speaking countries has already reached a world-class level in many areas and has grown continuously
in size over the last 15 years.
Speech technology proposed in the context of a non-Iberian language (e.g., English) may not be directly
applicable to IL. All linguistic and paralinguistic dimensions, from phonetics to pragmatics, are amongst
the features that certainly distinguish IL from others considered in speech science and technology
research. As a result, original work and optimization of existing techniques and systems may be necessary
in many areas of Iberian spoken language research.
The purpose of this Special Issue is to present recent progress and significant advances in all areas of
speech science and technology research in the context of IL. Submitted papers must address topics
specific to IL and/or issues raised by analyses of spoken data that shed light on speech science and
linguistic theories regarding these languages. Research which deals with IL data, but makes use of
standard techniques should not be submitted for this Special Issue. However, both research presenting
relevant optimization of current technology and systems, and work exploring specific features of IL
spoken corpora will be considered for submission.
This Special Issue is one of the first initiatives proposed by the recently created SIG-IL (ISCA Special
Interest Group on Iberian Languages, URL http://www.il-sig.org). The purposes of the SIG-IL are to
promote research activities on IL, to sponsor and/or organise meetings, workshops and other events on
related topics, and to make speech corpora publicly available by promoting joint evaluation efforts.
Furthermore, the SIG-IL is also strongly committed to encouraging world-class research within its
community in order to contribute with new ideas to the field of speech science and technology.
Original, previously unpublished submissions for the following areas, involving IL and detailing
the language-specific aspects, are encouraged:
Topics
o Linguistics, Phonology and Phonetics
o Prosody
o Paralinguistic & Nonlinguistic Information in Speech
o Discourse & Dialogue
o Speech Production
o Speech Perception
o Physiology & Pathology
o Spoken Language Acquisition, Development and Learning
o Spoken Language Generation & Synthesis
o Language/Dialect Identification
o Speech and Speaker Recognition: acoustic, language and pronunciation modeling
o Spoken Language Understanding
o Multi-modal / Multi-lingual Processing
o Spoken Language Extraction/Retrieval
o Spoken Language Translation
o Spoken/Multi-modal Dialogue Systems
o Spoken Language Resources and Annotation
o Evaluation and Standardization
o Spoken Language Technology for the Aged and Disabled (e-inclusion)
o Spoken Language Technology for Education (e-learning)
o Interdisciplinary Topics in Speech and Language
o New Applications
Guest Editors
Isabel Trancoso INESC-ID, Portugal
Nestor Becerra-Yoma Univ. de Chile, Chile
Plinio A. Barbosa Univ. of Campinas, Brazil
Rubén San-Segundo UPM, Spain
Kuldip Paliwal Griffith University, Australia
Important Dates
Submission deadline: May 31st, 2007
Notification of acceptance: October 31st, 2007
Final manuscript due: December 30th, 2007
Tentative publication date: March, 2008
Submission Procedure
Prospective authors should follow the regular guidelines of the Speech Communication Journal for
electronic submission (http://ees.elsevier.com/specom). During submission authors must select the
Section “Special Issue Paper”, not “Regular Paper”, and the title of the special issue should be referenced
in the “Comments” (Special Issue on Iberian Languages) page along with any other information.
Papers accepted for FUTURE PUBLICATION in Speech Communication:
full text available on http://www.sciencedirect.com/ for
Speech Communication subscribers and subscribing institutions. Free
access for all to the titles and abstracts of all volumes, by
clicking on Articles in Press and then Selected Papers.
FUTURE CONFERENCES
Publication policy: Hereunder you will find very short announcements
of future events. The full calls for participation can be accessed on the
conference websites. See also our Web pages (http://www.isca-speech.org/) on
conferences and workshops.
FUTURE INTERSPEECH CONFERENCES
INTERSPEECH 2007-EUROSPEECH, August 27-31, 2007, Antwerp,
Belgium. Chairs: Dirk van Compernolle, K.U.Leuven, and Lou Boves,
K.U.Nijmegen. Website
INTERSPEECH 2007 is the eighth conference in the annual series of
INTERSPEECH events and also the tenth biennial EUROSPEECH conference. The
conference is jointly organized by scientists from the Netherlands and
Belgium, and will be held in Antwerp, Belgium, August 27-31, 2007, under the
sponsorship of the International Speech Communication Association (ISCA).
The INTERSPEECH meetings are considered to be the top international
conferences in spoken language processing, with more than 1000 attendees
from universities, industry, and government agencies. The conference offers
the prospect of meeting the future leaders of our field, exchanging ideas,
and exploring opportunities for collaboration, employment, and sales through
keynote talks, tutorials, technical sessions, exhibits, and poster sessions.
In recent years the INTERSPEECH meetings have taken place in a number of
exciting venues including most recently Pittsburgh, Lisbon, Jeju Island
(Korea), Geneva, Denver, Aalborg (Denmark), and Beijing.
AREAS AND TOPICS OF INTEREST:
Interspeech is the world's largest and most comprehensive conference
on Speech Science and Speech Technology and it solicits papers in the
following areas and topics:
A. Human speech production, perception and communication
Phonology and phonetics
Discourse and dialogue
Prosody (production, perception, prosodic structure)
Paralinguistic and nonlinguistic cues (e.g. emotion and expression)
Speech production
Speech perception
Physiology and pathology
Spoken language acquisition, development and learning
B. Speech and Language technology
Speech and audio processing
Speech enhancement
Speech coding and transmission
Spoken language generation and synthesis
Speech recognition
Spoken language understanding
Accent and language identification
Cross-lingual and multi-lingual processing
Multimodal/multimedia signal processing
Speaker characterization and recognition
C. Spoken language systems and applications
Dialogue systems
Systems for information retrieval
Systems for translation
Applications for aged and handicapped persons
Applications for learning and education
Other applications
D. Resources, standardization and evaluation
Spoken language resources and annotation
Evaluation and standardization
PAPER SUBMISSION
Authors will have to declare that their contribution is original
and not being submitted for publication elsewhere (e.g., another
conference, workshop, or journal).
Each corresponding author will be notified by e-mail of the acceptance
or rejection of his paper by May 25, 2007. Minor updates of accepted
papers will be allowed during May 25 - June 3, 2007.
More information is available on
the conference website
INVITED KEYNOTE SPEAKERS
Keynote speaker: ISCA Medalist Prof. Victor Zue (MIT, Cambridge, MA)
Title: On Organic Interfaces
Keynote speaker: Prof. Sophie Scott (UCL, London, UK)
Title: How the Brain Decodes Speech – Some Perspectives from Functional Imaging
Keynote speaker: Prof. Alex Waibel (CMU, Pittsburgh, PA; University of Karlsruhe, Germany)
Title: Computer-Supported Human-Human Multilingual Communication
Keynote speaker: Prof. Luc Steels (Free University Brussels, Belgium; Sony Computer Science Laboratory Paris, France)
Title: Can Robots Invent their Own Language?
IMPORTANT DATES
Full paper submission deadline: March 23, 2007
Notification of paper acceptance/rejection: May 25, 2007
Early registration deadline: June 22, 2007
Further information via website or
email.
ORGANIZERS
Professor Dirk Van Compernolle (General Chair)
Professor Lou Boves (General Co-Chair)
c/o
Annitta De Messemaeker
Katholieke Universiteit Leuven
Department of Electrical Engineering
Kasteelpark Arenberg 10
B3001 Heverlee
Belgium
Fax: +32 16 321723
Email
Website
INTERSPEECH 2008-ICSLP, September 22-26, 2008, Brisbane,
Queensland, Australia. Chairman: Denis Burnham, MARCS, University of Western Sydney.
INTERSPEECH 2009-EUROSPEECH, Brighton, UK. Chairman:
Prof. Roger Moore,
University of Sheffield.
INTERSPEECH 2010-ICSLP Chiba, Japan
ISCA is pleased to announce that INTERSPEECH 2010 will take place in
Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by
Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken
Language Processing for All - Regardless of Age, Health Conditions,
Native Languages, Environment, etc."
FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOP (ITRW)
Third ITRW on NON-LINEAR SPEECH PROCESSING (NOLISP'07) May 22-25,
2007, Paris, France
Website
Many specifics of the speech signal are not well addressed by the conventional models currently used in the field
of speech processing. The purpose of the workshop is to present and discuss novel ideas, work and results related
to alternative techniques for speech processing, which depart from mainstream approaches.
SUBMISSION
Prospective authors are invited to submit a 3 to 4-page paper proposal in English, which will
be evaluated by the Scientific Committee. Final papers will be due 1 month after the workshop,
for inclusion in the CD-ROM proceedings. A special issue of Speech Communication (Elsevier) will follow.
KEY DATES
Submission (full paper): 15 January 2007
Notification of acceptance: 23 February 2007
Workshop: 22-25 May 2007
Final (revised) paper: 25 June 2007
6th ISCA Speech Synthesis Research Workshop (SSW-6)
University of Bonn (Germany), August 22-24, 2007
A satellite of
INTERSPEECH 2007 (Antwerp), in collaboration with SynSIG and IfK (University of Bonn)
Organized shortly after the 16th International Congress on Phonetic Sciences (Saarbrücken, Germany, August 6-10, 2007).
Like its predecessors in Autrans (France) 1990, New Paltz (NY, USA) 1994, Jenolan (Australia) 1998, Pitlochry (UK) 2001,
and Pittsburgh (PA, USA) 2004, SSW-6 will cover all aspects of speech synthesis
and adjacent fields, such as:
TOPICS (updated list)
* Text processing for speech synthesis
* Prosody Generation for speech synthesis
* Speech modeling for speech synthesis applications
* Signal processing for speech synthesis
* Concatenative speech synthesis (diphones, polyphones, unit selection)
* Articulatory synthesis
* Statistical parametric speech synthesis
* Voice transformation/conversion/adaptation for speech synthesis
* Expressive speech synthesis
* Multilingual and/or multimodal speech synthesis
* Text-to-speech and content-to-speech
* Singing speech synthesis
* Systems and applications involving speech synthesis
* Techniques for assessing synthetic speech quality
* Language resources for speech synthesis
* Aids for the handicapped involving speech synthesis.
Deadlines (updated)
* Full-paper submission (up to 6 pages) - May 14, 2007 (EXTENDED DEADLINE!)
* Notification of acceptance - June 25, 2007
* Deadline for paper modification - July 15, 2007
Please send your papers, preferably as PDF files, as an e-mail
attachment.
Further information can soon be obtained from the
website of the workshop.
Contact Prof. Wolfgang Hess
8th Workshop on Discourse and Dialogue (SIGdial), Antwerp, Belgium
Antwerp, September 2-3, 2007
Held immediately following Interspeech 2007
Continuing with a series of successful workshops in Sydney, Lisbon, Boston, Sapporo, Philadelphia, Aalborg,
and Hong Kong, this workshop spans the ACL
and ISCA SIGdial interest area of discourse and dialogue.
This series provides a regular forum for the presentation of research in this area to both the larger
SIGdial community as well as researchers outside this community. The workshop is organized by SIGdial,
which is sponsored jointly by ACL and ISCA.
Topics of Interest
We welcome formal, corpus-based, implementation or
analytical work on discourse and dialogue including but not restricted
to the following three themes:
1. Discourse Processing and Dialogue Systems
Discourse semantic and pragmatic issues in NLP applications such as text summarization, question answering, and
information retrieval, including topics like:
· Discourse structure, temporal structure, information structure
· Discourse markers, cues and particles and their use
· (Co-)Reference and anaphora resolution, metonymy and bridging resolution
· Subjectivity, opinions and semantic orientation
Spoken, multi-modal, and text/web based dialogue systems including topics such as:
· Dialogue management models;
· Speech and gesture, text and graphics integration;
· Strategies for preventing, detecting or handling miscommunication (repair and correction types,
clarification and under-specificity, grounding and feedback strategies);
· Utilizing prosodic information for understanding and for disambiguation;
2. Corpora, Tools and Methodology
Corpus-based work on discourse and spoken, text-based and multi-modal dialogue including its support, in particular:
· Annotation tools and coding schemes;
· Data resources for discourse and dialogue studies;
· Corpus-based techniques and analysis (including machine learning);
· Evaluation of systems and components, including methodology, metrics and case studies;
3. Semantics and Pragmatics of Discourse and Dialogue
The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a single sentence), including the following issues:
· The semantics/pragmatics of dialogue acts (including those which are less studied in the semantics/pragmatics framework);
· Models of discourse/dialogue structure and their relation to referential and relational structure;
· Prosody in discourse and dialogue;
· Models of presupposition and accommodation; operational models of conversational implicature.
Submissions
The program committee welcomes the submission of long papers for full plenary presentation as well as short papers
and demonstrations. Short papers and demo descriptions will be featured in short plenary presentations, followed by
posters and demonstrations.
· Long papers must be no longer than 8 pages, including title, examples, references, etc. In addition to this, two additional pages are allowed as an appendix which
may include extended example discourses or dialogues, algorithms, graphical representations, etc.
· Short papers and demo descriptions should aim to be 4 pages or less (including title, examples, references, etc.).
Please use the official ACL style files.
Submission/Reviewing will be managed by the START system. Link to follow.
Papers that have been or will be submitted to other meetings or publications must provide this information
(see submission format). SIGdial 07 cannot accept for publication or presentation work that will be (or has been)
published elsewhere.
Authors are encouraged to make illustrative materials available, on the web or otherwise.
For example, excerpts of recorded conversations, recordings of human-computer dialogues, interfaces to working systems,
etc.
Important Dates (subject to change)
Submission May 2, 2007
Notification June 13, 2007
Final submissions July 6, 2007
Workshop September 2-3, 2007
Websites
Workshop website: To be announced
Submission website: To be announced
Sigdial website
Interspeech 2007 website
Email
Program Committee (confirmed)
Harry Bunt, Tilburg University, Netherlands (co-chair)
Tim Paek, Microsoft Research, USA (co-chair)
Simon Keizer, Tilburg University, Netherlands (local chair)
Wolfgang Minker, University of Ulm, Germany
David Traum, USC/ICT, USA
CfP-SLaTE Workshop on Speech and Language Technology in Education
ISCA Tutorial and Research Workshop
The Summit Inn, Farmington, Pennsylvania, USA, October 1-3, 2007.
Website
Speech and natural language processing technologies have evolved from being emerging new technologies to being reliable techniques
that can be used in real applications. One worthwhile application is Computer-Assisted Language Learning. This is not only helpful to
the end user, the language learner, but also to the researcher who can learn more about the technology from observing its use in a real setting.
This workshop will include presentations of both research projects and real applications in the domain of speech and language technology
in education.
IMPORTANT DATES
Full paper deadline: May 1, 2007.
Notification of acceptance: July 1, 2007.
Early registration deadline: August 1, 2007.
Preliminary programme available: September 1, 2007.
Workshop will take place: October 1-3, 2007.
LOCATION
The workshop will be held in the beautiful Laurel Highlands. In early October the vegetation in the Highlands puts on a beautiful show of colors
and the weather is still not too chilly. The event will take place at the Summit Inn, situated on one of the Laurel Ridges. It is close to the Laurel Caverns
where amateur spelunkers can visit the underground caverns. The first night event will be a hayride and dinner at a local winery and the banquet will take
place at Frank Lloyd Wright’s wonderful Fallingwater.
TOPICS
The workshop will cover all topics which come under the purview of speech and language technology for education.
In accordance with the spirit of the ITRWs, the workshop will focus on research and results,
give information on tools, and welcome prototype demonstrations of potential future applications.
The workshop will focus on research issues, applications, development tools and collaboration. It will be
concerned with all topics which fit under the purview of speech and language technology for education.
Papers will discuss theories, applications, evaluation, limitations, persistent difficulties, general research
tools and techniques. Papers that critically evaluate approaches or processing strategies will be especially
welcome, as will prototype demonstrations of real-world applications.
The scope of acceptable topic interests includes but is not limited to:
- Use of speech recognition for CALL
- Use of natural language processing for CALL
- Use of spoken language dialogue for CALL
- Applications using speech and/or natural language processing for CALL
- CALL tutoring systems
- Assessment of CALL tutors
ORGANIZATION-CONTACT
The workshop is being organized by the new ISCA Special Interest Group, SLaTE.
The general chair is Dr. Maxine Eskenazi from Carnegie Mellon University.
PROGRAMME
As per the spirit of ITRWs, the format of the workshop will consist of a non-overlapping mixture of oral, poster
and demo sessions. Internationally recognized experts from pertinent areas will deliver several keynote lectures
on topics of particular interest.
All poster sessions will be opened by an oral summary by the session chair.
A number of poster sessions will be succeeded by a discussion session focussing on the subject of the session.
The aim of this structure is to ensure a lively and valuable workshop for all involved.
Furthermore, the organizers would like to encourage researchers and industrialists to bring along
their applications, as well as prototype demonstrations and design tools where appropriate.
The official language of the workshop is English. This is to help guarantee the highest degree of
international accessibility to the workshop. At the opening of the workshop hardcopies and CD-ROM of the
abstracts and proceedings will be available.
CALL FOR PAPERS
We seek outstanding technical articles in the vein discussed above. For those who intend to submit papers,
the deadline is May 1, 2007. Following preliminary review by the committee,
notification will be sent regarding acceptance/rejection. Interested authors should send full 4-page camera-ready
papers.
REGISTRATION FEE
The fee for the workshop, including a booklet of abstracts and the Proceedings on CD-ROM, is:
- $325 for ISCA members and
- $225 for ISCA student members with valid identification
Registrations after August 1, 2007 cannot be guaranteed.
ADDITIONAL REGISTRATION INFORMATION
All meals except breakfast for the two and a half days as well as the two special events are included in this price.
Hotel accommodations are $119 per night, and breakfast is about $10. Upon request we will furnish bus transport
from the Greater Pittsburgh Airport and from Pittsburgh to Farmington at a cost of about $30. ISCA membership
is 55 Euros. You must be a member of ISCA to attend this workshop.
ITRW Odyssey 2008
The Speaker and Language Recognition Workshop
21-25 January 2008, Stellenbosch, South Africa
Topics
* Speaker recognition (identification, verification, segmentation, clustering)
* Text dependent and independent speaker recognition
* Multispeaker training and detection
* Speaker characterization and adaptation
* Features for speaker recognition
* Robustness in channels
* Robust classification and fusion
* Speaker recognition corpora and evaluation
* Use of extended training data
* Speaker recognition with speech recognition
* Forensics, multimodality and multimedia speaker recognition
* Speaker and language confidence estimation
* Language, dialect and accent recognition
* Speaker synthesis and transformation
* Biometrics
* Human recognition
* Commercial applications
Paper submission
Prospective authors are invited to submit papers written in English via the Odyssey website. The style guide, templates, and submission
form can be downloaded from the Odyssey website. Two members of the scientific committee will review each paper.
Each accepted paper must have at least one registered author.
The Proceedings will be published on CD.
Schedule
Draft paper due July 15, 2007
Notification of acceptance September 15, 2007
Final paper due October 30, 2007
Preliminary program November 30, 2007
Workshop January 21-25, 2008
Further information (venue, registration, ...) is available
on the workshop website.
Chairs
Niko Brummer, Spescom Data Voice, South Africa
Johan du Preez, Stellenbosch University, South Africa
ITRW on Evidence-based Voice and Speech Rehabilitation in Head & Neck Oncology
May 2008, Amsterdam, The Netherlands
Cancer in the head and neck area and its treatment can have debilitating effects on communication. Currently available treatment options such as radiotherapy,
surgery, chemo-radiation, or a combination of these can often be curative. However, each of these options affects parts of the vocal tract and/or voice to a greater
or lesser degree. When the vocal tract or voice no longer functions optimally, this affects communication. For example, radiotherapy can result in poor voice quality,
limiting the speaker’s vocal performance (fatigue from speaking, avoidance of certain communicative situations, etc.). Surgical removal of the larynx necessitates
an alternative voicing source, which not only results in a poor voice quality but also affects intelligibility and the prosodic structure of speech. Similarly, a
commando procedure (resection involving portions of the mandible / floor of the mouth / mobile tongue) can have a negative effect on speech intelligibility.
This 2-day tutorial and research workshop will focus on evidence-based rehabilitation of voice and speech in head and neck oncology. There will be 4 half-day
sessions, 3 of which will deal with issues concerning total laryngectomy. One session will be devoted to research on rehabilitation of other head and neck cancer
sites. The chairpersons of each session will prepare a work document on the specific topic at hand (together with the two keynote lecturers assigned), which will
be discussed in a subsequent round-table session. After this there will be a 30-minute poster session, allowing 9-10 short presentations. Each presentation consists of
at most 4 slides and is meant to highlight the poster’s key points. Posters will be visited in the subsequent poster visit session. The final work document will
refer to all research presently available, discuss its (clinical) relevance, and attempt to provide directions for future research. The combined work document,
keynote lectures and poster abstracts/papers will be published under the auspices of ISCA.
Organizers:
prof. dr. Frans JM Hilgers,
prof. dr. Louis CW Pols,
dr. Maya van Rossum.
Sponsoring institutions:
Institute of Phonetic Sciences - Amsterdam Center for Language and Communication,
The Netherlands Cancer Institute – Antoni van Leeuwenhoek Hospital
Dates and submission details as well as a website address will be announced in a later issue.
Audio Visual Speech Processing Workshop (AVSP 2008)
Tentative location: Queensland coast near Brisbane (most likely South Stradbroke Island)
Tentative date: 27-29 September 2008 (immediately after Interspeech 2008)
Following in the footsteps of previous AVSP workshops and conferences, the AVSP 2008 workshop (an ISCA Tutorial and Research Workshop) will be held
in conjunction with Interspeech 2008, Brisbane, Australia, 22-26 September 2008. The aim of AVSP 2008 is to bring together researchers and practitioners
in areas related to auditory-visual speech processing. These include human and machine AVSP, linguistics, psychology, and computer science.
One of the aims of the AVSP workshops is to foster collaborations across disciplines, as AVSP research is inherently multi-disciplinary.
The workshop will include a number of tutorials and keynote addresses by internationally renowned researchers in the area of AVSP.
Organizers: Roland Goecke,
Simon Lucey,
Patrick Lucey
RSISE, Bldg. 115, Australian National University, Canberra, ACT 0200, Australia
FORTHCOMING EVENTS SUPPORTED (but not organized) by ISCA
CFP-
ETSI Workshop: Speech and Noise in Wideband Communication
22nd & 23rd May 2007, at ETSI Headquarters in Sophia Antipolis, France.
As new types of voice coders, noise cancellation algorithms, transmission technologies and
consequently transmission impairments enter the scene and convergence becomes ever more a
reality, the standardization community faces new challenges.
Organised by TC STQ, STF 294 and Mesaqin under contract to ETSI,
the workshop has the following main objectives:
* Discuss the status, latest advances and trends in wideband speech and audio coding, in
particular in the presence of interfering sounds and noise
* Present the results of STF 294: Improving the quality of eEurope wideband speech
applications by developing a standardised performance testing and evaluation methodology
for background noise transmission
* Exchange information and establish relationships between research, state and industrial
organizations involved in the topic
Topics that will be addressed will include speech and audio wideband coding, noise suppression and
its artefacts, and quality assessment.
A round table discussion will permit participants to offer views on the current issues and challenges
that we will be facing in the future.
Participation in the workshop is free of charge, and open to everyone.
Candidate speakers are invited to send an abstract of their presentation to Jan Holub
by Friday 16th March 2007.
For further details, consult the workshop Website
For registration please see our website.
AVSP 2007
International Conference on Auditory-Visual Speech
Processing 2007,
August 31 - September 3, 2007
Kasteel Groenendael, Hilvarenbeek, The Netherlands
The next International Conference on Auditory-Visual
Speech Processing (AVSP 2007) will be organised by
different members of Tilburg University (The Netherlands).
It will take place in Kasteel Groenendael in Hilvarenbeek
(The Netherlands) from August 31, 2007 till September 3,
2007, immediately following Interspeech 2007 in Antwerp
(Belgium). Hilvarenbeek is located a short distance from
Antwerp, so that attendance at AVSP 2007 can easily be
combined with participation in Interspeech 2007.
Auditory-visual speech production and perception by human
and machine is an interdisciplinary and cross-linguistic
field which has attracted speech scientists, cognitive
psychologists, phoneticians, computational engineers, and
researchers in language learning studies. Since the
inaugural workshop in Bonas in 1995, Auditory-Visual
Speech Processing workshops have been organised on a
regular basis (see an overview at the AVISA website). In
line with previous meetings, this conference will consist
of a mixture of regular presentations (both posters and
oral), and lectures by invited speakers. All presentations
will be plenary.
We are happy to announce that the following experts have
agreed to give a keynote lecture at our conference:
Sotaro Kita (Birmingham)
Asif Ghazanfar (Princeton)
More details about the conference can be found on the
website
Further information
CfP SPECOM 2007
The 12th International Conference on Speech and Computer
October 15-18, 2007
Organized by Moscow State Linguistic University
General Chair:
Prof. Irina Khaleeva
(Moscow State Linguistic University)
Chair:
Prof. Rodmonga Potapova
(Moscow State Linguistic University)
SPECOM'07 is the twelfth conference in the annual series of SPECOM events. It is organized by Moscow State
Linguistic University and will be held in Moscow, Russia, under the sponsorship of the Russian Foundation for Basic
Research (RFBR), the Ministry of Education and Science of the Russian Federation, the International Speech
Communication Association (ISCA) and others.
SPECOM'07 will cover various aspects of speech science and technology. The program of the conference will
include keynote lectures by internationally renowned scientists, parallel oral and poster sessions and an
exhibition. The sci-tech exhibition that will be held during the conference will be open to companies and
research institutions.
The official language of the Conference will be English.
Important Dates (Extended)
Paper submission opening: February 1, 2007
Full paper deadline: May 25, 2007
Notification of paper acceptance: June 15, 2007
Conference: October 15-18, 2007
Topics
o Speech signal coding and decoding; multi-channel transmitted speech intelligibility; speech information security
o Speech production and perception modeling
o Automatic processing of multilingual, multimodal and multimedia information
o Linguistic, para- and extralinguistic communicative strategies
o Development and testing of automatic voice and speech systems for speaker verification; speaker psychoemotional
state and native language identification
o Automatic speech recognition and understanding systems
o Language and speech information processing systems for robotics
o Automated translation systems
o New information technologies for spoken language acquisition, development and learning
o Text-to-speech conversion systems
o Spoken and written natural language corpora linguistics
o Multifunctional expert and information retrieval systems
o Future of multi-purpose and anti-terrorist speech technologies
PAPER SUBMISSION
The deadline for full paper submission (4-6 pages) is May 25, 2007 (extended; see Important Dates above). Papers are to be sent by e-mail to
specom2007@mail.ru. All manuscripts must be in English. Please note that the size of a single letter must not
exceed 10 Megabytes (that is, the total size of all the attached files should not be greater than 7 Megabytes, to
leave room for recoding operations performed by the e-mail software). In case the paper files are larger than 7
Megabytes, it is recommended to pack them into a split WinRar or WinZip archive and send it part by part in a
series of letters.
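The arithmetic behind these two limits is worth spelling out: e-mail software typically base64-encodes attachments, inflating them by roughly a third, so 7 Megabytes of raw files becomes about 9.3 Megabytes on the wire, just under the 10 Megabyte letter limit. Below is a minimal sketch in Python of that size check, with a plain byte-splitter as a fallback; the file names are hypothetical, and this only illustrates the size budget (the organizers ask for a split WinRar or WinZip archive, which this does not produce).

    # Sketch: does a set of attachments fit in one 10 MB letter?
    # File names are hypothetical examples, not prescribed by SPECOM.
    import os

    LETTER_LIMIT = 10 * 1024 ** 2   # 10 Megabytes per letter
    ATTACH_LIMIT = 7 * 1024 ** 2    # 7 Megabytes of raw attachments
    BASE64_FACTOR = 4 / 3           # approximate e-mail recoding overhead

    def fits_in_one_letter(paths):
        """True if the raw files respect the 7 MB budget and their
        base64-encoded size stays under the 10 MB letter limit."""
        raw = sum(os.path.getsize(p) for p in paths)
        return raw <= ATTACH_LIMIT and raw * BASE64_FACTOR <= LETTER_LIMIT

    def split_into_parts(path, chunk=ATTACH_LIMIT):
        """Byte-split `path` into numbered parts of at most `chunk` bytes,
        one per letter; the recipient must concatenate them again."""
        with open(path, "rb") as src:
            part = 1
            while True:
                data = src.read(chunk)
                if not data:
                    break
                with open("%s.part%02d" % (path, part), "wb") as out:
                    out.write(data)
                part += 1

    if __name__ == "__main__":
        files = ["specom07_paper.pdf", "specom07_audio.wav"]  # hypothetical
        if all(os.path.exists(f) for f in files) and not fits_in_one_letter(files):
            for f in files:
                split_into_parts(f)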
All the papers will be reviewed by an international scientific committee. Each author will be notified by
e-mail of the acceptance or rejection of her/his paper by June 15, 2007 (see the extended dates above). Minor
updates of accepted papers will be allowed after notification.
EVALUATION CRITERIA
A paper or poster is more likely to be accepted if it is original, innovative, and contributes to
the practice of worldwide scientific communication. The quality of the work and the clarity and
completeness of the submitted materials will be considered.
REGISTRATION
Registration will be available at the Conference on arrival.
The registration fees are planned to be approximately as follows:
Regular: 500 EUR
Students/PG Students: 200 EUR
NIS (New Independent States), Regular: 300 EUR
NIS, Students/PG Students: 100 EUR
Russia, Regular: 150 EUR
Russia, Students/PG Students (no Proceedings): Free
Extra Copy of Proceedings (hard copy): 20 EUR
Extra Proceedings CD/DVD: 10 EUR
Information regarding accommodation costs will be available later.
All the registration and accommodation payments will be accepted in cash during the registration procedure on
arrival.
PAPER PREPARATION GUIDELINES
In the following you will find guidelines for preparing your full paper to SPECOM'07 electronically.
· To achieve the best viewing experience both for the Proceedings and the CD (or DVD), we strongly
encourage you to use Times Roman font. This is needed in order to give the Proceedings a uniform look. Please
use the attached printable version of this newsletter as a model.
· Authors are requested to submit PDF files of their manuscripts, generated from the original Microsoft
Word sources. PDF files can be generated with commercially available tools or with free software such as
PDFCreator.
· Paper Title - The paper title must be in boldface. All non-function words must be capitalized, and all
other words in the title must be lower case. The paper title is centered.
· Authors' Names - The authors' names (italicized) and affiliations (not italicized) appear centered
below the paper title.
· Abstract - Each paper must contain an abstract that appears at the beginning of the paper.
· Major Headings - Major headings are in boldface.
· Sub Headings - Sub headings appear like major headings, except that they are in italics and not bold
face.
· References - Number and list all references at the end of the paper. The references are numbered in
order of appearance in the document. When referring to them in the text, type the corresponding reference number
in square brackets as shown at the end of this sentence [1].
· Illustrations - Illustrations must appear within the designated margins, and must be positioned within
the paper margins. Caption and number every illustration. All half-tone or color illustrations must be clear
when printed in black and white. Line drawings must be made in black ink on white paper.
· Do NOT include headers and footers. The page numbers, session numbers and conference identification
will be inserted automatically in a post processing step, at the time of printing the Proceedings.
· Apart from the paper in PDF format, authors can upload multimedia files to illustrate their submission.
Multimedia files can be used to include materials such as sound files or movies. The proceedings CD (DVD) will
NOT contain readers or players, so only widely accepted file formats should be used, such as MPEG, Windows WAVE
PCM (.wav) or Windows Media Video (.wmv), using only standard codecs to maximize compatibility. Authors must
ensure that they have sufficient author rights to the material that they submit for publication. Archives (RAR,
ZIP or ARJ format) are allowed. The archives will be unpacked on the CD (DVD), so that authors can refer to the
file name of the multimedia illustration from within their paper. The submitted files will be accessible from
the abstract card on the CD (DVD) and via a bookmark in the manuscript. We advise to use SHORT but meaningful
file names. The total unzipped size of the multimedia files should be reasonable. It is recommended that they do
not exceed 32 Megabytes.
· Although no copyright forms are required, the authors must agree that their contribution, when
accepted, will be archived by the Organizing Committee.
· Authors must proofread their manuscripts before submission and they must proofread the exact files
which they submit.
POSTERS AND PRESENTATIONS
Only electronic presentations are accepted. PowerPoint presentations can be supplied on CD, DVD, FD or USB
Flash drives.
Designated poster space will be wooden or felt boards. The space allotted to one speaker will measure 100 cm
(width) x 122 cm (height). Posters will be attached to the boards using pushpins. Pins will be provided.
Thanks for following all of these instructions carefully!
If you have any questions or comments concerning the submission, please don't hesitate to contact the
conference organizers.
Please address all technical issues or questions regarding paper submission or presentation to our technical
assistant Nikolay Bobrov.
CFP IEEE ASRU 2007
Automatic Speech Recognition and Understanding Workshop
The Westin Miyako Kyoto, Japan
December 9 -13, 2007
Conference website
The tenth biennial IEEE workshop on Automatic Speech Recognition and
Understanding (ASRU), held in cooperation with ISCA, will take place during
December 9-13, 2007. The ASRU workshops have a tradition of bringing together
researchers from academia and industry in an intimate and collegial
setting to discuss problems of common interest in automatic speech
recognition and understanding.
WORKSHOP TOPICS
Submissions are encouraged in all areas of human language technology,
with emphasis placed on:
- automatic speech recognition and understanding technology
- speech to text systems
- spoken dialog systems
- multilingual language processing
- robustness in ASR
- spoken document retrieval
- speech-to-speech translation
- spontaneous speech processing
- speech summarization
- new applications of ASR.
SUBMISSIONS FOR THE TECHNICAL PROGRAM
The workshop program will consist of invited lectures, oral and poster
presentations, and panel discussions. Prospective authors are invited
to submit full-length, 4-6 page papers, including figures and
references, to the ASRU 2007 website. All papers will be handled and
reviewed electronically. The website will provide you with further
details.
There is also a demonstration session, which has become another
highlight of the ASRU workshop. Demonstration proposals will be
handled separately.
Please note that the submission dates for papers are strict deadlines.
IMPORTANT DATES
Paper submission deadline July 16, 2007
Paper acceptance/rejection notification September 3, 2007
Demonstration proposal deadline September 24, 2007
Workshop advance registration deadline October 15, 2007
Workshop December 9-13, 2007
REGISTRATION AND INFORMATION
Registration will be handled via the ASRU 2007 website.
ORGANIZING COMMITTEE
General Chairs:
Sadaoki Furui (Tokyo Inst. Tech.)
Tatsuya Kawahara (Kyoto Univ.)
Technical Chairs:
Jean-Claude Junqua (Panasonic)
Helen Meng (Chinese Univ. Hong Kong)
Satoshi Nakamura (ATR)
Publication Chair:
Timothy Hazen, MIT, USA
Publicity Chair:
Tomoko Matsui, ISM, Japan
Demonstration Chair:
Kazuya Takeda, Nagoya U, Japan
Call for Papers (Preliminary version)
Speech Prosody 2008 Campinas, Brazil,
May 6-9, 2008
Speech Prosody 2008 will be the fourth conference in the series of international events organized by the ISCA Special Interest Group on Speech Prosody, starting with the one held in Aix-en-Provence, France, in 2002. The conferences in Nara, Japan (2004), and Dresden, Germany (2006) followed the proposal of biennial meetings, and now it is time to change place and hemisphere, taking up the challenge of offering a non-stereotypical view of Brazil. It is a great pleasure for our labs to host the fourth International Conference on Speech Prosody in Campinas, Brazil, the second major city of the State of São Paulo.
It is worth highlighting that prosody covers a multidisciplinary area of research involving scientists from very different backgrounds and traditions, including linguistics and phonetics, conversation analysis, semantics and pragmatics, sociolinguistics, acoustics, speech synthesis and recognition, cognitive psychology, neuroscience, speech therapy, language teaching, and related fields. Information: sp2008_info@iel.unicamp.br. Web site: http://sp2008.org.
We invite all participants to contribute papers presenting original research from all areas of speech prosody, especially, but not limited to, the following.
Scientific Topics
Prosody and the Brain
Long-Term Voice Quality
Intonation and Rhythm Analysis and Modelling
Syntax, Semantics, Pragmatics and Prosody
Cross-linguistic Studies of Prosody
Prosodic variability
Prosody in Discourse
Dialogues and Spontaneous Speech
Prosody of Expressive Speech
Perception of Prosody
Prosody in Speech Synthesis
Prosody in Speech Recognition and Understanding
Prosody in Language Learning and Acquisition
Pathology of Prosody and Aids for the Impaired
Prosody Annotation in Speech Corpora
Others (please, specify)
Organising institutions
Speech Prosody Studies Group, IEL/Unicamp | Lab. de Fonética, FALE/UFMG | LIACC, LAEL, PUC-SP
Important Dates
Call for Papers: May 15, 2007
Full Paper Submission: Sept. 30, 2007
Notif. of Acceptance: Nov. 30, 2007
Early Registration: Dec. 20, 2007
Conference: May 6-9, 2008
FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS
VOCOID » VOcalisation, COmmunication,
Imitation and Deixis in infant and adult human and non-human primates
Grenoble, France, May 14th-16th, 2007
ICP – Speech and Cognition Department of GIPSA-lab is organizing an
international workshop entitled "VOCOID: VOcalisation, COmmunication,
Imitation and Deixis in infant and adult human and non-human primates",
with the support of the EUROCORES programme OMLL ("On the Origin of Man,
Language and Languages") of the European Science Foundation.
Web site: workshop website
A description of this workshop can be found on the web site.
VOCOID will include 4 sessions with 6 invited oral communications each:
Session 1: "Vocalisations and Communication"
Session 2: "Gestures and Communication"
Session 3: "Learning, Emulation and Imitation"
Session 4: "Interaction with Primates and Primatoids"
Invited speakers (alphabetical order):
Christian Abry, Aude Billard,
Barbara Davis, Yiannis Demiris, Holger Diessel, Pier Francesco Ferrari,
Leonardo Fogassi, Susan Gathercole, Susan Goldin-Meadow, William
Hopkins, Atsushi Iriki, Frédéric Joulian, David Leavens, Elena Lieven,
Peter MacNeilage, Jacqueline Nadel, Chrystopher Nehaniv, Katie Slocombe,
Jean-Luc Schwartz, Jacques Vauclair, Virginia Volterra, Doug Whalen.
A poster session will be held on Monday, May 14th, 2007, from 12:00 noon
to 2:00 pm. If you wish to submit a poster, please send a title and
abstract of the poster (maximum 1 page, in English) to
Vocoid@icp.inpg.fr before April 23rd, 2007.
If you wish to attend the workshop, please send your completed
application form by e-mail to Vocoid@icp.inpg.fr or by fax + 33 4 76 82
43 35.
There will be no registration fee, and lunch will be offered to
attendees, subject to availability.
Scientific Committee
Christian Abry, Jean-Luc Schwartz, Jacques Vauclair
AFCP Seminar: Spoken language processing and under-resourced languages
Thursday, June 21, 2007; 10:00-16:00
IMAG / Maison Jean Kuntzmann
Domaine Universitaire de Saint-Martin d'Hères (Grenoble)
Access
Preliminary programme (list of talks):
- V. Berment (C&S / LIG): "Under-resourced languages: definition and issues for written and spoken language processing"
- Nimaan Abdillahi, P. Nocera (LIA): "Recent LIA developments in automatic speech recognition of Somali"
- Nathalie Vallée (GIPSA / Département Langage et Cognition): "Syllabic organisation of lexical units across languages"
- Thomas Pellegrini, L. Lamel: "Determining lexical units in under-resourced languages for automatic speech recognition"
- L. Besacier, V-B Le: "The CLIPS methodology for automatic speech recognition of under-resourced languages: application to Khmer, Vietnamese and dialectal Arabic"
- Pierrette Bouillon (Univ. Geneve, to be confirmed): "MedSLT: a multilingual spoken language translation system tailored for medical domains and its deployment for less-resourced languages"
- Dr. Pushpak Bhattacharyya (Department of Computer Science and Engineering, Indian Institute of Technology Mumbai, India): "Spoken Language Technologies for Indian Languages"
Registration: the seminar is free of charge, but registration is required
This seminar is organised by the AFCP
Join the AFCP
Last CFP-Fifth International Workshop on
Content-Based Multimedia Indexing, CBMI-2007
June 25-27, 2007, Bordeaux, France
The Workshop is supported by IEEE, EURASIP, European research networks
COST292 and Muscle, INRIA, CNRS, Region d'Aquitaine, University Bordeaux
1, IBM
Topics
Multimedia indexing and retrieval (image, audio, video, text)
Multimedia content extraction
Matching and similarity search
Construction of high level indices
Multi-modal and cross-modal indexing
Content-based search techniques
Multimedia data mining
Presentation tools
Meta-data compression and transformation
Handling of very large scale multimedia databases
Organisation, summarisation and browsing of multimedia documents
Applications
Evaluation and metrics
Paper submission
Prospective contributors are invited to submit papers via the conference
website
Submission of full paper (to be received by):
January 25, 2007
Notification of acceptance:
March 10, 2007
Submission of camera-ready papers:
April 10, 2007
Submission of extended versions for the special issue of JSPIC: March 1, 2007
Organizers
Chair of Organising Committee: Jenny Benois-Pineau, LABRI, University
Bordeaux 1, France
Technical Program Chair: Eric Pauwels, CWI, The Netherlands
CfP-14th International Conference on Systems, Signals and Image Processing,
IWSSIP 2007
and
6th EURASIP Conference Focused on Speech and Image Processing, Multimedia Communications and Services
EC-SIPMCS 2007
June 27 – 30, 2007, Maribor, Slovenia
UPDATE: WHAT IS NEW?
* Tutorials have been added. Please see the conference official web
site for an updated list of all tutorials.
* Take a look at the conference official web site for the paper
submission procedure and author registration information. Note that
the paper submission deadline is March 18, 2007. The online paper
submission and automatic notification systems are up and running.
Please register and log in to the system to upload your paper(s).
* Take care to make hotel reservations in advance. There are
other events in Maribor at the same time, so hotels may have a limited
number of rooms to offer. Look at the conference official web site for
accommodation possibilities.
* Four keynote speakers will give talks at the conference;
all of them are internationally recognized scientists in their research
areas. See the conference official web site for their bios and abstracts.
CALL FOR PAPERS
Download Call for Papers
IWSSIP is an International Conference on Systems, Signals and Image Processing which brings together researchers
and developers from both academia and industry to report on the latest scientific and theoretical advances,
to discuss and debate major issues and to demonstrate state-of-the-art systems.
The EURASIP conference, initiated by the European Association for Speech, Signal and Image Processing (EURASIP),
is focused on Speech and Image Processing, Multimedia Communications and Services (EC-SIPMCS).
The goal of EC-SIPMCS is to promote interaction among researchers involved in the development and application of
methods and techniques within the framework of speech/image processing, multimedia communications and services.
Topics of Interest
The program includes keynote and invited lectures by eminent international experts, peer reviewed contributed
papers, posters, invited sessions on the same or related topics, industrial presentations and exhibitions around
but not limited to the following topics for IWSSIP and EC-SIPMCS conferences:
• Signal Processing and Systems
• Artificial Intelligence Technologies
• ICT in E-learning/Consulting
• Standards and Related Issues
• Image Scanning, Display and Printing
• Video Streaming and Videoconferencing
• Digital Video Broadcasting (DVB)
• Watermarking and Encryption
• Implementation Technologies
• Applications Areas
• Speech and Audio Processing
• Image and Video Processing and Coding
• Audio, Image and Video Indexing and Retrieval
• Multimedia Signal Processing
• Multimedia Databases
• Multimedia and DTV Technologies
• Multimedia Communications, Networking, Services and Applications
• Multimedia Human-Machine Interface and Interaction
• Multimedia Content Processing and Content Description
• Multimedia Data Compression
• Multimedia Systems
Keynote speakers:
Prof. Dr. Kamisetty R. Rao, IEEE Fellow, University of Texas at Arlington, USA
Prof. Dr. Markus Rupp, Vienna University of Technology, Austria
Prof. Dr. Levent Onural, Bilkent University, Ankara, Turkey
Submission of Regular Papers
Papers must be submitted electronically by March 18, 2007. Each paper will be evaluated by at least two independent
reviewers, and will be accepted based on its originality, significance and clarity.
Publications
All accepted papers will be published in CD Proceedings that will be available at the Conference.
Abstracts of accepted papers will be printed and included in the INSPEC database. Selected papers
will be considered for possible publication in scholarly journals.
Tutorial and Special Sessions
Those willing to prepare a tutorial course or to organize a special session during the EC-SIPMCS 2007 and IWSSIP 2007 conferences should contact Dr. Peter Planinšič at ec2007@uni-mb.si.
Important Dates
Paper and Poster Submissions: March 18, 2007
Notification of acceptance: April 20, 2007
Early registration deadline: April 26, 2007
Camera ready copy due: May 6, 2007
Author Registration: May 6, 2007
Contact Information:
Fax: +386 2 220 7272
E-mail
Website
Žarko Čučej, General Chair
University of Maribor, Slovenia
Peter Planinšič, Program Chair
University of Maribor, Slovenia
International Workshop
Is a neural theory of language possible?
Development of unified representations in natural and artificial systems
organized by
CRIL
Centro di Ricerca Interdisciplinare sul Linguaggio
and
The CONTACT Project “Learning and development of Contextual Action”
University of Salento
Hotel President,
Lecce, June 28, 29, 30, 2007
Workshop website
Scientific issues and aims of discussion
The CONTACT project and CRIL are collaborating closely to build an integrated system for characterizing the motor processes of speech production by the simultaneous acquisition of data during articulation in many modalities: ultrasound, articulography, laryngography, and audiovisual recording. We seek to identify motoric and neural invariants that share a common structure during the development of perception and production for both speech and manipulation. The effectiveness of hypothesized invariants will be tested on an artificial learning system fed with the collected motoric data, which should autonomously develop its perceptual capabilities in speech recognition.
This collaboration prompted us to organize an interdisciplinary workshop, creating an opportunity for dialogue between first class scientists from the
disciplines of cognitive neuroscience, robotics and linguistics. The hope is that in the future, research in the field of cognitive neuroscience will mature
and converge to an integrated epistemological perspective, leading to the elaboration of a unified neural theory of language and motor control.
4th Joint Workshop on Machine Learning
and Multimodal Interaction (MLMI'07)
28-30 June 2007
Brno, Czech Republic
Hotel Continental,
a modern hotel located in a quiet part of
the city within walking distance from the city center. The local
organizers are members of the Faculty of Information Technology
at Brno University of Technology, which was
founded in 1899 as the Czech Technological University.
Organizing Committee
Honza Cernocky, Brno University of Technology (organization co-chair)
Andrei Popescu-Belis, University of Geneva (programme chair)
Steve Renals, University of Edinburgh (special sessions)
Pavel Zemcik, Brno University of Technology (organization co-chair)
45th Annual Meeting of the Association of Computational Linguistics
Prague, Czech Republic, June 23rd-30th, 2007
The conference is organized by the Institute of Formal and Applied
Linguistics, Faculty of Mathematics and Physics, Charles University in
Prague ("Univerzita Karlova v Praze"), Czech Republic, the oldest
University in Europe to the north of the Alps (founded in 1348).
General Chair of the Conference: John Carroll (University of Sussex, UK)
Programme Chairs: Annie Zaenen (PARC, U.S.A.)
Antal van den Bosch (Tilburg University, The Netherlands)
Local Arrangements Chair: Eva Hajičová (Charles University, Czech Republic)
Conference Secretary: Anna Kotěšovcová (Charles University, Czech Republic)
The topics of the papers cover substantial, original, and
unpublished research on all aspects of computational
linguistics, including, but not limited to: pragmatics, semantics,
syntax, grammars and the lexicon; phonetics, phonology and morphology;
lexical semantics and ontologies; word segmentation, tagging and
chunking; parsing; generation and summarization; language modeling,
spoken language recognition and understanding; linguistic, psychological
and mathematical models of language; document retrieval, question
answering, information extraction, and text mining; machine learning for
natural language; corpus-based modeling of language, discourse and
dialogue; multi-lingual processing, machine translation and translation
aids; multi-modal and natural language interfaces and dialogue systems;
applications, tools and resources; and evaluation of systems.
The following tutorials will be offered at ACL-07 in Prague,
June 24, 2007:
- Usability and Performance Evaluation for Advanced Spoken Dialogue
Systems (Michael McTear, Kristiina Jokinen)
- Nonparametric Structured Models (Percy Liang, Dan Klein)
- Textual Entailment (Ido Dagan, Fabio Massimo Zanzotto, Dan Roth)
- Quality Control of Corpus Annotation Through Reliability Measures
(Ron Artstein)
- From Web Content Mining to Natural Language Processing (Bing Liu)
The following 15 thematic workshops will be held at ACL 2007:
Two-day
* SemEval 2007: 4th International Workshop on Semantic Evaluations
1.5-day
* Joint Workshop on Entailment and Paraphrase and 3rd PASCAL
Recognizing Textual Entailment (RTE-3) Challenge
* Joint Workshop on Frontiers in Linguistically Annotated Corpora 2
(FLAC2) and Sixth NLPXML Workshop
One-day
* ACL SIGMORPHON Computational Research in Morphology and Phonology,
Special Theme: Computational Historical Linguistics
* NLP for Balto-Slavonic languages, Special Focus on IE
* Grammar-based approaches to spoken language processing
* Deep Linguistic Processing
* Cognitive Aspects of Computational Language Acquisition
* Embodied Language Processing
* BioNLP'07
* Second Workshop on Statistical Machine Translation
Half-day
* Computational Approaches to Semitic Languages: Common Issues and
Resources
* A Broader Perspective on Multiword Expressions
* Language Technology for Cultural Heritage Data
* 4th ACL-SIGSEM Workshop on Prepositions
The conference will take place in the TOP HOTEL Praha, located in the
quiet neighborhood of the Prague 4 district, only 15-20 minutes from the
historic center of Prague. The hotel can accommodate up to 1000
participants on-site (with a small number of dormitory rooms available
nearby). The hotel offers one auditorium, three large lecture rooms, a
number of smaller rooms for tutorials and workshops, several restaurants
and cafes, and plenty of open-air space for walks and informal discussions.
The conference banquet and a conference concert will take place in the
historic buildings in the city center -- one in the Municipal Hall
(built in the Art-nouveau style of the early 20th century) and the other
in the 14th century main University Hall.
For accommodation reservation, please go directly to the TOP
Hotel's reservation page. The Local Arrangements Committee
has negotiated a substantial reduction of prices for a block of rooms
available to ACL participants: For these prices to apply, please use
the "code" "the ACL 2007 Congress" in the section named
"Detailed information, comments and desires" on the reservation page.
For further information see the conference web site
CFP-4th Joint Workshop on Machine Learning and Multimodal Interaction (MLMI'07)
28-30 June 2007
Brno, Czech Republic
website
MLMI brings together researchers from the different communities working
on the common theme of advanced machine learning algorithms applied to multimodal human-human and human-computer
interaction. The motivation for creating this joint multi-disciplinary workshop arose from the actual needs of several
large collaborative projects.
MLMI'07 will follow on directly from the annual conference of the Association for Computational
Linguistics (ACL/EACL 2007), which will take place in Prague on June 25-27, 2007.
Important dates
Submission of full papers: 23 February 2007
Submission of extended abstracts: 23 March 2007
Submission of demonstration proposals: 23 March 2007
Acceptance decisions: 17 April 2007
Workshop: 28-30 June 2007
Workshop topics
MLMI'07 will feature talks (including a number of invited speakers), posters and demonstrations.
Prospective authors are invited to submit proposals in the following areas of interest, related to
machine learning and multimodal interaction:
- human-human communication modeling
- human-computer interaction modeling
- speech processing
- image and video processing
- multimodal processing, fusion and fission
- multimodal discourse and dialogue modeling
- multimodal indexing, structuring and summarization
- annotation and browsing of multimodal data
- machine learning algorithms and their applications to the topics above
Satellite events
MLMI'07 will feature special sessions and satellite events such as the Summer school of the
European Masters in Speech and Language (http://www.cstr.ed.ac.uk/emasters/) and the PASCAL Speech
Separation Challenge II (http://homepages.inf.ed.ac.uk/mlincol1/SSC2/). To propose other special
sessions or satellite events for MLMI'07, please contact the organizing committee.
Guidelines for submission
In common with the previous MLMI workshops, revised versions of selected papers
will be published in Springer's Lecture Notes in Computer Science series (cf. LNCS 3361, 3869, 4299).
Submissions are invited in one of the following formats:
- full papers for oral or poster presentation (12 pages)
- extended abstracts for poster presentation only (1-2 pages)
- demonstration proposals (1-2 pages)
Please submit PDF files using the submission website ,
following the Springer LNCS format
for proceedings and other multiauthor volumes.
Venue
Brno is the second largest city in the Czech Republic and the capital of Moravia. Brno has been a royal city since
1347 and is the country's judiciary and trade-fair center. With a population of almost four hundred thousand and
its six universities, Brno is also the cultural center of the region.
Brno can be easily reached by direct flights from Prague, London and Munich and by trains or buses from Prague
(200 km) or Vienna (130 km).
MLMI'07 will take place at the Hotel Continental (http://www.continentalbrno.cz), a modern hotel located in a
quiet part of the city within walking distance from the city center. The local organizers are members of the
Faculty of Information Technology at Brno University of Technology, which was founded
in 1899 as the Czech Technological University.
Organizing Committee
Honza Cernocky, Brno University of Technology (organization co-chair)
Andrei Popescu-Belis, University of Geneva (programme chair)
Steve Renals, University of Edinburgh (special sessions)
Pavel Zemcik, Brno University of Technology (organization co-chair)
FIRST CALL FOR PAPERS:
ICPhS 2007 Satellite Workshop on Speaker Age
Saarland University, Saarbruecken, Germany
August 4, 2007
Website
Submission Deadline: April 15, 2007
SCOPE:
This workshop is dedicated to current research on speaker age, a
speaker-specific quality which is always present in speech. Although
researchers have investigated several aspects of speaker age,
numerous questions remain, including (1) the accuracy by which human
listeners and automatic recognizers are able to judge child and adult
speaker age from speech samples of different types and lengths, (2)
the acoustic and perceptual features (and combination of features)
which contain the most important age-related information, and (3) the
optimal methods for extracting age-related features and integrating
speaker age into speech technology and forensic applications. The
purpose of the workshop is to bring together participants from
divergent backgrounds (e.g. forensics, phonetics, speech therapy and
speech technology) to share their expertise and results. Further
information can be found on the workshop website.
TOPICS:
The topics cover, among others, the following issues:
- methods and tools to identify acoustic correlates of speaker age
- systems which automatically recognize (or estimate) speaker age
- studies on the human perception of speaker age
- projects on the synthesis of speaker age
SUBMISSION:
If you are interested in contributing to the workshop, please send an
extended abstract to both of the organizers
Christian Mueller and Susanne Schötz by April 15, 2007. Contributions on work in progress are
specifically encouraged. The abstract does not have to be formatted.
Feel free to send .doc, .pdf, .txt or .tex files.
CFP-Interdisciplinary Workshop on "The Phonetics of Laughter"
5 August 2007
Saarbrücken, Germany
Website
Aim of the workshop
Research investigating the production, acoustics and perception of
laughter is very rare. This is striking because laughter occurs as an
everyday and highly communicative phonetic activity in spontaneous
discourse. This workshop aims to bring researchers together from various
disciplines to present their data, methods, findings, research
questions, and ideas on the phonetics of laughter (and smiling).
The workshop will be held as a satellite event of the 16th International
Congress of Phonetic Sciences in Saarbrücken,
Germany.
Papers
We invite submission of short papers of approximately 1500 words.
Oral presentations will be 15 minutes plus 5 minutes discussion time.
Additionally, there will be a poster session.
All accepted papers will be available as on-line proceedings on the web;
there will be no printed proceedings. We plan to publish selected papers.
Submissions
All submissions will be reviewed anonymously by two reviewers.
Please send submissions by e-mail to laughter@coli.uni-sb.de specifying
"short paper" in the subject line and providing
1. for each author: name, title, affiliation in the body of the mail
2. Title of paper
3. Preference of presentation mode (oral or poster)
4. Short paper as plain text
In addition you can submit audio files (as wav), graphical files (as
jpg) and video clips (as mpg). All files together should not exceed 1 Mb.
Important dates
Submission deadline for short papers: March 16, 2007
Notification of acceptance: May 16, 2007
Early registration deadline: June 16, 2007
Workshop dates: August 5, 2007
Plenary lecture
Wallace Chafe (University of California, Santa Barbara)
Organisation Committee
Nick Campbell (ATR, Kyoto)
Wallace Chafe (University of California, Santa Barbara)
Jürgen Trouvain (Saarland University & Phonetik-Büro Trouvain, Saarbrücken)
Location
The laughter workshop will take place in the Centre for Language
Research and Language Technology on the campus of the Saarland
University in Saarbrücken, Germany. The campus is located in the woods
and is 5 km from the town centre of Saarbrücken.
Contact
Jürgen Trouvain
Saarland University
FR. 4.7: Computational Linguistics and Phonetics
Building C7.4
Postfach 15 11 50
66041 Saarbrücken
Germany
16th International Congress of Phonetic Sciences
Saarland University, Saarbrücken, 6-10 August 2007. The
first call for papers was made in April 2006. The deadline for
full-paper submission to ICPhS 2007 Germany was February 2007.
Further information is available under conference website
ParaLing'07: International workshop on
"Paralinguistic speech - between models and data"
Thursday 2 - Friday 3 August 2007
Saarbrücken, Germany
Workshop website
in association with the
16th International Congress of Phonetic Sciences,
Saarbrücken, Germany, 6-10 August 2007
Summary of the call for participation
This two-day workshop is concerned with the general area of
paralinguistic speech, and will place special emphasis on attempts to
narrow the gap between "models" (usually built making strong simplifying
assumptions) and "real data" (usually showing a high degree of
complexity).
Papers are invited in a broad range of topics related to paralinguistic
speech. Papers can be submitted for oral or poster presentation;
acceptance for oral presentation is more likely for papers that
explicitly address the general theme of the workshop, i.e. "bridging"
issues.
There are at least two different versions of bridging: a weak one and a
strong one. The weak, more modest one aims at better mutual
understanding; the strong one at profiting from each other's work. We do
not know yet whether, after these two days, we will really be able to
profit from each other in our own work; however, we do hope that we will
have reached a level of mutual understanding that will make future
co-operation easier.
WORKSHOP THEME
Research on various aspects of paralinguistic and extralinguistic speech
has gained considerable importance in recent years. On the one hand,
models have been proposed for describing and modifying voice quality and
prosody related to factors such as emotional states or personality. Such
models often start with high-intensity states (e.g., full-blown
emotions) in clean lab speech, and are difficult to generalise to
everyday speech. On the other hand, systems have been built to work with
moderate states in real-world data, e.g. for the recognition of speaker
emotion, age, or gender. Such models often rely on statistical methods,
and are not necessarily based on any theoretical models.
While both research traditions are obviously valid and can be justified
by their different aims, it seems worth asking whether there is anything
they can learn from each other. For example: "Can models become more
robust by incorporating methods used for dealing with real-world data?";
"Can recognition rates be improved by including ideas from theoretical
models?"; "How would a database need to be structured so that it can be
used for both, research on model-based synthesis and research on
recognition?" etc.
While the workshop will be open to any kind of research on
paralinguistic speech, the workshop structure will support the
presentation and creation of cross-links in several ways:
- papers with an explicit contribution to cross-linking issues will
stand a higher chance of being accepted as oral papers;
- sessions and proceedings will include space for peer comments and
answers from authors;
- poster sessions will be organised around cross-cutting issues rather
than traditional research fields, where possible.
We therefore encourage prospective participants to place their research
into a wider perspective. This can happen in many ways; as
illustrations, we outline two possible approaches.
1. In application-oriented research, such as synthesis or recognition, a
guiding principle could be the requirements of the "ideal" application:
for example, the recognition of finely graded shades of emotions, for
all speakers in all situations; or fully natural-sounding synthesis with
freely specifiable expressivity; etc. This perspective is likely to
highlight the hard problems of today's state of the art, and a
cross-cutting perspective may lead to innovative approaches yielding
concrete steps to reduce the distance towards the "ideal".
2. A second illustration of attaining a wider perspective would be to
attempt to cross-link work in generative modelling (e.g., expressive
speech synthesis) and analysis (e.g., recognition of expressivity from
speech). Researchers on generation are invited to investigate the
relevance of their work for analysis, and vice versa. What
methodologies, corpora or descriptive inventories exist that could be
shared between analysis and generation, or at least mapped onto each
other? If certain parameters have proven to be relevant in one area, to
what degree is it possible to transfer them to the other area? Issues of
relevance in this area may include, among other things, personalisation,
speaker dependency vs. independency, links between voice conversion in
synthesis and speaker calibration in (automatic) recognition or (human)
perception, etc.
TOPICS
Papers are invited in all areas related to paralinguistic speech,
including, but not limited to, the following topics:
- prosody of paralinguistic speech
- voice quality and paralinguistic speech
- synthesis of paralinguistic speech (model-based, data-driven, ...)
- recognition/classification of paralinguistic properties of speech
- analysis of paralinguistic speech (acoustics, physiology, ...)
- assessment and perception of paralinguistic speech
- typology of paralinguistic speech (emotion, expression, attitude,
physical states, ...)
While all papers must be related to paralinguistic speech, papers
making the link with a related area, e.g. investigating the interaction
of the speech signal with the meaning of the verbal content, are
explicitly welcome.
IMPORTANT DATES
1st call for papers 1 December 2006
2nd call for papers 1 February 2007
Deadline for full-paper submission 23 April 2007 (extended deadline!)
Notification of acceptance 1 June 2007
Final version of accepted papers 15 June 2007
Workshop 2-3 August 2007
LOCATION AND REGISTRATION FEES
The workshop will take place at DFKI on the campus of Saarland
University, Germany; on the same campus, the International Conference
of Phonetic Sciences will take place during the following week.
Workshop registration fees: to be confirmed, but expected to be around 150 EUR
SUBMISSIONS
The workshop will consist of oral and poster presentations. Submitted
papers will stand a higher chance of being accepted as oral
presentations when the relevance to the workshop theme is evident.
Final submissions should be 6 pages long, and must be in English.
Word, LaTeX and OpenOffice templates will be made available on the workshop
website.
ORGANISING COMMITTEE
Marc Schröder, DFKI GmbH, Saarbrücken, Germany
Anton Batliner, University of Erlangen-Nürnberg, Germany
Christophe d'Alessandro, LIMSI, Paris, France
SSP 2007 CfP - IEEE Statistical Signal Processing Workshop (SSP)
The Statistical Signal Processing (SSP) workshop, sponsored by the
IEEE Signal Processing Society, brings members of the IEEE Signal
Processing Society together with researchers from allied fields
such as statistics and bioinformatics. The scope of the workshop
includes basic theory, methods and algorithms, and applications of
statistics in signal processing.
Topics
Theoretical topics:
- Adaptive systems and signal processing
- Monte Carlo methods
- Detection and estimation theory
- Learning theory and pattern recognition
- Multivariate statistical analysis
- System identification and calibration
- Time-frequency and time-scale analysis
Application areas:
- Bioinformatics and genomic signal processing
- Automotive and industrial applications
- Array processing, radar, and sonar
- Communication systems and networks
- Sensor networks
- Information forensics and security
- Biosignal processing and medical imaging
- New methods, directions, and applications
Date and venue
The workshop will be held on August 26-29, 2007, in Madison,
Wisconsin, a vibrant city situated on a narrow isthmus between two
large lakes. The workshop will be held at the spectacular
Frank Lloyd Wright-inspired Monona Terrace Convention Center.
Plenary lecturers include:
- William Freeman (MIT)
- Emmanuel Candes (Caltech)
- George Papanicolaou (Stanford)
- Nir Friedman (Hebrew University)
- Richard Davidson (Univ. of Wisconsin)
How to submit
Paper submission: Prospective authors are invited to submit
extended summaries of not more than three (3) pages, including
results, figures, and references. Papers will be accepted only by
electronic submission, by email, starting March 1, 2007.
Important dates
Submission of 3-page extended summary: April 1, 2007
Notification of acceptance: June 1, 2007
Submission of 5-page camera-ready papers: July 1, 2007
The workshop will include a welcoming reception, banquet, technical
poster sessions, several special invited sessions, and several
plenary lectures.
Further information available on conference web.
CfP-Speech and Audio Processing in Intelligent Environments
Special Session at Interspeech 2007, Antwerp, Belgium
Ambient Intelligence (AmI) describes the vision of technology that is invisible, embedded in our
surroundings and present whenever we need it. Interacting with it should be simple and effortless.
The systems can think on their own and can make our lives easier with subtle or no direction.
Since the early days of this computing and interaction paradigm, speech has been considered a
major building block of AmI. The purpose of speech and audio processing is twofold:
• Support of explicit interaction: Speech as an input/output modality that facilitates the
aforementioned simple and effortless interaction, preferably in cooperation with other
modalities like gesture.
• Support of implicit interaction: Speech, and acoustic signals in general, as a source of context
information providing valuable cues, such as "who speaks when and where", to
be utilized in systems that are context-aware, personalized, adaptive, or even anticipatory.
Goal
The goal of this special session is to give an overview of major achievements, but also to
highlight major challenges. Does the state of the art in speech and audio processing meet the high
expectations expressed in AmI scenarios, and will it ever do so? We would also like to address
the perspectives and promising concepts for the future. The session
will consist of an introduction in the form of a short tutorial, followed by the presentation of
contributed papers, and will conclude with a panel discussion.
Submission
Researchers who are interested in contributing to this special session are invited to submit a paper
according to the regular submission procedure of INTERSPEECH 2007, and to select “Speech
and Audio Processing in Intelligent Environments” as the topic of their first choice. The paper
submission deadline is March 23, 2007.
Topics
The subjects to be covered include, but are not restricted to:
- Speech and audio processing for context acquisition (e.g. online speaker change detection
and tracking, acoustic scene analysis, audio partitioning and labelling)
- Ubiquitous speech recognition (e.g. ASR with distant microphones, distributed speech
recognition)
- Context-aware and personalized speech processing (e.g. in spoken dialogue processing,
acoustic and language modelling)
- Speech processing for intelligent systems (e.g. descriptions of prototypes, projects)
Contacts
Session organizers:
Prof. Dr. Reinhold Haeb-Umbach
Department of Communications Engineering
University of Paderborn, Germany
Prof. Dr. Zheng-Hua Tan
Department of Electronic Systems
Aalborg University, Denmark
CfP :The 2007 International Workshop on
Intelligent Systems and Smart Home (WISH-07)
Conference website
Niagara Falls, Canada, August 28-September 1, 2007
In Conjunction with The 5th International Symposium on Parallel and
Distributed Processing and Applications (ISPA-07)
http://www.cs.umanitoba.ca/~ispa07/
Workshop Overview
Smart Home Environments (SHE) are emerging rapidly as an exciting new
paradigm, encompassing ubiquitous, grid, and peer-to-peer computing to provide
computing and communication services any time and anywhere.
Our workshop is intended to foster the dissemination of state-of-the-art
research in the area of SHE, including intelligent systems, security services,
business models, and novel applications associated with its utilization.
The goal of this Workshop is to bring together the researchers from academia
and industry as well as practitioners to share ideas, problems and solutions
relating to all aspects of intelligent systems and smart home.
We invite authors to submit papers on any aspect of intelligent systems /
smart home research and practice. All papers will be peer reviewed, and
those accepted for the workshop will be included in a proceedings volume
published by Springer-Verlag.
Workshop Topics (include but are not limited to the following)
I. Intelligent Systems
- Ubiquitous and Artificial Intelligence
- Environment sensing / understanding
- Information retrieval and enhancement
- Intelligent data analysis and e-mail processing
- Industrial applications of AI
- Knowledge acquisition, engineering, discovery and representation
- Machine learning and translation
- Mobile / Wearable intelligence
- Natural language processing
- Neural networks and intelligent databases
- Data mining and Semantic web
- Computer-aided education
- Entertainment
- Metrics for evaluating intelligent systems
- Frameworks for integrating AI and data mining
II. Smart Home
- Wireless sensor networks (WSN) / RFID application for SH
- Smart Space (Home, Building, Office) applications and services
- Smart Home network middleware and protocols
- Context Awareness for Smart Home Services
- Multimedia Security and Services in SH
- Security Issues for SHE
- Access control and Privacy Protection in SH
- Forensics and Security Policy in SH
- WSN / RFID Security in SH
- Commercial and industrial system & application for SH
Important Dates
Paper Submission deadline March 31, 2007
Acceptance notification May 21, 2007
Camera-ready due June 01, 2007
Workshop date August 28 - September 1, 2007
Organization
* Steering Co-Chairs
Laurence T. Yang, St Francis Xavier University, Canada
Minyi Guo, University of Aizu, Japan
* General Co-chairs
Ching-Hsien Hsu,
Chung Hua University, Taiwan
Jong Hyuk Park
Hanwha S&C Co., Ltd., Korea
* Program Co-chairs
Cho-Li Wang
The University of Hong Kong, Hong Kong
Gang Pan
Zhejiang University, China
Proceeding & Special Issue
The workshop proceedings will be published by Lecture Notes in Computer
Science (LNCS).
Papers not exceeding 12 pages, in free layout style, should be submitted
through the website.
Submission of a paper should be regarded as a commitment that, if the paper
is accepted, at least one of the authors will register and attend the
conference. Otherwise the paper will be removed from the LNCS digital library.
A selection of the best papers will be published in a special issue of
Information Systems Frontiers (ISF) and International Journal of Smart Home
(IJSH), respectively.
Contact
For further information regarding the WISH-07 and paper submission, please
contact ASWAN '07 Cyber-chair or
Prof. Hsu.
CfP-Special session at Interspeech 2007:
Novel techniques for the NATO non-native military air traffic
controller database (nn-matc)
Following a series of special interest sessions and (satellite)
workshops, at Lisbon (1995), Leusden (NL, 1999) and Aalborg (2001),
the NATO research task group on speech and language technology, RTO
IST031-RTG013, organizes a special session at Interspeech 2007. After
having studied various aspects of speech in noise, speech under
stress, and non-native speech, the research task group has been
studying the effects of all of these factors on various speech
technologies.
To this end, the task group has collected a corpus of military Air
Traffic Control communication in Belgian air space. This speech
material consists predominantly of non-native English speech, under
varying noise and channel conditions. The database has been annotated
at several levels:
* word transcriptions, which allow research to be conducted on
automatic speech recognition and named entity extraction,
* speaker turns, identified by call signs, allowing for research
in speaker recognition and clustering and tracking of
conversations.
The database consists of 16 hours of training speech, plus one hour of
development and evaluation test sets.
The NATO research task group is making this annotated speech database
available for speech researchers, who want to develop novel algorithms
for this challenging material. These new algorithms could include
noise-robust speaker recognition, robust speaker and accent adaptation
for ASR, and context driven named entity detection. In order to
facilitate a common task, we have written a suggested test and
evaluation plan to guide researchers. At the special session we will
discuss research results on this common data set.
More information on the special session, the database and the
evaluation plan can be found on the website.
Submission
Researchers who are interested in contributing to this special session
are invited to submit a paper according to the regular submission
procedure of INTERSPEECH 2007, and to select `Novel techniques for the
NATO non-native Air Traffic Control database' in the special session
field of the paper submission form. The paper submission deadline is
March 23, 2007.
Contact
Session organizer:
David van Leeuwen
TNO Human Factors
P. O. Box 23
3769 ZG Soesterberg
The Netherlands
CfP:
Structure-Based and Template-Based Automatic Speech Recognition -
Comparing parametric and non-parametric approaches
Special Session at INTERSPEECH 2007, Antwerp, Belgium
While hidden Markov modeling (HMM) is the dominant technology for
acoustic modeling in automatic speech recognition today, many of its
weaknesses have also been well known and they have become the focus of
much intensive research. One prominent weakness in current HMMs is the
handicap in representing long-span temporal dependency in the acoustic
feature sequence of speech, which, nevertheless, is an essential
property of speech dynamics. The main cause of this handicap is the
conditional IID (Independent and Identical Distribution) assumption
inherent in the HMM formalism. Furthermore, in the standard HMM approach
the focus is on verbal information. However, experiments have shown that
non-verbal information also plays an important role in human speech
recognition which the HMM framework has not attempted to address
directly. Numerous approaches have been taken over the past dozen years
to address the above weaknesses of HMMs. These approaches can be broadly
classified into the following two categories.
The first, parametric, structure-based approach establishes mathematical
models for stochastic trajectories/segments of speech utterances using
various forms of parametric characterization, including polynomials,
linear dynamic systems, and nonlinear dynamic systems embedding hidden
structure of speech dynamics. In this parametric modeling framework,
systematic speaker variation can also be satisfactorily handled. The
essence of such a hidden-dynamic approach is that it exploits knowledge
and mechanisms of human speech production so as to provide the structure
of the multi-tiered stochastic process models. A specific layer in this
type of models represents long-range temporal dependency in a parametric
form.
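As a toy illustration of this first, structure-based approach, the following sketch (in Python; all parameter values are invented for illustration and are not taken from any published system) generates a segment from a simple linear dynamic model: a hidden state evolves smoothly towards a phone-dependent target, so successive frames depend on each other rather than being conditionally IID as in an HMM.

import numpy as np

rng = np.random.default_rng(0)

def sample_segment(target, x0, n_frames, a=0.9, obs_dim=13):
    # Hidden dynamics: x_t = a * x_{t-1} + (1 - a) * target + noise,
    # a first-order linear system moving smoothly towards its target.
    # Observation: y_t = C @ x_t + noise (a noisy linear map of the state).
    state_dim = len(target)
    C = rng.standard_normal((obs_dim, state_dim)) * 0.1  # observation matrix
    x = np.array(x0, dtype=float)
    states, frames = [], []
    for _ in range(n_frames):
        x = a * x + (1 - a) * np.asarray(target) + rng.normal(0.0, 0.01, state_dim)
        states.append(x.copy())
        frames.append(C @ x + rng.normal(0.0, 0.05, obs_dim))
    return np.array(states), np.array(frames)

# Successive frames share the evolving hidden state, so the model captures
# long-span trajectory structure within a segment.
states, frames = sample_segment(target=[1.0, -0.5, 0.2], x0=[0.0, 0.0, 0.0], n_frames=30)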
The second, non-parametric and template-based approach to overcoming the
HMM weaknesses involves direct exploitation of speech feature
trajectories (i.e., 'template') in the training data without any
modeling assumptions. Due to the dramatic increase of speech databases
and computer storage capacity available for training, as well as the
exponentially expanded computational power, non-parametric methods using
the traditional pattern recognition techniques of kNN
(k-nearest-neighbor decision rule) and DTW (dynamic time warping) have
recently received substantial attention. Such template-based methods
have also been called exemplar-based or data-driven techniques in the
literature.
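For concreteness, here is a minimal sketch (in Python; purely illustrative, not a description of any particular published system) of the DTW-plus-kNN matching that underlies such template-based recognition: a test utterance is aligned frame-by-frame against each stored template, and a k-nearest-neighbour vote over the resulting distances assigns the label.

import numpy as np

def dtw_distance(template, test):
    # template, test: arrays of shape (n_frames, n_features).
    # cost[i, j] = best cumulative distance aligning the first i template
    # frames with the first j test frames.
    n, m = len(template), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - test[j - 1])  # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a template frame
                                 cost[i, j - 1],      # skip a test frame
                                 cost[i - 1, j - 1])  # align the two frames
    return float(cost[n, m])

def knn_label(test, labelled_templates, k=3):
    # k-nearest-neighbour decision rule over DTW distances to all templates;
    # labelled_templates is a list of (template_array, label) pairs.
    ranked = sorted(labelled_templates, key=lambda t: dtw_distance(t[0], test))
    top = [label for _, label in ranked[:k]]
    return max(set(top), key=top.count)  # majority vote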
The purpose of this special session is to bring together researchers who
have special interest in novel techniques that are aimed at overcoming
weaknesses of HMMs for acoustic modeling in speech recognition. In
particular, we plan to address issues related to the representation and
exploitation of long-range temporal dependency in speech feature
sequences, the incorporation of fine phonetic detail in speech
recognition algorithms and systems, comparisons of pros and cons between
the parametric and non-parametric approaches, and the computational
resource requirements of the two approaches.
This special session will start with an oral presentation providing an
introduction to the topic, a short overview of the issues
involved, the directions that have already been taken, and possible new
approaches. At the end there will be a panel discussion, and in between
the contributed papers will be presented.
Session organizers:
Li Deng
Helmer Strik
Information about this special session can also be found at the
Interspeech Website
or at the Special session website
Machine Learning for Spoken Dialogue Systems:
Special Session at INTERSPEECH 2007, Antwerp, Belgium
Submission deadline: 23 March 2007
Interspeech 2007 website
During the last decade, research in the field of Spoken Dialogue
Systems (SDS) has grown considerably. Yet the design and
optimization of SDS does not simply involve combining speech and
language processing systems such as Automatic Speech Recognition
(ASR), parsers, Natural Language Generation (NLG), and Text-to-Speech
(TTS) synthesis. It also requires the development of dialogue
strategies taking into account the performances of these subsystems,
the nature of the dialogue task (e.g. form filling, tutoring, robot
control, or search), and the user's behaviour (e.g. cooperativeness,
expertise). Currently, statistical learning techniques are emerging
for training and optimizing speech recognition, parsing, and
generation in SDS, depending on representations of context. Automatic
learning of optimal dialogue strategies is also a leading research
topic.
Among machine learning techniques for dialogue strategy optimization,
Reinforcement Learning using Markov Decision Processes (MDPs) and
Partially Observable MDP (POMDPs) has become a particular focus. One
concern for such approaches is the development of appropriate dialogue
corpora for training and testing.
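To make the reinforcement-learning formulation concrete, the toy sketch below (in Python) optimises a dialogue strategy with tabular Q-learning on a hand-made two-slot form-filling MDP; the state space, action set, reward values and user-simulation probabilities are all invented for illustration and are not taken from any system discussed here.

import random

ACTIONS = ["ask_slot1", "ask_slot2", "confirm"]
Q = {}  # tabular Q-values, keyed by (state, action)

def simulated_user(state, action):
    # Simulated user/ASR: asking for an unfilled slot succeeds with
    # probability 0.8; confirming ends the dialogue, with a positive
    # reward only if both slots have been filled.
    s1, s2 = state
    if action == "confirm":
        return state, (20.0 if s1 and s2 else -5.0), True
    if action == "ask_slot1":
        s1 = s1 or random.random() < 0.8
    if action == "ask_slot2":
        s2 = s2 or random.random() < 0.8
    return (s1, s2), -1.0, False  # per-turn penalty favours short dialogues

alpha, gamma, epsilon = 0.1, 0.95, 0.1
for episode in range(5000):
    state, done = (False, False), False
    while not done:
        if random.random() < epsilon:  # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        next_state, reward, done = simulated_user(state, action)
        best_next = 0.0 if done else max(Q.get((next_state, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state

# The greedy policy learned here asks for the missing slots first and
# confirms only once both are filled, trading dialogue length against
# task success exactly as the reward function dictates.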
Dialogue simulation is often required to expand existing corpora and
so spoken dialogue simulation has become a research field in its
own right. Other areas of interest are statistical approaches in
context-sensitive speech recognition, trainable NLG, and statistical
parsing for dialogue.
The purpose of this special session is to offer the opportunity to the
international community concerned with these topics to share ideas and
have constructive discussions in a single, focussed, special
conference session.
Submission instructions
Researchers who are interested in contributing to this special session
are invited to submit a paper according to the regular submission
procedure of INTERSPEECH 2007, and to select "Machine Learning for
Spoken Dialogue Systems" in the special session field of the paper
submission form. The paper submission deadline is March 23, 2007.
The subjects to be covered include, but are not restricted to:
* Reinforcement Learning of dialogue strategies
* Partially Observable MDPs in dialogue strategy optimization
* Statistical parsing in dialogue systems
* Machine learning and context-sensitive speech recognition
* Learning and NLG in dialogue
* User simulation techniques for strategy learning and testing
* Corpora and annotation for machine learning approaches to SDS
* Machine learning for multimodal interaction
* Evaluation of statistical approaches in SDS
Contact
Session organizers:
Oliver Lemon,
Edinburgh University
School of Informatics
Olivier Pietquin
SUPELEC - Metz Campus
IMS Research Group
Metz
CALL FOR PAPERS:
"Speech and language technology for less-resourced languages"
Two-hour Special Session at INTERSPEECH 2007, Antwerp, Belgium
Interspeech website
Special Session website
Speech and language technology researchers who work on less-resourced
languages often have very limited access to funding, equipment and software.
This makes it all the more important for them to come together to share best
practice, in order to avoid a duplication of effort. This special session
will therefore be devoted to speech and language technology for
less-resourced languages.
In view of the limited resources available to the targeted researchers, there
will be a particular emphasis on "free" software, which may be either
open-source or closed-source. However, submissions are also invited from
those using commercial software.
Topics may include (but are not limited to) the following:
* Examples of systems built using free or purpose-built software
(possibly with demonstrations).
* Presentations of bugs in free software, and strategies for dealing
with them.
* Presentations of additions and enhancements made to the software by
a research group.
* Presentations of linguistic challenges for a particular less-resourced
language.
* Descriptions of desired features for possible future implementation.
Submission
Researchers who are interested in contributing to this special session are
invited to submit either a paper or a demo or both, as follows.
1. Papers can be submitted by proceeding according to the regular
submission procedure of Interspeech 2007 and selecting "Speech and language
technology for less-resourced languages" as the topic of your first choice.
The paper submission deadline is March 23, 2007.
2. We offer a light submission procedure for demos. (Please note: unlike
regular papers, texts submitted with a demo will NOT be published in the
proceedings, but will be made available for download from the
SALTMIL website). In this case, please submit a short
description of the system demonstrated, the demo, required materials for the
demo, and references, to the first of the session organisers (see below) and
to special_sessions@interspeech2007.org before April 27, 2007. Demo
submission texts should be formatted in accordance with the Interspeech 2007
author kit, and should be between 1 and 4 pages in length.
Session organisers
Dr Briony Williams
Language Technologies Unit, Canolfan Bedwyr, University of Wales, Bangor, UK
Email: b.williams@bangor.ac.uk
Dr Mikel Forcada
Departament de Llenguatges i Sistemes Informàtics, Universitat d'Alacant,
E-03071 Alacant, Spain
Email: mlf@dlsi.ua.es
Dr Kepa Sarasola
Dept of Computer Languages, Univ of the Basque Country, PK 649 20080
Donostia, Basque Country, Spain
Email: ksarasola@si.ehu.es
Important dates
Four-page paper deadline:
March 23, 2007
Demo submission deadline:
April 27, 2007
Notification of acceptance:
May 25, 2007
Early registration deadline:
June 22, 2007
Main Interspeech conference:
August 28-31, 2007
SYNTHESIS OF SINGING challenge
Special Session at INTERSPEECH 2007, Antwerp, Belgium
Tuesday afternoon, August 28, 2007
Webpage > Special Sessions > Synthesis
of Singing Challenge
Organized by Gerrit Bloothooft, Utrecht University, The Netherlands
Singing is perhaps the most expressive use of the human voice and speech.
An excellent singer, whether in classical opera, musical, pop, folk
music, or any other style, can express a message and emotion so
intensely that it moves and delights a wide audience. Synthesizing
singing may therefore be considered the ultimate challenge to our
understanding and modeling of the human voice. In this two-hour interactive
special session of INTERSPEECH 2007 on synthesized singing, we hope to
present an enjoyable demonstration of the current state of the art, and
we challenge you to contribute!
Topics
The session will be special in many ways:
* Participants have to submit a composition of their own choice, and
they have to produce their own version of a compulsory musical
score.
* During the special session, each participant will be allowed to
demonstrate the free and compulsory composition, with additional
explanation.
* Contributions will be commented on by a panel consisting of
synthesis experts and singers, and by the audience.
* Evaluative statements will be voted on by everyone, if possible
using a voting box system.
* The most preferred system will be allowed to play its
demonstration during the closing session of the conference.
Submission
If you are interested in joining the challenge, you are invited to submit a
paper on your system and to include an example composition of your own
choice (in .wav format) within the regular submission procedure of
INTERSPEECH 2007, and to select "Synthesis of Singing Challenge" for
Special Session. The deadline is March 23, 2007.
We also offer a light submission procedure that will not result in a
regular peer reviewed paper in the Proceedings. In that case you can
submit the composition of your own choice in .wav format to the session
organizer (see below) before April 27, 2007. See the website for more
details.
The composition may have a maximum duration of two minutes; no
accompaniment is allowed. There are no restrictions with respect to the
synthesis method used, which may range from synthesis of singing by
rule, articulatory modelling, sinusoidal modelling, unit selection, to
voice conversion (include the original in your two-minute demo as well).
All accepted contributors (notification on May 25) will be required to
produce their own version of a musical score published by July 1, 2007.
The corresponding sound file should be sent as a .wav file to the
session organizer (see below) before August 21, 2007.
Contact
Session organiser:
Gerrit Bloothooft
UiL-OTS, Utrecht University, The Netherlands
Webpage > Special Sessions > Synthesis
of Singing Challenge
CALL FOR PAPERS:
NATURAL LANGUAGE PROCESSING AND KNOWLEDGE REPRESENTATION
FOR eLEARNING ENVIRONMENTS
Borovets, Bulgaria
September 26, 2007
Workshop site
RANLP'2007 site
AIMS
Several initiatives have been launched in the
area of Computational Linguistics,
Language Resources and Knowledge Representation
both at the national and international level,
aiming at the development of resources and
tools. Unfortunately, there are few initiatives that integrate these results
within eLearning. The situation is slightly better with respect to the results
achieved within Knowledge Representation, since ontologies are being
developed which describe not only the content of the learning material but,
crucially, also its context and structure.
Furthermore, knowledge representation techniques
and natural language processing play an important
role in improving the adaptivity of learning environments, even though they
are not yet fully exploited.
On the other hand, eLearning environments constitute valuable scenarios
to demonstrate the maturity of computational linguistic methods as well
as of natural language technologies and tools.
This kind of task-based evaluation
of resources, methods and tools is a crucial issue for the further
development of language and information technology.
The goal of this workshop is to discuss:
* the use of language and knowledge resources and tools in eLearning;
* requirements on natural language resources, standards, and applications
originating in eLearning activities and environments;
* the expected added value of natural language resources and technology
to learning environments and the learning process;
* strategies and methods for the task based evaluation of Natural Language
Processing applications.
The workshop will bring together computational linguists, language resources
developers, knowledge engineers, researchers involved in technology-enhanced
learning, as well as developers of eLearning material, ePublishers and
eLearning practitioners. It will provide a forum for interaction among members of
different research communities, and a means for attendees to increase their
knowledge and understanding of the potential of computational resources in
eLearning.
TOPICS
Topics of interest include, but are not limited to:
* ontology modelling in the eLearning domain;
* Natural Language Processing techniques for supplying metadata for
learning objects on a (semi)-automatic basis, e.g. for the automatic
extraction of key terms and their definitions;
* techniques for summarization of discussion threads and support of
discourse coherence in eLearning;
* improvements of (semantic, cross-lingual) search methods in learning
environments;
* techniques for matching the semantic representation of learning objects
with the user's knowledge in order to support personalized and adaptive
learning;
* adaptive information filtering and retrieval (content-based filtering and
retrieval, collaborative filtering)
* intelligent tutoring (curriculum sequencing, intelligent solution analysis,
problem solving support)
* intelligent collaborative learning (adaptive group formation and peer
help, adaptive collaboration)
SUBMISSION INSTRUCTIONS
Submissions by young researchers are especially welcome.
* Format. Authors are invited to submit full
papers on original, unpublished work in the topic
area of this workshop. Papers should be submitted
as a PDF file, formatted according to the RANLP
2007 stylefiles and not exceeding 8 pages. The
RANLP 2007 stylefiles are available at:
http://lml.bas.bg/ranlp2007/submissions.htm
* Demos. Submissions of demos are also welcome.
Papers submitted as demos should not exceed 4
pages and should describe in detail the system
to be presented.
* Submission procedure. Submission of papers will be handled using
the START system, through the RANLP Conference.
Specific submission guidelines will be posted on
the workshop site shortly.
* Reviewing. Each submission will be reviewed at least by two members
of the Program Committee.
* Accepted papers policy. Accepted papers will be published in the
workshop proceedings. By submitting a paper at the workshop the
authors agree that, in case the paper is accepted for publication, at
least one of the authors will attend the workshop; all workshop
participants are expected to pay the RANLP-2007 workshop registration
fee.
IMPORTANT DATES
Paper submission deadline: June 15, 2007
Paper acceptance notification: July 25, 2007
Camera-ready papers due: August 31, 2007
Workshop date: September 26, 2007
KEYNOTE SPEAKERS
Keynote speakers will be announced shortly before the workshop.
ORGANIZING COMMITTEE
Paola Monachesi
University of Utrecht, The Netherlands
Lothar Lemnitzer
University of Tübingen, Germany
Cristina Vertan
University of Hamburg, Germany
CONTACT
Dr. Cristina Vertan
Natural Language Systems Division
Computer Science Department
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg GERMANY
Tel. 040 428 83 2519
Fax. 040 428 83 2515
http://nats-www.informatik.uni-hamburg.de/~cri
CFP-
International Conference:
"Where Do
Come From ?
Phonological Primitives in the Brain, the Mouth, and the Ear"
Universite Paris-Sorbonne (1, rue Victor Cousin 75230 Paris cedex)
Website
Deadline: May 6, 2007!
Speech sounds are made up of atomic units termed "distinctive features",
"phonological features" or "phonetic features", according to the researcher.
These units, which have achieved notable success in the domain of phonological
description, may also be central to the cognitive encoding of speech, which
allows the variability of the acoustic signal to be related to a small number
of categories relevant for the production and perception of spoken languages.
Despite the fundamental role that features play in current linguistics,
research continues to raise many basic questions concerning their cognitive
status, their role in speech production and perception, the relation they have
to measurable physical properties in the articulatory and acoustic/auditory
domains, and their role in first and second language acquisition. The
conference will bring together researchers working in these and related areas
in order to explore how features originate and how they are cognitively
organized and phonetically implemented. The aim is to assess the progress made
and future directions to take in this interdisciplinary enterprise, and to
provide researchers and graduate students from diverse backgrounds with a
stimulating forum for discussion.
How to submit
Authors are invited to submit an anonymous two-page abstract (in English or
French) by April 30, 2007 to Rachid Ridouane, accompanied by a separate page
stating name(s) of author(s), contact information, and a preference for oral
paper vs. poster presentation. Contributions presenting new experimental
results are particularly welcome. Notification e-mails will be sent out by
June 15, 2007. Publication of selected papers is envisaged.
Topics
Conference topics include, but are not limited to:
Phonetic correlates of distinctive features
Acoustic-articulatory modeling of features
Quantal definitions of distinctive features
Role of subglottal and/or side-cavity resonances in defining feature boundaries
Auditory/acoustic cues to acoustic feature correlates
Visual cues to distinctive features
Within- and across-language variability in feature realization
Enhancement of weak feature contrasts
Phonological features and speech motor commands
Features and the mental lexicon
Neurological representation of features
Features in early and later language acquisition
Features in the perception and acquisition of non-native languages
Features in speech disorders
The two-day conference (October 4-5, 2007) will consist of four invited talks,
four half-day sessions of oral presentations (30 minutes including
discussion), and one or two poster sessions.
Important dates
April 30, 2007 Submission deadline
June 15, 2007 Acceptance notification date
October 4-5, 2007 Conference dates
Organizers
Rachid Ridouane (Laboratory of Phonetics and Phonology, Paris)
Nick Clements (Laboratory of Phonetics and Phonology, Paris)
Contact
Rachid Ridouane
This conference is funded by the French Ministry for Research under the
programme "Action Concertée Incitative PROSODIE" (a support programme for
innovation and excellence in the humanities and social sciences).
CFP - 3rd Language and Technology Conference (LTC2007):
Human Language Technologies as a Challenge for Computer Science and Linguistics
October 5-7, 2007, Faculty of Mathematics and Computer Science of the
Adam Mickiewicz University, Poznan, Poland
CONFERENCE TOPICS
The conference program will include the following topics:
* electronic language resources and tools
* formalisation of natural languages
* parsing and other forms of NL processing
* computer modelling of language competence
* NL user modelling
* NL understanding by computers
* knowledge representation
* man-machine NL interfaces
* Logic Programming in Natural Language Processing
* speech processing
* NL applications in robotics
* text-based information retrieval and extraction, question answering
* tools and methodologies for developing multilingual systems
* translation enhancement tools
* methodological issues in HLT
* prototype presentations
* intractable language-specific problems in HLT (for languages other than English)
* HLT standards
* HLT as foreign language teaching support
* new challenge: communicative intelligence
* vision papers in the field of HLT
* HLT-related policies
This list is not closed and we are open to further proposals. The Program
Committee is also open to suggestions concerning accompanying events
(workshops, exhibits, panels, etc.). Suggestions, ideas and observations may
be addressed directly to the LTC Chair.
IMPORTANT DATES
Deadline for submission of papers for review - May 20, 2007
Acceptance/Rejection notification - June 15, 2007
Submission of final versions of accepted papers - July 15, 2007
FURTHER INFORMATION
Further details will be available soon. The call for papers will be
distributed by mail and published on the conference site. The site currently
contains information about LTC’05, including freely-downloadable abstracts
of the papers presented.
Zygmunt Vetulani, LTC’07 Chair
PRELIMINARY CFP -
2007 IEEE International Conference on
Signal Processing and Communications (ICSPC 2007)
24–27 November 2007
Dubai, United Arab Emirates
The IEEE International Conference on Signal Processing and Communications (ICSPC 2007)
will be held in Dubai, United Arab Emirates (UAE) on 24–27 November 2007. The ICSPC will
be a forum for scientists, engineers, and practitioners throughout the Middle East region and
the World to present their latest research results, ideas, developments, and applications in all
areas of signal processing and communications. It aims to strengthen relations between
industry, research laboratories and universities. ICSPC 2007 is organized by the IEEE UAE
Signal Processing and Communications Joint Societies Chapter. The conference will include
keynote addresses, tutorials, exhibitions, special, regular and poster sessions. All papers will
be peer reviewed. Accepted papers will be published in the conference proceedings and will
be included in IEEE Xplore. Acceptance will be based on quality, relevance and originality.
SCOPE
Topics will include, but are not limited to, the following:
• Digital Signal Processing
• Analog and Mixed Signal Processing
• Audio/Speech Processing and Coding
• Image/Video Processing and Coding
• Watermarking and Information Hiding
• Multimedia Communication
• Signal Processing for Communication
• Communication and Broadband Networks
• Mobile and Wireless Communication
• Optical Communication
• Modulation and Channel Coding
• Computer Networks
• Computational Methods and Optimization
• Neural Systems
• Control Systems
• Cryptography and Security Systems
• Parallel and Distributed Systems
• Industrial and Biomedical Applications
• Signal Processing and Communications Education
SUBMISSION
Prospective authors are invited to submit full-length (4-page) paper proposals
for review. Proposals for tutorials, special sessions, and exhibitions are
also welcome. The submission procedures can be found on the conference web
site. All submissions must be made on-line and must follow the guidelines
given on the web site.
ICSPC 2007 Conference Secretariat,
P. O. Box: 573, Sharjah, United Arab Emirates (U.A.E.),
Fax: +971 6 5611789
ORGANIZERS
Honorary Chair
Arif Al-Hammadi,
Etisalat University College, UAE
General Chair
Mohammed Al-Mualla
Etisalat University College, UAE
IMPORTANT DATES
Submission of proposals for tutorials,
special sessions, and exhibitions March 5th, 2007
Submission of full-paper proposals April 2nd, 2007
Notification of acceptance June 4th, 2007
Submission of final version of paper October 1st, 2007
5th International Workshop
on
Models and Analysis of Vocal Emissions for Biomedical Applications
MAVEBA 2007
December 13 - 15, 2007
Conference Hall - Ente Cassa di Risparmio di Firenze
Via F. Portinari 5r, Firenze, Italy
DEADLINES:
30 May 2007 - Submission of extended abstracts (1-2 pages, 1 column) and
special session proposals
30 July 2007 - Notification of paper acceptance
30 September 2007 - Final full paper submission (4 pages, 2 columns, PDF
format) and early registration
13-15 December 2007 - Conference dates
CONTACT:
Dr. Claudia Manfredi - Conference Chair
Dept. of Electronics and Telecommunications
Università degli Studi di Firenze
Via S. Marta 3
50139 Firenze, Italy
Phone: +39-055-4796410
Fax: +39-055-494569