Dear ISCA members,
Interspeech-ICSLP 2004 in Jeju (Korea) was an excellent conference and an excellent opportunity for many of us to discover marvelous South Korea. We also had a very interesting General Assembly, where one question was raised: how can we improve the current organization of Interspeech events? A message from Julia Hirschberg, Vice-President of ISCA, invites you to fill out a questionnaire that could bring to light some new organizational concepts. Members are directly concerned by the way our conferences will evolve: please be active members!
Please do not forget to send the information you want to display for members in time to be included in ISCApad (last week of each month).
TABLE OF CONTENTS
- ISCA News
- Courses, internships, databases, software
- Job openings
- Journals and Books
- Future Interspeech Conferences
- Future ISCA Tutorial and Research Workshops (ITRW)
- Forthcoming Events supported (but not organized) by ISCA
- Future Speech Science and technology events
Note for new members registered at INTERSPEECH - ICSLP 2004: The ISCA Secretariat apologises for the delay in sending you your membership details; it is still waiting to receive the final and complete database from the conference organisers, which is expected within the next week.
The ISCA Board would like to ask you to fill out the survey found at
http://www.cs.columbia.edu/~julia/julias_survey.html, to find out what
you prefer in INTERSPEECH conferences. The survey contains general
questions about conference preferences and attendance, plus more
detailed questions about the recent ICSLP 2004 conference for those
who attended. We would welcome any additional comments or questions
that may not appear on the survey, to improve what we hope will be a
regular polling of our members' opinions. If you have any problems
with the survey, please send email to me.
Thanks for your help!
Julia Hirschberg
Vice President and International Conference Liaison
Organisation of INTERSPEECH 2008 ICSLP
CALL FOR PROPOSALS
Individuals or organisations interested in organizing INTERSPEECH 2008-ICSLP should submit a brief preliminary proposal by 15 December 2004, including:
* The name and position of the proposed general chair and other principal organizers.
* The proposed period in September 2008 when the conference would be held
* The institution assuming financial responsibility for the conference and any other sponsoring institutions
* The city and conference center proposed (with information on that center's capacity)
* Information on transportation and housing for conference participants
* Likely support from local bodies (e.g. governmental)
* The commercial conference organizer (if any)
* A preliminary budget
Guidelines for the preparation of the proposal are available at
http://www.isca-speech.org/guidelinesEurospeech.html . Additional
information can be provided by Julia Hirschberg.
Proposals should be submitted by email to the above address.
Candidates fulfilling basic requirements will be asked to submit a
detailed proposal by 28 February 2005.
Professor Wolfgang Hess has started an important archiving process of all publications
of ISCA including ICSLP, Eurospeech and ITRW proceedings. Access to full papers is
restricted to ISCA members on our website http://www.isca-speech.org/archive.html.
Recently ICSLP 2002 and Eurospeech 2003 have been added to the collection.
To access the ISCA archive online, use
username: Isca-Archive and password: 1ibdzgaz2
- New development on membership services:
It is now possible to apply for ISCA membership and renew it online.
Members benefit from a discounted rate on Speech Communication subscriptions.
The online subscription gives members access not only to the current year's
Speech Communication volumes but also to the Speech Communication archive dating back to 1995.
If you are interested in subscribing either to the paper version alone or to
the paper version + online access, please indicate this on the renewal form
(http://www.isca-speech.org/index_VIP.html) and it will be billed directly by Elsevier.
Individual (FULL member and STUDENT), paper version only: 85 EUR
Individual (FULL member and STUDENT), paper version + online access*: 95 EUR
Institutional member, paper version only: 600 EUR
- A full list of members (including membership numbers and
subscription expiry dates) is available online at: http://www.isca-speech.org/member_list.html
-ISCApad now publishes a list of papers accepted for publication in Speech Communication.
These papers can also be viewed on the ScienceDirect website (http://www.sciencedirect.com)
if your institution subscribes to Speech Communication.
-ISCA grants are available for students attending meetings. Even if no information
on the grants appears in the conference announcement, students may apply.
For more information:
COURSES, DATABASES, SOFTWARE
-Information on ongoing theses could be very useful for thesis supervisors and
researchers, as well as for PhD students. A list of speech theses is available under the section HLTheses at
(also have a look at http://www.isca-speech.org/jobs
as well as http://www.elsnet.org > Jobs)
RESEARCH AND DEVELOPMENT OPPORTUNITIES AT THE LORIA LABORATORY
Position at the LORIA laboratory, Speech group.
Job Title: Research Engineer
Job Function :Research and development
Job Type : Fixed Term (11 months)
Closing date for applications: 31 December 2004
Further Information : http://www.loria.fr/equipes/parole
Skills: UNIX programming, C, Shell
Required level: PhD in Computer Science with a specialty in Speech Recognition, or engineer level with at least 2 years of experience in Speech Recognition
A research engineer is needed at the LORIA laboratory to work on the
HIWIRE project. The applicant will develop, run and analyze
experiments in automatic speech recognition. Applicants should hold a PhD
in Computer Science with a specialty in Speech Recognition, or have an
engineering degree with at least 2 years of experience in Speech Recognition,
and should have a good grasp of statistics and speech processing.
The ideal candidate will have experience in noise
robustness and speaker adaptation. UNIX programming skills are highly
desirable (C, C++, Java, Shell).
HIWIRE (Human Input That Works In Real Environments) is a
European project concerned with vocal technologies to
support human-machine interaction, in the context of aircraft cockpits.
The goals of the project are:
- to enable vocal dialogues with equipment in commercial aircraft;
- to improve the potential for vocal interaction with PDAs and other
mobile devices in aeronautic application environments.
The project will focus on two main targets:
- improved robustness of the speech recognition system against noise;
- improved tolerance to user behavior.
This appointment is for a fixed term of up to 11 months, starting
in January, 2005.
How to apply: send your motivation letter, your CV and academic references
to the contact persons:
Irina Illina or
LORIA-CNRS & INRIA Lorraine
THE UNIVERSITY OF SHEFFIELD
Department of Computer Science
RESEARCH OPPORTUNITY in AUDIO-VISUAL SPEECH RECOGNITION
Applications are invited for a postdoctoral research position in the
Speech and Hearing research group at Sheffield. The post is available
from 1 November 2004, or soon thereafter, for a period of 2 years and 3
months. The appointment will be on the UK RA1A scale, according to qualifications and experience.
The successful applicant will work on an EPSRC-supported project
concerning the development of novel techniques for exploiting visual
speech information (e.g. lip and face movements) in the design of
automatic speech recognition (ASR) systems. The project will consist of
two major components. Firstly, a new audio-visual speech corpus will be
recorded. The corpus will be designed for testing ASR performance in
highly non-stationary noise backgrounds. The aim will be to use this
corpus within the project and to make it available to other researchers
involved in audio-visual speech research. The second component of the
project involves extending novel ASR techniques, developed previously at
Sheffield, to allow them to be applied in the audio-visual domain.
Candidates should have either an MSc or PhD in Computer Science or a
related discipline. A background in speech processing research is
desirable. Duties would involve managing the AV speech corpus recording
and preparation, and aiding in the development and evaluation of
audio-visual ASR systems.
Informal enquiries may be made to Dr Jon Barker.
Full Post Details:
Job Reference No: R3469
Closing date: 11th November 2004
NEW JOB OPPORTUNITY AT ELRA/ELDA
ELRA/ELDA offers a position in its Language Resources department.
The successful candidate will be in charge of managing activities related to
the identification of language resources and the negotiation of rights
concerning their distribution.
The position includes, but is not limited to, responsibility for the following tasks:
- Identification of language resources,
- Implementation of a «universal catalogue» aiming at collecting information on existing language resources,
- Negotiation of distribution rights and definition of prices of language
resources to be integrated in the ELRA/ELDA catalogue.
Required profile:
- Knowledge in computational linguistics, information science, knowledge
management or similar fields,
- Contact and communication skills,
- Ability to work independently and as part of a team,
- Fluent French and English required,
Experience in project management (especially European projects), as well
as practice in contract and partnership negotiation at an international
level, would be a plus.
Applications will be considered until the position is filled; however, a
decision will be made by the end of year 2004/very beginning of 2005.
The position is based in Paris and candidates should have the citizenship
(or residency papers) of a European Union country.
Salary: Commensurate with qualifications and experience.
Applicants should email, fax, or post a cover letter addressing the points
listed above together with a curriculum vitae to:
ELRA / ELDA
55-57, rue Brillat-Savarin
Fax: 01 43 13 33 30
For further information about ELDA/ELRA, visit
Speech Recognition Programmer & Scientist
at the ESAT/PSI SPEECH GROUP
Katholieke Universiteit Leuven, Belgium
A position is available for a speech recognition programmer and
scientist within the ESAT Speech Group.
Focus of the work will be on further implementation of our speech
software architecture and further optimization of the existing system.
The work is situated on the edge of research and implementation.
The candidate will also become responsible for supporting novice users
of the software package.
The position is currently open and is initially available until 30 SEP.
Candidates should have a degree in electrical engineering or computer
science. Given the type of work, candidates should have programming
experience on a UNIX or Windows platform using a higher-level language
such as C/C++/Java.
Good communication skills will be an asset as well.
Previous experience in speech recognition is not required, but would be an asset.
The work will be carried out within the framework of several ongoing
projects, of which the main one is FLAVOR. More details about the ESAT
speech group and the FLAVOR project in particular can be found at
Interested applicants should send their CV to
Prof. Dirk VAN COMPERNOLLE
Kasteelpark Arenberg 10
POST DOC or RESEARCH ENGINEER POSITION at Institut Eurecom-Sophia Antipolis-France
Department: Multimedia Communications
Eurecom (http://www.eurecom.fr) is an international teaching and research institute,
founded in 1991 as a joint initiative of Ecole Polytechnique Federale de Lausanne (EPFL)
and Ecole Nationale Superieure des Telecommunications (ENST, Paris).
It welcomes students from several engineering schools and universities: ENST Paris,
ENST Brittany, INT Evry, EPFL, ETHZ (Zurich), Helsinki University of Technology, Politecnico
di Torino... They receive an education in Communications Systems (Networking,
Multimedia, Security, Mobile Communications, Web services...).
Professors, lecturers and PhD students conduct research in these domains.
Speech processing is under the responsibility of Professor Chris Wellekens
in the Dpt Multimedia Communications.
The spoken languages for lectures at the Institute are French and
English. English is the usual language for research exchanges.
Speech research involves speaker identification using speaker clustering
or eigenvoices, phonemic variabilities of lexicons, optimal feature extraction, Bayesian
networks and variational techniques, and navigation in audio databases (speaker segmentation, etc.).
Job description: POST DOC or RESEARCH ENGINEER
The European project DIVINES, a STREP of the 6th Framework Programme, has been
accepted by the Commission and will start in January 2005. Eight labs and
companies are partners:
Multitel (B), Eurecom (F), France Telecom R/D (F), University of
Oldenburg (D), Babeltechnologies (B), Loquendo (I),
Politecnico di Torino (I), LIA (F). A collaboration with McGill University (Montreal)
has also been negotiated.
The aim of the project is to analyse why recognizers are unable to reach
human recognition rates, even in the absence of semantic content. All
weaknesses will be analyzed at the level of feature extraction and of
phone and lexical models. Focus will be put on the intrinsic variabilities
of speech in quiet and noisy environments, as well as in read and
spontaneous speech. The analysis will not be restricted to tests on
several databases with different features and models, but will go into
the detailed behavior of the algorithms and models. New solutions will
be suggested and experimented with. The project will run for 3 years.
The Speech group is looking for a Post-doc or research engineer who acquired a
hands-on practice of speech processing. He/she must have an excellent
practice of signal and speech analysis as well as a good knowledge of
optimal classification using Bayesian criteria. He/she must be
open-minded to original solutions proposed after a rigorous analysis of
the low level phenomena in speech processing. Fluency in
English is mandatory (written, spoken and understood). He/she should be able to
represent Eurecom at the periodic project meetings. The ability to
work in a small team is also required.
- send a detailed resume (give details on your activity since your PhD),
- send a copy of your thesis report (either as a printed document or on a
CDROM; DO NOT attach your thesis to an e-mail!),
- send a copy of your diploma,
- send the names and email addresses of two referees,
- send the list of your publications (you must have several)
to Professor Chris J. Wellekens, Dpt of Multimedia Communications, 2229
route des Cretes, BP 193, F-06904 Sophia Antipolis Cedex, France.
Contact Professor Chris Wellekens
POST-DOCTORAL POSITION IN ASR
Application Deadline: 30th November 2004
Start date (latest): 1st March 2005
Applications are invited for a Post-Doctoral position to be held for 9
months at the Technical University of Crete, Chania, Greece, followed by 9
months at LORIA, Nancy, France.
The successful applicant should have a PhD in the area of Computer
Science, Statistics, Engineering, Mathematics or Physics. The candidate
should have a strong speech/signal processing background with an emphasis on
speech recognition. Good knowledge of statistical modeling and front-end
techniques for robust speech recognition is a plus. Strong software
skills are important (C/C++, script languages).
The project involves the development of novel feature extraction
algorithms and statistical models for automatic speech recognition. The
collaboration is part of the MUSCLE Network of Excellence EU Project
(www.muscle-noe.org) and extends over 18 months. The parties involved are:
- The "Speech and Dialogue Group" at the Technical University of Crete,
- The "Computer Vision, Speech and Signal Processing Group" at the
National Technical University of Athens, Greece (http://cvsp.cs.ntua.gr)
-The "Speech Group" at INRIA-LORIA of Nancy, France
Note that for the first 9 months the candidate will be based at TUC
Chania (or NTUA Athens), Greece, and for the last 9 months the
candidate will work at INRIA-LORIA, Nancy, France.
The stipend is 29660 Euros/year tax free with social security paid by
MUSCLE. More information about MUSCLE Fellowships can be found at:
Please send a CV and the names of 3 referees to
Alex Potamianos and Khalid Daoudi
by Oct 30th 2004.
For further information, interested candidates can contact any of the
* Alex Potamianos
* Khalid Daoudi
* Petros Maragos
* Vasilis Digalakis
RESEARCH OPPORTUNITIES IN THE AMI TRAINING PROGRAMME
AMI (Augmented Multiparty Interaction) is an integrated project funded by
the EC Framework 6 programme from January 2004 for 3 years.
AMI is concerned with multimodal technologies to support human interaction,
in the context of smart meeting rooms and remote meeting assistants. The project
aims to develop new tools for understanding, searching and browsing meetings
data captured from a wide range of devices, as part of an integrated multimodal
group communication. AMI will thus address a range of multidisciplinary research
areas, including natural speech recognition, speaker tracking and segmentation, visual
shape tracking, gesture recognition, multimodal dialogue modelling, meeting
dynamics, summarisation, browsing and retrieval.
AMI supports a training programme whose objective is to provide opportunities
for undergraduates, masters students, Ph.D. students and postdoctoral researchers
to take part in AMI.
* The training programme funds internships and exchanges.
* Visits typically occupy at least 3 months for undergraduates and masters students
and at least 6 months for Ph.D. students and postdoctoral researchers.
* Funding covers travel and living expenses, but not salary. Living expenses
will typically be 1250 Euro/month.
* The programme is open to all, but priority is given to researchers who are
members of AMI teams, researchers who intend to visit AMI teams, researchers
who can demonstrate close connections with AMI research, and proposals with an industrial dimension.
* A specific programme funds visits of 6 months or more to the International
Computer Science Institute, Berkeley, CA.
In this case typical living expenses are 2000 Euro/month. For Ph.D. students
and postdoctoral researchers, visits to ICSI will typically be at least 6 months.
Senior scientists are also encouraged to apply, in which case proposals for
shorter visits will also be entertained.
HOSTING SITES
AMI's 15 partners and associated companies and institutions (details
on http://www.amiproject.org) will act as hosts for
the training programme. The project is jointly managed by IDIAP (CH) and The
University of Edinburgh (UK). The training programme is managed by the University
of Sheffield (UK).
HOW TO APPLY
The application form can be downloaded from http://www.dcs.shef.ac.uk/~linda/AMI/training.htm.
You will need the written support of your home institution and the host institution.
You will also need an academic reference.
Enquiries may be addressed to
Linda Perna, AMI training programme administrator.
WHEN TO APPLY
You can apply at any time, but applications will be considered
on a quarterly basis, with deadlines of 15th September
and 15th December.
Professor Phil Green
AMI Training Manager
Department of Computer Science
University of Sheffield
Regent Court 211 Portobello St., Sheffield S1 4DP UK phone: (44) 114 22 21828
fax: (44) 114 22 21810
Contact person: Phil Green
Submission deadline: December 1st, 2004 (early submission is encouraged)
SPEECH CODING POSTDOCTORAL FELLOWSHIP OPENING AT ICSI
The International Computer Science Institute (ICSI) invites
applications for a postdoctoral Fellow position in speech
processing. The Fellow will be working with Nelson Morgan, along with
international colleagues, in the area of medium bit-rate speech
coding. Some experience with modern speech coding approaches
(particularly CELP-related) is required, along with strong
capabilities in signal processing in general.
ICSI is an independent not-for-profit Institute located a few blocks
from the Berkeley campus of the University of California. It is
closely affiliated with the University, and particularly with the
Electrical Engineering and Computer Science (EECS) Department. See
http://www.icsi.berkeley.edu to learn more about ICSI.
The ICSI Speech Group (including its predecessor, the ICSI Realization
Group) has been a source of novel approaches to speech processing
since 1988. It is primarily known for its work in speech recognition,
although it has housed major projects in speaker recognition and
metadata extraction in the last few years. The new effort in speech
coding will draw upon lessons learned in our feature extraction work
for these classification-oriented topics.
Applications should include a cover letter, vita, and the names of at
least 3 references (with both postal and email addresses). Applications
should be sent by email to Nelson Morgan
and by postal mail to
Director (re Speech Postdoctoral Search)
1947 Center Street
Berkeley, CA 94704
SPEECH SYNTHESIS at CSTR
The Centre for Speech Technology Research at the University of Edinburgh is seeking a research fellow to work on the leading text-to-speech
research toolkit, Festival (http://www.cstr.ed.ac.uk/projects/festival), through the ongoing project "Expressive Prosody for Unit-selection Speech
Synthesis". The project aims to add explicit control of prosody to
unit-selection speech synthesis, to generate prosody appropriate for
communicating specific meanings and information structures, and to
realise this prosody with sequences of appropriately sized pitch accents arranged into valid intonation tunes. The project is jointly led by Simon King, Mark Steedman and Rob Clark (Edinburgh) and Dan
Jurafsky (Stanford, USA).
Full details can be found at http://www.cstr.ed.ac.uk/opportunities
JOURNALS and BOOKS
Call for Papers
Speech Communication Journal
Special issue on
Robustness Issues in Conversational Interaction
Following the ISCA Tutorial and Research Workshop (ITRW) on Robustness Issues in Conversational Interaction (Robust2004), held at the University of East Anglia in August 2004, a special issue of the Speech Communication Journal is planned on the same theme of robustness. This special issue will focus on methods of developing robustness against effects that are known to degrade the performance of components within conversational interaction systems. Degradation can arise from many different sources (acoustic noise, packet loss, speaker variability, etc.), and compensation may come from a variety of techniques: signal processing, model adaptation, confidence measures, dialogue strategies and the inclusion of additional modalities. In particular, the special issue will focus on the following areas:
*Robustness against environmental noise
-Filtering and transformations
*Robustness against unreliable transmission channels
-Distributed approaches to ASR
-Error concealment – reconstruction or adaptation
*Robust conversational system design
-User modelling and adaptation
*Non-speech modalities to improve robustness
-Modality fusion and synchronisation
*Robustness to speaker variability
-Dialects and non-native speakers
Submission of papers is open to both participants of Robust2004 (through submission of an extended workshop paper) and non-participants alike.
Guest editors:
Dr. Ben Milner, University of East Anglia, UK
Prof. Borge Lindberg, Aalborg University, Denmark
Prof. Christian Wellekens, EURECOM, France
Submission deadline: 31st March 2005
Notification of acceptance: 31st May 2005
Tentative publication: 1st September 2005
Prospective authors should follow the regular guidelines of the Speech Communication Journal for electronic submission (http://ees.elsevier.com/specom ). During submission authors must select the Section as “Special Issue Paper”, not “Regular Paper”, and the title of the special issue should be referenced in the “Comments” page along with any other information.
-Papers accepted for future publication in Speech Communication
Full text available on http://www.sciencedirect.com
for Speech Communication subscribers and subscribing institutions.
Click on Publications, then on Speech Communication and on Articles in press.
The list of papers in press is displayed and a .pdf file for each paper is available.
Taisuke Ito, Kazuya Takeda and Fumitada Itakura, Analysis and recognition of
whispered speech, Speech Communication, In Press, Corrected Proof, Available
online 23 September 2004
Jean Vroomen, Sabine van Linden, Mirjam Keetels, Béatrice de Gelder and Paul Bertelson,
Selective adaptation and recalibration of auditory speech by lipread information:
dissipation, Speech Communication, In Press, Corrected Proof, Available online
23 September 2004
J.P. Barker, M.P. Cooke and D.P.W. Ellis, Decoding speech in the presence of other
sources, Speech Communication, In Press, Corrected Proof, Available online
22 September 2004
Akiko Kusumoto, Takayuki Arai, Keisuke Kinoshita, Nao Hodoshima and Nancy Vaughan,
Modulation enhancement of speech by a pre-processing algorithm for improving
intelligibility in reverberant environments, Speech Communication, In Press,
Uncorrected Proof, Available online 23 July 2004
Mark M.J. Houben, Armin Kohlrausch and Dik J. Hermes, Perception of the size and speed of rolling balls by sound,
Speech Communication, In Press, Corrected Proof, Available online 20 August 2004, .
Kalle J. Palomäki, Guy J. Brown and DeLiang Wang, A binaural processor for missing data speech recognition
in the presence of noise and small-room reverberation, Speech Communication, In Press, Corrected Proof,
Available online 20 August 2004,
Kuldip K. Paliwal and Leigh D. Alsteris, On the usefulness of STFT phase spectrum in human listening tests, Speech Communication, In Press, Corrected Proof, Available online 2 November 2004,
Ken W. Grant, Virginie van Wassenhove and David Poeppel, Detection of auditory (cross-spectral) and auditory-visual (cross-modal) synchrony, Speech Communication, In Press, Uncorrected Proof, Available online 30 October 2004,
B.J. Theobald, J.A. Bangham, I.A. Matthews and G.C. Cawley, Near-videorealistic synthetic talking faces: implementation and evaluation, Speech Communication, In Press, Uncorrected Proof, Available online 30 October 2004,
Gokhan Tur, Dilek Hakkani-Tür and Robert E. Schapire, Combining active and semi-supervised learning for spoken language understanding, Speech Communication, In Press, Uncorrected Proof, Available online 30 October 2004,
Hae Kyung Jung, Nam Soo Kim and Taejeong Kim, A new double-talk detector using echo path estimation, Speech Communication, In Press, Uncorrected Proof, Available online 27 October 2004,
Gang Peng and William S.-Y. Wang, Tone recognition of continuous Cantonese speech based on support vector machines, Speech Communication, In Press, Uncorrected Proof, Available online 27 October 2004,
M. Nordstrand, G. Svanfeldt, B. Granström and D. House, Measurements of articulatory variation in expressive speech for a set of Swedish vowels, Speech Communication, In Press, Uncorrected Proof, Available online 27 October 2004,
Soundararajan Srinivasan and DeLiang Wang, A schema-based model for phonemic restoration, Speech Communication, In Press, Uncorrected Proof, Available online 20 October 2004,
Mark A. Pitt, Keith Johnson, Elizabeth Hume, Scott Kiesling and William Raymond, The Buckeye corpus of conversational speech: labeling conventions and a test of transcriber reliability, Speech Communication, In Press, Uncorrected Proof, Available online 20 October 2004,
Publication policy: hereunder you will find very short announcements of future
events. The full calls for participation can be accessed on the conference websites.
See also our Web pages (http://www.isca-speech.org)
on conferences and workshops.
FUTURE INTERSPEECH CONFERENCES
-Interspeech (Eurospeech)-2005, Lisbon, Portugal, September 4-8, 2005
Chair: Isabel Trancoso, INESC ID Lisboa
-Interspeech (ICSLP)-2006, Pittsburgh, PA, USA
Chair: Richard M. Stern, Carnegie Mellon University, USA
-Interspeech (Eurospeech)-2007, Antwerp, Belgium, August 27-31, 2007
Chairs: Dirk van Compernolle, K.U.Leuven, and Lou Boves, K.U.Nijmegen
FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOPS (ITRW)
- NOLISP'05: Non-linear speech processing
April 19-22, 2005, Barcelona, Spain
Organized by COST 277. Contact person:
Marcos Faundez-Zanuy (see ISCApad 66)
-ISCA Workshop on Plasticity in Speech Perception
Organized by: UCL Centre for Human Communication, UCL, London, UK
co-sponsored by the Acoustical Society of America
15-17 June 2005; London, UK
Anne Cutler, Max Planck Institute, Netherlands
James Flege, University of Alabama at Birmingham, USA
Patricia Kuhl, University of Washington, USA
David Moore, MRC Institute of Hearing Research, UK
Christophe Pallier, Inserm Cognitive Neuroimaging Unit, France
David Pisoni, Indiana University, USA
Franck Ramus, CNRS Cognitive and Psycholinguistic Sciences Laboratory, France
Stuart Rosen, UCL, UK
Jenny Saffran, University of Wisconsin - Madison, USA
Glenn Schellenberg, University of Toronto Mississauga, Canada
Sophie Scott, UCL, UK
Contact: Valerie Hazan
FORTHCOMING EVENTS SUPPORTED (but not organized) by ISCA
- International Workshop on Spoken Language Translation (evaluation campaign on
spoken language translation)
A satellite event of Interspeech-ICSLP 2004, September 30 - October 1, 2004,
Kyoto, Japan (see ISCApad 71)
-4th International Symposium on Chinese Spoken Language Processing (ISCSLP'04)
December 16-18, 2004, Hong Kong, China
-Pan European Voice Conference (PEVOC 6)
August 31 - September 3, 2005, London, UK
FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS
-Workshop MIDL 2004, Language and dialectal variety identification by humans and machines
organised by the MIDL consortium of the
Modelling for the Identification of Languages project supported by the
interdisciplinary STIC-SHS program of CNRS.
Partners are LIMSI-CNRS,
ILPGA/LPP Paris3, TELECOM PARIS (ENST) and DGA, with the support of AFCP.
Place and date: Paris, 29-30 November 2004
Tenth Australian International Conference on
Speech Science & Technology
Macquarie University, Sydney, 8th-10th December, 2004
For details: http://www.assta.org/sst/2004
Steve Cassidy, Conference Chair
-IEICE/IPS/ATR workshop on statistical modeling approach
for speech recognition
Kyoto, Japan, December 20, 2004
-Conference on A Century of Experimental Phonetics: Its History and Development
from Theodore Rosset to John Ohala
Universite Stendhal, Grenoble, France on February 24-25, 2005.
Conference room Jacques Cartier, Maison des Langues et de la Culture.
Contributions of 20 minutes or posters are welcome.
Send a 200-word abstract to
100ans at icp.inpg.fr
Philadelphia, PA, USA, March 19-23, 2005
-The 15th Nordic Conference of Computational Linguistics
Joensuu, Finland, May 20-21, 2005
A 1000-word abstract should be mailed before January 31, 2005 to the contact below.
Registration before March 31, 2005
Contact: Stefan Werner
-Deaf and Hearing Impaired Children Europe 2005
London, 13-14 May 2005
Promoting joined-up working for Europe's professionals with deaf children and their families.
The meeting is designed to bring together the wide range of professionals and
voluntary organisations throughout Europe with an interest in childhood
hearing impairment. There is a growing awareness of the need to work collaboratively
across organisations and professional boundaries and with the users themselves if the
goal of delivering high quality hearing services to all of Europe’s children who need
them is to be achieved. This meeting will provide exciting opportunities to explore
these challenges. It will also raise the profile of hearing impairment and an awareness
of the needs of our children across Europe.
The meeting will be held under the auspices of NDCS (National Deaf Children’s Society)
and RNID (Royal National Institute for Deaf People) and will be organised by The Ear
Foundation. The meeting will be held in central London on 13/14 May 2005.
Registration form and further details from:
European Conference on Circuit Theory and Design (ECCTD)
University College Cork, Ireland
29 August - 1 September, 2005
"Innovation through Understanding"
Authors are invited to submit a full 4-page paper according to the posted guidelines.
Only electronic submissions will be accepted via the Web at: http://ecctd05.ucc.ie
-The 13th European Conference on Signal Processing (EUSIPCO 2005)
Antalya, Turkey, September 4-8, 2005
The main conference themes are:
- Statistical Signal Processing
- Sensor Array and Multichannel Processing
- Biosignal Processing
- Signal Processing for Communications
- Speech Processing
- Image and Multidimensional Signal Processing
- Multimedia Signal Processing
- Nonlinear Signal Processing
- Audio and Electroacoustics
- DSP Implementations and Embedded Systems
- Rapid Prototyping and Tools for DSP Design
- Industrial Applications of Signal Processing
- Signal Processing Education
- Emerging Technologies in Signal Processing
FOR FURTHER INFORMATION: http://www.eusipco2005.org/