ISCApad number 78

December 10th, 2004

Dear ISCA members,

After INTERSPEECH (ICSLP) 2004, several authors were invited to publish their results (apparently for a "registration" fee) in an on-line journal. As we have noted earlier, this invitation is entirely independent of the ICSLP 2004 organizers and of ISCA. As you all know, from an ethical point of view, a paper cannot be published twice: a paper should bring new theoretical or experimental arguments. That said, we fully support extending INTERSPEECH 2004 papers into full journal papers for publication in well-established and well-recognized refereed journals.

We thank all members who have answered the questionnaire about the organization of our INTERSPEECH conferences. We are always happy to receive your suggestions, also in informal form (please send them to Julia Hirschberg, vice-president).

As you know, it has been the aim of the Board to extend membership services for its student members, and a first step in that direction has been the creation of a Student Advisory Committee. The SAC has just set up the beginnings of a student section on the website. Please check this out and send your comments to the group.

Finally, please note that the login and password for the ISCA archive have just been changed. You will find the login below, in our ISCA News section. The password will be sent to you via a separate email.

The ISCA board wishes you a peaceful end of year and sends its season's greetings, with the hope that 2005 will be a wonderful year for speech processing (we indeed need wonders!).

Do not forget to send the information you want to display to members in time to be included in ISCApad (last week of each month).

Christian Wellekens


  1. ISCA News
  2. Courses, internships, data bases, softwares
  3. Job openings
  4. Journals and Books
  5. Future Interspeech Conferences
  6. Future ISCA Tutorial and Research Workshops (ITRW)
  7. Forthcoming Events supported (but not organized) by ISCA
  8. Future Speech Science and technology events


Organisation of INTERSPEECH 2008 ICSLP
Individuals or organisations interested in organizing INTERSPEECH 2008-ICSLP should submit by 15 December 2004 a brief preliminary proposal, including:
* The name and position of the proposed general chair and other principal organizers.
* The proposed period in September 2008 when the conference would be held
* The institution assuming financial responsibility for the conference and any other sponsoring institutions
* The city and conference center proposed (with information on that center's capacity)
* Information on transportation and housing for conference participants
* Likely support from local bodies (e.g. governmental)
* The commercial conference organizer (if any)
* A preliminary budget
Guidelines for the preparation of the proposal are available on our website. Additional information can be provided by Julia Hirschberg.
Proposals should be submitted by email to the above address. Candidates fulfilling basic requirements will be asked to submit a detailed proposal by 28 February 2005.

ISCA Archives
Our colleague Wolfgang Hess informed us that the archives have recently been enriched with the ICSLP 1998 and Eurospeech 1993 proceedings. We remind you that the abstracts of papers in our archives can be freely accessed by anybody, while access to the full papers is restricted to members. Due to a change of computer equipment, our archives have migrated and a new login has been defined:
login: ISCA_Archive (case sensitive)
The new password will be sent to the members in a separate email.

-New developments on membership services:
It is now possible to apply for ISCA membership and renew it on line. ISCA members benefit from a discounted rate for a Speech Communication subscription. From this year, only a ‘print plus online’ subscription will be available. Members will receive the 2005 volumes in print, and the online access will also enable members to access the Speech Communication archive dating back to 1995. If you are interested in subscribing to Speech Communication, please indicate this on the renewal form and you will be billed directly by Elsevier.
Individual (FULL member and STUDENT), print version + online access: 90 EUR
Institutional Member, ‘print only’ subscription: 624 EUR

- A full list of members (including membership numbers and subscription expiry dates) is available online at:

-ISCApad now publishes a list of papers
accepted for publication in Speech Communication (under the heading Journals,...). These papers can also be viewed on the ScienceDirect website, if your institution has subscribed to Speech Communication.

-ISCA Grants
are available for students attending meetings. Even if no information on the grants is advertised in the conference announcement, students may apply.
For more information:



-Information on on-going theses
could be very useful for thesis supervisors, researchers as well as PhD students. A list of speech theses is available under the section HLTheses at



(also have a look at as well as > Jobs)

Postdocs at DEUTSCHE TELEKOM LABORATORIES, Berlin
Deutsche Telekom, in collaboration with the Technical University of Berlin, is setting up a new corporate research and development center under the name of "Deutsche Telekom Laboratories". Several postdoc positions are open in each of the broad areas of Human Interface, Multimedia, Security, and Networking (please refer to http://www.deutsche-telekom-laboratories.de). Outstanding applicants will be invited to interview and speak at one of three symposia which we will hold in Berlin in January, April and June 2005.

The Department of Computer Science at the UNIVERSITY OF TEXAS at EL PASO
is seeking someone to contribute to our goal of building the spoken dialog systems of tomorrow. Specifically, we are interested in discovering, modeling, and exploiting highly real-time (sub-second) aspects of human language use. The overall aim is to develop systems which demonstrate much more efficient and usable interactions.
Rank: open (assistant, associate or full professor)
Qualifications: a Ph.D. or D.Eng. in Computer Science or a related field
Start Date: August or September 2005
Teaching Load: two courses per semester
Salary: competitive
Deadline: open, but applications received before January 5 may be given priority
Informal enquiries are welcome: please contact Nigel Ward

Computer Science at UT El Paso
Responsive Systems Project
Official Position Announcement

ELRA/ELDA offers a position in its Language Resources department.
The successful candidate will be in charge of managing activities related to the identification of language resources and the negotiation of the rights related to their distribution.
The position includes, but is not limited to, the responsibility of the following tasks:
- Identification of language resources,
- Implementation of a universal catalogue aiming at collecting existing language resources,
- Negotiation of distribution rights and definition of prices for language resources to be integrated into the ELRA/ELDA catalogue.
Candidate profile:
- Knowledge of computational linguistics, information science, knowledge management or similar fields,
- Contact and communication skills,
- Ability to work independently and as part of a team,
- Fluent French and English required,
Experience in project management (especially European projects), as well as practice in contract and partnership negotiation at an international level, would be a plus.
Applications will be considered until the position is filled; however, a final decision will be made by the end of 2004 or the very beginning of 2005.
The position is based in Paris and candidates should have the citizenship (or residency papers) of a European Union country.
Salary: Commensurate with qualifications and experience.
Applicants should email, fax, or post a cover letter addressing the points listed above together with a curriculum vitae to:
55-57, rue Brillat-Savarin
75013 Paris
Fax: 01 43 13 33 30
For further information about ELDA/ELRA, visit

Speech Recognition Programmers & Scientists
Katholieke Universiteit Leuven, Belgium

Several positions are available for speech recognition programmers and scientists within the ESAT Speech Group.
Focus of the work will be on further implementation of our speech recognition software architecture, further optimization of the existing system, and its use in educational or clinical applications. Depending on the project (see below), the work is oriented towards fundamental research, technology deployment or implementation.
Candidates should have a degree in electrical engineering or computer science and programming experience on a UNIX or Windows platform using a high-level language such as C/C++/Java. Good communication skills will be an asset as well. Previous experience in speech recognition is not required, but definitely welcome.
The work will be carried out within the framework of several ongoing and future research projects such as:
- speech modelling for CAL.
More details about the ESAT speech group can be found at our website
Interested applicants should send their CV to Mrs. Annitta De Messemaeker
Kasteelpark Arenberg 10
3001 Heverlee

POST DOC or RESEARCH ENGINEER POSITION at Institut Eurecom-Sophia Antipolis-France
Department: Multimedia Communications
Eurecom ( ) is an international teaching and research institute, founded in 1991 as a joint initiative of the Ecole Polytechnique Federale de Lausanne (EPFL) and the Ecole Nationale Superieure des Telecommunications (ENST-Paris). It welcomes students from several engineering schools and universities: ENST Paris, ENST Brittany, INT Evry, EPFL, ETHZ (Zurich), Helsinki University of Technology, Politecnico di Torino... They receive an education in Communications Systems (Networking, Multimedia, Security, Mobile Communications, Web services...). Professors, lecturers and PhD students conduct research in these domains. Speech processing is under the responsibility of Professor Chris Wellekens in the Dpt of Multimedia Communications.
The spoken languages at the Institute are French and English for the lectures; English is the usual language for research exchanges. Speech research involves speaker identification using speaker clustering or eigenvoices, phonemic variabilities of lexicons, optimal feature extraction, Bayesian networks and variational techniques, and navigation in audio databases (speaker segmentation, word spotting, ...).
Job description: POST DOC or RESEARCH ENGINEER
The European project DIVINES, a STREP of the 6th FP, has been accepted by the Commission and will start in January 2005. Eight labs and companies are partners: Multitel (B), Eurecom (F), France Telecom R/D (F), University of Oldenburg (D), Babeltechnologies (B), Loquendo (I), Politecnico di Torino (I), LIA (F). A collaboration with McGill University (Montreal) has also been negotiated. The aim of the project is to analyse the reasons why recognizers are unable to reach human recognition rates, even in the absence of semantic content. All weaknesses will be analyzed at the level of feature extraction and of phone and lexical models. Focus will be put on intrinsic variabilities of speech in quiet and noisy environments, as well as in read and spontaneous speech. The analysis will not be restricted to tests on several databases with different features and models, but will go into the detailed behavior of the algorithms and models. Suggestions for new solutions will arise and be experimented with. The project will run for 3 years.
The Speech group is looking for a post-doc or research engineer who has acquired hands-on experience of speech processing. He/she must have an excellent command of signal and speech analysis as well as a good knowledge of optimal classification using Bayesian criteria. He/she must be open-minded towards original solutions proposed after a rigorous analysis of the low-level phenomena in speech processing. Fluency in English is mandatory (written and spoken). He/she should be able to represent Eurecom at the periodical project meetings. Ability to work in a small team is also required.
- send a detailed resume (give details on your activity since your PhD graduation)
- send a copy of your thesis report (either as a printed document or on a CD-ROM; DO NOT attach your thesis to an e-mail!)
- send a copy of your diploma
- send the names and email addresses of two referees
- send the list of your publications (you must have several)
to Professor Chris J. Wellekens, Dpt of Multimedia Communications, 2229 route des Cretes, BP 193, F-06904 Sophia Antipolis Cedex, France.
Additional information:
Contact Professor Chris Wellekens ( )

AMI TRAINING PROGRAMME
AMI (Augmented Multiparty Interaction) is an integrated project funded by the EC Framework 6 programme from January 2004 for 3 years.
AMI is concerned with multimodal technologies to support human interaction, in the context of smart meeting rooms and remote meeting assistants. The project aims to develop new tools for understanding, searching and browsing meeting data captured from a wide range of devices, as part of integrated multimodal group communication. AMI will thus address a range of multidisciplinary research including natural speech recognition, speaker tracking and segmentation, visual shape tracking, gesture recognition, multimodal dialogue modelling, meeting dynamics, summarisation, browsing and retrieval.
AMI supports a training programme whose objective is to provide opportunities for undergraduates, masters students, Ph.D. students and postdoctoral researchers to take part in AMI.
* The training programme funds internships and exchanges.
* Visits typically occupy at least 3 months for undergraduates and masters students and at least 6 months for Ph.D. students and postdoctoral researchers.
* Funding covers travel and living expenses, but not salary. Living expenses will typically be 1250 Euro/month.
* The programme is open to all, but priority is given to researchers who are members of AMI teams, researchers who intend to visit AMI teams, researchers who can demonstrate close connections with AMI research, and proposals with an industrial component.
* A specific programme funds visits of 6 months or more to the International Computer Science Institute, Berkeley, CA.
In this case typical living expenses are 2000 Euro/Month. For Ph.D. students and postdoctoral researchers, visits to ICSI will typically be at least 6 months. Senior scientists are also encouraged to apply, in which case proposals for shorter visits will also be entertained.

HOSTING SITES AMI's 15 partners and associated companies and institutions (details on ) will act as hosts for the training programme. The project is jointly managed by IDIAP (CH) and The University of Edinburgh (UK). The training programme is managed by the University of Sheffield (UK).

The application form can be downloaded from
You will need the written support of your home institution and the host institution. You will also need an academic reference.
Enquiries may be addressed to Linda Perna, AMI training programme administrator.

WHEN TO APPLY You can apply at any time but applications will be considered on a quarterly basis, with deadlines of 15th September and 15th December.
Professor Phil Green
AMI Training Manager
Department of Computer Science
University of Sheffield
Regent Court, 211 Portobello St., Sheffield S1 4DP, UK
phone: (44) 114 22 21828, fax: (44) 114 22 21810
Contact person: Phil Green /people/P.Green



Call for Papers for a Special Issue of Speech Communication Journal on
Spoken Language Understanding for Conversational Systems
Paper submission deadline has been extended by one month to January 1st, 2005!
The special issue follows on the related HLT/NAACL 2004 Workshop and will address topics such as:

* Approaches to building an SLU system (rule-based, data-driven, or hybrid; automatic adaptation)
* Approaches to robustness in SLU systems (handling uncertain and erroneous input; handling dysfluencies and language variations)
* Output representation of SLU systems (Customizable domain independent representations based on task knowledge)
* Tighter integration of ASR and SLU systems
* Evaluation of SLU systems
* SLU in multilingual systems
* SLU in multimodal systems
* Question answering based SLU systems
* Machine learning algorithms for SLU
* Information retrieval and extraction for/from spoken dialogs

Guest Editors:
Dr. Srinivas Bangalore, AT&T Labs - Research
Dr. Dilek Hakkani-Tur , AT&T Labs-Research
Dr. Gokhan Tur , AT&T Labs -Research

Important Dates:
Extended Submission deadline: January 1st, 2005 (early submission is encouraged)
Notification of acceptance: April 1st, 2005
Final manuscript due: June 1st, 2005
Tentative publication date: September 1st, 2005

Submission Procedure:
Electronic submission. During submission, authors must select the Section as "Special Issue Paper", not "Regular Paper", and the title of the special issue should be referenced on the "Comments" page along with any other information.

The web pages for the CFP are: html version
pdf version

Call for Papers for a
Special issue of Speech Communication Journal on Robustness Issues in Conversational Interaction
Following the ISCA Tutorial and COST 278 Research Workshop (ITRW) on Robustness Issues in Conversational Interaction (Robust2004), held at the University of East Anglia in August 2004, a special edition of the Speech Communication Journal is planned along the same theme of robustness. This special edition will focus on methods of developing robustness against effects that are known to degrade the performance of components within conversational interaction systems. Degradation can arise from many different sources (acoustic noise, packet loss, speaker variability, etc.), and compensation may come from a variety of different techniques: signal processing, model adaptation, confidence measures, dialogue strategies and the inclusion of additional modalities. In particular, the special edition will focus on the following areas:
*Robustness against environmental noise
-Model adaptation
-Feature extraction
-Filtering and transformations
*Robustness against unreliable transmission channels
-Distributed approaches to ASR
-Channel protection
-Error concealment – reconstruction or adaptation
*Robust conversational system design
-Utterance verification
-Confidence measures
-Error handling
-Dialogue strategies
-User modelling and adaptation
*Non-speech modalities to improve robustness
-Multi-modal interaction
-Modality fusion and synchronisation
-Non-speech audio
-Non-acoustic features
*Robustness to speaker variability
-Spontaneous speech
-Dialects and non-native speakers
-Speaker adaptation
Submission of papers is open to both participants of Robust2004 (through submission of an extended workshop paper) and non-participants alike.
Guest Editors

Dr. Ben Milner, University of East Anglia, UK
Prof. Borge Lindberg, Aalborg University, Denmark
Prof. Christian Wellekens, EURECOM, France

Important Dates
Submission deadline 31st March 2005
Notification of acceptance 31st May 2005
Tentative publication 1st September 2005

Submission Procedure:
Electronic submission. During submission, authors must select the Section as "Special Issue Paper", not "Regular Paper", and the title of the special issue should be referenced on the "Comments" page along with any other information.

-Papers accepted for future publication in Speech Communication
Full text is available on ScienceDirect for Speech Communication subscribers and subscribing institutions. Click on Publications, then on Speech Communication and on Articles in press. The list of papers in press is displayed and a .pdf file of each paper is available.

F. Torres, L.F. Hurtado, F. García, E. Sanchis and E. Segarra, Error handling in a stochastic dialog system through confidence measures, Speech Communication, In Press, Uncorrected Proof, Available online 8 December 2004,

Rolf Carlson, Julia Hirschberg and Marc Swerts, Error handling in spoken dialogue systems, Speech Communication, In Press, Uncorrected Proof, Available online 8 December 2004,

David Sodoyer, Laurent Girin, Christian Jutten and Jean-Luc Schwartz, Developing an audio-visual speech source separation algorithm, Speech Communication, In Press, Corrected Proof, Available online 7 December 2004,

Matthias Odisio, Gérard Bailly and Frédéric Elisei, Tracking talking faces with shape and appearance models, Speech Communication, In Press, Corrected Proof, Available online 7 December 2004,

Sascha Fagel and Caroline Clemens, An articulation model for audiovisual speech synthesis--Determination, adjustment, evaluation, Speech Communication, In Press, Corrected Proof, Available online 1 December 2004,

Jean-Luc Schwartz, Frédéric Berthommier, Marie-Agnès Cathiard and Renato de Mori, Editorial, Speech Communication, In Press, Uncorrected Proof, Available online 30 November 2004,

Frédéric Berthommier, A phonetically neutral model of the low-level audio-visual interaction, Speech Communication, In Press, Corrected Proof, Available online 25 November 2004,

Lynne E. Bernstein, Edward T. Auer, Jr. and Sumiko Takayanagi, Auditory speech detection in noise enhanced by lipreading, Speech Communication, In Press, Corrected Proof, Available online 25 November 2004,

Kazuhiro Nakadai, Daisuke Matsuura, Hiroshi G. Okuno and Hiroshi Tsujino, Improvement of recognition of simultaneous speech signals using AV integration and scattering theory for humanoid robots, Speech Communication, In Press, Corrected Proof, Available online 25 November 2004,

Emanuela Magno Caldognetto, Piero Cosi, Carlo Drioli, Graziano Tisato and Federica Cavicchio, Modifications of phonetic labial targets in emotive speech: effects of the co-production of speech and emotions, Speech Communication, In Press, Uncorrected Proof, Available online 25 November 2004,

Virginie Attina, Denis Beautemps, Marie-Agnès Cathiard and Matthias Odisio, A pilot study of temporal organization in Cued Speech production of French syllables: rules for a Cued Speech synthesizer, Speech Communication, In Press, Uncorrected Proof, Available online 25 November 2004,

Jeesun Kim and Chris Davis, Investigating the audio-visual speech detection advantage, Speech Communication, In Press, Corrected Proof, Available online 24 November 2004,

Ilkka Linnankoski, Lea Leinonen, Minna Vihla, Maija-Liisa Laakso and Synnöve Carlson, Conveyance of emotional connotations by a single word in English, Speech Communication, In Press, Corrected Proof, Available online 24 November 2004,

Jing Huang, Gerasimos Potamianos, Jonathan Connell and Chalapathy Neti, Audio-visual speech recognition using an infrared headset, Speech Communication, In Press, Corrected Proof, Available online 24 November 2004,

Marion Dohen, Hélène Loevenbruck, Marie-Agnès Cathiard and Jean-Luc Schwartz, Visual perception of contrastive focus in reiterant French speech, Speech Communication, In Press, Corrected Proof, Available online 24 November 2004,

I. Bulyko, K. Kirchhoff, M. Ostendorf and J. Goldberg, Error-correction detection and response generation in a spoken dialogue system, Speech Communication, In Press, Uncorrected Proof, Available online 20 November 2004,

Jan Zera, Erratum to "Speech intelligibility measured by adaptive maximum-likelihood procedure" [Speech Communication 42 (3-4) (2004) 313-328], Speech Communication, In Press, Uncorrected Proof, Available online 19 November 2004,

Pashiera Barkhuysen, Emiel Krahmer and Marc Swerts, Problem detection in human-machine interactions based on facial expressions of users, Speech Communication, In Press, Uncorrected Proof, Available online 10 November 2004,

Julie Baca and Joseph Picone, Effects of displayless navigational interfaces on user prosodics, Speech Communication, In Press, Uncorrected Proof, Available online 5 November 2004,



Publication policy: Below you will find very short announcements of future events. The full calls for participation can be accessed on the conference websites.
See also our Web pages ( ) on conferences and workshops.


-Interspeech (Eurospeech)-2005, Lisbon, Portugal, September 4-8, 2005
Chair: Isabel Trancoso, INESC ID Lisboa

-Interspeech (ICSLP)-2006, Pittsburgh, PA, USA
Chair: Richard M. Stern, Carnegie Mellon University, USA

-Interspeech (Eurospeech)-2007, Antwerp, Belgium, August 27-31, 2007
Chair: Dirk van Compernolle, K.U.Leuven and Lou Boves, K.U.Nijmegen



- NOLISP'05: Non-linear speech processing
April 19-22, 2005, Barcelona, Spain
organized by COST 277. Contact person: Marcos Faundez-Zanuy (see ISCApad 66)

Organized by: UCL Centre for Human Communication, UCL, London, UK
co-sponsored by the Acoustical Society of America
15-17 June 2005; London, UK
Anne Cutler, Max Planck Institute, Netherlands
James Flege, University of Alabama at Birmingham, USA
Patricia Kuhl, University of Washington, USA
David Moore, MRC Institute of Hearing Research, UK
Christophe Pallier, Inserm Cognitive Neuroimaging Unit, France
David Pisoni, Indiana University, USA
Franck Ramus, CNRS Cognitive and Psycholinguistic Sciences Laboratory, France
Stuart Rosen, UCL, UK
Jenny Saffran, University of Wisconsin - Madison, USA
Glenn Schellenberg, University of Toronto Mississauga, Canada
Sophie Scott, UCL, UK
Contact: Valerie Hazan

-6th SIGdial Workshop on DISCOURSE and DIALOGUE
Lisbon, Portugal, 2-3 September 2005
(held in conjunction with Eurospeech/Interspeech 2005) workshop6/
1. Dialogue Systems
Spoken, multi-modal, and text/web based dialogue systems
2. Corpora, Coding Schemes and Tools
Corpus-based work on discourse and spoken, text-based and multi-modal dialogue including its support.
3. Pragmatic and/or Semantic Modelling
The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a single sentence).
Long papers (10 pages max) for full plenary presentation, as well as short papers (5 pages max) and demonstrations, are invited.
Deadline: April 25, 2005 at
Style files are available at style/

Workshop website
Sigdial website
Eurospeech website

Laila Dybkjær, University of Southern Denmark
Wolfgang Minker, University of Ulm, Germany

Aix-en-Provence, France, September 10-12, 2005
organised by the DELIC team of the University of Provence.
The meeting is timed to allow participants at INTERSPEECH (Lisbon, September 4-8) to attend. Previous meetings (Berkeley, 1999; Edinburgh, 2001; Gothenburg, 2003) have seen papers addressing normal disfluency from a wide range of disciplines, from automatic speech recognition and computational linguistics to linguistic analysis, psycholinguistics (production and comprehension), and beyond. Papers comparing normal disfluencies to those occurring in communication disorders are also welcome. Four-page papers by April 8, 2005 to
Once accepted, papers may be revised and extended to 6 pages.
Additional information:
Jean Veronis, DELIC, Université de Provence, France
Robert Eklund, TeliaSonera, Sweden
Robin Lickley, Queen Margaret University College, Edinburgh, UK
Liz Shriberg, SRI International, Menlo Park, CA, USA
Åsa Wengelin, Lund University, Sweden



-4th International Symposium on Chinese Spoken Language Processing (ISCSLP'04)
December 16-18, 2004, Hong Kong, China

-Pan European Voice Conference (PEVOC 6)
August 31 - September 3, 2005, London, UK

10th International Conference on Speech and Computer

October 17-19, 2005, University of Patras, Patras, Greece

Topics of interest for paper submission include but are not limited to:
a. Speech production and perception
b. Speech analysis and processing
c. Natural language processing
d. Speech coding and transmission
e. Speech recognition and understanding
f. Speech synthesis
g. Spoken dialog systems
h. Speaker recognition
i. Multi-modal processing
j. Speech and language resources
k. Applied systems for Human-Computer Interaction

Four-page papers in English will be accepted only by electronic submission through e-mail, in ASCII format.

Submission of full paper: June 27, 2005
Notification of acceptance: July 18, 2005
Early registration: July 29, 2005
Late registration: September 10, 2005

George Kokkinakis, WCL, University of Patras, Greece
For further information on SPECOM 2005, e-mail Claudia Manfredi.



Beyond HMM
IEICE/IPS/ATR workshop on statistical modeling approach for speech recognition
Kyoto, Japan, December 20, 2004

-Conference on A CENTURY OF EXPERIMENTAL PHONETICS: Its History and Development from Theodore Rosset to John Ohala
Universite Stendhal, Grenoble, France, February 24-25, 2005.
Conference room Jacques Cartier, Maison des Langues et de la Culture.
Contributions of 20 minutes or posters are welcome.
Send a 200-word abstract to 100ans at

-ICASSP 2005
Philadelphia, PA, USA, March 19-23, 2005

-The 15th Nordic Conference of Computational Linguistics
Joensuu, Finland, May 20-21, 2005 nodalida2005/
1000-word abstracts before January 31, 2005, to be mailed to nodalida2005
Registration before March 31, 2005
Contact: Stefan Werner

London 13-14 May 2005
Promoting joined-up working for Europe’s professionals with deaf children and their families.
The meeting is designed to bring together the wide range of professionals and voluntary organisations throughout Europe with an interest in childhood hearing impairment. There is a growing awareness of the need to work collaboratively across organisations and professional boundaries and with the users themselves if the goal of delivering high quality hearing services to all of Europe’s children who need them is to be achieved. This meeting will provide exciting opportunities to explore these challenges. It will also raise the profile of hearing impairment and an awareness of the needs of our children across Europe.
The meeting will be held under the auspices of NDCS (National Deaf Children’s Society) and RNID (Royal National Institute for Deaf People) and will be organised by The Ear Foundation. The meeting will be held in central London on 13/14 May 2005.
Registration form and further details from:
Brian Archbold

Eighth International Symposium
on Signal Processing and its Applications, ISSPA 2005,

22-25 August 2005, Sydney, AUSTRALIA, University of Wollongong

1. Digital Filter Design & Structures
2. Signal Processing for Communications
3. Multirate Filtering & Wavelets
4. Image and Video Coding
5. Adaptive Signal Processing
6. Image Enhancement and Restoration
7. Time-Frequency/Time-Scale Analysis
8. Biomedical Signal and Image Processing
9. Security Signal Processing & Digital Watermarking
10. Neural Networks & Pattern Recognition
11. Statistical Signal & Array Processing
12. Blind Source Separation
13. Radar & Sonar Processing
14. Signal Processing Education
15. Speech Processing & Recognition
16. Multimedia Signal Processing
17. Image & Multidimensional Signal Processing
18. Image Sequence Analysis & Processing
19. Machine Learning
20. Photonic & Optical Signal Processing
21. VLSI for Signal and Image Processing
22. Other Signal Processing Applications

Submit full-length papers (four pages).

Full paper submission: March 15, 2005
Tutorial & special session proposals: March 15, 2005
Notification of acceptance: May 16, 2005
Camera ready paper: June 10, 2005

conference website

European Conference on Circuit Theory and Design (ECCTD)
University College Cork, Ireland
29 August - 1 September, 2005
"Innovation through Understanding"
*Mathematical Methods
*Computational Methods
*Industrial Applications
Authors are invited to submit a Full 4-page Paper according to posted guidelines. Only electronic submissions will be accepted via the Web at:
Contact Details:

The 13th European Conference on Signal Processing
Antalya, Turkey, September 4-8, 2005
The main conference themes are:
- Statistical Signal Processing
- Sensor Array and Multichannel Processing
- Biosignal Processing
- Signal Processing for Communications
- Speech Processing
- Image and Multidimensional Signal Processing
- Multimedia Signal Processing
- Nonlinear Signal Processing
- Audio and Electroacoustics
- DSP Implementations and Embedded Systems
- Rapid Prototyping and Tools for DSP Design
- Industrial Applications of Signal Processing
- Signal Processing Education
- Emerging Technologies in Signal Processing

- MT Summit X
September 12-16, 2005
The Hilton Phuket Arcadia Resort and Spa
Phuket, Thailand
Conference Schedule

12 September 2005 Tutorials
13-15 September 2005 Papers, panels and exhibitions
16 September 2005 Workshops


- MT for the Web
- Practical MT systems (MT for professionals, MT for multilingual eCommerce, MT for localization, etc.)
- Translation aids (translation memory, terminology databases, etc.)
- Translation environments (workflow, support tools, conversion tools for lexica etc.)
- Methodologies for MT
- Human factors in MT and user interfaces
- Speech and dialogue translation
- Natural language analysis and generation techniques geared towards MT
- Dictionaries and lexicons for MT systems
- Text and speech corpora for MT and knowledge extraction from corpora
- MT evaluation techniques and evaluation results
- Standards in text and lexicon encoding for MT
- Cross-lingual information retrieval
- MT and related technologies (information retrieval, text categorization, text summarization, information extraction, etc.)

Details of submission procedure will be announced in the next Call for Papers.

Call for Exhibitions
Exhibition proposals should be sent to Kunio Matsui by April 15, 2005.

Call for Panel / Special Session / Invited Speaker Proposals
Proposals should be sent to the Program Chair by January 31, 2005.

Call for Tutorial and Workshop Proposals
We would like proposals for workshops and tutorials to be co-located with MT Summit X. We will provide the required facilities. If you are willing to take the initiative in organizing satellite events, please submit a brief description of the events to Eiichiro Sumita by December 15, 2004.

Important Dates
15 Dec. 2004 Proposals for tutorials and workshops
31 Jan. 2005 Proposals for panels etc.
Notification for tutorials and workshops
15 Apr. 2005 Paper submission deadline
Exhibition registration deadline
31 May 2005 Paper acceptance notifications
29 Jul. 2005 Final camera-ready copy deadline


- ISPA 2005
4th Int'l Symposium on Image and Signal Processing and Analysis
Zagreb, Croatia, September 15-17, 2005
Conference website

A. Image and Video Processing
B. Image and Video Analysis
C. Image Formation and Reproduction
D. Signal Processing
E. Signal Analysis
F. Applications

Electronic submission of full paper: February 1, 2005
Notification of acceptance/rejection: April 15, 2005
Submission of camera-ready papers and registration: May 15, 2005

Hrvoje Babic and Maurice Bellanger, General Co-Chairs
Sven Loncaric and Philip Regalia, Program Co-Chairs