============================================================================
ISCApad number 68                                          January 22nd 2004
============================================================================

Dear ISCA members,

Several of you have recently been bothered by fraudulent messages containing a virus. The protected ISCA mailing list has been unlawfully used, and the messages apparently sent under my name did not originate here. On the date in question I was attending a meeting 1,200 km from my computer and had no opportunity to log in remotely. We are trying to track down the origin of the fraud with our security experts. I apologize for the trouble this may have caused in your mailing system.

The regular date of issue is the first week of each month. Do not forget to send the information you want displayed to the members in time to be included (last week of each month).

Chris Wellekens

TABLE OF CONTENTS
=================
*ISCA News
*Courses, internships, databases, software
*Job openings
*Journals and Books
*Future ISCA Tutorial and Research Workshops (ITRW)
*Future ISCA supported events
*Future Speech Science and Technology events

=============================================================================
ISCA NEWS
=========
-New development in membership services: online access to the Speech Communication journal at a discounted rate. The online subscription gives members access not only to the current year's volumes but also to the Speech Communication archive dating back to 1995. If you are interested in subscribing either to the paper version alone or to the paper version plus online access, please indicate this on the renewal form (http://www.isca-speech.org/Apply_member.html) and you will be billed directly by Elsevier.
Individual, FULL member and STUDENT: paper version only: 85 EUR
Individual, FULL member and STUDENT: paper version + online access: 95 EUR
Institutional member: paper version only: 550 EUR

-ISCApad now publishes a list of papers accepted for publication in Speech Communication. These papers can also be viewed on the ScienceDirect website (http://www.sciencedirect.com) if your institution has subscribed to Speech Communication.

-ISCA grants are available for students attending meetings. Even if no information on the grants is advertised in the conference announcement, students may apply. For more information: http://www.isca-speech.org

-The membership of many members expired on December 31, 2003. Our secretary should have contacted you. In case you have not yet sent your renewal, you will find a form together with the announcements of meetings in the companion files: ext68

==============================================================================
COURSES, DATABASES, SOFTWARE
============================
-MSDR2003 now in the ISCA Archive
As you might know, the ISCA Workshop on Multilingual Spoken Document Retrieval (MSDR2003) was planned to be held in Hong Kong in April 2003 but had to be cancelled due to the SARS epidemic which hit parts of South East Asia at that time. The proceedings of this workshop, containing the 17 full papers that were ready at the time of the workshop, have now been released and are available in the ISCA Online Archive (abstracts accessible to everybody, full papers to ISCA members only).

-Information on ongoing theses can be very useful for thesis supervisors and researchers as well as PhD students. A list of speech theses is available at http://HLTheses.elsnet.org

-The University of Amsterdam offers a two-year Research MA in Linguistics and a one-year MA in General Linguistics. Both programmes are taught in English or, in the case of language-specific courses, in the target language.
Since the University of Amsterdam is interested in attracting talented students, the tuition rates are competitive.

The Research MA in Linguistics is directed towards students of proven ability who are interested in conducting research in one of the many areas of linguistics studied in the research institutes of the University of Amsterdam (among others 'Language Technology' and 'Speech Communication and Speech Technology'). The programmes offer students the opportunity of specializing in a wide range of linguistic subdisciplines studied from various theoretical perspectives. The programme lasts two years for selected students with a relevant BA or equivalent, and one year for selected students with a relevant MA or equivalent.

The MA in General Linguistics also offers a wide range of specializations (among others 'Language Technology' and 'Speech Communication and Speech Technology'), and is aimed at students with a BA in Linguistics or an equivalent programme involving at least three years of full-time study at university level. The programme lasts one year.

Further information about the MA programmes may be found at or requested from .

-lingPARSER converts text reliably into the corresponding phonemes.
Converting a written text into spoken sounds, and vice versa, is not a straightforward task. In many languages, such as German and English, there are numerous exceptions to pronunciation rules. In addition, altered pronunciation rules apply to thousands of proper names (persons, companies, products) and foreign loan words.

lingPARSER pronunciation tools
LingCom's lingPARSER converts text reliably into the corresponding phonemes. The standard version is based on huge lexicons with 64,000 (German) to 82,000 (UK and US English) items which cover the whole vocabulary of standard texts. Since the implemented pronunciation rules follow linguistic principles, it is possible to pronounce more than 50,000 word forms correctly, which is more than sufficient for unrestricted text.
Special extensions such as scientific and foreign words, first names, last names, geographical names, English loan words in German ("Denglish"), and music titles and artists can be added or are available on request.

lingPARSER has been derived from the Logox text-to-speech pronunciation lexicon, which has been developed and maintained by G DATA and the Institute of Phonetics of the University of Saarbrücken for more than 10 years. The original lexicon produces SAMPA output and considers a few assimilation processes which are common in sentence production. It is also possible to output canonical forms as used in pronunciation lexicons and single-word utterances. The phoneme set has been adapted to HAPI, as used in Entropic's HTK-based speech recognisers, and to SAPI, the Microsoft Speech Application Programming Interface. Other phoneme inventories can be supported on request.

Applications
Reliable computer-readable pronunciation lexicons are a prerequisite for numerous speech technology applications such as speech synthesis and speech recognition. In language teaching and phonetic research, too, computationally accessible pronunciation dictionaries are beneficial resources.

In speech synthesis the pronunciation must be adequate for news and e-mail reading as well as for reading scientific articles and the most recent changes in, e.g., a Star Trek web page. For the Logox text-to-speech system we have been working for more than 10 years to achieve this goal.

A pronunciation lexicon enables a speech recogniser to assign a spoken utterance to the corresponding words. The development of dialogues in IVR systems would be time-consuming and unpleasant without automatic word-to-sound conversion, and dynamic dialogue modelling would be impossible without it. The tremendous variation which a speaker-independent system must cope with - e.g. while dictating addresses from a telephone register - also affects the pronunciation.
A good system must account for alternative pronunciations and for dialectal or foreign-accent deviations from the canonical pronunciation.

A special case of speech recognition is audio mining, or text-to-speech alignment: a given text (of, e.g., a news reader) is aligned to the corresponding speech signal. Without a proper pronunciation lexicon the alignment is going to fail. In addition, the lexicon must account for hesitation sounds, interrupted words and many more characteristics of conversational speech.

Another field of deploying computer-readable word-to-sound conversion is language teaching. Printed dictionaries usually contain information about the pronunciation of a word. Language training in e-learning courses will benefit from a proper pronunciation lexicon, not only if it is connected to a speech synthesiser. A phoneme-based recogniser (such as LingCom's) can offer information about the pronunciation accuracy of the language learner.

Last but not least, a reliable grapheme-to-phoneme converter can be a precious research tool which supports researchers during phonetic transcription or the design of experiments.

Service and maintenance included
A pronunciation lexicon requires extensive maintenance. New words appear every day in the news or are created by politicians, product designers, and artists. This can only partially be accounted for by applying standard pronunciation rules. We regularly scan the net for new words and keep the lingPARSER pronunciation lexicon up to date. We also check your word list and add uncommon words to the customer lexicon.

Facts in brief
o lingPARSER creates a reliable pronunciation for each text input. We can adapt it to your needs.
o Available languages: British English, American English, German (Spanish and Italian under development). Other languages on request.
o Output formats: SAPI 5, HAPI, SAMPA. Other formats on request.
o Operating systems: Windows 9x, 2000, NT, XP, Linux; other OS on request.
o Deliverable versions: DLL for integrating lingPARSER into your software, command line tool, application with textual input.
o System requirements: Pentium II PC, 16 MB RAM.

About LingCom
LingCom is a service provider specialized in solving speech-technology-related problems. That is, we offer our expertise in speech recognition, speech synthesis, speech signal analysis, and the creation and maintenance of pronunciation lexicons to come up with the most economical solution. We therefore provide all kinds of adaptations for your convenience.

Ingolf Franke
--------------------------------
LingCom GmbH
Dechant-Reuder-Str. 4
D-91301 Forchheim
info@lingcom.de
www.lingcom.com

===========================================================================
JOB OPENINGS (see also the Jobs pages at http://www.isca-speech.org and http://www.elsnet.org)
===========================================================================
1. TWO POSITIONS at Institut Eurecom

Department: Multimedia Communications

Description: Eurecom is an international teaching and research institute, founded in 1991 as a joint initiative by Ecole Polytechnique Federale de Lausanne (EPFL) and Ecole Nationale Superieure des Telecommunications (ENST-Paris). It welcomes students from several engineering schools and universities: ENST Paris, ENST Brittany, INT Evry, EPFL, ETHZ (Zurich), Helsinki University of Technology, Politecnico di Torino... They receive an education in Communications Systems (Networking, Multimedia, Security, Mobile Communications, Web services...). Professors, lecturers and PhD students conduct research in these domains. Speech processing is under the responsibility of Professor Chris Wellekens in the Dpt Multimedia Communications. The spoken languages at the Institute are French and English for the lectures; English is the usual language for research exchanges.
Speech research involves speaker identification using speaker clustering or eigenvoices, phonemic variabilities of lexicons, optimal feature extraction, Bayesian networks and variational techniques, and navigation in audio databases (segmentation into speakers, wordspotting,...). The following jobs are open in the framework of an EU project that will start by February 2004.

First job description: POST DOC or RESEARCH ENGINEER
The European project DIVINES, a STREP of the 6th FP, has been accepted by the Commission and will start in January 2004. Eight labs and companies are partners: Multitel (B), Eurecom (F), France Telecom R&D (F), University of Oldenburg (D), Babeltechnologies (B), Loquendo (I), Politecnico di Torino (I), LIA (F). The aim of the project is to analyse the reasons why recognizers are unable to reach human recognition rates, even in the absence of semantic content. All weaknesses will be analyzed at the level of feature extraction, phone models and lexical models. Focus will be put on the intrinsic variabilities of speech in quiet and noisy environments as well as in read and spontaneous speech. The analysis will not be restricted to tests on several databases with different features and models but will go into the detailed behavior of the algorithms and models. New solutions will be proposed and experimented with. The duration of the project is 3 years.

The Speech group is looking for a post-doc who has acquired hands-on experience in speech processing. He/she must have an excellent command of signal and speech analysis as well as a good knowledge of optimal classification using Bayesian criteria. He/she must be open to original solutions proposed after a rigorous analysis of the low-level phenomena in speech processing. Fluency in English is mandatory (writing, understanding and speaking). He/she should be able to represent Eurecom at the periodic project meetings. Ability to work in a small team is also required.

Application:
-send a detailed resume (give details of your activity since your PhD graduation)
-send a copy of your thesis report (either as a printed document or as a CD-ROM); DO NOT attach your thesis to an e-mail!
-send a copy of your diploma
-send the names and email addresses of two referees
-send the list of your publications (you must have several)
to Professor Chris J. Wellekens, Dpt of Multimedia Communications, 2229 route des Cretes, BP 193, F-06904 Sophia Antipolis Cedex, France

2nd job description at Eurecom: Ph.D. STUDENT
The position is part of the same European project, DIVINES (a STREP of the 6th FP, starting in January 2004), described in the first job announcement above.

The Speech group is looking for a top-level PhD student who has a good knowledge of speech processing. Preference is for a student who worked on speech in his/her predoctoral school or worked on a speech project for his/her graduation project. He/she must have an excellent command of signal and speech analysis as well as a good knowledge of optimal classification using Bayesian criteria. Fluency in English is mandatory (writing, understanding and speaking). Ability to work in a small team is also required.
Application:
-send a detailed resume
-send a copy of your graduation project report or Master's thesis (either as a printed document or as a CD-ROM); DO NOT attach your report to an e-mail!
-send a copy of your diploma
-send the names and email addresses of two referees
-send the list of your publications (if any)
to Professor Chris J. Wellekens, Dpt of Multimedia Communications, 2229 route des Cretes, BP 193, F-06904 Sophia Antipolis Cedex, FRANCE

Additional information: contact Professor Chris Wellekens at christian.wellekens@eurecom.fr

2. Postdoctoral Positions in Multimodal Interaction and Systems

Applications are invited from recent PhDs for postdoctoral positions on a new research project that is modeling aspects of multimodal interaction and human performance, as well as designing and prototyping new multimodal systems. Project research areas include user modeling of multimodal interaction, user/system learning and adaptive multimodal processing, collaborative multimodal interaction, multimodal dialogue and processing techniques, mobile and multimodal-multisensor interface design, and other topics. Applicants who have a broad interest in issues related to cognitive science and quantitative user modeling, linguistics and natural language processing, machine learning and adaptive interface design, and computational processing and system development of varied multimodal input (e.g., speech, vision, pen) are encouraged to apply. Applicants with experience in multidisciplinary team-oriented research are especially encouraged. This work is being conducted in a state-of-the-art laboratory facility at the Center for Human-Computer Communication (CHCC) at the Oregon Health & Science University (OHSU) in the Portland metropolitan area. Priority will be given to applications received by July 1st, 2003, although positions will remain open until filled.
Postdoctoral salary range and benefits are competitive, and positions are for 1-2 years with renewals possible. To apply, submit a resume, a copy of graduate transcripts, names and contact information for 3 references, and a brief statement of research/career interests to:

Deb DeShais, Center Administrator
Center for Human-Computer Communication (CHCC)
Department of Computer Science
Oregon Health & Science University (OHSU)
20,000 N.W. Walker Road
Beaverton, Oregon 97006
FAX: (503) 748-1875; Ph: (503) 748-1248

For general CHCC information & publications, see: http://www.cse.ogi.edu/CHCC. For further information or to apply via email, contact: deshais@cse.ogi.edu. Women and minority applicants are encouraged to apply.

3. Postdoctoral position in the field of automatic speech recognition

The Speech Processing Group of INRS-Telecommunications (a part of the University of Quebec, located in Montreal, Canada - http://www.inrs-telecom.uquebec.ca) invites applications for a post-doctoral position in the area of automatic speech recognition. The research is part of a larger effort to design efficient natural dialogue systems using both English and French speech. Specifically, we aim to develop novel techniques to make speech recognition more efficient and more accurate. Our relatively small research group offers the freedom to examine new ideas, without the constraints of simply pursuing incremental modifications to existing systems.

Desired profile: The highly qualified applicant should possess a Ph.D. degree in the field of speech signal processing. He/she should be familiar with spectral analysis techniques, statistical modeling, natural language processing, and acoustic-phonetics. Programming skills are essential (C, Matlab, etc.), and familiarity with UNIX platforms is helpful. We offer a challenging research environment.
The applicant will work in a long-standing research group in a modern technological environment, in the telecommunications capital of Canada. Montreal remains the most active region in Canada for speech recognition research, with four local companies dedicated to the field, besides our own university labs. Our INRS facilities offer interaction with other related fields, since we do research in image processing, protocols, radiotelephony, and software engineering. We are well known in the speech field, having presented papers at virtually all of the major speech conferences during the last two decades.

Send your CV (including the names and contact information of three references), a bibliography and your contact details by mail/fax/email/phone to:

Prof. Douglas O'Shaughnessy
INRS-EMT (Telecommunications)
Place Bonaventure, Box 6900
800 de la Gauchetiere Ouest
Montreal, Quebec
Canada H5A 1K6
telephone: 514-875-1266 x2012
fax: 514-875-0344
E-mail: dougo@inrs-emt.uquebec.ca

4. The Interactive Systems Labs (ISL) at the University of Karlsruhe and at Carnegie Mellon University have several immediate openings at all levels in the area of Automatic Speech Recognition and Acoustic Modeling

The successful candidate(s) are expected to contribute to the state of the art of modern recognition systems. He/she will participate in the design, development and exploration of innovative methods, algorithms and techniques for acoustic and language modeling, leading to improvements in recognizer performance. A primary focus of the research will be to develop robust high-performance algorithms for the recognition of spontaneous, conversational speech. Candidates interested in application-oriented research on the integration and fusion of such recognizers in multimodal interfaces and computing and communication services are also encouraged to apply.
The Interactive Systems Labs operate in two locations: the University of Karlsruhe, Germany, and Carnegie Mellon University, Pittsburgh. International joint and collaborative research at and between our centers is common and encouraged, and offers greater international exposure and activity. The focus of our research is to develop better communication and computing services that take advantage of an understanding of the human context and activities. Two examples of the laboratories' work are speech-translation systems and multimodal user interfaces. The former has led to the JANUS system, one of the first speech translation systems proposed. Other multilingual and multimodal systems include portable speech translators, video-conferencing speech translators, meeting browsers and lecture trackers, multimodal dialog systems and navigation aids for tourists, machine translation of text, speech and OCR, and computer support of human-human interaction.

We seek qualified candidates at all levels with a B.S., M.S. or PhD degree in Electrical Engineering, Computer Science or related fields. For candidates with Bachelor's or Master's degrees, the position offers the opportunity to work toward a PhD degree. A record of academic achievements, relevant experience and knowledge in relevant areas, and good programming skills are expected. Outstanding candidates at the post-doctoral or junior faculty level are also encouraged to apply. Post-doctoral positions offer the opportunity to engage in teaching and research, build and organize a research team, and develop an academic career and publication record in a well-equipped, supportive, international and state-of-the-art environment.

The University of Karlsruhe and Carnegie Mellon University are equal opportunity employers. Questions or on-line applications may be directed to Florian Metze, Tel. +49-721-608-4734, E-Mail: metze@ira.uka.de

Applications should be sent to: Prof. A.
Waibel, Director, Interactive Systems Labs
Fakultät für Informatik
Universität Karlsruhe
Am Fasanengarten 5
D-76131 Karlsruhe
Germany
_________________________________________________________________________________
5. Speech Tech Positions at Multitel, Belgium
- Junior speech technologies engineer
- Experienced speech technologies engineer

* MULTITEL: MULTITEL is a competency center that has spun off from the Faculté Polytechnique de Mons, Belgium. Its activities include innovative research and development in the fields of speech processing, image processing, telecommunications and networking. MULTITEL has strong expertise in speech technologies through participation in national and international research and development projects.

* Positions and profiles: Multitel is seeking to strengthen its Speech Technologies Group. Candidates with M.S. or Ph.D. degrees in Electrical Engineering, Computer Science and related fields are invited to submit their application. The candidates must be fluent in English and willing to learn French.

Junior candidates (B.S. or M.S. degree) are sought. They should have good programming skills (C/C++), autonomy and the ability to work in a team. Qualifications and knowledge in the relevant fields of signal processing, speech processing, statistical inference, data mining and human-computer interfaces are highly desirable.

We also seek candidates with a Ph.D. degree or similar R&D experience in academia or industry (> 4 years). We expect candidates with hands-on experience and a demonstrated record of achievements in pre-competitive and/or applied research in the relevant fields of sound and speech processing, ASR decoding technology, voice-driven systems, and natural language processing applied to ASR. The position offers the opportunity to exploit and extend your talents in applied research projects with a pan-European dimension (EC-supported and Eureka projects). These positions offer competitive salary and benefits.
* Applications (Ref. 2004/01)
You are invited to fax or send applications, including an application letter with a statement of R&D interests and a CV, to the following address (please include the reference number in your application letter):

MULTITEL asbl
Service du recrutement
Parc Scientifique Initialis
Avenue Copernic, 1
7000 MONS
BELGIUM
Fax: +32(0)65/37.47.29

=================================================================================
JOURNALS and BOOKS
====================
-Call for Papers: Special Issue of Speech Communication on Error Handling in Spoken Dialogue Systems

Editors:
Rolf Carlson, KTH
Julia Hirschberg, Columbia University
Marc Swerts, University of Antwerp and Tilburg University

Spoken dialogue systems, in real applications as well as in research, have attracted increased attention in recent years. Given the limitations of current speech technologies, both in recognition and understanding and in generation, this interest in `real' systems has led to an increased awareness of the problems raised by system errors. These errors may lead to increased confusion for both users and the system in the rest of the dialogue. The need to devise better strategies for detecting and dealing with problems in human-machine dialogues has become critical for spoken dialogue systems. After a workshop held in August 2003 on this topic, we are now soliciting journal papers not only from workshop participants but also from other researchers for a special issue on "Error Handling in Spoken Dialogue Systems."

Submissions are invited on the following broad topic areas:
- What can we learn from errors in human-human and Wizard-of-Oz systems that will help us to handle errors in human-machine dialogue systems?
- How do systems detect when a dialogue is `going wrong'? How do they define such conditions? What factors are the key contributors to and indicators of `bad' dialogues?
- How do systems identify their own errors?
- What are the most important causes of such errors, from the user side (e.g. out-of-vocabulary words, non-native accent or dialect, disfluencies, hyperarticulated speaking style, gender, age, lack of experience with the system) and from the system side (e.g. inappropriate prompts, poor confidence modeling, dialog modeling failures)? How difficult is it to determine the causes of particular errors?
- How can we predict which dialogues will be successful? How should we define `success'? What features can best predict it? How can we evaluate system success? How can we compare different error-handling strategies?
- What mechanisms can be devised to allow systems to recover from errors gracefully? Can we develop adaptive strategies to identify patterns of error and respond accordingly?
- What sorts of behavior do users exhibit when faced with system errors? Can these be taken into account in error handling?
- What measures (better prompts, anticipation of likely errors, better help information) can be taken to minimize potential errors?

Important dates:
Submissions due: February 1, 2004
First notification of decisions: May 1, 2004

Submission requirements: Papers should follow the requirements for Speech Communication submissions, as specified at .
____________________________________________________________________________________
-Elsevier, the publisher of the official ISCA journal Speech Communication, has one series of print copies of all volumes of Speech Communication available for a research institute that is active in speech research but is not in a position to acquire the full archive itself. Parties who are interested in this offer are requested to state their interest to Hilde van der Togt (h.togt@elsevier.com), who, as Publishing Editor, is responsible for the journal at Elsevier. ISCA and the Speech Communication editors will collaborate with Elsevier to select the deserving institute among the applicants.
-IEEE Transactions on Speech and Audio Processing is preparing a special issue on Data Mining of Speech, Audio and Dialog. Submission deadline: July 1st, 2004 (see CFP in attached documents)

-Papers accepted for future publication in Speech Communication. Full text is available on http://www.sciencedirect.com for Speech Communication subscribers and subscribing institutions: click on Publications, then on Speech Communication, then on Articles in press. The list of papers in press appears, and a .pdf file for each paper is available.

1. Chaojun Liu and Yonghong Yan, Robust state clustering using phonetic decision trees, In Press, Uncorrected Proof, Available online 29 December 2003
2. Sherif Abdou and Michael S. Scordilis, Beam search pruning in speech recognition using a posterior probability-based confidence measure, In Press, Uncorrected Proof, Available online 24 December 2003
3. V. Kamakshi Prasad, T. Nagarajan and Hema A. Murthy, Automatic segmentation of continuous speech using minimum phase group delay functions, In Press, Uncorrected Proof, Available online 24 December 2003
4. René Carré, From an acoustic tube to speech production, In Press, Uncorrected Proof, Available online 18 December 2003
5. Andrew N.
Pargellis, Hong-Kwang Jeff Kuo and Chin-Hui Lee, An automatic dialogue generation platform for personalized dialogue applications, In Press, Corrected Proof, Available online 5 December 2003
6. Jan Zera, Speech intelligibility measured by adaptive maximum-likelihood procedure, In Press, Corrected Proof, Available online 3 December 2003

==============================================================================
FUTURE INTERSPEECH CONFERENCES
================================
-Interspeech (ICSLP) 2004, Jeju, Korea, October 5-9, 2004 (see CFP in attached files)
-Interspeech (Eurospeech) 2005, Lisbon, Portugal, September 4-8, 2005
------------------------------------------------------------------------------
FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOPS (ITRW)
==================================================
Publication policy: below you will find very short announcements of future events. The full call for participation appears in the attached files ext#.doc or ext#.pdf only once; afterwards, events are referred back to a previous issue of ISCApad. See also our Web pages (http://www.isca-speech.org) on conferences and workshops.
-2004: A Speaker Odyssey, UPMadrid, 31 May-4 June 2004, http://www.odyssey04.org/

-5th ISCA Speech Synthesis Research Workshop, Carnegie Mellon University, Pittsburgh, USA, June 14-16, 2004, http://www.ssw5.org (see attached document)

-InSTIL/ICALL Symposium 2004: NLP and Speech Technologies in Advanced Language Learning Systems, 17-19 June 2004, Venice, Italy. Submission deadline: February 21st, 2004 (see ISCApad 66)

-NOLISP'05: Non-linear speech processing, April 19-22, 2005, Barcelona, Spain, organized by COST 277. Contact person: Marcos Faundez-Zanuy (faundez@eupmt.es) (see ISCApad 66)

===========================================================================
FUTURE ISCA SUPPORTED EVENTS
============================
-Speech Prosody 2004, the second International Conference on Prosody
Date: March 23 (Tuesday) to 26 (Friday), 2004
Venue: Nara New Convention Center, Nara, Japan
Contact person: Keikichi Hirose, e-mail: pro-office@gavo.t.u-tokyo.ac.jp
Web: http://www.gavo.t.u-tokyo.ac.jp/sp2004/ (see ISCApad 56)

-International Symposium on Tonal Aspects of Languages: Emphasis on Tone Languages (Tonal Symposium China 2004), March 28-30, 2004, Beijing, P.R. China. A satellite of Speech Prosody 2004.
Organizer: The Institute of Linguistics, Chinese Academy of Social Sciences (CASS).
Symposium website: http://www.tal2004.com

-The XXVth Journées d'Etude sur la Parole (JEP), Fes, Morocco, April 19-22, 2004 (in conjunction with TALN 2004, Traitement automatique des langues naturelles).
Contact: Noel Nguyen (mailto:jep-taln@lpl.univ-aix.fr), URL: http://www.lpl.univ-aix.fr/jep-taln04/, electronic list: http://mailup.univ-mrs.fr/wws/info/jep-taln (see ISCApad 61)
- ICA 2004, International Congress on Acoustics, April 4-9, 2004, Kyoto, Japan. Theme: Acoustic science for quality of life. http://www.ica2004.or.jp/ Congress secretariat: Dept. of Environmental Psychology, Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka, 565-0871 Japan. Fax: +81 6 6879 8025. Email: secretariat@ica2004.or.jp (see ISCApad 63)
- HLT-NAACL Workshop on Interdisciplinary Approaches to Speech Indexing and Retrieval, Boston, May 6, 2004. Deadline: February 8th, 2004 (see CFP in attached files)
- LREC 2004, Lisbon (Portugal), 24-30 May 2004. Chairman: Khaled Choukri, choukri@elda.fr, http://www.lrec-conf.org (see ISCApad 62)
- 4th International SALTMIL (ISCA SIG) LREC workshop on First Steps for Language Documentation of Minority Languages: Computational Linguistic Tools for Morphology, Lexicon and Corpus Compilation, 24 May 2004, Lisbon, Portugal (see attached files)
- Affective Dialog Systems (ADS'04), Kloster Irsee, Germany, June 14-16, 2004. http://www.sigmedia.org/ADS04 (see ISCApad 66); extended deadline in January.
=============================================================================
FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS
===========================================
- The 1st International Joint Conference on Natural Language Processing, organized by the Asia Federation of NLP associations (AFNLP). Website: www.cipsc.org.cn/IJCNLP-04/ Main Conference: March 22-24, 2004; Workshops: March 25, 2004. Sanya, Hainan island, China. http://www.regenttour.com/chinaplanner/hainan/ (see ISCApad 66)
- The 4th Workshop on Asian Language Resources (satellite workshop of AFNLP). Extended deadlines: paper submission, January 3, 2004; notification of acceptance, January 15, 2004; camera-ready papers due January 24, 2004.
Workshop date: March 25, 2004 (Thursday)
- Second COST 275 Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications (Call for Papers), 25-26 March 2004, Vigo, Spain. http://cost275.gts.tsc.uvigo.es (see attached files for important updated information)
- HLT/NAACL 2004, Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, May 2-7, 2004, Boston, Mass., USA. http://www.hlt-naacl04.org (see ISCApad 64)
- HLT/NAACL 2004 Workshop on Spoken Language Understanding for Conversational Systems, The Park Plaza Hotel, Boston, Massachusetts, May 7, 2004 (see ISCApad 67)
- HLT/NAACL 2004 Student Research Workshop at the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, May 2-7, 2004, Boston, Massachusetts, USA. Submission deadline: January 30, 2004; notification: March 1, 2004; camera-ready papers: March 15, 2004. Tentative workshop date: May 2, 2004 (final date will be posted on the web site)
- HIGHER-LEVEL LINGUISTIC AND OTHER KNOWLEDGE FOR AUTOMATIC SPEECH PROCESSING (workshop in conjunction with NAACL/HLT 2004), The Park Plaza Hotel, Boston, Massachusetts, Thursday, May 6, 2004 (see attached files)
- ICASSP 2004, Montreal, Canada, May 17-21, 2004. http://icassp2004.org
- From Sound to Sense: Fifty+ Years of Discoveries in Speech Communication, 12-13 June 2004, MIT, Cambridge, MA, USA. http://www.rle.mit.edu/soundtosense/ (see CFP in attached files)
- Incremental parsing: bringing engineering and cognition together, workshop at ACL 2004, Barcelona, Spain, 25 July 2004 (see attached files)
- Inter-Noise 2004, 33rd International Congress and Exposition on Noise Control Engineering, Prague, Czech Republic, August 22-25, 2004. Abstract submission deadline: January 31, 2004 (see attached document)
- EUSIPCO 2004, 12th European Signal Processing Conference.
September 7-10, 2004, Vienna, Austria. http://www.nt.tuwien.ac.at/eusipco2004/ Chair: Prof. Wolfgang Mecklenbrauker, Institute of Communications and Radio-Frequency Engineering, Vienna University of Technology, Gusshausstrasse 25/389, A-1040 Vienna. w.mecklenbraeuker@tuwien.ac.at (see ISCApad 63)
- Seventh International Conference on TEXT, SPEECH and DIALOGUE (TSD 2004), Brno, Czech Republic, 8-11 September 2004. http://nlp.fi.muni.cz/tsd2004/ (see ISCApad 67)
- CLEF 2004 Workshop (evaluation campaign), 16-17 September 2004 (see attached files)
- 2004 IEEE International Workshop on Multimedia Signal Processing (MMSP 04), September 29 - October 1, 2004, Siena, Italy. http://mmsp.unisi.it
- ACM Multimedia 2004, October 10-15, 2004, New York, NY, USA. http://www.mm2004.org (see attached document)
-----------------------------------------------------------------------------