Subject: ISCApad #24
Date: Fri, 12 May 2000 08:27:56 +0100
From: Isabel Trancoso
To: isca_members@isca-speech.org
CC: info@isca-speech.org

============================================================================
ISCApad number 24                                             May 11th, 2000
============================================================================

Dear ISCA members,

Here is the table of contents of ISCApad #24:

ISCA news:
----------
- Extended ISCA Board meeting in Bonn: new coordinators
  A separate message about the composition and responsibilities of the
  new ISCA Board will follow soon.
- EduSIG list website: http://www.mailbase.ac.uk/lists/isca-edusig

Future conferences and workshops:
---------------------------------
- Odyssey Speaker Recognition Workshop: first announcement
  The Hebrew University of Jerusalem, Israel
  June 18-22, 2001
  http://www.odyssey.westhost.com/
  We invite you to "2001, A Speaker Odyssey", an ISCA Tutorial and
  Research Workshop on speaker recognition, organized in cooperation
  with the IEEE Signal Processing Society and in collaboration with the
  ISCA-SIG SPLC.
  Proposals due: 15 January 2001
- ICSLP'2000: latest news on special sessions (see below)
- SST-2000: Call for Papers (see below)
  Canberra, Australia, 4-7 December 2000
  http://www.cs.adfa.edu.au/sst2000
- SPECOM 2000, International Workshop on Speech and Computer
  25-28 September 2000, St. Petersburg, Russia
  http://www.spiiras.nw.ru/speech
  Organized by the St. Petersburg Institute for Informatics and
  Automation (SPIIRAS). The deadline for abstracts is 15 May 2000, but
  may be extended.

Job offers:
-----------
- Job on the NESPOLE! European project
- Post-doctoral fellowship in France

ISCA greetings,
Isabel Trancoso

============================================================================
EduSIG list website:

Hi ISCA people,

This is to announce a new special interest group of the International
Speech Communication Association. The group is called EduSIG, and a
description of it was recently sent to all ISCA members via your
newsletter (ISCAPad #22). To sign up, please access our Mailbase list
website and fill in the appropriate form. The website lists the
facilities on offer.

Access to the site: http://www.mailbase.ac.uk/lists/isca-edusig

Very best wishes,
Mark Tatham

============================================================================
ICSLP'2000 - latest news

On behalf of the organizing committee of ICSLP 2000 (6th International
Conference on Spoken Language Processing), I would like to extend our
cordial invitation to you and hope you can join us to share in the
success of the coming conference. I would also like to announce that
the deadline for submitting paper abstracts has been extended to
May 20, 2000. However, we hope to receive as many of your abstracts as
possible before then.

For your information, we have invited Prof. Zongji Wu (China),
Prof. Karalyn Eve Patterson (U.K.) and Prof. Kenneth N. Stevens
(U.S.A.) to give keynote addresses at the conference. In addition, we
have arranged an attractive set of exhibitions, satellite meetings,
technical visits to local research institutes and universities, and
colorful local tours in Beijing, as well as post-conference tours to
other beautiful Chinese cities.

The topics of the special sessions are:
1. Prosody and Paralinguistics
2. Rules and Corpora: Description and Acquisition
3. Cross- and Multi-lingual Acquisition of Spoken Language
4. Speech and Acoustic Information Processing using Multiple Observations
5. Trans-modal, Multi-modal Human-Computer Interaction
6. Problems and Prospects in Trans-lingual Communication
7. Language Resources and Evaluation: Globalized Efforts and Directions
8. Speech Production Control

Best regards, and looking forward to seeing you in Beijing.

Sincerely yours,
Dinghua Guan
Chairman of ICSLP 2000

============================================================================
CALL FOR PAPERS

8th Australian International Conference on Speech Science and Technology
SST-2000
Canberra, Australia, 4-7 December 2000
http://www.cs.adfa.edu.au/sst2000

SST-2000 covers fundamental spoken language research in areas such as
linguistics, phonetics and language acquisition, together with
technologically motivated research such as speech and speaker
recognition, speech synthesis and speech understanding systems, plus
the use of these technologies in domains such as business, health care
and education. Papers are welcome in all of the above areas and on any
other topic related to speech science and technology.

All submissions will be peer-reviewed by Australian and international
reviewers. Submissions can take the form of either a full paper or an
abstract. Full paper submissions should be no longer than 6 pages,
while abstract submissions are limited to 500 words. For full details
on the conference, the Call for Papers, deadlines, and submission and
registration procedures, please check the conference web page at
http://www.cs.adfa.edu.au/sst2000

All accepted papers will be published in the conference proceedings.
Papers judged outstanding by the SST-2000 reviewers will be recommended
to the editor of the journal "Speech Communication". Selected papers
will also be published in a special issue of "Acoustics Australia"
(April 2001).

* Submission deadline: 21 July 2000
* Notification of acceptance: 1 September 2000
* Camera-ready copy and early registration deadline: 13 October 2000
* Conference: 4-7 December 2000

Contact: Spike Barlow (Secretary, SST-2000)
spike@cs.adfa.edu.au

============================================================================
Job on the NESPOLE! European project

CLIPS laboratory (Communication Langagière et Interaction
Personne-Système), Grenoble, France

The CLIPS laboratory expects to appoint a post-doc or engineer, to be
based in the GEOD team of the CLIPS laboratory in Grenoble, from
September 2000. Created in 1995, CLIPS works on themes related to
human-computer interfaces, interactive systems, multimedia systems and
virtual reality. See http://www-clips.imag.fr/clips-en.html for a
description of current work.

The appointee will be expected to conduct and publish research of the
highest standard in the framework of the NESPOLE! European project
(Negotiating through SPOken Language in E-commerce). This project deals
with speech-to-speech translation research in 4 languages (English,
German, French and Italian). Six partners are involved in the project:
Carnegie Mellon University (Pittsburgh, USA), University of Karlsruhe
(Karlsruhe, Germany), ITC-irst (Trento, Italy), CLIPS (Grenoble,
France), AETHRA (Italy) and the Trentino Tourist Board (Italy).

The GEOD group of CLIPS is involved in several aspects of the
speech-to-speech translation engine:
- system architecture design and implementation
- multimodal environment platform design and testing
- improvement of the speech recognition and synthesis modules

Preference will be given to applicants with a relevant Ph.D.
or a degree in Computer Science, Software Engineering, Information
Technologies or another relevant field. A background and interests in
areas such as network architecture (IP, H.320), multimodal interaction,
speech processing or spoken dialogue systems will be an advantage.

The appointment will be for 1 year (from September 2000) and the gross
salary is about 14000 FF / month. Applications should take the form of
a curriculum vitae including a list of publications and references.
Applications may be sent by email or by regular mail to:

Laurent.Besacier@imag.fr
or
Laurent BESACIER
Laboratoire CLIPS - Equipe GEOD
Université Joseph Fourier
BP 53 - 38041 GRENOBLE Cedex 9
Phone: (33) 4 76 63 56 95

=============================================================================
ONE YEAR POST-DOCTORAL FELLOWSHIP

Institut national des télécommunications (INT), Evry, France, and
Laboratoire d'Informatique de Paris VI (LIP6), Paris, France

Pen interfaces for mobile devices and recognition of handwriting

For background, see, for instance:
www.innovate.bt.com/showcase/smartquill/
www.smartpen.net/
www.cross-pcg.com/
www.paragraph.com/
http://www.fxpal.xerox.com/PersonalMobile/xlibris/
http://hwr.nici.kun.nl/pen-computing/
www-poleia.lip6.fr/CONNEX/HWR

OBJECTIVES OF THIS PROJECT:

In the framework of a collaboration with CNET (France Telecom Research
Center), we will work on the integration of the REMUS software with a
"smart" pen in order to build a prototype.

a/ Extension of the REMUS software
REMUS is a software package that can, at the moment, recognize isolated
words. We want to extend its functionality to the recognition of
handwritten sentences. This problem is interesting because most future
pen applications, whether note taking, e-mail or annotation, will
require the recognition of word sequences.

b/ Pen software integration
In a second phase, the extended software will be integrated with an
electronic pen. A good knowledge of the sensor, as well as the capture
of the signals emitted by the pen, will be necessary. These signals
will also have to be adapted to the standard now used in the REMUS
software. By the end of the project, a prototype will have to be built.

THE APPLICANT MUST HOLD A PHD, IF POSSIBLE IN THE DOMAIN OF HANDWRITING
OR SPEECH RECOGNITION, AND MUST ALSO HAVE AN APTITUDE FOR THE
DEVELOPMENT OF SOFTWARE APPLICATIONS.

CONTACTS:

Bernadette DORIZZI
Institut national des télécommunications, INT
9, rue Charles Fourier
91011 EVRY CEDEX
Phone: 33 1 60 76 44 30
E-mail: Bernadette.Dorizzi@int-evry.fr

Patrick GALLINARI
Laboratoire d'Informatique de PARIS 6, LIP6
Bureau B.747, 8, rue du Capitaine Scott, 75015 PARIS
Phone: 33 1 44 27 73 70
E-mail: Patrick.Gallinari@lip6.fr

============================================================================
All additional information is available at the ISCA web site:
http://www.isca-speech.org

The ISCA secretariat can be contacted at: info@isca-speech.org
Requests concerning membership, Speech Communication and the ordering
of Proceedings should be forwarded to the Secretariat.

For message distribution on isca_list, contact: public@isca-speech.org
Short messages will be forwarded on a monthly basis to all ISCA members.
=============================================================================