Editor: Chris Wellekens
Dear Members,
We apologize for the difficulty some of you had reading our previous issue.
For technical reasons we are still using the same software, but ISCApad 113 (November 2007) is already posted on our web site http://www.isca-speech.org/. Helen Meng and her student Laurence Liu, our system engineer Matt Bridger and I are working on a version that will be fully accessible from our website and will also be pushed to members: a summary with links will be emailed to all members.
I invite you to send me, at public@isca-speech.org, information about new books, job offers, conferences and workshops, and anything else you would like to see displayed in ISCApad. Wherever the information is hosted, readers will be informed and redirected to the details. Remember that wide dissemination of information within the speech community starts with you: spread the word via ISCApad!
Professor em. Chris Wellekens
Institut Eurecom France
ISCA News
-
ISCA Distinguished Lecturers Program
Announcement and Call for Nominations
Introduction
ISCA started a new Distinguished Lecturers Program in 2006, sending Distinguished Lecturers to different parts of the world to give lectures that help promote research on speech science and technology. Two Distinguished Lecturers were selected at the end of 2006, Professor Chin-Hui Lee (Georgia Institute of Technology, USA) and Professor Marc Swerts (Tilburg University, The Netherlands), with two-year terms covering 2007-2008. Professor Marc Swerts completed the first Distinguished Lecturer Tour, to Latin America, in June 2007, and the next, by Professor Chin-Hui Lee to South Asia, will take place in November 2007.
Nominations and Selection
A Distinguished Lecturers Committee has been organized; its chair for 2006-2009 is Professor Sadaoki Furui. Nominations of candidates for the 2008-2009 term are now invited. Each nomination should include information (a short biography, selected publications, website, etc., plus topics/titles of up to 3 possible lectures) of no more than 2 pages, to be sent to the Committee Chair (furui@cs.titech.ac.jp).
Only candidates who receive the highest numbers of votes from the Committee, exceeding a minimum threshold of 2/3, are selected. Nominations for this year should be received before the deadline of 15 November 2007.
Commitments of the Lecturers
The candidates selected by the Committee will be contacted and asked to commit to making time available for Lecture Tours, including the possibility of traveling to regions identified as under-represented in ISCA programs (China, India, Eastern Europe, Latin America, South and West Asia, Africa). Those who agree are announced as ISCA Distinguished Lecturers.
Distinguished Lecturers Tours
Distinguished Lecturers Tours are arranged by ISCA upon invitation only. The local hosts are responsible for making and funding the local arrangements, including accommodation and meals, while ISCA pays travel costs. A Distinguished Lecturer Tour can be realized when at least three lectures are included, preferably in at least two different locations.
More details of this Program can be found on the ISCA website. -
ISCA Fellow Program
In 2007, ISCA will begin its Fellow Program to recognize and honor outstanding members who have made significant contributions to the field of speech communication science and technology.
To qualify for this distinction, a candidate must have been an ISCA member for five years or more, with a minimum of ten years' experience in the field. Nominations may be made by any ISCA member (see Nomination Form).
The nomination must be accompanied by references from three current ISCA Fellows (or, during the first three years of the program, by ISCA Board members). A Fellow may be recognized for his/her outstanding technical contributions and/or continued significant service to ISCA. The candidate's technical contributions should be summarized in the nomination in terms of publications, patents, projects, prototypes and their impact on the community.
Fellows will be selected by a Fellow Selection Committee of nine members who each serve three-year terms. In the first year of the program, the Committee will be formed by ISCA Board members.
Over the next three years, one third of the members of the Selection Committee will be replaced by ISCA Fellows until the Committee consists entirely of ISCA Fellows. Members of the Committee will be chosen by the ISCA Board.
The Committee will hold a virtual meeting in June to evaluate the nominations submitted up to the preceding month. The total number of Fellows selected in any one year will not exceed one third of one percent (0.33%) of the total ISCA membership in that year. Since ISCA will be initiating this process from its present membership, there will be a larger pool to consider initially. ISCA will therefore select Fellows up to 1% of the membership for the first three years, and then adopt the 0.33% limit for subsequent years. -
Organization of INTERSPEECH 2011: CALL FOR PROPOSALS
Individuals or organizations interested in organizing:
INTERSPEECH 2011
should submit by 25 November 2007 a brief preliminary proposal, including:
* The name and position of the proposed general chair and other principal organizers.
* The proposed period in September/October 2011 when the conference would be held.
* The institution assuming financial responsibility for the conference and any other cooperating institutions.
* The city and conference center proposed (with information on that center's capacity).
* The commercial conference organizer (if any).
* Information on transportation and housing for conference participants.
* Likely support from local bodies (e.g. governmental).
* A preliminary budget.
Interspeech conferences may be held in any country, although they generally should not be held on the same continent in two consecutive years. The coming Interspeech events will take place in Antwerp (Belgium, 2007), Brisbane (Australia, 2008), Brighton (UK, 2009) and Makuhari (Japan, 2010).
Guidelines for the preparation of the proposal are available on our website.
Additional information can be provided by Tanja Schultz.
Those who plan to put in a bid are asked to inform ISCA of their intentions as soon as possible. They should also consider attending Interspeech 2007 in Antwerp to discuss their bids, if possible.
Proposals should be submitted by email to the above address. Candidates fulfilling basic requirements will be asked to submit a detailed proposal by 28 February 2008. -
GOOGLE SCHOLAR AND ISCA PROCEEDINGS
The September ISCApad stated that ISCA's online proceedings Archive (http://www.isca-speech.org/archive/) has been indexed by Google Scholar (http://scholar.google.com/). It turns out that the indexing is incomplete.
The title, author list, and abstract for every paper in the Archive appear to be indexed as of September 2007, although the search results sometimes point to a copy elsewhere on the web rather than to the Archive.
However, there are many papers for which the full text in the PDF file is not indexed. This affects keyword searches as well as citation extraction for citation tracking. We are working with the Google Scholar team to resolve this.
Also, please note that there may be a time lag between when a new event is added to the Archive in the future and when it appears in the Google Scholar index.
The summer 2007 issue of the IEEE SPS Speech & Language Technical Committee newsletter mentioned that the ISCA Archive had been opened to the Google Scholar crawlers, but there may have been some delay between the time that newsletter was sent out and the time the Google Scholar index reached its current state.
SIG activities
-
A list of Speech Interest Groups can be found on our website.
Courses, Internships
Books, Databases, Software
-
Books
La production de la parole
Author: Alain Marchal, Universite d'Aix en Provence, France
Publisher: Hermes Lavoisier
Year: 2007
Speech Enhancement: Theory and Practice
Author: Philipos C. Loizou, University of Texas, Dallas, USA
Publisher: CRC Press
Year: 2007
Speech and Language Engineering
Editor: Martin Rajman
Publisher: EPFL Press, distributed by CRC Press
Year: 2007
Human Communication Disorders / Speech Therapy
This interesting series is listed on the Wiley website.
Incursões em torno do ritmo da fala
Author: Plinio A. Barbosa
Publisher: Pontes Editores (Campinas)
Year: 2006 (released 11/24/2006)
(In Portuguese, abstract attached.) Website
Speech Quality of VoIP: Assessment and Prediction
Author: Alexander Raake
Publisher: John Wiley & Sons, UK-Chichester, September 2006
Website
Self-Organization in the Evolution of Speech, Studies in the Evolution of Language
Author: Pierre-Yves Oudeyer
Publisher: Oxford University Press
Website
Speech Recognition Over Digital Channels
Authors: Antonio M. Peinado and Jose C. Segura
Publisher: Wiley, July 2006
Website
Multilingual Speech Processing
Editors: Tanja Schultz and Katrin Kirchhoff
Publisher: Elsevier Academic Press, April 2006
Website
Reconnaissance automatique de la parole: Du signal à l'interprétation
Authors: Jean-Paul Haton, Christophe Cerisara, Dominique Fohr, Yves Laprie, Kamel Smaili
392 Pages
Publisher: Dunod
Job openings
-
We invite all laboratories and industrial companies which have job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (Also have a look at http://www.isca-speech.org/jobs.html as well as the Jobs section of http://www.elsnet.org/.)
-
Speech Engineer/Senior Speech Engineer at Microsoft, Mountain View, CA, USA
Job Type: Full-Time
Send resume to Bruce Buntschuh
Responsibilities:
Tellme, now a subsidiary of Microsoft, is a company focused on delivering the highest quality voice-recognition-based applications while providing the highest possible automation to its clients. Central to this focus are the accuracy and performance of the speech recognition used by these applications. The candidate will be responsible for the development, performance analysis, and optimization of grammars, as well as overall speech recognition accuracy, in a wide variety of real-world applications in all major market segments. This is a unique opportunity to apply and extend state-of-the-art speech recognition technologies to emerging spaces such as information search on mobile devices.
Requirements:
· Strong background in engineering, linguistics, mathematics, machine learning, and/or computer science.
· In-depth knowledge and expertise in the field of speech recognition.
· Strong analytical skills with a determination to fully understand and solve complex problems.
· Excellent spoken and written communication skills.
· Fluency in English (Spanish a plus).
· Programming capability with scripting tools such as Perl.
Education:
MS, PhD, or equivalent technical experience in an area such as engineering, linguistics, mathematics, or computer science. -
Software Development Engineer, Automatic Speech Recognition at Microsoft Redmond WA, USA
Send resume to Yifan Gong
Speech Technologies and Modeling- Speech Component Group
Microsoft Corporation
Redmond WA, USA
Background:
Microsoft's Speech Component Group has been working on automatic speech recognition (SR) in real environments. We develop SR products for multiple languages for mobile devices, desktop computers, and communication servers. The group now has an open position for software development engineers to work on our acoustic and language modeling technologies. The position offers great opportunities for innovation and technology and product development.
Responsibilities:
Design and implement speech/language modeling and recognition algorithms to improve recognition accuracy.
Create, optimize and deliver quality speech recognition models and other components tailored to our customers' needs.
Identify, investigate and solve challenging problems in the areas of recognition accuracy from speech recognition system deployments.
Improve the speech recognition language-expansion engineering process to ensure product quality and scalability.
Required competencies and skills:
Passion for speech technology and quality software; demonstrated ability in the design and implementation of speech recognition algorithms.
Strong desire to achieve excellent results, strong problem-solving skills, ability to multi-task, handle ambiguities, and identify issues in complex SR systems.
Good software development skills, including strong aptitude for software design and coding. 3+ years of experience in C/C++. Programming with scripting languages highly desirable.
MS or PhD degree in Computer Science, Electrical Engineering, Mathematics, or related disciplines, with strong background in speech recognition technology, statistical modeling, or signal processing.
Track record of developing SR algorithms, or experience in linguistics/phonetics, is a plus. -
PhD Research Studentship in Spoken Dialogue Systems- Cambridge UK
Applications are invited for an EPSRC sponsored studentship in Spoken Dialogue Systems leading to the PhD degree. The student will join a team led by Professor Steve Young working on statistical approaches to building Spoken Dialogue Systems. The overall goal of the team is to develop complete working end-to-end systems which can be trained from real data and which can be continually adapted on-line. The PhD work will focus specifically on the use of Partially Observable Markov Decision Processes for dialogue modelling and techniques for learning and adaptation within that framework. The work will involve statistical modelling, algorithm design and user evaluation. The successful candidate will have a good first degree in a relevant area. Good programming skills in C/C++ are essential and familiarity with Matlab would be useful.
The studentship will be for 3 years, starting in October 2007 or January 2008. It covers University and College fees at the Home/EU rate and a maintenance allowance of 13,000 pounds per annum. Potential applicants should email Steve Young with a brief CV and a statement of interest in the proposed work area. -
Elektrobit seeks SW-Engineers (m/f) for multimodal HMI Solutions (Speech Dialog)
Elektrobit Automotive Software, located in Erlangen, Germany, delivers ready-for-mass-production implementations of a variety of automotive-industry software standards, as well as services for implementing large software projects. The portfolio is complemented by tools for HMI and control-device development and by in-house developments such as a navigation solution. We are developing solutions for multimodal HMIs in automotive infotainment/navigation systems. One focus is speech dialog systems. The challenge lies in realizing natural speech dialogue systems for different applications (navigation, MP3 player, etc.) in an embedded environment. You will be designing and developing such speech solutions.
You have know-how in one or more of the following areas:
Experience in project co-ordination
Programming in C/C++, Perl, (Java) for Windows and/or Linux
Speech recognition
Multimodal dialog systems
Speech synthesis /TTS
SW processes and SW tests
Experience in object-oriented programming
Experience with Embedded Operating Systems
Your job description
Project coordination
Coordination of suppliers and of requirements from different applications
Development and specification of concepts for speech-related SW modules for different applications in embedded environments
Implementation of multimodal HMIs
Integration of speech modules in HMIs
Testing
We expect from you:
A degree in IT, electrical/electronic engineering, computational linguistics or similar
Good working knowledge of German and English
Innovative streak
Willingness to take responsibility in international teams
We offer you:
A motivating working environment
Challenging work
Support in advancement
Please apply at www.elektrobit.com -> Automotive Software -> jobs
If you have any further questions Mr. Schrör (Tel.-Nr. +49 (9131) 7701-516) or Mr. Huck (-217) will gladly answer them. -
Technical Project Manager, NLP / Text Mining - based in Nord-Pas de Calais, France
Company:
You will join the digitization division of the European leader in information processing (1,200 employees in France, Europe, Asia and the USA), internationally recognized for its expertise and know-how in the service of its clients. Its clients are mainly institutions (research centres, major libraries, patent offices, ...) and the major international players in publishing. The Content Mining activity notably addresses customers in the pharmaceutical and bioinformatics sectors.
Position:
In coordination with the sales and production departments, you are responsible for the successful delivery of projects and for implementing the solutions best suited to the clients' needs and to the projects' economic constraints. To this end, you develop a thorough knowledge of the company's resources and solutions and of their applications. Your missions are organized around three activities:
1/ Pre-sales consulting (25% of the position)
- Accompany the sales team with clients and prospects, with the aim of informing, understanding and identifying needs, and provide them with a complete technical response;
- Define the solution(s) best suited to cover all of the needs, in line with the technical means available; where necessary, reformulate the client's request according to the company's know-how;
- In coordination with the sales representative, present the technical and financial proposal to the client and agree on the chosen solution;
- Identify unmet or future needs among clients;
- Carry out both commercial and technological watch.
2/ Technical project management (75% of the position):
- Provide technical and financial solutions to clients' needs in collaboration with the company's technical departments (production, R&D, methods, ...);
- Ensure the implementation of the solution, relying on all the company's players, with client satisfaction as the objective.
3/ Administration:
- Produce activity reports on all the projects handled;
- Act as the interface between the sales and production departments and the client.
Profile:
- Higher education (Bac+4/5) in computer science.
- You have acquired 3 to 5 years' experience as a technical project manager in the following domains:
Text Mining, pattern recognition, language processing, linguistics, NLP.
Knowledge of electronic document management (GED) or Knowledge Management would be a plus.
- Autonomous in English in a professional context (spoken and written).
How to apply?
Enter your profile on the website of the recruitment agency e-Match consulting, mentioning reference 73.
Contact
Olivier Grootenboer +33.6.88.39.37.39
Recruitment & executive search agency e-Match consulting -
Linguistic Expert, based in Nancy, FRANCE
typically working in a laboratory dedicated to NLP.
Desired profile
- Background: engineering degree or DEA/DESS (postgraduate) degree in computer science / artificial intelligence
- 5 to 10 years' experience minimum
- Salary up to 45/48K€ depending on profile
- Aptitude for, or experience in, R&D
- Recognized skills in text mining (morpho-syntactic tagging, pattern recognition, semantic analysis, machine translation, learning and linguistics)
- Personal profile: MANAGER, team leader.
How to apply:
Enter your profile on the website of the recruitment agency e-Match consulting
Contact
Olivier Grootenboer +33.6.88.39.37.39
Recruitment & executive search agency e-Match consulting -
Senior Speech Engineer in UK
Presented by http://www.voxid.co.uk/
This description is NOT exhaustive and we would welcome applications from any speech developers who feel they can add value to this company. Please forward your CV with salary expectations as we may have an alternative position for you.
Job Role:
To develop and deliver products, meeting customer needs and expectations.
To undertake areas of research and development on speech recognition technology within the constraints/direction of the Client Business development plan.
Duties and Responsibilities:
*To assist in the ongoing development of speech technologies for commercial deployment.
*To further improve existing client products
*Organise and manage own work to meet goals and objectives
*Use personal judgement and initiative to develop effective and constructive solutions to challenges and obstacles
*Provide technical support to customers of the company on projects and activities when required.
*Document and maintain such records as required by the company operating procedures.
*Maintain an understanding of speech recognition technology in general and of the client's products and technology specifically
*Communicate and present to customers and others information about the client's product range and its technology.
*Provide technical expertise to other members of the team and to staff at the client
*R&D in various aspects of automatic speech recognition and related speech technologies (e.g. speech data mining, Keyword/phrase spotting, multilingual speech recognition, LVCSR).
*Adhere to local and externally relevant health and safety laws and policies and bring any matters to the attention of local management without delay
*Take responsibility for self-development and continuing personal development
Person specification
Good degree in a relevant subject
Further degree in the Speech Recognition field or 2 years' experience
Experience in a product based environment
Special Skills:
Product orientated
Experienced C++
Ability to work in an interdisciplinary team (Speech Scientists and software engineers)
Special Aptitudes:
Ability to put theory into practice and apply knowledge to develop products
Can work and communicate effectively with colleagues outside their own area of expertise.
Quick to develop new skills where required
Analytical, with good problem solving ability.
High level of attention to detail
Experience of multi-threaded and computationally intensive algorithms
Disposition:
Team Player but also able to work independently
Self-starter
Motivated & Enthusiastic
Results orientated
Location: UK
Salary: Dependent on experience
We are a specialist UK recruitment company seeking speech recognition developers to join one of our many UK speech companies. -
R&D Engineer
Presented by http://www.voxid.co.uk/
Job Description
As a member of a small team of research and development engineers you will work both independently and in teams developing algorithms and models for large vocabulary speech recognition. Good C and Linux shell scripting skills are essential, as is experience of either digital signal processing or statistical modelling. The ability to administer a network of Linux PCs is highly desirable, as are language/linguistic skills. Knowledge of Python and Perl would be advantageous.
In addition to the technical skills mentioned above, the successful candidate will have a proactive personality, excellent oral and written communication skills, and a desire to learn all aspects of speech recognition.
Key Skills
1st/2.1 degree in a numerate subject.
2+ years of C and Linux shell programming.
Excellent communication skills.
Experience of digital signal processing or statistical modelling (this could be an undergraduate/masters project).
Additional Skills
Linux system administration.
Experience with HTK.
Python and/or Perl programming.
Language/linguistic knowledge.
Salary: Commensurate with experience + discretionary share option scheme. -
R&D Language Modeller in UK
An excellent opportunity presented by http://www.voxid.co.uk/: we are looking for a self-motivated speech engineer who enjoys working in the dynamic and flexible environment of a successful company.
Requirements
*3+ years relevant industrial / commercial experience or very relevant academic experience
*In-depth knowledge of probabilistic language modelling, including estimation methods, smoothing, pruning, efficient representation, interpolation, trigger-based models
*Experience in development and/or deployment of speech recognition engines, with emphasis on efficient training techniques and large vocabulary systems
*Understanding of the technology and techniques used in ASR engines
*A good degree in an appropriate subject, PhD preferred
*Good software engineering skills, knowledge of scripting languages (e.g. shell scripting, Perl) and experience working under both Linux and Windows
Desirable
*Experience working with HTK, Sphinx and/or Julius
*Experience with probabilistic lattice parsing
*Have worked on a live application or service that includes speech recognition
Salary: 40K GBP
We would be happy to represent any speech candidates who believe they can add value to our global clients. -
Sound to Sense: 18 Fellowships in speech research
Sound to Sense (S2S) is a Marie Curie Research Training Network involving collaborative speech research amongst 13 universities in 10 countries. 18 Training Fellowships are available, of which 12 are predoctoral and 6 postdoctoral (or equivalent experience). Most but not all are planned to start in September or October 2007.
A research training network's primary aim is to support and train young researchers in professional and inter-disciplinary scientific skills that will equip them for careers in research. S2S's scientific focus is on cross-disciplinary methods for modelling speech recognition by humans and machines. Distinctive aspects of our approach include emphasis on richly-informed phonetic models that emphasize communicative function of utterances, multilingual databases, multiple time domain analyses, hybrid episodic-abstract computational models, and applications and testing in adverse listening conditions and foreign language learning.
Eleven projects are planned. Each can be flexibly tailored to match the Fellows' backgrounds, research interests, and professional development needs, and will fall into one of four broad themes.
1: Multilinguistic and comparative research on Fine Phonetic Detail (4 projects)
2: Imperfect knowledge/imperfect signal (2 projects)
3: Beyond short units of speech (2 projects)
4: Exemplars and abstraction (3 projects)
The institutions and senior scientists involved with S2S are as follows:
* University of Cambridge, UK (S. Hawkins (Coordinator), M. Ford, M. Miozzo, D. Norris, B. Post)
* Katholieke Universiteit, Leuven, Belgium (D. Van Compernolle, H. Van Hamme, K. Demuynck)
* Charles University, Prague, Czech Republic (Z. Palková, T. Duběda, J. Volín)
* University of Provence, Aix-en-Provence, France (N. Nguyen, M. d'Imperio, C. Meunier)
* University Federico II, Naples, Italy (F. Cutugno, A. Corazza)
* Radboud University, Nijmegen, The Netherlands (L. ten Bosch, H. Baayen, M. Ernestus, C. Gussenhoven, H. Strik)
* Norwegian University of Science and Technology (NTNU), Trondheim, Norway (W. van Dommelen, M. Johnsen, J. Koreman, T. Svendsen)
* Technical University of Cluj-Napoca, Romania (M. Giurgiu)
* University of the Basque Country, Vitoria, Spain (M-L. Garcia Lecumberri, J. Cenoz)
* University of Geneva, Switzerland (U. Frauenfelder)
* University of Bristol, UK (S. Mattys, J. Bowers)
* University of Sheffield, UK (M. Cooke, J. Barker, G. Brown, S. Howard, R. Moore, B. Wells)
* University of York, UK. (R. Ogden, G. Gaskell, J. Local)
Successful applicants will normally have a degree in psychology, computer science, engineering, linguistics, phonetics, or related disciplines, and want to acquire expertise in one or more of the others.
Positions are open until filled, although applications before 1 May 2007 are recommended for starting in October 2007.
Further details are available on the web about:
+ the research network (92 kB) and how to apply,
+ the research projects (328 kB). -
AT&T Labs - Research: Research Staff Positions - Florham Park, NJ
AT&T Labs - Research is seeking exceptional candidates for Research Staff positions. AT&T is the premier broadband, IP, entertainment, and wireless communications company in the U.S. and one of the largest in the world. Our researchers are dedicated to solving real problems in speech and language processing, and are involved in inventing, creating and deploying innovative services. We also explore fundamental research problems in these areas. Outstanding Ph.D.-level candidates at all levels of experience are encouraged to apply. Candidates must demonstrate excellence in research, a collaborative spirit, and strong communication and software skills. Areas of particular interest are:
- Large-vocabulary automatic speech recognition
- Acoustic and language modeling
- Robust speech recognition
- Signal processing
- Speaker recognition
- Speech data mining
- Natural language understanding and dialog
- Text and web mining
- Voice and multimodal search
AT&T Companies are Equal Opportunity Employers. All qualified candidates will receive full and fair consideration for employment. More information and application instructions are available on our website at http://www.research.att.com/. Click on "Join us". For more information, contact Mazin Gilbert (mazin at research dot att dot com).
-
Research Position in Speech Processing at UGent, Belgium
Background
Since March 2005, the universities of Leuven, Gent, Antwerp and Brussels have joined forces in a large research project called SPACE (SPeech Algorithms for Clinical and Educational applications). The project aims at contributing to the broader application of speech technology in educational and therapeutic software tools. More specifically, it pursues the automatic detection and classification of reading errors in the context of an automatic reading tutor, and the objective assessment of disordered speech (e.g. speech of the deaf, dysarthric speech, ...) in the context of computer-assisted speech therapy assessment. Specific to the target applications is that the speech is either grammatically and lexically incorrect or atypically pronounced. Therefore, standard technology cannot be applied as such in these applications.
Job description
The person we are looking for will be in charge of the data-driven development of word mispronunciation models that can predict expected reading errors in the context of a reading tutor. These models must be integrated into the linguistic model of the prompted utterance, so that the speech recognizer becomes more specific in its detection and classification of presumed errors than a recognizer using a more traditional linguistic model with context-independent garbage and deletion arcs. A further challenge is to make the mispronunciation model adaptive to the progress made by the user.
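Purely as an illustration of the kind of expansion such a mispronunciation model must produce (and not the project's actual, data-driven approach), the following Python sketch turns a prompted word's canonical phone sequence into weighted reading-error variants that could populate a recognition lexicon; the phone set, confusion rules and probabilities are invented.
# Hypothetical illustration only: rules and probabilities are invented.
from itertools import product

CONFUSIONS = {                       # canonical phone -> (error phone, probability)
    "b": [("d", 0.10), ("p", 0.05)],
    "v": [("f", 0.08)],
}

def mispronunciation_variants(phones, max_errors=1):
    """Expand a canonical pronunciation into weighted error variants."""
    per_phone = []
    for ph in phones:
        errs = CONFUSIONS.get(ph, [])
        keep = 1.0 - sum(p for _, p in errs)   # probability of reading the phone correctly
        per_phone.append([(ph, keep)] + errs)
    variants = []
    for combo in product(*per_phone):
        n_err = sum(ph != canon for (ph, _), canon in zip(combo, phones))
        if 0 < n_err <= max_errors:
            prob = 1.0
            for _, p in combo:
                prob *= p
            variants.append((" ".join(ph for ph, _ in combo), prob))
    return sorted(variants, key=lambda v: -v[1])

# Toy prompted word "boven" with an invented phone sequence.
for pron, prob in mispronunciation_variants(["b", "o", "v", "e", "n"]):
    print(f"{pron:12s} {prob:.3f}")
In a real reading tutor the confusion statistics would be estimated from annotated reading data, and the resulting variants would be compiled, with their weights, into the recognizer's linguistic model alongside the usual garbage and deletion arcs.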
Profile
We are looking for a person from the EU with a creative mind, and with an interest in speech & language processing and machine learning. The work will require an ability to program algorithms in C and Python. Having experience with Python is not a prerequisite (someone with some software experience is expected to learn this in a short time span). Demonstrated experience with speech & language processing and/or machine learning techniques will give you an advantage over other candidates.
The job is open to a pre-doctoral as well as a post-doctoral researcher who can start in November or December. The job runs until February 28, 2009, but a pre-doctoral candidate aiming for a doctoral degree will get opportunities to do follow-up research in related projects.
Interested persons should send their CV to Jean-Pierre Martens (martens@elis.ugent.be). There is no real deadline, but as soon as a suitable person is found, he/she will get the job.
-
Summer intern positions at Motorola, Schaumburg, Illinois, USA
Motorola Labs - Center for Human Interaction Research (CHIR) located in Schaumburg Illinois, USA, is offering summer intern positions in 2008 (12 weeks each).
CHIR's mission:
Our research lab develops technologies that provide effortless access to rich communication, media and information services, based on natural, intelligent interaction. Our research aims at systems that adapt automatically and proactively to changing environments and device capabilities, and to continually evolving knowledge about the user.
Intern profiles:
1) Acoustic environment/event detection and classification.
The successful candidate will be a PhD student near the end of his/her PhD study, skilled in signal processing and/or pattern recognition, who knows Linux and C/C++ programming. Candidates with knowledge of acoustic environment/event classification are preferred.
2) Speaker adaptation for applications on speech recognition and spoken document retrieval.
The successful candidate must currently be pursuing a Ph.D. degree in EE or CS, with a thorough understanding of and hands-on experience in automatic speech recognition research. Proficiency in a Linux/Unix working environment and in C/C++ programming is required, as is a strong GPA. A strong background in speaker adaptation is highly preferred.
3) Development of voice search-based web applications on a smartphone
We are looking for an intern candidate to help create an "experience" prototype based on our voice search technology. The app will be deployed on a smartphone and demonstrate intuitive and rich interaction with web resources. This intern project is oriented more towards software engineering than research. We target an intern with a master's degree and strong software engineering background. Mastery of C++ and experience with web programming (AJAX and web services) is required. Development experience on Windows CE/Mobile desired.
4) Integrated Voice Search Technology For Mobile Devices.
The candidate should be proficient in information retrieval, pattern recognition and speech recognition, and should be able to program in C++ and scripting languages such as Python or Perl in a Linux environment. He/she should also have knowledge of information retrieval or search engines.
We offer competitive compensation, fun-to-work environment and Chicago-style pizza.
If you are interested, please send your resume to:
Dusan Macho, CHIR-Motorola Labs
Email: dusan.macho@motorola.com
Tel: +1-847-576-6762
-
Post-doc at France Telecom R&D, Lannion, Brittany, France
Post-doc at France Télécom R&D, Lannion: context acquisition from ambient sound capture.
Deadline: 31/12/2007
claude.marro@orange-ftgroup.com
Context description
Physical data of all kinds, coming from the user's environment, can be used in ambient communication as context information to offer new service or interface functionalities, in particular for adapting a service to the users' situation and activity. These data are acquired by various sensors distributed in the environment. Audio scene data from microphones are among the richest of all sensor data that can be exploited, and in ambient communication applications they have the particular advantage of being usable both as functional inputs (interpersonal audio communication) and as context inputs, for which they can be combined with data from other types of sensors. The objective here is to develop devices that achieve this dual use of audio data. The sound capture system must disappear from the users' attention, and this dematerialization is the main difficulty: distant sound capture degrades the useful speech and requires speaker localization and "focusing" in the speaker's direction.
Functional audio capture and rendering
Ideally, the very ambitious objective would be to obtain sound capture and rendering that is effective wherever a person is located in the room, using microphones distributed in the environment. Because of the difficulties mentioned above, meeting this challenge is beyond the scope of this study; as an alternative, we propose a device allowing sound capture and rendering at a limited number of points specified in advance.
Two multi-sensor approaches are envisaged: the acoustic array with controlled directivity and the ambisonic microphone. The interest of addressing both techniques lies in their complementarity. In particular, the former allows flexible design of the directivity pattern (as a function of geometry, frequency, etc.) and performs well at mid and low frequencies (at the cost of bulk). The ambisonic microphone, for its part, is small (at the cost of mid- and low-frequency performance) and makes it possible to reproduce a sound field identically at a distance. The first phase of the study will determine which of the two approaches is the more suitable.
Acquisition of sound context
The first functionality to be studied is the localization of sound sources, an essential function for identifying the source towards which the system must point. Speaker presence detection and speaker position are the basic contextual information to extract.
If localization coupled with multi-sensor sound capture is regarded as a tool for analysing the sound field, it will be possible to provide further context information. The development of specific processing will, for example, make it possible to give the number of speakers and their positions in the room, their speaking-time percentages, the noise level of the room, etc.
A finer analysis of the sound context is envisaged as a follow-up to this work and will only be addressed if time permits. This concerns information whose extraction would require technologies such as speech recognition and audio classification and indexing.
Note that the sound capture system and the processing developed in this project will constitute necessary prerequisites for continuing the work on fine analysis of the sound context.
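As a toy illustration of the speaker-localization building block mentioned above, the sketch below estimates the time difference of arrival between two microphones with the classical GCC-PHAT method in Python/NumPy; the sampling rate, microphone spacing and simulated delay are invented, and the study itself targets controlled-directivity arrays and ambisonic pickup rather than this simple two-microphone case.
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Estimate how much later y receives the signal than x (in seconds)."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    cross = Y * np.conj(X)
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n)
    max_lag = n // 2
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))  # center lag 0
    return (np.argmax(np.abs(cc)) - max_lag) / fs

# Simulated scene: the wavefront reaches mic 2 five samples after mic 1.
fs = 16000
rng = np.random.default_rng(0)
source = rng.standard_normal(fs)                   # 1 s of noise-like "speech"
mic1 = source
mic2 = np.roll(source, 5) + 0.05 * rng.standard_normal(fs)

tdoa = gcc_phat_delay(mic1, mic2, fs)
spacing = 0.20                                     # assumed microphone spacing (m)
bearing = np.degrees(np.arcsin(np.clip(343.0 * tdoa / spacing, -1.0, 1.0)))
print(f"estimated TDOA {tdoa * 1e6:.0f} microseconds, bearing about {bearing:.0f} degrees")
With a full array, the same cross-correlation idea generalizes to delay-and-sum or steered-response beamforming towards the estimated speaker direction.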
Profile
Practical aspects
- No nationality requirement
- The researcher will be employed on a France Télécom fixed-term contract of 12 to 18 months, neither renewable nor extendable.
- The researcher will join the France Télécom R&D division at its Lannion site, CRD Technologies, "Speech and Sound Technologies and Processing" laboratory.
Technical skills
- Digital signal processing (spectral analysis, filtering, etc.)
- Multi-microphone processing
- If possible, basic knowledge of speech processing and acoustics
- Taste for applied research work (analysis, development and adaptation)
- Matlab and C languages
Aptitudes
- Taste for teamwork.
- Good level of English.
Position level: engineer on a post-doctoral-type fixed-term contract - Duration: 12 to 18 months.
http://gdr-isis.org/rilk/gdr/Kiosque/poste.php?jobid=2399
-
Ph.D. Program CMU-PORTUGAL
Ph.D. Program CMU-PORTUGAL in the area of Language and Information Technologies
The Language Technologies Institute (LTI) of the School of Computer Science at Carnegie Mellon University (CMU) offers a dual degree Ph.D.
Program in Language and Information Technologies in cooperation with Portuguese Universities. This Ph.D. program is part of the activities of the recently created Information and Communication Technologies Institute (ICTI), resulting from the Portugal-CMU Partnership.
The Language Technologies Institute, a world leader in the areas of speech processing, language processing, information retrieval, machine translation, machine learning, and bio-informatics, was formed 20 years ago. The breadth of language technologies expertise at LTI enables new research in combinations of the core subjects, for example in speech-to-speech translation, spoken dialog systems, language-based tutoring systems, and question answering systems.
The Portuguese consortium of Universities includes the Spoken Language Systems Lab (L2F) of INESC-ID Lisbon/IST, the Center of Linguistics of the University of Lisbon (CLUL/FLUL), the Centre for Human Language Technology and Bioinformatics at the University of Beira Interior (HULTIG/UBI) and the linguistics group at the University of Algarve (UALG). These four research centers (and the corresponding Universities) share expertise in the same language technologies as LTI, although with a strong focus on processing the Portuguese language.
Each Ph.D. student will receive a dual degree from LTI and the selected Portuguese University, being co-supervised by one advisor from each institute, and spending approximately half of the 5-year doctoral program at each institute. Most of the academic part will take place at LTI, during the first 2 years, where most of the required 8 courses will be taken, with a proper balance of focus areas (Linguistic, Computer Science, Statistical/Learning, Task Orientation). The remaining 3 years of the doctoral program will be dedicated to research, mostly spent at the Portuguese institute, with one or two visits to CMU per year.
The thesis topic will be in one of the research areas of the cooperation program, defined by the two advisors. Two multilingual topics have been identified as priority research areas: computer aided language learning (CALL) and speech-to-speech machine translation (S2SMT).
The doctoral students will be involved in one of these two projects aimed at building real HLT systems. These projects will involve at least two languages, one of them being Portuguese, the target language for the CALL system to be developed and either the source or target language (or both) for the S2SMT system. These two projects provide a focus for the proposed research; through them the collaboration will explore the maincore areas in language technology.
The scholarship will be funded by the Foundation for Science and Technology (FCT), Portugal.
How to Apply
The application deadline for all Ph.D. programs in the scope of the CMU-Portugal partnership is December 15, 2007.
Students interested in the dual doctoral program must apply by filling the corresponding form at the CMU webpage http://www.lti.cs.cmu.edu/About/how-to-apply.html
The application form will be forwarded to the Portuguese University and to the Foundation for Science and Technology. Simultaneously, they should send an email to the coordinators of the Portuguese consortium and of the LTI admissions (Isabel Trancoso/Lori Levin):
Isabel.Trancoso at inesc-id dot pt
lsl at cs dot cmu dot edu
All questions about the joint degree doctoral program should be directed to these two addresses.
The applications will be screened by a joint committee formed by representatives of LTI and representatives of the Portuguese Universities involved in the joint degree program. The candidates should indicate their scores in GRE and TOEFL tests.
Letters of recommendation are due by January 3rd.
Despite this particular focus on the Portuguese language, applications are not restricted to native speakers of Portuguese.
Journals
-
Papers accepted for FUTURE PUBLICATION in Speech Communication
Full text available on http://www.sciencedirect.com/ for Speech Communication subscribers and subscribing institutions. Free access for all to the titles and abstracts of all volumes, and even to forthcoming papers by clicking on "Articles in press" and then "Selected papers".
-
Special Issue on Non-Linear and Non-Conventional Speech Processing-Speech Communication
Speech Communication
Call for Papers: Special Issue on Non-Linear and Non-Conventional Speech Processing
Editors: Mohamed CHETOUANI, UPMC
Marcos FAUNDEZ-ZANUY, EUPMt (UPC)
Bruno GAS, UPMC
Jean Luc ZARADER, UPMC
Amir HUSSAIN, Stirling
Kuldip PALIWAL, Griffith University
The field of speech processing has developed very rapidly over the past twenty years, thanks both to technological progress and to the convergence of research into a few mainstream approaches. However, some specificities of the speech signal are still not well addressed by the current models. New models and processing techniques need to be investigated in order to foster and/or accompany future progress, even if they do not immediately match the level of performance and understanding of the current state-of-the-art approaches.
An ISCA-ITRW Workshop on "Non-Linear Speech Processing" will be held in May 2007, the purpose of which will be to present and discuss novel ideas, works and results related to alternative techniques for speech processing departing from the mainstream approaches: http://www.congres.upmc.fr/nolisp2007
We are now soliciting journal papers not only from workshop participants but also from other researchers for a special issue of Speech Communication on "Non-Linear and Non-Conventional Speech Processing"
Submissions are invited on the following broad topic areas:
I. Non-Linear Approximation and Estimation
II. Non-Linear Oscillators and Predictors
III. Higher-Order Statistics
IV. Independent Component Analysis
V. Nearest Neighbours
VI. Neural Networks
VII. Decision Trees
VIII. Non-Parametric Models
IX. Dynamics of Non-Linear Systems
X. Fractal Methods
XI. Chaos Modelling
XII. Non-Linear Differential Equations
All fields of speech processing are targeted by the special issue, namely :
1. Speech Production
2. Speech Analysis and Modelling
3. Speech Coding
4. Speech Synthesis
5. Speech Recognition
6. Speaker Identification / Verification
7. Speech Enhancement / Separation
8. Speech Perception
-
IEEE Transactions on Audio, Speech and Language Processing
The submission deadline for the Special Issue on Multimodal Processing in Speech-based Interactions of the IEEE Transactions on Audio, Speech and Language Processing has been extended to January 15, 2008. Details: http://www.ewh.ieee.org/soc/sps/tap/sp_issue/special-issue-multimodal.pdf
Future Conferences
-
Publication policy: below you will find very short announcements of future events. The full calls for participation can be accessed on the conference websites.
See also our Web pages (http://www.isca-speech.org/) on conferences and workshops.
Future Interspeech conferences
-
INTERSPEECH 2008-ICSLP
September 22-26, 2008, Brisbane, Queensland, Australia
Conference Website
Chairman: Denis Burnham, MARCS, University of Western Sydney. -
INTERSPEECH 2009-EUROSPEECH
Brighton, UK,
Conference Website
Chairman: Prof. Roger Moore, University of Sheffield. -
INTERSPEECH 2010-ICSLP
Chiba, Japan
Conference Website
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."
Future ISCA Technical and Research Workshops
-
ITRW Odyssey 2008
The Speaker and Language Recognition Workshop
21-25 January 2008, Stellenbosch, South Africa
Topics
* Speaker recognition (identification, verification, segmentation, clustering)
* Text dependent and independent speaker recognition
* Multispeaker training and detection
* Speaker characterization and adaptation
* Features for speaker recognition
* Robustness in channels
* Robust classification and fusion
* Speaker recognition corpora and evaluation
* Use of extended training data
* Speaker recognition with speech recognition
* Forensics, multimodality and multimedia speaker recognition
* Speaker and language confidence estimation
* Language, dialect and accent recognition
* Speaker synthesis and transformation
* Biometrics
* Human recognition
* Commercial applications
Paper submission
Prospective authors are invited to submit papers written in English via the Odyssey website. The style guide, templates, and submission form can be downloaded from the Odyssey website. Two members of the scientific committee will review each paper. Each accepted paper must have at least one registered author. The Proceedings will be published on CD.
Schedule
Draft paper due: July 15, 2007
Notification of acceptance: September 15, 2007
Final paper due: October 30, 2007
Preliminary program: November 30, 2007
Workshop: January 21-25, 2008
Further information (venue, registration, etc.) is available on the workshop website.
Chairs
Niko Brummer, Spescom Data Voice, South Africa
Johan du Preez, Stellenbosch University, South Africa -
ISCA TR Workshop on Experimental Linguistics
August 2008, Athens, Greece
Website
Prof. Antonis Botinis -
ITRW on Evidence-based Voice and Speech Rehabilitation in Head and Neck Oncology
May 2008, Amsterdam, The Netherlands
Cancer in the head and neck area and its treatment can have debilitating effects on communication. Currently available treatment options such as radiotherapy, surgery, chemo-radiation, or a combination of these can often be curative. However, each of these options affects parts of the vocal tract and/or the voice to a greater or lesser degree. When the vocal tract or voice no longer functions optimally, this affects communication. For example, radiotherapy can result in poor voice quality, limiting the speaker's vocal performance (fatigue from speaking, avoidance of certain communicative situations, etc.). Surgical removal of the larynx necessitates an alternative voicing source, which generally results in a poor voice quality and further affects intelligibility and the prosodic structure of speech. Similarly, a commando procedure (resection involving portions of the mandible / floor of the mouth / mobile tongue) can have a negative effect on speech intelligibility.
This 2-day tutorial and research workshop will focus on evidence-based rehabilitation of voice and speech in head and neck oncology. There will be 4 half-day sessions, 3 of which will deal with issues concerning total laryngectomy. One session will be devoted to research on rehabilitation of other head and neck cancer sites. The chairpersons of each session will prepare a work document on the specific topic at hand (together with the two keynote lecturers assigned), which will be discussed in a subsequent round-table session. After this there will be a 30-minute poster session, allowing 9-10 short presentations. Each presentation consists of at most 4 slides and is meant to highlight the poster's key points. Posters will be visited in the subsequent poster visit session. The final work document will refer to all research presently available, discuss its (clinical) relevance, and attempt to provide directions for future research. The combined work document, keynote lectures and poster abstracts/papers will be published under the auspices of ISCA.
Organizers
prof. dr. Frans JM Hilgers
prof. dr. Louis CW Pols,
dr. Maya van Rossum.
Sponsoring institutions:
Institute of Phonetic Sciences - Amsterdam Center for Language and Communication,
The Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital
Dates and submission details as well as a website address will be announced in a later issue. -
Audio Visual Speech Processing Workshop (AVSP)
Tentative location: Queensland coast near Brisbane (most likely South Stradbroke Island)
Tentative date: 27-29 September 2008 (immediately after Interspeech 2008)
Following in the footsteps of previous AVSP workshops and conferences, the AVSP 2008 workshop (an ISCA Research and Tutorial Workshop) will be held in conjunction with Interspeech 2008, Brisbane, Australia, 22-26 September 2008. The aim of AVSP 2008 is to bring together researchers and practitioners in areas related to auditory-visual speech processing. These include human and machine AVSP, linguistics, psychology, and computer science. One of the aims of the AVSP workshops is to foster collaborations across disciplines, as AVSP research is inherently multi-disciplinary. The workshop will include a number of tutorials / keynote addresses by internationally renowned researchers in the area of AVSP.
Organizers
Roland Goecke, Simon Lucey, Patrick Lucey
RSISE, Bldg. 115, Australian National University, Canberra, ACT 0200, Australia -
ISCA ITRW on Speech Analysis and Processing for Knowledge Discovery
June 4 - 6, 2008
Aalborg, Denmark
Workshop website
Humans are very efficient at capturing information and messages in speech, and they often perform this task effortlessly even when the signal is degraded by noise, reverberation and channel effects. In contrast, when a speech signal is processed by conventional spectral analysis methods, significant cues and useful information in speech are usually not taken proper advantage of, resulting in sub-optimal performance in many speech systems. There exists, however, a vast literature on speech production and perception mechanisms and their impacts on acoustic phonetics that could be more effectively utilized in modern speech systems. A re-examination of these knowledge sources is needed. On the other hand, recent advances in speech modelling and processing and the availability of a huge collection of multilingual speech data have provided an unprecedented opportunity for acoustic phoneticians to revise and strengthen their knowledge and develop new theories. Such a collaborative effort between science and technology is beneficial to the speech community and it is likely to lead to a paradigm shift for designing next-generation speech algorithms and systems. This, however, calls for a focussed attention to be devoted to analysis and processing techniques aiming at a more effective extraction of information and knowledge in speech.
Objectives:
The objective of this workshop is to discuss innovative approaches to the analysis of speech signals, so as to bring out the subtle and unique characteristics of speech and speaker. This will also help in discovering speech cues useful for significantly improving the performance of speech systems. Several attempts have been made in the past to explore speech analysis methods that can bridge the gap between human and machine processing of speech. In particular, the time-varying aspects of interactions between excitation and vocal tract systems during production seem to elude exploitation. Some of the explored methods include all-pole and pole-zero modelling methods based on temporal weighting of the prediction errors, interpreting the zeros of speech spectra, analysis of phase in the time and transform domains, nonlinear (neural network) models for information extraction and integration, etc. Such studies may also bring out some finer details of speech signals, which may have implications in determining the acoustic-phonetic cues needed for developing robust speech systems.
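As a minimal reference point for the conventional all-pole analysis mentioned above (which the workshop aims to look beyond), here is a short Python/NumPy/SciPy sketch of autocorrelation-method LPC on a single synthetic frame; the frame, sampling rate and model order are invented for the example.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, order):
    """Autocorrelation-method LPC: solve the normal equations R a = r."""
    frame = frame * np.hamming(len(frame))                  # taper the frame
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])           # predictor coefficients
    return np.concatenate(([1.0], -a))                      # A(z) = 1 - sum a_k z^-k

# Synthetic vowel-like frame: two damped resonances, 40 ms at 8 kHz.
fs = 8000
t = np.arange(320) / fs
frame = np.exp(-40 * t) * (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t))
A = lpc_coefficients(frame, order=10)

# All-pole spectral envelope (up to a gain factor): 1 / |A(e^{jw})|.
nfft = 512
envelope = 1.0 / np.abs(np.fft.rfft(A, nfft))
freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
print("strongest resonance near %.0f Hz" % freqs[np.argmax(envelope)])
Such a smooth envelope discards phase and fine temporal detail, which is precisely the kind of information the workshop topics below aim to recover and exploit.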
The Workshop:
- will open with a full-morning common tutorial giving an overview of the present state of research linked to the subject of the workshop
- will be organised as a single series of oral and poster presentations
- will give each oral presentation 30 minutes, to allow ample time for discussion
- is an ideal forum for speech scientists to discuss perspectives that will further future research collaborations.
Potential Topic areas:
- Parametric and nonparametric models
- New all-pole and pole-zero spectral modelling
- Temporal modelling
- Non-spectral processing (group delay, etc.)
- Integration of spectral and temporal processing
- Biologically-inspired speech analysis and processing
- Interactions between excitation and vocal tract systems
- Characterization and representation of acoustic phonetic attributes
- Attribute-based speaker and spoken language characterization
- Analysis and processing for detecting acoustic phonetic attributes
- Language-independent aspects of acoustic phonetic attribute detection
- Detection of language-specific acoustic phonetic attributes
- Acoustic to linguistic and acoustic phonetic mapping
- Mapping from acoustic signal to articulator configurations
- Merging of synchronous and asynchronous information
- Other related topics
Call for papers:
The submission deadline is January 31, 2008.
Registration
Fees for early and late registration for ISCA and non-ISCA members will be made available on the website during September 2007.
Venue:
The workshop will take place at Aalborg University, Department of Electronic Systems, Denmark. See the workshop website for further and latest information.
Accommodation:
There is a large number of hotels in Aalborg, most of them close to the city centre. A list of hotels, with their websites and telephone numbers, is given on the workshop website, where you will also find information about transportation between the city centre and the university campus.
How to reach Aalborg:
Aalborg Airport is half an hour away from the international Copenhagen Airport, and there are many daily flight connections between the two cities. Flying with Scandinavian Airlines System (SAS) or one of the Star Alliance companies to Copenhagen allows you to include the Copenhagen-Aalborg leg in the same ticket, thereby reducing the total transportation cost. There is also an hourly train connection between the two cities; the train ride takes approximately five hours.
Organising Committee:
Paul Dalsgaard, B. Yegnanarayana, Chin-Hui Lee, Paavo Alku, Rolf Carlson, Torbjørn Svendsen
Important dates
Submission of full and final paper: January 31, 2008, via the website
http://www.es.aau.dk/ITRW/
Notification of review results: no later than March 30, 2008. -
Robust ASR Workshop
Santiago, Chile
October-November 2008
Dr. Nestor Yoma
Forthcoming events supported (but not organized) by ISCA
-
IEEE ASRU 2007
Automatic Speech Recognition and Understanding Workshop
The Westin Miyako Kyoto, Japan
December 9 -13, 2007
Conference website
The tenth biennial IEEE workshop on Automatic Speech Recognition and Understanding (ASRU), held in cooperation with ISCA, will take place on December 9-13, 2007. The ASRU workshops have a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding.
WORKSHOP TOPICS
Papers in all areas of human language technology are encouraged, with emphasis placed on:
- automatic speech recognition and understanding technology
- speech to text systems
- spoken dialog systems
- multilingual language processing
- robustness in ASR
- spoken document retrieval
- speech-to-speech translation
- spontaneous speech processing
- speech summarization
- new applications of ASR.
SUBMISSIONS FOR THE TECHNICAL PROGRAM
The workshop program will consist of invited lectures, oral and poster presentations, and panel discussions. Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the ASRU 2007 website. All papers will be handled and reviewed electronically. The website will provide you with further details. There is also a demonstration session, which has become another highlight of the ASRU workshop. Demonstration proposals will be handled separately. Please note that the submission dates for papers are strict deadlines.
IMPORTANT DATES
Paper submission deadline July 16, 2007
Paper acceptance/rejection notification September 3, 2007
Demonstration proposal deadline September 24, 2007
Workshop advance registration deadline October 15, 2007
Workshop December 9-13, 2007
REGISTRATION AND INFORMATION
Registration will be handled via the ASRU 2007 website.
ORGANIZING COMMITTEE
General Chairs:
Sadaoki Furui (Tokyo Inst. Tech.)
Tatsuya Kawahara (Kyoto Univ.)
Technical Chairs:
Jean-Claude Junqua (Panasonic)
Helen Meng (Chinese Univ. Hong Kong)
Satoshi Nakamura (ATR)
Publication Chair:
Timothy Hazen, MIT, USA
Publicity Chair:
Tomoko Matsui, ISM, Japan
Demonstration Chair:
Kazuya Takeda, Nagoya U, Japan -
3rd International Conference on Large-scale Knowledge Resources (LKR 2008)
3-5 March, 2008, Tokyo Institute of Technology, Tokyo Japan
Website
Sponsored by: the 21st Century Center of Excellence (COE) Program "Framework for Systematization and Application of Large-scale Knowledge Resources", Tokyo Institute of Technology
In the 21st century we are moving towards a knowledge-intensive society in which knowledge plays an ever more important role. Research interest is inevitably shifting from information to knowledge: how to build, organize, maintain and utilize knowledge has become a central issue in a wide variety of fields. The 21st Century COE program "Framework for Systematization and Application of Large-scale Knowledge Resources (COE-LKR)", conducted by Tokyo Institute of Technology, is one attempt to address these important issues. Inspired by this project, LKR2008 aims at bringing together diverse contributions in cognitive science, computer science, education and linguistics to explore the design, construction, extension, maintenance, validation, and application of knowledge.
Topics of interest to the conference include:
Infrastructure for Large-scale Knowledge
Grid computing
Network computing
Software tools and development environments
Database and archiving systems
Mobile and ubiquitous computing
Systematization for Large-scale Knowledge
Language resources
Multi-modal resources
Classification, Clustering
Formal systems
Knowledge representation and ontology
Semantic Web
Cognitive systems
Collaborative knowledge
Applications and Evaluation of Large-scale Knowledge
Archives for science and art
Educational media
Information access
Document analysis
Multi-modal human interface
Web applications
Organizing committee
General conference chair: Furui, Sadaoki (Tokyo Institute of Technology)
Program co-chairs: Ortega, Antonio (University of Southern California)
Tokunaga, Takenobu (Tokyo Institute of Technology)
Publication chair: Yonezaki, Naoki (Tokyo Institute of Technology)
Publicity chair: Yokota, Haruo (Tokyo Institute of Technology)
Local organizing chair: Shinoda, Koichi (Tokyo Institute of Technology)
Submission
Since we are aiming at an interdisciplinary conference covering a wide range of topics concerning large-scale knowledge resources, authors are requested to add a general introductory description at the beginning of the paper so that readers from other research areas can understand the importance of the work. Note that one of the reviewers of each paper will be assigned from a different topic area to check that this requirement is fulfilled.
There are two categories of paper presentation: oral and poster. The category of the paper should be stated at submission. Authors are invited to submit original unpublished research papers, in English, up to 12 pages for oral presentation and 4 pages for poster presentation, strictly following the LNCS/LNAI format guidelines available on the Springer LNCS web page. Details of the submission procedure will be announced later.
Reviewing
The reviewing of the papers will be blind and managed by an international Conference Program Committee consisting of Area Chairs and associated Program Committee Members. Final decisions on the technical program will be made by a meeting of the Program Co-Chairs and Area Chairs. Each submission will be reviewed by at least three program committee members, and one of the reviewers is assigned from a different topic area.
Publication
The conference proceedings will be published by Springer-Verlag in its Lecture Notes in Artificial Intelligence (LNAI) series and will be available at the conference.
Important dates
Paper submission deadline: 30 August, 2007
Notification of acceptance: 10 October, 2007
Camera ready papers due: 10 November, 2007
e-mail correspondence -
Call for Papers (Preliminary version) Speech Prosody 2008
Campinas, Brazil, May 6-9, 2008
Speech Prosody 2008 will be the fourth in the series of international conferences organized by the ISCA Special Interest Group on Speech Prosody, which started with the meeting held in Aix-en-Provence, France, in 2002. The conferences in Nara, Japan (2004), and Dresden, Germany (2006) followed the proposal of biennial meetings, and now it is time to change place and hemisphere, taking up the challenge of offering a non-stereotypical view of Brazil. It is a great pleasure for our labs to host the fourth International Conference on Speech Prosody in Campinas, Brazil, the second major city of the State of São Paulo. It is worth highlighting that prosody covers a multidisciplinary area of research involving scientists from very different backgrounds and traditions, including linguistics and phonetics, conversation analysis, semantics and pragmatics, sociolinguistics, acoustics, speech synthesis and recognition, cognitive psychology, neuroscience, speech therapy, language teaching, and related fields. Information: sp2008_info@iel.unicamp.br. Web site: http://sp2008.org. We invite all participants to contribute papers presenting original research from all areas of speech prosody, especially, but not limited to, the following.
Scientific Topics
Prosody and the Brain
Long-Term Voice Quality
Intonation and Rhythm Analysis and Modelling
Syntax, Semantics, Pragmatics and Prosody
Cross-linguistic Studies of Prosody
Prosodic variability
Prosody in Discourse
Dialogues and Spontaneous Speech
Prosody of Expressive Speech
Perception of Prosody
Prosody in Speech Synthesis
Prosody in Speech Recognition and Understanding
Prosody in Language Learning and Acquisition
Pathology of Prosody and Aids for the Impaired
Prosody Annotation in Speech Corpora
Others (please specify)
Organising institutions
Speech Prosody Studies Group, IEL/Unicamp | Lab. de Fonética, FALE/UFMG | LIACC, LAEL, PUC-SP
Important Dates
Call for Papers: May 15, 2007
Full Paper Submission: Nov. 2nd, 2007
Notif. of Acceptance: Dec. 14th, 2007
Early Registration: Jan. 14th, 2008
Conference: May 6-9, 2008
-
CFP: The International Workshop on Spoken Languages Technologies for Under-resourced Languages (SLTU)
Hanoi University of Technology, Hanoi, Vietnam,
May 5 - May 7, 2008.
Workshop Web Site : http://www.mica.edu.vn/sltu
The SLTU meeting is a technical conference focused on spoken language processing for under-resourced languages. This first workshop will focus on Asian languages, and the idea is to mainly (but not exclusively) target languages of the area (Vietnamese, Khmer, Lao, Chinese dialects, Thai, etc.). However, all contributions on other under-resourced languages of the world are warmly welcomed. The workshop aims at gathering researchers working on:
* ASR, synthesis and speech translation for under-resourced languages
* portability issues
* fast resources acquisition (speech, text, lexicons, parallel corpora)
* spoken language processing for languages with rich morphology
* spoken language processing for languages without separators
* spoken language processing for languages without a writing system
Important dates
* Paper submission: January 15, 2008
* Notification of Paper Acceptance: February 20, 2008
* Author Registration Deadline: March 1, 2008
Scientific Committee
* Pr Tanja Schultz, CMU, USA
* Dr Yuqing Gao, IBM, USA
* Dr Lori Lamel, LIMSI, France
* Dr Laurent Besacier, LIG, France
* Dr Pascal Nocera, LIA, France
* Pr Jean-Paul Haton, LORIA, France
* Pr Luong Chi Mai, IOIT, Vietnam
* Pr Dang Van Chuyet, HUT, Vietnam
* Pr Pham Thi Ngoc Yen, MICA, Vietnam
* Dr Eric Castelli, MICA, Vietnam
* Dr Vincent Berment, LIG Laboratory, France
* Dr Briony Williams, University of Wales, UK
Local Organizing Committee
* Pr Nguyen Trong Giang, HUT/MICA
* Pr Ha Duyen Tu, HUT
* Pr Pham Thi Ngoc Yen, HUT/MICA
* Pr Geneviève Caelen-Haumont, MICA
* Dr Trinh Van Loan, HUT
* Dr Mathias Rossignol, MICA
* M. Hoang Xuan Lan, HUT
Back to Top
Future Speech Science and Technology Events
-
2007 IEEE International Conference on Signal Processing and Communications, United Arab Emirates
24-27 November 2007
Dubai, United Arab Emirates
The IEEE International Conference on Signal Processing and Communications (ICSPC 2007) will be held in Dubai, United Arab Emirates (UAE) on 24-27 November 2007. The ICSPC will be a forum for scientists, engineers, and practitioners throughout the Middle East region and the world to present their latest research results, ideas, developments, and applications in all areas of signal processing and communications. It aims to strengthen relations between industry, research laboratories and universities. ICSPC 2007 is organized by the IEEE UAE Signal Processing and Communications Joint Societies Chapter. The conference will include keynote addresses, tutorials, exhibitions, and special, regular and poster sessions. All papers will be peer reviewed. Accepted papers will be published in the conference proceedings and will be included in IEEE Xplore. Acceptance will be based on quality, relevance and originality.
SCOPE
Topics will include, but are not limited to, the following:
- Digital Signal Processing
- Analog and Mixed Signal Processing
- Audio/Speech Processing and Coding
- Image/Video Processing and Coding
- Watermarking and Information Hiding
- Multimedia Communication
- Signal Processing for Communication
- Communication and Broadband Networks
- Mobile and Wireless Communication
- Optical Communication
- Modulation and Channel Coding
- Computer Networks
- Computational Methods and Optimization
- Neural Systems
- Control Systems
- Cryptography and Security Systems
- Parallel and Distributed Systems
- Industrial and Biomedical Applications
- Signal Processing and Communications Education
Prospective authors are invited to submit full-length (4 pages) paper proposals for review. Proposals for tutorials, special sessions, and exhibitions are also welcome. The submission procedures can be found on the conference web site:
All submissions must be made on-line and must follow the guidelines given on the web site.
ICSPC 2007 Conference Secretariat,
P. O. Box: 573, Sharjah, United Arab Emirates (U.A.E.),
Fax: +971 6 5611789
ORGANIZERS
Honorary Chair
Arif Al-Hammadi, Etisalat University College, UAE
General Chair
Mohammed Al-Mualla, Etisalat University College, UAE
IMPORTANT DATES
Submission of proposals for tutorials, special sessions, and exhibitions March 5th, 2007
Submission of full-paper proposals April 2nd, 2007
Notification of acceptance June 4th, 2007
Submission of final version of paper October 1st, 2007 -
First call for papers - AFCP Workshop - Montpellier, France
Montpellier, 7 December 2007
AFCP Workshop
Topics
Poster presentations may address coarticulation from the following angles:
1- Observation methods and instrumentation, theoretical models;
2- Motor aspects and articulatory constraints;
3- Directionality of coarticulation as a function of language and of phonological and prosodic characteristics;
4- Spatio-temporal effects, control and coordination mechanisms in articulator positions;
5- Acoustic cues in various phonetic contexts;
6- Perception and subjects' sensitivity to coarticulatory variation;
7- Phonological representation of coarticulation;
8- Emergence and development of coarticulation in children.
Invited speakers
Daniel RECASENS, Department of Catalan Philology, Universitat Autònoma de Barcelona, and Phonetics Laboratory, Institut d'Estudis Catalans, Spain: "Mechanisms of segmental adaptation in VCV and CC sequences in the light of the DAC model".
Björn LINDBLOM, Department of Linguistics, Stockholm University, Sweden / Department of Linguistics, University of Texas, Austin Texas, USA, "An H&H perspective on coarticulation".
Christian ABRY, Université Stendhal, Grenoble III: "The Movement Expansion Model (MEM): a universal, individual and developmental model of anticipation for speech".
René CARRE, DDL UMR 5596 CNRS-Lyon II: "Coarticulation in speech production: acoustic aspects".
Edward FLEMMING, Department of Linguistics and Philosophy, MIT, Cambridge, USA, "The grammar of coarticulation".
Organizing committee
Jalal AL-TAMIMI, DDL UMR 5596 CNRS-Lyon II
Josiane CLARENC, Dipralang EA 739, Montpellier III
Christelle DODANE, Dipralang EA 739, Montpellier III
Mohamed EMBARKI, Praxiling UMR 5267 CNRS-Montpellier III
Christian GUILLEMINOT, Centre Tesnière EA 2283, Université de Franche-Comté
Mohamed YEOU, Université Chouaib Doukkali, El Jadida (Morocco)
Submissions
Submissions should be full papers, written in French or English, with a maximum length of 4 pages. Papers must be submitted in PDF format. Submissions will be accepted by e-mail only.
Important dates
Full paper submission: 15 October 2007
Notification of acceptance or rejection: 31 October 2007
Final version due: 15 November 2007.
Registration
AFCP member: 60 euros
AFCP student: 35 euros
Regular: 120 euros
Student: 70 euros
Website -
2ndes Journées de Phonétique Clinique (Second Clinical Phonetics Workshop)
Registration is open for the JPC2 conference "Deuxièmes Journées de Phonétique Clinique" (Second Clinical Phonetics Workshop), which will be held on 13-14 December 2007 in Grenoble.
The registration form is available on the website at the following address: http://www.icp.inpg.fr/JPC2/
Please note that Grenoble hosts many conferences and seminars throughout the year; we therefore advise you to book a hotel as soon as possible (a list of accommodation options is provided on the website).
-
5th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications MAVEBA 2007
December 13 - 15, 2007
Conference Hall - Ente Cassa di Risparmio di Firenze
Via F. Portinari 5r, Firenze, Italy
DEADLINES:
EXTENDED DEADLINE: 15 June 2007 - Submission of extended abstracts (1-2 pages, 1 column), special session proposal
30 July, 2007 - Notification of paper acceptance
30 September 2007 - Final full paper submission (4 pages, 2 columns, pdf format) and early registration
13-15 December 2007 - Conference
CONTACT:
Dr. Claudia Manfredi - Conference Chair
Dept. of Electronics and Telecommunications
Universita degli Studi di Firenze
Via S. Marta 3
50139 Firenze, Italy
Phone: +39-055-4796410
Fax: +39-055-494569 -
e-Forensics 2008: The 1st International Conference on Forensic Applications and Techniques in Telecommunications, Information and Multimedia
with associated Workshops:
WKDD: First International Workshop on Knowledge Discovery and Data Mining
and WFST: First International Workshop on Forensics Sensor Technology
Adelaide, Australia, January 21-23, 2008
Website
The birth of the Internet as a commercial entity and the development of increasingly sophisticated digital consumer technology have led to new opportunities for criminals and new challenges for law enforcement. Those same technologies also offer new tools for the scientific investigation of evidence.
In telecommunications, Voice over IP raises significant challenges for call intercept and route tracing. Information systems are becoming overwhelmingly large and the challenges of associating relevant information from one source with another require new sophisticated tools. And consumer multimedia devices, especially video and still cameras, are increasingly becoming the tools of choice to create potentially illegal content. At the same time, the scientific gathering and investigation of evidence at a crime scene is being pushed towards digital techniques, raising questions of the veracity and completeness of evidence.
PAPERS: The aim of this conference is to bring together state of the art research contributions in the development of tools, protocols and techniques which assist in the investigation of potentially illegal activity associated with electronic communication and electronic devices. Investigative practice and requirements for presentation of evidence in court are to be considered key underlying themes. This might include discovery, analysis, handling and storage of digital evidence; meeting the legal burden of proof; and the establishment of the forensic chain of evidence.
Technical papers describing original, previously unpublished research, not currently under review by another conference or journal, are solicited. Technical papers which clearly identify how the specific contributions fit to an overall working solution are particularly of interest.
Topics include, but are not limited to, the following:
* Voice over IP call tracing and intercept
* Records tracing and data mining
* Fraud management in commercial transactions
* Techniques for addressing identity theft
* Geo-location techniques for cellular, ad-hoc, wireless and IP network communications
* Distributed data association across massive, disparate database systems
* Data carving
* Multimedia source identification
* Image tamper identification
* Image association and recognition
* Motion analysis
* Voice analysis
* Watermarking and applications
* Transaction tracking
* Digital evidence storage and handling protocols
General papers on electronic security will be considered where there is a clear application to the underlying topic of forensic investigation.
Important Dates:
Paper Registration Deadline: September 28, 2007
Notification of Acceptance: November 16, 2007
Final paper submission and author's registration: December 3, 2007
Conference Dates: January 21-23, 2008
Committee
General Chair - Matthew Sorell, University of Adelaide, Australia
Technical Program Committee Chair - Chang-Tsun Li, University of Warwick, UK
Publicity Chair - Gale Spring, RMIT University, Australia
For details of the Workshops, and further information regarding paper submission and the conference, please refer to the conference website
Please direct all enquiries by e-mail -
LangTech2008
The language and speech technology conference.
Rome, 28-29 February 2008
San Michele a Ripa conference centre.
Website
We are delighted to welcome you to the LangTech2008 conference, which will be held at the San Michele a Ripa convention center in Rome, February 28-29, 2008. After two successful national conferences on speech and language technology (2002, 2006), the ForumTal decided to promote an international event in the field. A follow-up to the previous LangTech conferences (Berlin, Paris), LangTech2008 aims at giving the industrial and research communities, as well as public administration, a chance to share and discuss language and speech technologies. The conference will feature world-class speakers, exhibits, and lecture and poster sessions.
PAPERS SUBMISSION DEADLINE: 30th November 2007
EXHIBITION BOOTHS RESERVATION: Reduced Fares until 15th November 2007
REGISTRATION: Reduced Fees until 31st December 2007
A golden promotional opportunity for all language technology SMEs!
LangTech 2008, http://www.langtech.it/en/, the language technology business conference, is featuring a special elevator session for small and medium-sized enterprises (SMEs).
An elevator session is a session with very short presentations.
If you seek business partners, you are invited to participate in LangTech 2008 in Rome, February 28-29, and make yourself known to the audience.
A committee of European experts will choose a total of 10 SMEs from anywhere in Europe and beyond to give a 5-minute self-promotional presentation in English before a floor of venture capitalists, business peers, large technology corporations and other interested parties.
A jury will select three of the presenting companies, and award the first, second and third LangTech Prize.
Submissions must be received by 30 December 2007.
The lucky candidates will be informed by 15 January 2008.
We will offer a reduced fee to LangTech 2008 to all SMEs selected to present at the elevator session.
If you wish to submit a request to present your SME for this unique opportunity, please contact sme@langtech.it immediately, and visit the web site dedicated to LangTech 2008, http://www.langtech.it/en/, where you can download a short slide set with guidelines for preparing your candidature.
Dr Calzolari would be pleased if you could spread the Conference Announcement and the Call for SME Presentations to anyone you consider potentially interested in the event.
Dr PAOLA BARONI
Researcher
Consiglio Nazionale delle Ricerche
Istituto di Linguistica Computazionale
Area della Ricerca di Pisa
Via Giuseppe Moruzzi
56124 Pisa
ITALY
Phone: [+39] 050 315 2873
Fax: [+39] 050 315 2834
e-Mail: paola.baroni@ilc.cnr.it
URL: http://www.ilc.cnr.it
Skype: paola.baroni
-
AVIOS
San Diego, March 10 - 12, 2008
The defining conference on Voice Search
From the Applied Voice Input Output Society and Bill Meisel's TMA Associates
Voice Search 2008 will be held at the San Diego Marriott Hotel and Marina, San Diego, California, March 10 - 12, 2008. Voice Search is a rapidly evolving technology and market. AVIOS (the Applied Voice Input Output Society) and Bill Meisel (president of TMA Associates and Editor of Speech Strategy News) are joining together to launch this new conference as a definitive resource for companies that will be impacted by this important trend.
"Voice Search" suggests an analogy to "Web Search," which has been a runaway success for both users and providers. The maturing of speech recognition and text-to-speech synthesis--and the recent involvement of large companies--has validated the availability of the core functionality necessary to support this analogy. The conference explores the possibilities, limitations, and differences of Voice Search and Web search.
Web search made the Web an effective and required marketing tool. Will Voice Search do the same for the telephone channel? The potential impact on call centers is another key issue covered by the conference.
The agenda covers:
- What Voice Search is and will become
- Applications of Voice Search
- The appropriate use of speech technology to support voice search
- Insight for service providers, enterprises, Web services, and call centers that want to take advantage of this new resource
- Marketing channels and business models in Voice Search
- Emerging supporting technology, development tools, and delivery platforms supporting Voice Search
- Dealing with the surge of calls created by Voice Search.
Specific topics that will be covered at Voice Search 2008 include:
Applications
- Automated directory assistance and local search
- Voice information searches by telephone
- Ad-supported information access by phone
- Audio/Video searches on the Web and enterprises
- Speech analytics: extracting business intelligence from audio files
- Converting voicemail to searchable text
- Other new applications and services
- Application examples and demonstrations
Markets
- How the voice search market is developing
- The changing role of the telephone in marketing
- Business models
- The right way to deliver audio ads
- Justifying subscriber fees
Delivery
- Platforms, tools, and services for effectively delivering these applications
- Implementation examples and demonstrations
- Hosted versus customer-premises solutions
- Supporting multiple modes of interaction
- Key sources of technology and service
Contact centers
- The impact of Voice Search on contact centers
- Speech automation to handle the increased call flow
- Moving from handling problems to building customer relationships
Technology
- Speech recognition methods supporting voice search
- Text-to-speech quality and alternatives
- Supporting multimodal solutions
- Supporting standards
- Delivering responsive applications
- Voice User Interface issues and solutions in voice search
Sponsorships are available:
http://www.voicesearchconference.com/sponsor.htm
We're interested in proposals for speaking (available slots are limited):
http://www.voicesearchconference.com/talk.htm
Registration is open with an early-registration discount:
http://www.voicesearchconference.com/registration.htm
Other information:
What is Voice Search?
Voice Search News
About AVIOS
About Bill Meisel and TMA Associates
Or contact the organizers. -
CfP-2nd INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS (LATA 2008)
Tarragona, Spain, March 13-19, 2008
Website http://www.grlmc.com
AIMS:
LATA is a yearly conference on theoretical computer science and its applications. As it is linked to the International PhD School in Formal Languages and Applications that has been developed at the host institute since 2001, LATA 2008 will reserve significant room for young computer scientists at the beginning of their careers. It will aim at attracting scholars from both classical theory fields and application areas (bioinformatics, systems biology, language technology, artificial intelligence, etc.).
SCOPE:
Topics of either theoretical or applied interest include, but are not limited to:
- words, languages and automata
- grammars (Chomsky hierarchy, contextual, multidimensional, unification, categorial, etc.)
- grammars and automata architectures
- extended automata
- combinatorics on words
- language varieties and semigroups
- algebraic language theory
- computability
- computational and structural complexity
- decidability questions on words and languages
- patterns and codes
- symbolic dynamics
- regulated rewriting
- trees, tree languages and tree machines
- term rewriting
- graphs and graph transformation
- power series
- fuzzy and rough languages
- cellular automata
- DNA and other models of bio-inspired computing
- symbolic neural networks
- quantum, chemical and optical computing
- biomolecular nanotechnology
- automata and logic
- algorithms on automata and words
- automata for system analysis and programme verification
- automata, concurrency and Petri nets
- parsing
- weighted machines
- transducers
- foundations of finite state technology
- grammatical inference and algorithmic learning
- text retrieval, pattern matching and pattern recognition
- text algorithms
- string and combinatorial issues in computational biology and bioinformatics
- mathematical evolutionary genomics
- language-based cryptography
- data and image compression
- circuits and networks
- language-theoretic foundations of artificial intelligence and artificial life
- digital libraries
- document engineering
STRUCTURE:
LATA 2008 will consist of:
- 3 invited talks (to be announced in the second call for papers)
- 2 tutorials (to be announced in the second call for papers)
- refereed contributions
- open sessions for discussion in specific subfields or on professional issues
SUBMISSIONS:
Authors are invited to submit papers presenting original and unpublished research. Papers should not exceed 12 pages and should be formatted according to the usual LNCS article style. Submissions have to be sent through the web page.
PUBLICATION:
A volume of proceedings (expected to be LNCS) will be available by the time of the conference. A refereed volume of selected extended papers will be published soon after the conference as a special issue of a major journal.
REGISTRATION:
Registration will be open from January 7 to March 13, 2008. Details about how to register will be provided on the website of the conference.
Early registration fees: 250 euros
Early registration fees (PhD students): 100 euros
Registration fees: 350 euros
Registration fees (PhD students): 150 euros
FUNDING:
25 grants covering partial-board accommodation will be available for non-local PhD students. To apply, candidates must e-mail their CV together with a copy of a document proving their status as a PhD student.
IMPORTANT DATES:
Paper submission: November 16, 2007
Application for funding (PhD students): December 7, 2007
Notification of funding acceptance or rejection: December 21, 2007
Notification of paper acceptance or rejection: January 18, 2008
Early registration: February 1, 2008
Final version of the paper for the proceedings: February 15, 2008
Starting of the conference: March 13, 2008
Submission to the journal issue: May 23, 2008
FURTHER INFORMATION:
E-mail
Website http://www.grlmc.com
ADDRESS:
LATA 2008
Research Group on Mathematical Linguistics
Rovira i Virgili University
Plaza Imperial Tarraco, 1
43005 Tarragona, Spain
Phone: +34-977-559543
Fax: +34-977-559597 -
CfP- LREC 2008 - 6th Language Resources and Evaluation Conference
Palais des Congrès Mansour Eddahbi, MARRAKECH - MOROCCO
MAIN CONFERENCE: 28-29-30 MAY 2008
WORKSHOPS and TUTORIALS: 26-27 MAY and 31 MAY- 1 JUNE 2008
Conference web site
The sixth international conference on Language Resources and Evaluation (LREC) will be organised in 2008 by ELRA in cooperation with a wide range of international associations and organisations.
CONFERENCE TOPICS
Issues in the design, construction and use of Language Resources (LRs): text, speech, multimodality
- Guidelines, standards, specifications, models and best practices for LRs
- Methodologies and tools for LRs construction and annotation
- Methodologies and tools for the extraction and acquisition of knowledge
- Ontologies and knowledge representation
- Terminology
- Integration between (multilingual) LRs, ontologies and Semantic Web technologies
- Metadata descriptions of LRs and metadata for semantic/content markup
Exploitation of LRs in different types of systems and applications
- For: information extraction, information retrieval, speech dictation, mobile communication, machine translation, summarisation, web services, semantic search, text mining, inferencing, reasoning, etc.
- In different types of interfaces: (speech-based) dialogue systems, natural language and multimodal/multisensorial interactions, voice activated services, etc.
- Communication with neighbouring fields of applications, e.g. e-government, e-culture, e-health, e-participation, mobile applications, etc.
- Industrial LRs requirements, user needs
Issues in Human Language Technologies evaluation
- HLT Evaluation methodologies, protocols and measures
- Validation, quality assurance, evaluation of LRs
- Benchmarking of systems and products
- Usability evaluation of HLT-based user interfaces, interactions and dialog systems
- Usability and user satisfaction evaluation
General issues regarding LRs & Evaluation
- National and international activities and projects
- Priorities, perspectives, strategies in national and international policies for LRs
- Open architectures
- Organisational, economical and legal issues
Special Highlights
LREC targets the integration of different types of LRs - spoken, written, and other modalities - and of the respective communities. To this end, LREC encourages submissions covering issues which are common to different types of LRs and language technologies.
LRs are currently developed and deployed in a much wider range of applications and domains. LREC 2008 recognises the need to encompass all those data that interact with language resources in an attempt to model more complex human processes and develop more complex systems, and encourages submissions on topics such as:
- Multimodal and multimedia systems, for Human-Machine interfaces, Human-Human interactions, and content processing
- Resources for modelling language-related cognitive processes, including emotions
- Interaction/Association of language and perception data, also for robotic systems
The Scientific Programme will include invited talks, oral presentations, poster and demo presentations, and panels. There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered.
SUBMISSIONS AND DATES
Submitted abstracts of papers for oral and poster or demo presentations should consist of about 1500-2000 words.
- Submission of proposals for oral and poster/demo papers: 31 October 2007
- Submission of proposals for panels, workshops and tutorials: 31 October 2007
The Proceedings on CD will include both oral and poster papers, in the same format. In addition, a Book of Abstracts will be printed. -
CALL for JEP/TALN/RECITAL 2008 - Avignon, France
JEP-TALN-RECITAL'08
- 27th Journées d'Etude sur la Parole (JEP'08)
- 15th Conference on Natural Language Processing (TALN'08)
- 10th Conference for Student Researchers in Natural Language Processing (RECITAL'08)
Université d'Avignon et des Pays de Vaucluse
Avignon, 9-13 June 2008. For the third time, after Nancy in 2002 and Fes in 2004, AFCP (Association Francophone pour la Communication Parlée) and ATALA (Association pour le Traitement Automatique des Langues) are jointly organizing their main conferences in order to bring together, in a single venue, the spoken and written language processing communities.
The young researchers' conference RECITAL'08 is also associated with this event.
Invited talks and a thematic oral session will be organized as plenary sessions common to the three conferences. Registration is common to the three events and participants will receive the full proceedings on CD-ROM. The official language is French.
Organized by LIA (Laboratoire Informatique d'Avignon), the event will take place from 9 to 13 June 2008 at the Université d'Avignon et des Pays de Vaucluse (city centre site - Sainte Marthe).
JEP 2008 is organized under the aegis of AFCP (Association Francophone pour la Communication Parlée), with the support of ISCA (International Speech Communication Association).
IMPORTANT DATES
Submission deadline: 11 February 2008
Notification to authors: 28 March 2008
Conference: 9-13 June 2008
TOPICS
Papers will address spoken communication and speech processing in all their aspects. Conference topics include, but are not limited to:
Speech production
Speech acoustics
Speech perception
Phonetics and phonology
Prosody
Speech recognition and understanding
Language and speaker recognition
Language models
Speech synthesis
Speech analysis, coding and compression
Applications with a spoken component (dialogue, indexing...)
Evaluation, corpora and resources
Psycholinguistics
Speech and language acquisition
Second language learning
Speech pathologies
SELECTION CRITERIA
Authors are invited to submit original research work that has not been published previously. Submitted contributions will be reviewed by at least two specialists of the field. Particular attention will be paid to:
- the importance and originality of the contribution,
- the correctness of the scientific and technical content,
- the critical discussion of the results, in particular with respect to other work in the field,
- the positioning of the work within the context of international research,
- the organization and clarity of the presentation,
- the relevance to the conference topics.
Selected papers will be published in the conference proceedings.
SUBMISSION PROCEDURE
Submitted papers must not exceed 4 pages in Times 10, two columns, A4 format. A LaTeX style sheet and a Word template are available on the conference website www.lia.univ-avignon.fr/jep-taln08/.
Papers must be submitted electronically via the conference website before 11 February 2008. Documents must be sent in PDF format only.
GRANTS
AFCP offers a number of grants for PhD students and young researchers wishing to take part in the conference; see www.afcp-parole.org/doc/bourses.htm
ISCA also provides financial support to young researchers participating in scientific events on speech and language; see www.isca-speech.org/grants.html
CALL FOR WORKSHOPS AND TUTORIALS
For the third time, after Nancy in 2002 and Fes in 2004, the French speech association AFCP and the French NLP association ATALA are jointly organising their main conferences in order to bring together the two research communities working in the fields of Speech and Natural Language Processing.
The conference will include oral and poster communications, invited conferences, workshops and tutorials. Workshop and tutorials will be held on June 13, 2008.
The official languages are French and English.
IMPORTANT DATES
Deadline for proposals: November 22nd 2007
Approval by the TALN committee: November 30th 2007
Final version for inclusion in the proceedings: April 4th 2008
Workshop and tutorials: June 13th 2008
OBJECTIVES
Workshops can be organized on any specific aspect of NLP. The aim of these sessions is to facilitate an in-depth discussion of this theme.
A workshop has its own president and its own program committee. The president is responsible for organizing a call for papers/participation and for coordinating the program committee. The organizers of the main TALN conference will only take charge of the usual practical details (rooms, coffee breaks, proceedings).
Workshops will be organized in parallel sessions on the last day of the conference (2 to 4 sessions of 1:30).
Tutorials will be held on the same day.
HOW TO SUBMIT
Workshop and Tutorial proposals will be sent by email to taln08@atala.org before November 22nd, 2007.
** Workshop proposals will contain an abstract presenting the proposed theme, the program committee list and the expected length of the session.
** Tutorial proposals will contain an abstract presenting the proposed theme, a list of all the speakers and the expected length of the session (1 or 2 sessions of 1:30).
The TALN program committee will make a selection of the proposals and announce it on November 30th, 2007.
FORMAT
Talks will be given in French or in English (for non-native speakers of French). Papers to be published in the proceedings must conform to the TALN style sheet, which is available on the conference web site. Workshop papers should not be longer than 10 pages in Times 12 (references included).
Contact: taln08@atala.org
-
4th IEEE Tutorial and Research Workshop on PERCEPTION AND INTERACTIVE TECHNOLOGIES FOR SPEECH-BASED SYSTEMS
June 16 - 18, 2008
Kloster Irsee, Germany
The 4th IEEE Tutorial and Research Workshop on PERCEPTION AND INTERACTIVE TECHNOLOGIES FOR SPEECH-BASED SYSTEMS (PIT08) will be held at Kloster Irsee in southern Germany from June 16 to June 18, 2008.
The workshop focuses on advanced speech-based human-computer interaction where various contextual factors are modelled and taken into account when users communicate with computers. This includes mechanisms, architectures, design issues, applications, evaluation and tools. Prototype and product demonstrations will be very welcome.
The workshop will bring together researchers from various disciplines such as, for example, computer science and engineering sciences, medical, psychological and neurosciences, as well as mathematics. It will provide a forum for the presentation of research and applications and for lively discussions among researchers as well as industrialists in different fields.
WORKSHOP THEMES
Papers may discuss theories, applications, evaluation, limitations, general tools and techniques. Discussion papers that critically evaluate approaches or processing strategies and prototype demonstrations are especially welcome.
- Speech recognition and semantic analysis
- Dialogue management models
- Adaptive dialogue modelling
- Recognition of emotions from speech, gestures, facial expressions and physiological data
- User modelling
- Planning and reasoning capabilities for coordination and conflict description
- Conflict resolution in complex multi-level decisions
- Multi-modality such as graphics, gesture and speech for input and output
- Fusion and information management
- Computer-supported collaborative work
- Attention selection and guidance
- Learning and adaptability
- Visual processing and recognition for advanced human-computer interaction
- Databases and corpora
- Psychophysiological evaluation and usability analysis
- Evaluation strategies and paradigms
- Prototypes and products
WORKSHOP PROGRAMME
The format of the workshop will be a non-overlapping mixture of oral and poster sessions. A number of tutorial lectures will be given by internationally recognised experts from the area of Perception and Interactive Technologies for Speech-Based Systems.
All poster sessions will be opened by an oral summary by the session chair. A number of poster sessions will be succeeded by a discussion session focussing on the subject of the session. It is our belief that this general format will ensure a lively and valuable workshop.
The organisers would like to encourage researchers and industrialists to take the opportunity to bring their applications as well as their demonstrator prototypes and design tools for demonstration to the workshop. If sufficient interest is shown, a special demonstrator/poster session will be organised and followed by a discussion session.
The official language of the workshop is English. At the opening of the workshop hardcopies of the proceedings, published in the LNCS/LNAI/LNBI Series by Springer, will be available.
TIMING AND DATES
February 10, 2008: Deadline for Long, Short and Demo Papers
March 15, 2008: Author notification
April 1, 2008: Deadline for final submission of accepted paper
April 18, 2008: Deadline for early bird registration
June 7, 2008: Final programme available on web
June 16 - 18, 2008: Workshop
Further information may be found on our workshop website.
CONTACT
Wolfgang Minker
University of Ulm
Department of Information Technology
Albert-Einstein-Allee 43
D-89081 Ulm
Phone: +49 731 502 6254/-6251
Fax: +49 691 330 3925516 -
JHU Summer Workshop on Language Engineering
JHU Summer Workshops
CALL FOR TEAM RESEARCH PROPOSALS
Deadline: Wednesday, October 17, 2007
The Center for Language and Speech Processing at Johns Hopkins University invites one-page research proposals for a Summer Workshop on Language Engineering, to be held in Baltimore, MD, USA, July 7 to August 14, 2008.
Workshop proposals should be suitable for a six-week team exploration, and should aim to advance the state of the art in any of the various fields of Language Engineering including speech recognition, machine translation, information retrieval, text summarization and question answering. Research topics selected for investigation by teams in previous workshops may serve as good examples for your proposal. (See http://www.clsp.jhu.edu/workshops.)
This year's workshop will be sponsored by NSF and supported in part by the newly established Human Language Technology Center of Excellence (CoE). All relevant topics of scientific interest are welcomed. Proposals can receive special priority if they contribute to one of the following long-term challenges:
* AUTOMATIC POPULATION OF A KNOWLEDGE BASE FROM TEXT: Devise and develop technology to automatically populate a large knowledge base (KB) by accumulating entities, events, and relations from vast quantities of text from various formal and informal genres in multiple languages. Devise methods to do this effectively for resource rich and/or resource poor languages. The aim is to disambiguate and normalize entities, events, and relations in such a way that the KB could represent changes over time thus reflecting text sources.
* ROBUST TECHNOLOGY FOR SPEECH: Technologies like speech-to-text, speaker identification, and language identification share a common weakness: accuracy degrades disproportionately with changes in input (microphone, genre, speaker, etc.). Seemingly small amounts of noise or diverse data sources cause machines to break where humans would quickly and effectively adapt. The aim is to develop technology whose performance would be minimally degraded by input signal variations.
* PARALLEL PROCESSING FOR SPEECH AND LANGUAGE: A broad variety of pattern recognition problems in speech and language require a large amount of computation and must be run on a large amount of data. There is a need to optimize these algorithms to increase throughput and improve cost effectiveness. Proposals are invited both for novel parallelizable algorithms and for hardware configurations that achieve higher throughput or lower speed-power product than can be achieved by optimizing either alone.
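The parallel-processing challenge above is a statement of need rather than a recipe, but the structure it exploits is easy to picture: many speech front-ends are embarrassingly parallel across utterances. The Python sketch below is only an illustration under assumed parameters (16 kHz signals, a toy log-energy feature, four worker processes); it is not part of the call, and any real proposal would target far heavier computation and hardware-aware optimization.

# Minimal illustration: independent utterances processed in parallel worker processes.
import numpy as np
from multiprocessing import Pool

FRAME, HOP = 400, 160   # 25 ms frames, 10 ms hop at an assumed 16 kHz sample rate

def log_energy(signal):
    """Per-frame log energy, a lightweight stand-in for a heavier front-end (e.g. MFCCs)."""
    n_frames = 1 + (len(signal) - FRAME) // HOP
    frames = np.stack([signal[i * HOP:i * HOP + FRAME] for i in range(n_frames)])
    return np.log(np.sum(frames ** 2, axis=1) + 1e-10)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    utterances = [rng.standard_normal(16000 * 3) for _ in range(32)]  # 32 synthetic 3 s clips
    with Pool(processes=4) as pool:                  # one worker per core; 4 assumed here
        features = pool.map(log_energy, utterances)  # utterances are independent, so map in parallel
    print(len(features), features[0].shape)

Scaling the worker pool to the available cores is the simplest throughput lever; the harder questions raised by the call concern novel parallelizable algorithms and hardware configurations whose speed-power product beats such naive data parallelism.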
An independent panel of experts will screen all received proposals for suitability. Results of this screening will be communicated no later than October 19, 2007. Authors passing this initial screening will be invited to Baltimore to present their ideas to a peer-review panel on November 2-4, 2007. It is expected that the proposals will be revised at this meeting to address any outstanding concerns or new ideas. Two or three research topics and the teams to tackle them will be selected for the 2008 workshop.
We attempt to bring the best researchers to the workshop to collaboratively pursue the selected topics for six weeks. Authors of successful proposals typically become the team leaders. Each topic brings together a diverse team of researchers and students. The senior participants come from academia, industry and government. Graduate student participants familiar with the field are selected in accordance with their demonstrated performance, usually by the senior researchers. Undergraduate participants, selected through a national search, will be rising seniors who are new to the field and have shown outstanding academic promise.
If you are interested in participating in the 2008 Summer Workshop, we ask that you submit a one-page research proposal for consideration, detailing the problem to be addressed. If your proposal passes the initial screening, we will invite you to join us for the organizational meeting in Baltimore (as our guest) for further discussions aimed at consensus. If a topic in your area of interest is chosen as one of the two or three to be pursued next summer, we expect you to be available for participation in the six-week workshop. We are not asking for an ironclad commitment at this juncture, just a good faith understanding that if a project in your area of interest is chosen, you will actively pursue it.
Proposals should be submitted via e-mail to clsp@jhu.edu by 5PM ET on Wed, October 17, 2007.
-
Calls for EUSIPCO 2008-Lausanne Switzerland
CALL FOR PAPERS
CALL FOR SPECIAL SESSIONS AND CALL FOR TUTORIALS
EUSIPCO-2008 - 16th European Signal Processing Conference
August 25-29, 2008, Lausanne, Switzerland - http://www.eusipco2008.org
The 2008 European Signal Processing Conference (EUSIPCO-2008) is the sixteenth in a series of conferences promoted by EURASIP, the European Association for Signal, Speech, and Image Processing (www.eurasip.org). Formerly biennial, the conference is now a yearly event. This edition will take place in Lausanne, Switzerland, organized by the Swiss Federal Institute of Technology, Lausanne (EPFL).
EUSIPCO-2008 will focus on the key aspects of signal processing theory and applications. Exploration of new avenues and methodologies of signal processing will also be encouraged. Accepted papers will be published in the Proceedings of EUSIPCO-2008. Acceptance will be based on quality, relevance and originality. Proposals for special sessions and tutorials are also invited.
For the first time, access to the tutorials will be free to all registered participants!
IMPORTANT DATES:
Proposals for Special Sessions: December 7, 2007
Proposals for Tutorials: February 8, 2008
Electronic submission of Full papers (5 pages A4): February 8, 2008
Notification of Acceptance: April 30, 2008
Conference: August 25-29, 2008
More details on how to submit papers and proposals for special sessions and tutorials can be found on the conference web site http://www.eusipco2008.org
Prof. Jean-Philippe Thiran
EPFL - Signal Processing Institute
EUSIPCO-2008 General Chair
-
International Seminar on Speech Production ISSP 2008
The International Seminar on Speech Production (ISSP-2008) will be held near Strasbourg, France (in Haguenau, 25 km north of Strasbourg) in December 2008, from Monday the 8th to Friday the 12th.
Please take note of these dates; more detailed information will be provided in a couple of weeks.
Back to Top