Editor: Chris Wellekens
Dear Members,
In this issue, you will find many new job offers and new conference announcements. Please pay special attention to the upcoming submission deadlines.
I remind you that ISCApad is published monthly. Consequently, it is difficult to inform members of last-minute extensions of submission deadlines.
May I ask those of you who send me job openings to specify, if possible, the expiration date of the offer, or, if the offer remains valid until the position is filled, to inform me as soon as it is filled. I try to keep only current offers in the job listings.
Professor em. Chris Wellekens
Institut Eurecom France
Back to Top
ISCA News
-
GOOGLE SCHOLAR AND ISCA ARCHIVE
Back to Top
Google Scholar and the ISCA Archive
The indexing of the ISCA Archive (http://www.isca-speech.org/archive/) by the Google Scholar search engine (http://scholar.google.com/) is now thorough enough to be quite useful, so this seems like a good time to give an overview of the service. Google Scholar is a research literature search engine; unlike general-purpose search engines, it can search the full text of ISCA papers. Google Scholar's citation tracking shows which papers have cited a particular paper, which can be very useful for finding follow-up work, related work and corrections. More details about these and other features are given below.
The titles, author lists, and abstracts of ISCA Archive papers are all on the public web, so they can be searched by a general-purpose search engine such as Google. However, the full texts of most ISCA papers are password protected and thus cannot be searched with a general-purpose search engine. Google Scholar, through an arrangement with ISCA, has access to the full text of ISCA papers. Google Scholar has similar arrangements with many other publishers. (On the other hand, general-purpose search engines index all sorts of web pages and other documents accessible through the public web, many of which will not be in the Google Scholar index. So it's often useful to perform the same search using both Google Scholar and a general-purpose search engine.)
Google Scholar automatically extracts citations from the full text of papers. It uses this information to provide a "Cited by" list for each paper in the Google Scholar index. This is a list of papers that have cited that paper. Google Scholar also provides an automatically generated "Related Articles" list for each paper. The "Cited by" and "Related Articles" lists are powerful tools for discovering relevant papers. Furthermore, the length of a paper's "Cited by" list can be used as a convenient (although imperfect) measure of the paper's impact. Discussions about the subtleties of using Google Scholar to measure impact can be found at http://www.harzing.com/resources.htm#/pop_gs.htm and http://blogs.nature.com/nautilus/2007/07/google_scholar_as_a_measure_of.html.
It's possible to restrict Google Scholar searches to papers published by ISCA by using Google Scholar's Advanced Search feature and entering "ISCA" in the "Return articles published in" field. If "ISCA" is entered in that field, and nothing is entered in the main search field, then the search results will show what ISCA papers are the most highly cited.
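For readers who prefer to script such queries, the Advanced Search form simply encodes its fields as URL parameters. The short Python sketch below builds such a query URL; the parameter name as_publication is an assumption based on what the Advanced Search form submits at the time of writing, not a documented API, so treat the sketch as illustrative only.

    # Hedged sketch: build a Google Scholar query URL restricted to ISCA
    # publications. "as_publication" mirrors the "Return articles published
    # in" field of the Advanced Search form (an assumption, not a documented API).
    from urllib.parse import urlencode

    def scholar_query_url(terms, publication="ISCA"):
        params = {"q": terms, "as_publication": publication}
        return "http://scholar.google.com/scholar?" + urlencode(params)

    print(scholar_query_url("speaker adaptation"))
    # Empty search terms rank the publication's papers by citation count.
    print(scholar_query_url(""))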
It should be noted that there are many papers on ISCA-related topics which are not in the Google Scholar index. For example, many ICPhS papers appear to be missing. Old papers which have been scanned in from paper copies will either not have their full contents indexed, or will be indexed using imperfect OCR technology. Furthermore, as of November 2007 the indexing of the ISCA Archive by Google Scholar is still not 100% complete. There are a few areas which are not perfectly indexed, but the biggest planned improvement is to start using OCR for the ISCA papers which have been scanned in from paper copies.
There may be a time lag between when a new event is added to the ISCA Archive in the future and when it appears in the Google Scholar index. This time lag may be longer than the usual lag of general-purpose search engines such as Google, because ISCA must create Google Scholar catalog data for every new event and because the Google Scholar index seems to update considerably more slowly than the Google index.
Acknowledgements: ISCA's arrangement with Google Scholar is a project of students Rahul Chitturi, Tiago Falk, David Gelbart, Agustin Gravano, and Francis Tyers, ISCA webmaster Matt Bridger, and ISCA Archive coordinator Wolfgang Hess. Our thanks to Google's Christian DiCarlo and Darcy Dapra, and the rest of the Google Scholar team.
SIGs' activities
-
A list of Speech Interest Groups can be found on our website.
Back to Top
Courses, Internships
-
Motorola Labs - Center for Human Interaction Research (CHIR)
located in Schaumburg, Illinois, USA,
is offering summer intern positions in 2008 (12 weeks each).
CHIR's mission
Our research lab develops technologies that provide effortless access to rich communication, media and
information services, based on natural, intelligent interaction. Our research
aims at systems that adapt automatically and proactively to changing environments, device
capabilities and continually evolving knowledge about the user.
Intern profiles
1) Acoustic environment/event detection and classification.
The successful candidate will be a PhD student near the end of his/her PhD studies, skilled
in signal processing and/or pattern recognition, with knowledge of Linux and C/C++ programming.
Candidates with knowledge of acoustic environment/event classification are preferred.
2) Speaker adaptation for applications on speech recognition and spoken document retrieval
The successful candidate must currently be pursuing a Ph.D. degree in EE or CS, with a thorough
understanding of and hands-on experience in research related to automatic speech recognition.
Proficiency in a Linux/Unix working environment and C/C++ programming and a strong GPA are
required. A strong background in speaker adaptation is highly preferred.
3) Development of voice search-based web applications on a smartphone
We are looking for an intern candidate to help create an "experience" prototype based on our
voice search technology. The app will be deployed on a smartphone and demonstrate intuitive and
rich interaction with web resources. This intern project is oriented more towards software engineering
than research. We target an intern with a master's degree and a strong software engineering background.
Mastery of C++ and experience with web programming (AJAX and web services) are required.
Development experience on Windows CE/Mobile is desired.
4) Integrated Voice Search Technology For Mobile Devices
The candidate should be proficient in information retrieval, pattern recognition and speech recognition.
He/she should be able to program in C++ and in scripting languages such as Python or Perl in a Linux
environment, and should have knowledge of information retrieval or search engines.
We offer competitive compensation, a fun work environment and Chicago-style pizza.
If you are interested, please send your resume to:
Dusan Macho, CHIR-Motorola Labs
Email: dusan [dot] macho [at] motorola [dot] com
Tel: +1-847-576-6762
Back to Top -
Studentships in Human Language Technology
*** Studentships available for 2008/9 ***
One-Year Masters Course in HUMAN LANGUAGE TECHNOLOGY
Department of Computer Science
The University of Sheffield - UK
The Sheffield MSc in Human Language Technology (HLT) has been carefully tailored
to meet the demand for graduates with the highly-specialised multi-disciplinary skills
that are required in HLT, both as practitioners in the development of HLT applications
and as researchers into the advanced capabilities required for next-generation HLT
systems. The course provides a balanced programme of instruction across a range
of relevant disciplines including speech technology, natural language processing and
dialogue systems. The programme is taught in a research-led environment.
This means that you will study the most advanced theories and techniques in the field,
and have the opportunity to use state-of-the-art software tools. You will also have
opportunities to engage in research-level activity through in-depth exploration of
chosen topics and through your dissertation. As well as readying yourself for
employment in the HLT industry, this course is also an excellent introduction to the
substantial research opportunities for doctoral-level study in HLT.
*** A number of studentships are available, on a competitive basis, to suitably
qualified applicants. These awards pay a stipend in addition to the course fees.
*** For further details of the course,
see ... http://www.shef.ac.uk/dcs/postgrad/taught/hlt
For information on how to apply
see ... http://www.shef.ac.uk/dcs/postgrad/taught/apply.html
Back to Top
Books, Databases, Software
-
Reviewing a book?
The author of the book Advances in Digital Speech Transmission told me that you might be interested in reviewing the book. If so, I would be pleased to send you a free review copy. Please just reply to this email and let me know the address to which I should send the book.
Back to Top
Martin, Rainer / Heute, Ulrich / Antweiler, Christiane
Advances in Digital Speech Transmission
1st edition, January 2008
572 pages, hardcover
99.90 Euro
- Practical Approach Book -
ISBN-10: 0-470-51739-5
ISBN-13: 978-0-470-51739-0 - John Wiley & Sons
Best regards
Tina Heuberger
----------------------------------------------------
Public Relations Associate
Physical Sciences and Life Sciences Books
Wiley-Blackwell
Wiley-VCH Verlag GmbH & Co. KGaA
Boschstr. 12
69469 Weinheim
Germany
phone +49/6201/606-412
fax +49/6201/606-223
mailto:theuberger@wiley-vch.de -
Books
La production de la parole
Author: Alain Marchal, Universite d'Aix en Provence, France
Publisher: Hermes Lavoisier
Year: 2007

Speech Enhancement: Theory and Practice
Author: Philipos C. Loizou, University of Texas, Dallas, USA
Publisher: CRC Press
Year: 2007

Speech and Language Engineering
Editor: Martin Rajman
Publisher: EPFL Press, distributed by CRC Press
Year: 2007

Human Communication Disorders / Speech Therapy
This interesting series is listed on the Wiley website.

Incursoes em torno do ritmo da fala
Author: Plinio A. Barbosa
Publisher: Pontes Editores (Campinas)
Year: 2006 (released 11/24/2006)
(In Portuguese, abstract attached.) Website

Speech Quality of VoIP: Assessment and Prediction
Author: Alexander Raake
Publisher: John Wiley & Sons, UK-Chichester, September 2006
Website

Self-Organization in the Evolution of Speech, Studies in the Evolution of Language
Author: Pierre-Yves Oudeyer
Publisher: Oxford University Press
Website

Speech Recognition Over Digital Channels
Authors: Antonio M. Peinado and Jose C. Segura
Publisher: Wiley, July 2006
Website

Multilingual Speech Processing
Editors: Tanja Schultz and Katrin Kirchhoff
Publisher: Elsevier Academic Press, April 2006
Website

Reconnaissance automatique de la parole: Du signal a l'interpretation
Authors: Jean-Paul Haton, Christophe Cerisara, Dominique Fohr, Yves Laprie, Kamel Smaili
392 pages
Publisher: Dunod
Back to Top -
News from LDC
- 50,000th LDC Corpus Distributed!
- LDC at the ALA Midwinter Meeting
- Survey Responses Are In!
New publications:
- LDC2008T03: ACE 2005 English SpatialML Annotations
- LDC2008S01: CSLU: Portland Cellular Telephone Speech Version 1.3
- LDC2008T01: Hungarian-English Parallel Text, Version 1.0
50,000th LDC Corpus Distributed!
Last year marked the LDC's 15th Anniversary Year and it proved to be an exciting one for the LDC. We commemorated this anniversary with a Fidelity Celebration which rewarded our loyal members who continually support the consortium through membership. Additionally, we provided our mailing list readers with a glimpse into the research activities at the LDC through each of our monthly Spotlights.
At the very end of our anniversary year, the LDC observed another significant milestone: the distribution of our 50,000th publication! This corpus was licensed by Helsinki University of Technology, Adaptive Informatics Research Centre (AIRC). AIRC's research includes basic algorithmic analysis, multimodal interfaces (speech, vision and language), bioinformatics, neuroinformatics and computational cognitive systems. In appreciation, the LDC is offering Helsinki University of Technology a US$2000 benefit to be used towards membership or data licensing fees.
We would like to thank both members and nonmembers for helping the LDC reach this landmark distribution. Your persistent demand for LDC data supports our mission to develop and share resources for research in human language technologies.
LDC at the ALA Midwinter Meeting
The LDC was delighted to attend the American Library Association's (ALA) Midwinter Conference here in Philadelphia from 11-14 January 2008 and to meet more members of our community. We demonstrated the search capabilities of the LDC Catalog and LDC Online and provided attendees with insight into our diverse publications and membership options. We would like to thank everyone who came by the LDC display at booth #239 and to invite all ALA attendees to contact us with any follow-up questions. Please read more about the Midwinter Conference on the ALA's homepage.
Survey Responses Are In!
The LDC is pleased to announce the results of LDC's 2007 Member Survey. We sent the survey to all those who received LDC data in 2006 and 2007 (members and nonmembers), a total of nearly 1700 recipients. The survey was customized to respondents' affiliation with the LDC (Standard, Subscription or Former Members and Non-Members) and focused on a few key issues:
- Satisfaction levels with LDC's data, homepage and Catalog
- Satisfaction levels with LDC Memberships (where applicable)
- Suggestions for future data releases and publication options
Those who responded to the survey are generally satisfied with their membership benefits and the LDC catalog and homepage. Nevertheless, some of the individuals surveyed indicated areas for improvement and we will be evaluating each response and replying to your queries within the next few weeks.
To survey respondents: Thank you for your participation! You will be receiving a more detailed evaluation of the survey shortly along with the announcement of the lucky winner of the $500 benefit.
New Publications
(1) The ACE (Automatic Content Extraction) program focuses on developing automatic content extraction technology to support automatic processing of human language in text form. The kind of information recognized and extracted from text includes entities, values, temporal expressions, relations and events. SpatialML is a mark-up language for representing spatial expressions in natural language documents. SpatialML's focus is primarily on geography and culturally-relevant landmarks, rather than biology, cosmology, geology, or other regions of the spatial language domain. The goal is to allow for potentially better integration of text collections with resources such as databases that provide spatial information about a domain, including gazetteers, physical feature databases and mapping services. In ACE 2005 English SpatialML Annotations, the authors applied SpatialML tags to the English training data (originally annotated for entities, relations and events) in ACE 2005 Multilingual Training Corpus, LDC2006T06.
The main SpatialML tag is the PLACE tag. The central goal of SpatialML is to map PLACE information in text to data from gazetteers and other databases to the extent possible. Therefore, semantic attributes such as country abbreviations, country subdivision and dependent area abbreviations (e.g., US states), and geo-coordinates are used to help establish such a mapping. LINK and PATH tags express relations between places, such as inclusion relations and trajectories of various kinds. To the extent possible, SpatialML leverages ISO and other standards towards the goal of making the scheme compatible with existing and future corpora. The SpatialML guidelines are compatible with existing guidelines for spatial annotation and existing corpora within the ACE research program. ACE 2005 English SpatialML Annotations is distributed via web download.
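As a purely illustrative sketch of what such annotations look like, the following Python fragment parses a schematic SpatialML-style document with the standard library's XML parser. The PLACE and LINK tag names come from the description above; the attribute names (id, country, latLong, source, target, linkType) are assumptions chosen for illustration rather than a statement of the official SpatialML attribute set.

    # Hedged sketch: walk a schematic SpatialML-like fragment.
    # Tag names follow the description above; attribute names are
    # illustrative assumptions, not the official SpatialML schema.
    import xml.etree.ElementTree as ET

    fragment = """<doc>A meeting was held in <PLACE id="p1" country="US"
    latLong="39.95 -75.17">Philadelphia</PLACE>, a city in <PLACE id="p2"
    country="US">Pennsylvania</PLACE>. <LINK source="p1" target="p2"
    linkType="IN"/></doc>"""

    root = ET.fromstring(fragment)
    for place in root.iter("PLACE"):  # every annotated place name
        print(place.get("id"), place.text, place.get("country"), place.get("latLong"))
    for link in root.iter("LINK"):    # relation between two places
        print(link.get("source"), link.get("linkType"), link.get("target"))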
2008 Subscription Members will automatically receive two copies of this corpus on disc. 2008 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$1000.
(2) CSLU: Portland Cellular Telephone Speech Version 1.3 was created by the Center for Spoken Language Understanding (CSLU) at OGI School of Science and Engineering, Oregon Health and Science University, Beaverton, Oregon. It consists of cellular telephone speech and corresponding transcripts, specifically, 7,571 utterances from 515 speakers who made calls in the Portland, Oregon area using cellular telephones.
Speakers called the CSLU data collection system on cellular telephones, and they were asked to repeat certain phrases and to respond to other prompts. Two prompt protocols were used: an In Vehicle Protocol for speakers calling from inside a vehicle and a Not in Vehicle Protocol for those calling from outside a vehicle. The protocols shared several questions, but each protocol contained distinct queries designed to probe the conditions of the caller's in vehicle/not in vehicle surroundings. Not every caller provided a response to each prompt.
The text transcriptions were produced using the non time-aligned word-level conventions described in The CSLU Labeling Guide, which is included in the documentation for this release. The corpus contains both orthographic and phonetic transcriptions of corresponding speech files. CSLU: Portland Cellular Telephone Speech Version 1.3 is distributed on one CD-ROM.
2008 Subscription Members will automatically receive two copies of this corpus, provided that they have submitted a signed copy of the LDC User Agreement for CSLU Corpora. 2008 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$150.
(3) Hungarian-English Parallel Text, Version 1.0 (also known as the "Hunglish Corpus") is a sentence-aligned Hungarian-English parallel corpus consisting of approximately two million sentence pairs. The corpus contains additional language resources for the Hungarian text, including a monolingual corpus, morphological toolset and aligner. Hungarian-English Parallel Text, Version 1.0 is a joint work of the Media Research and Education Center at the Budapest University of Technology and Economics (BUTE) and the Corpus Linguistics Department at the Hungarian Academy of Sciences Institute of Linguistics.
Sentence pair (.bi) files consist of tab-separated, matching sentence pairs. The .bi files do not contain segments where deletion or contraction occurred. They are also filtered based on quality, so the full reconstruction of the raw texts is impossible. Some .bi files were shuffled (sorted alphabetically).
Alignment "ladder" (.lad) files preserve the whole of both input texts with ordering, even those segments that were not successfully aligned. In .lad files, every line is tab-separated into two columns. The first is a segment of the Hungarian text. The second is a (supposedly corresponding) segment of the English text. Such segments of the source or target text will generally consist of exactly one sentence on both sides, but can also consist of zero, or more than one, sentence. Hungarian-English Parallel Text, Version 1.0 is distributed on one CD-ROM.
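Both file types are line-oriented and tab-separated, so they are straightforward to process. The sketch below, written against a hypothetical file name and an assumed UTF-8 encoding (check the release documentation for the actual character encoding), reads a .lad file and separates fully aligned pairs from segments left empty on one side; the same loop works for .bi files, where both columns are always non-empty.

    # Hedged sketch for the tab-separated .lad format described above.
    # "corpus.lad" is a hypothetical file name; utf-8 is an assumed encoding.
    aligned, one_sided = [], []
    with open("corpus.lad", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 2:
                continue  # skip malformed lines in this sketch
            hu, en = parts  # Hungarian column first, English second
            # In a .lad file either column may hold zero sentences.
            (aligned if hu and en else one_sided).append((hu, en))
    print(len(aligned), "aligned pairs;", len(one_sided), "one-sided segments")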
2008 Subscription Members will automatically receive two copies of this corpus, provided that they have submitted a signed copy of the User License Agreement for Hungarian-English Parallel Text, Version 1. 2008 Standard Members may request a copy as part of their 16 free membership corpora. Nonmembers may license this data for US$1000.
Job openings
-
We invite all laboratories and industrial companies with job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (Also have a look at http://www.isca-speech.org/jobs.html as well as the ELSNET Jobs page at http://www.elsnet.org/.)
Back to Top -
Speech Engineer/Senior Speech Engineer at Microsoft, Mountain View, CA,USA
Job Type: Full-Time
Send resume to Bruce Buntschuh
Responsibilities:
Tellme, now a subsidiary of Microsoft, is a company focused on delivering the highest quality voice-recognition-based applications while providing the highest possible automation to its clients. Central to this focus are the accuracy and performance of the speech recognition used by these applications. The candidate will be responsible for the development, performance analysis, and optimization of grammars, as well as overall speech recognition accuracy, in a wide variety of real-world applications in all major market segments. This is a unique opportunity to apply and extend state-of-the-art speech recognition technologies to emerging spaces such as information search on mobile devices.
Requirements:
· Strong background in engineering, linguistics, mathematics, machine learning, and/or computer science.
· In depth knowledge and expertise in the field of speech recognition.
· Strong analytical skills with a determination to fully understand and solve complex problems.
· Excellent spoken and written communication skills.
· Fluency in English (Spanish a plus).
· Programming capability with scripting tools such as Perl.
Education:
MS, PhD, or equivalent technical experience in an area such as engineering, linguistics, mathematics, or computer science. -
Speech Technology and Software Development Engineer at Microsoft Redmond WA, USA
Speech Technology and Software Development Engineer
Speech Technologies and Modeling
Speech Component Group
Microsoft Corporation
Redmond WA, USA
Please contact: Yifan.Gong@microsoft.com
Microsoft's Speech Component Group has been working on automatic speech recognition (SR) in real environments. We develop SR products for multiple languages for mobile devices, desktop computers, and communication servers. The group now has an open position for speech scientists with a software development focus to work on our acoustic and language modeling technologies. The position offers great opportunities for innovation and for technology and product development.
Responsibilities:
· Design and implement speech/language modeling and recognition algorithms to improve recognition accuracy.
· Create, optimize and deliver quality speech recognition models and other components tailored to our customers' needs.
· Identify, investigate and solve challenging problems in the areas of recognition accuracy from speech recognition system deployments.
· Improve the speech recognition language-expansion engineering process to ensure product quality and scalability.
Required competencies and skills:
· Passion for speech technology and quality software, and demonstrated ability in the design and implementation of speech recognition algorithms.
· Strong desire for achieving excellent results, strong problem solving skills, ability to multi-task, handle ambiguities, and identify issues in complex SR systems.
· Good software development skills, including strong aptitude for software design and coding. 3+ years of experience in C/C++ and programming with scripting languages are highly desirable.
· MS or PhD degree in Computer Science, Electrical Engineering, Mathematics, or related disciplines, with strong background in speech recognition technology, statistical modeling, or signal processing.
· A track record of developing SR algorithms, or experience in linguistics/phonetics, is a plus. -
PhD Research Studentship in Spoken Dialogue Systems- Cambridge UK
Applications are invited for an EPSRC sponsored studentship in Spoken Dialogue Systems leading to the PhD degree. The student will join a team led by Professor Steve Young working on statistical approaches to building Spoken Dialogue Systems. The overall goal of the team is to develop complete working end-to-end systems which can be trained from real data and which can be continually adapted on-line. The PhD work will focus specifically on the use of Partially Observable Markov Decision Processes for dialogue modelling and techniques for learning and adaptation within that framework. The work will involve statistical modelling, algorithm design and user evaluation. The successful candidate will have a good first degree in a relevant area. Good programming skills in C/C++ are essential and familiarity with Matlab would be useful.
The studentship will be for 3 years, starting in October 2007 or January 2008. The studentship covers University and College fees at the Home/EU rate and a maintenance allowance of 13,000 pounds per annum. Potential applicants should email Steve Young with a brief CV and a statement of interest in the proposed work area.
Back to Top -
AT&T Labs - Research: Research Staff Positions - Florham Park, NJ
AT&T Labs - Research is seeking exceptional candidates for Research Staff positions. AT&T is the premier broadband, IP, entertainment, and wireless communications company in the U.S. and one of the largest in the world. Our researchers are dedicated to solving real problems in speech and language processing, and are involved in inventing, creating and deploying innovative services. We also explore fundamental research problems in these areas. Outstanding Ph.D.-level candidates at all levels of experience are encouraged to apply. Candidates must demonstrate excellence in research, a collaborative spirit, and strong communication and software skills. Areas of particular interest are:
- Large-vocabulary automatic speech recognition
- Acoustic and language modeling
- Robust speech recognition
- Signal processing
- Speaker recognition
- Speech data mining
- Natural language understanding and dialog
- Text and web mining
- Voice and multimodal search
AT&T Companies are Equal Opportunity Employers. All qualified candidates will receive full and fair consideration for employment. More information and application instructions are available on our website at http://www.research.att.com/. Click on "Join us". For more information, contact Mazin Gilbert (mazin at research dot att dot com).
Back to Top -
Research Position in Speech Processing at UGent, Belgium
Background
Since March 2005, the universities of Leuven, Gent, Antwerp and Brussels have joined forces in a large research project called SPACE (SPeech Algorithms for Clinical and Educational applications). The project aims at contributing to the broader application of speech technology in educational and therapeutic software tools. More specifically, it pursues the automatic detection and classification of reading errors in the context of an automatic reading tutor, and the objective assessment of disordered speech (e.g. speech of the deaf, dysarthric speech, ...) in the context of computer-assisted speech therapy assessment. What is specific to the target applications is that the speech is either grammatically and lexically incorrect or atypically pronounced. Therefore, standard technology cannot be applied as such in these applications.
Job description
The person we are looking for will be in charge of the data-driven development of word mispronunciation models that can predict expected reading errors in the context of a reading tutor. These models must be integrated into the linguistic model of the prompted utterance, so that the speech recognizer becomes more specific in its detection and classification of presumed errors than a recognizer using a more traditional linguistic model with context-independent garbage and deletion arcs. A further challenge is to make the mispronunciation model adapt to the progress made by the user.
Profile
We are looking for a person from the EU with a creative mind and an interest in speech & language processing and machine learning. The work requires the ability to program algorithms in C and Python. Prior experience with Python is not a prerequisite (someone with some software experience is expected to learn it in a short time span). Demonstrated experience with speech & language processing and/or machine learning techniques will give you an advantage over other candidates.
The job is open to a pre-doctoral as well as a post-doctoral researcher who can start in November or December. The job runs until February 28, 2009, but a pre-doctoral candidate aiming for a doctoral degree will get opportunities to do follow-up research in related projects.
Interested persons should send their CV to Jean-Pierre Martens (martens@elis.ugent.be). There is no real deadline, but as soon as a suitable person is found, he/she will get the job.
Back to Top -
Summer Intern positions at Motorola, Schaumburg, Illinois, USA
Motorola Labs - Center for Human Interaction Research (CHIR), located in Schaumburg, Illinois, USA, is offering summer intern positions in 2008 (12 weeks each).
CHIR's mission:
Our research lab develops technologies that provide effortless access to rich communication, media and information services, based on natural, intelligent interaction. Our research aims at systems that adapt automatically and proactively to changing environments, device capabilities and continually evolving knowledge about the user.
Intern profiles:
1) Acoustic environment/event detection and classification.
The successful candidate will be a PhD student near the end of his/her PhD studies, skilled in signal processing and/or pattern recognition, with knowledge of Linux and C/C++ programming. Candidates with knowledge of acoustic environment/event classification are preferred.
2) Speaker adaptation for applications on speech recognition and spoken document retrieval.
The successful candidate must currently be pursuing a Ph.D. degree in EE or CS, with a thorough understanding of and hands-on experience in research related to automatic speech recognition. Proficiency in a Linux/Unix working environment and C/C++ programming and a strong GPA are required. A strong background in speaker adaptation is highly preferred.
3) Development of voice search-based web applications on a smartphone
We are looking for an intern candidate to help create an "experience" prototype based on our voice search technology. The app will be deployed on a smartphone and demonstrate intuitive and rich interaction with web resources. This intern project is oriented more towards software engineering than research. We target an intern with a master's degree and a strong software engineering background. Mastery of C++ and experience with web programming (AJAX and web services) are required. Development experience on Windows CE/Mobile is desired.
4) Integrated Voice Search Technology For Mobile Devices.
The candidate should be proficient in information retrieval, pattern recognition and speech recognition. He/she should be able to program in C++ and in scripting languages such as Python or Perl in a Linux environment, and should have knowledge of information retrieval or search engines.
We offer competitive compensation, a fun work environment and Chicago-style pizza.
If you are interested, please send your resume to:
Dusan Macho, CHIR-Motorola Labs
Email: dusan.macho@motorola.com
Tel: +1-847-576-6762
Back to Top -
Nuance: Software engineer speech dialog tools
In order to strengthen our Embedded ASR Research team, we are looking for a:
SOFTWARE ENGINEER SPEECH DIALOGUE TOOLS
As part of our team, you will be creating solutions for voice user interfaces for embedded applications on mobile and automotive platforms.
OVERVIEW:
- You will work in Nuance's Embedded ASR R&D team, developing technology, tools, and run-time software to enable our customers to develop and test embedded speech applications. Together with our team of speech and language experts, you will work on natural language dialogue systems for our customers in the Automotive and Mobile sector.
- You will work either at Nuance's Office in Aachen, a beautiful, old city right in the heart of Europe with great history and culture, or at Nuance's International Headquarters in Merelbeke, a small town just 5km away from the heart of the vibrant and picturesque city of Ghent, in the Flanders region of Belgium. Both Aachen and Ghent offer some of the most spectacular historic town centers in Europe, and are home to large international universities.
- You will work in an international company and cooperate with people at various locations in Europe, America and Asia. You may occasionally be asked to travel.
RESPONSIBILITIES:
- You will work on the development of tools and solutions for cutting edge speech and language understanding technologies for automotive and mobile devices.
- You will work on enhancing various aspects of our advanced natural language dialogue system, such as the layer of connected applications, the configuration setup, inter-module communication, etc.
- In particular, you will be responsible for the design, implementation, evaluation, optimization and testing, and documentation of tools such as GUI and XML applications that are used to develop, configure, and fine-tune advanced dialogue systems.
QUALIFICATIONS:
- You have a university degree in computer science, engineering, mathematics, physics, computational linguistics, or a related field.
- You have very strong software and programming skills, especially in C/C++, ideally also for embedded applications.
- You have experience with Python or other scripting languages.
- GUI programming experience is a strong asset.
The following skills are a plus:
- Understanding of communication protocols
- Understanding of databases
- Understanding of computational agents and related frameworks (such as OAA).
- A background in (computational) linguistics, dialogue systems, speech processing, grammars, and parsing techniques, statistics and machine learning, especially as related to natural language processing, dialogue, and representation of information
- You can work both as a team player and as a goal-oriented, independent software engineer.
- You can work in a multi-national team and communicate effectively with people of different cultures.
- You have a strong desire to make things really work in practice, on hardware platforms with limited memory and processing power.
- You are fluent in English and you can write high quality documentation.
- Knowledge of other languages is a plus.
CONTACT:
Please send your applications, including cover letter, CV, and related documents (maximum 5MB total for all documents, please) to
Deanna Roe Deanna.roe@nuance.com
Please make sure your application documents your excellent software engineering skills.
ABOUT US:
Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world. Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched. With more than 3000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.
-
Nuance: Speech scientist London UK
Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world. Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched. With more than 2000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.
To strengthen our International Professional Services team, based in London, we are currently looking for a
Speech Scientist, London, UK
Nuance Professional Services (PS) has designed, developed, and optimized thousands of speech systems across dozens of industries, including directory search, call center automation, applications in telecom, finance, airline, healthcare, and other verticals; applications for video games, mobile dictation, enhanced search services, SMS, and in-car navigation. Nuance PS applications have automated approximately 7 billion phone conversations for some of the world's most respected companies, including British Airways, Vodafone, Amtrak, Bank of America, BellCanada, Citigroup, General Electric, NTT and Verizon.
The PS organization consists of energetic, motivated, and friendly individuals. The Speech Scientists in PS are among the best and brightest, with PhDs from universities such as Cambridge (UK), MIT, McGill, Harvard, Penn, CMU, and Georgia Tech, and experience at research labs such as Bell Labs, Motorola Labs, and ATR (Japan), representing over 300 years of combined Speech Science experience and covering well over 20 languages.
Come and join Nuance PS and work on the latest technology from one of the prominent speech recognition technology providers, and make a difference in the way the world communicates.
Job Overview
As a Speech Scientist in the Professional Services group, you will work on automated speech recognition applications, covering a broad range of activities in all project phases, including the design, development, and optimization of the system. You will:
- Work across application development teams to ensure best possible recognition performance in deployed systems
- Identify recognition challenges and assess accuracy feasibility during the design phase,
- Design, develop, and test VoiceXML grammars and create JSP, Java, and ECMAScript grammars for dynamic contexts
- Optimize accuracy of applications by analyzing performance and tuning statistical language models, pronunciations, and acoustic models, including identifying areas for improvement by running the recognizer offline
- Contribute to the generation and presentation of client-facing reports
- Act as technical lead on more intensive client projects
- Develop methodologies, scripts, procedures that improve efficiency and quality
- Develop tools and enhance algorithms that facilitate deployment and tuning of recognition components
- Act as subject matter domain expert for specific knowledge domains
- Provide input into the design of future product releases
Required Skills
- MS or PhD in Computer Science, Engineering, Computational Linguistics, Physics, Mathematics, or related field (or equivalent)
- Strong analytical and problem solving skills and ability to troubleshoot issues
- Good judgment and quick-thinking
- Strong programming skills, preferably Perl or Python
- Excellent written and verbal communications skills
- Ability to scope work taking technical, business and time-frame constraints into consideration
- Works well in a team and in a fast-paced environment
Beneficial Skills
- Strong programming skills in either Perl, Python, Java, C/C++, or Matlab
- Speech recognition knowledge
- Strong pattern recognition, linguistics, signal processing, or acoustics knowledge
- Statistical data analysis
- Experience with XML, VoiceXML, and Wiki
- Ability to mentor or supervise others
- Additional language skills, e.g. French, Dutch, German, Spanish
-
Nuance: Research engineer speech engine
In order to strengthen our Embedded ASR Research team, we are looking for a:
RESEARCH ENGINEER SPEECH ENGINE
As part of our team, you will be creating solutions for voice user interfaces for embedded applications on mobile and automotive platforms.
OVERVIEW:
- You will work in Nuance's Embedded ASR R&D team, developing, improving and maintaining core ASR engine algorithms for our customers in the Automotive and Mobile sector.
- You will work either at Nuance's Office in Aachen, a beautiful, old city right in the heart of Europe with great history and culture, or at Nuance's International Headquarters in Merelbeke, a small town just 5km away from the heart of the vibrant and picturesque city of Ghent, in the Flanders region of Belgium. Both Aachen and Ghent offer some of the most spectacular historic town centers in Europe, and are home to large international universities.
- You will work in an international company and cooperate with people at various locations in Europe, America and Asia. You may occasionally be asked to travel.
RESPONSIBILITIES:
- You will develop, improve and maintain core ASR engine algorithms for cutting-edge speech and natural language understanding technologies for automotive and mobile devices.
- You will work on the design and development of more efficient, flexible ASR search algorithms, with a strong focus on low memory and processor requirements.
QUALIFICATIONS:
- You have a university degree in computer science, engineering, mathematics, physics, computational linguistics, or a related field. PhD is a plus.
- A background in (computational) linguistics, speech processing, ASR search, confidence values, grammars, statistics and machine learning, especially as related to natural language processing.
- You have very strong software and programming skills, especially in C/C++, ideally also for embedded applications.
The following skills are a plus:
- You have experience with Python or other scripting languages.
- Broad knowledge about architectures of embedded platforms and processors.
- Understanding of databases
- You can work both as a team player and as a goal-oriented, independent software engineer.
- You can work in a multi-national team and communicate effectively with people of different cultures.
- You have a strong desire to make things really work in practice, on hardware platforms with limited memory and processing power.
- You are fluent in English and you can write high quality documentation.
- Knowledge of other languages is a plus.
CONTACT:
Please send your applications, including cover letter, CV, and related documents (maximum 5MB total for all documents, please) to
Deanna Roe Deanna.roe@nuance.com
Please make sure your application documents your excellent software engineering skills.
ABOUT US:
Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world. Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched. With more than 3000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.
-
Nuance: Research engineer speech dialogue systems
In order to strengthen our Embedded ASR Research team, we are looking for a:
RESEARCH ENGINEER SPEECH DIALOGUE SYSTEMS
As part of our team, you will be creating speech technologies for embedded applications varying from simple command and control tasks up to natural language speech dialogues on mobile and automotive platforms.
OVERVIEW:
- You will work in Nuance's Embedded ASR research and production team, creating technology, tools and runtime software to enable our customers to develop embedded speech applications. In our team of speech and language experts, you will work on natural language dialogue systems that define the state of the art.
- You will work at Nuance's International Headquarters in Merelbeke, a small town just 5km away from the heart of the picturesque city of Ghent, in the Flanders region of Belgium. Ghent has one of the most spectacular historic town centers of Europe and is known for its unique vibrant yet cozy charm, and is home to a large international university.
- You will work in an international company and cooperate with people at various locations in Europe, America, and Asia. You may occasionally be asked to travel.
RESPONSIBILITIES:
- You will work on the development of cutting edge natural language dialogue and speech recognition technologies for automotive embedded systems and mobile devices.
- You will design, implement, evaluate, optimize, and test new algorithms and tools for our speech recognition systems, both for research prototypes and deployed products, including all aspects of dialogue systems design, such as architecture, natural language understanding, dialogue modeling, statistical framework, and so forth.
- You will help the engine process multi-lingual natural and spontaneous speech in various noise conditions, given the challenging memory and processing power constraints of the embedded world.
QUALIFICATIONS:
- You have a university degree in computer science, (computational) linguistics, engineering, mathematics, physics, or a related field. A graduate degree is an asset.
- You have strong software and programming skills, especially in C/C++, ideally for embedded applications. Knowledge of Python or other scripting languages is a plus.
- You have experience in one or more of the following fields:
dialogue systems
applied (computational) linguistics
natural language understanding
language generation
search engines
speech recognition
grammars and parsing techniques
statistics and machine learning techniques
XML processing
- You are a team player, willing to take initiative and assume responsibility for your tasks, and are goal-oriented.
- You can work in a multi-national team and communicate effectively with people of different cultures.
- You have a strong desire to make things really work in practice, on hardware platforms with limited memory and processing power.
- You are fluent in English and you can write high quality documentation.
- Knowledge of other languages is a strong asset.
CONTACT:
Please send your applications, including cover letter, CV, and related documents (maximum 5MB total for all documents, please) to
Deanna Roe Deanna.roe@nuance.com
ABOUT US:
Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world. Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched. With more than 3000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.
-
Research Position in Speech Processing at Nagoya Institute of Technology, Japan
Nagoya Institute of Technology is seeking a researcher for a
post-doctoral position in a new European Commission-funded project
EMIME ("Efficient multilingual interaction in mobile environment")
involving Nagoya Institute of Technology and five other European
partners, starting in March 2008 (see the project summary below).
The earliest starting date for the position is March 2008. The initial
duration of the contract will be one year, with a possibility for
prolongation (year-by-year basis, maximum of three years). The
position provides opportunities to collaborate with other researchers
in a variety of national and international projects. The competitive
salary is calculated according to qualifications based on NIT scales.
The candidate should have a strong background in speech signal
processing and some experience with speech synthesis and recognition.
Desired skills include familiarity with the latest spectrum of speech
technology, including HTK, HTS, and Festival at the source code level.
For more information, please contact Keiichi Tokuda
(http://www.sp.nitech.ac.jp/~tokuda/).
About us
Nagoya Institute of Technology (NIT), founded in 1905, is situated in
the world-class manufacturing area of Central Japan (about one hour
and 40 minutes from Tokyo, and 36 minutes from Kyoto, by Shinkansen).
NIT is a top-level educational institution of technology and one of
the leading such institutions in Japan. EMIME will be
carried out at the Speech Processing Laboratory (SPL) in the Department of
Computer Science and Engineering of NIT. SPL is known for its
outstanding, continuous contribution to the development of high-performance,
high-quality open-source software: the HMM-based Speech Synthesis
System "HTS" (http://hts.sp.nitech.ac.jp/), the large vocabulary
continuous speech recognition engine "Julius"
(http://julius.sourceforge.jp/), and the Speech Signal Processing
Toolkit "SPTK" (http://sp-tk.sourceforge.net/). The laboratory is
involved in numerous national and international collaborative
projects. SPL also has close partnerships with many industrial
companies, in order to transfer its research into commercial
applications, including Toyota, Nissan, Panasonic, Brother Inc.,
Funai, Asahi-Kasei, and ATR.
Project summary of EMIME
The EMIME project will help to overcome the language barrier by
developing a mobile device that performs personalized speech-to-speech
translation, such that a user's spoken input in one language is used
to produce spoken output in another language, while continuing to
sound like the user's voice. Personalization of systems for
cross-lingual spoken communication is an important, but little
explored, topic. It is essential for providing more natural
interaction and making the computing device a less obtrusive element
when assisting human-human interactions.
We will build on recent developments in speech synthesis using hidden
Markov models, which is the same technology used for automatic speech
recognition. Using a common statistical modeling framework for
automatic speech recognition and speech synthesis will enable the use
of common techniques for adaptation and multilinguality.
Significant progress will be made towards a unified approach for
speech recognition and speech synthesis: this is a very powerful
concept, and will open up many new areas of research. In this
project, we will explore the use of speaker adaptation across
languages so that, by performing automatic speech recognition, we can
learn the characteristics of an individual speaker, and then use those
characteristics when producing output speech in another language.
Our objectives are to:
1. Personalize speech processing systems by learning individual
characteristics of a user's speech and reproducing them in
synthesized speech.
2. Introduce a cross-lingual capability such that personal
characteristics can be reproduced in a second language not spoken
by the user.
3. Develop and better understand the mathematical and theoretical
relationship between speech recognition and synthesis.
4. Eliminate the need for human intervention in the process of
cross-lingual personalization.
5. Evaluate our research against state-of-the-art techniques and in a
practical mobile application.
Back to Top -
C/C++ Programmer Munich, Germany
Digital publishing AG is one of Europe's leading producers of interactive software for foreign language training. In our e-learning courses we want to place the emphasis on speaking and spoken language understanding. In order to strengthen our Research & Development Team in Munich, Germany, we are looking for experienced C or C++ programmers with at least 3 years' experience in the design and coding of sophisticated software systems under Windows.
We offer
- a creative working atmosphere in an international team of software engineers, linguists and editors working on challenging research projects in speech recognition and speech dialogue systems
- participation in all phases of a product life cycle, as we are interested in the fast transfer of research results into products.
- the possibility to participate in international scientific conferences.
- a permanent job in the center of Munich.
- excellent possibilities for development within our fast growing company.
- flexible working times, competitive compensation and arguably the best espresso in Munich.
We expect
- several years of practical experience in software development in C or C++ in a commercial or academic environment.
- experience with parallel algorithms and thread programming.
- experience with object-oriented design of software systems.
- good knowledge of English or German.
Desirable
- experience with optimization of algorithms.
- experience in statistical speech or language processing, preferably speech recognition, speech synthesis, speech dialogue systems or chatbots.
- experience with Delphi or Turbo Pascal.
Interested? We look forward to receiving your application (preferably by e-mail):
digital publishing AG
Freddy Ertl f.ertl@digitalpublishing.de
Tumblinger Straße 32
D-80337 München Germany
Back to Top -
Speech and Natural Language Processing Engineer at M*Modal, Pittsburgh, PA, USA
Speech and Natural Language Processing Engineer
M*Modal is a fast-moving speech technology company based in Pittsburgh, PA. Our portfolio of conversational speech recognition and natural language understanding technologies is widely recognized as the most advanced in the industry. We are a leading innovator in the field of conversational documentation services (CDS) - where speech recognition and natural language understanding are combined in a unique setup targeted to truly understand conversational speech and turn it directly into actionable and meaningful data. Our proprietary speech understanding technology - operating on M*Modal's computing grid hosted in our national data center - is already redefining the way clinical information is captured in healthcare.
We are seeking an experienced and dedicated speech and natural language processing engineer who wants to push the frontiers of conversational speech understanding. Join our renowned research and development team, and add to our unique blend of scientific and engineering excellence.
Responsibilities:
- You will be working with other members of the R&D team to continuously improve our speech and natural language understanding technologies.
- You will participate in designing and implementing algorithms, tools and methodologies in the area of automatic speech recognition and natural language processing/understanding.
- You will collaborate with other members of the R&D team to identify, analyze and resolve technical issues.
Requirements:
- Solid background in speech recognition, natural language processing, machine learning and information extraction.
- 2+ years of experience participating in software development projects
- Proficient with Java, C++ and scripting (e.g. Python, Perl, ...)
- Excellent analytical and problem-solving skills
- Integrate and communicate well in small R&D teams
- Master's degree in CS or a related engineering field
- Experience in a healthcare-related field a plus
In June 2007 M*Modal moved to a great new office space in the Squirrel Hill area of Pittsburgh. We are excited to be growing and are looking for individuals who have a passion for the work they do and are interested in becoming a member of a dynamic work group of smart passionate drivers who also know how to have fun.
M*Modal offers a top-notch benefits package that includes medical, dental and vision coverage, short-term disability, matching 401K savings plan, holidays, paid-time-off and tuition refund. If you would like to be considered for this opportunity, please send your resume and cover letter to Mary Ann Gamble at maryann.gamble@mmodal.com.
-
Senior Research Scientist -- Speech and Natural Language Processing at M*Modal, Pittsburgh, PA, USA
Senior Research Scientist -- Speech and Natural Language Processing
M*Modal is a fast-moving speech technology company based in Pittsburgh, PA. Our portfolio of conversational speech recognition and natural language understanding technologies is widely recognized as the most advanced in the industry. We are a leading innovator in the field of conversational documentation services (CDS) - where speech recognition and natural language understanding are combined in a unique setup targeted to truly understand conversational speech and turn it directly into actionable and meaningful data. Our proprietary speech understanding technology - operating on M*Modal's computing grid hosted in our national data center - is already redefining the way clinical information is captured in healthcare.
We are seeking an experienced and dedicated senior research scientist who wants to push the frontiers of conversational speech understanding. Join our renowned research and development team, and add to our unique blend of scientific and engineering excellence.
Responsibilities:
- Plan and perform research and development tasks to continuously improve a state-of-the-art speech understanding system
- Take a leading role in identifying solutions to challenging technical problems
- Contribute original ideas and turn them into product-grade software implementations
- Collaborate with other members of the R&D team to identify, analyze and resolve technical issues
Requirements:
- Solid research & development background with 3+ years of experience in speech recognition research, covering at least two of the following topics: speech processing, acoustic modeling, language modeling, decoding, LVCSR, natural language processing/understanding, speaker verification/identification, audio mining
- Working knowledge of Machine Learning, Information Extraction and Natural Language Processing algorithms
- 3+ years of experience participating in large-scale software development projects using C++ and Java.
- Excellent analytical, problem-solving and communication skills
- PhD with a focus on speech recognition, or Master's degree with 3+ years of industry experience working on automatic speech recognition
- Experience and/or education in medical informatics a plus
- Working experience in a healthcare related field a plus
In June 2007 M*Modal moved to a great new office space in the Squirrel Hill area of Pittsburgh. We are excited to be growing and are looking for individuals who have a passion for the work they do and are interested in becoming a member of a dynamic work group of smart passionate drivers who also know how to have fun.
M*Modal offers a top-notch benefits package that includes medical, dental and vision coverage, short-term disability, matching 401K savings plan, holidays, paid-time-off and tuition refund. If you would like to be considered for this opportunity, please send your resume and cover letter to Mary Ann Gamble at maryann.gamble@mmodal.com.
-
Postdoc position at LORIA, Nancy, France
Building an articulatory model from ultrasound, EMA and MRI data
Postdoctoral position
Research project
An articulatory model comprises both the visible and the internal mobile articulators involved in speech articulation (the lower jaw, tongue, lips and velum) as well as the fixed walls (the palate, the rear wall of the pharynx). An articulatory model is dynamic since the articulators deform during speech production. Such a model is of potential interest in the field of language learning, by providing visual feedback on the articulation produced by the learner, and in many other applications.
Building an articulatory model is difficult because the different articulators have to be detected from specific image modalities: the lips are acquired through video; the tongue shape is acquired through ultrasound imaging at a high frame rate, but these 2D images are very noisy; finally, 3D images of all articulators can be obtained with MRI, but only for sustained sounds (such as vowels) because of the long acquisition time of MRI images.
The subject of this post-doc is to construct a dynamic 3D model of the entire vocal tract by merging the 3D information available in the MRI acquisitions and temporal 2D information provided by the contours of the tongue visible on the ultrasound images or X-ray images.
We are working on the construction of an articulatory model within the European project ASPI (http://aspi.loria.fr/).
We have already built an acquisition system which allows us to obtain synchronized data from the ultrasound, MRI, video and EMA modalities.
Only a few complete articulatory models are currently available in the world and a real challenge in the field is to design set-ups and easy-to-use methods for automatically building the model of any speaker from 3D and 2D images. Indeed, the existence of more articulatory models would open new directions of research about speaker variability and speech production.
Objectives
The aim of the subject is to build a deformable model of the vocal tract from static 3D MRI images and dynamic 2D sequences. Previous work has been conducted on the modelling of the vocal tract, and especially of the tongue (M. Stone [1], O. Engwall [2]). Unfortunately, substantial human interaction is required to extract tongue contours in the images. In addition, often only one image modality is considered in these works, reducing the reliability of the model obtained.
The aim of this work is to provide automatic methods for segmenting features in the images as well as methods for building a parametric model of the 3D vocal tract with these specific aims:
- The segmentation process is to be guided by prior knowledge of the vocal tract. In particular, shape, topological and regularity constraints must be considered.
- A parametric model of the vocal tract has to be defined (classical models are linear and built from a principal component analysis; a minimal sketch follows this list). Special emphasis must be put on the problem of matching the various features between the images.
- Besides classical geometric constraints, both the building and the assessment of the model will be guided by acoustic distances, in order to check the match between the sound synthesized from the model and the sound realized by the human speaker.
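As a toy illustration of the second aim above, the short sketch below builds a linear articulatory model by principal component analysis, the classical approach the call mentions. Everything in it (data, dimensions, number of modes) is invented for illustration; in a real system the shape vectors would come from contours segmented out of the MRI and ultrasound data.

    import numpy as np

    # Invented stand-in data: 50 vocal-tract shapes, each flattened to a
    # vector of 30 (x, y) contour points. Real shapes would be extracted
    # from segmented MRI/ultrasound images.
    rng = np.random.default_rng(0)
    shapes = rng.normal(size=(50, 2 * 30))

    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape

    # Principal components via SVD: rows of vt are orthonormal deformation modes.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    n_modes = 4                                      # keep a few articulatory modes
    modes = vt[:n_modes]
    explained = (s[:n_modes] ** 2) / (s ** 2).sum()  # variance share per mode

    def synthesize(params):
        """Reconstruct a shape vector from a small articulatory parameter vector."""
        return mean_shape + params @ modes

    # Project an observed shape onto the model, then resynthesize it.
    params = (shapes[0] - mean_shape) @ modes.T
    approx = synthesize(params)

The explained-variance shares give a simple criterion for choosing the number of modes; matching features across modalities, which the text stresses, is precisely what this sketch leaves out.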
Skill and profile
The recruited person must have a solid background in computer vision and in applied mathematics. Information and demonstrations on the research topics addressed by the Magrit team are available at http://magrit.loria.fr/
References
[1] M. Stone: Modeling tongue surface contours from Cine-MRI images. Journal of Speech, Language, and Hearing Research, 2001.
[2] P. Badin, G. Bailly, L. Reveret: Three-dimensional linear articulatory modeling of tongue, lips and face, based on MRI and video images. Journal of Phonetics, 2002, vol. 30, p. 533-553.
Contact
Interested candidates are invited to contact Marie-Odile Berger, berger@loria.fr, +33 3 54 95 85 01
Important information
This position is advertised in the framework of the national INRIA campaign for recruiting post-docs. It is a one year position, renewable, beginning fall 2008. The salary is 2,320€ gross per month.
Selection of candidates will be a two-step process. A first selection will be carried out internally by the Magrit group. The selected candidate's application will then be further processed for approval and funding by an INRIA committee.
Candidates must have defended their doctoral thesis less than one year ago (after May 2007) or be due to defend it before the end of 2008. If the defence has not yet taken place, candidates must specify the tentative date and jury for the defence.
Important - Useful links
Presentation of INRIA postdoctoral positions
To apply (be patient, loading this link takes time...)
Journals
-
Papers accepted for FUTURE PUBLICATION in Speech Communication
Full text available on http://www.sciencedirect.com/ for Speech Communication subscribers and subscribing institutions. Titles and abstracts of all volumes, including forthcoming papers, are freely accessible to all by clicking on Articles in Press and then Selected Papers.
-
Special Issue on Non-Linear and Non-Conventional Speech Processing-Speech Communication
Speech Communication
Call for Papers: Special Issue on Non-Linear and Non-Conventional Speech Processing
Editors: Mohamed CHETOUANI, UPMC
Marcos FAUNDEZ-ZANUY, EUPMt (UPC)
Bruno GAS, UPMC
Jean Luc ZARADER, UPMC
Amir HUSSAIN, Stirling
Kuldip PALIWAL, Griffith University
The field of speech processing has developed very quickly over the past twenty years, thanks both to technological progress and to the convergence of research into a few mainstream approaches. However, some specificities of the speech signal are still not well addressed by the current models. New models and processing techniques need to be investigated in order to foster and/or accompany future progress, even if they do not immediately match the level of performance and understanding of the current state-of-the-art approaches.
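As a toy example of the alternative techniques the call has in mind (compare topics II and V in the list below), here is a minimal non-linear predictor that forecasts the next sample of a signal from the k past contexts most similar to the current one. All names and parameters are illustrative, not a method prescribed by the special issue.

    import numpy as np

    def knn_predict(signal, order=4, k=3):
        """Non-parametric, non-linear one-step prediction: average the samples
        that followed the k past contexts closest to the current context."""
        x = np.asarray(signal, dtype=float)
        # Delay-embed the series: each row is a context, its target the next sample.
        contexts = np.array([x[i:i + order] for i in range(len(x) - order)])
        targets = x[order:]
        query = x[-order:]
        dists = np.linalg.norm(contexts - query, axis=1)
        nearest = np.argsort(dists)[:k]
        return targets[nearest].mean()

    # Demo on an invented, mildly non-linear oscillation.
    t = np.arange(400)
    sig = np.sin(0.2 * t) ** 3 + 0.01 * np.random.default_rng(1).normal(size=t.size)
    print(knn_predict(sig))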
An ISCA-ITRW Workshop on "Non-Linear Speech Processing" was held in May 2007, the purpose of which was to present and discuss novel ideas, work and results related to alternative techniques for speech processing that depart from the mainstream approaches: http://www.congres.upmc.fr/nolisp2007
We are now soliciting journal papers, not only from workshop participants but also from other researchers, for a special issue of Speech Communication on "Non-Linear and Non-Conventional Speech Processing".
Submissions are invited on the following broad topic areas:
I. Non-Linear Approximation and Estimation
II. Non-Linear Oscillators and Predictors
III. Higher-Order Statistics
IV. Independent Component Analysis
V. Nearest Neighbours
VI. Neural Networks
VII. Decision Trees
VIII. Non-Parametric Models
IX. Dynamics of Non-Linear Systems
X. Fractal Methods
XI. Chaos Modelling
XII. Non-Linear Differential Equations
All fields of speech processing are targeted by the special issue, namely:
1. Speech Production
2. Speech Analysis and Modelling
3. Speech Coding
4. Speech Synthesis
5. Speech Recognition
6. Speaker Identification / Verification
7. Speech Enhancement / Separation
8. Speech Perception
Back to Top -
Journal on Multimodal User Interfaces
Journal on Multimodal User Interfaces
The development of Multimodal User Interfaces relies on systemic research involving signal processing, pattern analysis, machine intelligence and human-computer interaction. This journal responds to the need for a common forum bringing these research communities together. Topics of interest include, but are not restricted to:
- Fusion & Fission,
- Plasticity of Multimodal interfaces,
- Medical applications,
- Edutainment applications,
- New modalities and modalities conversion,
- Usability,
- Multimodality for biometry and security,
- Multimodal conversational systems.
The journal is open to three types of contributions:
- Articles: containing original contributions accessible to the whole research community of Multimodal Interfaces. Contributions containing verifiable results and/or open-source demonstrators are strongly encouraged.
- Tutorials: disseminating established results across disciplines related to multimodal user interfaces.
- Letters: presenting practical achievements / prototypes and new technology components.
JMUI is published by Springer-Verlag from 2008 onwards.
The submission procedure and the publication schedule are described at:
www.jmui.org
The page of the journal at springer is:
http://www.springer.com/east/home?SGWID=5-102-70-173760003-0&changeHeader=true
More information:
Imre Váradi (varadi@tele.ucl.ac.be)
Back to Top -
CfP CALL FOR PAPERS -- CURRENT RESEARCH IN PHONOLOGY AND PHONETICS: INTERFACES WITH NATURAL LANGUAGE PROCESSING
CALL FOR PAPERS -- CURRENT RESEARCH IN PHONOLOGY AND PHONETICS: INTERFACES WITH NATURAL LANGUAGE PROCESSING
A SPECIAL ISSUE OF THE JOURNAL TAL (Traitement Automatique des Langues)
Guest Editors: Bernard Laks and Noël Nguyen
EXTENDED DEADLINE: 11 February 2008
There are long-established connections between research on the sound shape of language and natural language processing (NLP), for which one of the main driving forces has been the design of automatic speech synthesis and recognition systems. Over the last few years, these connections have been made yet stronger, under the influence of several factors. A first line of convergence relates to the shared collection and exploitation of the considerable resources that are now available to us in the domain of spoken language. These resources have come to play a major role both for phonologists and phoneticians, who endeavor to subject their theoretical hypotheses to empirical tests using large speech corpora, and for NLP specialists, whose interest in spoken language is increasing. While these resources were first based on audio recordings of read speech, they have been progressively extended to bi- or multimodal data and to spontaneous speech in conversational interaction. Such changes are raising theoretical and methodological issues that both phonologists/phoneticians and NLP specialists have begun to address.
Research on spoken language has thus led to the generalized utilization of a large set of tools and methods for automatic data processing and analysis: grapheme-to-phoneme converters, text-to-speech aligners, automatic segmentation of the speech signal into units of various sizes (from acoustic events to conversational turns), morpho-syntactic tagging, etc. Large-scale corpus studies in phonology and phonetics make ever increasing use of tools that were originally developed by NLP researchers, ranging from electronic dictionaries to full-fledged automatic speech recognition systems. NLP researchers and phonologists/phoneticians have also jointly contributed to developing multi-level speech annotation systems, from articulatory/acoustic events to the pragmatic level via prosody and syntax.
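Several of the tools listed above reduce, at their core, to sequence alignment. As a hypothetical minimal building block (not the API of any particular toolkit), the sketch below aligns a canonical phoneme sequence with an automatically obtained one by dynamic programming; the phone strings are invented examples.

    import numpy as np

    def align(ref, hyp, sub=1, ins=1, dele=1):
        """Levenshtein alignment of two phone sequences; returns aligned pairs,
        with None marking an insertion or deletion."""
        n, m = len(ref), len(hyp)
        d = np.zeros((n + 1, m + 1), dtype=int)
        d[:, 0] = np.arange(n + 1) * dele
        d[0, :] = np.arange(m + 1) * ins
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else sub
                d[i, j] = min(d[i - 1, j - 1] + cost,  # match / substitution
                              d[i - 1, j] + dele,      # deletion
                              d[i, j - 1] + ins)       # insertion
        pairs, i, j = [], n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and d[i, j] == d[i - 1, j - 1] + (0 if ref[i - 1] == hyp[j - 1] else sub):
                pairs.append((ref[i - 1], hyp[j - 1])); i -= 1; j -= 1
            elif i > 0 and d[i, j] == d[i - 1, j] + dele:
                pairs.append((ref[i - 1], None)); i -= 1
            else:
                pairs.append((None, hyp[j - 1])); j -= 1
        return pairs[::-1]

    # Canonical vs. reduced realization (invented): two phones deleted.
    print(align(["k", "a", "t", "R", "@"], ["k", "a", "t"]))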
In this scientific context, which very much fosters the establishment of cross-disciplinary bridges around spoken language, the knowledge and resources accumulated by phonologists and phoneticians are now being put to use by NLP researchers, whether this is to build up lexical databases from speech corpora, to develop automatic speech recognition systems able to deal with regional variations in the sound pattern of a language, or to design talking-face synthesis systems in man-machine communication.
LIST OF TOPICS
The goal of this special issue will be to offer an overview of the interfaces that are being developed between phonology, phonetics, and NLP. Contributions are therefore invited on the following topics:
. Joint contributions of speech databases to NLP and phonology/phonetics
. Automatic procedures for the large-scale processing of multi-modal databases
. Multi-level annotation systems
. Research in phonology/phonetics and speech and language technologies: synthesis, automatic recognition
. Text-to-speech systems
. NLP and modelling in phonology/phonetics
Papers may be submitted in English (for non-native speakers of French only) or in French, and will relate to studies conducted on French, English, or other languages. They must conform to the TAL guidelines for authors available at http://www.atala.org/rubrique.php3?id_rubrique=1.
DEADLINES
. 11 February 2008: Reception of contributions
. 11 April 2008: Notification of pre-selection / rejection
. 11 May 2008: Reception of pre-selected articles
. 16 June 2008: Notification of final acceptance
. 30 June 2008: Reception of accepted articles' final versions
This special issue of Traitement Automatique des Langues will appear in autumn 2008.
THE JOURNAL
TAL (Traitement Automatique des Langues / Natural Language Processing, http://www.atala.org/rubrique.php3?id_rubrique=1) is a forty-year-old international journal published by ATALA (French Association for Natural Language Processing) with the support of CNRS (French National Center for Scientific Research). It has moved to an electronic mode of publication, with printing on demand. This in no way affects its reviewing and selection process.
SCIENTIFIC COMMITTEE
. Martine Adda-Decker, LIMSI, Orsay
. Roxane Bertrand, LPL, CNRS & Université de Provence
. Philippe Blache, LPL, CNRS & Université de Provence
. Cédric Gendrot, LPP, CNRS & Université de Paris III
. John Goldsmith, University of Chicago
. Guillaume Gravier, Irisa, CNRS/INRIA & Université de Rennes I
. Jonathan Harrington, IPS, University of Munich
. Bernard Laks, MoDyCo, CNRS & Université de Paris X
. Lori Lamel, LIMSI, Orsay
. Noël Nguyen, LPL, CNRS & Université de Provence
. François Pellegrino, DDL, CNRS & Université de Lyon II
. François Poiré, University of Western Ontario
. Yvan Rose, Memorial University of Newfoundland
. Tobias Scheer, BCL, CNRS & Université de Nice
. Atanas Tchobanov, MoDyCo, CNRS & Université de Paris X
. Jacqueline Vaissière, LPP, CNRS & Université de Paris III
. Nathalie Vallée, DPC-GIPSA, CNRS & Université de Grenoble III
Future Conferences
-
Publication policy: Below you will find very short announcements of future events. The full calls for participation can be accessed on the conference websites.
See also our Web pages (http://www.isca-speech.org/) on conferences and workshops.
Future Interspeech conferences
-
INTERSPEECH 2008
September 22-26, 2008, Brisbane, Queensland, Australia
Conference Website
Chairman: Denis Burnham, MARCS, University of Western Sydney. -
INTERSPEECH 2009
Brighton, UK,
Conference Website
Chairman: Prof. Roger Moore, University of Sheffield. -
INTERSPEECH 2010
Chiba, Japan
Conference Website
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."
Future ISCA Technical and Research Workshops
-
ISCA ITRW speech analysis and processing for knowledge discovery
June 4 - 6, 2008
Aalborg, Denmark
Workshop website
Humans are very efficient at capturing information and messages in speech, and they often perform this task effortlessly even when the signal is degraded by noise, reverberation and channel effects. In contrast, when a speech signal is processed by conventional spectral analysis methods, significant cues and useful information in speech are usually not taken proper advantage of, resulting in sub-optimal performance in many speech systems. There exists, however, a vast literature on speech production and perception mechanisms and their impacts on acoustic phonetics that could be more effectively utilized in modern speech systems. A re-examination of these knowledge sources is needed. On the other hand, recent advances in speech modelling and processing and the availability of a huge collection of multilingual speech data have provided an unprecedented opportunity for acoustic phoneticians to revise and strengthen their knowledge and develop new theories. Such a collaborative effort between science and technology is beneficial to the speech community and it is likely to lead to a paradigm shift for designing next-generation speech algorithms and systems. This, however, calls for a focussed attention to be devoted to analysis and processing techniques aiming at a more effective extraction of information and knowledge in speech.
Objectives:
The objective of this workshop is to discuss innovative approaches to the analysis of speech signals, so as to bring out the subtle and unique characteristics of speech and speaker. This will also help in discovering speech cues useful for significantly improving the performance of speech systems. Several attempts have been made in the past to explore speech analysis methods that can bridge the gap between human and machine processing of speech. In particular, the time-varying aspects of interactions between the excitation and vocal tract systems during production seem to elude exploitation. Some of the explored methods include all-pole and pole-zero modelling methods based on temporal weighting of the prediction errors, interpreting the zeros of speech spectra, analysis of phase in the time and transform domains, non-linear (neural network) models for information extraction and integration, etc. Such studies may also bring out some finer details of speech signals, which may have implications in determining the acoustic-phonetic cues needed for developing robust speech systems.
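For readers less familiar with the all-pole baseline this paragraph refers to, here is a minimal sketch of linear predictive (all-pole) analysis using the autocorrelation method and the Levinson-Durbin recursion. The synthetic frame, window and model order are illustrative choices only.

    import numpy as np

    def lpc(frame, order=10):
        """All-pole (LPC) coefficients via the Levinson-Durbin recursion
        on the windowed frame's autocorrelation; a[0] == 1."""
        x = np.asarray(frame, dtype=float) * np.hamming(len(frame))
        r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + a[1:i] @ r[1:i][::-1]  # sum_j a[j] * r[i - j]
            k = -acc / err                      # reflection coefficient
            a[1:i] = a[1:i] + k * a[1:i][::-1]
            a[i] = k
            err *= 1.0 - k * k                  # remaining prediction error
        return a, err

    # Demo: fit a 10th-order model to a synthetic two-formant-like frame.
    t = np.arange(400)
    frame = np.sin(2 * np.pi * 0.031 * t) + 0.5 * np.sin(2 * np.pi * 0.093 * t)
    coeffs, residual = lpc(frame)
    print(np.round(coeffs, 3), residual)

Temporal weighting of the prediction errors, phase analysis and the other directions listed in the call all start from, and try to go beyond, exactly this model.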
The Workshop:
- will present a full-morning common tutorial to give an overview of the present stage of research linked to the subject of the workshop
- will be organised as a single series of oral and poster presentations
- each oral presentation is given 30 minutes to allow for ample time for discussion
- is an ideal forum for speech scientists to discuss the perspectives that will further future research collaborations.
Potential Topic areas:
- Parametric and nonparametric models
- New all-pole and pole-zero spectral modelling
- Temporal modelling
- Non-spectral processing (group delay etc.)
- Integration of spectral and temporal processing
- Biologically-inspired speech analysis and processing
- Interactions between excitation and vocal tract systems
- Characterization and representation of acoustic phonetic attributes
- Attribute-based speaker and spoken language characterization
- Analysis and processing for detecting acoustic phonetic attributes
- Language independent aspects of acoustic phonetic attributes detection
- Detection of language-specific acoustic phonetic attributes
- Acoustic to linguistic and acoustic phonetic mapping
- Mapping from acoustic signal to articulator configurations
- Merging of synchronous and asynchronous information
- Other related topics
Call for papers. Notification of review:
The submission deadline is extended to February 14, 2008.
Registration
Fees for early and late registration for ISCA and non-ISCA members will be made available on the website during September 2007.
Venue:
The workshop will take place at Aalborg University, Department of Electronic Systems, Denmark. See the workshop website for further and latest information.
Accommodation:
There are a large number of hotels in Aalborg, most of them close to the city centre. A list of hotels, with their websites and telephone numbers, is given on the workshop website. There you will also find information about transportation between the city centre and the university campus.
How to reach Aalborg:
Aalborg Airport is half an hour away from the international Copenhagen Airport. There are many daily flight connections between Copenhagen and Aalborg. Flying with Scandinavian Airlines System (SAS) or one of the Star Alliance companies to Copenhagen enables you to include the Copenhagen-Aalborg leg in the same ticket, thereby reducing the total transportation cost. There is also an hourly train connection between the two cities; the train ride takes approximately five hours.
Organising Committee:
Paul Dalsgaard, B. Yegnanarayana, Chin-Hui Lee, Paavo Alku, Rolf Carlson, Torbjørn Svendsen
Important dates
Submission of full and final papers: January 31, 2008, on the website
http://www.es.aau.dk/ITRW/
Notification of review results: No later than March 30, 2008. -
ITRW on Evidence-based Voice and Speech Rehabilitation in Head & Neck Oncology
ISCA Workshop
Evidence-based Voice and Speech Rehabilitation in Head & Neck Oncology
Amsterdam, May 15-16, 2008
Evidence-based Voice and Speech Rehabilitation is of increasing relevance in Head & Neck Oncology. The number of patients requiring treatment for cancer in the upper respiratory and vocal tract keeps rising. Moreover, treatment - whether it concerns an "organ preservation protocol" or traditional surgery and radiotherapy - negatively impacts the function of organs vital for communication. A "function preservation treatment" does, unfortunately, not yet exist. This workshop seeks to assemble the latest and most relevant knowledge on evidence-based voice and speech rehabilitation. Aside from the main topic (voice and speech rehabilitation after total laryngectomy), other areas will be addressed, such as vocal issues in early-stage larynx carcinoma and various stages of oral/oropharyngeal carcinoma.
The workshop comprises four topical sessions (see below). Each session includes two keynote lectures plus a round-table discussion and (at most 10) poster presentations pertinent to the session's topic. A work document, based on the keynote lectures, will form the basis for each round-table discussion. This work document will contain all presently available research evidence, discuss its (clinical) relevance and formulate directions and areas of interest for future research. The keynote lectures, work documents and poster papers will be compiled into Workshop Proceedings, published under the ISCA flag (website: http://www.isca-speech.org/). It is our aim to make these Proceedings available at the workshop. This will result in a useful and traceable 'State of the Art' handbook/CD/web publication.
Prof. Dr. Frans JM Hilgers
Prof. Dr. Louis CW Pols
Dr. Maya van Rossum
Venue:
Tinbergen lecture hall, Royal Netherlands Academy of Arts and Sciences. Kloveniersburgwal 29, Amsterdam
More information can be obtained from the website www.fon.hum.uva.nl/webhnr/
or by sending an e-mail to f.hilgers@nki.nl or kno@nki.nl.
Organization:
Prof. Dr. Frans JM Hilgers
Prof. Dr. Louis CW Pols
Dr. Maya van Rossum
Institute of Phonetic Sciences - Amsterdam Center for Language and Communication, University of Amsterdam
Department of Head and Neck Oncology and Surgery
The Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital
Department of Otolaryngology, Academic Medical Center, University of Amsterdam
International Faculty
Prof. Philip C Doyle, PhD University of Western Ontario, London, Canada
Prof. Tanya L Eadie, PhD University of Washington, Seattle, USA
Prof. Dr. Dr. Ulrich Eysholdt University of Erlangen-Nuremberg, Germany
Prof. Britta Hammarberg, PhD Karolinska University, Stockholm, Sweden
Prof. Jeffrey P Searle, PhD University of Kansas, Kansas City, USA
Local Faculty
Dr. Annemieke H Ackerstaff 2
Dr. Corina J van As-Brooks 2
Dr. Michiel WM van den Brekel 2,3
Prof. Dr. Frans Hilgers 1,2, 3
Petra Jongmans, MA 1, 2
Lisette van der Molen, MA 2
Prof. Dr. Louis CW Pols 1
Dr. Maya van Rossum 2, 4
Dr. Irma M Verdonck-de Leeuw 5
1 Institute of Phonetic Sciences/Amsterdam Center of Language and Communication, University of Amsterdam
2 The Netherlands Cancer Institute, Amsterdam
3 Academic Medical Center, University of Amsterdam
4 University Medical Center Leiden
5 Free University Medical Center, Amsterdam
Course secretariat: Mrs. Marion van Zuilen
The Netherlands Cancer Institute
Plesmanlaan 121 1066CX Amsterdam, The Netherlands
Telephone +3120-512-2550; Fax +3120-512-2554
e-mail to f.hilgers@nki.nl or kno@nki.nl
-
ISCA TR Workshop on Experimental Linguistics
August 2008, Athens, Greece
Website
Prof. Antonis Botinis -
Audio Visual Speech Processing Workshop (AVSP)
International Conference on Auditory-Visual Speech Processing AVSP 2008
Dates: 26-29 September 2008
Location: Moreton Island, Queensland, Australia
Website: http://express.hid.ri.cmu.edu/AVSP2008/Main.html
AVSP 2008 will be held as an ISCA Tutorial and Research Workshop at
Tangalooma Wild Dolphin Resort on Moreton Island from the 26-29
September 2008. AVSP 2008 is a satellite conference to Interspeech 2008,
being held in Brisbane from the 22-26 September 2008. Tangalooma is
located at close distance from Brisbane, so that attendance at AVSP 2008
can easily be combined with participation in Interspeech 2008.
Auditory-visual speech production and perception by human and machine is
an interdisciplinary and cross-linguistic field which has attracted
speech scientists, cognitive psychologists, phoneticians, computational
engineers, and researchers in language learning studies. Since the
inaugural workshop in Bonas in 1995, Auditory-Visual Speech Processing
workshops have been organised on a regular basis (see an overview at the
avisa website). In line with previous meetings, this conference will
consist of a mixture of regular presentations (both posters and oral),
and lectures by invited speakers.
Topics include but are not limited to:
- Machine recognition
- Human and machine models of integration
- Multimodal processing of spoken events
- Cross-linguistic studies
- Developmental studies
- Gesture and expression animation
- Modelling of facial gestures
- Speech synthesis
- Prosody
- Neurophysiology and neuro-psychology of audition and vision
- Scene analysis
Paper submission:
Details of the paper submission procedure will be available on the
website in a few weeks' time.
Chairs:
Simon Lucey
Roland Goecke
Patrick Lucey -
Robust ASR Workshop
Santiago, Chile
October-November 2008
Dr. Nestor Yoma
Forthcoming events supported (but not organized) by ISCA
-
3rd International Conference on Large-scale Knowledge Resources (LKR 2008)
3-5 March, 2008, Tokyo Institute of Technology, Tokyo Japan
Website
Sponsored by: the 21st Century Center of Excellence (COE) Program "Framework for Systematization and Application of Large-scale Knowledge Resources", Tokyo Institute of Technology
In the 21st century, we are moving toward a knowledge-intensive society in which knowledge plays ever more important roles. Research interest is inevitably shifting from information to knowledge: how to build, organize, maintain and utilize knowledge has become a central issue in a wide variety of fields. The 21st Century COE program "Framework for Systematization and Application of Large-scale Knowledge Resources (COE-LKR)", conducted by the Tokyo Institute of Technology, is one of the attempts to address these important issues. Inspired by this project, LKR2008 aims at bringing together diverse contributions in cognitive science, computer science, education and linguistics to explore the design, construction, extension, maintenance, validation and application of knowledge.
Topics of interest to the conference include:
Infrastructure for Large-scale Knowledge
Grid computing
Network computing
Software tools and development environments
Database and archiving systems
Mobile and ubiquitous computing
Systematization for Large-scale Knowledge
Language resources
Multi-modal resources
Classification, Clustering
Formal systems
Knowledge representation and ontology
Semantic Web
Cognitive systems
Collaborative knowledge
Applications and Evaluation of Large-scale Knowledge
Archives for science and art
Educational media
Information access
Document analysis
Multi-modal human interface
Web applications
Organizing committee
General conference chair: Furui, Sadaoki (Tokyo Institute of Technology)
Program co-chairs: Ortega, Antonio (University of Southern California)
Tokunaga, Takenobu (Tokyo Institute of Technology)
Publication chair: Yonezaki, Naoki (Tokyo Institute of Technology)
Publicity chair: Yokota, Haruo (Tokyo Institute of Technology)
Local organizing chair: Shinoda, Koichi (Tokyo Institute of Technology)
Submission
Since we are aiming at an interdisciplinary conference covering a wide range of topics concerning large-scale knowledge resources, authors are requested to add a general introductory description at the beginning of the paper so that readers from other research areas can understand the importance of the work. Note that one of the reviewers of each paper is assigned from another topic area to check that this requirement is fulfilled.
There are two categories of paper presentation: oral and poster. The category of the paper should be stated at submission. Authors are invited to submit original unpublished research papers, in English, of up to 12 pages for oral presentation and 4 pages for poster presentation, strictly following the LNCS/LNAI format guidelines available at the Springer LNCS Web page. Details of the submission procedure will be announced later.
Reviewing
The reviewing of the papers will be blind and managed by an international Conference Program Committee consisting of Area Chairs and associated Program Committee Members. Final decisions on the technical program will be made by a meeting of the Program Co-Chairs and Area Chairs. Each submission will be reviewed by at least three program committee members, and one of the reviewers is assigned from a different topic area.
Publication
The conference proceedings will be published by Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI), which will be available at the conference.
Important dates
Paper submission deadline: 30 August, 2007
Notification of acceptance: 10 October, 2007
Camera ready papers due: 10 November, 2007
-
Call for Papers (Preliminary version) Speech Prosody 2008
Campinas, Brazil, May 6-9, 2008
Speech Prosody 2008 will be the fourth conference in a series of international events of the ISCA Special Interest Group on Speech Prosody, starting with the one held in Aix-en-Provence, France, in 2002. The conferences in Nara, Japan (2004), and in Dresden, Germany (2006) followed the proposal of biennial meetings, and now it is time to change place and hemisphere by taking up the challenge of offering a non-stereotypical view of Brazil. It is a great pleasure for our labs to host the fourth International Conference on Speech Prosody in Campinas, Brazil, the second major city of the State of São Paulo. It is worth highlighting that prosody covers a multidisciplinary area of research involving scientists from very different backgrounds and traditions, including linguistics and phonetics, conversation analysis, semantics and pragmatics, sociolinguistics, acoustics, speech synthesis and recognition, cognitive psychology, neuroscience, speech therapy, language teaching, and related fields. Information: sp2008_info@iel.unicamp.br. Web site: http://sp2008.org. We invite all participants to contribute papers presenting original research from all areas of speech prosody, especially, but not limited to, the following.
Scientific Topics
Prosody and the Brain
Long-Term Voice Quality
Intonation and Rhythm Analysis and Modelling
Syntax, Semantics, Pragmatics and Prosody
Cross-linguistic Studies of Prosody
Prosodic variability
Prosody in Discourse
Dialogues and Spontaneous Speech
Prosody of Expressive Speech
Perception of Prosody
Prosody in Speech Synthesis
Prosody in Speech Recognition and Understanding
Prosody in Language Learning and Acquisition
Pathology of Prosody and Aids for the Impaired
Prosody Annotation in Speech Corpora
Others (please specify)
Organising institutions
Speech Prosody Studies Group, IEL/Unicamp | Lab. de Fonética, FALE/UFMG | LIACC, LAEL, PUC-SP
Important Dates
Call for Papers: May 15, 2007
Full Paper Submission: Nov. 2nd, 2007
Notif. of Acceptance: Dec. 14th, 2007
Early Registration: Jan. 14th, 2008
Conference: May 6-9, 2008
-
CFP: The International Workshop on Spoken Languages Technologies for Under-resourced Languages (SLTU)
The International Workshop on Spoken Languages Technologies for Under-resourced languages (SLTU)
Hanoi University of Technology, Hanoi, Vietnam,
May 5 - May 7, 2008.
EXTENDED DEADLINE 30 January 2008
Workshop Web Site : http://www.mica.edu.vn/sltu
The SLTU meeting is a technical conference focused on spoken language processing for
under-resourced languages. This first workshop will focus on Asian languages, and
the idea is to mainly (but not exclusively) target languages of the area (Vietnamese,
Khmer, Lao, Chinese dialects, Thai, etc.). However, all contributions on other
under-resourced languages of the world are warmly welcomed. The workshop aims
at gathering researchers working on:
* ASR, synthesis and speech translation for under-resourced languages
* portability issues
* fast resource acquisition (speech, text, lexicons, parallel corpora)
* spoken language processing for languages with rich morphology
* spoken language processing for languages without separators
* spoken language processing for languages without a writing system
Important dates
* Paper submission date: EXTENDED to January 30, 2008
* Notification of Paper Acceptance: February 20, 2008
* Author Registration Deadline: March 1, 2008
Scientific Committee
* Pr Tanja Schultz, CMU, USA
* Dr Yuqing Gao, IBM, USA
* Dr Lori Lamel, LIMSI, France
* Dr Laurent Besacier, LIG, France
* Dr Pascal Nocera, LIA, France
* Pr Jean-Paul Haton, LORIA, France
* Pr Luong Chi Mai, IOIT, Vietnam
* Pr Dang Van Chuyet, HUT, Vietnam
* Pr Pham Thi Ngoc Yen, MICA, Vietnam
* Dr Eric Castelli, MICA, Vietnam
* Dr Vincent Berment, LIG Laboratory, France
* Dr Briony Williams, University of Wales, UK
Local Organizing Committee
* Pr Nguyen Trong Giang, HUT/MICA
* Pr Ha Duyen Tu, HUT
* Pr Pham Thi Ngoc Yen, HUT/MICA
* Pr Geneviève Caelen-Haumont, MICA
* Dr Trinh Van Loan, HUT
* Dr Mathias Rossignol, MICA
* M. Hoang Xuan Lan, HUT
Back to Top -
Joint Workshop on Hands-free Speech Communication and Microphone Arrays
CALL FOR PAPERS - HSCMA 2008
Joint Workshop on Hands-free Speech Communication and Microphone Arrays
Trento, Italy, 6-8 May 2008,
http://hscma2008.fbk.eu
************************************************************************
Technically sponsored by the IEEE Signal Processing Society
HSCMA 2008 is an event supported by ISCA - International Speech Communication Association
************************************************************************
TECHNICAL PROGRAM:
Following the workshop held at Rutgers University in 2005, HSCMA 2008 aims to continue the tradition of previous workshops on Hands-free Speech Communication (HSC) and Microphone Arrays (MA). The workshop is mainly devoted to presenting recent advances in speech and signal processing techniques based upon multi-microphone systems, and to distant-talking speech communication and human/machine interaction. The organizing committee invites the international community to present and discuss state-of-the-art developments in the field.
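As a deliberately simple example of the multi-microphone processing in the workshop's scope, the sketch below implements delay-and-sum beamforming with integer steering delays; the two-microphone geometry, delays and noise levels are invented for illustration, and fractional delays would require interpolation.

    import numpy as np

    def delay_and_sum(channels, delays_samples):
        """Align each microphone channel by its integer steering delay,
        then average the aligned channels."""
        n = min(len(ch) - d for ch, d in zip(channels, delays_samples))
        aligned = [ch[d : d + n] for ch, d in zip(channels, delays_samples)]
        return np.mean(aligned, axis=0)

    # Invented 2-mic scene: the same source arrives 3 samples later at mic 2.
    rng = np.random.default_rng(0)
    src = rng.normal(size=1000)
    mic1 = src + 0.3 * rng.normal(size=1000)
    mic2 = np.concatenate([np.zeros(3), src[:-3]]) + 0.3 * rng.normal(size=1000)
    out = delay_and_sum([mic1, mic2], delays_samples=[0, 3])  # steer at the source

Averaging the two aligned channels leaves the source intact while the independent noise partly cancels, which is the basic gain that every more sophisticated beamformer builds on.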
HSCMA 2008 will feature plenary talks by leading researchers in the field as well as poster and demo sessions.
PAPER SUBMISSION:
The technical scope of the workshop includes, but is not limited to:
* Multichannel acoustic signal processing for speech acquisition, interference mitigation and noise suppression
* Acoustic source localization and separation
* Dereverberation
* Acoustic echo cancellation
* Acoustic event detection and classification
* Microphone array technology and architectures, especially for distant-talking Automatic Speech Recognition (ASR) and acoustic scene analysis
* ASR technology for hands-free interfaces
* Robust features for ASR
* Feature-level enhancement and dereverberation
* Multichannel speech corpora for system training and benchmarking
* Microphone arrays for multimodal human/machine communication
Prospective authors are invited to submit papers in any technical areas relevant to the workshop and are encouraged to give demonstrations of their work.
The authors should submit a two-page extended abstract including text, figures, references, and paper classification categories.
PDF files of extended abstracts must be submitted through the conference website located at hscma2008.fbk.eu. Comprehensive guidelines for abstract preparation and submission can also be found at the conference website.
IMPORTANT DATES:
Submission of two-page abstract: January 25, 2008
Notification of acceptance: February 8, 2008
Final manuscript submission and author's registration: March 1, 2008 -
SIGDIAL 2008 9th SIGdial Workshop on Discourse and Dialogue
SIGDIAL 2008 9th SIGdial Workshop on Discourse and Dialogue
COLUMBUS, OHIO; June 19-20 2008 (with ACL/HLT 2008)
http://www.sigdial.org/workshops/workshop9
** Submission Deadline: Feb 15 2008 **
1st CALL FOR PAPERS
Continuing with a series of successful workshops in Antwerp, Sydney,
Lisbon, Boston, Sapporo, Philadelphia, Aalborg, and Hong Kong, this
workshop spans the ACL and ISCA SIGdial interest area of discourse and
dialogue. This series provides a regular forum for the presentation of
research in this area to both the larger SIGdial community as well as
researchers outside this community. The workshop is organized by
SIGdial, which is sponsored jointly by ACL and ISCA. SIGdial 2008 will
be a workshop of ACL/HLT 2008.
TOPICS OF INTEREST
We welcome formal, corpus-based, implementation or analytical work on
discourse and dialogue including but not restricted to the following
three themes:
1. Discourse Processing and Dialogue Systems
Discourse semantic and pragmatic issues in NLP applications such as
text summarization, question answering, information retrieval
including topics like:
- Discourse structure, temporal structure, information structure
- Discourse markers, cues and particles and their use
- (Co-)Reference and anaphora resolution, metonymy and bridging
resolution
- Subjectivity, opinions and semantic orientation
Spoken, multi-modal, and text/web based dialogue systems including
topics such as:
- Dialogue management models;
- Speech and gesture, text and graphics integration;
- Strategies for preventing, detecting or handling miscommunication
(repair and correction types, clarification and under-specificity,
grounding and feedback strategies);
- Utilizing prosodic information for understanding and for
disambiguation;
2. Corpora, Tools and Methodology
Corpus-based work on discourse and spoken, text-based and multi-modal
dialogue including its support, in particular:
- Annotation tools and coding schemes;
- Data resources for discourse and dialogue studies;
- Corpus-based techniques and analysis (including machine learning);
- Evaluation of systems and components, including methodology, metrics
and case studies;
3. Pragmatic and/or Semantic Modeling
The pragmatics and/or semantics of discourse and dialogue (i.e. beyond
a single sentence) including the following issues:
- The semantics/pragmatics of dialogue acts (including those which are
less studied in the semantics/pragmatics framework);
- Models of discourse/dialogue structure and their relation to
referential and relational structure;
- Prosody in discourse and dialogue;
- Models of presupposition and accommodation; operational models of
conversational implicature.
SUBMISSIONS
The program committee welcomes the submission of long papers for full
plenary presentation as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations, followed by posters and demonstrations.
- Long papers must be no longer than 8 pages, including title,
examples, references, etc. In addition to this, two additional pages
are allowed as an appendix which may include extended example
discourses or dialogues, algorithms, graphical representations, etc.
- Short papers and demo descriptions should aim to be 4 pages or less
(including title, examples, references, etc.).
Please use the official ACL style files:
http://www.ling.ohio-state.edu/~djh/acl08/stylefiles.html
Submission/Reviewing will be managed by the EasyChair system. Link to
follow.
Papers that have been or will be submitted to other meetings or
publications must provide this information (see submission
format). SIGdial 2008 cannot accept for publication or presentation
work that will be (or has been) published elsewhere. Any questions
regarding submissions can be sent to the co-Chairs.
Authors are encouraged to make illustrative materials available, on
the web or otherwise. For example, excerpts of recorded conversations,
recordings of human-computer dialogues, interfaces to working systems,
etc.
IMPORTANT DATES
Submission Feb 15 2008
Notification Mar 31 2008
Final submissions Apr 14 2008
Workshop June 19-20 2008
WEBSITES
Workshop website: http://www.sigdial.org/workshops/workshop9
Submission link: To be announced
SIGdial organization website: http://www.sigdial.org
CO-LOCATION ACL/HLT 2008 website: http://www.acl2008.org
CONTACT
For any questions, please contact the co-Chairs at:
Beth Ann Hockey bahockey@ucsc.edu
David Schlangen das@ling.uni-potsdam.de
Back to Top -
LIPS 2008 Visual Speech Synthesis Challenge
LIPS 2008: Visual Speech Synthesis Challenge
LIPS 2008 is the first visual speech synthesis challenge. It will be
held as a special session at INTERSPEECH 2008 in Brisbane, Australia
(http://www.interspeech2008.org). The aim of this challenge is to
stimulate discussion about subjective quality assessment of synthesised
visual speech with a view to developing standardised evaluation procedures.
In association with this challenge a training corpus of audiovisual
speech and accompanying phoneme labels and timings will be provided to
all entrants, who should then train their systems using this data. (As
this is the first year the challenge will run and to promote wider
participation, proposed entrants are free to use a pre-trained model).
Prior to the session a set of test sentences (provided as audio, video
and phonetic labels) must be synthesised on-site in a supervised room. A
series of double-blind subjective tests will then be conducted to
compare each competing system against all others. The overall winner
will be announced and presented with their prize at the closing ceremony
of the conference.
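The call leaves open how the round-robin pairwise judgements will be aggregated into an overall winner. One standard possibility, shown here purely as a sketch with invented counts, is to fit a Bradley-Terry preference model with a simple MM iteration:

    import numpy as np

    def bradley_terry(wins, iters=200):
        """MM updates for Bradley-Terry strengths; wins[i, j] = number of
        subjective comparisons in which system i beat system j."""
        n = wins.shape[0]
        w = np.ones(n)
        for _ in range(iters):
            for i in range(n):
                num = wins[i].sum()
                den = sum((wins[i, j] + wins[j, i]) / (w[i] + w[j])
                          for j in range(n) if j != i)
                w[i] = num / den
            w /= w.sum()  # only the ratios are identifiable
        return w

    # Invented outcome matrix for four synthesis systems (row beats column).
    wins = np.array([[0, 6, 7, 9],
                     [4, 0, 6, 8],
                     [3, 4, 0, 7],
                     [1, 2, 3, 0]], dtype=float)
    strengths = bradley_terry(wins)
    print(np.argsort(strengths)[::-1])  # ranking, best system first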
All entrants will submit a 4/6 (TBC) page paper describing their system
to INTERSPEECH indicating that the paper is addressed to the LIPS special
session. A special edition of the Eurasip Journal on Speech, Audio and Music
Processing in conjunction with the challenge is also scheduled.
To receive updated information as it becomes available, you can join the
mailing list by visiting
https://mail.icp.inpg.fr/mailman/listinfo/lips_challenge. Further
details will be mailed to you in due course.
Please invite colleagues to join, and distribute this email widely to your
academic and industrial partners. Besides broad participation of
research groups in audiovisual speech synthesis and talking faces, we
particularly welcome participation from the computer game industry.
Please confirm your willingness to participate in the challenge, submit
a paper describing your work and join us in Brisbane by sending an email
to sascha.fagel@tu-berlin.de, b.theobald@uea.ac.uk,
gerard.bailly@gipsa-lab.inpg.fr
Organising Committee
Sascha Fagel, University of Technology, Berlin - Germany
Barry-John Theobald, University of East Anglia, Norwich - UK
Gerard Bailly, GIPSA-Lab, Dpt. Speech & Cognition, Grenoble - France
Back to Top
Future Speech Science and Technology Events
-
MATMT 2008 workshop: Mixing approaches to Machine Translation
MATMT2008 workshop:
"Mixing Approaches to Machine Translation"http://ixa2.si.ehu.es/matmt-2008
Donostia-San Sebastian , Thursday February 14th 2008
IXA Group - University of the Basque CountryCALL FOR PARTICIPATION
Workshop topics
We are particularly interested in papers describing research and development in the following areas:
- Comparing different approaches for developing MT
- Methods to compare and integrate translation outputs obtained with different MT approaches.
- MT evaluation methods, especially those suitable for languages with rich morphology.
- Morphology-, syntax- or semantic-augmented SMT models
- Research developed using OpenSource language resources for developing hybrid MT
Program
University of the Basque Country, Faculty of Computer Science
Lardizabal 1, Donostia
February 14th
Keynote speakers:
- Federico, Marcello (Fondazione Bruno Kessler, Italy)
- Koehn, Philipp (University of Edinburgh, UK)
- Way, Andy (Dublin City University)
9.00-9.30: Registration
9.30-10.15: Invited talk 1- P. Koehn Moses: Moving Open Source MT towards Linguistically Richer Models
10.15-11.05: Regular talks - Evaluation (20' and 5' for questions)
- A Method of Automatically Evaluating Machine Translations Using a Word-alignment-based Classifier
- Diagnosing Human Judgements in MT Evaluation: an Example based on the Spanish Language
11.05-11.30: Coffee break
11.30-13.10: Regular talks - Mixed methods 1 (20' and 5' for questions)
- Mixing Approaches to MT for Basque: Selecting the best output from RBMT, EBMT and SMT
- Enriching Statistical Translation Models using a Domain-independent Multilingual Lexical Knowledge Base
- Statistical Post-Editing: A Valuable Method in Domain Adaptation of RBMT Systems for Less-Resourced Languages
- From free shallow monolingual resources to machine translation systems: easing the task
13.30-14.30: Lunch
14.30-15.15: Invited talk 2- M. Federico Recent Advances in Spoken Language Translation
15.15-16.30: Regular talks - Mixed methods 2 (20' and 5' for questions)
- Exploring Spanish-morphology effects on Chinese-Spanish SMT
- Linguistic Categorisation in Machine Translation using Stochastic Finite State Transducers
- Vocabulary Extension via POS Information for SMT
16.30-17.00: Coffee break
17:00-18.30:
- Invited talk 3- A. Way Combining Approaches to Machine Translation: the DCU Experience
- Conclusions. Moderator: David Farwell
Registration
The registration fee is 50 € when registering before February 4th.
Late registration will still be possible, but at 60 €.
Online registration is open.
The fee includes proceedings, lunch and coffee/cookies.
The steps for on-line registration are the following:
- creating an account in the system (REGISTER), including your email
- receiving your identification by e-mail
- entering in the system (ENTER)
- confirmation of the fee (ACCEPT) (2 times)
- choosing electronic payment (ACCEPT)
- selection of your credit card company
- entering information about your credit card (secure connection)
Going to the registration process
If you have any problem, please contact i.alegria[at]ehu.es.
Optional Dinner
An optional dinner will be organized at a cider house (Sagardotegi).
We'll go to an authentic cider house where we'll taste the best cider and eat the traditional menu: codfish omelette, fried codfish with peppers, grilled beef T-bone and local cheese with quince and walnuts. An experience that you will never forget. The price will be about 35 € including the transport (to be paid at the registration desk).
Venue and Travel
- Donostia (or San Sebastian): The City
- General info
- Map
- How to arrive
- Accommodation. Our suggestion (close to the university):
Programme Committee
- Iñaki Alegria (University of the Basque Country, Donostia)
- Kutz Arrieta (Vicomtech, Donostia)
- Núria Castell (Technical University of Catalonia, TALP, Barcelona)
- Arantza Diaz de Ilarraza (University of the Basque Country, Donostia)
- David Farwell (Technical University of Catalonia, TALP, Barcelona)
- Mikel Forcada (University of Alacant, Alicante)
- Philipp Koehn (University Of Edinburgh, UK)
- Lluis Marquez (Technical University of Catalonia, Barcelona) (Co-chair)
- Hermann Ney (Rheinisch-Westfälische Technische Hochschule, Aachen)
- Kepa Sarasola (University of the Basque Country, Donostia) (Co-chair)
Local organization
IXA Group, University of the Basque Country
- Alegria I., Casillas A., Díaz de Ilarraza A., Igartua J., Labaka G., Lersundi M., and Sarasola K.
- Gurrutxaga A., Leturia,I., and Saralegi X.
-
LangTech2008
The language and speech technology conference: Langtech2008
Rome, 28-29 February 2008
San Michele a Ripa conference centre.
Website
We are delighted to welcome you to the LangTech2008 conference, which will be held at the San Michele a Ripa convention center in Rome, February 28-29, 2008. After two successful national conferences on speech and language technology (2002, 2006), the ForumTal decided to promote an international event in the field. A follow-up to the previous LangTech conferences (Berlin, Paris), LangTech2008 aims at giving the industrial and research communities, as well as public administration, a chance to share and discuss language and speech technologies. The conference will feature world-class speakers, exhibits, lecture and poster sessions.
PAPERS SUBMISSION DEADLINE: 30th November 2007
EXHIBITION BOOTHS RESERVATION: Reduced Fares until 15th November 2007
REGISTRATION: Reduced Fees until 31st December 2007
A golden promotional opportunity for all language technology SMEs!
LangTech 2008, http://www.langtech.it/en/, the language technology business conference, is featuring a special elevator session for small and medium sized enterprises, SMEs.
An elevator session is a session with very short presentations.
If you seek business partners, you are invited to participate in LangTech 2008 in Rome, February 28-29, and make yourself known to the audience.
A committee of European experts shall choose a total of 10 SMEs from anywhere in Europe and beyond to give a 5 min self-promotional presentation in English before a floor of venture capitalists, business peers, large technology corporations and other interested parties.
A jury will select three of the presenting companies, and award the first, second and third LangTech Prize.
Submissions must be received by 30 December 2007.
The lucky candidates will be informed by 15 January 2008.
We will offer a reduced fee to LangTech 2008 to all SMEs selected to present at the elevator session.
If you wish to submit a request to present your SME for this unique opportunity, please contact sme@langtech.it immediately, and visit the web site dedicated to LangTech 2008, http://www.langtech.it/en/, where you can download a short slide set with guidelines for preparing your candidature.
Dr Calzolari would be pleased if you could spread the Conference Announcement and the Call for SMEs
Presentations to anyone you consider potentially interested in the event.
Dr PAOLA BARONI
Researcher
Consiglio Nazionale delle Ricerche
Istituto di Linguistica Computazionale
Area della Ricerca di Pisa
Via Giuseppe Moruzzi
56124 Pisa
ITALY
Phone: [+39] 050 315 2873
Fax: [+39] 050 315 2834
e-Mail: paola.baroni@ilc.cnr.it
URL: http://www.ilc.cnr.it
Skype: paola.baroni
Back to Top -
AVIOS
San Diego, March 10 - 12, 2008
The defining conference on Voice Search
From the Applied Voice Input Output Society and Bill Meisel's TMA Associates
Voice Search 2008 will be held at the San Diego Marriott Hotel and Marina, San Diego, California, March 10 - 12, 2008. Voice Search is a rapidly evolving technology and market. AVIOS (the Applied Voice Input Output Society) and Bill Meisel (president of TMA Associates and Editor of Speech Strategy News) are joining together to launch this new conference as a definitive resource for companies that will be impacted by this important trend.
"Voice Search" suggests an analogy to "Web Search," which has been a runaway success for both users and providers. The maturing of speech recognition and text-to-speech synthesis--and the recent involvement of large companies--has validated the availability of the core functionality necessary to support this analogy. The conference explores the possibilities, limitations, and differences of Voice Search and Web search.
Web search made the Web an effective and required marketing tool. Will Voice Search do the same for the telephone channel? The potential impact on call centers is another key issue covered by the conference.
The agenda covers:
- What Voice Search is and will become
- Applications of Voice Search
- The appropriate use of speech technology to support voice search
- Insight for service providers, enterprises, Web services, and call centers that want to take advantage of this new resource
- Marketing channels and business models in Voice Search
- Emerging supporting technology, development tools, and delivery platforms supporting Voice Search
- Dealing with the surge of calls created by Voice Search.
Specific topics that will be covered at Voice Search 2008 include:
Applications
- Automated directory assistance and local search
- Voice information searches by telephone
- Ad-supported information access by phone
- Audio/Video searches on the Web and enterprises
- Speech analytics: extracting business intelligence from audio files
- Converting voicemail to searchable text
- Other new applications and services
- Application examples and demonstrations
Markets
- How the voice search market is developing
- The changing role of the telephone in marketing
- Business models
- The right way to deliver audio ads
- Justifying subscriber fees
Delivery
- Platforms, tools, and services for effectively delivering these applications
- Implementation examples and demonstrations
- Hosted versus customer-premises solutions
- Supporting multiple modes of interaction
- Key sources of technology and service
Contact centers
- The impact of Voice Search on contact centers
- Speech automation to handle the increased call flow
- Moving from handling problems to building customer relationships
Technology
- Speech recognition methods supporting voice search
- Text-to-speech quality and alternatives
- Supporting multimodal solutions
- Supporting standards
- Delivering responsive applications
- Voice User Interface issues and solutions in voice search
Sponsorships are available:
http://www.voicesearchconference.com/sponsor.htm
We're interested in proposals for speaking (available slots are limited):
http://www.voicesearchconference.com/talk.htm
Registration is open with an early-registration discount:
http://www.voicesearchconference.com/registration.htm
Other information (What is Voice Search?, Voice Search News, About AVIOS, About Bill Meisel and TMA Associates) is available on the conference website: http://www.voicesearchconference.com/ -
CfP-2nd INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS (LATA 2008)
Tarragona, Spain, March 13-19, 2008
Website http://www.grlmc.com
AIMS:
LATA is a yearly conference in theoretical computer science and its applications. As it is linked to the International PhD School in Formal Languages and Applications, which has been developed at the host institute since 2001, LATA 2008 will reserve significant room for young computer scientists at the beginning of their careers. It will aim at attracting scholars from both classical theory fields and application areas (bioinformatics, systems biology, language technology, artificial intelligence, etc.).
SCOPE:
Topics of either theoretical or applied interest include, but are not limited to:
- words, languages and automata
- grammars (Chomsky hierarchy, contextual, multidimensional, unification, categorial, etc.)
- grammars and automata architectures
- extended automata
- combinatorics on words
- language varieties and semigroups
- algebraic language theory
- computability
- computational and structural complexity
- decidability questions on words and languages
- patterns and codes
- symbolic dynamics
- regulated rewriting
- trees, tree languages and tree machines
- term rewriting
- graphs and graph transformation
- power series
- fuzzy and rough languages
- cellular automata
- DNA and other models of bio-inspired computing
- symbolic neural networks
- quantum, chemical and optical computing
- biomolecular nanotechnology
- automata and logic
- algorithms on automata and words
- automata for system analysis and programme verification
- automata, concurrency and Petri nets
- parsing
- weighted machines
- transducers
- foundations of finite state technology
- grammatical inference and algorithmic learning
- text retrieval, pattern matching and pattern recognition
- text algorithms
- string and combinatorial issues in computational biology and bioinformatics
- mathematical evolutionary genomics
- language-based cryptography
- data and image compression
- circuits and networks
- language-theoretic foundations of artificial intelligence and artificial life
- digital libraries
- document engineering
STRUCTURE:
LATA 2008 will consist of:
- 3 invited talks (to be announced in the second call for papers)
- 2 tutorials (to be announced in the second call for papers)
- refereed contributions
- open sessions for discussion in specific subfields or on professional issues
SUBMISSIONS:
Authors are invited to submit papers presenting original and unpublished research. Papers should not exceed 12 pages and should be formatted according to the usual LNCS article style. Submissions have to be sent through the web page.
PUBLICATION:
A volume of proceedings (expected to appear in the LNCS series) will be available by the time of the conference. A refereed volume of selected extended papers will be published soon afterwards as a special issue of a major journal.
REGISTRATION:
The registration period will be open from January 7 to March 13, 2008. Details about how to register will be provided through the website of the conference.
Early registration fees: 250 euros
Early registration fees (PhD students): 100 euros
Registration fees: 350 euros
Registration fees (PhD students): 150 euros
FUNDING:
25 grants covering partial-board accommodation will be available for non-local PhD students. To apply, candidates must e-mail their CV together with a copy of a document proving their status as a PhD student.
IMPORTANT DATES:
Paper submission: November 16, 2007
Application for funding (PhD students): December 7, 2007
Notification of funding acceptance or rejection: December 21, 2007
Notification of paper acceptance or rejection: January 18, 2008
Early registration: February 1, 2008
Final version of the paper for the proceedings: February 15, 2008
Start of the conference: March 13, 2008
Submission to the journal issue: May 23, 2008
FURTHER INFORMATION:
Website http://www.grlmc.com
ADDRESS:
LATA 2008
Research Group on Mathematical Linguistics
Rovira i Virgili University
Plaza Imperial Tarraco, 1
43005 Tarragona, Spain
Phone: +34-977-559543
Fax: +34-977-559597 -
CfP Workshop on Empirical Approaches to Speech Rhythm
CALL FOR PAPERS
*** Workshop on Empirical Approaches to Speech Rhythm ***
Centre for Human Communication
UCL
Abstracts due: 31st January 2008
Workshop date: 28th March 2008
Empirical studies of speech rhythm are becoming increasingly popular.
Metrics for the quantification of rhythm have been applied to
typological, developmental, pathological and perceptual questions.
The prevalence of rhythm metrics based on durational characteristics
of consonantal and vocalic intervals (e.g. deltaV, deltaC, %V, nPVI-V,
rPVI-C, VarcoV and VarcoC) indicates the need for agreement about
their relative efficacy and reliability. More fundamentally, it
remains to be demonstrated whether such metrics really quantify
speech rhythm, a controversial and elusive concept.
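For readers unfamiliar with these measures, the short Python sketch
below illustrates the standard definitions usually given for them
(deltaV/deltaC and %V following Ramus and colleagues, the raw and
normalised PVIs following Grabe and Low, and the Varco measures
following Dellwo). It is a minimal, hypothetical illustration, not
workshop material: the function names and toy durations are invented,
and real studies would start from carefully segmented corpora.

# Minimal sketch of common rhythm metrics, computed from lists of
# interval durations in seconds. The segmentation into vocalic and
# consonantal intervals is assumed to be given (e.g. hand-labelled).
from statistics import mean, pstdev

def percent_v(vowels, consonants):
    # %V: proportion of total utterance duration that is vocalic.
    return 100.0 * sum(vowels) / (sum(vowels) + sum(consonants))

def delta(intervals):
    # deltaV / deltaC: standard deviation of interval durations.
    return pstdev(intervals)

def varco(intervals):
    # VarcoV / VarcoC: delta normalised by the mean duration, times 100.
    return 100.0 * pstdev(intervals) / mean(intervals)

def rpvi(intervals):
    # Raw Pairwise Variability Index (rPVI-C: consonantal intervals).
    return mean(abs(a - b) for a, b in zip(intervals, intervals[1:]))

def npvi(intervals):
    # Normalised PVI (nPVI-V: vocalic intervals).
    return 100.0 * mean(abs(a - b) / ((a + b) / 2.0)
                        for a, b in zip(intervals, intervals[1:]))

# Toy, made-up durations for illustration only:
v = [0.08, 0.12, 0.06, 0.15]   # vocalic interval durations
c = [0.10, 0.07, 0.09, 0.11]   # consonantal interval durations
print(percent_v(v, c), delta(v), varco(v), rpvi(c), npvi(v))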
Confirmed speakers:
Francis Nolan (Cambridge) - keynote speaker
Fred Cummins (UCD)
Volker Dellwo (UCL)
Klaus Kohler (Kiel)
Elinor Payne (Oxford)
Petra Wagner (Bonn)
Laurence White (Bristol)
Abstracts:
We invite abstract submissions for a limited number of additional
oral presentations, and for poster presentations. We welcome
abstracts that address any or all of the following questions:
- What is speech rhythm?
- How should we measure speech rhythm?
- Which rhythm metrics are most effective and reliable?
- What can rhythm metrics tell us?
- What are the limitations of rhythm metrics?
Publication:
It is intended that a limited number of contributions to the workshop
may be published in a special issue of Phonetica. Initial selection
of papers will be made after the workshop with a view to compiling a
thematically coherent publication. Selected papers will subsequently
be reviewed.
Important dates:
Abstracts must be received by: 31st January 2008
Notification of acceptance: 15th February 2008
Date of Workshop: 28th March 2008
Abstract submission:
Abstracts should be sent to: rhythm2008@phon.ucl.ac.uk. Abstracts
should be in Word or rtf format, 12pt Times New Roman, 1.5 line
spacing, and no longer than one page of A4. The file should be
entitled RhythmWorkshop-[name].doc, where [name] is the last name of
the first author. The abstract should start with:
- the title of the abstract in bold and centred;
- the name(s) and department(s) of the author(s) in italics and
centred;
- the email address(es) of the author(s), centred.
The body of the abstract should be justified left and right.
Further information:
For more information and updates please check
www.phon.ucl.ac.uk/rhythm2008. Email enquiries should be directed to
rhythm2008@phon.ucl.ac.uk.
On behalf of the scientific organizing committee:
Volker Dellwo, Elinor Payne, Petra Wagner and Laurence White
Back to Top -
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Las Vegas USA
30 March - 4 April 2008
The world's largest and most comprehensive technical conference focused on signal processing.
Website: http://www.icassp2008.org/
Back to Top -
Marie Curie Research Training Workshop "Sound to Sense"
Marie Curie Research Training Network "Sound to Sense" (S2S)
Beyond Short Units
Using non-segmental units and top-down models in speech recognition in humans and machines
16th - 19th April 2008, Naples, Italy
http://www.sound2sense.eu/index.php/s2s/workshop/beyond-short-units/
The event will have a double identity: both a workshop and a doctoral school. The workshop is planned as a succession of debates and tutorials held by invited speakers, in which phoneticians and phonologists, psycholinguists and speech engineers compare their views on these themes.
Workshop themes:
- Fine Phonetic Detail
- ASR and human speech recognition models based on "long" segments and/or top-down approaches
- Alternative ASR acoustic models
- Non-segmental features for ASR
- Multi-modal speech representation and analysis
- Choosing the best analysis unit for speech processing in humans and machines
- Time, duration and tempo: common clocks between the listener and the speaker
- Constraints, interactions and relations between phonetics and "higher" linguistic levels in spoken language
- Spoken language grammar
- Language Modelling for ASR
Invited speakers:
- G. Coro, Milano, IT
- P. Cosi, Padova, IT
- U. Frauenfelder, Geneva, CH
- S. Hawkins, Cambridge, UK
- J. Local, York, UK
- R. Moore, Sheffield, UK
- L. ten Bosch, Nijmegen, NL
- M. Voghera, Salerno, IT
- J. Volin, Prague, CZ
- L. White, Bristol, UK
The workshop is mainly directed at S2S members, but poster contributions from any other interested researcher will be accepted.
Those who intend to contribute to the workshop with a poster on one of the above-cited themes should send an abstract (minimum two A4 pages) to s2snaples@gmail.com no later than 29th February 2008; acceptance will be notified by 10th March 2008.
The same e-mail address can be used for any further questions. -
Expressivity in Music and Speech
2nd CALL FOR ABSTRACTS :
NEW SUBMISSION DEADLINE : February 17th, 2008
Prosody and expressivity in speech and music
Satellite Event around Speech Prosody 2008 / First EMUS Conference -
Expressivity in MUsic and Speech
http://www.sp2008.org/events.php /
http://recherche.ircam.fr/equipes/analyse-synthese/EMUS
Campinas, Brazil, May 5th, 2008
[Abstract submission deadline: February 17th, 2008]
Keywords: emotion, expressivity, prosody, music, acquisition,
perception, production,
interpretation, cognitive sciences, neurosciences, acoustic analysis.
DESCRIPTION:
Speech and music hold a wealth of "expressive potential", for they
can activate sequences of varied emotional experiences in the listener.
Beyond their semiotic differences, speech and music share acoustic
features such as duration, intensity, and pitch, and have their own
internal organization, with their own rhythms, colors, timbres and tones.
The aim of this workshop is to question the connections between various
forms of expressivity, and the
prosodic and gestural dimensions in the spheres of music and speech. We
will first tackle the links
between speech and music through enaction and embodied cognition. We
will then work on computer
modelling for speech and music synthesis. The third part will focus on
musicological and aesthetic
perspectives. We will end the workshop with a round table in order to
create a dialogue between the various angles from which prosody and
expressivity are approached in both speech and music.
FRAMEWORK:
This workshop will be the starting point of a string of events on the
relations between language and music:
May 16th: Prosody, Babbling and Music (Ecole Normale Supérieure Lettres
et Sciences Humaines, Lyon)
June 17-18th: Prosody of Expressivity in Music and Speech (IRCAM, Paris)
September 25th and 26th: Semiotics and microgenesis of verbal and
musical forms (RISC, Paris).
Our aim is to build links between several fields of research and create a
community interested in the relations between music and language. The
project will culminate in a final publication of the keynote papers of
those four events.
SUBMISSION PROCEDURE:
The workshop will host about ten posters.
Authors should submit an extended abstract in PDF format to
beller@ircam.fr by the new deadline of February 17, 2008.
We will send an email confirming receipt of the submission. The
suggested abstract length is one page maximum, formatted in a standard
style.
The accepted abstracts will be allocated slots as poster highlights.
Time will be allocated in the programme for poster presentations and
discussions.
Before the workshop, the extended abstracts (maximum 4 pages) will be
made available to a broader
audience on the workshop web site. We also plan to maintain the web page
after the workshop and
encourage the authors to submit slides and posters with relevant links
to their personal web pages.
KEY DATES:
Dec 10: Workshop announcement and Call for Abstracts
Jan 30: Abstract submission deadline
Feb 17: New submission deadline
Mar 28: Notification of acceptance
Apr 25: Final extended abstracts due
May 5: Workshop
SCIENTIFIC COMMITTEE:
Christophe d'Alessandro (LIMSI, Orsay);
Antoine Auchlin (University of Geneva, Linguistics Department);
Grégory Beller (IRCAM);
Nick Campbell (ATR, Nara);
Anne Lacheret (MODYCO,
Nanterre University) ;
Sandra Madureira (PUC-SP);
Aliyah Morgenstern (ICAR, Ecole Normale Supérieure Lettres et Sciences
Humaines) ;
Nicolas Obin (IRCAM)
ORGANISERS:
- University of Geneva, Linguistics Department (Antoine Auchlin)
- IRCAM (Grégory Beller and Nicolas Obin)
- MODYCO, Nanterre University (Anne Lacheret)
- ICAR, Ecole Normale Supérieure Lettres et Sciences Humaines (Aliyah
Morgenstern)
CONTACT :
For questions/ suggestions about the workshop, please contact
beller@ircam.fr
Please refer to http://recherche.ircam.fr/equipes/analyse-synthese/EMUS for
up-to-date information about the workshop.
PROGRAM
http://www.sp2008.org/events/EMUS-conferences.pdf -
CfP- LREC 2008 - 6th Language Resources and Evaluation Conference
Palais des Congrès Mansour Eddahbi, MARRAKECH - MOROCCO
MAIN CONFERENCE: 28-29-30 MAY 2008
WORKSHOPS and TUTORIALS: 26-27 MAY and 31 MAY- 1 JUNE 2008
Conference web site: http://www.lrec-conf.org/lrec2008/
The sixth international conference on Language Resources and Evaluation (LREC) will be organised in 2008 by ELRA in cooperation with a wide range of international associations and organisations.
CONFERENCE TOPICS
Issues in the design, construction and use of Language Resources (LRs): text, speech, multimodality
- Guidelines, standards, specifications, models and best practices for LRs
- Methodologies and tools for LRs construction and annotation
- Methodologies and tools for the extraction and acquisition of knowledge
- Ontologies and knowledge representation
- Terminology
- Integration between (multilingual) LRs, ontologies and Semantic Web technologies
- Metadata descriptions of LRs and metadata for semantic/content markup
Exploitation of LRs in different types of systems and applications
- For: information extraction, information retrieval, speech dictation, mobile communication, machine translation, summarisation, web services, semantic search, text mining, inferencing, reasoning, etc.
- In different types of interfaces: (speech-based) dialogue systems, natural language and multimodal/multisensorial interactions, voice activated services, etc.
- Communication with neighbouring fields of applications, e.g. e-government, e-culture, e-health, e-participation, mobile applications, etc.
- Industrial LRs requirements, user needs
Issues in Human Language Technologies evaluation
- HLT Evaluation methodologies, protocols and measures
- Validation, quality assurance, evaluation of LRs
- Benchmarking of systems and products
- Usability evaluation of HLT-based user interfaces, interactions and dialog systems
- Usability and user satisfaction evaluation
General issues regarding LRs & Evaluation
- National and international activities and projects
- Priorities, perspectives, strategies in national and international policies for LRs
- Open architectures
- Organisational, economical and legal issues
Special Highlights
LREC targets the integration of different types of LRs - spoken, written, and other modalities - and of the respective communities. To this end, LREC encourages submissions covering issues which are common to different types of LRs and language technologies.
LRs are currently developed and deployed in a much wider range of applications and domains. LREC 2008 recognises the need to encompass all those data that interact with language resources in an attempt to model more complex human processes and develop more complex systems, and encourages submissions on topics such as:
- Multimodal and multimedia systems, for Human-Machine interfaces, Human-Human interactions, and content processing
- Resources for modelling language-related cognitive processes, including emotions
- Interaction/Association of language and perception data, also for robotic systems
The Scientific Programme will include invited talks, oral presentations, poster and demo presentations, and panels. There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered.
SUBMISSIONS AND DATES
Submitted abstracts of papers for oral and poster or demo presentations should consist of about 1500-2000 words.
- Submission of proposals for oral and poster/demo papers: 31 October 2007
- Submission of proposals for panels, workshops and tutorials: 31 October 2007
The Proceedings on CD will include both oral and poster papers, in the same format. In addition, a Book of Abstracts will be printed.
Back to Top -
Call for Papers: HLT & NLP within the Arabic world Workshop at LREC 2008
HLT & NLP within the Arabic world: Arabic Language and local languages processing: Status Updates and Prospects
Please refer to http://www.lrec-conf.org/lrec2008/Workshops.html for details.
* Motivation and Aims*
This Workshop intends to add value to the issues addressed during the main conference (Human Language Technologies (HLT) & Natural Language Processing (NLP)) and enhance the work carried out at different places to process Arabic language(s) and more generally Semitic languages and other local and foreign languages spoken in the region.
It should bring together people who are actively involved in Arabic Written and Spoken language processing in a mono- or cross/multilingual context, and give them an opportunity to update the community through reports on completed and ongoing work as well as on the availability of LRs, evaluation protocols and campaigns, products and core technologies (in particular open source ones). This should enable the participants to develop a common view on where we stand with respect to this particular set of languages and to foster the discussion of the future of this research area. Particular attention will be paid to activities involving technologies such as Machine Translation, Cross-Lingual Information Retrieval/extraction, Summarization, Speech to text transcriptions, etc., and languages such as Arabic varieties, Amazigh, Amharic, Hebrew, Maltese, and other local languages. Evaluation methodologies and resources for evaluation of HLT are also a main focus.
* Topics of Interest *
The submissions should address some of the following issues:
· Issues in the design, the acquisition, creation, management, access, distribution, use of Language Resources (Standard Arabic, Colloquial Arabic, other Semitic languages, Amazigh, Coptic, Maltese, English/French spoken locally, etc.)
· Impact on LR collections/processing and NLP of the crucial issues related to "code switching" between different dialects and languages
· Specific issues related to the above-mentioned languages (such as the role of morphology, named entities, corpus alignment, etc.)
· Multilinguality issues including relationship between Colloquial and Standard Arabic
· Exploitation of LR in different types of applications
· Industrial LR requirements and community's response;
· Benchmarking of systems and products; resources for benchmarking and evaluation for written and spoken language processing;
· Focus on some key technologies such as MT (all approaches e.g. Statistical, Example-Based, etc.), Information Retrieval, Speech Recognition, Spoken Documents Retrieval, CLIR, Question-Answering, Summarization,
· Local, regional, and international activities and projects;
· Needs, possibilities, forms, initiatives of/for regional and international cooperation.
* Submission Details (more on http://www.lrec-conf.org/lrec2008/Workshops.html) *
Submissions must be in English. Abstracts for workshop contributions should not exceed four A4 pages (excluding references). An additional title page should state: the title; author(s); affiliation(s); and contact author's e-mail address, as well as postal address, telephone and fax numbers.
Submissions are to be sent by email, preferably in PostScript or PDF format, to arabic@elda.org, to arrive before 15 February 2008.
Registration to LREC'08 will be required for participation, so potential participants are invited to refer to the main conference website for all details not covered in the present call (http://www.lrec-conf.org/lrec2008/).
* Important Dates *
Call for papers: 3 January 2008
Deadline for abstract submissions: 15 February 2008
Notification of acceptance: 14 March 2008
Final version of accepted paper: 11 April 2008
Workshop full-day: Saturday 31st May 2008
* Workshop chair *
Khalid Choukri (ELRA/ELDA, France )
* Workshop Co-chairs *
Mona Diab (Columbia University, USA)
Bente Maegaard (CST, University of Copenhagen, Denmark)
Paolo Rosso (Universidad Politécnica Valencia, Spain)
Abdelhadi Soudi (ENIM, Morocco)
Ali Farghaly (Oracle USA and Monterey Institute of International Studies) -
Collaboration: interoperability between people in the creation of language resources for less-resourced languages
CALL FOR PAPERS
Back to Top
"Collaboration: interoperability between people in the creation of language resources for less-resourced languages"
LREC 2008 pre-conference workshop
Marrakech, Morocco: afternoon of Tuesday 27th May 2008
Organised by the SALTMIL Special Interest Group of ISCA
SALTMIL: http://ixa2.si.ehu.es/saltmil/
LREC 2008: http://www.lrec-conf.org/lrec2008/
Call For Papers: http://ixa2.si.ehu.es/saltmil/en/activities/lrec2008/lrec-2008-workshop-cfp.html
Paper submission: http://www.easychair.org/conferences/?conf=saltmil2008
Papers are invited for the above half-day workshop, in the format outlined below. Most submitted papers will be presented in poster form, though some authors may be invited to present in lecture format.
Context and Focus
The minority or "less resourced" languages of the world are under increasing pressure from the major languages (especially English), and many of them lack full political recognition. Some minority languages have been well researched linguistically, but most have not, and the majority do not yet possess basic speech and language resources which would enable the commercial development of products. This lack of language products may accelerate the decline of those languages that are already struggling to survive. To break this vicious circle, it is important to encourage the development of basic language resources as a first step.
In recent years, linguists across the world have realised the need to document endangered languages immediately, and to publish the raw data. This raw data can be transformed automatically (or with the help of volunteers) into resources for basic speech and language technology. It thus seems necessary to extend the scope of recent workshops on speech and language technology beyond technological questions of interoperability between digital resources: the focus will be on the human aspect of creating and disseminating language resources for the benefit of endangered and non-endangered less-resourced languages.
Topics
The theme of "collaboration" centres on issues involved in collaborating with:
* Trained researchers.
* Non-specialist workers (paid or volunteers) from the speaker community.
* The wider speaker community.
* Officials, funding bodies, and others.
Hence there will be a corresponding need for the following:
* With trained researchers: Methods and tools for facilitating collaborative working at a distance.
* With non-specialist workers: Methods and tools for training new workers for specific tasks, and laying the foundations for continuation of these skills among native speakers.
* With the wider speaker community: Methods of gaining acceptance and wider publicity for the work, and of increasing the take-up rates after completion of the work.
* With others: Methods of presenting the work in non-specialist terms, and of facilitating its progress.
Topics may include, but are not limited to:
* Bringing together people with very different backgrounds.
* How to organize volunteer work (some endangered languages have active volunteers).
* How to train non-specialist volunteers in elicitation methods.
* Working with the speaker community: strengthening acceptance of ICT and language resources among the speaker community.
* Working collaboratively to build speech and text corpora with few existing language resources and no specialist expertise.
* Web-based creation of linguistic resources, including web 2.0.
* The development of digital tools to facilitate collaboration between people.
* Licensing issues; open source, proprietary software.
* Re-use of existing data; interoperability between tools and data.
* Language resources compatible with limited computing power environments (old machines, the $100 handheld device, etc.)
* General speech and language resources for minority languages, with particular emphasis on software tools that have been found useful.
Important dates
29 February 2008 Deadline for submission
17 March 2008 Notification
31 March 2008 Final version
27 May 2008 Workshop
Organisers
* Briony Williams: Language Technologies Unit, Bangor University, Wales, UK
* Mikel Forcada: Departament de Llenguatges i Sistemes Informàtics, Universitat d'Alacant, Spain
* Kepa Sarasola: Dept. of Computer Languages, University of the Basque Country
Submission information
We expect short papers of at most 3500 words (about 4-6 pages) describing research addressing one of the above topics, to be submitted as PDF documents by uploading to the following URL:
http://www.easychair.org/conferences/?conf=saltmil2008
The final papers should be no longer than 6 pages, adhering to the stylesheet that will be adopted for the LREC Proceedings (to be announced later on the Conference web site). -
CfP ELRA Workshop on Evaluation
CALL FOR PAPERS
ELRA Workshop on Evaluation
Looking into the Future of Evaluation: when automatic metrics meet
task-based and performance-based approaches
To be held in conjunction with the 6th International Language Resources
and Evaluation Conference (LREC 2008)
27 May 2008, Palais des Congrès Mansour Eddahbi, Marrakech
Background
Automatic methods to evaluate system performance play an important role
in the development of a language technology system. They speed up
research and development by allowing fast feedback, and they aim both to
make results comparable and to match human evaluation of system output.
However, after several years of study and exploitation of such metrics,
we still face problems like the following:
* they only evaluate part of what should be evaluated
* they produce measurements that are hard to understand/explain, and/or
hard to relate to the concept of quality
* they fail to match human evaluation
* they require resources that are expensive to create; etc.
Therefore, an effort to integrate knowledge from a multitude of
evaluation activities and methodologies should help us solve some of
these immediate problems and avoid creating new metrics that reproduce
such problems.
Looking at MT as a sample case, two problems are immediately apparent:
reference translations and distance measurement. The former are
difficult and expensive to produce; they do not cover the usually
wide spectrum of translation possibilities; and, even more
discouraging, worse results are obtained when reference translations
are of higher quality (more spontaneous and natural, and thus sometimes
more lexically and syntactically distant from the source text).
Regarding the latter, the measurement of the distance between the source
text and the output text is carried out by means of automatic metrics
that do not match human intuition as well as claimed. Furthermore,
different metrics perform differently, which has already led researchers
to study metric/approach combinations which integrate automatic methods
into a deeper linguistically oriented evaluation. Hopefully, this should
help soften the unfair treatment received by some rule-based systems,
clearly punished by certain system-approach sensitive metrics.
On the other hand, there is the key issue of "what needs to be measured",
so as to draw the conclusion that "something is of good quality",
or probably rather "something is useful for a particular purpose". In
this regard, works like those done within the FEMTI framework have shown
that aspects such as usability, reliability, efficiency, portability,
etc. should also be considered. However, the measuring of such quality
characteristics cannot always be automated, and there may be many other
aspects that could be usefully measured.
This workshop follows the evolution of a series of workshops where
methodological problems, not only for MT but for evaluation in general,
have been approached. Along the lines of these discussions and aiming to
go one step further, the current workshop, while taking into account the
advantages of automatic methods and the shortcomings of current methods,
should focus on task-based and performance-based approaches for
evaluation of natural language applications, with key questions such as:
- How can it be determined how useful a given system is for a given task?
- How can focusing on such issues and combining these approaches with
our already acquired experience on automatic evaluation help us develop
new metrics and methodologies which do not feature the shortcomings of
current automatic metrics?
- Should we work on hybrid methodologies of automatic and human
evaluation for certain technologies and not for others?
- Can we already envisage the integration of these approaches?
- Can we already plan for some immediate collaborations/experiments?
- What would it mean for the FEMTI framework to be extended to other HLT
applications, such as summarization, IE, or QA? Which new aspects would
it need to cover?
We solicit papers that address these questions and other related issues
relevant to the workshop.
Workshop Programme and Audience Addressed
This full-day workshop is intended for researchers and developers on
different evaluation technologies, with experience on the various issues
concerned in the call, and interested in defining a methodology to move
forward.
The workshop will feature invited talks and submitted papers, and will
conclude with a discussion on future developments and collaboration.
Workshop Chairing Team
Gregor Thurmair (Linguatec Sprachtechnologien GmbH, Germany) - chair
Khalid Choukri (ELDA - Evaluations and Language resources Distribution
Agency, France) - co-chair
Bente Maegaard (CST, University of Copenhagen, Denmark) - co-chair
Organising Committee
Victoria Arranz (ELDA - Evaluations and Language resources Distribution
Agency, France)
Khalid Choukri (ELDA - Evaluations and Language resources Distribution
Agency, France)
Christopher Cieri (LDC - Linguistic Data Consortium, USA)
Eduard Hovy (Information Sciences Institute of the University of
Southern California, USA)
Bente Maegaard (CST, University of Copenhagen, Denmark)
Keith J. Miller (The MITRE Corporation, USA)
Satoshi Nakamura (National Institute of Information and Communications
Technology, Japan)
Andrei Popescu-Belis (IDIAP Research Institute, Switzerland)
Gregor Thurmair (Linguatec Sprachtechnologien GmbH, Germany)
Important dates
Deadline for abstracts: Monday 28 January 2008
Notification to Authors: Monday 3 March 2008
Submission of Final Version: Tuesday 25 March 2008
Workshop: Tuesday 27 May 2008
Submission Format
Abstracts should be no longer than 1500 words and should be submitted in
PDF format to Gregor Thurmair at g.thurmair@linguatec.de.
Back to Top -
2nd Intl Workshop on emotion corpora for research on emotion and affect
Deadline for abstracts: 12 February 2008
Second International Workshop on EMOTION (satellite of LREC):
CORPORA FOR RESEARCH ON EMOTION AND AFFECT
Monday, 26 May 2008
in Marrakech (Morocco)
In Association with
6th INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
LREC2008 http://www.lrec-conf.org/lrec2008/
Main Conference
28-29-30 May 2008
This workshop follows a first successful workshop on Corpora for research
on Emotion and Affect at LREC 2006. The HUMAINE network of excellence
(http://emotion-research.net/) has brought together several groups working
on the development of emotional databases; the HUMAINE Association will
continue this effort, and the workshop aims to broaden the interaction that
has developed in that context. The HUMAINE Association portal will provide
a range of services for individuals, such as a web presence, access to
data, and an email news service; special interest groups will be provided
with a working folder, a mailing list, and a discussion forum or a blog.
Conferences, workshops and research projects in the area of
emotion-oriented computing can be given a web presence on the portal.
Papers are invited in the area of corpora for research on emotion and
affect. They may raise one or more of the following questions. What kind
of theory of emotion is needed to guide the area? What are appropriate
sources? Which modalities should be considered, in which combinations?
What are the realistic constraints on recording quality? How can the
emotional content of episodes be described within a corpus? Which
emotion-related features should a corpus describe, and how? How should
access to corpora be provided? What level of standardisation is
appropriate? How can quality be assessed? Papers may also address ethical
issues in database development and access.
Description of the specific technical issues of the workshop:
Many models of emotion are common enough to affect the way teams go about
collecting and describing emotion-related data. Some which are familiar
and intuitively appealing are known to be problematic, either because they
are theoretically dated or because they do not transfer to practical
contexts. To evaluate the resources that are already available, and to
construct valid new corpora, research teams need some sense of the models
that are relevant to the area.
The organising committee:
Laurence Devillers / Jean-Claude Martin
Spoken Language Processing group/ Architectures and Models for
Interaction, LIMSI-CNRS,
BP 133, 91403 Orsay Cedex, France
(+33) 1 69 85 80 62 / (+33) 1 69 85 81 04 (phone)
(+33) 1 69 85 80 88 / (+33) 1 69 85 80 88 (fax)
devil@limsi.fr / martin@limsi.fr
http://www.limsi.fr/Individu/devil/
http://www.limsi.fr/Individu/martin/
Roddy Cowie / School of Psychology
Ellen Douglas-Cowie / Dean of Arts, Humanities and Social Sciences Queen's
University, Belfast BT7 1NN, UK
+44 2890 974354 / +44 2890 975348 (phone)
+44 2890 664144 / +44 2890 ****** (fax)
http://www.psych.qub.ac.uk/staff/teaching/cowie/index.aspx
http://www.qub.ac.uk/en/staff/douglas-cowie/
r.cowie@qub.ac.uk / e.douglas-Cowie@qub.ac.uk
Anton Batliner - Lehrstuhl fuer Mustererkennung (Informatik 5)
Universitaet Erlangen-Nuernberg - Martensstrasse 3
91058 Erlangen - F.R. of Germany
Tel.: +49 9131 85 27823 - Fax.: +49 9131 303811
batliner@informatik.uni-erlangen.de
http://www5.informatik.uni-erlangen.de/Personen/batliner/
Contact: Laurence Devillers lrec-emotion@limsi.fr
-------------------------
IMPORTANT DATES
-------------------------
1st call for papers: 21 December
2nd call for papers: 29 January
Deadline for 1500-2000 word abstract submission: 12 February
Notification of acceptance: 12 March
Final version of accepted paper: 4 April
Full-day workshop: 26 May
--------------
SUBMISSIONS
---------------
The workshop will consist of paper and poster presentations.
Submitted abstracts for oral and poster presentations must consist of
about 1500-2000 words.
Final submissions should be 4 pages long, must be in English,
and follow the submission guidelines at LREC2008.
The preferred format is MS Word or PDF. The file should be submitted via
email to lrec-emotion@limsi.fr.
-----------------------------
As soon as possible, authors are encouraged to send to
lrec-emotion@limsi.fr a brief email indicating their intention to
participate, including their contact information and the topic they intend
to address in their
submissions.
Proceedings of the workshop will be printed by the LREC Local Organising
Committee.
Submitted papers will be blind reviewed.
--------------------------------------------------
TIME SCHEDULE AND REGISTRATION FEE
--------------------------------------------------
The workshop will consist of a full-day session.
There will be time for collective discussions.
For this full-day Workshop, the registration fee will
be specified on http://www.lrec-conf.org/lrec2008/ -
HLT and NLP within the Arabic world
HLT & NLP within the Arabic world:
Arabic Language and local languages processing:
Status Updates and Prospects
Workshop held in conjunction with LREC 2008
Saturday, May 31st 2008
The submissions are now open. Please follow the procedure to submit your abstract.
Only online submissions will be considered. All abstracts should be submitted in PDF format through the online submission form on START before 15 February 2008.
Submissions must be in English.
Abstracts should be submitted in PDF format.
Abstracts for workshop contributions should not exceed 4 (Four) A4 pages (excluding references). An additional title page should state: the title; author(s); affiliation(s); and contact author's e-mail address, as well as postal address, telephone and fax numbers.
There is no template for the PDF abstract. A template will be made available online for the final papers.
The submissions are not anonymous.
Submitted papers will be judged based on relevance to the workshop aims, as well as the novelty of the idea, technical quality, clarity of presentation, and expected impact on future research within the area of focus.
Any question should be sent to arabic@elda.org.
Registration to LREC'08 will be required for participation, so potential participants are invited to refer to the main conference website for all details not covered in the present call.
Important Dates
Call for papers: 3 January 2008
Deadline for abstract submissions: 15 February 2008
Notification of acceptance: 14 March 2008
Final version of accepted paper: 11 April 2008
Workshop full-day: Saturday 31st May 2008
Full Call for Papers is available at: http://www.lrec-conf.org/lrec2008/IMG/ws/HLTwithin%20the%20Arabic%20world-final.html
Submission page: https://www.softconf.com/LREC2008/ALLLP2008/ -
CALL for JEP/TALN/RECITAL 2008 - Avignon France
CALL FOR WORKSHOPS AND TUTORIALS
For the third time, after Nancy in 2002 and Fes in 2004, the French speech association AFCP and the French NLP association ATALA are jointly organising their main conference in order to bring together the two research communities working in the fields of Speech and Natural Language Processing.
The conference will include oral and poster communications, invited talks, workshops and tutorials. Workshops and tutorials will be held on June 13, 2008.
The official languages are French and English.
IMPORTANT DATES
Deadline for proposals: November 22nd 2007
Approval by the TALN committee: November 30th 2007
Final version for inclusion in the proceedings: April 4th 2008
Workshop and tutorials: June 13th 2008
OBJECTIVES
Workshops can be organized on any specific aspect of NLP. The aim of these sessions is to facilitate an in-depth discussion of this theme.
A workshop has its own president and its own program committee. The president is responsible for organizing a call for papers/participation and for coordinating the program committee. The organizers of the main TALN conference will only take charge of the usual practical details (rooms, coffee breaks, proceedings).
Workshops will be organized in parallel sessions on the last day of the conference (2 to 4 sessions of 1h30).
Tutorials will be held on the same day.
HOW TO SUBMIT
Workshop and Tutorial proposals will be sent by email to taln08@atala.org before November 22nd, 2007.
** Workshop proposals will contain an abstract presenting the proposed theme, the program committee list and the expected length of the session.
** Tutorial proposals will contain an abstract presenting the proposed theme, a list of all the speakers and the expected length of the session (1 or 2 sessions of 1h30).
The TALN program committee will make a selection of the proposals and announce it on November 30th, 2007.
FORMAT
Talks will be given in French or in English (for non-native French speakers). Papers to be published in the proceedings must conform to the TALN style sheet, which is available on the conference web site. Workshop papers should not be longer than 10 pages in Times 12 (references included).
Contact: taln08@atala.org
-
6th Intl Conference on Content-based Multimedia Indexing CBMI '08
Sixth International Conference on Content-Based Multimedia Indexing (CBMI'08)
http://cbmi08.qmul.net/
18-20th June, 2008, London, UK
CBMI is the main international forum for the presentation and discussion of the latest
technological advances, industrial needs and product developments in multimedia indexing,
search, retrieval, navigation and browsing. Following the five successful previous events
(Toulouse 1999, Brescia 2001, Rennes 2003, Riga 2005, and Bordeaux 2007), CBMI'08
will be hosted by Queen Mary, University of London in the vibrant city of London.
The focus of CBMI'08 is the integration of what could be regarded as unrelated disciplines
including image processing, information retrieval, human computer interaction and
semantic web technology with industrial trends and commercial product development.
The technical program of CBMI'08 will include presentation of invited plenary talks,
special sessions as well as regular sessions with contributed research papers.
Topics of interest include, but are not limited to:
* Content-based browsing, indexing and retrieval of images, video and audio
* Matching and similarity search
* Multi-modal and cross-modal indexing
* Content-based search
* Multimedia data mining
* Summarisation and browsing of multimedia content
* Semantic web technology
* Semantic inference
* Semantic mapping and ontologies
* Identification and tracking of semantic regions in scenes
* Presentation and visualization tools
* Graphical user interfaces for navigation and browsing
* Personalization and content adaptation
* User modelling, interaction and relevance feed-back
* Metadata generation and coding
* Large scale multimedia database management
* Applications of multimedia information retrieval
* Analysis and social content applications
* Evaluation metrics
Submission
Prospective authors are invited to submit papers using the on-line system at the
conference website http://cbmi08.qmul.net/. Accepted papers will be published in the
Conference Proceedings. Extended and improved versions of CBMI papers will be
reviewed and considered for publication in Special Issues of IET Image Processing
(formerly IEE Proceedings Vision, Image and Signal Processing) and EURASIP journal
on Image and Video Processing.
Important Dates:
Submission of full papers (to be received by): 5th February, 2008
Notification of acceptance: 20th March, 2008
Submission of camera-ready papers: 10th April, 2008
Conference: 18-20th June, 2008
Organisation Committee:
General Chairs: Ebroul Izquierdo, Queen Mary, University of London, UK
Technical Co-Chairs: Jenny Benois-Pineau, University of Bordeaux, France
Arjen P. de Vries, Centrum voor Wiskunde en Informatica, NL
Alberto Del Bimbo, Università degli Studi di Firenze, Italy
Bernard Merialdo, Institut Eurecom, France
EU Commission:
Roberto Cencioni (Head of Unit INFSO E2), European Commission
Luis Rodriguez Rosello (Head of Unit INFSO D2), European Commission
Special Session Chairs:
Stefanos Kollias, National Technical University of Athens, Greece
Gael Richard, GET-Telecom Paris, France
Contacts:
Ebroul Izquierdo ebroul.izquierdo@elec.qmul.ac.uk
Giuseppe Passino giuseppe.passino@elec.qmul.ac.uk
Qianni Zhang qianni.zhang@elec.qmul.ac.uk
Back to Top -
CfP IIS2008 Workshop on Spoken Language Understanding and Dialogue Systems
2nd CALL FOR PAPERS
IIS 2008 Workshop on Spoken Language Understanding and Dialogue Systems
Zakopane, Poland 18 June 2008
http://nlp.ipipan.waw.pl/IIS2008/luna.html
Submission deadline: 31 January 2008
The workshop is organized by members of the IST LUNA project (http://www.ist-luna.eu/) and aims to give an opportunity to share ideas on problems related to communication with computer systems in natural language and to dialogue systems.
SCOPE
The main area of interest of the workshop is human-computer interaction in natural language and includes, among others:
- spontaneous speech recognition,
- preparation of speech corpora,
- transcription problems in spoken corpora,
- parsing problems in spoken texts,
- semantic interpretation of text,
- knowledge representation in relation to dialogue systems,
- dialogue models,
- spoken language understanding.
SUBMISSIONS
The organizers invite long (10 pages) and short (5 pages) papers. The papers will be refereed on the basis of long abstracts (4 pages) by an international committee. The final papers are to be prepared using LaTeX. The conference proceedings in paper and electronic form will be distributed at the conference and will be available on-line afterwards.
IMPORTANT DATES
Submission deadline (abstracts) 31 January 2008
Notification of acceptance: 29 February 2008
Full papers, camera-ready version due: 31 March 2008
Workshop: 18 June 2008
ORGANISERS
Malgorzata Marciniak mm@ipipan.waw.pl
Agnieszka Mykowiecka agn@ipipan.waw.pl
Krzysztof Marasek kmarasek@pjwstk.edu.pl
Back to Top -
4TH TUTORIAL AND RESEARCH WORKSHOP PIT08
***************************************************************
**February 10, 2008: Deadline for Long, Short and Demo Papers**
***************************************************************
Following previous successful workshops between 1999 and 2006, the
4TH TUTORIAL AND RESEARCH WORKSHOP
PERCEPTION AND INTERACTIVE TECHNOLOGIES FOR SPEECH-BASED SYSTEMS
(PIT08)
will be held at the Kloster Irsee in southern Germany from June 16 to
June 18, 2008.
Please follow this link to visit our workshop website
http://it.e-technik.uni-ulm.de/World/Research.DS/irsee-workshops/pit08/introduction.html
Submissions will be short/demo or full papers of 4-10 pages.
Important dates:
**February 10, 2008: Deadline for Long, Short and Demo Papers**
March 15, 2008: Author notification
April 1, 2008: Deadline for final submission of accepted paper
April 18, 2008: Deadline for advance registration
June 7, 2008: Final programme available on the web
The workshop will be technically co-sponsored by the IEEE Signal
Processing Society. It is envisioned that the proceedings will be
published in Springer's LNCS/LNAI series.
We welcome you to the workshop.
Elisabeth André, Laila Dybkjaer, Wolfgang Minker, Heiko Neumann,
Michael Weber, Roberto Pieraccini
PIT'08 Organising Committee
--
Wolfgang Minker
University of Ulm
Department of Information Technology
Albert-Einstein-Allee 43
D-89081 Ulm
Phone: +49 731 502 6254/-6251
Fax: +49 691 330 3925516
http://it.e-technik.uni-ulm.de/World/Research.DS/ -
eNTERFACE 2008 Orsay Paris
eNTERFACE'08, the next international summer workshop on multimodal
interfaces, will take place at LIMSI in Orsay (near Paris), France,
over four weeks, August 4th-29th, 2008.
Please consider proposing projects and participating in the workshop (see
the Call for Project Proposals on the web site).
eNTERFACE'08 is the next in a series of successful workshops initiated by
SIMILAR, the European Network of Excellence (NoE) on Multimodal
interfaces. eNTERFACE'08 will follow the fruitful path opened by
eNTERFACE'05 in Mons, Belgium, continued by eNTERFACE'06 in Dubrovnik,
Croatia, and eNTERFACE'07 in Istanbul, Turkey. SIMILAR came to an end in
2007, and the eNTERFACE workshops (http://www.enterface.org) are now
under the aegis of the OpenInterface Foundation
(http://www.openinterface.org).
eNTERFACE'08 Important Dates
. December 17th, 2007: Reception of the complete Project proposal in
the format provided by the Author's kit
. January 10th, 2008: Notification of project acceptance
. February 1st, 2008: Publication of the Call for Participation
. August 4th -- August 29th, 2008: eNTERFACE 08 Workshop
Christophe d'Alessandro
CNRS-LIMSI, BP 133 - F91403 Orsay France
tel +33 (0) 1 69 85 81 13 / Fax -- 80 88
-
Calls for EUSIPCO 2008-Lausanne Switzerland
CALL FOR PAPERS
CALL FOR SPECIAL SESSIONS AND CALL FOR TUTORIALS
EUSIPCO-2008 - 16th European Signal Processing Conference
August 25-29, 2008, Lausanne, Switzerland - http://www.eusipco2008.org
The 2008 European Signal Processing Conference (EUSIPCO-2008) is the sixteenth in a series of conferences promoted by EURASIP, the European Association for Signal, Speech, and Image Processing (www.eurasip.org). Formerly biennial, this conference is now a yearly event. This edition will take place in Lausanne, Switzerland, organized by the Swiss Federal Institute of Technology, Lausanne (EPFL).
EUSIPCO-2008 will focus on the key aspects of signal processing theory and applications. Exploration of new avenues and methodologies of signal processing will also be encouraged. Accepted papers will be published in the Proceedings of EUSIPCO-2008. Acceptance will be based on quality, relevance and originality. Proposals for special sessions and tutorials are also invited.
For the first time, access to the tutorials will be free to all registered participants!
IMPORTANT DATES:
Proposals for Special Sessions: December 7, 2007
Proposals for Tutorials: February 8, 2008
Electronic submission of Full papers (5 pages A4): February 8, 2008
Notification of Acceptance: April 30, 2008
Conference: August 25-29, 2008
More details on how to submit papers and proposals for special sessions and tutorials can be found on the conference web site http://www.eusipco2008.org
Prof. Jean-Philippe Thiran
EPFL - Signal Processing Institute
EUSIPCO-2008 General Chair
Back to Top -
TSD 2008 - 11th Int. Conf. on Text, Speech and Dialogue
TSD 2008 - PRELIMINARY ANNOUNCEMENT
Eleventh International Conference on TEXT, SPEECH and DIALOGUE (TSD 2008)
Brno, Czech Republic, 8-12 September 2008
http://www.tsdconference.org/
The conference is organized by the Faculty of Informatics, Masaryk
University, Brno, and the Faculty of Applied Sciences, University of
West Bohemia, Pilsen. The conference is supported by the International
Speech Communication Association.
Venue: Brno, Czech Republic
TSD SERIES
The TSD series has evolved as a prime forum for interaction between
researchers in both spoken and written language processing from the
former Eastern Bloc countries and their Western colleagues. Proceedings
of TSD form a book published by Springer-Verlag in their Lecture Notes
in Artificial Intelligence (LNAI) series.
TOPICS
Topics of the conference will include (but are not limited to):
text corpora and tagging
transcription problems in spoken corpora
sense disambiguation
links between text and speech oriented systems
parsing issues
parsing problems in spoken texts
multi-lingual issues
multi-lingual dialogue systems
information retrieval and information extraction
text/topic summarization
machine translation
semantic networks and ontologies
semantic web
speech modeling
speech segmentation
speech recognition
search in speech for IR and IE
text-to-speech synthesis
dialogue systems
development of dialogue strategies
prosody in dialogues
emotions and personality modeling
user modeling
knowledge representation in relation to dialogue systems
assistive technologies based on speech and dialogue
applied systems and software
facial animation
visual speech synthesis
Papers on processing of languages other than English are strongly
encouraged.
PROGRAM COMMITTEE
Frederick Jelinek, USA (general chair)
Hynek Hermansky, Switzerland (executive chair)
FORMAT OF THE CONFERENCE
The conference program will include presentation of invited papers,
oral presentations, and poster/demonstration sessions. Papers will
be presented in plenary or topic-oriented sessions.
Social events including a trip in the vicinity of Brno will allow
for additional informal interactions.
CONFERENCE PROGRAM
The conference program will include oral presentations and
poster/demonstration sessions with sufficient time for discussions of
the issues raised.
IMPORTANT DATES
March 15 2008 ............ Submission of abstract
March 22 2008 ............ Submission of full papers
May 15 2008 .............. Notification of acceptance
May 31 2008 .............. Final papers (camera ready) and registration
July 23 2008 ............. Submission of demonstration abstracts
July 30 2008 ............. Notification of acceptance for
demonstrations sent to the authors
September 8-12 2008 ...... Conference date
The contributions to the conference will be published in proceedings
that will be made available to participants at the time of the
conference.
OFFICIAL LANGUAGE
The official language of the conference will be English.
ADDRESS
All correspondence regarding the conference should be
addressed to
Dana Hlavackova, TSD 2008
Faculty of Informatics, Masaryk University
Botanicka 68a, 602 00 Brno, Czech Republic
phone: +420-5-49 49 33 29
fax: +420-5-49 49 18 20
email: tsd2008@tsdconference.org
LOCATION
Brno is the second largest city in the Czech Republic, with a
population of almost 400,000, and is the country's judiciary and
trade-fair center. Brno is the capital of Moravia, which is in the
south-east part of the Czech Republic. It has been a Royal City since
1347, and with its six universities it forms a cultural center of the
region.
Brno can be reached easily by direct flights from London, Moscow, Barcelona
and Prague and by trains or buses from Prague (200 km) or Vienna (130 km).
Back to Top -
CfP 50th International Symposium ELMAR-2008
50th International Symposium ELMAR-2008
10-13 September 2008, Zadar, Croatia
Submission deadline: March 03, 2008
CALL FOR PAPERS AND SPECIAL SESSIONS
TECHNICAL CO-SPONSORS
IEEE Region 8
EURASIP - European Assoc. Signal, Speech and Image Processing
IEEE Croatia Section
IEEE Croatia Section Chapter of the Signal Processing Society
IEEE Croatia Section Joint Chapter of the AP/MTT Societies
TOPICS
--> Image and Video Processing
--> Multimedia Communications
--> Speech and Audio Processing
--> Wireless Communications
--> Telecommunications
--> Antennas and Propagation
--> e-Learning and m-Learning
--> Navigation Systems
--> Ship Electronic Systems
--> Power Electronics and Automation
--> Naval Architecture
--> Sea Ecology
--> Special Session Proposals - A special session consists
of 5-6 papers which should present a unifying theme
from a diversity of viewpoints; the deadline for proposals
is February 04, 2008.
KEYNOTE TALKS
* Professor Sanjit K. Mitra, University of Southern California, Los Angeles, California, USA:
Image Processing using Quadratic Volterra Filters
* Univ.Prof.Dr.techn. Markus Rupp, Vienna University
of Technology, AUSTRIA:
Testbeds and Rapid Prototyping in Wireless Systems
* Professor Paul Cross, University College London, UK:
GNSS Data Modeling: The Key to Increasing Safety and
Legally Critical Applications of GNSS
* Dr.-Ing. Malte Kob, RWTH Aachen University, GERMANY:
The Role of Resonators in the Generation of Voice
Signals
SPECIAL SESSIONS
SS1: "VISNET II - Networked Audiovisual Systems"
Organizer: Dr. Marta Mrak, I-lab, Centre for Communication
Systems Research, University of Surrey, UNITED KINGDOM
Contact: http://www.ee.surrey.ac.uk/CCSR/profiles?s_id=3937
SS2: "Computer Vision in Art"
Organizer: Asst.Prof. Peter Peer and Dr. Borut Batagelj,
University of Ljubljana, Faculty of Computer and Information
Science, Computer Vision Laboratory, SLOVENIA
Contact: http://www.lrv.fri.uni-lj.si/~peterp/ or
http://www.fri.uni-lj.si/en/personnel/298/oseba.html
SUBMISSION
Papers accepted by two reviewers will be published in the
symposium proceedings, available at the symposium and
abstracted/indexed in the INSPEC and IEEE Xplore databases.
More info is available here: http://www.elmar-zadar.org/
IMPORTANT: Web-based (online) submission of papers in
PDF format is required for all authors. No e-mail, fax, or
postal submissions will be accepted. Authors should prepare
their papers according to the ELMAR-2008 paper sample, convert
them to PDF following IEEE requirements, and submit them using
the web-based submission system by March 03, 2008.
SCHEDULE OF IMPORTANT DATES
Deadline for submission of full papers: March 03, 2008
Notification of acceptance mailed out by: April 21, 2008
Submission of (final) camera-ready papers: May 05, 2008
Preliminary program available online by: May 12, 2008
Registration forms and payment deadline: May 19, 2008
Accommodation deadline: June 02, 2008
GENERAL CO-CHAIRS
Ive Mustac, Tankerska plovidba, Zadar, Croatia
Branka Zovko-Cihlar, University of Zagreb, Croatia
PROGRAM CHAIR
Mislav Grgic, University of Zagreb, Croatia
CONTACT INFORMATION
Assoc.Prof. Mislav Grgic, Ph.D.
FER, Unska 3/XII
HR-10000 Zagreb
CROATIA
Telephone: + 385 1 6129 851
Fax: + 385 1 6129 568
E-mail: elmar2008 (_) fer.hr
For further information please visit:
http://www.elmar-zadar.org/
Back to Top -
2nd IEEE Intl Conference on Semantic Computing
IEEE ICSC2008
Second IEEE International Conference on Semantic Computing
Call for Papers
Deadline March 1st, 2008
August 4th-7th, 2008
Santa Clara, CA, USA
http://icsc.eecs.uci.edu/
The field of Semantic Computing (SC) brings together those disciplines concerned with connecting the (often vaguely formulated) intentions of humans with computational content. This connection can go both ways: retrieving, using and manipulating existing content according to the user's goals ("do what the user means"); and creating, rearranging, and managing content that matches the author's intentions ("do what the author means").
The content addressed in SC includes, but is not limited to, structured and semi-structured data, multimedia data, text, programs, services and even network behaviour. This connection between content and the user is made via (1) Semantic Analysis, which analyzes content with the goal of converting it to meaning (semantics); (2) Semantic Integration, which integrates content and semantics from multiple sources; (3) Semantic Applications, which utilize content and semantics to solve problems; and (4) Semantic Interfaces, which attempt to interpret users' intentions expressed in natural language or other communicative forms.
Example areas of SC include (but, again, are not limited to) the following:
ANALYSIS AND UNDERSTANDING OF CONTENT
- Natural-language processing
- Image and video analysis
- Audio and speech analysis
- Analysis of structured and semi-structured data
- Analysis of behavior of software, services, and networks
INTEGRATION OF MULTIPLE SEMANTIC REPRESENTATIONS
- Database schema integration
- Ontology integration
- Interoperability and Service Integration
SEMANTIC INTERFACES
- Natural-Language Interface
- Multimodal Interfaces
APPLICATIONS
- Semantic Web and other search technologies
- Question answering
- Semantic Web services
- Multimedia databases
- Engineering of software, services, and networks based on natural-language specifications
- Context-aware networks of sensors, devices, and/or applications
The Second IEEE International Conference on Semantic Computing (ICSC2008) builds on the success of ICSC2007 as an international interdisciplinary forum for researchers and practitioners to present research that advances the state of the art and practice of Semantic Computing, as well as to identify emerging research topics and define the future of Semantic Computing. The conference particularly welcomes interdisciplinary research that facilitates the ultimate success of Semantic Computing.
The event is located in Santa Clara, California, the heart of Silicon Valley. The technical program of ICSC2008 includes tutorials, workshops, invited talks, paper presentations, panel discussions, demo sessions, and an industry track. Submissions of high-quality papers describing mature results or on-going work are invited.
In addition to Technical Papers, the conference will feature
* Tutorials * Workshops * Demo Sessions * Special Sessions * Panels * Industry Track
SUBMISSIONS
Authors are invited to submit an 8-page technical paper manuscript in double-column IEEE format following the guidelines available on the ICSC2008 web page under "submissions".
The conference proceedings will be published by the IEEE Computer Society Press. Distinguished quality papers presented at the conference will be selected for publication in internationally renowned journals.
-
EUSIPCO 2008 Lausanne Switzerland
2nd CALL FOR PAPERS
2nd CALL FOR TUTORIALS
EUSIPCO-2008 - 16th European Signal Processing Conference - August 25-29, 2008, Lausanne, Switzerland
DEADLINE FOR SUBMISSION: February 8, 2008
The 2008 European Signal Processing Conference (EUSIPCO-2008) is the sixteenth in a series of conferences promoted by EURASIP, the European Association for Signal Processing (www.eurasip.org). This edition will take place in Lausanne, Switzerland, organized by the Swiss Federal Institute of Technology, Lausanne (EPFL).
EUSIPCO-2008 will focus on the key aspects of signal processing theory and applications. Exploration of new avenues and methodologies of signal processing will also be encouraged. Accepted papers will be published in the Proceedings of EUSIPCO-2008. Acceptance will be based on quality, relevance and originality. Proposals for tutorials are also invited.
*** This year will feature some exciting events and novelties: ***
- We are preparing a very attractive tutorial program, and for the first time access to the tutorials will be free for all registered participants! Several renowned speakers have already been confirmed, and we hereby also call for new tutorial proposals.
- We will also have top plenary speakers, including Stéphane Mallat (Polytechnique, France), Jeffrey A. Fessler (The University of Michigan, Ann Arbor, Michigan, USA), Phil Woodland (Cambridge, UK) and Bernhard Schölkopf (Max Planck Institute, Tübingen, Germany).
- The Conference will include 12 very interesting special sessions on some of the hottest topics in signal processing. See http://www.eusipco2008.org/11.html for the complete list of those special sessions.
- The list of 22 area chairs has been confirmed: see details at http://www.eusipco2008.org/7.html
- The social program will also be very exciting, with a welcome reception at the fantastic Olympic Museum in Lausanne, facing Lake Geneva and the Alps (http://www.olympic.org/uk/passion/museum/index_uk.asp), and with the conference banquet starting with a cruise on Lake Geneva aboard a historic boat, followed by a dinner at the Casino of Montreux (http://www.casinodemontreux.ch/).
I therefore invite you to submit your work to EUSIPCO-2008 by the deadline and to attend the conference in August in Lausanne.
IMPORTANT DATES:
Submission deadline of full papers (5 pages A4): February 8, 2008
Submission deadline of proposals for tutorials: February 8, 2008
Notification of Acceptance: April 30, 2008
Conference: August 25-29, 2008
More details on how to submit papers and proposals for tutorials can be found on the conference web site http://www.eusipco2008.org/
Back to Top -
5th Joint Workshop on Machine Learning and Multimodal Interaction MLMI 2008
MLMI 2008 first call for papers:
5th Joint Workshop on Machine Learning
and Multimodal Interaction (MLMI 2008)
8-10 September 2008
Utrecht, The Netherlands
http://www.mlmi.info/
The fifth MLMI workshop will be held in Utrecht, The Netherlands,
following successful workshops in Martigny (2004), Edinburgh (2005),
Washington (2006) and Brno (2007). MLMI brings together researchers
from the different communities working on the common theme of advanced
machine learning algorithms applied to multimodal human-human and
human-computer interaction. The motivation for creating this joint
multi-disciplinary workshop arose from the concrete needs of several large
collaborative projects in Europe and the United States.
* Important dates
Submission of papers/posters: Monday, 31 March 2008
Acceptance notifications: Monday, 12 May 2008
Camera-ready versions of papers: Monday, 16 June 2008
Workshop: 8-10 September 2008
* Workshop topics
MLMI 2008 will feature talks (including a number of invited speakers),
posters and demonstrations. Prospective authors are invited to submit
proposals in the following areas of interest, related to machine
learning and multimodal interaction:
- human-human communication modeling
- audio-visual perception of humans
- human-computer interaction modeling
- speech processing
- image and video processing
- multimodal processing, fusion and fission
- multimodal discourse and dialogue modeling
- multimodal indexing, structuring and summarization
- annotation and browsing of multimodal data
- machine learning algorithms and their applications to the topics above
* Satellite events
MLMI'08 will feature special sessions and satellite events, as in
previous editions of MLMI (see http://www.mlmi.info/ for examples). To
propose a special session or satellite event, please contact the special
session chair.
MLMI 2008 is broadly co-located with a number of events in related
domains: Mobile HCI 2008, 2-5 September, in Amsterdam; FG 2008, 17-19
September, in Amsterdam; and ECML 2008, 15-19 September, in Antwerp.
* Guidelines for submission
The workshop proceedings will be published in Springer's Lecture Notes
in Computer Science series (pending approval). The first four editions
of MLMI were published as LNCS 3361, 3869, 4299, and 4892. However,
unlike previous MLMIs, the proceedings of MLMI 2008 will be printed
before the workshop and will already be available onsite to MLMI 2008
participants.
Submissions are invited either as long papers (12 pages) or as short
papers (6 pages), and may include a demonstration proposal. Upon
acceptance of a paper, the Program Committee will also assign to it a
presentation format, oral or poster, taking into account: (a) the most
suitable format given the content of the paper; (b) the length of the
paper (long papers are more likely to be presented orally); (c) the
preferences expressed by the authors.
Please submit PDF files using the submission website at
http://groups.inf.ed.ac.uk/mlmi08/, following the Springer LNCS format
for proceedings and other multiauthor volumes
(http://www.springer.com/east/home/computer/lncs?SGWID=5-164-7-72376-0).
Camera-ready versions of accepted papers, both long and short, are
required to follow these guidelines and to take into account the
reviewers' comments. Authors of accepted short papers are encouraged to
turn them into long papers for the proceedings.
* Venue
Utrecht is the fourth largest city in the Netherlands, with historic
roots going back to the Roman Empire. Utrecht hosts one of the larger
universities in the country, and with its historic centre and its many
students it provides an excellent atmosphere for social activities
inside and outside the workshop community. Utrecht is centrally located
in the Netherlands, and has direct train connections to the major cities
and Schiphol International Airport.
TNO, the organizer of MLMI 2008, is a not-for-profit research organization.
TNO's speech technology research is carried out at TNO Human Factors in
Soesterberg, with research areas in ASR, speaker and language
recognition, and word and event spotting.
The workshop will be held in "Ottone", a beautiful old building near the
"Singel", the canal which encircles the city center. The conference
hall combines a spacious setting with a warm and friendly ambiance.
* Organizing Committee
David van Leeuwen, TNO (Organization Chair)
Anton Nijholt, University of Twente (Special Sessions Chair)
Andrei Popescu-Belis, IDIAP Research Institute (Programme Co-chair)
Rainer Stiefelhagen, University of Karlsruhe (Programme Co-chair) -
2008 International Workshop on Multimedia Signal Processing
2008 International Workshop on Multimedia Signal Processing
October 8-10, 2008
Shangri-La Hotel, Cairns, Queensland, Australia
http://www.mmsp08.org/
MMSP-08 Call for Papers
MMSP-08 is the tenth international workshop on multimedia signal
processing. The workshop is organized by the Multimedia Signal Processing Technical
Committee of the IEEE Signal Processing Society. A new theme of this workshop is
Bio-Inspired Multimedia Signal Processing in Life Science Research.
The main goal of MMSP-2008 is to further scientific research within the broad field of
multimedia signal processing and its interaction with other emerging areas such
as the life sciences. The workshop will focus on major trends and challenges in this area,
including brainstorming a roadmap for the success of future research and applications.
The MMSP-08 workshop offers several attractive features:
* A Student Paper Contest with awards sponsored by Canon. To enter the contest, a
paper submission must have a student as the first author.
* A Best Paper award for the oral presentation sessions, sponsored by Microsoft.
* A Best Poster award, sponsored by National ICT Australia (NICTA).
* A new session on Bio-Inspired Multimedia Signal Processing.
SCOPE
Papers are solicited in, but not limited to, the following general areas:
*Bio-inspired multimedia signal processing
*Multimedia processing techniques inspired by the study of signals/images derived from
medical, biomedical and other life science disciplines, with applications to multimedia
signal processing
*Fusion mechanisms of multimodal signals in the human information processing system
and applications to multimodal multimedia data fusion/integration
*Comparison between bio-inspired methods and conventional methods.
*Hybrid multimedia processing technology and systems incorporating bio-inspired and
conventional methods.
*Joint audio/visual processing, pattern recognition, sensor fusion, medical imaging,
2-D and 3-D graphics/geometry coding and animation, pre/post-processing of digital video,
joint source/channel coding, data streaming, speech/audio, image/video coding and
processing
*Multimedia databases (content analysis, representation, indexing, recognition and
retrieval)
*Human-machine interfaces and interaction using multiple modalities
*Multimedia security (data hiding, authentication, and access control)
*Multimedia networking (priority-based QoS control and scheduling, traffic engineering,
soft IP multicast support, home networking technologies, position aware computing,
wireless communications).
*Multimedia Systems Design, Implementation and Application (design, distributed
multimedia systems, real time and non-real-time systems; implementation; multimedia
hardware and software)
*Standards
SCHEDULE
* Special Sessions (contact the respective chair): March 8, 2008
* Papers (full paper, 4-6 pages, to be received by): April 18, 2008
* Notification of acceptance by: June 18, 2008
* Camera-ready paper submission by: July 18, 2008
General Co-Chairs
Prof. David Feng, University of Sydney, Australia, and Hong Kong
Polytechnic University feng@it.usyd.edu.au
Prof. Thomas Sikora, Technical University Berlin, Germany, sikora@nue.tu-berlin.de
Prof. W.C. Siu, Hong Kong Polytechnic University enwcsiu@polyu.edu.hk
Technical Program Co-Chairs
Dr. Jian Zhang, National ICT Australia, jian.zhang@nicta.com.au
Prof. Ling Guan, Ryerson University, Canada, lguan@ee.ryerson.ca
Prof. Jean-Luc Dugelay, Institut EURECOM, Sophia Antipolis, France, Jean-Luc.Dugelay@eurecom.fr
Special Session Co-Chairs:
Prof. Wenjun Zeng University of Missouri, USA zengw@missouri.edu
Prof. Pascal Frossard EPFL, Switzerland pascal.frossard@epfl.ch
-
International Seminar on Speech Production ISSP 2008
The International Seminar on Speech Production (ISSP-2008)
will be held in Haguenau, France (25 km north of Strasbourg) in December 2008, from Monday the 8th to Friday the 12th.
Please take note of the following important dates:
(1) Submission of a 2-page abstract (Times 12): March 28th, 2008
(2) Notification of acceptance: April 21st, 2008
(3) Early registration: May 5th, 2008
(4) Full paper submission: September 19th, 2008
(5) Late registration: September 22nd, 2008
(6) Conference dates: Monday, December 8 to Friday, December 12.
The conference website is under construction.
We are looking forward to seeing you in Alsace.
The Organizers
Rudolph Sock (IPS), Susanne Fuchs (ZAS Phonetik, Berlin) & Yves Laprie
(INRIA-LORIA, Nancy)
Back to Top