Contents

1 . Message from the board

 Dear members,

This year we introduce an innovation for papers at INTERSPEECH 2008: short and long papers.

Each Interspeech is unique in some fashion, and Interspeech 2008 is no
exception. One of the experiments being tried this year is to allow for
two types of paper submissions - single page ("short paper") and
four-page ("long paper"). Long papers have always been the standard at
Interspeech conferences and remain the preferred method of publication.
However, in the speech science community, the publication of a four-page
paper can in some cases interfere with later journal publication,
causing problems for authors who are evaluated by the number of their
journal publications. 

The speech processing area traditionally has not suffered from such
constraints. In addition, researchers in the speech processing area have
an imperative to rapidly reimplement ideas presented in other people's
papers in their own local problem/context and test out the proposed
methods. They cannot afford to wait for the one-to-two-year review/publish
cycle. A short paper really does not offer sufficient information for
such exercises.

Yet another issue is that Interspeech is hardly an intimate conference.
One cannot reasonably expect to hear most of the papers. Short papers
can be seen by all attendees in single-track conferences, but not in
multi-track conferences, especially ones on the scale of Interspeech.

Therefore, the combined community has a dilemma. Is it "short papers",
"long papers", or both? Should we place different requirements on
different communities? Should we indicate explicitly in the title that a
paper is "long" or "short" so that readers encountering "short" papers
in citations know in advance there will be a dearth of detail in the
published record? There are no easy answers to these questions. 

Since speech is basically an empirical area, the Interspeech 2008
committee has decided to do what most of us do in our day jobs -
experiment! This year, Interspeech 2008 will allow two types of
submissions - both short and long papers. Although the terminology
varies from conference to conference, here short papers are more like
structured abstracts.
ISCA will survey the reactions of the community and try to recommend a
course of action for future conferences. Our goal is to try to serve the
needs of a diverse community and recognize that one glove does not fit
all hands. Please give us your comments!

 

The board

Back to Top

2 . Editorial

 Dear Members, 

Thanks to Helen Meng's student Laurence Liu, I now have a wonderful tool for editing ISCApad. I thank Laurence, who progressively took my requirements into account. From the table of contents, you have instant access to all subsections (topics, conferences, job openings) by clicking on the links. You can also return to the table of contents by following the link (Back to top) at the bottom of each subsection.

Unfortunately I had some problems with my PC this month, which is why this issue is late.

One important piece of news is the cancellation of the ITRW on Evidence-based Voice and Speech Rehabilitation in Head & Neck Oncology that was to be held in Amsterdam.

There is also an important message from the board at the top of this issue: please send your reactions to Tanja Schultz  (conferences@isca-speech.org).

Chris Wellekens

Institut Eurecom

Sophia Antipolis France

Back to Top

3 . ISCA News

3-1 . ISCA Scientific Achievement Medalist 2008

ISCA Scientific Achievement Medal for 2008

It is with great pleasure that I announce the ISCA Medalist for 2008 - Hiroya Fujisaki. Prof. Fujisaki has contributed to the speech research community in so many aspects, in speech analysis, synthesis and prosody, that it will be a very hard task for me to summarize his long list of achievements. He is also the founder of the ICSLP series of conferences which, now fully integrated as one of ISCA's yearly conferences, will have its 10th anniversary this year.


Back to Top

3-2 . ISCA Fellows

ISCA Fellows, Call for Nominations

In 2007, ISCA will begin its Fellow Program to recognize and honor outstanding members who have made significant contributions to the field of speech science and technology.  To qualify for this distinction, a candidate must have been an ISCA member for five years or more with a minimum of ten years' experience in the field.  Nominations may be made by any ISCA member (see Nomination Form).  The nomination must be accompanied by references from three current ISCA Fellows (or, during the first three years of the program, by ISCA Board members). A Fellow may be recognized for his/her outstanding technical contributions and/or continued significant service to ISCA.  The candidate's technical contribution should be summarized in the nomination in terms of publications, patents, projects, prototypes and their impact in the community.

Fellows will be selected by a Fellow Selection Committee of nine members who each serve three-year terms.  In the first year of the program, the Committee will be formed by ISCA Board members.  Over the next three years, one third of the members of the Selection Committee will be replaced by ISCA Fellows until the Committee consists entirely of ISCA Fellows.  Members of the Committee will be chosen by the ISCA Board.
 
The committee will hold a virtual meeting during June to evaluate the current year's nominations.
 
Nominations should be submitted on the form provided at http://www.isca-speech.org/fellows.html before May 23rd, 2008.

Back to Top

3-3 . Google Scholar and the ISCA Archive

    The indexing of the ISCA Archive (http://www.isca-speech.org/archive/) by the Google Scholar search engine (http://scholar.google.com/) is now thorough enough to be quite useful, so this seems like a good time to give an overview of the service.  Google Scholar is a research literature search engine that provides full-text search for ISCA papers whose full text cannot be searched with other search engines. Google Scholar's citation tracking shows what papers have cited a particular paper, which can be very useful for finding follow-up work, related work and corrections.  More details about these and other features are given below. 
     
    The titles, author lists, and abstracts of ISCA Archive papers are all on the public web, so they can be searched by a general-purpose search engine such as Google.  However, the full texts of most ISCA papers are password protected and thus cannot be searched with a general-purpose search engine.  Google Scholar, through an arrangement with ISCA, has access to the full text of ISCA papers. Google Scholar has similar arrangements with many other publishers.  (On the other hand, general-purpose search engines index all sorts of web pages and other documents accessible through the public web, many of which will not be in the Google Scholar index.  So it's often useful to perform the same search using both Google Scholar and a general-purpose search engine.) 
     
    Google Scholar automatically extracts citations from the full text of papers. It uses this information to provide a "Cited by" list for each paper in the Google Scholar index.  This is a list of papers that have cited that paper. Google Scholar also provides an automatically generated "Related Articles" list for each paper.  The "Cited by" and "Related Articles" lists are powerful tools for discovering relevant papers.  Furthermore, the length of a paper's "Cited by" list can be used as a convenient (although imperfect) measure of the paper's impact.  Discussions about the subtleties of using Google Scholar to measure impact can be found at http://www.harzing.com/resources.htm#/pop_gs.htm and http://blogs.nature.com/nautilus/2007/07/google_scholar_as_a_measure_of.html
     
    It's possible to restrict Google Scholar searches to papers published by ISCA by using Google Scholar's Advanced Search feature and entering "ISCA" in the "Return articles published in" field.  If "ISCA" is entered in that field, and nothing is entered in the main search field, then the search results will show what ISCA papers are the most highly cited. 
     
    It should be noted that there are many papers on ISCA-related topics which are not in the Google Scholar index.  For example, it seems many ICPhS papers are missing.  And old papers which have been scanned in from paper copies will either not have their full contents indexed, or will be indexed using imperfect OCR technology. Furthermore, as of November 2007 the indexing of the ISCA Archive by Google Scholar is still not 100% complete.  There are a few different areas which are not perfectly indexed, but the biggest planned improvement is to start using OCR for the ISCA papers which have been scanned in from paper copies. 
     
    There may be a time lag between when a new event is added to the ISCA Archive in the future and when it appears in the Google Scholar index. This time lag may be longer than the usual lag of general-purpose search engines such as Google, because ISCA must create Google Scholar catalog data for every new event and because the Google Scholar index seems to update considerably more slowly than the Google index. 
     
    Acknowledgements: ISCA's arrangement with Google Scholar is a project of students Rahul Chitturi, Tiago Falk, David Gelbart, Agustin Gravano, and Francis Tyers, ISCA webmaster Matt Bridger, and ISCA Archive coordinator Wolfgang Hess.  Our thanks to Google's Christian DiCarlo and Darcy Dapra, and the rest of the Google Scholar team. 

     

Back to Top

4 . SIGs' activities

  • A list of Speech Interest Groups can be found on our website.

     

Back to Top

4-1 . SLaTE

The International Speech Communication Association Special Interest Group (ISCA SIG) on

Speech and Language Technology in Education

 

The special interest group was created in mid-September 2006 at the Interspeech 2006 conference in Pittsburgh. Information about the SIG can be found on its official website.

 

The next SLaTE ITRW will be in 2009 in England; here is early information about this exciting meeting!

 

OUR STATEMENT OF PURPOSE

The purpose of the International Speech Communication Association (ISCA) Special Interest Group on Speech and Language Technology in Education (SLaTE) shall be to promote interest in the use of speech and natural language processing for education; to provide members of ISCA with a special interest in speech and language technology in education with a means of exchanging news of recent research developments and other matters of interest in Speech and Language Technology in Education; to sponsor meetings and workshops on that subject that appear to be timely and worthwhile, operating within the framework of ISCA's by-laws for SIGs; and to provide and make available resources relevant to speech and language technology in education, including text and speech corpora, analysis tools, analysis and generation software, research papers and generated data.

 

Activities

  SLaTE Workshops

The SLaTE ITRW workshop was held October 1-3, 2007, in Farmington, Pennsylvania.

You can obtain proceedings of this ITRW from ISCA.

 

OTHER Workshops AND RELATED MEETINGS

 

We hark back to the first meeting of researchers interested in this area, organized by our colleagues at KTH and held in Marholmen, Sweden, in 1998: http://www.speech.kth.se/still/.

 

 

Another meeting of interest in our field was held in Venice in 2004. It was organized by Rodolfo Delmonte.  http://www.isca-speech.org/archive/icall2004/index.html

 

A very interesting session was held at Interspeech 2006 by Patti Price and Abeer Alwan. The papers were reviewed by four panelists and you can see the panelists’ slides here.

Back to Top

4-2 . "Young Researchers" invitation to the Journées d'Études sur la Parole (JEP) 2008 (translated from French)

As part of its policy of international openness, and continuing the initiative launched at the JEP 2004 in Morocco and the JEP 2006 in Dinard,
 
the AFCP invites students and young researchers of the Spoken Communication community affiliated with laboratories located outside France
to take part in the JEP-TALN 2008 conference (Avignon, 9-13 June 2008, http://www.lia.univ-avignon.fr/jep-taln08/).
 
This support will cover the travel, accommodation and registration costs of a few (4-5) young researchers coming from abroad.
 
How to apply:
Candidates should send their application file (see attachment) to ferrane@irit.fr AND Irina.Illina@loria.fr *BEFORE 26 APRIL 2008*, including:
•    a brief CV presenting the candidate's scientific activities and university education,
•    a paragraph explaining the candidate's motivation and highlighting the expected benefits of participating in JEP-TALN 2008,
•    an estimate of travel costs (see below).
For students, the application must also include a letter of recommendation from their research supervisor.
 
Notes and schedule:
- Acceptance decisions will be announced by *5 May 2008*
- Submission or acceptance of a scientific contribution to the JEP is not a selection criterion for this invitation
- Priority will be given to candidates coming from countries under-represented at the JEP
- For your travel cost estimate: the nearest airports are Avignon Caumont Airport (www.avignon.aeroport.fr/), Marseille-Provence Airport (www.marseille.aeroport.fr) and the Paris airports (www.aeroportsdeparis.fr); the nearest train stations are Avignon TGV and Avignon Centre (see www.voyages-sncf.com for train fares).
 
Back to Top

5 . Future ISCA Conferences and workshops (ITRW)

5-1 . INTERSPEECH 2008

INTERSPEECH 2008 incorporating SST 08 

September 22-26, 2008

Brisbane Convention & Exhibition Centre

Brisbane, Australia

http://www.interspeech2008.org/

 

Interspeech is the world's largest and most comprehensive conference on Speech
Science and Speech Technology. We invite original papers in any related area,
including (but not limited to):

             Human Speech Production, Perception and Communication;
             Speech and Language Technology;
             Spoken Language Systems; and
             Applications, Resources, Standardisation and Evaluation.

  • In addition, a number of Special Sessions on selected topics have been organised and we invite you to submit for these also (see website for a complete list).

    Interspeech 2008 has two types of submission formats: Full 4-page Papers and
    Short 1-page Papers. Prospective authors are invited to submit papers in either
    format via the conference website by 7 April 2008.

     

    Important Dates 

    Paper Submission: Monday, 7 April 2008, 3pm GMT 

    Notification of Acceptance/Rejection: Monday, 16 June 2008, 3pm GMT 

    Early Registration Deadline: Monday, 7 July 2008, 3pm GMT 

    Tutorial Day: Monday, 22 September 2008 

    Main conference: 23-26 September 2008 

     For more information please visit the website http://www.interspeech2008.org

     

    Chairman: Denis Burnham, MARCS, University of Western Sydney.

Back to Top

5-2 . INTERSPEECH 2009

Brighton, UK,
Conference Website
Chairman: Prof. Roger Moore, University of Sheffield.

Back to Top

5-3 . INTERSPEECH 2010

Chiba, Japan
Conference Website
ISCA is pleased to announce that INTERSPEECH 2010 will take place in Makuhari-Messe, Chiba, Japan, September 26-30, 2010. The event will be chaired by Keikichi Hirose (Univ. Tokyo), and will have as a theme "Towards Spoken Language Processing for All - Regardless of Age, Health Conditions, Native Languages, Environment, etc."

 

Back to Top

5-4 . ISCA Workshop on Evidence-based Voice and Speech Rehabilitation in Head & Neck Oncology

This workshop has been cancelled.

 

 

ISCA Workshop

Evidence-based Voice and Speech Rehabilitation in Head & Neck Oncology

Amsterdam, May 15-16, 2008

Back to Top

5-5 . ITRW on Speech analysis and processing for knowledge discovery

June 4 - 6, 2008
Aalborg, Denmark
Workshop website   http://www.es.aau.dk/ITRW/ 

 

Humans are very efficient at capturing information and messages in speech, and they often perform this task effortlessly even when the signal is degraded by noise, reverberation and channel effects. In contrast, when a speech signal is processed by conventional spectral analysis methods, significant cues and useful information in speech are usually not taken proper advantage of, resulting in sub-optimal performance in many speech systems. There exists, however, a vast literature on speech production and perception mechanisms and their impacts on acoustic phonetics that could be more effectively utilized in modern speech systems. A re-examination of these knowledge sources is needed. On the other hand, recent advances in speech modelling and processing and the availability of a huge collection of multilingual speech data have provided an unprecedented opportunity for acoustic phoneticians to revise and strengthen their knowledge and develop new theories. Such a collaborative effort between science and technology is beneficial to the speech community and it is likely to lead to a paradigm shift for designing next-generation speech algorithms and systems. This, however, calls for a focussed attention to be devoted to analysis and processing techniques aiming at a more effective extraction of information and knowledge in speech.
Objectives:
The objective of this workshop is to discuss innovative approaches to the analysis of speech signals, so that it can bring out the subtle and unique characteristics of speech and speaker. This will also help in discovering speech cues useful for improving the performance of speech systems significantly. Several attempts have been made in the past to explore speech analysis methods that can bridge the gap between human and machine processing of speech. In particular, the time-varying aspects of interactions between excitation and vocal tract systems during production seem to elude exploitation. Some of the explored methods include all-pole and pole-zero modelling methods based on temporal weighting of the prediction errors, interpreting the zeros of speech spectra, analysis of phase in the time and transform domains, nonlinear (neural network) models for information extraction and integration, etc. Such studies may also bring out some finer details of speech signals, which may have implications in determining the acoustic-phonetic cues needed for developing robust speech systems.
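
For readers who want a concrete reference point for the conventional all-pole analysis mentioned above, the following minimal Python sketch fits an LPC model to one speech frame using the autocorrelation method and the Levinson-Durbin recursion. It illustrates only the standard baseline, not any method proposed for the workshop; the model order and the synthetic test frame are arbitrary choices for the example.

  import numpy as np

  def lpc(frame, order):
      """All-pole (LPC) coefficients a[1..order] of the model
      x[n] ~= sum_k a[k] * x[n-k], via Levinson-Durbin."""
      n = len(frame)
      # Autocorrelation sequence r[0..order]
      r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
      a = np.zeros(order)
      err = r[0]
      for i in range(order):
          # Reflection coefficient for stage i+1
          k = (r[i + 1] - a[:i] @ r[i:0:-1]) / err
          if i > 0:
              a[:i] = a[:i] - k * a[i - 1::-1]  # Levinson order update
          a[i] = k
          err *= 1.0 - k * k  # prediction error energy shrinks each stage
      return a, err

  # Example: analyze one 25 ms Hamming-windowed frame of 8 kHz "speech"
  # (white noise stands in for a real signal here).
  frame = np.hamming(200) * np.random.randn(200)
  coeffs, residual_energy = lpc(frame, order=10)
  print(coeffs, residual_energy)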
The Workshop:
- will open with a full-morning tutorial giving an overview of the present state of research on the subject of the workshop
- will be organised as a single series of oral and poster presentations
- will give each oral presentation 30 minutes, to allow ample time for discussion
- is an ideal forum for speech scientists to discuss the perspectives that will further future research collaborations.
Potential topic areas:
- Parametric and nonparametric models
- New all-pole and pole-zero spectral modelling
- Temporal modelling
- Non-spectral processing (group delay etc.)
- Integration of spectral and temporal processing
- Biologically-inspired speech analysis and processing
- Interactions between excitation and vocal tract systems
- Characterization and representation of acoustic phonetic attributes
- Attribute-based speaker and spoken language characterization
- Analysis and processing for detecting acoustic phonetic attributes
- Language-independent aspects of acoustic phonetic attribute detection
- Detection of language-specific acoustic phonetic attributes
- Acoustic to linguistic and acoustic phonetic mapping
- Mapping from acoustic signal to articulator configurations
- Merging of synchronous and asynchronous information
- Other related topics
Registration
Fees for early and late registration for ISCA and non-ISCA members will be made available on the website during September 2007.
Venue:
The workshop will take place at Aalborg University, Department of Electronic Systems, Denmark. See the workshop website for further and latest information.
Accommodation:
There are a large number of hotels in Aalborg most of them close to the city centre. The list of hotels, their web sites and telephone numbers are given on the workshop website. Here you will also find information about transportation between the city centre and the university campus.
How to reach Aalborg:
Aalborg Airport is half an hour away from the international Copenhagen Airport. There are many daily flight connections between Copenhagen and Aalborg. Flying with Scandinavian Airlines System (SAS) or one of the Star Alliance companies to Copenhagen enables you to include the Copenhagen-Aalborg leg in the same ticket, thus reducing the total transportation cost. There is also an hourly train connection between the two cities; the train ride takes approximately five hours.
Organising Committee:
Paul Dalsgaard, B. Yegnanarayana, Chin-Hui Lee, Paavo Alku, Rolf Carlson, Torbjørn Svendsen

http://www.es.aau.dk/ITRW/
 


Back to Top

5-6 . ITRW on experimental linguistics

August 2008, Athens, Greece
Website
Prof. Antonis Botinis


Back to Top

5-7 . International Conference on Auditory-Visual Speech Processing AVSP 2008

Dates: 26-29 September 2008
Location: Moreton Island, Queensland, Australia
Website: http://express.hid.ri.cmu.edu/AVSP2008/Main.html

AVSP 2008 will be held as an ISCA Tutorial and Research Workshop at
Tangalooma Wild Dolphin Resort on Moreton Island from 26-29
September 2008. AVSP 2008 is a satellite conference to Interspeech 2008,
being held in Brisbane from 22-26 September 2008. Tangalooma is
located a short distance from Brisbane, so attendance at AVSP 2008
can easily be combined with participation in Interspeech 2008.

Auditory-visual speech production and perception by human and machine is
an interdisciplinary and cross-linguistic field which has attracted
speech scientists, cognitive psychologists, phoneticians, computational
engineers, and researchers in language learning studies. Since the
inaugural workshop in Bonas in 1995, Auditory-Visual Speech Processing
workshops have been organised on a regular basis (see an overview at the
avisa website). In line with previous meetings, this conference will
consist of a mixture of regular presentations (both posters and oral),
and lectures by invited speakers.

Topics include but are not limited to:
- Machine recognition
- Human and machine models of integration
- Multimodal processing of spoken events
- Cross-linguistic studies
- Developmental studies
- Gesture and expression animation
- Modelling of facial gestures
- Speech synthesis
- Prosody
- Neurophysiology and neuro-psychology of audition and vision
- Scene analysis

Paper submission:
Details of the paper submission procedure will be available on the
website in a few weeks' time.

Chairs:
Simon Lucey
Roland Goecke
Patrick Lucey

 

Back to Top

5-8 . ITRW on Robust ASR

Santiago, Chile
October-November 2008
Dr. Nestor Yoma 

Back to Top

6 . Books, databases and software

6-1 . Reviewing a book?

  • The author of the book Advances in Digital Speech Transmission told me that you might be interested in doing a review of the book. If so, I would be pleased to send you a free review copy. Please just reply to this email and let me know the address where I can send the book.

    Martin, Rainer / Heute, Ulrich / Antweiler, Christiane
    Advances in Digital Speech Transmission

    1. Edition - January 2008
    99.90 Euro
    2008. 572 Pages, Hardcover
    - Practical Approach Book -
    ISBN-10: 0-470-51739-5
    ISBN-13: 978-0-470-51739-0 - John Wiley & Sons

    Best regards

    Tina Heuberger
    ----------------------------------------------------
    Public Relations Associate
    Physical Sciences and Life Sciences Books
    Wiley-Blackwell
    Wiley-VCH Verlag GmbH & Co. KGaA
    Boschstr. 12
    69469 Weinheim
    Germany
    phone +49/6201/606-412
    fax +49/6201/606-223
    mailto:theuberger@wiley-vch.de


Back to Top

6-2 . Books

La production de la parole
Author: Alain Marchal, Universite d'Aix en Provence, France
Publisher: Hermes Lavoisier
Year: 2007

Speech enhancement-Theory and Practice
Author: Philipos C. Loizou, University of Texas, Dallas, USA
Publisher: CRC Press
Year:2007

Speech and Language Engineering
Editor: Martin Rajman
Publisher: EPFL Press, distributed by CRC Press
Year: 2007

Human Communication Disorders/ Speech therapy
This interesting series can be listed on Wiley website

Incurses em torno do ritmo da fala
Author: Plinio A. Barbosa
Publisher: Pontes Editores (city: Campinas)
Year: 2006 (released 11/24/2006)
(In Portuguese, abstract attached.) Website

Speech Quality of VoIP: Assessment and Prediction
Author: Alexander Raake
Publisher: John Wiley & Sons, UK-Chichester, September 2006
Website

Self-Organization in the Evolution of Speech, Studies in the Evolution of Language
Author: Pierre-Yves Oudeyer
Publisher:Oxford University Press
Website

Speech Recognition Over Digital Channels
Authors: Antonio M. Peinado and Jose C. Segura
Publisher: Wiley, July 2006
Website

Multilingual Speech Processing
Editors: Tanja Schultz and Katrin Kirchhoff ,
Elsevier Academic Press, April 2006
Website

Reconnaissance automatique de la parole: Du signal a l'interpretation
Authors: Jean-Paul Haton
Christophe Cerisara
Dominique Fohr
Yves Laprie
Kamel Smaili
392 Pages     Publisher: Dunod

 

Automatic Speech Recognition on Mobile Devices and over Communication Networks
Editors: Zheng-Hua Tan and Børge Lindberg
Publisher: Springer, London, March 2008
Website: http://asr.es.aau.dk/
 
About this book
The remarkable advances in computing and networking have sparked an 
enormous interest in deploying automatic speech recognition on mobile 
devices and over communication networks. This trend is accelerating.
This book brings together leading academic researchers and industrial 
practitioners to address the issues in this emerging realm and presents 
the reader with a comprehensive introduction to the subject of speech 
recognition in devices and networks. It covers network, distributed and 
embedded speech recognition systems, which are expected to co-exist in 
the future. It offers a wide-ranging, unified approach to the topic and 
its latest development, also covering the most up-to-date standards and 
several off-the-shelf systems.
 
Latent Semantic Mapping: Principles & Applications
Author: Jerome R. Bellegarda, Apple Inc., USA
Publisher: Morgan & Claypool
Series: Synthesis Lectures on Speech and Audio Processing
Year: 2007
Website: http://www.morganclaypool.com/toc/sap/1/1
 
 
Back to Top

6-3 . LDC News

In this month's newsletter, the Linguistic Data Consortium (LDC) would 
like to announce the availability of three new publications.
 
------------------------------------------------------------------------
 
*New Publications*
 
(1)  CSLU: National Cellular Telephone Speech Release 2.3 
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2008S02> 
was created by the Center for Spoken Language Understanding (CSLU) at 
OGI School of Science and Engineering, Oregon Health and Science 
University, Beaverton, Oregon. It consists of cellular telephone speech 
and corresponding transcripts, specifically, approximately one minute of 
speech from 2336 speakers calling from locations throughout the United 
States. 
 
Speakers called the CSLU data collection system on cellular telephones, 
and they were asked a series of questions. Two prompt protocols were 
used: an In Vehicle Protocol for speakers calling from inside a vehicle 
and a Not in Vehicle Protocol for those calling from outside a vehicle. 
The protocols shared several questions, but each protocol contained 
distinct queries designed to probe the conditions of the caller's in 
vehicle/not in vehicle surroundings.
 
The text transcriptions in this corpus were produced using the non 
time-aligned word-level conventions described in The CSLU Labeling 
Guide, which is included in the documentation for this release. CSLU: 
National Cellular Telephone Speech Release 2.3 contains orthographic and 
phonetic transcriptions of corresponding speech files.  CSLU: National 
Cellular Corpus Release 2.3 is distributed on 1 DVD-ROM.
 
2008 Subscription Members will automatically receive two copies of this 
corpus, provided that they have submitted a signed copy of the LDC User 
Agreement for CSLU Corpora 
<http://www.ldc.upenn.edu/Catalog/mem_agree/CSLU_User_Agreement.html>. 
2008 Standard Members may request a copy as part of their 16 free 
membership corpora. Nonmembers may license this data for US$150.
 
***
 
(2)  GALE Phase 1 Arabic Blog Parallel Text 
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2008T02> 
was prepared by the LDC and consists of 102K words (222 files) of Arabic 
blog text and its English translation from thirty-three sources. This 
release was used as training data in Phase 1 of the DARPA-funded GALE 
program.
 
The task of preparing this corpus involved four stages of work: data 
scouting, data harvesting, formatting, and data selection.
 
Data scouting involved manually searching the web for suitable blog 
text. Data scouts were assigned particular topics and genres along with 
a production target in order to focus their web search. Formal 
annotation guidelines and a customized annotation toolkit helped data 
scouts to manage the search process and to track progress.
 
Data scouts logged their decisions about potential text of interest 
(sites, threads and posts) to a database. A nightly process queried the 
annotation database and harvested all designated URLs. Whenever 
possible, the entire site was downloaded, not just the individual thread 
or post located by the data scout.
 
Once the text was downloaded, its format was standardized so that the 
data could be more easily integrated into downstream annotation 
processes. Typically a new script was required for each new domain name 
that was identified. After scripts were run, an optional manual process 
corrected any remaining formatting problems.
 
The selected documents were then reviewed for content suitability using 
a semi-automatic process. A statistical approach was used to rank a 
document's relevance to a set of already-selected documents labeled as 
"good." An annotator then reviewed the list of relevance-ranked 
documents and selected those which were suitable for a particular 
annotation task or for annotation in general. 
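 
The announcement does not name the specific statistical approach used 
for this relevance ranking. As one plausible instantiation (all names 
below are illustrative, not LDC's actual tooling), a candidate document 
can be scored by cosine similarity between its term-frequency vector 
and the centroid of the documents already labeled "good":
 
  import math
  from collections import Counter
  
  def tf_vector(text):
      # Bag-of-words term frequencies; a real system would likely
      # normalize, stem, and weight terms (e.g. TF-IDF).
      return Counter(text.lower().split())
  
  def cosine(u, v):
      dot = sum(u[t] * v.get(t, 0) for t in u)
      nu = math.sqrt(sum(x * x for x in u.values()))
      nv = math.sqrt(sum(x * x for x in v.values()))
      return dot / (nu * nv) if nu and nv else 0.0
  
  def rank_candidates(candidates, good_docs):
      """Sort candidate texts by similarity to the 'good' set."""
      centroid = Counter()
      for doc in good_docs:
          centroid.update(tf_vector(doc))
      return sorted(candidates,
                    key=lambda d: cosine(tf_vector(d), centroid),
                    reverse=True)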
 
After files were selected, they were reformatted into a human-readable 
translation format, and the files were then assigned to professional 
translators for careful translation. Translators followed LDC's GALE 
Translation guidelines, which describe the makeup of the translation 
team, the source, data format, the translation data format, best 
practices for translating certain linguistic features (such as names and 
speech disfluencies), and quality control procedures applied to 
completed translations.
 
All final data are in Tab Delimited Format (TDF). TDF is compatible with 
other transcription formats, such as the Transcriber format and AG 
format, and it is easy to process.  Each line of a TDF file corresponds 
to a speech segment and contains 13 tab-delimited fields. A source TDF 
file and its translation are the same except that the transcript in the 
source TDF is replaced by its English translation.  GALE Phase 1 Arabic 
Blog Parallel Text is distributed via web download.
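 
As a rough illustration of consuming such files, the following Python 
reader yields one dictionary per segment. The field names here are 
assumptions made for the example only; the documentation included with 
the release defines the authoritative layout of the 13 columns.
 
  import csv
  
  # Assumed column names, for illustration only.
  TDF_FIELDS = [
      "file", "channel", "start", "end", "speaker", "speaker_type",
      "speaker_dialect", "transcript", "section", "turn", "segment",
      "section_type", "su_type",
  ]
  
  def read_tdf(path):
      """Yield one dict per speech segment of a tab-delimited (TDF) file."""
      with open(path, encoding="utf-8") as f:
          for row in csv.reader(f, delimiter="\t"):
              if len(row) != len(TDF_FIELDS) or row[0].startswith(";;"):
                  continue  # skip malformed lines and comment headers
              yield dict(zip(TDF_FIELDS, row))
  
  # Example: print the time span and transcript of each segment.
  # for seg in read_tdf("source.tdf"):
  #     print(seg["start"], seg["end"], seg["transcript"])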
 
2008 Subscription Members will automatically receive two copies of this 
corpus on disc. 2008 Standard Members may request a copy as part of 
their 16 free membership corpora. Nonmembers may license this data for 
US$1500.
 
***
 
(3)  STC-TIMIT 1.0 
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2008S03> 
is a telephone version of TIMIT Acoustic Phonetic Continuous Speech 
Corpus, LDC93S1 
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC93S1> 
(TIMIT). TIMIT contains broadband recordings of 630 speakers of eight 
major dialects of American English reading ten phonetically rich 
sentences. Created in 1993, TIMIT was designed to provide speech data 
for acoustic-phonetic studies and for the development and evaluation of 
automatic speech recognition systems. In this TIMIT-derived corpus, the 
entire TIMIT database was passed through an actual telephone channel in 
a single call. Thus, a single type of channel distortion and noise 
affect the whole database.
 
The process was managed using a Dialogic switchboard for the calling and 
receiving ends. No transducer (microphone) was employed; the original 
digital signal was converted to analog using the switchboard's A/D 
converter, transmitted through a telephone channel and converted back to 
digital format before recording. As a result, the only distortion 
introduced is that of the telephone channel itself.
 
The STC-TIMIT 1.0 database is organized in the same manner as in the 
original TIMIT corpus: 4620 files belonging to the training partition 
and 1680 files belonging to the test partition. Utterances in STC-TIMIT 
1.0 are time-aligned with those of TIMIT with an average precision of 
0.125 ms (1 sample), by maximizing the cross-correlation between pairs 
of files from each corpus. Thus, labels from TIMIT may be used for 
STC-TIMIT 1.0, and the effects of telephone channels may be studied on a 
frame-by-frame basis.
 
Two telephone lines within the same building were connected to a 
Dialogic(R) card. One of the lines was used as the calling-end and 
played the speech file, while the other line was used as the 
receiving-end and recorded the new signal. The whole recording process 
was conducted in a single call.
 
After recording, the file was pre-cut according to the length of the 
corresponding TIMIT database file. Each resulting file was then aligned 
to its corresponding file in TIMIT using the xcorr routine in Matlab(R). 
Based on these results, the recorded file was sliced again from the 
original recorded file using the newly-generated alignments. Thus, each 
file in STC-TIMIT 1.0 is aligned to its equivalent in TIMIT and has the 
same length.  STC-TIMIT 1.0 is distributed on one DVD-ROM.
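 
The alignment step above used Matlab's xcorr; the short Python sketch 
below illustrates the same idea of picking the lag that maximizes the 
cross-correlation and slicing the recording to the reference length 
(the function and variable names are ours, for illustration only):
 
  import numpy as np
  from scipy.signal import correlate
  
  def align_to_reference(recorded, reference):
      """Return the slice of `recorded` time-aligned with `reference`."""
      xc = correlate(recorded, reference, mode="full")
      # Correlation peak index -> lag of the reference inside the recording
      lag = int(np.argmax(xc)) - (len(reference) - 1)
      start = max(lag, 0)
      return recorded[start:start + len(reference)]
  
  # Example with a synthetic 3-sample delay:
  ref = np.random.randn(1000)
  rec = np.concatenate([np.zeros(3), ref]) + 0.01 * np.random.randn(1003)
  aligned = align_to_reference(rec, ref)
  print(len(aligned), np.corrcoef(aligned, ref)[0, 1])  # ~1.0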
 
2008 Subscription Members will automatically receive two copies of this 
corpus. 2008 Standard Members may request a copy as part of their 16 
free membership corpora. Nonmembers may license this data for US$800.
------------------------------------------------------------------------
 
Ilya Ahtaridis
Membership Coordinator
 
--------------------------------------------------------------------
Linguistic Data Consortium                      Phone: (215) 573-1275
University of Pennsylvania                       Fax: (215) 573-2175
3600 Market St., Suite 810                         ldc@ldc.upenn.edu
Philadelphia, PA 19104 USA                   http://www.ldc.upenn.edu
 
 
 
 
Back to Top

6-4 . Question Answering on speech transcripts (QAst)

  • The QAst organizers are pleased to announce the release of the development dataset for
    the CLEF-QA 2008 track "Question Answering on Speech Transcripts" (QAst).
    We take this opportunity to launch a first call for participation in
    this evaluation exercise.

    QAst is a CLEF-QA track that aims at providing an evaluation framework
    for QA technology on speech transcripts, both manual and automatic.
    A detailed description of this track is available at:
    http://www.lsi.upc.edu/~qast

    This is the second evaluation of the QAst track.
    Last year (QAst 2007), factual questions were generated for two
    distinct corpora (in English only). This year, in addition to
    factual questions, some definition questions are generated, and five
    corpora covering three different languages are used (3 corpora in
    English, 1 in Spanish and 1 in French).

    Important dates:

    # 15 June 2008: evaluation set released
    # 30 June 2008: submission deadline

    The pilot track is organized jointly by the Technical University of
    Catalonia (UPC), the Evaluations and Language resources Distribution
    Agency (ELDA) and Laboratoire d'Informatique pour la Mécanique et les
    Sciences de l'Ingénieur (LIMSI).

    If you are interested in participating please send an email to Jordi
    Turmo (turmo_AT_lsi.upc.edu) with "QAst" in the subject line.


Back to Top

6-5 . ELRA - Language Resources Catalogue - Update

 
ELRA is happy to announce that 1 new Speech Resource, produced within
the Technolangue programme, is now available in its catalogue.
 
ELRA-S0272 MEDIA speech database for French
The MEDIA speech database for French was produced by ELDA within the
French national project MEDIA (Automatic evaluation of man-machine
dialogue systems), as part of the Technolangue programme funded by the
French Ministry of Research and New Technologies (MRNT). It contains
1,258 transcribed dialogues from 250 adult speakers. The method chosen
for the corpus construction process is that of a 'Wizard of Oz' (WoZ)
system. This consists of simulating a natural language man-machine
dialogue. The scenario was built in the domain of tourism and hotel
reservation.
The semantic annotation of the corpus is available in this catalogue and
referenced ELRA-E0024 (MEDIA Evaluation Package).
For more information, see:
http://catalog.elra.info/product_info.php?products_id=1057
 
For more information on the catalogue, please contact Valérie Mapelli
mailto:mapelli@elda.org
 
Visit our on-line catalogue: http://catalog.elra.info.
 
Back to Top

7 . Job openings

Back to Top

7-1 . Speech Engineer/Senior Speech Engineer at Microsoft, Mountain View, CA, USA

 

Job Type: Full-Time
Send resume to Bruce Buntschuh 
   Responsibilities: 
Tellme, now a subsidiary of Microsoft, is a company that is focused on delivering the highest quality voice recognition based applications while providing the highest possible automation to its clients. Central to this focus is the speech recognition accuracy and performance that is used by the applications. The candidate will be responsible for the development, performance analysis, and optimization of grammars, as well as overall speech recognition accuracy, in a wide variety of real world applications in all major market segments. This is a unique opportunity to apply and extend state of the art speech recognition technologies to emerging spaces such as information search on mobile devices.

Requirements:

· Strong background in engineering, linguistics, mathematics, machine learning, and/or computer science. 
· In depth knowledge and expertise in the field of speech recognition.
· Strong analytical skills with a determination to fully understand and solve complex problems.
· Excellent spoken and written communication skills.
· Fluency in English (Spanish a plus).
· Programming capability with scripting tools such as Perl.

Education:

MS, PhD, or equivalent technical experience in an area such as engineering, linguistics, mathematics, or computer science.

 

Back to Top

7-2 . Speech Technology and Software Development Engineer at Microsoft Redmond WA, USA

Speech Technologies and Modeling

Speech Component Group

Microsoft Corporation

Redmond WA, USA

Please contact: Yifan.Gong@microsoft.com

Microsoft's Speech Component Group has been working on automatic speech recognition (SR) in real environments. We develop SR products for multiple languages for mobile devices, desktop computers, and communication servers. The group now has an open position for speech scientists with a software development focus to work on our acoustic and language modeling technologies. The position offers great opportunities for innovation and technology and product development.

Responsibilities:

·     Design and implement speech/language modeling and recognition algorithms to improve recognition accuracy.
·     Create, optimize and deliver quality speech recognition models and other components tailored to our customers' needs.
·     Identify, investigate and solve challenging problems in the areas of recognition accuracy from speech recognition system deployments.
·     Improve speech recognition language expansion engineering process that ensures product quality and scalability.

Required competencies and skills:

·     Passion about speech technology and quality software, demonstrated ability relative to the design and implementation of speech recognition algorithms.
·     Strong desire for achieving excellent results, strong problem solving skills, ability to multi-task, handle ambiguities, and identify issues in complex SR systems.
·     Good software development skills, including strong aptitude for software design and coding. 3+ years of experience in C/C++ and programming with scripting languages are highly desirable.
·     MS or PhD degree in Computer Science, Electrical Engineering, Mathematics, or related disciplines, with strong background in speech recognition technology, statistical modeling, or signal processing.
·     Track record of developing SR algorithms, or experience in linguistic/phonetics, is a plus.

 

Back to Top

7-3 . PhD Research Studentship in Spoken Dialogue Systems- Cambridge UK

Applications are invited for an EPSRC sponsored studentship in Spoken Dialogue Systems leading to the PhD degree. The student will join a team led by Professor Steve Young working on statistical approaches to building Spoken Dialogue Systems. The overall goal of the team is to develop complete working end-to-end systems which can be trained from real data and which can be continually adapted on-line. The PhD work will focus specifically on the use of Partially Observable Markov Decision Processes for dialogue modelling and techniques for learning and adaptation within that framework. The work will involve statistical modelling, algorithm design and user evaluation. The successful candidate will have a good first degree in a relevant area. Good programming skills in C/C++ are essential and familiarity with Matlab would be useful.
The studentship will be for 3 years starting in October 2007 or January 2008. The studentship covers University and College fees at the Home/EU rate and a maintenance allowance of 13000 pounds per annum. Potential applicants should email Steve Young with a brief CV and a statement of interest in the proposed work area.

Back to Top

7-4 . AT&T - Labs Research: Research Staff Positions - Florham Park, NJ

AT&T - Labs Research is seeking exceptional candidates for Research Staff positions. AT&T is the premier broadband, IP, entertainment, and wireless communications company in the U.S. and one of the largest in the world. Our researchers are dedicated to solving real problems in speech and language processing, and are involved in inventing, creating and deploying innovative services. We also explore fundamental research problems in these areas. Outstanding Ph.D.-level candidates at all levels of experience are encouraged to apply. Candidates must demonstrate excellence in research, a collaborative spirit and strong communication and software skills. Areas of particular interest are:

  • Large-vocabulary automatic speech recognition
  • Acoustic and language modeling
  • Robust speech recognition
  • Signal processing
  • Speaker recognition
  • Speech data mining
  • Natural language understanding and dialog
  • Text and web mining
  • Voice and multimodal search

AT&T Companies are Equal Opportunity Employers. All qualified candidates will receive full and fair consideration for employment. More information and application instructions are available on our website at http://www.research.att.com/. Click on "Join us". For more information, contact Mazin Gilbert (mazin at research dot att dot com).

 


Back to Top

7-5 . Research Position in Speech Processing at UGent, Belgium

Background

Since March 2005, the universities of Leuven, Gent, Antwerp and Brussels have joined forces in a large research project called SPACE (SPeech Algorithms for Clinical and Educational applications). The project aims at contributing to the broader application of speech technology in educational and therapeutic software tools. More specifically, it pursues the automatic detection and classification of reading errors in the context of an automatic reading tutor, and the objective assessment of disordered speech (e.g. speech of the deaf, dysarthric speech, ...) in the context of computer-assisted speech therapy assessment. Specific to the target applications is that the speech is either grammatically and lexically incorrect or atypically pronounced. Therefore, standard technology cannot be applied as such in these applications.

Job description

The person we are looking for will be in charge of the data-driven development of word mispronunciation models that can predict expected reading errors in the context of a reading tutor. These models must be integrated into the linguistic model of the prompted utterance, so that the speech recognizer becomes more specific in its detection and classification of presumed errors than a recognizer using a more traditional linguistic model with context-independent garbage and deletion arcs. A further challenge is to make the mispronunciation model adaptive to the progress made by the user.

Profile

We are looking for a person from the EU with a creative mind, and with an interest in speech & language processing and machine learning. The work will require an ability to program algorithms in C and Python. Having experience with Python is not a prerequisite (someone with some software experience is expected to learn this in a short time span). Demonstrated experience with speech & language processing and/or machine learning techniques will give you an advantage over other candidates.

The job is open to a pre-doctoral as well as a post-doctoral researcher who can start in November or December. The job runs until February 28, 2009, but a pre-doctoral candidate aiming for a doctoral degree will get opportunities to do follow-up research in related projects. 

Interested persons should send their CV to Jean-Pierre Martens (martens@elis.ugent.be). There is no real deadline, but as soon as a suitable person is found, he/she will get the job.

 

Back to Top

7-6 . Summer Intern positions at Motorola, Schaumburg, Illinois, USA

Motorola Labs - Center for Human Interaction Research (CHIR), located in Schaumburg, Illinois, USA, is offering summer intern positions in 2008 (12 weeks each).

CHIR's mission:

Our research lab develops technologies that make access to rich communication, media and information services effortless, based on natural, intelligent interaction. Our research aims at systems that adapt automatically and proactively to changing environments, device capabilities and to continually evolving knowledge about the user.

Intern profiles:

1) Acoustic environment/event detection and classification.

The successful candidate will be a PhD student near the end of his/her PhD study, skilled in signal processing and/or pattern recognition, who knows Linux and C/C++ programming. Candidates with knowledge of acoustic environment/event classification are preferred.

2) Speaker adaptation for applications on speech recognition and spoken document retrieval.

The successful candidate must currently be pursuing a Ph.D. degree in EE or CS, with a complete understanding of and hands-on experience in automatic speech recognition research, proficiency in a Linux/Unix working environment and C/C++ programming, and a strong GPA. A strong background in speaker adaptation is highly preferred.

3) Development of voice search-based web applications on a smartphone

We are looking for an intern candidate to help create an "experience" prototype based on our voice search technology. The app will be deployed on a smartphone and demonstrate intuitive and rich interaction with web resources. This intern project is oriented more towards software engineering than research. We target an intern with a master's degree and strong software engineering background. Mastery of C++ and experience with web programming (AJAX and web services) is required. Development experience on Windows CE/Mobile desired.

4) Integrated Voice Search Technology For Mobile Devices.

The candidate should be proficient in information retrieval, pattern recognition and speech recognition, and should be able to program in C++ and scripting languages such as Python or Perl in a Linux environment. He/she should also have knowledge of information retrieval or search engines.

We offer competitive compensation, a fun work environment and Chicago-style pizza.

If you are interested, please send your resume to:

Dusan Macho, CHIR-Motorola Labs

Email:  dusan.macho@motorola.com

Tel: +1-847-576-6762

 


Back to Top

7-7 . Nuance: Software engineer speech dialog tools

In order to strengthen our Embedded ASR Research team, we are looking for a:

SOFTWARE ENGINEER SPEECH DIALOGUE TOOLS

As part of our team, you will be creating solutions for voice user interfaces for embedded applications on mobile and automotive platforms.

OVERVIEW:

- You will work in Nuance's Embedded ASR R&D team, developing technology, tools, and run-time software to enable our customers to develop and test embedded speech applications. Together with our team of speech and language experts, you will work on natural language dialogue systems for our customers in the Automotive and Mobile sector.

- You will work either at Nuance's Office in Aachen, a beautiful, old city right in the heart of Europe with great history and culture, or at Nuance's International Headquarters in Merelbeke, a small town just 5km away from the heart of the vibrant and picturesque city of Ghent, in the Flanders region of Belgium. Both Aachen and Ghent offer some of the most spectacular historic town centers in Europe, and are home to large international universities.

- You will work in an international company and cooperate with people on various locations including in Europe, America and Asia. You may occasionally be asked to travel.

RESPONSIBILITIES:

- You will work on the development of tools and solutions for cutting edge speech and language understanding technologies for automotive and mobile devices.

- You will work on enhancing various aspects of our advanced natural language dialogue system, such as the layer of connected applications, the configuration setup, inter-module communication, etc.

- In particular, you will be responsible for the design, implementation, evaluation, optimization and testing, and documentation of tools such as GUI and XML applications that are used to develop, configure, and fine-tune advanced dialogue systems.

QUALIFICATIONS:

- You have a university degree in computer science, engineering, mathematics, physics, computational linguistics, or a related field.

- You have very strong software and programming skills, especially in C/C++, ideally also for embedded applications.

- You have experience with Python or other scripting languages.

- GUI programming experience is a strong asset.

The following skills are a plus:

- Understanding of communication protocols

- Understanding of databases

- Understanding of computational agents and related frameworks (such as OAA).

- A background in (computational) linguistics, dialogue systems, speech processing, grammars, and parsing techniques, statistics and machine learning, especially as related to natural language processing, dialogue, and representation of information

- You can work both as a team player and as goal-oriented independent software engineer.

- You can work in a multi-national team and communicate effectively with people of different cultures.

- You have a strong desire to make things really work in practice, on hardware platforms with limited memory and processing power.

- You are fluent in English and you can write high quality documentation.

- Knowledge of other languages is a plus.

CONTACT:

Please send your applications, including cover letter, CV, and related documents (maximum 5MB total for all documents, please) to

Deanna Roe                  Deanna.roe@nuance.com

Please make sure to document to us your excellent software engineering skills.

ABOUT US:

Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world.  Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched.  With more than 3000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.

 

Back to Top

7-8 . Nuance: Speech scientist London UK

  • Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world.  Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched.  With more than 2000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.

    To strengthen our International Professional Services team, based in London, we are currently looking for a

     

     

                                Speech Scientist, London, UK

    Nuance Professional Services (PS) has designed, developed, and optimized thousands of speech systems across dozens of industries, including directory search, call center automation, applications in telecom, finance, airline, healthcare, and other verticals; applications for video games, mobile dictation, enhanced search services, SMS, and in-car navigation.  Nuance PS applications have automated approximately 7 billion phone conversations for some of the world's most respected companies, including British Airways, Vodafone, Amtrak, Bank of America, BellCanada, Citigroup, General Electric, NTT and Verizon.

    The PS organization consists of energetic, motivated, and friendly individuals.  The Speech Scientists in PS are among the best and brightest, with PhDs from universities such as Cambridge (UK), MIT, McGill, Harvard, Penn, CMU, and Georgia Tech, and having worked at research labs such as Bell Labs, Motorola Labs, and ATR (Japan), culminating in over 300 years of Speech Science experience and covering well over 20 languages.

    Come and join Nuance PS and work on the latest technology from one of the prominent speech recognition technology providers, and make a difference in the way the world communicates.

    Job Overview

    As a Speech Scientist in the Professional Services group, you will work on automated speech recognition applications, covering a broad range of activities in all project phases, including the design, development, and optimization of the system.  You will:

    • Work across application development teams to ensure best possible recognition performance in deployed systems
    • Identify recognition challenges and assess accuracy feasibility during the design phase,
    • Design, develop, and test VoiceXML grammars and create JSPs, Java, and ECMAscript grammars for dynamic contexts
    • Optimize accuracy of applications by analyzing performance and tuning statistical language models, pronunciations, and acoustic models, including identifying areas for improvement by running the recognizer offline
    • Contribute to the generation and presentation of client-facing reports
    • Act as technical lead on more intensive client projects
    • Develop methodologies, scripts, procedures that improve efficiency and quality
    • Develop tools and enhance algorithms that facilitate deployment and tuning of recognition components
    • Act as subject matter domain expert for specific knowledge domains
    • Provide input into the design of future product releases

         Required Skills

    • MS or PhD in Computer Science, Engineering, Computational Linguistics, Physics, Mathematics, or related field (or equivalent)
    • Strong analytical and problem solving skills and ability to troubleshoot issues
    • Good judgment and quick-thinking
    • Strong programming skills, preferably Perl or Python
    • Excellent written and verbal communications skills
    • Ability to scope work taking technical, business and time-frame constraints into consideration
    • Works well in a team and in a fast-paced environment

    Beneficial Skills

    • Strong programming skills in either Perl, Python, Java, C/C++, or Matlab
    • Speech recognition knowledge
    • Strong pattern recognition, linguistics, signal processing, or acoustics knowledge
    • Statistical data analysis
    • Experience with XML, VoiceXML, and Wiki
    • Ability to mentor or supervise others
    • Additional language skills, e.g. French, Dutch, German, Spanish

     


Back to Top

7-9 . Nuance: Research engineer speech engine

In order to strengthen our Embedded ASR Research team, we are looking for a:

 RESEARCH ENGINEER SPEECH ENGINE

As part of our team, you will be creating solutions for voice user interfaces for embedded applications on mobile and automotive platforms.

 OVERVIEW:

- You will work in Nuance's Embedded ASR R&D team, developing, improving and maintaining core ASR engine algorithms for our customers in the Automotive and Mobile sector.

- You will work either at Nuance's Office in Aachen, a beautiful, old city right in the heart of Europe with great history and culture, or at Nuance's International Headquarters in Merelbeke, a small town just 5km away from the heart of the vibrant and picturesque city of Ghent, in the Flanders region of Belgium. Both Aachen and Ghent offer some of the most spectacular historic town centers in Europe, and are home to large international universities.

- You will work in an international company and cooperate with people in various locations, including Europe, America and Asia. You may occasionally be asked to travel.

RESPONSIBILITIES:

- You will work on developing, improving and maintaining core ASR engine algorithms for cutting-edge speech and natural language understanding technologies for automotive and mobile devices.

- You will work on the design and development of more efficient, flexible ASR search algorithms, with a strong focus on low memory and processor requirements (see the sketch below).
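
By way of illustration only (a generic technique, not a description of Nuance's engine): memory- and CPU-bounded ASR search typically combines beam pruning, which discards hypotheses scoring far below the current best, with histogram pruning, which caps the number of active hypotheses per frame. A minimal sketch in Python:

    import heapq

    def prune(hypotheses, beam, max_active):
        """hypotheses: non-empty list of (log_score, state) pairs for one frame."""
        best = max(score for score, _ in hypotheses)
        # beam pruning: drop anything more than `beam` below the best score
        within_beam = [(s, st) for s, st in hypotheses if s >= best - beam]
        # histogram pruning: keep at most `max_active` hypotheses
        return heapq.nlargest(max_active, within_beam, key=lambda p: p[0])

Both thresholds trade recognition accuracy against memory and processor load, which is exactly the tension in embedded search design.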

QUALIFICATIONS:

- You have a university degree in computer science, engineering, mathematics, physics, computational linguistics, or a related field. PhD is a plus.

- You have a background in (computational) linguistics, speech processing, ASR search, confidence values, grammars, statistics and machine learning, especially as related to natural language processing.

- You have very strong software and programming skills, especially in C/C++, ideally also for embedded applications.

The following skills are a plus:

- You have experience with Python or other scripting languages.

- Broad knowledge about architectures of embedded platforms and processors.

- Understanding of databases

- You can work both as a team player and as a goal-oriented, independent software engineer.

- You can work in a multi-national team and communicate effectively with people of different cultures.

- You have a strong desire to make things really work in practice, on hardware platforms with limited memory and processing power.

- You are fluent in English and you can write high-quality documentation.

- Knowledge of other languages is a plus.

CONTACT:

Please send your applications, including cover letter, CV, and related documents (maximum 5MB total for all documents, please) to

Deanna Roe                  Deanna.roe@nuance.com

Please make sure your application documents your excellent software engineering skills.

ABOUT US:

Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world.  Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched.  With more than 3000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.

 

Back to Top

7-10 . Nuance: Research engineer speech dialogue systems

In order to strengthen our Embedded ASR Research team, we are looking for a:

    RESEARCH ENGINEER SPEECH DIALOGUE SYSTEMS

As part of our team, you will be creating speech technologies for embedded applications varying from simple command and control tasks up to natural language speech dialogues on mobile and automotive platforms.

OVERVIEW:

- You will work in Nuance's Embedded ASR research and production team, creating technology, tools and runtime software to enable our customers to develop embedded speech applications. In our team of speech and language experts, you will work on natural language dialogue systems that define the state of the art.

- You will work at Nuance's International Headquarters in Merelbeke, a small town just 5km away from the heart of the picturesque city of Ghent, in the Flanders region of Belgium. Ghent has one of the most spectacular historic town centers of Europe and is known for its unique vibrant yet cozy charm, and is home to a large international university.

- You will work in an international company and cooperate with people in various locations, including Europe, America, and Asia.  You may occasionally be asked to travel.

RESPONSIBILITIES:

- You will work on the development of cutting edge natural language dialogue and speech recognition technologies for automotive embedded systems and mobile devices.

- You will design, implement, evaluate, optimize, and test new algorithms and tools for our speech recognition systems, both for research prototypes and deployed products, including all aspects of dialogue systems design, such as architecture, natural language understanding, dialogue modeling, statistical framework, and so forth.

- You will help the engine process multi-lingual natural and spontaneous speech in various noise conditions, given the challenging memory and processing power constraints of the embedded world.

QUALIFICATIONS:

- You have a university degree in computer science, (computational) linguistics, engineering, mathematics, physics, or a related field. A graduate degree is an asset.

- You have strong software and programming skills, especially in C/C++, ideally for embedded applications. Knowledge of Python or other scripting languages is a plus.

- You have experience in one or more of the following fields:

     dialogue systems

     applied (computational) linguistics

     natural language understanding

     language generation

     search engines

     speech recognition

     grammars and parsing techniques

     statistics and machine learning techniques

     XML processing

-You are a team player, willing to take initiative and assume responsibility for your tasks, and are goal-oriented.

-You can work in a multi-national team and communicate effectively with people of different cultures.

-You have a strong desire to make things really work in practice, on hardware platforms with limited memory and processing power.

-You are fluent in English and you can write high-quality documentation.

-Knowledge of other languages is a strong asset.

CONTACT:

Please send your applications, including cover letter, CV, and related documents (maximum 5MB total for all documents, please) to

 

Deanna Roe                  Deanna.roe@nuance.com

ABOUT US:

Nuance is the leading provider of speech and imaging solutions for businesses and consumers around the world.  Every day, millions of users and thousands of businesses experience Nuance by calling directory assistance, requesting account information, dictating patient records, telling a navigation system their destination, or digitally reproducing documents that can be shared and searched.  With more than 3000 employees worldwide, we are committed to making the user experience more enjoyable by transforming the way people interact with information and how they create, share and use documents. Making each of those experiences productive and compelling is what Nuance is about.

 

Back to Top

7-11 . Research Position in Speech Processing at Nagoya Institute of Technology, Japan

Nagoya Institute of Technology is seeking a researcher for a post-doctoral position in EMIME ("Efficient multilingual interaction in mobile environment"), a new European Commission-funded project involving Nagoya Institute of Technology and five other European partners, starting in March 2008 (see the project summary below). The earliest starting date of the position is March 2008. The initial duration of the contract will be one year, with the possibility of prolongation (on a year-by-year basis, for a maximum of three years). The position provides opportunities to collaborate with other researchers in a variety of national and international projects. The competitive salary is calculated according to qualifications, based on NIT scales.

The candidate should have a strong background in speech signal processing and some experience with speech synthesis and recognition. Desired skills include familiarity with the latest technology, including HTK, HTS, and Festival, at the source code level.

For more information, please contact Keiichi Tokuda (http://www.sp.nitech.ac.jp/~tokuda/).

 

About us

Nagoya Institute of Technology (NIT), founded in 1905, is situated in the world-class manufacturing area of Central Japan (about one hour and 40 minutes from Tokyo, and 36 minutes from Kyoto, by Shinkansen). NIT is one of Japan's leading higher-education institutions of technology. EMIME will be carried out at the Speech Processing Laboratory (SPL) in the Department of Computer Science and Engineering of NIT. SPL is known for its outstanding, continuous contribution to the development of high-performance, high-quality open-source software: the HMM-based Speech Synthesis System "HTS" (http://hts.sp.nitech.ac.jp/), the large-vocabulary continuous speech recognition engine "Julius" (http://julius.sourceforge.jp/), and the Speech Signal Processing Toolkit "SPTK" (http://sp-tk.sourceforge.net/). The laboratory is involved in numerous national and international collaborative projects. SPL also has close partnerships with many industrial companies, including Toyota, Nissan, Panasonic, Brother Inc., Funai, Asahi-Kasei, and ATR, in order to transfer its research into commercial applications.

Project summary of EMIME

The EMIME project will help to overcome the language barrier by developing a mobile device that performs personalized speech-to-speech translation, such that a user's spoken input in one language is used to produce spoken output in another language, while continuing to sound like the user's voice. Personalization of systems for cross-lingual spoken communication is an important, but little explored, topic. It is essential for providing more natural interaction and making the computing device a less obtrusive element when assisting human-human interactions.

We will build on recent developments in speech synthesis using hidden Markov models, which is the same technology used for automatic speech recognition. Using a common statistical modeling framework for automatic speech recognition and speech synthesis will enable the use of common techniques for adaptation and multilinguality.

Significant progress will be made towards a unified approach for speech recognition and speech synthesis: this is a very powerful concept, and will open up many new areas of research. In this project, we will explore the use of speaker adaptation across languages so that, by performing automatic speech recognition, we can learn the characteristics of an individual speaker, and then use those characteristics when producing output speech in another language.
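
For context (this is the standard adaptation technique in HMM-based systems, though the summary above does not name the project's specific method): maximum likelihood linear regression (MLLR) re-estimates the Gaussian means of the acoustic models with an affine transform learned from a small amount of the target speaker's speech,

    \hat{\mu}_m = A \mu_m + b

where \mu_m is the mean of mixture component m and the transform (A, b) is shared across many components. Because the same kind of transform can be applied to the means of HMM synthesis models, speaker characteristics learned during recognition can be carried over to synthesized output, including output in another language.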

Our objectives are to:

1. Personalize speech processing systems by learning individual characteristics of a user's speech and reproducing them in synthesized speech.

2. Introduce a cross-lingual capability such that personal characteristics can be reproduced in a second language not spoken by the user.

3. Develop and better understand the mathematical and theoretical relationship between speech recognition and synthesis.

4. Eliminate the need for human intervention in the process of cross-lingual personalization.

5. Evaluate our research against state-of-the-art techniques and in a practical mobile application.

 


Back to Top

7-12 . C/C++ Programmer Munich, Germany

Digital publishing AG is one of Europe's leading producers of interactive software for foreign language training. In our e-learning courses we place the emphasis on speaking and spoken language understanding. In order to strengthen our Research & Development team in Munich, Germany, we are looking for experienced C or C++ programmers with at least 3 years' experience in the design and coding of sophisticated software systems under Windows.
We offer:
- a creative working atmosphere in an international team of software engineers, linguists and editors working on challenging research projects in speech recognition and speech dialogue systems
- participation in all phases of a product life cycle, as we are interested in the fast transfer of research results into products
- the possibility to participate in international scientific conferences
- a permanent job in the center of Munich
- excellent possibilities for development within our fast-growing company
- flexible working times, competitive compensation and arguably the best espresso in Munich

We expect:
- several years of practical experience in software development in C or C++ in a commercial or academic environment
- experience with parallel algorithms and thread programming
- experience with object-oriented design of software systems
- good knowledge of English or German

Desirable:
- experience with optimization of algorithms
- experience in statistical speech or language processing, preferably speech recognition, speech synthesis, speech dialogue systems or chatbots
- experience with Delphi or Turbo Pascal
Interested? We look forward to receiving your application (preferably by e-mail):
digital publishing AG  
Freddy Ertl  f.ertl@digitalpublishing.de  
Tumblinger Straße 32  
D-80337 München Germany 

 

Back to Top

7-13 . Speech and Natural Language Processing Engineer at M*Modal, Pittsburgh, PA, USA

M*Modal is a fast-moving speech technology company based in Pittsburgh, PA. Our portfolio of conversational speech recognition and natural language understanding technologies is widely recognized as the most advanced in the industry. We are a leading innovator in the field of conversational documentation services (CDS) - where speech recognition and natural language understanding are combined in a unique setup targeted to truly understand conversational speech and turn it directly into actionable and meaningful data. Our proprietary speech understanding technology - operating on M*Modal's computing grid hosted in our national data center - is already redefining the way clinical information is captured in healthcare.


We are seeking an experienced and dedicated speech and natural language processing engineer who wants to push the frontiers of conversational speech understanding. Join our renowned research and development team, and add to our unique blend of scientific and engineering excellence.

Responsibilities:

  • You will be working with other members of the R&D team to continuously improve our speech and natural language understanding technologies.
  • You will participate in designing and implementing algorithms, tools and methodologies in the area of automatic speech recognition and natural language processing/understanding.
  • You will collaborate with other members of the R&D team to identify, analyze and resolve technical issues.

 

Requirements:

  • Solid background in speech recognition, natural language processing, machine learning and information extraction.
  • 2+ years of experience participating in software development projects
  • Proficient with Java, C++ and scripting (e.g. Python, Perl, ...)
  • Excellent analytical and problem-solving skills
  • Integrate and communicate well in small R&D teams
  • Master's degree in CS or related engineering fields
  • Experience in a healthcare-related field a plus

 

In June 2007 M*Modal moved to a great new office space in the Squirrel Hill area of Pittsburgh.  We are excited to be growing and are looking for individuals who have a passion for the work they do and are interested in becoming a member of a dynamic work group of smart, passionate drivers who also know how to have fun.

 

M*Modal offers a top-notch benefits package that includes medical, dental and vision coverage, short-term disability, matching 401K savings plan, holidays, paid-time-off and tuition refund.  If you would like to be considered for this opportunity, please send your resume and cover letter to Mary Ann Gamble at maryann.gamble@mmodal.com

 

Back to Top

7-14 . Senior Research Scientist -- Speech and Natural Language Processing at M*Modal, Pittsburgh, PA, USA

M*Modal is a fast-moving speech technology company based in Pittsburgh, PA. Our portfolio of conversational speech recognition and natural language understanding technologies is widely recognized as the most advanced in the industry. We are a leading innovator in the field of conversational documentation services (CDS) - where speech recognition and natural language understanding are combined in a unique setup targeted to truly understand conversational speech and turn it directly into actionable and meaningful data. Our proprietary speech understanding technology - operating on M*Modal's computing grid hosted in our national data center - is already redefining the way clinical information is captured in healthcare.


We are seeking an experienced and dedicated senior research scientist who wants to push the frontiers of conversational speech understanding. Join our renowned research and development team, and add to our unique blend of scientific and engineering excellence.

Responsibilities:

  • Plan and perform research and development tasks to continuously improve a state-of-the-art speech understanding system
  • Take a leading role in identifying solutions to challenging technical problems
  • Contribute original ideas and turn them into product-grade software implementations
  • Collaborate with other members of the R&D team to identify, analyze and resolve technical issues

 

Requirements:

  • Solid research & development background with 3+ years of experience in speech recognition research, covering at least two of the following topics: speech processing, acoustic modeling, language modeling, decoding, LVCSR, natural language processing/understanding, speaker verification/identification, audio mining
  • Working knowledge of Machine Learning, Information Extraction and Natural Language Processing algorithms
  • 3+ years of experience participating in large-scale software development projects using C++ and Java.
  • Excellent analytical, problem-solving and communication skills
  • PhD with a focus on speech recognition, or a Master's degree with 3+ years of industry experience working on automatic speech recognition
  • Experience and/or education in medical informatics a plus
  • Working experience in a healthcare related field a plus

 


In June 2007 M*Modal moved to a great new office space in the Squirrel Hill area of Pittsburgh.  We are excited to be growing and are looking for individuals who have a passion for the work they do and are interested in becoming a member of a dynamic work group of smart, passionate drivers who also know how to have fun.

 

M*Modal offers a top-notch benefits package that includes medical, dental and vision coverage, short-term disability, matching 401K savings plan, holidays, paid-time-off and tuition refund.  If you would like to be considered for this opportunity, please send your resume and cover letter to Mary Ann Gamble at maryann.gamble@mmodal.com

 

Back to Top

7-15 . Postdoc position at LORIA, Nancy, France

Building an articulatory model from ultrasound, EMA and MRI data

 

Postdoctoral position

 

 

Research project

An articulatory model comprises both the mobile articulators involved in speech production, visible or internal (the lower jaw, tongue, lips and velum), and the fixed walls (the palate, the rear wall of the pharynx). An articulatory model is dynamic, since the articulators deform during speech production. Such a model is of potential interest in the field of language learning, by providing visual feedback on the articulation produced by the learner, among many other applications.

Building an articulatory model is difficult because the different articulators have to be detected from specific image modalities: the lips are acquired through video; the tongue shape is acquired through ultrasound imaging at a high frame rate, but these 2D images are very noisy; finally, 3D images of all articulators can be obtained with MRI, but only for sustained sounds (such as vowels), due to the long acquisition time of MRI images.

The subject of this post-doc is to construct a dynamic 3D model of the entire vocal tract by merging the 3D information available in the MRI acquisitions and temporal 2D information provided by the contours of the tongue visible on the ultrasound images or X-ray images.

We are working on the construction of an articulatory model within the European project ASPI (http://aspi.loria.fr/).

We have already built an acquisition system which allows us to obtain synchronized data from the ultrasound, MRI, video and EMA modalities.

Only a few complete articulatory models are currently available in the world and a real challenge in the field is to design set-ups and easy-to-use methods for automatically building the model of any speaker from 3D and 2D images. Indeed, the existence of more articulatory models would open new directions of research about speaker variability and speech production.

 

Objectives

The aim of this work is to build a deformable model of the vocal tract from static 3D MRI images and dynamic 2D sequences. Previous work has addressed the modelling of the vocal tract, and especially of the tongue (M. Stone [1], P. Badin et al. [2]). Unfortunately, substantial human interaction is required to extract tongue contours in the images. In addition, these works often consider only one image modality, reducing the reliability of the resulting model.

The aim of this work is to provide automatic methods for segmenting features in the images as well as methods for building a parametric model of the 3D vocal tract with these specific aims:

  • The segmentation process is to be guided by prior knowledge of the vocal tract. In particular, shape, topological and regularity constraints must be considered.
  • A parametric model of the vocal tract has to be defined (classical models are linear and built from a principal component analysis; see the sketch after this list). Special emphasis must be put on the problem of matching the various features between the images.
  • Besides classical geometric constraints, both the building and the assessment of the model will be guided by acoustic distances, in order to check the agreement between the sound synthesized from the model and the sound realized by the human speaker.
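
As an illustration of the linear, PCA-based parametric model mentioned in the second point above, here is a minimal sketch (Python with NumPy, using synthetic stand-in data rather than the ASPI pipeline):

    import numpy as np

    # contours: one row per image frame, each row the flattened (x, y)
    # coordinates of a fixed number of tongue-contour points
    # (synthetic data here; real input would come from segmentation)
    rng = np.random.default_rng(0)
    contours = rng.normal(size=(200, 2 * 40))

    mean = contours.mean(axis=0)
    # principal components via SVD of the centered data
    _, s, vt = np.linalg.svd(contours - mean, full_matrices=False)

    k = 5                    # number of articulatory parameters retained
    components = vt[:k]

    def synthesize(params):
        """Reconstruct a contour from k articulatory parameters."""
        return mean + params @ components

    # fraction of variance captured by the retained components
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()

The acoustic assessment of the third point would then compare sounds synthesized from such reconstructed shapes with the speaker's actual recordings.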

 

Skill and profile

The recruited person must have a solid background in computer vision and in applied mathematics. Information and demonstrations of the research topics addressed by the Magrit team are available at http://magrit.loria.fr/.

 

References

[1] M. Stone: Modeling tongue surface contours from Cine-MRI images. Journal of Speech, Language, and Hearing Research, 2001.

[2] P. Badin, G. Bailly, L. Reveret: Three-dimensional linear articulatory modeling of tongue, lips and face, based on MRI and video images. Journal of Phonetics, 2002, vol. 30, pp. 533-553.

 

Contact

Interested candidates are invited to contact Marie-Odile Berger, berger@loria.fr, +33 3 54 95 85 01

 

Important information

This position is advertised in the framework of the national INRIA campaign for recruiting post-docs. It is a one-year position, renewable, beginning fall 2008. The salary is 2,320€ gross per month. 

 

Selection of candidates will be a two-step process. A first selection of candidates will be carried out internally by the Magrit group. The selected candidate's application will then be further processed for approval and funding by an INRIA committee.

 

The doctoral thesis must be less than one year old (defended after May 2007) or be defended before the end of 2008. If the defence has not yet taken place, candidates must specify the tentative date and jury for the defence.

 

Important - Useful links

Presentation of INRIA postdoctoral positions

To apply (be patient, loading this link takes time...)

 

Back to Top

7-16 . Internships at Motorola Labs Schaumburg

Motorola Labs - Center for Human Interaction Research (CHIR), located in Schaumburg, Illinois, USA, is offering summer intern positions in 2008 (12 weeks each).
 
CHIR's mission
 
Our research lab develops technologies that make access to rich communication, media and information services effortless, based on natural, intelligent interaction. Our research aims at systems that adapt automatically and proactively to changing environments, device