ISCApad number 93

March 10th, 2006

Dear Members,
This is our 93rd ISCApad. Many new calls for papers are announced. I urge all organizers to inform me as soon as they know about submission deadline extensions: since ISCApad is a monthly newsletter, new deadlines have often already passed by the time they can be inserted in the next ISCApad issue.
Our student activity committee has decided to update the list of speech labs: please help them by filling out the on-line form as described below.
I remind all members to inform our secretariat or myself if they plan to change their email address or affiliation, so that ISCA does not lose track of its members and can continue to reach them via ISCApad.

Christian Wellekens

TABLE OF CONTENTS

  1. ISCA News
  2. SIG's activities
  3. Courses, internships
  4. Books, databases, softwares
  5. Job openings
  6. Journals
  7. Future INTERSPEECH Conferences
  8. Future ISCA Tutorial and Research Workshops (ITRW)
  9. Forthcoming Events supported (but not organized) by ISCA
  10. Future Speech Science and technology events

ISCA NEWS


Message from our archivist
Dear Colleagues:
Another part of the archive is now complete: ICSLP'92 (Banff, Canada) is online. ICSLP'92 was the last ICSLP conference to be included in the archive. Now we only have two Eurospeech conferences and the Edinburgh predecessor left for inclusion.
Professor Wolfgang Hess

From ISCA Student activity committee (SAC)
One of our tasks was updating the list of speech groups. SAC put together a webpage for groups to upload that information by themselves. You can see that page here: http://www.isca-students.org/new-speech-lab.php
Murat Akbacak
ISCA-SAC Student Coordinator

ISCA GRANTS
are available for students and young scientists attending meetings.
For more information: http://www.isca-speech.org/grants


SIG's activities


A list of Speech Interest Groups can be found on our website.

News from the AFCP
*** 2005 PhD Thesis Prize ***
awarded by the Association Francophone de la Communication Parlée (AFCP)
Every year, the AFCP awards a scientific prize for an outstanding PhD thesis in the field of speech communication. The AFCP thereby wishes to promote every facet of speech communication research, from fundamental to applied work, whether in information and communication sciences, the humanities and social sciences, the life sciences, etc. The goal of this prize is to encourage young researchers and to make their work known to the whole community.
The jury is composed of academics and researchers, all elected members of the AFCP board, and is chaired by a member of the AFCP advisory council. The jury will select the prize-winning thesis from among the candidate theses; it may also distinguish other theses, which will be highlighted on the AFCP website. The 2004 prize was awarded to R. Ridouane for his thesis « Suite de consonnes en berbère : phonétique et phonologie » (http://www.afcp-parole.org/article.php3?id_article=606).
The prize will be officially awarded during the biennial meeting "Les Journées d'Etudes sur la Parole" (JEP), whose purpose is to gather and synthesize the work of the French-speaking speech community. The recipients will receive a sum of between 500 and 1000 euros and will be invited to present their work to the community during the JEP.
*** APPLICATIONS:
Anyone who defended a doctoral thesis between October 1, 2004 and December 31, 2005 may apply. One may apply to only one edition of the prize.
Upload the thesis to the AFCP website and mail the application file BEFORE MARCH 20, 2006.
1/ Upload your thesis manuscript (.pdf) to the AFCP thesis server, which gathers most French-language theses in the field.
2/ Mail a single file (.pdf) on CD or diskette to:
H. Glotin - Prix AFCP
LSIS - Université Sud Toulon Var, BP20132
83957 La Garde Cedex 20 - France
containing:
- a summary of your thesis (2 pages max),
- a list of your publications,
- scanned copies of all reports (jury and reviewers) from your thesis defense,
- a scanned letter of recommendation for this prize from your thesis advisor,
- your CV (with full contact details, including e-mail address).


COURSES, INTERNSHIPS

1st INTERNATIONAL PhD SCHOOL IN LANGUAGE AND SPEECH TECHNOLOGIES 2005-2007.

Rovira i Virgili University
Research Group on Mathematical Linguistics
Tarragona, Spain
Website of the Group
Foundational courses (April-June 2006)
Foundations of Linguistics I: Morphology, Lexicon and Syntax -- M. Dolores Jiménez-López, Tarragona
Foundations of Linguistics II: Semantics, Pragmatics and Discourse -- Gemma Bel-Enguix, Tarragona
Formal Languages -- Carlos Martín-Vide, Tarragona
Declarative Programming Languages: Prolog, Lisp -- various researchers at the host institute
Procedural Programming Languages: C, Java, Perl, Matlab -- various researchers at the host institute
Main courses (July-December 2006)
POS Tagging, Chunking, and Shallow Parsing -- Yuji Matsumoto, Nara
Empirical Approaches to Word Sense Disambiguation, Semantic Role Labeling, Semantic Parsing, and Information Extraction -- Raymond Mooney, Austin TX
Ontology Engineering: From Cognitive Science to the Semantic Web -- M. Teresa Pazienza, Roma
Anaphora Resolution in Natural Language Processing -- Ruslan Mitkov, Wolverhampton
Language Processing for Human-Machine Dialogue Modelling -- Yorick Wilks, Sheffield
Spoken Dialogue Systems -- Diane Litman, Pittsburgh PA
Natural Language Processing Pragmatics: Probabilistic Methods and User Modeling Implications -- Ingrid Zukerman, Clayton
Machine Learning Approaches to Developing Language Processing Modules -- Walter Daelemans, Antwerpen
Multimodal Speech-Based Interfaces -- Elisabeth André, Augsburg
Information Extraction -- Guy Lapalme, Montréal QC
Search Methods in Natural Language Processing -- Helmut Horacek, Saarbrücken
Optional courses (from the 5th International PhD School in Formal Languages and Applications)
Tree Adjoining Grammars -- James Rogers, Richmond IN
Unification Grammars -- Shuly Wintner, Haifa
Context-Free Grammar Parsing -- Giorgio Satta, Padua
Probabilistic Parsing -- Mark-Jan Nederhof, Groningen
Categorial Grammars -- Michael Moortgat, Utrecht
Weighted Finite-State Transducers -- Mehryar Mohri, New York NY
Finite State Technology for Linguistic Applications -- André Kempe, Xerox, Grenoble
Natural Language Processing with Symbolic Neural Networks -- Risto Miikkulainen, Austin TX
Students:
Candidate students for the programme are welcome from around the world. Most appropriate degrees include Computer Science and Linguistics, but other students (for instance, from Psychology, Logic, Engineering or Mathematics) can be accepted depending on the strengths of their undergraduate training. The first two months of class are intended to homogenize the students' varied background.
In order to check eligibility for the programme, the student must be certain that the highest university degree s/he got enables her/him to be enrolled in a doctoral programme in her/his home country.
Tuition Fees:
1,700 euros in total, approximately.
Dissertation:
After following the courses, the students enrolled in the programme will have to write and defend a research project and, later, a dissertation in English in their own area of interest, in order to get the so-called European PhD degree (which is a standard PhD degree with an additional mark of quality). All the professors in the programme will be allowed to supervise students’ work.
Funding:
During the teaching semesters, funding opportunities will be provided, among others, by the Spanish Ministry for Foreign Affairs and Cooperation (Becas MAEC), and by the European Commission (Alban scheme for Latin American citizens). Additionally, the host university will have a limited amount of economic resources itself for covering the tuition fees and full-board accommodation of a few students.
Immediately after the courses and during the writing of the PhD dissertation, some of the best students will be offered 4-year research fellowships, which will allow them to work in the framework of the host research group.
Pre-Registration Procedure:
In order to pre-register, one should post (not fax, not e-mail) to the programme chairman:
a xerocopy of the main page of the passport,
a xerocopy of the highest university education diploma,
a xerocopy of the academic record,
full CV,
letters of recommendation (optional),
any other document to prove background, interest and motivation (optional).
Schedule:
Announcement of the programme: September 12, 2005
Pre-registration deadline: November 30, 2005
Selection of students: December 7, 2005
Starting of the classes: April 18, 2006
Summer break (tentative): July 25, 2006
Re-starting of the classes (tentative): September 4, 2006
End of the classes (tentative): December 22, 2006
Defense of the research project (tentative): September 14, 2007
DEA examination (tentative): April 27, 2008
Questions and Further Information:
Please, contact the programme chairman, Carlos Martín-Vide
Postal Address:
Research Group on Mathematical Linguistics
Rovira i Virgili University
Pl. Imperial Tàrraco, 1
43005 Tarragona, Spain
Phone: +34-977-559543, +34-977-554391
Fax: +34-977-559597, +34-977-554391


BOOKS, DATABASES, SOFTWARES

PHONETICA - Journal. Editor: K. Kohler (Kiel). Publisher: Karger. Website. Special offer to ISCA members:
CHF 145.55/EUR 107.85/USD 132.25 for 2006 online or print subscription
Phonetic Science is a field increasingly accessible to experimental verification. Reflecting this development, ‘Phonetica’ is an international and interdisciplinary forum which features expert original work covering all aspects of the subject: descriptive linguistic phonetics and phonology (comprising segmental as well as prosodic phenomena) are focussed side by side with the experimental measuring domains of speech physiology, articulation, acoustics, and perception. ‘Phonetica’ thus provides an overall representation of speech communication. Papers published in this journal report both theoretical issues and empirical data.
Order Form
Please enter your ISCA member number
o online CHF 145.55/EUR 107.85/USD 132.25
o print* CHF 145.55/EUR 107.85/USD 132.25
o combined (online and print)* CHF 190.55/EUR 140.85/USD 173.25
*+ postage and handling: CHF 22.40/EUR 16.20/USD 30.40
Payment: by credit card (American Express, Diners, Visa, Eurocard) - send your card number, card type and expiration date
by check enclosed
or ask to be billed

Name/Address (please print):
Date and signature required

SPEECH and LANGUAGE ENGINEERING, Eds. M. Rajman, V. Pallotta (EPFL)
CRC Press
LA PHONETIQUE Jacqueline Vaissière
Collection Que Sais-Je.


JOB OPENINGS

We invite all laboratories and industrial companies with job offers to send them to the ISCApad editor: they will appear in the newsletter and on our website for free. (Also have a look at http://www.isca-speech.org/jobs as well as http://www.elsnet.org/Jobs)

Post-doc at IRISA, Rennes, Brittany, France

Sparse representations for audio indexing
The goal of this work is to investigate the use of new features, derived from sparse representations, for audio indexing. Most audio indexing systems rely on statistical models (Gaussian mixture models, hidden Markov models, ...) of cepstral coefficients to detect and track sound events, such as speech, music or speakers, among large masses of audio data. Fourier-based cepstra are computed on a fixed-length, short-time window, but alternative analysis horizons may bring better discriminative power [ICASSP05]. In contrast, sparse representations provide a powerful analysis framework which allows for an explicit representation of signal features at various time scales. Furthermore, specific signal features such as harmonic or chirped structures [WASPAA05] can also be represented efficiently. It is therefore believed that such representations will allow for a more efficient discrimination between the various classes of sounds that can be encountered in a mass of audio documents.
The candidate will be in charge of designing and evaluating features derived from sparse representations in various audio class tracking applications. In particular, the choice of an appropriate set of dictionaries for the decomposition of real-world audio signals is a crucial problem that should be studied. The work will rely on the Matching Pursuit ToolKit (MPTK) software and will be evaluated within several applications of audio event tracking, all developed in the METISS group.
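For readers unfamiliar with sparse decompositions, the minimal sketch below illustrates the greedy matching pursuit iteration that underlies tools such as MPTK. It is only an illustrative Python/numpy fragment with a hypothetical random dictionary and function names chosen for the example; the actual work relies on MPTK and structured, multiscale dictionaries (e.g. Gabor or harmonic atoms).

import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=50):
    # Greedy decomposition of `signal` over the columns of `dictionary`.
    # `dictionary` is an (n_samples, n_total_atoms) matrix of unit-norm atoms,
    # e.g. windowed sinusoids at several time scales (hypothetical example).
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual          # inner products with every atom
        best = int(np.argmax(np.abs(correlations)))     # most correlated atom
        coef = correlations[best]
        residual -= coef * dictionary[:, best]          # remove its contribution
        decomposition.append((best, coef))
    return decomposition, residual

# Toy usage: random unit-norm dictionary, random test signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((256, 1024))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(256)
atoms, res = matching_pursuit(x, D, n_atoms=20)
print(len(atoms), np.linalg.norm(res) / np.linalg.norm(x))

The (atom index, coefficient) pairs play the role of the multi-scale features mentioned above, and the residual norm indicates how much of the signal remains unexplained.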
Prospective candidates should be proficient in at least one of the following domains
* statistical models (hidden Markov models, Gaussian mixture models, etc.)
* automatic classification (Bayesian decision, decision trees, etc.)
* sparse signal representations (e.g. Matching Pursuit, Basis Pursuit, etc.)
* time / frequency signal analysis
and hold a Ph.D. in the area of signal processing or pattern recognition. In the latter case, knowledge of signal processing will be an asset.
References
[ICASSP05] "Discriminative Power of Transient Frames in Speaker Recognition", Jerome Louradour, Khalid Daoudi, and Regine Andre-Obrecht, in "Proc. ICASSP 2005", 2005.
[WASPAA05] "A comparison of two extensions of the Matching Pursuit Algorithm for the harmonic decomposition of sounds", Sacha Krstulovic, Remi Gribonval, Pierre Leveau and Laurent Daudet, in "Proc. WASPAA 2005", 2005
Contact
Interested candidates are invited to contact Guillaume Gravier and/or Rémi Gribonval.
Important information
This position is advertised in the framework of the national INRIA campaign for recruiting post-docs. It is a one year position, non renewable, beginning fall 2006. Gross income will be 2,150 euros per month.
Selection of candidates will be a two-step process. A first selection of a candidate will be carried out internally by the METISS group. The selected candidate's application will then be further processed for approval and funding by an INRIA committee.
Candidates must have defended their Ph.D. after May 2005 and before September 1, 2006. If the defense has not yet taken place, candidates must specify the tentative date and jury for the defense. The age limit is 40 years.

Visiting Research Positions at Research Group on Mathematical Linguistics at Rovira i Virgili University (Tarragona, Spain)

1-2 visiting research positions may be available in the Research Group on Mathematical Linguistics at Rovira i Virgili University (Tarragona, Spain). Web site of the host institute.
ELIGIBLE TOPICS
- Language and automata theory and its applications
- Biomolecular computing and nanotechnology
- Bioinformatics
- Language and speech technologies
- Formal theories of language acquisition and evolutionary linguistics
- Computational neuroscience
Other related fields might still be eligible provided there are strong enough candidates for them.
JOB PROFILE
- The positions are intended for experienced, prestigious researchers willing to develop a research project in the framework of the host institute for 3-12 months starting in 2007. Some doctoral teaching and supervising are also expected
- The positions will be filled in the form of a grant
- There is no restriction on the candidate's age
ELIGIBILITY CONDITIONS
- Having been awarded the PhD degree earlier than 2001
- Holding a stable position and being on sabbatical leave from her/his home organization
- Having got the rank of Professor or a comparable rank in industry
ECONOMIC CONDITIONS
- A nontaxable monthly allowance of 1,500-3,000 euros, depending on the researcher's merits and her/his other sources of income during the stay
- A travel allowance
- Health coverage at the researcher's request.
EVALUATION PROCEDURE
It will consist of 2 steps
- a pre-selection based on CV and carried out by the host institute
- an application by the shortlisted candidates, to be assessed externally by the funding agency, including a CV, a research project (up to 8 pages long) and a workplan.
SCHEDULE
Expressions of interest are welcome until February 19, 2006. They should simply contain the candidate's CV and mention 2006-1 in the subject line. The outcome of the pre-selection will be reported immediately after.
Pre-selected candidates will be supported in the application process by the host institute. The deadline for completing the whole process is March 5, 2006.
Final results will be available not earlier than August 2006.
CONTACT
Carlos Martin-Vide

PhD Studentship - Taiwan International Graduate Program (TIGP) on Computational Linguistics and Chinese Language Processing

The CLCLP (Computational Linguistics and Chinese Language Processing) Ph.D. Program offers an internationally competitive curriculum specializing in Chinese Computational Linguistics, and the program provides advanced training and research opportunities for leading international Ph.D. students.
Research Tracks
Corpus Linguistics and Language Archives, Information Retrieval and Information Extraction, Knowledge Representation and Acquisition, Natural Language Processing, Spoken Language Processing.
Fellowship and Stipend
Financial support will be provided to all students for 3 years in the form of an assistantship. The stipend level is NT$32,000 per month (roughly equivalent to US$11,000 annually).
Application
- Early decision deadline: January 31, 2006 (Students receive notification from the Admission Committee in March.)
- Normal application deadline: March 31, 2006 (Students receive notification from the Admission Committee in June.)
Reference Websites
- Taiwan International Graduate Program, Academia Sinica
- The TIGP-CLCLP program
- Contact e-mail Ms. Alice Lu

Doctoral Position: Supélec, Metz Campus (France)

Topic: Speech recognition using sub-phonetic units
Deadline: 28/02/2006
Team: "Signal Processing Systems" (Systèmes de Traitement du Signal, STS)
Supervisor: Olivier Pietquin
The thesis is to start as soon as possible.
Salary: approx. 1,350 euros
Subject
Today, speech recognition systems based on hidden Markov models (HMMs) combined with GMMs (Gaussian Mixture Models) or even neural networks (ANNs) are the reference in the field. These systems often rely on an intermediate process of recognizing phonetic units. In this thesis, we propose to study the use of sub-phonetic units (articulatory properties, glottal signal) in the speech recognition process.
Since these units are language-independent, we will investigate the possibility of training such recognition systems on databases recorded in different languages, or with the aim of recognizing speech produced with an accent or with pronunciation disorders.
During this study, we will address the choice of the most relevant units for the application, the choice of a robust method for recognizing the sub-phonetic units, and the integration of this method into a global speech recognition process.
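As background for readers (and not as part of the advertised thesis work), the sketch below shows one very simple way a frame-level sub-phonetic attribute detector could be prototyped: one Gaussian mixture model per class (here, hypothetically, voiced vs. unvoiced frames), compared by log-likelihood. The features, class labels and data are placeholders; Python with scikit-learn is assumed.

import numpy as np
from sklearn.mixture import GaussianMixture

def train_attribute_detector(pos_frames, neg_frames, n_components=4):
    # Fit one GMM per class on per-frame feature vectors of shape (n_frames, n_dims).
    gmm_pos = GaussianMixture(n_components=n_components, covariance_type="diag").fit(pos_frames)
    gmm_neg = GaussianMixture(n_components=n_components, covariance_type="diag").fit(neg_frames)
    return gmm_pos, gmm_neg

def detect(frames, gmm_pos, gmm_neg):
    # True where the positive class (e.g. "voiced") is more likely than the negative one.
    return gmm_pos.score_samples(frames) > gmm_neg.score_samples(frames)

# Toy usage with synthetic 13-dimensional "features" standing in for real acoustic features.
rng = np.random.default_rng(1)
voiced = rng.normal(1.0, 1.0, (500, 13))
unvoiced = rng.normal(-1.0, 1.0, (500, 13))
gmm_v, gmm_u = train_attribute_detector(voiced, unvoiced)
test = rng.normal(1.0, 1.0, (10, 13))
print(detect(test, gmm_v, gmm_u))

A bank of such detectors, one per articulatory attribute, is one conceivable starting point before integrating the attributes into a full recognition process as described above.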
Contact
Olivier Pietquin
Head of the STS team
Supélec Campus de Metz
2 rue Edouard Belin
57070 Metz
Tel : 03 87 76 47 70
Website

Post-Doctoral Position in Audio signals segmentation and Indexing - ENST (Paris)

Position
The LTCI lab (Laboratoire Traitement et Communication de l'Information), a joint research lab between CNRS and GET/Télécom Paris, is offering a postdoctoral position in audio signal segmentation and indexing, to start in September/October 2006.
Project Description
The focus of this project is on audio indexing and content-based information retrieval, especially for radiophonic audio streams. For such streams, the audio signal gathers on a single track (or file) numerous events or combinations of events (speech, music, applause, environmental noise, jingles, etc.) that are important to detect automatically. In fact, it is known that efficient speech/music segmentation leads to improved performance for speech recognition or speaker tracking. However, beyond speech/music segmentation, it is also important to consider more complex situations (speech detection on musical background, solo detection in a music performance, localisation of singing voice segments, genre or orchestration estimation, etc.). Hence, one of the main objectives of this project is to obtain an automatic segmentation of the different types of segments (speech, music, ...), including mixed segments, by developing new statistical approaches for novelty detection and content structuring, new methods for speech enhancement (or singing voice enhancement) with musical background (and vice versa for musical source identification), and new methods for audio information extraction (automatic extraction of main melody, harmony, rhythm and genre) from musical signals.
This research work will fit in the framework of several national and international collaborative projects and in the first place in the European network of excellence IST-Kspace that aims at building an open and expandable framework for collaborative research in semantic inference for semi-automatic annotation and retrieval of multimedia content.
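To make the segmentation task more concrete, the fragment below sketches a very simple novelty measure: the feature statistics of two adjacent windows are compared, and peaks of the resulting curve suggest candidate boundaries (for instance speech-to-music changes). It is only a generic Python/numpy illustration, assuming per-frame features such as MFCCs have already been extracted elsewhere; it does not describe the methods to be developed in this project.

import numpy as np

def symmetric_kl_diag(a, b, eps=1e-6):
    # Symmetric KL divergence between diagonal Gaussians fitted on feature blocks a and b.
    ma, va = a.mean(axis=0), a.var(axis=0) + eps
    mb, vb = b.mean(axis=0), b.var(axis=0) + eps
    kl_ab = 0.5 * np.sum(va / vb + (mb - ma) ** 2 / vb + np.log(vb / va) - 1.0)
    kl_ba = 0.5 * np.sum(vb / va + (ma - mb) ** 2 / va + np.log(va / vb) - 1.0)
    return kl_ab + kl_ba

def novelty_curve(features, half_window=100):
    # Boundary score at each frame, from the divergence between the windows before and after it.
    n = len(features)
    scores = np.zeros(n)
    for t in range(half_window, n - half_window):
        scores[t] = symmetric_kl_diag(features[t - half_window:t], features[t:t + half_window])
    return scores  # peaks suggest candidate segment boundaries

# Toy usage: a synthetic "MFCC" stream whose statistics change at frame 400.
rng = np.random.default_rng(2)
feats = np.vstack([rng.normal(0.0, 1.0, (400, 13)), rng.normal(2.0, 1.5, (400, 13))])
print(int(np.argmax(novelty_curve(feats))))  # expected to land near frame 400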
Candidate Profile
As minimum requirements, the candidate will have:
• A PhD in audio signal processing, speech processing, statistics, machine learning, computer science, electrical engineering, or a related discipline.
• Familiarity with audio signal processing
• Programming skills
The ideal candidate would also have:
• Experience with corpus-based methods.
• Solid experience of research work materialised by publications in conferences or/and journals
• Experience with machine learning and excitement about interdisciplinary work.
• Autonomy and excitement to work in a team
• Some musical experience
Other Information
Preferred starting date: September or October 2006
Location: LTCI / Télécom Paris, 37 rue Dareau, 75014 Paris, FRANCE
Duration: 12 months
Competitive salary
The LTCI lab is located in the heart of Paris (France), one of the culturally most exciting, diverse, and inclusive cities in the world.
Signal and Image Processing department
GET/Télécom Paris
LTCI
More information
Prof. Gaël RICHARD, phone +33 1 45 81 73 65
Prof. Yves GRENIER
Prof. Henri MAITRE

POSTDOCTORAL POSITION at LINKÖPING UNIVERSITY (SWEDEN)

A position for a postdoctoral associate is available within the Sound Technology Group, Digital Media Division, at the Department of Science and Technology (ITN), Linköping University, Campus Norrköping, Sweden.
Our research is focused on physical and perceptual models of sound sources, sound source separation and adapted signal representations.
Candidates must have a strong background in research and a completed Ph.D.
Programming skills (e.g. Matlab, C/C++ or Java) are very desirable, as well as expertise in conducting acoustic/auditory experiments.
We are especially interested in candidates with research background in the following areas:
. Auditory Scene Analysis
. Sound Processing
. Spatial Audio and Hearing
. Time-Frequency and Wavelet Representations
. Acoustics
but those with related research interests are also welcome to apply.
Inquiries and CVs must be addressed to Prof. G. Evangelista (please consult the sound technology web page in order to obtain the e-mail address)
Professor of Sound Technology
Digital Media Division
Department of Science and Technology (ITN)
Linköping Institute of Technology (LiTH) at Campus Norrköping
SE-60174 Norrköping, Sweden

EDINBURGH SPEECH SCIENCE and TECHNOLOGY (EdSST) project - PhD positions in speech science and technology

Five PhD positions funded by the European Commission under the Marie Curie Early Stage Research Training (EST) scheme are available on the Edinburgh Speech Science and Technology (EdSST) project. EdSST is an interdisciplinary research training programme that aims to close the gap between speech science and technology, focussing on a number of overlapping research areas, each of which includes components from speech science and speech technology:
* Articulatory instrumentation and modelling
* Speech synthesis
* Speech recognition
* Human-computer dialogue systems
* Inclusive design
* Augmentative and alternative communication
For further details see: Research Website
You should have a first or upper second class honours degree or its equivalent, and/or a Masters degree, in Informatics or Linguistics. Informatics includes areas such as Artificial Intelligence, Cognitive Science, Computer Science, Information Engineering, and Computational Linguistics. Linguistics includes areas such as Phonetics, Speech Science, Speech and Language Therapy, and Human Communication Sciences. Applicants with degrees in the following disciplines will also be considered: Electrical Engineering, Psychology, Mathematics, Philosophy, and Physics.
You must also fulfil European Union Marie Curie EST selection criteria.
EdSST Fellows will be expected to register for a PhD with either the University of Edinburgh or QMUC, depending on PhD topic.
Application details and further information: Information Website

PhD Studentship on 'Communicative/Expressive Speech Synthesis' at UNIVERSITY of SHEFFIELD

Recent years have seen a substantial growth in the capabilities of Speech Technology systems, both in the research laboratory and in the commercial marketplace. However, despite this progress, contemporary speech technology is not able to fulfil the requirements demanded by many potential applications, and performance is still significantly short of the capabilities exhibited by human talkers and listeners, especially in interactive real-world environments.
This shortfall is especially noticeable in the 'text-to-speech' (TTS) systems that have been developed for automated spoken language output. Considerable advances have been made in naturalness and voice quality, yet state-of-the-art TTS systems still exhibit a rather limited range of speaking styles, a general lack of expressiveness and restricted communicative functionality.
The objective of this research is to investigate novel approaches to text-to-speech synthesis that have the potential to overcome these limitations, and which could contribute to the next-generation of speech-based systems, especially in application areas such as assistive technology.
Funding is available immediately for an eligible UK/EU student. Applicants should possess a computational background and should ideally have some knowledge/experience of speech processing.
Thesis Supervisor: Prof. Roger K. Moore
For further information, contact Prof. Roger Moore or see our website for how to apply.
The Speech and Hearing research group in Computer Science at the University of Sheffield has an international reputation in the multi-disciplinary field of speech and hearing research. With three chairs, four faculty, five research associates and around twelve research students, this is one of the strongest teams worldwide. A unique aspect of the group is the wide spectrum of research topics covered, from the psychophysics of hearing through to the engineering of state-of-the-art speech technology systems.

JOURNALS

Call for papers for a Special Issue of Speech Communication:
"Bridging the Gap Between Human and Automatic Speech Processing"

This special issue of Speech Communication is entirely devoted to studies that seek to bridge the gap between human and automatic speech recognition. It follows the special session at INTERSPEECH 2005 on the same topic.
Schedule
announcement sent out in January and February 2006
submission date: April 30 2006
papers out for review: May 7 2006
first round of reviews in: June 30 2006
notification of acceptance/revisions/rejections: July 7 2006
revisions due: August 15, 2006
notification of acceptance: August 30 2006
final manuscript due: September 30 2006
tentative publication date: December 2006
Topics
Papers are invited that cover one or several of the following issues:
- quantitative comparisons of human and automatic speech processing capabilities, especially under varying environmental conditions
- computational approaches to modelling human speech perception
- use of automatic speech processing as an experimental tool in human speech perception research
- speech perception/production-inspired modelling approaches for speech recognition, speaker/language recognition, speaker tracking, sound source separation
- use of perceptually motivated models for providing rich transcriptions of speech signals (i.e. annotations going beyond the word, such as emotion, attitude, speaker characteristics, etc.)
- fine phonetic detail: how should we envisage the design and evaluation of computational models of the relation between fine phonetic details in the signal, on the one hand, and key effects in (human) speech processing, on the other?
- how can advanced detectors for articulatory-phonetic features be integrated into computational models of human speech processing?
- the influence of speaker recognition on speech processing
Papers must be submitted by April 30, 2006 via the Elsevier website. During submission, mention that you are submitting to the Special Issue on "Bridging the Gap..." in the paper section/category or in the author comments, and request Julia Hirschberg as managing editor for the paper.
Guest editors:
Katrin Kirchhoff
Department of Electrical Engineering
University of Washington
Box 352500
Seattle, WA, 98195
(206) 616-5494

Louis ten Bosch
Dept. of Language and Speech
Radboud University Nijmegen
Post Box 9103
6500 HD Nijmegen
+31 24 3616069

Papers accepted for FUTURE PUBLICATION in Speech Communication

Full text available at http://www.sciencedirect.com/ for Speech Communication subscribers and subscribing institutions. Click on Publications, then on Speech Communication, then on Articles in Press. The list of papers in press is displayed, and a .pdf file of each paper is available.

Jan Stadermann and Gerhard Rigoll, Hybrid NN/HMM acoustic modeling techniques for distributed speech recognition, Speech Communication, In Press, Uncorrected Proof, Available online 3 March 2006 (Website). Keywords: Distributed speech recognition; Tied-posteriors; Hybrid speech recognition

Gerasimos Xydas and Georgios Kouroupetroglou, Tone-Group F0 selection for modeling focus prominence in small-footprint speech synthesis, Speech Communication, In Press, Uncorrected Proof, Available online 2 March 2006 (Website). Keywords: Text-to-speech synthesis; Tone-Group unit-selection; Intonation and emphasis in speech synthesis

Antonio Cardenal-López, Carmen García-Mateo and Laura Docío-Fernández, Weighted Viterbi decoding strategies for distributed speech recognition over IP networks, Speech Communication, In Press, Uncorrected Proof, Available online 28 February 2006 (Website). Keywords: Distributed speech recognition; Weighted Viterbi decoding; Missing data

Felicia Roberts, Alexander Francis and Melanie Morgan, The interaction of inter-turn silence with prosodic cues in listener perceptions of "trouble" in conversation, Speech Communication, In Press, Uncorrected Proof, Available online 28 February 2006 (Website). Keywords: Silence; Prosody; Pausing; Human conversation; Word duration

Ismail Shahin, Enhancing speaker identification performance under the shouted talking condition using second-order circular hidden Markov models, Speech Communication, In Press, Corrected Proof, Available online 14 February 2006 (Website). Keywords: First-order left-to-right hidden Markov models; Neutral talking condition; Second-order circular hidden Markov models; Shouted talking condition

A. Borowicz, M. Parfieniuk and A.A. Petrovsky, An application of the warped discrete Fourier transform in the perceptual speech enhancement, Speech Communication, In Press, Corrected Proof, Available online 10 February 2006 (Website). Keywords: Speech enhancement; Warped discrete Fourier transform; Perceptual processing

Pushkar Patwardhan and Preeti Rao, Effect of voice quality on frequency-warped modeling of vowel spectra, Speech Communication, In Press, Corrected Proof, Available online 3 February 2006 (Website). Keywords: Voice quality; Spectral envelope modeling; Frequency warping; All-pole modeling; Partial loudness

Jinfu Ni and Keikichi Hirose, Quantitative and structural modeling of voice fundamental frequency contours of speech in Mandarin, Speech Communication, In Press, Corrected Proof, Available online 26 January 2006 (Website). Keywords: Prosody modeling; F0 contours; Tone; Intonation; Tone modulation; Resonance principle; Analysis-by-synthesis; Tonal languages

Francisco Campillo Díaz and Eduardo Rodríguez Banga, A method for combining intonation modelling and speech unit selection in corpus-based speech synthesis systems, Speech Communication, In Press, Corrected Proof, Available online 24 January 2006 (Website). Keywords: Speech synthesis; Unit selection; Corpus-based; Intonation

Jean-Baptiste Maj, Liesbeth Royackers, Jan Wouters and Marc Moonen, Comparison of adaptive noise reduction algorithms in dual microphone hearing aids, Speech Communication, In Press, Corrected Proof, Available online 24 January 2006 (Website). Keywords: Adaptive beamformer; Adaptive directional microphone; Calibration; Noise reduction algorithms; Hearing aids

Roberto Togneri and Li Deng, A state-space model with neural-network prediction for recovering vocal tract resonances in fluent speech from Mel-cepstral coefficients, Speech Communication, In Press, Corrected Proof, Available online 24 January 2006 (Website). Keywords: Vocal tract resonance; Tracking; Cepstra; Neural network; Multi-layer perceptron; EM algorithm; Hidden dynamics; State-space model

T. Nagarajan and H.A. Murthy, Language identification using acoustic log-likelihoods of syllable-like units, Speech Communication, In Press, Corrected Proof, Available online 19 January 2006 (Website). Keywords: Language identification; Syllable; Incremental training

Yasser Ghanbari and Mohammad Reza Karami-Mollaei, A new approach for speech enhancement based on the adaptive thresholding of the wavelet packets, Speech Communication, In Press, Corrected Proof, Available online 19 January 2006 (Website). Keywords: Speech processing; Speech enhancement; Wavelet thresholding; Noisy speech recognition

Mohammad Ali Salmani-Nodoushan, A comparative sociopragmatic study of ostensible invitations in English and Farsi, Speech Communication, In Press, Corrected Proof, Available online 11 January 2006 (Website). Keywords: Ostensible invitations; Politeness; Speech act theory; Pragmatics; Face threatening acts

Laurent Benaroya, Frédéric Bimbot, Guillaume Gravier and Rémi Gribonval, Experiments in audio source separation with one sensor for robust speech recognition, Speech Communication, In Press, Corrected Proof, Available online 19 December 2005 (Website). Keywords: Noise suppression; Source separation; Speech enhancement; Speech recognition

Naveen Srinivasamurthy, Antonio Ortega and Shrikanth Narayanan, Efficient scalable encoding for distributed speech recognition, Speech Communication, In Press, Corrected Proof, Available online 19 December 2005 (Website). Keywords: Distributed speech recognition; Scalable encoding; Multi-pass recognition; Joint coding-classification

Leigh D. Alsteris and Kuldip K. Paliwal, Further intelligibility results from human listening tests using the short-time phase spectrum, Speech Communication, In Press, Corrected Proof, Available online 5 December 2005 (Website). Keywords: Short-time Fourier transform; Phase spectrum; Magnitude spectrum; Speech perception; Overlap-add procedure; Automatic speech recognition; Feature extraction; Group delay function; Instantaneous frequency distribution

Luis Fernando D'Haro, Ricardo de Córdoba, Javier Ferreiros, Stefan W. Hamerich, Volker Schless, Basilis Kladis, Volker Schubert, Otilia Kocsis, Stefan Igel and José M. Pardo, An advanced platform to speed up the design of multilingual dialog applications for multiple modalities, Speech Communication, In Press, Corrected Proof, Available online 5 December 2005 (Website). Keywords: Automatic dialog systems generation; Dialog management tools; Multiple modalities; Multilinguality; XML; VoiceXML

Ben Milner and Xu Shao, Clean speech reconstruction from MFCC vectors and fundamental frequency using an integrated front-end, Speech Communication, In Press, Corrected Proof, Available online 21 November 2005 (Website). Keywords: Distributed speech recognition; Speech reconstruction; Sinusoidal model; Source-filter model; Fundamental frequency estimation; Auditory model

Min Chu, Yong Zhao and Eric Chang, Modeling stylized invariance and local variability of prosody in text-to-speech synthesis, Speech Communication, In Press, Corrected Proof, Available online 18 November 2005 (Website). Keywords: Prosody; Stylized invariance; Local variability; Soft prediction; Unit selection; Text-to-speech

Stephen So and Kuldip K. Paliwal, Scalable distributed speech recognition using Gaussian mixture model-based block quantisation, Speech Communication, In Press, Corrected Proof, Available online 18 November 2005 (Website). Keywords: Distributed speech recognition; Gaussian mixture models; Block quantisation; Aurora-2

Junho Park and Hanseok Ko, Achieving a reliable compact acoustic model for embedded speech recognition system with high confusion frequency model handling, Speech Communication, In Press, Corrected Proof, Available online 11 November 2005 (Website). Keywords: Tied-mixture HMM; Compact acoustic modeling; Embedded speech recognition system

Amalia Arvaniti, D. Robert Ladd and Ineke Mennen, Phonetic effects of focus and "tonal crowding" in intonation: Evidence from Greek polar questions, Speech Communication, In Press, Corrected Proof, Available online 26 October 2005 (Website). Keywords: Intonation; Focus; Tonal alignment; Phrase accent; Tonal crowding

Dimitrios Dimitriadis and Petros Maragos, Continuous energy demodulation methods and application to speech analysis, Speech Communication, In Press, Corrected Proof, Available online 25 October 2005 (Website). Keywords: Nonstationary speech analysis; Energy operators; AM-FM modulations; Demodulation; Gabor filterbanks; Feature distributions; ASR; Robust features; Nonlinear speech analysis

Daniel Recasens and Aina Espinosa, Dispersion and variability of Catalan vowels, Speech Communication, In Press, Corrected Proof, Available online 24 October 2005 (Website). Keywords: Vowels; Catalan; Schwa; Vowel spaces; Contextual and non-contextual variability for vowels; Acoustic analysis; Electropalatography

Cynthia G. Clopper and David B. Pisoni, The Nationwide Speech Project: A new corpus of American English dialects, Speech Communication, In Press, Corrected Proof, Available online 21 October 2005 (Website). Keywords: Speech corpus; Dialect variation; American English

Diane J. Litman and Kate Forbes-Riley, Recognizing student emotions and attitudes on the basis of utterances in spoken tutoring dialogues with both human and computer tutors, Speech Communication, In Press, Corrected Proof, Available online 19 October 2005 (Website). Keywords: Emotional speech; Predicting user state via machine learning; Prosody; Empirical study relevant to adaptive spoken dialogue systems; Tutorial dialogue systems

Carsten Meyer and Hauke Schramm, Boosting HMM acoustic models in large vocabulary speech recognition, Speech Communication, In Press, Corrected Proof, Available online 19 October 2005 (Website). Keywords: Boosting; AdaBoost; Machine learning; Acoustic model training; Spontaneous speech; Automatic speech recognition

SungHee Kim, Robert D. Frisina, Frances M. Mapes, Elizabeth D. Hickman and D. Robert Frisina, Effect of age on binaural speech intelligibility in normal hearing adults, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005 (Website). Keywords: Age; Presbycusis; HINT; Speech intelligibility in noise

Tong Zhang, Mark Hasegawa-Johnson and Stephen E. Levinson, Cognitive state classification in a spoken tutorial dialogue system, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005 (Website). Keywords: Intelligent tutoring system; User affect recognition; Spoken language processing

Mark D. Skowronski and John G. Harris, Applied principles of clear and Lombard speech for automated intelligibility enhancement in noisy environments, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005 (Website). Keywords: Clear speech; Speech enhancement; Energy redistribution

Marcos Faundez-Zanuy, Speech coding through adaptive combined nonlinear prediction, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005 (Website). Keywords: Speech coding; Nonlinear prediction; Neural networks; Data fusion

Praveen Kakumanu, Anna Esposito, Oscar N. Garcia and Ricardo Gutierrez-Osuna, A comparison of acoustic coding models for speech-driven facial animation, Speech Communication, In Press, Corrected Proof, Available online 17 October 2005 (Website). Keywords: Speech-driven facial animation; Audio-visual mapping; Linear discriminants analysis

Atsushi Fujii, Katunobu Itou and Tetsuya Ishikawa, LODEM: A system for on-demand video lectures, Speech Communication, In Press, Corrected Proof, Available online 27 September 2005 (Website). Keywords: Cross-media retrieval; Speech recognition; Spoken document retrieval; Adaptation; Lecture video

Marián Képesi and Luis Weruaga, Adaptive chirp-based time-frequency analysis of speech signals, Speech Communication, In Press, Corrected Proof, Available online 21 September 2005 (Website). Keywords: Time-frequency analysis; Harmonically related chirps; Fan-Chirp transform

Hauke Schramm, Xavier Aubert, Bart Bakker, Carsten Meyer and Hermann Ney, Modeling spontaneous speech variability in professional dictation, Speech Communication, In Press, Corrected Proof, Available online 19 September 2005 (Website). Keywords: Automatic speech recognition; Spontaneous speech modeling; Pronunciation modeling; Rate of speech modeling; Filled pause modeling; Model combination

Giampiero Salvi, Dynamic behaviour of connectionist speech recognition with strong latency constraints, Speech Communication, In Press, Corrected Proof, Available online 14 June 2005 (Website). Keywords: Speech recognition; Neural network; Low latency; Non-linear dynamics

Christopher Dromey, Shawn Nissen, Petrea Nohr and Samuel G. Fletcher, Measuring tongue movements during speech: Adaptation of a magnetic jaw-tracking system, Speech Communication, In Press, Corrected Proof, Available online 14 June 2005 (Website). Keywords: Tongue; Movement; Measurement; Magnetic; Kinematic

Erhard Rank and Gernot Kubin, An oscillator-plus-noise model for speech synthesis, Speech Communication, In Press, Corrected Proof, Available online 21 April 2005 (Website). Keywords: Non-linear time-series; Oscillator model; Speech production; Noise modulation

Kevin M. Indrebo, Richard J. Povinelli and Michael T. Johnson, Sub-banded reconstructed phase spaces for speech recognition, Speech Communication, In Press, Corrected Proof, Available online 24 February 2005 (Website). Keywords: Speech recognition; Dynamical systems; Nonlinear signal processing; Sub-bands


FUTURE CONFERENCES

Publication policy: Hereunder, you will find very short announcements of future events. The full call for participation can be accessed on the conference websites.
See also our Web pages (http://www.isca-speech.org/) on conferences and workshops.

FUTURE INTERSPEECH CONFERENCES

Call for papers - INTERSPEECH 2006 - ICSLP
INTERSPEECH 2006 - ICSLP, the Ninth International Conference on Spoken Language Processing dedicated to the interdisciplinary study of speech science and language technology, will be held in Pittsburgh, Pennsylvania, September 17-21, 2006, under the sponsorship of the International Speech Communication Association (ISCA).
The INTERSPEECH meetings are considered to be the top international conference in speech and language technology, with more than 1000 attendees from universities, industry, and government agencies. They are unique in that they bring together faculty and students from universities with researchers and developers from government and industry to discuss the latest research advances, technological innovations, and products. The conference offers the prospect of meeting the future leaders of our field, exchanging ideas, and exploring opportunities for collaboration, employment, and sales through keynote talks, tutorials, technical sessions, exhibits, and poster sessions. In recent years the INTERSPEECH meetings have taken place in a number of exciting venues including most recently Lisbon, Jeju Island (Korea), Geneva, Denver, Aalborg (Denmark), and Beijing.
ISCA, together with the INTERSPEECH 2006 - ICSLP organizing committee, would like to encourage submission of papers for the upcoming conference in the following
TOPICS of INTEREST
Linguistics, Phonetics, and Phonology
Prosody
Discourse and Dialog
Speech Production
Speech Perception
Physiology and Pathology
Paralinguistic and Nonlinguistic Information (e.g. Emotional Speech)
Signal Analysis and Processing
Speech Coding and Transmission
Spoken Language Generation and Synthesis
Speech Recognition and Understanding
Spoken Dialog Systems
Single-channel and Multi-channel Speech Enhancement
Language Modeling
Language and Dialect Identification
Speaker Characterization and Recognition
Acoustic Signal Segmentation and Classification
Spoken Language Acquisition, Development and Learning
Multi-Modal Processing
Multi-Lingual Processing
Spoken Language Information Retrieval
Spoken Language Translation
Resources and Annotation
Assessment and Standards
Education
Spoken Language Processing for the Challenged and Aged
Other Applications
Other Relevant Topics
SPECIAL SESSIONS
In addition to the regular sessions, a series of special sessions has been planned for the meeting. Potential authors are invited to submit papers for special sessions as well as for regular sessions, and all papers in special sessions will undergo the same review process as papers in regular sessions. Confirmed special sessions and their organizers include:
* The Speech Separation Challenge, Martin Cooke (Sheffield) and Te-Won Lee (UCSD)
* Speech Summarization, Jean Carletta (Edinburgh) and Julia Hirschberg (Columbia)
* Articulatory Modeling, Eric Bateson (University of British Columbia)
* Visual Intonation, Marc Swerts (Tilburg)
* Spoken Dialog Technology R&D, Roberto Pieraccini (Tell-Eureka)
* The Prosody of Turn-Taking and Dialog Acts, Nigel Ward (UTEP) and Elizabeth Shriberg (SRI and ICSI)
* Speech and Language in Education, Patti Price (pprice.com) and Abeer Alwan (UCLA)
* From Ideas to Companies, Janet Baker (formerly of Dragon Systems)
PAPER SUBMISSION
The deadline for submission of 4-page full papers is April 7, 2006. Paper submission will be exclusively through the conference website, using submission guidelines to be provided. Previously-published papers should not be submitted. The corresponding author will be notified by e-mail of the paper status by June 9, 2006. Minor updates will be allowed from June 10 to June 16, 2006.
CALL FOR TUTORIAL PROPOSALS
We encourage proposals for three-hour tutorials to be held on September 17, 2006. Those interested in organizing a tutorial should send a 2-3 page description by electronic mail, in plain ASCII (iso8859-1) text, as soon as possible, but no later than January 31, 2006.
Proposals for tutorials should contain the following information:
* Title of the tutorial
* Summary and relevance
* Description of contents and course material
* The names, postal addresses, phone numbers, and email addresses of the tutorial speakers, with a one-paragraph statement describing the research interests and areas of expertise of the speaker(s)
* Any special requirements for technical needs (display projector, computer infrastructure, etc.)
IMPORTANT DATES
Four-page paper deadline: April 7, 2006
Notification of paper status: June 9, 2006
Early registration deadline: June 23, 2006
Tutorial Day: September 17, 2006
Main Conference: September 18-21, 2006
Further information via Website or send email
Organizer
Professor Richard M. Stern (General Chair)
Carnegie Mellon University
Electrical Engineering and Computer Science
5000 Forbes Avenue
Pittsburgh, PA 15213-3890
Fax: +1 412 268-3890
Email

INTERSPEECH 2007-EUROSPEECH
August 27-31, 2007, Antwerp, Belgium
Chair: Dirk van Compernolle, K.U.Leuven and Lou Boves, K.U.Nijmegen
Website

INTERSPEECH 2008-ICSLP
September 22-26, 2008, Brisbane, Queensland, Australia
Chairman: Denis Burnham, MARCS, University of Western Sydney.


FUTURE ISCA TUTORIAL AND RESEARCH WORKSHOP (ITRW)

ITRW on Multilingual Speech and Language Processing (MULTILING 2006)

Organized by: Stellenbosch University Centre for Language and Speech Technology
in collaboration with ISCA
9-11 April 2006, Stellenbosch, South Africa
Keynote speaker: Tanja Schultz - Interactive Systems Laboratories, Carnegie Mellon University
Important dates:
Deadline for abstract submission: 12 September 2005
Notification of acceptance: 14 October 2005
Deadline for early registration & full paper submission: 10 February 2006
Workshop dates: 9-11 April 2006
Contact: Justus Roux or consult the workshop website

ITRW on Speech Recognition and Intrinsic Variation (SRIV) - Toulouse, France

May 20th 2006, Toulouse, France
Satellite of ICASSP-2006
Website
Email address
PDF
Topics
- Accented speech modeling and recognition,
- Children's speech modeling and recognition,
- Non-stationarity and relevant analysis methods,
- Speech spectral and temporal variations,
- Spontaneous speech modeling and recognition,
- Speech variation due to emotions,
- Speech corpora covering sources of variation,
- Acoustic-phonetic correlates of variations,
- Impact and characterization of speech variations on ASR,
- Speaker adaptation and adapted training,
- Novel analysis and modeling structures,
- Man/machine confrontation: ASR and HSR (human speech recognition),
- Diagnosis of speech recognition models,
- Intrinsic variations in multimodal recognition,
- Application and services scenarios involving strong speech variations
Review papers on these topics are also welcome.
Important dates
Submission deadline: Feb. 1, 2006
Notification of acceptance: Mar. 1, 2006
Final manuscript due: Mar. 15, 2006
Program available: Mar. 22, 2006
Registration deadline: Mar. 29, 2006
Workshop: May 20, 2006 (after ICASSP 2006)
Workshop
This event is organized as a satellite of the ICASSP 2006 conference. The workshop will take place in Toulouse, on 20 May 2006, just after the conference, which ends May 19. The workshop will consist of oral and poster sessions, as well as talks by guest speakers.
More information
Website
Email address
PDF

ITRW on Experimental Linguistics

28-30 August 2006, Athens, Greece
CALL FOR PAPERS
AIMS
The general aims of the Workshop are to bring together researchers of linguistics and related disciplines in a unified context as well as to discuss the development of experimental methodologies in linguistic research with reference to linguistic theory, linguistic models and language applications.
SUBJECTS AND RELATED DISCIPLINES
1. Theory of language
2. Cognitive linguistics
3. Neurolinguistics
4. Speech production
5. Speech acoustics
6. Phonology
7. Morphology
8. Syntax
9. Prosody
10. Speech perception
11. Psycholinguistics
12. Pragmatics
13. Semantics
14. Discourse linguistics
15. Computational linguistics
16. Language technology
MAJOR TOPICS
I. Lexicon
II. Sentence
III. Discourse
IMPORTANT DATES
1 February 2006, deadline of abstract submission
1 March 2006, notification of acceptance
1 April 2006, registration
1 May 2006, camera ready paper submission
28-30 August 2006, Workshop
CHAIR
Antonis Botinis, University of Athens, Greece
Marios Fourakis, University of Wisconsin-Madison, USA
Barbara Gawronska, University of Skövde, Sweden
ORGANIZING COMMITTEE
Aikaterini Bakakou-Orphanou, University of Athens
Antonis Botinis, University of Athens
Christoforos Charalambakis, University of Athens
SECRETARIAT
ISCA Workshop on Experimental Linguistics
Department of Linguistics
University of Athens
GR-15784, Athens GREECE
Tel.: +302107277668
Fax: +302107277029
e-mail
Workshop site address

2nd ITRW on PERCEPTUAL QUALITY OF SYSTEMS

Berlin, Germany, 4 - 6 September 2006
WORKSHOP AIMS
The quality of systems which address human perception is difficult to describe. Since quality is not an inherent property of a system, users have to decide on what is good or bad in a specific situation. An engineering approach to quality includes the consideration of how a system is perceived by its users, and how the needs and expectations of the users develop. Thus, quality assessment and prediction have to take the relevant human perception and judgement factors into account. Although significant progress has been made in several areas affecting quality within the last two decades, there is still no consensus on the definition of quality and its contributing components, as well as on assessment, evaluation and prediction methods.
Perceptual quality is attributed to all systems and services which involve human perception. Telecommunication services directly provoke such perceptions: Speech communication services (telephone, Voice over IP), speech technology (synthesis, spoken dialogue systems), as well as multimodal services and interfaces (teleconference, multimedia on demand, mobile phones, PDAs). However, the situation is similar for the perception of other products, like machines, domestic devices, or cars. An integrated view on system quality makes use of knowledge gained in different disciplines and may therefore help to find general underlying principles. This will assist the increase of usability and perceived quality of systems and services, and finally yield better acceptance.
The workshop is intended to provide an interdisciplinary exchange of ideas between both academic and industrial researchers working on different aspects of perceptual quality of systems. Papers are invited which refer to methodological aspects of quality and usability assessment and evaluation, the underlying perception and judgment processes, as well as to particular technologies, systems or services. Perception-based as well as instrumental approaches will complement each other in giving a broader picture of perceptual quality. It is expected that this will help technology providers to develop successful, high-quality systems and services.
WORKSHOP TOPICS
The following non-exhaustive list gives examples of topics which are relevant for the workshop, and for which papers are invited:
- Methodologies and Methods of Quality Assessment and Evaluation
- Metrology: Test Design and Scaling
- Quality of Speech and Music
- Quality of Multimodal Perception
- Perceptual Quality vs. Usability
- Semio-Acoustics and -Perception
- Quality and Usability of Speech Technology Devices
- Telecommunication Systems and Services
- Multi-Modal User Interfaces
- Virtual Reality
- Product-Sound Quality
IMPORTANT DATES
April 15, 2006 (updated): Abstract submission (approx. 800 words)
May 15, 2006: Notification of acceptance
June 15, 2006: Submission of the camera-ready paper (max. 6 pages)
September 4-6, 2006: Workshop
WORKSHOP VENUE
The workshop will take place in the "Harnack-Haus", a villa-like conference center located in the quiet western part of Berlin, near the Free University. As long as space permits, all participants will be accommodated in this center. Accommodation and meals are included in the workshop fees. The center is run by the Max-Planck-Gesellschaft and can easily be reached from all three airports of Berlin (Tegel/TXL, Schönefeld/SXF and Tempelhof/THF). Details on the venue, accommodation and transportation will be announced soon.
PROCEEDINGS
CD workshop proceedings will be available upon registration at the conference venue and subsequently on the workshop web site.
LANGUAGE
The official language of the workshop will be English.
LOCAL WORKSHOP ORGANIZATION
Ute Jekosch (IAS, Technical University of Dresden)
Sebastian Möller (Deutsche Telekom Labs, Technical University of Berlin)
Alexander Raake (Deutsche Telekom Labs, Technical University of Berlin)
CONTACT INFORMATION
Sebastian Möller, Deutsche Telekom Labs, Ernst-Reuter-Platz 7,
D-10587 Berlin, Germany
phone +49 30 8353 58465, fax +49 30 8353 58409
Website

ITRW on Statistical and Perceptual Audition (2006)

A satellite workshop of INTERSPEECH 2006 -ICSLP
September 16, 2006, Pittsburgh, PA, USA
Website
This will be a one-day workshop with a limited number of oral presentations, chosen for breadth and provocation, and an informal atmosphere to promote discussion. We hope that the participants in the workshop will be exposed to a broader perspective, and that this will help foster new research and interesting variants on current approaches.
Topics
Generalized audio analysis
Speech analysis
Music analysis
Audio classification
Scene analysis
Signal separation
Speech recognition
Multi-channel analysis
In all cases, preference will be given to papers that clearly involve both perceptually-defined or perceptually-related problems, and statistical or machine-learning based solutions.
Important dates
Submission of 4-6 page papers (double column): April 21, 2006
Notification of acceptance: June 9, 2006

NOLISP'07: Non-linear Speech Processing

May 22-25, 2007, Paris, France

6th ISCA Speech Synthesis Research Workshop (SSW-6)

Bonn (Germany), August 22-24, 2007
A satellite of INTERSPEECH 2007 (Antwerp), in collaboration with SynSIG
Details will be posted by early 2007
Contact
Prof. Wolfgang Hess

ITRW on Robustness

November 2007, Santiago, Chile

top

FORTHCOMING EVENTS SUPPORTED (but not organized) by ISCA

2nd INTERNATIONAL CONFERENCE ON TONAL ASPECTS OF LANGUAGES TAL 2006

La Rochelle (France) April 27-29th, 2006
CALL FOR PAPERS
Jointly organised by La Rochelle University and Paris 3 University (Phonetics & Phonology laboratory, UMR 7018 CNRS).
Satellite conference of PROSODY 2006, to be held in Dresden (Germany) on May 2-5, 2006.
The aim of the TAL 2006 conference is to bring together researchers interested in all areas of tone languages.
The conference welcomes papers on the following topics:
- typology and phonology of tone languages
- acquisition of tone languages
- speech physiology and pathology in tone languages
- tone production
- perception in tone languages
- prosody of tone languages
- modelling of tones and intonation
- speech processing in tone languages
- cognitive aspects of tone languages
- others
PAPER SUBMISSION
The deadline for full paper submission (4 pages, 2 columns, single-spaced, Times New Roman 10 points) is January 15, 2006.
Papers can be submitted exclusively via the conference website, in accordance with the submission guidelines. No previously published papers should be submitted.
Each corresponding author will be notified by e-mail of the acceptance of the paper by January 31, 2006.
IMPORTANT DATES
Intention of participation: before October 30, 2005
Full paper submission deadline: January 15, 2006
Notification of paper acceptance/rejection: February 1st, 2006
Early registration deadline: February 28, 2006
Final paper: March 31, 2006
INFORMATION
If you want to be updated as more information becomes available, please send an email.

SPEECH PROSODY 2006

International Conference on Speech Prosody
May 2-5 2006
International Congress Center, Dresden, Germany
For further information, visit our website
Topics
We invite contributions in any of the following areas and also appreciate suggestions for Special Sessions:
* Prosody and the Brain
* Prosody and Speech Production
* Analysis, Formulation and Modeling of Prosody
* Syntax, Semantics, Pragmatics and Prosody
* Cross-linguistic Studies of Prosody
* Prosodic Variability
* Prosody of Dialogues and Spontaneous Speech
* Prosody and Affect
* Prosody and Speech Perception
* Prosody in Speech Synthesis
* Prosody in Speech Recognition and Understanding
* Prosody in Language Learning
* Auditory-Visual Production and Perception of Prosody
* Pathology of Prosody and Aids for the Impaired
* Annotation and Speech Corpus Creation
* Others
Organizing Committee:
Ruediger Hoffmann - Chair
Hansjoerg Mixdorff - Program Chair
Oliver Jokisch - Technical Chair
Important Dates:
Proposals for special sessions: November 11, 2005
Full 4-page paper submission: December 31, 2005
Advanced registration deadline: February 28, 2006
Conference: May 2-5, 2006
Post-conference day: May 6, 2006

2nd Workshop on Multimodal User Authentication

A satellite conference of ICASSP 2006 in Toulouse, France.
May 11-12, 2006
Workshop website
Topics
Iris identification
Eye and face analysis
Speaker recognition/verification
Fingerprint recognition
Audio/Image indexing and retrieval
Joint audio/video processing
Gesture analysis
Signature recognition
Multimodal Fusion and Integration Techniques for Authentication
Intelligent interfaces for biometric systems and databases and tools for system evaluation
Applications and implementations of multimodal user authentication systems
Privacy issues and standards
Important dates
Electronic submission of photo-ready paper: January 15, 2006
Notification of acceptance: March 8, 2006
Advance registration: before March 15, 2006
Final papers due: March 15, 2006

5th SALTMIL Workshop on Minority Languages

Strategies for developing machine translation for minority languages
Tuesday May 23rd 2006 (morning)
Magazzini del Cotone Conference Centre, Genoa, Italy
Organised in conjunction with LREC 2006: Fifth International Conference on Language Resources and Evaluation, Genoa, Italy, 24-26 May 2006
This workshop continues the series of LREC workshops organized by SALTMIL (the ISCA Special Interest Group for Speech And Language Technology for Minority Languages).
Format
The workshop will begin with the following talks from invited speakers:
* Lori Levin (Carnegie Mellon University, USA): "Omnivorous MT: Using whatever resources are available."
* Anna Sågvall Hein (University of Uppsala, Sweden): "Approaching new languages in machine translation."
* Hermann Ney (Rheinisch-Westfälische Technische Hochschule, Aachen, Germany): "Statistical Machine Translation with and without a bilingual training corpus"
* Delyth Prys (University of Wales, Bangor): "The BLARK matrix and its relation to the language resources situation for the Celtic languages."
* Daniel Yacob (Ge'ez Frontier Foundation) "Unicode Development for Under-Resourced Languages".
* Mikel Forcada (Universitat d’Alacant, Spain): "Open source machine translation: an opportunity for minor languages"
These talks will be followed by a poster session with contributed papers.
Papers
Papers are invited that describe research and development in the following areas:
* The BLARK (Basic Language Resource Kit) matrix at ELDA, and how it relates to minority languages.
* The advantages and disadvantages of different corpus-based strategies for developing MT, with reference to a) speed of development, and b) level of researcher expertise required.
* What open-source or free language resources are available for developing MT?
* Existing resources for minority languages, with particular emphasis on software tools that have been found useful.
All contributed papers will be presented in poster format. All contributions will be included in the workshop proceedings (CD). They will also be published on the SALTMIL website.
Important dates
* Abstract submission: February 27, 2006
* Notification of acceptance: March 13, 2006
* Final version of paper: April 10, 2006
* Workshop: May 23, 2006 (morning)
Submissions
Abstracts should be in English, and up to four pages long. The submission format is PDF. Papers will be reviewed by members of the programme committee. The reviews are not anonymous. Accepted papers may be up to 6 pages long. The final full papers should be in the format specified for the LREC proceedings. Each submitted abstract should include: title; author(s); affiliation(s), together with the contact author's e-mail address, postal address, telephone and fax numbers.
Abstracts should be submitted online in PDF format at: http://www.easychair.org/SALTMIL2006. The deadline for submission is February 27th.
Programme committee
* Briony Williams (University of Wales, Bangor, UK): Programme Chair
* Kepa Sarasola (University of the Basque Country)
* Bojan Petek (University of Ljubljana, Slovenia)
* Julie Berndsen (University College Dublin, Ireland)
* Atelach Alemu Argaw (University of Stockholm, Sweden)

HLT-NAACL 2006 Call for Demos

2006 Human Language Technology Conference and North American chapter of the Association for Computational Linguistics annual meeting.
New York City, New York
Conference date: June 4-9, 2006
Submission deadline: March 3, 2006
Website
Proposals are invited for the HLT-NAACL 2006 Demonstrations Program. This program is aimed at offering first-hand experience with new systems, providing opportunities to exchange ideas gained from creating systems, and collecting feedback from expert users. It is primarily intended to encourage the early exhibition of research prototypes, but interesting mature systems are also eligible. Submission of a demonstration proposal on a particular topic does not preclude or require a separate submission of a paper on that topic; it is possible that some but not all of the demonstrations will illustrate concepts that are described in companion papers.
Demo Co-Chairs
John Dowding, University of California/Santa Cruz
Natasa Milic-Frayling, Microsoft Research, Cambridge, United Kingdom
Alexander Rudnicky, Carnegie Mellon University.
Areas of Interest
We encourage the submission of proposals for demonstrations of software and hardware related to all areas of human language technology. Areas of interest include, but are not limited to, natural language, speech, and text systems for:
- Speech recognition and generation;
- Speech retrieval and summarization;
- Rich transcription of speech;
- Interactive dialogue;
- Information retrieval, filtering, and extraction;
- Document classification, clustering, and summarization;
- Language modeling, text mining, and question answering;
- Machine translation;
- Multilingual and cross-lingual processing;
- Multimodal user interface;
- Mobile language-enabled devices;
- Tools for Ontology, Lexicon, or other NLP resource development;
- Applications in growing domains (web-search, bioinformatics, ...).
Please refer to the HLT-NAACL 2006 CFP for a more detailed, though not necessarily exhaustive, list of relevant topics.
Important Dates
Submission deadline: March 3, 2006
Notification of acceptance: April 6, 2006
Submission of final demo related literature: April 17, 2006
Conference: June 4-9, 2006
Submission
Format
A demo proposal should consist of the following parts:
- An extended abstract of up to four pages, including the title, authors, full contact information, and technical content to be demonstrated. It should give an overview of what the demonstration is aimed to achieve, how the demonstration illustrates novel ideas or late-breaking results, and how it relates to other systems or projects described in the context of other research (i.e., references to related literature).
- A detailed requirement description of hardware, software, and network access expected to be provided by the local organizer. Demonstrators are encouraged to be flexible in their requirements (possibly preparing different demos for different logistical situations). Please state what you can bring yourself and what you absolutely must be provided with. We will do our best to provide equipment and resources but at this point we cannot guarantee anything beyond the space and power supply.
- A concise outline of the demo script, including the accompanying narrative, and either a web address to access the demo or visual aids (e.g., screen-shots, snapshots, or sketches). The demo script should be no more than 6 pages.
The demo abstract must be submitted electronically in the Portable Document Format (PDF). It should follow the format guidelines for the main conference papers. Authors are encouraged to use the style files provided on the HLT-NAACL 2006 website. It is the responsibility of the authors to ensure that their proposals use no unusual format features and can be printed on a standard Postscript printer.
Procedure
Demo proposals should be submitted electronically to the demo co-chairs.
Reviewing
Demo proposals will be evaluated on the basis of their relevance to the conference, innovation, scientific contribution, presentation, and usability, as well as potential logistical constraints.
Publication
The accepted demo abstracts will be published in the Companion Volume to the Proceedings of the HLT-NAACL 2006 Conference.
Further Details
Further details on the date, time, and format of the demonstration session(s) will be determined and provided at a later date. Please send any inquiries to the demo co-chairs.

HLT-NAACL 2006

Call for Tutorial Proposals
Proposals are invited for the Tutorial Program for HLT-NAACL 2006, to be held at the New York Marriott at the Brooklyn Bridge from June 4 to 9, 2006. The tutorial day is June 4, 2006. The HLT-NAACL conferences combine the HLT (Human Language Technology) and NAACL (North American chapter of the Association for Computational Linguistics) conference series, and bring together researchers in NLP, IR, and speech. For details, see our website.
We seek half-day tutorials covering topics in Speech Processing, Information Retrieval, and Natural Language Processing, including their theoretical foundations, intersections, and applications. Tutorials will normally move quickly, but they are expected to be accessible, understandable, and of interest to a broad community of researchers, preferably from multiple areas of Human Language Technology. Our target is to have four to six tutorials.
SUBMISSION DETAILS
Proposals for tutorials should be submitted by email, by the date shown below, in plain text, PDF, Microsoft Word, or HTML. The subject line should be: "HLT-NAACL'06 TUTORIAL PROPOSAL".
Proposals should contain:
1. A title and brief (2-page max) description of the tutorial topic and content. Include a brief outline of the tutorial structure showing that the tutorial's core content can be covered in three hours (two 1.5-hour sessions). Tutorials should be accessible to the broadest practical audience. In keeping with the focus of the conference, please highlight any topics spanning disciplinary boundaries that you plan to address. (These are not strictly required, but they are a big plus.)
2. An estimate of the audience size. If approximately the same tutorial has been given elsewhere, please list previous venues and approximate audience sizes. (There's nothing wrong with repeat tutorials; we'd just like to know.)
3. The names, postal addresses, phone numbers, and email addresses of the organizers, with one-paragraph statements of their research interests and areas of expertise.
4. A description of special requirements for technical needs (computer infrastructure, etc). Tutorials must be financially self-supporting. The conference organizers will establish registration rates that will cover the room, audio-visual equipment, internet access, snacks for breaks, and reproduction of the tutorial notes. A description of any additional anticipated expenses must be included in the proposal.
PRACTICAL ARRANGEMENTS
Accepted tutorial speakers will be asked to provide descriptions of their tutorials suitable for inclusion in all of: email announcements, the conference registration material, the printed program, the website, and the proceedings. This will involve producing text and/or HTML and/or LaTeX/Word/PDF versions of appropriate lengths.
Tutorial notes will be printed and distributed by the Association for Computational Linguistics (ACL). These materials, containing at least copies of the slides that will be presented and a bibliography for the material that will be covered, must be submitted by the date indicated below to allow adequate time for reproduction. Presenters retain copyright for their materials, but ACL requires that presenters execute a non-exclusive distribution license to permit distribution to participants and sales to others.
Tutorial presenters will be compensated in accordance with current ACL policies; see details.
IMPORTANT DATES
Submission: Jan 20, 2006
Notification: Feb 10, 2006
Descriptions due: Mar 1, 2006
Course material due: May 1, 2006
Tutorial date: Jun 4, 2006
TUTORIAL CHAIRS
Jim Glass, Massachusetts Institute of Technology
Christopher Manning, Stanford University
Douglas W. Oard, University of Maryland

11th International Conference SPEECH AND COMPUTER (SPECOM'2006)

25-29 June 2006
St. Petersburg, Russia
Conference website
Organized by the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)
Supported by SIMILAR NoE, INTAS association, ELSNET and ISCA.
Topics
- Signal processing and feature extraction;
- Multimodal analysis and synthesis;
- Speech recognition and understanding;
- Natural language processing;
- Speaker and language identification;
- Speech synthesis;
- Speech perception and speech disorders;
- Speech and language resources;
- Applied systems for Human-Computer Interaction;
IMPORTANT DATES
- Papers and proposals submission start: 15 January 2006
- Proposals for special sessions: 1 February 2006
- Full paper submission: 10 March 2006
- Notification of acceptance: 31 March 2006
- Early registration deadline: 15 April 2006
- Conference SPECOM: 25-29 June 2006
The conference venue and dates were selected so that attendees can experience St. Petersburg's unique and wonderful phenomenon known as the White Nights, for our city is the world's only metropolis where such a phenomenon occurs every summer.
CONTACT INFORMATION
SPECOM'2006, SPIIRAS, 39, 14th line, St-Petersburg, 199178, RUSSIA
Tel.: +7 812 3287081 Fax: +7 812 3284450
E-mail
Web

IEEE Odyssey 2006: The Speaker and Language Recognition Workshop

28 - 30 June 2006
Ritz Carlton Hotel, Spa & Casino
San Juan, Puerto Rico
The IEEE Odyssey 2006 Workshop on Speaker and Language Recognition will be held in scenic San Juan, Puerto Rico at the Ritz Carlton Hotel. This Odyssey is sponsored by the IEEE, is an ISCA Tutorial and Research Workshop of the ISCA Speaker and Language Characterization SIG, and is hosted by The Polytechnic University of Puerto Rico.
Topics
Topics of interest include speaker recognition (verification, identification, segmentation, and clustering); text-dependent and -independent speaker recognition; multispeaker training and detection; speaker characterization and adaptation; features for speaker recognition; robustness in channels; robust classification and fusion; speaker recognition corpora and evaluation; use of extended training data; speaker recognition with speech recognition; forensics, multimodality, and multimedia speaker recognition; speaker and language confidence estimation; language, dialect, and accent recognition; speaker synthesis and transformation; biometrics; human recognition; and commercial applications.
Paper Submission
Prospective authors are invited to submit papers written in English via the Odyssey website. The style guide, templates, and submission form can be downloaded from the Odyssey website. Two members of the Scientific Committee will review each paper. At least one author of each paper is required to register. The workshop proceedings will be published on CD-ROM.
Schedule
Proposal due 15 January 2006
Notification of acceptance 27 February 2006
Final papers due 30 March 2006
Preliminary program 21 April 2006
Workshop 28-30 June 2006
Registration and Information
Registration will be handled via the Odyssey website.
NIST SRE '06 Workshop
The NIST Speaker Recognition Evaluation 2006 Workshop will be held immediately before Odyssey ‘06 at the same location on 25-27 June. Everyone is invited to evaluate their systems via the NIST SRE. The NIST Workshop is only for participants and by prearrangement. Please contact Dr. Alvin Martin to participate and see the NIST website for details.
Chairs
Kay Berkling, Co-Chair, Polytechnic University of Puerto Rico
Pedro A. Torres-Carrasquillo, Co-Chair, MIT Lincoln Laboratory, USA

7th SIGdial workshop on discourse and dialogue

Sydney (co-located with COLING/ACL)
June 15-16, 2006 (tentative dates)
Website
Contact: Dr Jan Alexandersson

International Workshop on Spoken Language Translation

ATR Kyoto (Japan)
November 30-December 1 2006
Website

IV Jornadas en Tecnologia del Habla

Zaragoza, Spain
November 8-10, 2006
Website

Call for papers: International Symposium on Chinese Spoken Language Processing (ISCSLP'2006), Special Session on Speaker Recognition

Singapore, December 13-16, 2006
Conference website
Topics
ISCSLP'06 will feature world-renowned plenary speakers, tutorials, exhibits, and a number of lecture and poster sessions on the following topics:
* Speech Production and Perception
* Phonetics and Phonology
* Speech Analysis
* Speech Coding
* Speech Enhancement
* Speech Recognition
* Speech Synthesis
* Language Modeling and Spoken Language Understanding
* Spoken Dialog Systems
* Spoken Language Translation
* Speaker and Language Recognition
* Indexing, Retrieval and Authoring of Speech Signals
* Multi-Modal Interface including Spoken Language Processing
* Spoken Language Resources and Technology Evaluation
* Applications of Spoken Language Processing Technology
* Others
The official language of ISCSLP is English. The regular papers will be published as a volume in the Springer LNAI series, and the poster papers will be published in a companion volume. Authors are invited to submit original, unpublished work on all the aspects of Chinese spoken language processing.
The conference will also organize four special sessions:
* Special Session on Rich Information Annotation and Spoken Language Processing
* Special Session on Robust Techniques for Organizing and Retrieving Spoken Documents
* Special Session on Speaker Recognition
* Special Panel Session on Multilingual Corpus Development
The schedule of the conference is as follows:
* Full paper submission by Jun. 15, 2006
* Notification of acceptance by Jul. 25, 2006
* Camera ready papers by Aug. 15, 2006
* Early registration by Nov. 1, 2006
Please visit the conference website for more details.

top

FUTURE SPEECH SCIENCE AND TECHNOLOGY EVENTS

TC-STAR Openlab on Speech Translation: Trento Italy

TC-STAR Workshop on Speech Translation
Trento, 30th March - 1st April 2006. WARNING: new dates! (before EACL 2006)
Openlab 2006 Website
Call For Participation
OpenLab 2006 is a training initiative of the European Integrated Project TC-STAR, Technologies and Corpora for Speech-to-speech Translation Research.
OpenLab 2006 aims to expand the TC-STAR research community in the areas of Automatic Speech Recognition (ASR) and Spoken Language Translation (SLT).
Students and young researchers in these areas are invited to contribute on shared TC-STAR project tasks.
The translation of European Parliament speeches from Spanish to English is the application domain of interest. Contributions on the following and other closely related topics will be welcome:
- Integration of ASR and SLT
- Statistical Models for SLT
- System combination in ASR and SLT
- Morphology and Syntax in SLT
- Error analysis in SLT
Several months before the meeting in Trento, language resources and tools will be made available to interested participants. Word graphs and n-best lists generated by different ASR and SLT systems will be provided, as well as training and testing collections to develop and evaluate an SLT system.
Participants will present and discuss their results in Trento, and will have the opportunity to attend tutorial speeches held by experts.
Participation in OpenLab 2006 is free. In addition, for a limited number of applications, lodging expenses will be covered by the organization.
Organizers:
Marcello Federico, ITC-irst, Trento
Ralf Schlüter, RWTH, Aachen

TC-STAR Second Evaluation Campaign 2006

TC-STAR is a European integrated project focusing on Speech-to-Speech Translation (SST). To encourage significant advances in all SST technologies, annual competitive evaluations are organized. Automatic Speech Recognition (ASR), Spoken Language Translation (SLT) and Text-To-Speech (TTS) are evaluated independently and within an end-to-end system. The project targets a selection of unconstrained conversational speech domains (speeches and broadcast news) and three languages: European English, European Spanish, and Mandarin Chinese. The first evaluation took place in March 2005 for ASR and SLT, and in September 2005 for TTS. TC-STAR welcomes outside participants in its 2nd evaluation of January-February 2006. This participation is free of charge. The TC-STAR 2006 evaluation campaign will consider:
· SLT in the following directions :
o Chinese-to-English (Broadcast News)
o Spanish-to-English (European Parliament plenary speeches)
o English-to-Spanish (European Parliament plenary speeches)
· ASR in the following languages :
o English (European Parliament plenary speeches)
o Spanish (European Parliament plenary speeches)
o Mandarin Chinese (Broadcast News)
· TTS in Chinese, English, and Spanish under the following conditions:
o Complete system: participants use their own training data
o Intralingual and crosslingual voice conversion, and expressive speech: data provided by TC-STAR
o Component evaluation
For ASR and SLT, training data will be made available by the TC-STAR project for English and Spanish and can be purchased at LDC for Chinese. Development data will be provided by the TC-STAR project. Legal issues regarding the data will be detailed in the 2nd Call For Participation.
All participants will be given the opportunity to present and discuss their results in the TC-STAR evaluation workshop in Barcelona in June 2006.
Tentative schedule:
Registration: October 2005 (early expression of interest is welcome)
ASR evaluation: from mid-January to end of January 2006
SLT evaluation: from early February to mid-February 2006
TTS evaluation: from early February to end of February 2006
Release: April 2006
Submission of papers: May 2006
Workshop: June 2006
Contact: Djamel Mostefa (ELDA)
tel. +33 1 43 13 33 33

Call for papers: 3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms

Washington DC, USA
1-3 May 2006
Workshop website
OVERVIEW
The third MLMI workshop is coming to Washington DC, USA and will feature talks (including a number of invited speakers), posters, and demonstrations. In common with MLMI'05, the workshop will be immediately followed by the NIST meeting recognition workshop, centering on the Rich Transcription 2006 Meeting Recognition (RT-06) evaluation. This workshop will take place at the same location during 3-4 May 2006.
Topics
Contributions are invited in the following areas of interest, related to multimodal interaction:
* human-human communication modeling
* speech processing
* visual processing
* multimodal processing, fusion and fission
* multimodal discourse and dialog modeling
* human-human interaction modeling
* multimodal indexing, structuring, summarization and presentation
* multimodal annotation
* applications and HCI issues
* machine learning applied to the above
Workshop proceedings will be published by Springer, in the Lecture Notes in Computer Science (LNCS) series.
GUIDELINES FOR SUBMISSIONS
Submissions may be:
* full papers for oral or poster presentation, and inclusion in the proceedings
* extended abstracts for poster presentation only
* demonstration proposals
Full papers and extended abstracts should be submitted as PDF, and follow the Springer LNCS format for 'Proceedings and Other Multiauthor Volumes'
Length:
Full papers - 12 pages maximum
Extended abstracts - 2 pages maximum
Submissions should be made following the link on the workshop website.
Final versions of accepted full papers, which will appear in the proceedings, will be due approximately 2 months after the workshop.
Demonstration proposals should be made using the form on the workshop website
IMPORTANT DATES
17 February 2006: Submission of full papers
10 March 2006: Submission of extended abstracts and demonstration proposals
24 March 2006: Acceptance notifications
1-3 May 2006: MLMI'06 workshop
30 June 2006: Submission of final versions of accepted full papers

MLMI is supported by the US National Institute of Standards and Technology (NIST), through the AMI and CHIL Integrated Projects and the PASCAL Network of Excellence funded by the FP6 IST priority of the European Union, and through the Swiss National Science Foundation National Centre of Competence in Research IM2.
AMI
CHIL
PASCAL
IM2.

LREC 2006 - 5th Conference on Language Resources and Evaluation

Magazzini del Cotone Conference Center, GENOA - ITALY
MAIN CONFERENCE: 24-25-26 MAY 2006
WORKSHOPS and TUTORIALS: 22-23 and 27-28 MAY 2006
Conference web site
The fifth international conference on Language Resources and Evaluation, LREC 2006, is organised by ELRA in cooperation with a wide range of international associations and organisations.
CONFERENCE TOPICS
Issues in the design, construction and use of Language Resources (LRs)
Issues in Human Language Technologies (HLT) evaluation
Special Highlights
LREC targets the integration of different types of LRs (spoken, written, and other modalities), and of the respective communities. To this end, LREC encourages submissions covering issues which are common to different types of LRs and language technologies, such as dialogue strategy, written and spoken translation, domain-specific data, multimodal communication or multimedia document processing, and will organise, in addition to the usual tracks, common sessions encompassing the different areas of LRs.
The 2006 Conference emphasises in particular the importance of promoting:
- synergies and integration between (multilingual) LRs and Semantic Web technologies,
- new paradigms for sharing and integrating LRs and LT coming from different sources,
- communication with neighbouring fields for applications in e-government and administration,
- common evaluation campaigns for the objective evaluation of the performances of different systems,
- systems and products (also industrial ones) based on large-size and high quality LRs.
LREC therefore encourages submissions of papers, panels, workshops, tutorials on the use of LRs in these areas.
ABSTRACT SUBMISSION
Submitted abstracts of papers for oral and poster or demo presentations should consist of about 1000 words.
A limited number of panels, workshops and tutorials is foreseen: proposals will be reviewed by the Programme Committee.
For panels, please send a brief description, including an outline of the intended structure (topic, organiser, panel moderator, tentative list of panelists).
For workshops and tutorials, see the dedicated section below.
Only electronic submissions will be considered. Further details about submission will be circulated in the 2nd Call for Papers to be issued at the end of July and posted on the LREC web site (www.lrec-conf.org).
IMPORTANT DATES
* Submission of proposals for panels, workshops and tutorials: 14 October 2005
* Submission of proposals for oral and poster papers, referenced demos: 14 October 2005
* Notification of acceptance of panels, workshops and tutorials proposals: 7 November 2005
* Notification of acceptance of oral papers, posters, referenced demos: 16 January 2006
* Final versions for the proceedings: 20 February 2006
* Conference: 24-26 May 2006
* Pre-conference workshops and tutorials: 22 and 23 May 2006
* Post-conference workshops and tutorials: 27 and 28 May 2006
WORKSHOPS AND TUTORIALS
Pre-conference workshops and tutorials will be organised on 22 and 23 May 2006, and post-conference workshops and tutorials on 27 and 28 May 2006. A workshop/tutorial can be either half day or full day. Proposals for workshops and tutorials should be no longer than three pages, and include:
* A brief technical description of the specific technical issues that the workshop/tutorial will address.
* The reasons why the workshop/tutorial is of interest this time.
* The names, postal addresses, phone and fax numbers and email addresses of the workshop/tutorial organising committee, which should consist of at least three people knowledgeable in the field, coming from different institutions.
* The name of the member of the workshop/tutorial organising committee designated as the contact person.
* A time schedule of the workshop/tutorial and a preliminary programme.
* A summary of the intended workshop/tutorial call for participation.
* A list of audio-visual or technical requirements and any special room requirements.
CONSORTIA AND PROJECT MEETINGS
Consortia or projects wishing to take this opportunity for organising meetings should contact the ELDA office.
Email
Web Elra
Web Elda

JOINT INFERENCE FOR NATURAL LANGUAGE PROCESSING

Workshop at HLT/NAACL 2006, in New York City
June 8, 2006
Website
IMPORTANT DATES
* EXTENDED Paper submissions due: Wednesday, March 31
* Notification of accepted papers: Thursday, April 21
* Camera ready papers due: Wednesday, May 3
* LATE-BREAKING PAPER DEADLINE (will not appear in proceedings): Friday May 5
* Workshop: June 8, 2006
FORMAT OF PAPERS
If you wish to present at the workshop, submit a paper of no more than 8 pages in two-column format, following the HLT/NAACL style (see http://nlp.cs.nyu.edu/hlt-naacl06/cfp.html). Proceedings will be published in conjunction with the main HLT/NAACL proceedings. Papers should be submitted via the workshop submissions web site.
Authors who cannot submit a PDF file electronically should contact the organizers.
ORGANIZERS
Charles Sutton, University of Massachusetts
Andrew McCallum, University of Massachusetts
Jeff Bilmes, University of Washington

XXVIèmes Journées d'Étude sur la Parole

June 12-16, 2006
Brittany, France
Website
OBJECTIVES
Topics
The main topics selected for the conference are:
1 Speech production
2 Speech acoustics
3 Speech perception
4 Phonetics and phonology
5 Prosody
6 Speech recognition and understanding
7 Language and speaker recognition
8 Language models
9 Speech synthesis
10 Speech analysis, coding and compression
11 Applications with spoken-language components (dialogue, indexing...)
12 Evaluation, corpora and resources
13 Psycholinguistics
14 Speech and language acquisition
15 Second language learning
16 Speech pathologies
17 Others ...
IMPORTANT DATES
Submission deadline: March 1, 2006
Notification of acceptance or rejection: April 3, 2006
Final paper submission: May 1, 2006
Conference: June 12-16, 2006
CONTACTS
For scientific questions, contact Pascal Perrier, President of the AFCP.
For practical information: jep2006@irisa.fr.

PERCEPTION AND INTERACTIVE TECHNOLOGIES (PIT'06)

Kloster Irsee in southern Germany from June 19 to June 21, 2006.
Website.
Submissions will be short/demo or full papers of 4-10 pages.
Important dates
March 15, 2006: Notification of acceptance/rejection
April 1, 2006: Deadline for final submission of accepted paper
April 1, 2006: Deadline for advance registration
June 7, 2006: Final programme available on the web
The proceedings are planned to be published in the LNCS/LNAI series by Springer.
PIT'06 Organising Committee:
Elisabeth André, Laila Dybkjaer, Wolfgang Minker, Heiko Neumann, Michael Weber, Marcus Hennecke, Gregory Baratoff

9th Western Pacific Acoustics Conference (WESPAC IX 2006)

June 26-28, 2006
Seoul, Korea
Program Highlights of WESPAC IX 2006
(by Session Topics)
* Human Related Topics
* Aeroacoustics
* Product Oriented Topics
* Speech Communication
* Analysis: Through Software and Hardware
* Underwater Acoustics
* Physics: Fundamentals and Applications
* Other Hot Topics in Acoustics
WESPAC IX 2006 Secretariat
SungKyunKwan University, Acoustics Research Laboratory
300 Chunchun-dong, Jangan-ku, Suwon 440-746, Republic of Korea
Tel: +82-31-290-5957 Fax: +82-31-290-7055
E-mail
Website

Call for Papers: Journée Nasalité (Nasality Day)

Wednesday, July 5, 2006, from 9:00 to 18:30.
Auditoire Hotyat (1st floor), Université de Mons-Hainaut, 17, Place Warocqué, 7000 Mons.
Website
Invited speakers
Pierre Badin (Institut de la Communication Parlée, Grenoble, France)
Abigail Cohn (Cornell University, New York, USA)
Didier Demolin (Universidade de Sao Paulo, Brazil & Université Libre de Bruxelles, Belgium)
Dates
Submission deadline: Wednesday, March 29, 2006
Notification of acceptance: Wednesday, April 26, 2006
Workshop date: Wednesday, July 5, 2006
Submission procedure
Send an email with the contact details of the first author and, as an attachment, an anonymous abstract of at most one page (references included).
Publications
* A booklet containing the abstracts of the presentations will be distributed to everyone registered for the workshop.
* Participants are invited to submit a written version of their presentation for possible publication in the special issue of the journal Parole devoted to the workshop.
Paper submission deadline: Wednesday, August 9, 2006.
Registration
Register by simply sending an email to: nasal@umh.ac.be.
Information
Website
Contact: Véronique Delvaux
Laboratoire de Phonétique
Université de Mons-Hainaut
18, place du Parc, 7000 Mons, Belgium
+3265373140

Call for papers: AAAI Workshop on Statistical and Empirical Approaches for Spoken Dialogue Systems

Boston, Massachusetts, USA
16 or 17 July 2006
Workshop website
OVERVIEW
This workshop seeks to draw new work on statistical and empirical approaches for spoken dialogue systems. We welcome both theoretical and applied work, addressing issues such as:
* Representations and data structures suitable for automated learning of dialogue models
* Machine learning techniques for automatic generation and improvement of dialogue managers
* Machine learning techniques for ontology construction and integration
* Techniques to accurately simulate human-computer dialogue
* Creation, use, and evaluation of user models
* Methods for automatic evaluation of dialogue systems
* Integration of spoken dialogue systems into larger intelligent agents, such as robots
* Investigations into appropriate optimization criteria for spoken dialogue systems
* Applications and real-world examples of spoken dialogue systems incorporating statistical or empirical techniques
* Use of statistical or empirical techniques within multi-modal dialogue systems
* Application of statistical or empirical techniques to multi-lingual spoken dialogue systems
* Rapid development of spoken dialogue systems from database content and corpora
* Adaptation of dialogue systems to new domains and languages
* The use and application of techniques and methods from related areas, such as cognitive science, operations research, emergence models, etc.
* Any other aspect of the application of statistical or empirical techniques to Spoken Dialogue Systems.
WORKSHOP FORMAT
This will be a one-day workshop immediately before the main AAAI conference and will consist mainly of presentations of new work by participants.
The day will also feature a keynote talk from Satinder Singh (University of Michigan), who will speak about using Reinforcement Learning in the spoken dialogue domain.
Interaction will be encouraged and sufficient time will be left for discussion of the work presented. To facilitate a collaborative environment, the workshop size will be limited to authors, presenters, and a small number of other participants.
Proceedings of the workshop will be published as an AAAI technical report.
SUBMISSION AND REVIEW PROCESS
Prospective authors are invited to submit full-length, 6-page, camera-ready papers via email. Authors are requested to use the AAAI paper template and follow the AAAI formatting guidelines.
AAAI paper template
AAAI formatting guidelines.
Authors are asked to email papers to Jason Williams.
All papers will be reviewed electronically by three reviewers. Comments will be provided and time will be given for incorporation of comments into accepted papers.
For accepted papers, at least one author from each paper is expected to register and attend. If no authors of an accepted paper register for the workshop, the paper may be removed from the workshop proceedings. Finally, authors of accepted papers will be expected to sign a standard AAAI-06 "Permission to distribute" form.
IMPORTANT DATES
* Friday 17 March 2006: Camera-ready paper submission deadline
* Monday 24 April 2006: Acceptance notification
* Friday 5 May 2006: AAAI-06 and workshop registration opens
* Friday 12 May 2006: Final camera-ready papers and "AAAI Permission to distribute" forms due
* Friday 19 May 2006: AAAI-06 Early registration deadline
* Friday 16 June 2006: AAAI-06 Late registration deadline
* Sunday 16 or Monday 17 July 2006: Workshop
* Tuesday-Thursday 18-20 July 2006: Main AAAI-06 Conference
ORGANIZERS
Pascal Poupart, University of Waterloo
Stephanie Seneff, Massachusetts Institute of Technology
Jason D. Williams, University of Cambridge
Steve Young, University of Cambridge
ADDITIONAL INFORMATION
For additional information please contact: Jason D. Williams
submissions
Phone: +44 7786 683 013
Fax: +44 1223 332662
Cambridge University
Department of Engineering
Trumpington Street
Cambridge
CB2 1PZ
United Kingdom

Call for papers: 2006 IEEE International Workshop on Machine Learning for Signal Processing

(Formerly the IEEE Workshop on Neural Networks for Signal Processing)
September 6 - 8, 2006, Maynooth, Ireland
MLSP'2006 webpage
Deadlines:
Paper submission: March 31, 2006
Data analysis competition (new): March 31, 2006
The sixteenth in a series of IEEE workshops on Machine Learning for Signal Processing (MLSP) will be held in Maynooth, Ireland, September 6-8, 2006. Maynooth is located 15 miles west of Dublin in Co. Kildare, Ireland's equestrian and golfing heartland (and home to the 2006 Ryder Cup). It is a pleasant 18th century planned town, best known for its seminary, St. Patrick's College, where Catholic priests have been trained since 1795.
The workshop, formerly known as Neural Networks for Signal Processing (NNSP), is sponsored by the IEEE Signal Processing Society (SPS) and organized by the MLSP technical committee of the IEEE SPS. The name of the NNSP technical committee, and hence the workshop, was changed to Machine Learning for Signal Processing in September 2003 to better reflect the areas represented by the technical committee.
Topics
The workshop will feature keynote addresses, technical presentations, special sessions and tutorials, all of which will be included in the registration. Papers are solicited for, but not limited to, the following areas:
Learning Theory and Modeling; Bayesian Learning and Modeling; Sequential Learning; Sequential Decision Methods; Information-theoretic Learning; Neural Network Learning; Graphical and Kernel Models; Bounds on performance; Blind Signal Separation and Independent Component Analysis; Signal Detection; Pattern Recognition and Classification, Bioinformatics Applications; Biomedical Applications and Neural Engineering; Intelligent Multimedia and Web Processing; Communications Applications; Speech and Audio Processing Applications; Image and Video Processing Applications.
A data analysis and signal processing competition is being organized in conjunction with the workshop. This competition is envisioned to become an annual event where problems relevant to the mission and interests of the MLSP community will be presented with the goal of advancing the current state-of-the-art in both theoretical and practical aspects. The problems are selected to reflect the current trends to evaluate existing approaches on common benchmarks as well as areas where crucial developments are thought to be necessary. Details of the competition can be found on the workshop website.
Selected papers from MLSP 2006 will be considered for a special issue of Neurocomputing to appear in 2007. The winners of the data analysis and signal processing competition will also be invited to contribute to the special issue.
Paper Submission Procedure
Prospective authors are invited to submit a double column paper of up to six pages using the electronic submission procedure described at the workshop homepage. Accepted papers will be published in a bound volume by the IEEE after the workshop and a CDROM volume will be distributed at the workshop.
Chairs
General Chair: Seán MCLOONE, NUI Maynooth
Technical Chair: Tülay ADALI, University of Maryland, Baltimore County

MMSP 2006 International Workshop on Multimedia Signal Processing

October 3-6th, 2006
Fairmont Empress Hotel
Victoria, BC, Canada
Website.
Topics
Multimedia processing: all modalities
Multimedia data bases
Multimedia security
Multimedia networking
Multimedia Systems Design, Implementation and Applications
Human Machine Interfaces and Interaction using multimodalities
Human Perception
Standards
Important dates
Special sessions (see website): March 6, 2006
Papers: April 8, 2006
Notification of acceptance: June 8, 2006
Camera-ready paper: July 8, 2006

Call for papers: Workshop on Multimedia Content Representation, Classification and Security (MRCS)

September 11 - 13, 2006
Istanbul, Turkey
Workshop website
In cooperation with
The International Association for Pattern Recognition (IAPR)
The European Association for Signal-Image Processing (EURASIP)
GENERAL CHAIRS
Bilge Gunsel, Istanbul Technical University, Turkey
Anil K. Jain, Michigan State University, USA
TECHNICAL PROGRAM CHAIR
Murat Tekalp, Koc University, Turkey
SPECIAL SESSIONS CHAIR
Kivanc Mihcak, Microsoft Research, USA
Prospective authors are invited to submit extended summaries of not more than six (6) pages including results, figures and references. Submitted papers will be reviewed by at least two members of the program committee. Conference Proceedings will be available on site. Please check the website for further information.
IMPORTANT DATES
Special Sessions (contact the special sessions chair): March 10, 2006
Submission of Extended Summary: April 10, 2006
Notification of Acceptance: June 10, 2006
Camera-ready Paper Submission Due: July 10, 2006
Topics
The areas of interest include but are not limited to:
- Feature extraction, multimedia content representation and classification techniques
- Multimedia signal processing
- Authentication, content protection and digital rights management
- Audio/Video/Image Watermarking/Fingerprinting
- Information hiding, steganography, steganalysis
- Audio/Video/Image hashing and clustering techniques
- Evolutionary algorithms in content based multimedia data representation, indexing and retrieval
- Transform domain representations
- Multimedia mining
- Benchmarking and comparative studies
- Multimedia applications (broadcasting, medical, biometrics, content-aware networks, CBIR).

Ninth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2006)

Brno, Czech Republic, 11-15 September 2006
Website
The conference is organized by the Faculty of Informatics, Masaryk University, Brno, and the Faculty of Applied Sciences, University of West Bohemia, Pilsen. The conference is supported by the International Speech Communication Association.
TSD SERIES
The TSD series has evolved into a prime forum for interaction between researchers in both spoken and written language processing from the former Eastern Bloc countries and their Western colleagues. The proceedings of TSD form a book published by Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI) series.
TOPICS
Topics of the conference will include (but are not limited to):
text corpora and tagging
transcription problems in spoken corpora
sense disambiguation
links between text and speech oriented systems
parsing issues, especially parsing problems in spoken texts
multi-lingual issues, especially multi-lingual dialogue systems
information retrieval and information extraction
text/topic summarization
machine translation
semantic networks and ontologies
semantic web
speech modeling
speech segmentation
speech recognition
search in speech for IR and IE
text-to-speech synthesis
dialogue systems
development of dialogue strategies
prosody in dialogues
emotions and personality modeling
user modeling
knowledge representation in relation to dialogue systems
assistive technologies based on speech and dialogue
applied systems and software
facial animation
visual speech synthesis
Papers on processing of languages other than English are strongly encouraged.
ORGANIZERS
Frederick Jelinek, USA (general chair)
Hynek Hermansky, USA (executive chair)
KEYNOTE SPEAKERS
Eduard Hovy, USA
Louise Guthrie, GB
James Pustejovsky, USA
FORMAT OF THE CONFERENCE
The conference program will include presentations of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic-oriented sessions.
Social events including a trip in the vicinity of Brno will allow for additional informal interactions.
CONFERENCE PROGRAM
The conference program will include oral presentations and poster/demonstration sessions with sufficient time for discussions of the issues raised. The conference will welcome three keynote speakers - Eduard Hovy, Louise Guthrie and James Pustejovsky, and it will offer two special panels devoted to Emotions and Search in Speech.
IMPORTANT DATES
March 15 2006 ............ Submission of abstract
March 22 2006 ............ Submission of papers
May 15 2006 .............. Notification of acceptance
May 31 2006 .............. Final papers (camera ready) and registration
July 23 2006 ............. Submission of demonstration abstracts
July 30 2006 ............. Notification of acceptance for demonstrations sent to the authors
September 11-15 2006 ..... Conference date
The contributions to the conference will be published in proceedings that will be made available to participants at the time of the conference.
OFFICIAL LANGUAGE
of the conference will be English.
ADDRESS
All correspondence regarding the conference should be addressed to
Dana Hlavackova, TSD 2006
Faculty of Informatics, Masaryk University
Botanicka 68a, 602 00 Brno, Czech Republic
phone: +420-5-49 49 33 29
fax: +420-5-49 49 18 20
email
LOCATION
Brno is the second largest city in the Czech Republic with a population of almost 400,000 and is the country's judiciary and trade-fair center. Brno is the capital of Moravia, which is in the south-east part of the Czech Republic. It has been a Royal City since 1347 and with its six universities it forms a cultural center of the region.
Brno can be reached easily by direct flights from London and Munich and by trains or buses from Prague (200 km) or Vienna (130 km).

Call for papers MMSP-06

IEEE Signal Processing Society 2006 International Workshop on Multimedia Signal Processing (MMSP06),
October 3-6, 2006,
Fairmount Empress Hotel, Victoria, BC, Canada
Website
- A Student Paper Contest with awards sponsored by Microsoft Research. To enter the contest, a paper submission must have a student as the first author.
- Overview sessions that consist of papers presenting the state-of-the-art in methods and applications for selected topics of interest in multimedia signal processing
- Wrap-up presentations that summarize the main contributions of the papers accepted at the workshop, hot topics and current trends in multimedia signal processing
- New content requirements for the submitted papers
- New review guidelines for the submitted papers
SCOPE
Papers are solicited for, but not limited to, the general areas:
- Multimedia Processing (modalities: audio, speech, visual, graphics, other; processing: pre- and post- processing of multimodal data, joint audio/visual and multimodal processing, joint source/channel coding, 2-D and 3-D graphics/geometry coding and animation, multimedia streaming)
- Multimedia Databases (content analysis, representation, indexing, recognition, and retrieval)
- Multimedia Security (data hiding, authentication, and access control)
- Multimedia Networking (priority-based QoS control and scheduling, traffic engineering, soft IP multicast support, home networking technologies, wireless technologies)
- Multimedia Systems Design, Implementation and Applications (design: distributed multimedia systems, real-time and non real-time systems; implementation: multimedia hardware and software; applications: entertainment and games, IP video/web conferencing, wireless web, wireless video phone, distance learning over the Internet, telemedicine over the Internet, distributed virtual reality)
- Human-Machine Interfaces and Interaction using multiple modalities
- Human Perception (including integration of art and technology)
- Standards
SCHEDULE
- Special Sessions (contact the respective chair by): March 8, 2006 (Call for Special Sessions)
- Papers (full paper, 4-6 pages, to be received by): April 8, 2006 (Instructions for Authors)
- Notification of acceptance by: June 8, 2006
- Camera-ready paper submission by: July 8, 2006 (Instructions for Authors)
Check the workshop website for updates.

Call for papers 8th International Conference on Signal Processing

Nov. 16-20, 2006, Guilin, China
website
The 8th International Conference on Signal Processing will be held in Guilin, China on Nov. 16-20, 2006. It will include sessions on all aspects of theory, design and applications of signal processing. Prospective authors are invited to propose papers in any of the following areas, but not limited to:
A. Digital Signal Processing (DSP)
B. Spectrum Estimation & Modeling
C. TF Spectrum Analysis & Wavelet
D. Higher Order Spectral Analysis
E. Adaptive Filtering & SP
F. Array Signal Processing
G. Hardware Implementation for Signal Processing
H. Speech and Audio Coding
I. Speech Synthesis & Recognition
J. Image Processing & Understanding
K. PDE for Image Processing
L. Video Compression & Streaming
M. Computer Vision & VR
N. Multimedia & Human-computer Interaction
O. Statistical Learning & Pattern Recognition
P. AI & Neural Networks
Q. Communication Signal processing
R. SP for Internet and Wireless Communications
S. Biometrics & Authentication
T. SP for Bio-medical & Cognitive Science
U. SP for Bio-informatics
V. Signal Processing for Security
W. Radar Signal Processing
X. Sonar Signal Processing and Localization
Y. SP for Sensor Networks
Z. Application & Others

top