- Evolution of language from action understanding. Leonardo Fogassi. 1-2 [doi]
- Early processing of visual speech information modulates the subsequent processing of auditory speech input at a pre-attentive level: Evidence from event-related brain potential data. Riadh Lebib, David Papo, Abdel Douiri, Stella de Bode, Pierre-Marie Baudonniere. 3-8 [doi]
- Testing the cuing hypothesis for the AV speech detection advantage. Jeesun Kim, Chris Davis. 9-12 [doi]
- Enhanced auditory detection with AV speech: perceptual evidence for speech and non-speech mechanisms. Lynne E. Bernstein, Sumiko Takayanagi, Edward T. Auer. 13-17 [doi]
- Auditory syllabic identification enhanced by non-informative visible speech. Jean-Luc Schwartz, Frédéric Berthommier, Christophe Savariaux. 19-24 [doi]
- Audiovisual asynchrony detection for speech and nonspeech signals. Brianna L. Conrey, David B. Pisoni. 25-30 [doi]
- Discrimination of auditory-visual synchrony. Ken W. Grant, Virginie van Wassenhove, David Poeppel. 31-35 [doi]
- Electrophysiology of auditory-visual speech integration. Virginie van Wassenhove, Ken W. Grant, David Poeppel. 37-42 [doi]
- Auditory-visual speech perception development in Japanese and English speakers. Kaoru Sekiyama, Denis Burnham, Helen Tam, V. Dogu Erdener. 43-47 [doi]
- Developing the TAS: Individual differences in silent speechreading, reading and phonological awareness in deaf and hearing speechreaders. Tara Mohammed, Mairéad MacSweeney, Ruth Campbell. 49-54 [doi]
- Perception of point light displays of speech by normal-hearing adults and deaf adults with cochlear implants. Tonya R. Bergeson, David B. Pisoni, Jeffrey T. Reynolds. 55-60 [doi]
- Visual and auditory perception of epenthetic glides. Marie-Agnès Cathiard, Christian Abry, Séverine Gedzelman, Hélène Loevenbruck. 61-66 [doi]
- Selective adaptation and recalibration of auditory speech by lipread information: Dissipation. Jean Vroomen, Mirjam Keetels, Sabine van Linden, Béatrice de Gelder, Paul Bertelson. 67-70 [doi]
- Effect of audiovisual primes on identification of auditory target syllables. Ville Ojanen, Jyrki Tuomainen, Mikko Sams. 71-75 [doi]
- Why the FLMP should not be applied to McGurk data... or how to better compare models in the Bayesian framework. Jean-Luc Schwartz. 77-82 [doi]
- Model Selection in AVSP: Some old and not so old news. Dominic W. Massaro. 83-88 [doi]
- A phonetically neutral model of the low-level audiovisual interaction. Frédéric Berthommier. 89-94 [doi]
- Joint audio-visual speech processing for recognition and enhancement. Gerasimos Potamianos, Chalapathy Neti, Sabine Deligne. 95-104 [doi]
- Shape and appearance models of talking faces for model-based tracking. Matthias Odisio, Gérard Bailly. 105-110 [doi]
- Low resource lip finding and tracking algorithm for embedded devices. Jesus F. Guitarte Perez, Klaus Lukas, Alejandro F. Frangi. 111-116 [doi]
- Audio-visual speech recognition using lip movement extracted from side-face images. Tomoaki Yoshinaga, Satoshi Tamura, Koji Iwano, Sadaoki Furui. 117-120 [doi]
- A System for Automatic Lip Reading. Islam Shdaifat, Rolf-Rainer Grigat, Detlev Langmann. 121-126 [doi]
- Visual feature analysis for automatic speechreading. Patricia Scanlon, Richard B. Reilly, Philip de Chazal. 127-132 [doi]
- Statistical analysis of the relationship between audio and video speech parameters for Australian English. Roland Goecke, J. Bruce Millar. 133-138 [doi]
- Pure audio McGurk effect. Laurent Girin. 139-144 [doi]
- Further experiments on audio-visual speech source separation. David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz. 145-150 [doi]
- Using speech and gesture to explore user states in multimodal dialogue systems. Rui Ping Shi, Johann Adelhardt, Viktor Zeißler, Anton Batliner, Carmen Frank, Elmar Nöth, Heinrich Niemann. 151-156 [doi]
- Improvement of three simultaneous speech recognition by using AV integration and scattering theory for humanoid. Kazuhiro Nakadai, Daisuke Matsuura, Hiroshi G. Okuno, Hiroshi Tsujino. 157-162 [doi]
- Effects of image distortions on audio-visual speech recognition. Martin Heckmann, Frédéric Berthommier, Christophe Savariaux, Kristian Kroschel. 163-168 [doi]
- Czech audio-visual speech corpus of a car driver for in-vehicle audio-visual speech recognition. Milos Zelezný, Petr Císar. 169-173 [doi]
- Improving audio-visual speech recognition with an infrared headset. Jing Huang, Gerasimos Potamianos, Chalapathy Neti. 175-178 [doi]
- The role of Cued Speech in language processing by deaf children: An overview. Jacqueline Leybaert. 179-186 [doi]
- Evaluation of a talking head based on appearance models. Barry-John Theobald, J. Andrew Bangham, Iain Matthews, Gavin C. Cawley. 187-192 [doi]
- Linking the structure and perception of 3D faces: Gender, ethnicity, and expressive posture. Guillaume Vignali, Harold Hill, Eric Vatikiotis-Bateson. 193-198 [doi]
- Toolkit for animation of Finnish talking head. Michael Frydrych, Jari Kätsyri, Martin Dobsík, Mikko Sams. 199-204 [doi]
- Lipreadability of a synthetic talking face in normal hearing and hearing-impaired listeners. Catherine Siciliano, Andrew Faulkner, Geoff Williams. 205-208 [doi]
- Coproduction of speech and emotions: visual and acoustic modifications of some phonetic labial targets. Emanuela Magno Caldognetto, Piero Cosi, Carlo Drioli, Graziano Tisato, Federica Cavicchio. 209-214 [doi]
- Two articulation models for audiovisual speech synthesis - description and determination. Sascha Fagel, Caroline Clemens. 215-220 [doi]
- Triphone-based coarticulation model. Elisabetta Bevacqua, Catherine Pelachaud. 221-226 [doi]
- Toward an audiovisual synthesizer for Cued Speech: Rules for CV French syllables. Virginie Attina, Denis Beautemps, Marie-Agnès Cathiard, Matthias Odisio. 227-232 [doi]
- Measurements of articulatory variation and communicative signals in expressive speech. Magnus Nordstrand, Gunilla Svanfeldt, Björn Granström, David House. 233-238 [doi]
- Identification of synthetic and natural emotional facial expressions. Jari Kätsyri, Vasily Klucharev, Michael Frydrych, Mikko Sams. 239-243 [doi]
- Audiovisual perception of contrastive focus in French. Marion Dohen, Hélène Loevenbruck, Marie-Agnès Cathiard, Jean-Luc Schwartz. 245-250 [doi]
- A method for the analysis and measurement of communicative head movements in human dialogues. Loredana Cerrato, Mustapha Skhiri. 251-256 [doi]
- Exploring the spatial frequency requirements of audio-visual speech using superimposed facial motion. Douglas M. Shiller, Christian Kroos, Eric Vatikiotis-Bateson, Kevin G. Munhall. 257 [doi]