- Appreciating face-to-face dialogue. Janet Beavin Bavelas. 1 [doi]
- Auditory-visual perception of syllabic tones in Thai. Hansjörg Mixdorff, Patavee Charnvivit, Denis K. Burnham. 3-8 [doi]
- Read my lips: an animated face helps communicate musical lyrics. Dominic W. Massaro, Miguel Hidalgo-Barnes. 9-10 [doi]
- McGurk fusion effects in Arabic words. Azra N. Ali, Ashraf Hassan-Haj, Michael Ingleby, Ali Idrissi. 11-16 [doi]
- A visual concomitant of the Lombard reflex. Jeesun Kim, Chris Davis, Guillaume Vignali, Harold Hill. 17-22 [doi]
- Facilitating speech detection in style!: the effect of visual speaking style on the detection of speech in noise. Nicole Lees, Denis K. Burnham. 23-28 [doi]
- Cognitive processing of audiovisual cues to prominence. Marc Swerts, Emiel Krahmer. 29-30 [doi]
- Reading speech and emotion from still faces: fMRI findings. Cheryl M. Capek, Ruth Campbell, Mairéad MacSweeney, Marc Seal, Dafydd Waters, Bencie Woll, Tony David, Philip K. McGuire, Mick Brammer. 31-34 [doi]
- Towards a lexical fuzzy logical model of perception: the time-course of audiovisual speech processing in word identification. Alexandra Jesse, Dominic W. Massaro. 35-36 [doi]
- The integration of coarticulated segments in visual speech. Jacques C. Koreman, Georg Meyer. 37-38 [doi]
- Perception of congruent and incongruent audiovisual speech stimuli. Jintao Jiang, Lynne E. Bernstein, Edward T. Auer. 39-44 [doi]
- Visual contribution to speech perception: measuring the intelligibility of talking heads. Slim Ouni, Michael M. Cohen, Hope Ishak, Dominic W. Massaro. 45-46 [doi]
- An agent-based framework for auditory-visual speech investigation. Michael Walsh, Stephen Wilson. 47-52 [doi]
- Internal models differentially implicated in audiovisual perception of non-native vowel contrasts. Daniel E. Callan. 53-54 [doi]
- Audiovisual processing of Lombard speech. Victor Chung, Nicole Mirante, Jolien Otten, Eric Vatikiotis-Bateson. 55-56 [doi]
- Development of auditory-visual speech perception in English-speaking children: the role of language-specific factors. V. Dogu Erdener, Denis K. Burnham. 57-62 [doi]
- Using graphics to study the perception of speech-in-noise, and vice versa. Harold Hill, Eric Vatikiotis-Bateson. 63-64 [doi]
- Inter-speaker variability of labial coarticulation with the view of developing a formal coarticulation model for French. Vincent Robert, Brigitte Wrobel-Dautcourt, Yves Laprie, Anne Bonneau. 65-70 [doi]
- How to model face and tongue biomechanics for the study of speech production? Yohan Payan. 71-72 [doi]
- Problems associated with current area-based visual speech feature extraction techniques. Patrick Lucey, David Dean, Sridha Sridharan. 73-78 [doi]
- Exploiting lower face symmetry in appearance-based automatic speechreading. Gerasimos Potamianos, Patricia Scanlon. 79-84 [doi]
- Improved speech reading through a free-parts representation. Simon Lucey, Patrick Lucey. 85-86 [doi]
- A coding method for visual telephony sequences. Edson Bárcenas, Mauricio Díaz, Rafael Carrillo, Ricardo Solano, Carolina Soto, Luis Valderrama, Javier Villegas, Pedro R. Vizcaya. 87-92 [doi]
- Design and recording of Czech speech corpus for audio-visual continuous speech recognition. Petr Císar, Milos Zelezný, Zdenek Krnoul, Jakub Kanis, Jan Zelinka, Ludek Müller. 93-96 [doi]
- Audio-visual speaker identification using the CUAVE database. David Dean, Patrick Lucey, Sridha Sridharan. 97-102 [doi]
- Consonant confusion structure based on machine classification of visual features in continuous speech. Jianxia Xue, Jintao Jiang, Abeer Alwan, Lynne E. Bernstein. 103-108 [doi]
- 3D lip tracking and co-inertia analysis for improved robustness of audio-video automatic speech recognition. Roland Goecke. 109-114 [doi]
- A multi-measurement approach to the identification of the audiovisual facial correlates of contrastive focus in French. Marion Dohen, Hélène Loevenbruck, Harold Hill. 115-116 [doi]
- The history of articulatory synthesis at Haskins Laboratories. Philip Rubin, Gordon Ramsay, Mark Tiede. 117-118 [doi]
- ArtiSynth: an extensible, cross-platform 3D articulatory speech synthesizer. Sidney Fels, Florian Vogt, Kees van den Doel, John E. Lloyd, Oliver Guenther. 119-124 [doi]
- Capturing data and realistic 3D models for cued speech analysis and audiovisual synthesis. Frédéric Elisei, Gérard Bailly, Guillaume Gibert, Rémi Brun. 125-130 [doi]
- Statistical analysis and synthesis of 3D faces for auditory-visual speech animation. Takaaki Kuratate. 131-136 [doi]
- Computational model of some communication head movements in a speech act. Sonia Sangari, Mustapha Skhiri, Bertil Lyberg. 137-142 [doi]
- Finite element modeling of the tongue. Florian Vogt. 143-144 [doi]
- A low-cost stereovision-based system for acquisition of visible articulatory data. Brigitte Wrobel-Dautcourt, Marie-Odile Berger, Blaise Potard, Yves Laprie, Slim Ouni. 145-150 [doi]
- Structure and function in the human jaw. Alan G. Hannam. 151 [doi]