- Speechreading essentials: signal, paralinguistic cues, and skill. Björn Lidestam, Björn Lyxell. 1-6 [doi]
- The influence of the lexicon on visual spoken word recognition. Edward T. Auer Jr., Lynne E. Bernstein, Sven L. Mattys. 7-12 [doi]
- TAS: A new test of adult speechreading - deaf people really can be better speechreaders. Tara Ellis, Mairéad MacSweeney, Barbara Dodd, Ruth Campbell. 13-17 [doi]
- Is it easier to lipread one's own speech gestures than those of somebody else? It seems not! Jean-Luc Schwartz, Christophe Savariaux. 18-23 [doi]
- Towards the facecoder: dynamic face synthesis based on image motion estimation in speech. Christian Kroos, Saeko Masuda, Takaaki Kuratate, Eric Vatikiotis-Bateson. 24-29 [doi]
- Viseme space for realistic speech animation. Sumedha Kshirsagar, Nadia Magnenat-Thalmann. 30-35 [doi]
- Audiovisual speech perception in Williams Syndrome. M. Bohning, Ruth Campbell, Annette Karmiloff-Smith. 36-39 [doi]
- Comparing cortical activity during the perception of two forms of biological motion for language communication. Edward T. Auer Jr., Lynne E. Bernstein, Manbir Singh. 40-44 [doi]
- Neural areas underlying the processing of visual speech information under conditions of degraded auditory information. Daniel E. Callan, Akiko E. Callan, Eric Vatikiotis-Bateson. 45-49 [doi]
- Similarity structure in visual phonetic perception and optical phonetics. Lynne E. Bernstein, Jintao Jiang, Abeer Alwan, Edward T. Auer Jr. 50-55 [doi]
- The mismatch negativity (MMN) and the McGurk effect. Cécile Colin, Monique Radeau, Paul Deltenre. 56-61 [doi]
- A case of multimodal aprosodia: impaired auditory and visual speech prosody perception in a patient with right hemisphere damage. Karen Nicholson, Shari R. Baum, Lola Cuddy, Kevin G. Munhall. 62-65 [doi]
- Extraction of 3D facial motion parameters from mirror-reflected multi-view video for audio-visual synthesis. I-Chen Lin, Jeng-Sheng Yeh, Ming Ouhyoung. 66-71 [doi]
- Modelling an Italian talking head. Catherine Pelachaud, Emanuela Magno Caldognetto, Claudio Zmarich, Piero Cosi. 72-77 [doi]
- Visual speech synthesis using statistical models of shape and appearance. Barry-John Theobald, J. Andrew Bangham, Iain Matthews, Gavin C. Cawley. 78-83 [doi]
- Hidden Markov models for visual speech synthesis with limited data. Allan Arb, Steven Gustafson, Timothy R. Anderson, Raymond E. Slyh. 84-89 [doi]
- Creating and controlling video-realistic talking heads. Frédéric Elisei, Matthias Odisio, Gérard Bailly, Pierre Badin. 90-97 [doi]
- Multimodal translation. Shigeo Morishima, Shin Ogata, Satoshi Nakamura. 98-103 [doi]
- Electrophysiology of unimodal and audiovisual speech perception. Lynne E. Bernstein, Curtis W. Ponton, Edward T. Auer Jr. 104-109 [doi]
- Development of a lip-sync algorithm based on an audio-visual corpus. Jin Young Kim, Seung Ho Choi, Joohun Lee. 110-114 [doi]
- Analysis of audio-video correlation in vowels in Australian English. Roland Goecke, J. Bruce Millar, Alexander Zelinsky, Jordi Robert-Ribes. 115-120 [doi]
- Non-verbal correlates to focal accents in Swedish. Christel Ekvall, Bertil Lyberg, Michael Randén. 121-126 [doi]
- Visible speech cues and auditory detection of spoken sentences: an effect of degree of correlation between acoustic and visual properties. Jeesun Kim, Chris Davis. 127-131 [doi]
- Speech intelligibility derived from asynchronous processing of auditory-visual information. Ken W. Grant, Steven Greenberg. 132-137 [doi]
- Asking a naive question about the McGurk effect: Why does audio [b] give more [d] percepts with visual [g] than with visual [d]? Marie-Agnès Cathiard, Jean-Luc Schwartz, Christian Abry. 138-142 [doi]
- Investigating the role of luminance boundaries in visual and audiovisual speech recognition using line drawn faces. M. V. McCotter, T. R. Jordan. 143-148 [doi]
- Auditory-visual L2 speech perception: Effects of visual cues and acoustic-phonetic context for Spanish learners of English. Marta Ortega-Llebaria, Andrew Faulkner, Valérie Hazan. 149-154 [doi]
- Visual discrimination of Cantonese tone by tonal but non-Cantonese speakers, and by non-tonal language speakers. Denis Burnham, Susanna Lau, Helen Tam, Colin Schoknecht. 155-160 [doi]
- Bimodal word identification: effects of modality, speech style, sentence and phonetic/visual context. Debra M. Hardison. 161-166 [doi]
- Visual attention influences audiovisual speech perception. Kaisa Tiippana, Mikko Sams, Tobias S. Andersen. 167-171 [doi]
- Modeling of audiovisual speech perception in noise. Tobias S. Andersen, Kaisa Tiippana, Jouko Lampinen, Mikko Sams. 172-176 [doi]
- Automatic speechreading of impaired speech. Gerasimos Potamianos, Chalapathy Neti. 177-182 [doi]
- Audio-visual recognition of spectrally reduced speech. Frédéric Berthommier. 183-188 [doi]
- A hybrid ANN/HMM audio-visual speech recognition system. Martin Heckmann, Frédéric Berthommier, Kristian Kroschel. 189-194 [doi]
- Noise-based audio-visual fusion for robust speech recognition. Eric K. Patterson, Sabri Gurbuz, Zekeriya Tufekci, John N. Gowdy. 195-198 [doi]
- Development of a completely computerized McGurk design under variation of the signal to noise ratio. Björn Kabisch, Carol Nisch, Eckart R. Straube, Ruth Campbell. 199 [doi]
- LIPPS - A visual telephone for hearing-impaired. Hans-Heinrich Bothe. 199 [doi]
- Cortical substrates of seeing speech: still and moving faces. Gemma A. Calvert, Michael J. Brammer, Ruth Campbell. 199 [doi]
- Obtaining person-independent feature space for lip reading. Jacek C. Wojdel, Léon J. M. Rothkrantz. 200 [doi]
- Animated speech: research progress and applications. Michael M. Cohen, Rashid Clark, Dominic W. Massaro. 200 [doi]
- Estimating focus of attention based on gaze and sound. Rainer Stiefelhagen, Jie Yang, Alex Waibel. 200 [doi]