- Concurrency, synchrony, and temporal organization. Eric Vatikiotis-Bateson. 1 [doi]
- Facial dynamics reveals person identity and communicative intent, regulates person perception and social interaction. Jeffrey F. Cohn. 3 [doi]
- Active appearance models for facial analysis. Iain Matthews. 5 [doi]
- On evaluating synthesised visual speech. Barry-John Theobald, Nicholas Wilkinson, Iain Matthews. 7-12 [doi]
- Building a portable gesture-to-audio/visual speech system. Sidney Fels, Robert Pritchard, Eric Vatikiotis-Bateson. 13-18 [doi]
- The effects of temporal asynchrony on the intelligibility of accelerated speech. Douglas Brungart, Nandini Iyer, Brian D. Simpson, Virginie van Wassenhove. 19-24 [doi]
- Audio-visual voice command recognition in noisy conditions. Josef Chaloupka, Jan Nouza, Jindrich Zdánský. 25-30 [doi]
- 2 integration. Gianluca Giorgolo, Frans A. J. Verstraten. 31-36 [doi]
- Analysis of inter- and intra-speaker variability of head motions during spoken dialogue. Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita. 37-42 [doi]
- German text-to-audiovisual-speech by 3-d speaker cloning. Sascha Fagel, Gérard Bailly. 43-46 [doi]
- Visual field advantage in the perception of audiovisual speech segments. Dawn M. Behne, Yue Wang, Stein-Ove Belsby, Solveig Kaasa, Lisa Simonsen, Kirsti Back. 47-50 [doi]
- CENSREC-AV: evaluation frameworks for audio-visual speech recognition. Satoshi Tamura, Chiyomi Miyajima, Norihide Kitaoka, Satoru Hayamizu, Kazuya Takeda. 51-54 [doi]
- McGurk effect persists with a partially removed visual signal. Christian Kroos, Ashlie Dreves. 55-58 [doi]
- Guided non-linear model estimation (gnoME). Sascha Fagel, Katja Madany. 59-62 [doi]
- Multimodal perception of anticipatory behavior - Comparing blind, hearing and cued speech subjects. Emilie Troille, Marie-Agnès Cathiard, Christian Abry, Lucie Ménard, Denis Beautemps. 63-68 [doi]
- Patch-based analysis of visual speech from multiple views. Patrick Lucey, Gerasimos Potamianos, Sridha Sridharan. 69-74 [doi]
- A comparison of German talking heads in a smart home environment. Sascha Fagel, Christine Kühnel, Benjamin Weiss, Ina Wechsung, Sebastian Möller. 75-78 [doi]
- 2s face on detection and tolerance thresholds. Shuichi Sakamoto, Akihiro Tanaka, Shun Numahata, Atsushi Imai, Tohru Takagi, Yôiti Suzuki. 79-82 [doi]
- A neurofunctional model of speech production including aspects of auditory and audio-visual speech perception. Bernd J. Kröger, Jim Kannampuzha. 83-88 [doi]
- Auditory-visual perception of prosodic information: inter-linguistic analysis - contrastive focus in French and Japanese. Marion Dohen, Chun-Huei Wu, Harold Hill. 89-94 [doi]
- May speech modifications in noise contribute to enhance audio-visible cues to segment perception? Maeva Garnier. 95-100 [doi]
- Audiovisual alignment in child-directed speech facilitates word learning. Alexandra Jesse, Elizabeth K. Johnson. 101-106 [doi]
- Hearing a talking face: an auditory influence on a visual detection task. Jeesun Kim, Christian Kroos, Chris Davis. 107-110 [doi]
- Speaking with smile or disgust: data and models. Gérard Bailly, Antoine Bégault, Frédéric Elisei, Pierre Badin. 111-114 [doi]
- A multilevel fusion approach for audiovisual emotion recognition. Girija Chetty, Michael Wagner. 115-120 [doi]
- Statistical correlation analysis between lip contour parameters and formant parameters for Mandarin monophthongs. Junru Wu, Xiaosheng Pan, Jiangping Kong, Alan Wee-Chung Liew. 121-126 [doi]
- From talking to thinking heads: report 2008. Denis Burnham, Arman Abrahamyan, Lawrence Cavedon, Chris Davis, Andrew Hodgins, Jeesun Kim, Christian Kroos, Takaaki Kuratate, Trent W. Lewis, Martin H. Luerssen, Garth Paine, David M. W. Powers, Marcia Riley, Stelarc, Kate Stevens. 127-130 [doi]
- Algorithm for computing spatiotemporal coordination. Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson. 131-136 [doi]
- Fused HMM adaptation of synchronous HMMs for audio-visual speaker verification. David Dean, Sridha Sridharan. 137-141 [doi]
- Describing "INTERFACE", a MATLAB tool for building talking heads. Piero Cosi, Graziano Tisato. 143-146 [doi]
- Analysis of technologies and resources for multimodal information kiosk for deaf users. Milos Zelezný. 147-152 [doi]
- Retargeting cued speech hand gestures for different talking heads and speakers. Gérard Bailly, Yu Fang, Frédéric Elisei, Denis Beautemps. 153-158 [doi]
- A, V, and AV discrimination of vowel duration. Björn Lidestam. 159-162 [doi]
- Towards real-time speech-based facial animation applications built on HUGE architecture. Goranka Zoric, Igor S. Pandzic. 163-166 [doi]
- Improving pain recognition through better utilisation of temporal information. Patrick Lucey, Jessica Howlett, Jeffrey F. Cohn, Simon Lucey, Sridha Sridharan, Zara Ambadar. 167-172 [doi]
- Linguistically valid movement behavior measured non-invasively. Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson. 173-177 [doi]
- The challenge of multispeaker lip-reading. Stephen J. Cox, Richard Harvey, Yuxuan Lan, Jacob L. Newman, Barry-John Theobald. 179-184 [doi]
- Audio-visual feature selection and reduction for emotion classification. Sanaul Haq, Philip J. B. Jackson, James D. Edge. 185-190 [doi]
- Text-to-AV synthesis system for Thinking Head Project. Takaaki Kuratate. 191-194 [doi]
- Objective and perceptual evaluation of parameterizations of 3d motion captured speech data. Katja Madany, Sascha Fagel. 195-198 [doi]
- Listening while speaking: new behavioral evidence for articulatory-to-auditory feedback projections. Marc Sato, Emilie Troille, Lucie Ménard, Marie-Agnès Cathiard, Vincent Gracco. 199-204 [doi]
- Age-related experience in audio-visual speech perception. Magnus Alm, Dawn M. Behne. 205-208 [doi]
- A model for the dynamics of articulatory lip movements. Þórir Harðarson, Hans-Heinrich Bothe. 209-214 [doi]
- Evaluation of synthesized sign and visual speech by deaf. Zdenek Krnoul, Patrik Rostík, Milos Zelezný. 215-218 [doi]
- Lip segmentation using adaptive color space training. Erol Ozgur, Mustafa Berkay Yilmaz, Harun Karabalkan, Hakan Erdogan, Mustafa Unel. 219-222 [doi]
- Static and dynamic lip feature analysis for speaker verification. Shi-Lin Wang, Alan Wee-Chung Liew. 223-227 [doi]
- Parameterisation of 3d speech lip movements. James D. Edge, Adrian Hilton, Philip J. B. Jackson. 229-234 [doi]
- A comparative study of 2d and 3d lip tracking methods for AV ASR. Roland Göcke, Akshay Asthana. 235-240 [doi]