- Alignment in iconic gestures: does it make sense? Lisette Mol, Emiel Krahmer, Marc Swerts. 3-8 [doi]
- 2s face. Shuichi Sakamoto, Akihiro Tanaka, Shun Numahata, Atsushi Imai, Tohru Takagi, Yôiti Suzuki. 9-12 [doi]
- LW2a: an easy tool to transform voice WAV files into talking animations. Piero Cosi, Graziano Tisato. 13-17 [doi]
- Effects of smiled speech on lips, larynx and acoustics. Sascha Fagel. 18-21 [doi]
- Visual speech information aids elderly adults in stream segregation. Alexandra Jesse, Esther Janse. 22-27 [doi]
- The development of speechreading in deaf and hearing children: introducing a new test of child speechreading (ToCS). Fiona Kyle, Mairéad MacSweeney, Tara Mohammed, Ruth Campbell. 28-31 [doi]
- Audio-visual mutual dependency models for biometric liveness checks. Girija Chetty, Roland Göcke, Michael Wagner. 32-37 [doi]
- Audiovisual speech perception in Japanese and English: inter-language differences examined by event-related potentials. Satoko Hisanaga, Kaoru Sekiyama, Tomohiko Igasaki, Nobuki Murayama. 38-42 [doi]
- Effects of visual prominence cues on speech intelligibility. Samer Al Moubayed, Jonas Beskow. 43-46 [doi]
- Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques. Wesley Mattheyses, Lukas Latacz, Werner Verhelst. 47-53 [doi]
- Speaker-dependent audio-visual emotion recognition. Sanaul Haq, Philip J. B. Jackson. 53-58 [doi]
- Audio-visual speech perception in mild cognitive impairment and healthy elderly controls. Natalie A. Phillips, Shari R. Baum, Vanessa Taler. 59-64 [doi]
- Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of synthetic humans. Takaaki Kuratate, Kathryn Ayers, Jeesun Kim, Marcia Riley, Denis Burnham. 65-69 [doi]
- Visual influence on auditory perception: is speech special? Christian Kroos, Katherine Hogan. 70-75 [doi]
- Auditory-visual perception of talking faces at birth: a new paradigm. Marion Coulon, Bahia Guellaï, Arlette Streri. 76-79 [doi]
- Area of mouth opening estimation from speech acoustics using blind deconvolution technique. Cong-Thanh Do, Abdeldjalil Aïssa-El-Bey, Dominique Pastor, André Goalic. 80-85 [doi]
- Comparison of human and machine-based lip-reading. Sarah Hilder, Richard Harvey, Barry-John Theobald. 86-89 [doi]
- Untying the knot between gestures and speech. Marieke Hoetjes, Emiel Krahmer, Marc Swerts. 90-95 [doi]
- Can you tell if tongue movements are real or synthesized? Olov Engwall, Preben Wik. 96-101 [doi]
- Comparing visual features for lipreading. Yuxuan Lan, Richard Harvey, Barry-John Theobald, Eng-Jon Ong, Richard Bowden. 102-106 [doi]
- Auditory-visual infant directed speech in Japanese and English. Takaaki Shochi, Kaoru Sekiyama, Nicole Lees, Mark Boyce, Roland Göcke, Denis Burnham. 107-112 [doi]
- Recalibration of audiovisual simultaneity in speech. Akihiro Tanaka, Kaori Asakawa, Hisato Imai. 113-116 [doi]
- Audiovisual speech recognition with missing or unreliable data. Dorothea Kolossa, Steffen Zeiler, Alexander Vorwerk, Reinhold Orglmeister. 117-122 [doi]
- Older and younger adults use fewer neural resources during audiovisual than during auditory speech perception. Axel H. Winneke, Natalie A. Phillips. 123-126 [doi]
- Strategies and results for the evaluation of the naturalness of the LIPPS facial animation system. Jana Eger, Hans-Heinrich Bothe. 127-129 [doi]
- Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues. Chris Davis, Jeesun Kim. 130-133 [doi]
- Effective visually-derived Wiener filtering for audio-visual speech processing. Ibrahim Almajai, Ben Milner. 134-139 [doi]
- Pairing audio speech and various visual displays: binding or not binding? Aymeric Devergie, Frédéric Berthommier, Nicolas Grimault. 140-146 [doi]
- Effects of exhaustivity and uncertainty on audiovisual focus production. Charlotte Wollermann, Bernhard Schröder. 145-150 [doi]
- Voice activity detection based on fusion of audio and visual information. Shin'ichi Takeuchi, Takashi Hashiba, Satoshi Tamura, Satoru Hayamizu. 151-154 [doi]
- Space-time audio-visual speech recognition with multiple multi-class probabilistic support vector machines. Samuel Pachoud, Shaogang Gong, Andrea Cavallaro. 155-160 [doi]
- Refinement of lip shape in sign speech synthesis. Zdenek Krnoul. 161-165 [doi]
- An image-based talking head system. Kang Liu, Jörn Ostermann. 166 [doi]
- The UWB 3D talking head text-driven system controlled by the SAT method used for the LIPS 2009 challenge. Zdenek Krnoul, Milos Zelezný. 167-168 [doi]
- Synface - verbal and non-verbal face animation from audio. Jonas Beskow, Giampiero Salvi, Samer Al Moubayed. 169 [doi]
- HMM-based motion trajectory generation for speech animation synthesis. Lijuan Wang, Wei Han, Xiaojun Qian, Frank K. Soong. 170 [doi]