- Acoustical and visual processing in the animal kingdom. Sverre Sjölander. [doi]
- From actor to avatar: real world challenges in capturing the human face. Colm Massey. [doi]
- Visual speech influences speeded auditory identification. Tim Paris, Jeesun Kim, Chris Davis. 5-8 [doi]
- Do infants detect A-V articulator congruency for non-native click consonants? Catherine T. Best, Christian Kroos, Julia Irwin. 9-14 [doi]
- Perceiving visual prosody from point-light displays. Erin Cvejic, Jeesun Kim, Chris Davis. 15-20 [doi]
- Binding and unbinding the McGurk effect in audiovisual speech fusion: follow-up experiments on a new paradigm. Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz. 21-24 [doi]
- Children's expression of uncertainty in collaborative and competitive contexts. Mandy Visser, Emiel Krahmer, Marc Swerts. 25-30 [doi]
- The effect of seeing the interlocutor on auditory and visual speech production in noise. Michael Fitzpatrick, Jeesun Kim, Chris Davis. 31-35 [doi]
- Auditory-visual discrimination and identification of lexical tone within and across tone languages. Denis Burnham, Virginie Attina, Benjawan Kasisopa. 37-42 [doi]
- Audiovisual perception of counter-expectational questions. Joan Borràs-Comes, Cecilia Pugliesi, Pilar Prieto. 43-47 [doi]
- Introducing visual target cost within an acoustic-visual unit-selection speech synthesizer. Utpala Musti, Vincent Colotte, Asterios Toutios, Slim Ouni. 49-55 [doi]
- Auditory and photo-realistic audiovisual speech synthesis for Dutch. Wesley Mattheyses, Lukas Latacz, Werner Verhelst. 55-60 [doi]
- Photo-realistic visual speech synthesis based on AAM features and an articulatory DBN model with constrained asynchrony. Peng Wu, Dongmei Jiang, He Zhang, Hichem Sahli. 61-66 [doi]
- Talking heads for elderly and Alzheimer patients (THEA): project report and demonstration. Sascha Fagel. 67 [doi]
- Improving naturalness of visual speech synthesis. László Czap, János Mátyás. 69 [doi]
- A robotic head using projected animated faces. Samer Al Moubayed, Simon Alexandersson, Jonas Beskow, Björn Granström. 71 [doi]
- Audiovisual speech processing in visual speech noise. Jeesun Kim, Chris Davis. 73-76 [doi]
- Audiovisual streaming in voicing perception: new evidence for a low-level interaction between audio and visual modalities. Frédéric Berthommier, Jean-Luc Schwartz. 77-80 [doi]
- An ordinal model of the McGurk illusion. Tobias S. Andersen. 81-86 [doi]
- Thin slices of head movements during problem solving reveal level of difficulty. Bart Joosten, Marije van Amelsvoort, Emiel Krahmer, Eric O. Postma. 87-92 [doi]
- Dimensional mapping of multimodal integration on audiovisual emotion perception. Yoshiko Arimoto, Kazuo Okanoya. 93-98 [doi]
- Turn-taking control using gaze in multiparty human-computer dialogue: effects of 2D and 3D displays. Samer Al Moubayed, Gabriel Skantze. 99-102 [doi]
- Bilingual corpus for AVASR using multiple sensors and depth information. Georgios Galatas, Gerasimos Potamianos, Dimitrios I. Kosmopoulos, Christopher McMurrough, Fillia Makedon. 103-106 [doi]
- Kinetic data for large-scale analysis and modeling of face-to-face conversation. Jonas Beskow, Simon Alexandersson, Samer Al Moubayed, Jens Edlund, David House. 107-110 [doi]
- "Mask-bot": a life-size talking head animated robot for AV speech and human-robot communication research. Takaaki Kuratate, Brennand Pierce, Gordon Cheng. 111-116 [doi]
- Development of communication support system using lip reading. Takeshi Saitoh. 117-122 [doi]
- LUCIA-webGL: a web based Italian MPEG-4 talking head. Giuseppe Riccardo Leone, Piero Cosi. 123-126 [doi]
- Improved detection of ball hit events in a tennis game using multimodal information. Qiang Huang, Stephen J. Cox, Fei Yan, Teofilo de Campos, David Windridge, Josef Kittler, William J. Christmas. 127-130 [doi]
- Speech-driven lip motion generation for tele-operated humanoid robots. Carlos Toshinori Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita. 131-135 [doi]
- On the audiovisual asynchrony of speech. László Czap. 137-140 [doi]