- Embodied language learning with the humanoid robot iCub. Angelo Cangelosi. 1 [doi]
- Audiovisual speech integration: modulatory factors and the link to sound symbolism. Charles Spence. 3 [doi]
- Who presents worst? a study on expressions of negative feedback in different intergroup contexts. Mandy Visser, Emiel Krahmer, Marc Swerts. 5-10 [doi]
- Audio-visual speaker conversion using prosody features. Adela Barbulescu, Thomas Hueber, Gérard Bailly, Rémi Ronfard. 11-16 [doi]
- Spontaneous synchronisation between repetitive speech and rhythmic gesture. Gregory Zelic, Jeesun Kim, Chris Davis. 17-20 [doi]
- Culture and nonverbal cues: how does power distance influence facial expressions in game contexts? Phoebe Mui, Martijn Goudbeek, Marc Swerts, Per van der Wijst. 21-26 [doi]
- Predicting head motion from prosodic and linguistic features. Angelika Hönemann, Diego Evin, Alejandro J. Hadad, Hansjörg Mixdorff, Sascha Fagel. 27-30 [doi]
- Visual control of hidden-semi-Markov-model based acoustic speech synthesis. Jakob Hollenstein, Michael Pucher, Dietmar Schabus. 31-36 [doi]
- Objective and subjective feature evaluation for speaker-adaptive visual speech synthesis. Dietmar Schabus, Michael Pucher, Gregor Hofer. 37-42 [doi]
- Audio-visual interaction in sparse representation features for noise robust audio-visual speech recognition. Peng Shen, Satoshi Tamura, Satoru Hayamizu. 43-48 [doi]
- Assessing the visual speech perception of sampled-based talking heads. Paula D. Paro Costa, José Mario De Martino. 49-54 [doi]
- Speech animation using electromagnetic articulography as motion capture data. Ingmar Steiner, Korin Richmond, Slim Ouni. 55-60 [doi]
- Phonetic information in audiovisual speech is more important for adults than for infants; preliminary findings. Martijn Baart, Jean Vroomen, Kathleen E. Shaw, Heather Bortfeld. 61-64 [doi]
- Audiovisual speech perception in children with autism spectrum disorders and typical controls. Julia Irwin, Lawrence Brancazio. 65-70 [doi]
- Looking for the bouba-kiki effect in prelexical infants. Mathilde Fort, Alexa Weiß, Alexander Martin, Sharon Peperkamp. 71-76 [doi]
- Audiovisual speech perception in children and adolescents with developmental dyslexia: no deficit with McGurk stimuli. Margriet A. Groen, Alexandra Jesse. 77-80 [doi]
- Effects of forensically-realistic facial concealment on auditory-visual consonant recognition in quiet and noise conditions. Natalie Fecher, Dominic Watt. 81-86 [doi]
- Impact of cued speech on audio-visual speech integration in deaf and hearing adults. Clémence Bayard, Cécile Colin, Jacqueline Leybaert. 87-92 [doi]
- Acoustic and visual adaptations in speech produced to counter adverse listening conditions. Valérie Hazan, Jeesun Kim. 93-98 [doi]
- Role of audiovisual plasticity in speech recovery after adult cochlear implantation. Pascal Barone, Kuzma Strelnikov, Olivier Déguine. 99-104 [doi]
- Auditory and auditory-visual Lombard speech perception by younger and older adults. Michael Fitzpatrick, Jeesun Kim, Chris Davis. 105-110 [doi]
- Integration of acoustic and visual cues in prominence perception. Hansjörg Mixdorff, Angelika Hönemann, Sascha Fagel. 111-116 [doi]
- Detecting auditory-visual speech synchrony: how precise? Chris Davis, Jeesun Kim. 117-122 [doi]
- How far out? the effect of peripheral visual speech on speech perception. Jeesun Kim, Chris Davis. 123-128 [doi]
- Temporal integration for live conversational speech. Ragnhild Eg, Dawn M. Behne. 129-134 [doi]
- Mixing faces and voices: a study of the influence of faces and voices on audiovisual intelligibility. Jérémy Miranda, Slim Ouni. 135-140 [doi]
- The touch of your lips: haptic information speeds up auditory speech processing. Avril Treille, Camille Cordeboeuf, Coriandre Vilain, Marc Sato. 141-146 [doi]
- Data and simulations about audiovisual asynchrony and predictability in speech perception. Jean-Luc Schwartz, Christophe Savariaux. 147-152 [doi]
- The effect of musical aptitude on the integration of audiovisual speech and non-speech signals in children. Kaisa Tiippana, Kaupo Viitanen, Riia Kivimäki. 153-156 [doi]
- The sight of your tongue: neural correlates of audio-lingual speech perception. Avril Treille, Coriandre Vilain, Thomas Hueber, Jean-Luc Schwartz, Laurent Lamalle, Marc Sato. 157-162 [doi]
- Visual front-end wars: Viola-Jones face detector vs Fourier Lucas-Kanade. Shahram Kalantari, Rajitha Navarathna, David Dean, Sridha Sridharan. 163-168 [doi]
- Aspects of co-occurring syllables and head nods in spontaneous dialogue. Simon Alexandersson, David House, Jonas Beskow. 169-172 [doi]
- Avatar user interfaces in an OSGi-based system for health care services. Sascha Fagel, Andreas Hilbert, Christopher C. Mayer, Martin Morandell, Matthias Gira, Martin Petzold. 173-174 [doi]
- Automatic feature selection for acoustic-visual concatenative speech synthesis: towards a perceptual objective measure. Utpala Musti, Vincent Colotte, Slim Ouni, Caroline Lavecchia, Brigitte Wrobel-Dautcourt, Marie-Odile Berger. 175-180 [doi]
- Modulating fusion in the McGurk effect by binding processes and contextual noise. Olha Nahorna, Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Jean-Luc Schwartz. 181-186 [doi]
- Visual voice activity detection at different speeds. Bart Joosten, Eric O. Postma, Emiel Krahmer. 187-190 [doi]
- GMM mapping of visual features of cued speech from speech spectral features. Zuheng Ming, Denis Beautemps, Gang Feng. 191-196 [doi]
- Confusion modelling for automated lip-reading using weighted finite-state transducers. Dominic Howell, Barry-John Theobald, Stephen J. Cox. 197-202 [doi]
- Transforming neutral visual speech into expressive visual speech. Felix Shaw, Barry-John Theobald. 203-208 [doi]
- Differences in the audio-visual detection of word prominence from Japanese and English speakers. Martin Heckmann, Keisuke Nakamura, Kazuhiro Nakadai. 209-214 [doi]
- Speaker separation using visually-derived binary masks. Faheem Khan, Ben Milner. 215-220 [doi]
- Improvement of lipreading performance using discriminative feature and speaker adaptation. Takumi Seko, Naoya Ukai, Satoshi Tamura, Satoru Hayamizu. 221-226 [doi]
- Efficient face model for lip reading. Takeshi Saitoh. 227-232 [doi]