pp. 1--4: Philip Rubin, Eric Vatikiotis-Bateson. Editorial
pp. 5--21: Isabella Poggi, Catherine Pelachaud. Performative faces
pp. 23--43: Hani Yehia, Philip Rubin, Eric Vatikiotis-Bateson. Quantitative association of vocal-tract and facial behavior
pp. 45--63: Paul Iverson, Lynne E. Bernstein, Edward T. Auer. Modeling the interaction of phonemic intelligibility and lexical structure in audiovisual word recognition
pp. 65--73: Robert E. Remez, Jennifer M. Fellowes, David B. Pisoni, Winston D. Goh, Philip Rubin. Multimodal perceptual organization of speech: Evidence from tone analogs of spoken utterances
pp. 75--87: Mikko Sams, Petri Manninen, Veikko Surakka, Pia Helin, Riitta Kättö. McGurk effect in Finnish syllables, isolated words, and words in sentences: Effects of word meaning and sentence context
pp. 89--96: Béatrice de Gelder, Jean Vroomen. Impairment of speech-reading in prosopagnosia
pp. 97--103: Art Blokland, Anne H. Anderson. Effect of low frame-rate video on intelligibility of speech
pp. 105--115: Eli Yamamoto, Satoshi Nakamura, Kiyohiro Shikano. Lip movement synthesis from speech based on Hidden Markov Models
pp. 117--129: Christian Benoît, Bertrand Le Goff. Audio-visual speech synthesis from French text: Eight years of models, designs and evaluation at the ICP
pp. 131--148: Sumit Basu, Nuria Oliver, Alex Pentland. 3D lip shapes from video: A combined physical-statistical model
pp. 149--161: Alexandrina Rogozan, Paul Deléglise. Adaptive fusion of acoustic and visual sources for automatic speech recognition