Journal: J. Multimodal User Interfaces

Volume 2, Issue 3-4

145 -- 156: Ju-Hwan Lee, Charles Spence. Feeling what you hear: task-irrelevant sounds modulate tactile perception delivered via a touch screen
157 -- 169: Dongmei Jiang, Ilse Ravyse, Hichem Sahli, Werner Verhelst. Speech driven realistic mouth animation based on multi-modal unit selection
171 -- 186: Anton Batliner, Christian Hacker, Elmar Nöth. To talk or not to talk with a computer
187 -- 198: Matei Mancas, Donald Glowinski, Gualtiero Volpe, Antonio Camurri, Pierre Bretéché, Jonathan Demeyer, Thierry Ravet, Paolo Coletta. Real-time motion attention and expressive gesture interfaces
199 -- 203: Shuichi Sakamoto, Akihiro Tanaka, Komi Tsumura, Yôiti Suzuki. Effect of speed difference between time-expanded speech and moving image of talker's face on word intelligibility
205 -- 216: Elizabeth S. Redden, Linda R. Elliott, Rodger A. Pettitt, Christian B. Carstens. A tactile option to reduce robot controller size
217 -- 235: Georgios Goudelis, Anastasios Tefas, Ioannis Pitas. Emerging biometric modalities: a survey