Journal: J. Multimodal User Interfaces

Volume 2, Issue 3-4

145 -- 156  Ju-Hwan Lee, Charles Spence. Feeling what you hear: task-irrelevant sounds modulate tactile perception delivered via a touch screen
157 -- 169  Dongmei Jiang, Ilse Ravyse, Hichem Sahli, Werner Verhelst. Speech driven realistic mouth animation based on multi-modal unit selection
171 -- 186  Anton Batliner, Christian Hacker, Elmar Nöth. To talk or not to talk with a computer
187 -- 198  Matei Mancas, Donald Glowinski, Gualtiero Volpe, Antonio Camurri, Pierre Bretéché, Jonathan Demeyer, Thierry Ravet, Paolo Coletta. Real-time motion attention and expressive gesture interfaces
199 -- 203  Shuichi Sakamoto, Akihiro Tanaka, Komi Tsumura, Yôiti Suzuki. Effect of speed difference between time-expanded speech and moving image of talker's face on word intelligibility
205 -- 216  Elizabeth S. Redden, Linda R. Elliott, Rodger A. Pettitt, Christian B. Carstens. A tactile option to reduce robot controller size
217 -- 235  Georgios Goudelis, Anastasios Tefas, Ioannis Pitas. Emerging biometric modalities: a survey

Volume 2, Issue 2

73 -- 74    Bülent Sankur. Guest Editorial of the special eNTERFACE issue
75 -- 91    Albert Ali Salah, Ramon Morros, Jordi Luque, Carlos Segura, Javier Hernando, Onkar Ambekar, Ben A. M. Schouten, Eric J. Pauwels. Multimodal identification and localization of users in a smart environment
93 -- 103   Ferda Ofli, Yasemin Demir, Yücel Yemez, Engin Erzin, A. Murat Tekalp, Koray Balci, Idil Kizoglu, Lale Akarun, Cristian Canton-Ferrer, Joëlle Tilmanne, Elif Bozkurt, A. Tanju Erdem. An audio-driven dancing avatar
105 -- 116  Savvas Argyropoulos, Konstantinos Moustakas, Alexey Karpov, Oya Aran, Dimitrios Tzovaras, Thanos Tsakiris, Giovanna Varni, Byungjun Kwon. Multimodal user interface for the communication of the disabled
117 -- 131  Oya Aran, Ismail Ari, Lale Akarun, Erinç Dikici, Siddika Parlak, Murat Saraçlar, Pavel Campr, Marek Hrúz. Speech and sliding text aided sign retrieval from hearing impaired sign news videos
133 -- 144  Nicolas D'Alessandro, Onur Babacan, Baris Bozkurt, Thomas Dubuisson, Andre Holzapfel, Loïc Kessous, Alexis Moinet, Maxime Vlieghe. RAMCESS 2.X framework - expressive voice analysis for realtime and accurate synthesis of singing

Volume 2, Issue 1

1           Jean Vanderdonckt. Editorial
3 -- 11     Brandon Paulson, Tracy Hammond. MARQS: retrieving sketches learned from a single example using a dual-classifier
13 -- 23    Beryl Plimmer. Experiences with digital pen, keyboard and mouse usability
25 -- 41    Robbie Schaefer, Wolfgang Müller. Assessment of a multimodal interaction and rendering system against established design principles
43 -- 52    Julián García, José Pascual Molina, Diego Martínez, Arturo S. García, Pascual González, Jean Vanderdonckt. Prototyping and evaluating glove-based multimodal interfaces
53 -- 60    Marie-Luce Bourguet, Jaeseung Chang. Design and usability evaluation of multimodal interaction with finite state machines: a conceptual framework
61 -- 72    Daniel Schreiber, Melanie Hartmann, Felix Flentge, Max Mühlhäuser, Manuel Görtz, Thomas Ziegert. Web based evaluation of proactive user interfaces