Journal: J. Multimodal User Interfaces

Volume 3, Issue 4

249 -- 261: Maurizio Mancini, Catherine Pelachaud. Generating distinctive behavior for Embodied Conversational Agents
263 -- 270: Stéphanie Buisine, Yun Wang, Ouriel Grynszpan. Empirical investigation of the temporal relations between speech and facial expressions of emotion
271 -- 284: Herwin van Welbergen, Dennis Reidsma, Zsófia Ruttkay, Job Zwiers. Elckerlyc
285 -- 297: David Díaz Pardo de Vera, Beatriz López-Mencía, Álvaro Hernández Trapote, Luis A. Hernández Gómez. Non-verbal communication strategies to improve robustness in dialogue systems: a comparative study
299 -- 309: Samer Al Moubayed, Jonas Beskow, Björn Granström. Auditory visual prominence

Volume 3, Issue 3

155 -- 156: Marilyn Rose McGee-Lennon, Laurence Nigay, Philip D. Gray. The challenges of engineering multimodal interaction
157 -- 165: Luca Chittaro. Distinctive aspects of mobile interaction and their implications for the design of multimodal interfaces
167 -- 177: Andrew Ramsay, Marilyn Rose McGee-Lennon, Graham A. Wilson, Steven J. Gray, Philip D. Gray, François De Turenne. Tilt and go: exploring multimodal mobile maps in the field
179 -- 188: Lynne Baillie, Lee Morton, Stephen Uzor, David C. Moffatt. An investigation of user responses to specifically designed activities in a multimodal location based game
189 -- 196: Guillaume Rivière, Nadine Couture, Patrick Reuter. The activation of modality in virtual objects assembly
197 -- 213: Werner A. König, Roman Rädle, Harald Reiterer. Interactive design of multimodal user interfaces
215 -- 225: Marcos Serrano, Laurence Nigay. A wizard of oz component-based approach for rapidly prototyping and testing input multimodal interfaces
227 -- 236: Diego Arnone, Alessandro Rossi, Massimo Bertoncini. An open source integrated framework for rapid prototyping of multimodal affective applications in digital entertainment
237 -- 247: Bruno Dumas, Denis Lalanne, Rolf Ingold. Description languages for multimodal interaction: a set of guidelines and its illustration with SMUIML

Volume 3, Issue 1-2

1 -- 3: Ginevra Castellano, Kostas Karpouzis, Christopher E. Peters, Jean-Claude Martin. Special issue on real-time affect analysis and interpretation: closing the affective loop in virtual agents and robots
7 -- 19: Florian Eyben, Martin Wöllmer, Alex Graves, Björn W. Schuller, Ellen Douglas-Cowie, Roddy Cowie. On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues
21 -- 31: Abdul Rehman Abbasi, Matthew N. Dailey, Nitin V. Afzulpurkar, Takeaki Uno. Student mental state inference from unintentional body gestures using dynamic Bayesian networks
33 -- 48: Loïc Kessous, Ginevra Castellano, George Caridakis. Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis
49 -- 66: George Caridakis, Kostas Karpouzis, Manolis Wallace, Loïc Kessous, Noam Amir. Multimodal user's affective state analysis in naturalistic interaction
67 -- 78: Pieter-Jan Maes, Marc Leman, Micheline Lesaffre, Michiel Demey, Dirk Moelants. From expressive gesture to sound
79 -- 86: Isabella Poggi, Francesca D'Errico. The mental ingredients of bitterness
89 -- 98: Ginevra Castellano, Iolanda Leite, André Pereira, Carlos Martinho, Ana Paiva, Peter W. McOwan. Affect recognition for interactive companions: challenges and design in real world scenarios
99 -- 108: Laurel D. Riek, Philip C. Paul, Peter Robinson. When my robot smiles at me: enabling human-robot rapport via real-time head gesture mimicry
109 -- 118: Birgitta Burger, Roberto Bresin. Communication of musical expression by means of mobile robot gestures
119 -- 130: Christopher E. Peters, Stylianos Asteriadis, Kostas Karpouzis. Investigating shared attention with a virtual agent using a gaze-based interface
131 -- 140: Nicole Novielli. HMM modeling of user engagement in advice-giving dialogues
141 -- 153: Dennis Hofs, Mariët Theune, Rieks op den Akker. Natural interaction with a virtual guide in a virtual environment