Journal: J. Multimodal User Interfaces

Volume 7, Issue 4

269 -- 279: Andrea Sanna, Fabrizio Lamberti, Gianluca Paravati, Felipe Domingues Rocha. A Kinect-based interface to animate virtual characters
281 -- 297: Mahmoud Ghorbel, Stéphane Betgé-Brezetz, Marie-Pascale Dupont, Guy-Bertrand Kamga, Sophie Piekarec, Juliette Reerink, Arnaud Vergnol. Multimodal notification framework for elderly and professional in a smart nursing home
299 -- 310: Felix Schüssel, Frank Honold, Michael Weber. Influencing factors on multimodal interaction during selection tasks
311 -- 319: Matthieu Courgeon, Céline Clavel. MARC: a framework that features emotion models for facial animation during human-computer interaction
321 -- 349: Elena Vildjiounaite, Daniel Schreiber, Vesa Kyllönen, Marcus Ständer, Ilkka Niskanen, Jani Mäntyjärvi. Prediction of interface preferences with a classifier selection approach
351 -- 370: Nadia Elouali, José Rouillard, Xavier Le Pallec, Jean-Claude Tarby. Multimodal interaction: a survey from model driven engineering and mobile perspectives

Volume 7, Issue 3

171 -- 182: Deborah A. Dahl. The W3C multimodal architecture and interfaces standard
183 -- 194: Dirk Schnelle-Walka, Stefan Radomski, Max Mühlhäuser. JVoiceXML as a modality component in the W3C multimodal architecture
195 -- 206: Kostas Karpouzis, George Caridakis, Roddy Cowie, Ellen Douglas-Cowie. Induction, recording and recognition of natural emotions from facial expressions and speech prosody
207 -- 215: Christopher McMurrough, Vangelis Metsis, Dimitrios I. Kosmopoulos, Ilias Maglogiannis, Fillia Makedon. A dataset for point of gaze detection using head poses and eye images
217 -- 228: Jyoti Joshi, Roland Goecke, Sharifa Alghowinem, Abhinav Dhall, Michael Wagner, Julien Epps, Gordon Parker, Michael Breakspear. Multimodal assistive technologies for depression diagnosis and monitoring
229 -- 245: Christian Peter, Andreas Kreiner, Martin Schröter, Hyosun Kim, Gerald Bieber, Fredrik Öhberg, Kei Hoshi, Eva Lindh Waterworth, John A. Waterworth, Soledad Ballesteros. AGNES: Connecting people in a multimodal way
247 -- 267: Randy Klaassen, Rieks op den Akker, Tine Lavrysen, Susan van Wissen. User preferences for multi-device context-aware feedback in a digital coaching system

Volume 7, Issue 1-2

1 -- 3: Patrizia Paggio, Dirk Heylen, Michael Kipp. Preface
5 -- 18: Andy Lücking, Kirsten Bergmann, Florian Hahn, Stefan Kopp, Hannes Rieser. Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications
19 -- 28: Catharine Oertel, Fred Cummins, Jens Edlund, Petra Wagner, Nick Campbell. D64: a corpus of richly recorded conversational interaction
29 -- 37: Patrizia Paggio, Costanza Navarretta. Head movements, facial expressions and feedback in conversations: empirical evidence from Danish multimodal data
39 -- 53: Dairazalia Sanchez-Cortes, Oya Aran, Dinesh Babu Jayagopi, Marianne Schmid Mast, Daniel Gatica-Perez. Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition
55 -- 66: Brigitte Bigi, Cristel Portes, Agnès Steuckardt, Marion Tellier. A multimodal study of answers to disruptions
67 -- 78: Isabella Poggi, Francesca D'Errico, Laura Vincze. Comments by words, face and body
79 -- 91: Xavier Alameda-Pineda, Jordi Sanchez-Riera, Johannes Wienke, Vojtech Franc, Jan Cech, Kaustubh Kulkarni, Antoine Deleforge, Radu Horaud. RAVEL: an annotated corpus for training robots with audiovisual abilities
93 -- 109: Anthony Fleury, Michel Vacher, François Portet, Pedro Chahuara, Norbert Noury. A French corpus of audio and multimodal interactions in a health smart home
111 -- 119: Michel Dubois, Damien Dupré, Jean-Michel Adam, Anna Tcherkassof, Nadine Mandran, Brigitte Meillon. The influence of facial interface design on dynamic emotional recognition
121 -- 134: George Caridakis, Johannes Wagner, Amaryllis Raouzaiou, Florian Lingenfelser, Kostas Karpouzis, Elisabeth André. A cross-cultural, multimodal, affective corpus for gesture expressivity analysis
135 -- 142: Jocelynn Cu, Katrina Ysabel Solomon, Merlin Teodosia Suarez, Madelene Sta. Maria. A multimodal emotion corpus for Filipino and its uses
143 -- 155: Marko Tkalcic, Andrej Kosir, Jurij F. Tasic. The LDOS-PerAff-1 corpus of facial-expression video clips with affective, personality and user-interaction metadata
157 -- 170: Slim Essid, Xinyu Lin, Marc Gowing, Georgios Kordelas, Anil Aksay, Philip Kelly, Thomas Fillon, Qianni Zhang, Alfred Dielmann, Vlado Kitanovski, Robin Tournemenne, Aymeric Masurelle, Ebroul Izquierdo, Noel E. O'Connor, Petros Daras, Gaël Richard. A multi-modal dance corpus for research into interaction between humans in virtual environments