Journal: J. Multimodal User Interfaces

Volume 7, Issue 1-2

1 -- 3: Patrizia Paggio, Dirk Heylen, Michael Kipp. Preface
5 -- 18: Andy Lücking, Kirsten Bergmann, Florian Hahn, Stefan Kopp, Hannes Rieser. Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications
19 -- 28: Catharine Oertel, Fred Cummins, Jens Edlund, Petra Wagner, Nick Campbell. D64: a corpus of richly recorded conversational interaction
29 -- 37: Patrizia Paggio, Costanza Navarretta. Head movements, facial expressions and feedback in conversations: empirical evidence from Danish multimodal data
39 -- 53: Dairazalia Sanchez-Cortes, Oya Aran, Dinesh Babu Jayagopi, Marianne Schmid Mast, Daniel Gatica-Perez. Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition
55 -- 66: Brigitte Bigi, Cristel Portes, Agnès Steuckardt, Marion Tellier. A multimodal study of answers to disruptions
67 -- 78: Isabella Poggi, Francesca D'Errico, Laura Vincze. Comments by words, face and body
79 -- 91: Xavier Alameda-Pineda, Jordi Sanchez-Riera, Johannes Wienke, Vojtech Franc, Jan Cech, Kaustubh Kulkarni, Antoine Deleforge, Radu Horaud. RAVEL: an annotated corpus for training robots with audiovisual abilities
93 -- 109: Anthony Fleury, Michel Vacher, François Portet, Pedro Chahuara, Norbert Noury. A french corpus of audio and multimodal interactions in a health smart home
111 -- 119: Michel Dubois, Damien Dupré, Jean-Michel Adam, Anna Tcherkassof, Nadine Mandran, Brigitte Meillon. The influence of facial interface design on dynamic emotional recognition
121 -- 134: George Caridakis, Johannes Wagner, Amaryllis Raouzaiou, Florian Lingenfelser, Kostas Karpouzis, Elisabeth André. A cross-cultural, multimodal, affective corpus for gesture expressivity analysis
135 -- 142: Jocelynn Cu, Katrina Ysabel Solomon, Merlin Teodosia Suarez, Madelene Sta. Maria. A multimodal emotion corpus for Filipino and its uses
143 -- 155: Marko Tkalcic, Andrej Kosir, Jurij F. Tasic. The LDOS-PerAff-1 corpus of facial-expression video clips with affective, personality and user-interaction metadata
157 -- 170: Slim Essid, Xinyu Lin, Marc Gowing, Georgios Kordelas, Anil Aksay, Philip Kelly, Thomas Fillon, Qianni Zhang, Alfred Dielmann, Vlado Kitanovski, Robin Tournemenne, Aymeric Masurelle, Ebroul Izquierdo, Noel E. O'Connor, Petros Daras, Gaël Richard. A multi-modal dance corpus for research into interaction between humans in virtual environments