| Pages | Authors and Title |
| --- | --- |
| 1–3 | Patrizia Paggio, Dirk Heylen, Michael Kipp. Preface |
| 5–18 | Andy Lücking, Kirsten Bergmann, Florian Hahn, Stefan Kopp, Hannes Rieser. Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications |
| 19–28 | Catharine Oertel, Fred Cummins, Jens Edlund, Petra Wagner, Nick Campbell. D64: a corpus of richly recorded conversational interaction |
| 29–37 | Patrizia Paggio, Costanza Navarretta. Head movements, facial expressions and feedback in conversations: empirical evidence from Danish multimodal data |
| 39–53 | Dairazalia Sanchez-Cortes, Oya Aran, Dinesh Babu Jayagopi, Marianne Schmid Mast, Daniel Gatica-Perez. Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition |
| 55–66 | Brigitte Bigi, Cristel Portes, Agnès Steuckardt, Marion Tellier. A multimodal study of answers to disruptions |
| 67–78 | Isabella Poggi, Francesca D'Errico, Laura Vincze. Comments by words, face and body |
| 79–91 | Xavier Alameda-Pineda, Jordi Sanchez-Riera, Johannes Wienke, Vojtech Franc, Jan Cech, Kaustubh Kulkarni, Antoine Deleforge, Radu Horaud. RAVEL: an annotated corpus for training robots with audiovisual abilities |
| 93–109 | Anthony Fleury, Michel Vacher, François Portet, Pedro Chahuara, Norbert Noury. A French corpus of audio and multimodal interactions in a health smart home |
| 111–119 | Michel Dubois, Damien Dupré, Jean-Michel Adam, Anna Tcherkassof, Nadine Mandran, Brigitte Meillon. The influence of facial interface design on dynamic emotional recognition |
| 121–134 | George Caridakis, Johannes Wagner, Amaryllis Raouzaiou, Florian Lingenfelser, Kostas Karpouzis, Elisabeth André. A cross-cultural, multimodal, affective corpus for gesture expressivity analysis |
| 135–142 | Jocelynn Cu, Katrina Ysabel Solomon, Merlin Teodosia Suarez, Madelene Sta. Maria. A multimodal emotion corpus for Filipino and its uses |
| 143–155 | Marko Tkalcic, Andrej Kosir, Jurij F. Tasic. The LDOS-PerAff-1 corpus of facial-expression video clips with affective, personality and user-interaction metadata |
| 157–170 | Slim Essid, Xinyu Lin, Marc Gowing, Georgios Kordelas, Anil Aksay, Philip Kelly, Thomas Fillon, Qianni Zhang, Alfred Dielmann, Vlado Kitanovski, Robin Tournemenne, Aymeric Masurelle, Ebroul Izquierdo, Noel E. O'Connor, Petros Daras, Gaël Richard. A multi-modal dance corpus for research into interaction between humans in virtual environments |