- Gastrophysics: using technology to enhance the experience of food and drink (keynote). Charles Spence. 1 [doi]
- Collaborative robots: from action and interaction to collaboration (keynote). Danica Kragic. 2 [doi]
- Situated conceptualization: a framework for multimodal interaction (keynote). Lawrence W. Barsalou. 3 [doi]
- Steps towards collaborative multimodal dialogue (sustained contribution award). Phil Cohen. 4 [doi]
- Tablets, tabletops, and smartphones: cross-platform comparisons of children's touchscreen interactions. Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, Lisa Anthony. 5-14 [doi]
- Toward an efficient body expression recognition based on the synthesis of a neutral movement. Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, Saïda Bouakaz. 15-22 [doi]
- Interactive narration with a child: impact of prosody and facial expressions. Ovidiu Serban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, Émilie Chanoni. 23-31 [doi]
- Comparing human and machine recognition of children's touchscreen stroke gestures. Alex Shaw, Jaime Ruiz, Lisa Anthony. 32-40 [doi]
- Virtual debate coach design: assessing multimodal argumentation performance. Volha Petukhova, Tobias Mayer, Andrei Malchanau, Harry Bunt. 41-50 [doi]
- Predicting the distribution of emotion perception: capturing inter-rater variability. Biqiao Zhang, Georg Essl, Emily Mower Provost. 51-59 [doi]
- Automatically predicting human knowledgeability through non-verbal cues. Abdelwahab Bourai, Tadas Baltrusaitis, Louis-Philippe Morency. 60-67 [doi]
- Pooling acoustic and lexical features for the prediction of valence. Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, Emily Mower Provost. 68-72 [doi]
- Hand-to-hand: an intermanual illusion of movement. Dario Pittera, Marianna Obrist, Ali Israr. 73-81 [doi]
- An investigation of dynamic crossmodal instantiation in TUIs. Feng Feng, Tony Stockman. 82-90 [doi]
- "Stop over there": natural gesture and speech interaction for non-critical spontaneous intervention in autonomous driving. Robert Tscharn, Marc Erich Latoschik, Diana Löffler, Jörn Hurtienne. 91-100 [doi]
- Pre-touch proxemics: moving the design space of touch targets from still graphics towards proxemic behaviors. Ilhan Aslan, Elisabeth André. 101-109 [doi]
- Freehand grasping in mixed reality: analysing variation during transition phase of interaction. Maadh Al Kalbani, Maite Frutos Pascual, Ian Williams. 110-114 [doi]
- Rhythmic micro-gestures: discreet interaction on-the-go. Euan Freeman, Gareth Griffiths, Stephen A. Brewster. 115-119 [doi]
- Evaluation of psychoacoustic sound parameters for sonification. Jamie Ferguson, Stephen A. Brewster. 120-127 [doi]
- Utilising natural cross-modal mappings for visual control of feature-based sound synthesis. Augoustinos Tsiros, Grégory Leplâtre. 128-136 [doi]
- Automatic classification of auto-correction errors in predictive text entry based on EEG and context information. Felix Putze, Maik Schünemann, Tanja Schultz, Wolfgang Stuerzlinger. 137-145 [doi]
- Cumulative attributes for pain intensity estimation. Joy O. Egede, Michel F. Valstar. 146-153 [doi]
- Towards the use of social interaction conventions as prior for gaze model adaptation. Rémy Siegfried, Yu Yu, Jean-Marc Odobez. 154-162 [doi]
- Multimodal sentiment analysis with word-level fusion and reinforcement learning. Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency. 163-171 [doi]
- IntelliPrompter: speech-based dynamic note display interface for oral presentations. Reza Asadi, Ha Trinh, Harriet J. Fell, Timothy W. Bickmore. 172-180 [doi]
- Head and shoulders: automatic error detection in human-robot interaction. Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, Manfred Tscheligi. 181-188 [doi]
- The reliability of non-verbal cues for situated reference resolution and their interplay with language: implications for human robot interaction. Stephanie Gross, Brigitte Krenn, Matthias Scheutz. 189-196 [doi]
- Do you speak to a human or a virtual agent? automatic analysis of user's social cues during mediated communication. Magalie Ochs, Nathan Libermann, Axel Boidin, Thierry Chaminade. 197-205 [doi]
- Estimating verbal expressions of task and social cohesion in meetings by quantifying paralinguistic mimicry. Marjolein C. Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltán Szlávik, Hayley Hung. 206-215 [doi]
- Data augmentation of wearable sensor data for Parkinson's disease monitoring using convolutional neural networks. Terry Taewoong Um, Franz Michael Josef Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, Dana Kulic. 216-220 [doi]
- Automatic assessment of communication skill in non-conventional interview settings: a comparative study. Pooja Rao S. B., Sowmya Rasipuram, Rahul Das, Dinesh Babu Jayagopi. 221-229 [doi]
- Low-intrusive recognition of expressive movement qualities. Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, Antonio Camurri. 230-237 [doi]
- Digitising a medical clerking system with multimodal interaction support. Harrison South, Martin Taylor, Huseyin Dogan, Nan Jiang. 238-242 [doi]
- GazeTap: towards hands-free interaction in the operating room. Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, Christian Hansen. 243-251 [doi]
- Boxer: a multimodal collision technique for virtual objects. Byungjoo Lee, Qiao Deng, Eve E. Hoggan, Antti Oulasvirta. 252-260 [doi]
- Trust triggers for multimodal command and control interfaces. Helen F. Hastie, Xingkun Liu, Pedro Patrón. 261-268 [doi]
- TouchScope: a hybrid multitouch oscilloscope interface. Matthew Heinz, Sven Bertel, Florian Echtler. 269-273 [doi]
- A multimodal system to characterise melancholia: cascaded bag of words approach. Shalini Bhatia, Munawar Hayat, Roland Goecke. 274-280 [doi]
- Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls. Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, Keelan Evanini. 281-287 [doi]
- Modelling fusion of modalities in multimodal interactive systems with MMMM. Bruno Dumas, Jonathan Pirau, Denis Lalanne. 288-296 [doi]
- Temporal alignment using the incremental unit framework. Casey Kennington, Ting Han, David Schlangen. 297-301 [doi]
- Multimodal gender detection. Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, Mihai Burzo. 302-311 [doi]
- How may I help you? behavior and impressions in hospitality service encounters. Skanda Muralidhar, Marianne Schmid Mast, Daniel Gatica-Perez. 312-320 [doi]
- Tracking liking state in brain activity while watching multiple movies. Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, Satoshi Nakamura. 321-325 [doi]
- Does serial memory of locations benefit from spatially congruent audiovisual stimuli? investigating the effect of adding spatial sound to visuospatial sequences. Benjamin Stahl, Georgios Marentakis. 326-330 [doi]
- ZSGL: zero shot gestural learning. Naveen Madapana, Juan Pablo Wachs. 331-335 [doi]
- Markov reward models for analyzing group interaction. Gabriel Murray. 336-340 [doi]
- Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions. Béatrice Biancardi, Angelo Cafaro, Catherine Pelachaud. 341-349 [doi]
- The NoXi database: multimodal recordings of mediated novice-expert interactions. Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres, Catherine Pelachaud, Elisabeth André, Michel F. Valstar. 350-359 [doi]
- Head-mounted displays as opera glasses: using mixed-reality to deliver an egalitarian user experience during live events. Carl Bishop, Augusto Esteves, Iain McGregor. 360-364 [doi]
- Analyzing gaze behavior during turn-taking for estimating empathy skill level. Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka. 365-373 [doi]
- Text based user comments as a signal for automatic language identification of online videos. A. Seza Dogruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, Christoph Oehler. 374-378 [doi]
- Gender and emotion recognition with implicit user signals. Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, Ramanathan Subramanian. 379-387 [doi]
- Animating the Adelino robot with ERIK: the expressive robotics inverse kinematics. Tiago Ribeiro, Ana Paiva. 388-396 [doi]
- Automatic detection of pain from spontaneous facial expressions. Fatma Meawad, Su-Yin Yang, Fong Ling Loy. 397-401 [doi]
- Evaluating content-centric vs. user-centric ad affect recognition. Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Karthik Yadati, Mohan S. Kankanhalli, Ramanathan Subramanian. 402-410 [doi]
- A domain adaptation approach to improve speaker turn embedding using face representation. Nam Le, Jean-Marc Odobez. 411-415 [doi]
- Computer vision based fall detection by a convolutional neural network. Miao Yu, Liyun Gong, Stefanos D. Kollias. 416-420 [doi]
- Predicting meeting extracts in group discussions using multimodal convolutional neural networks. Fumio Nihei, Yukiko I. Nakano, Yutaka Takase. 421-425 [doi]
- The relationship between task-induced stress, vocal changes, and physiological state during a dyadic team task. Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, Stefan Scherer. 426-432 [doi]
- Meyendtris: a hands-free, multimodal tetris clone using eye tracking and passive BCI for intuitive neuroadaptive gaming. Laurens R. Krol, Sarah-Christin Freytag, Thorsten O. Zander. 433-437 [doi]
- AMHUSE: a multimodal dataset for HUmour SEnsing. Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Raffaella Lanzarotti. 438-445 [doi]
- GazeTouchPIN: protecting sensitive data on mobile devices using secure multimodal authentication. Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, Florian Alt. 446-450 [doi]
- Multi-task learning of social psychology assessments and nonverbal features for automatic leadership identification. Cigdem Beyan, Francesca Capozzi, Cristina Becchio, Vittorio Murino. 451-455 [doi]
- Multimodal analysis of vocal collaborative search: a public corpus and results. Daniel McDuff, Paul Thomas, Mary Czerwinski, Nick Craswell. 456-463 [doi]
- UE-HRI: a new dataset for the study of user engagement in spontaneous human-robot interactions. Atef Ben Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, Angelica Lim. 464-472 [doi]
- Mining a multimodal corpus of doctor's training for virtual patient's feedbacks. Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, Roxane Bertrand. 473-478 [doi]
- Multimodal affect recognition in an interactive gaming environment using eye tracking and speech signals. Ashwaq Al-Hargan, Neil Cooke, Tareq Binjammaz. 479-486 [doi]
- Multimodal interaction in classrooms: implementation of tangibles in integrated music and math lessons. Jennifer Müller, Uwe Oestermeier, Peter Gerjets. 487-488 [doi]
- Web-based interactive media authoring system with multimodal interaction. Bok Deuk Song, Yeon Jun Choi, Jong Hyun Park. 489-490 [doi]
- Textured surfaces for ultrasound haptic displays. Euan Freeman, Ross Anderson, Julie Williamson, Graham A. Wilson, Stephen A. Brewster. 491-492 [doi]
- Rapid development of multimodal interactive systems: a demonstration of platform for situated intelligence. Dan Bohus, Sean Andrist, Mihai Jalobeanu. 493-494 [doi]
- MIRIAM: a multimodal chat-based interface for autonomous systems. Helen F. Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Pedro Patrón, Atanas Laskov. 495-496 [doi]
- SAM: the school attachment monitor. Dong-Bach Vo, Mohammad Tayarani, Maki Rooksby, Rui Huan, Alessandro Vinciarelli, Helen Minnis, Stephen A. Brewster. 497-498 [doi]
- The Boston Massacre history experience. David Novick, Laura M. Rodriguez, Aaron Pacheco, Aaron Rodriguez, Laura Hinojos, Brad Cartwright, Marco Cardiel, Ivan Gris Sepulveda, Olivia Rodriguez-Herrera, Enrique Ponce. 499-500 [doi]
- Demonstrating TouchScope: a hybrid multitouch oscilloscope interface. Matthew Heinz, Sven Bertel, Florian Echtler. 501 [doi]
- The MULTISIMO multimodal corpus of collaborative interactions. Maria Koutsombogera, Carl Vogel. 502-503 [doi]
- Using mobile virtual reality to empower people with hidden disabilities to overcome their barriers. Matthieu Poyade, Glyn Morris, Ian Taylor, Victor Portela. 504-505 [doi]
- Bot or not: exploring the fine line between cyber and human identity. Mirjam Wester, Matthew P. Aylett, David A. Braude. 506-507 [doi]
- Modulating the non-verbal social signals of a humanoid robot. Amol A. Deshmukh, Bart G. W. Craenen, Alessandro Vinciarelli, Mary Ellen Foster. 508-509 [doi]
- Thermal in-car interaction for navigation. Patrizia Di Campli San Vito, Stephen A. Brewster, Frank E. Pollick, Stuart White. 510-511 [doi]
- AQUBE: an interactive music reproduction system for aquariums. Daisuke Sasaki, Musashi Nakajima, Yoshihiro Kanno. 512-513 [doi]
- Real-time mixed-reality telepresence via 3D reconstruction with HoloLens and commodity depth sensors. Michal Joachimczak, Juan Liu, Hiroshi Ando. 514-515 [doi]
- Evaluating robot facial expressions. Ruth Aylett, Frank Broz, Ayan Ghosh, Peter McKenna, Gnanathusharan Rajendran, Mary Ellen Foster, Giorgio Roffo, Alessandro Vinciarelli. 516-517 [doi]
- Bimodal feedback for in-car mid-air gesture interaction. Gözel Shakeri, John H. Williamson, Stephen A. Brewster. 518-519 [doi]
- A modular, multimodal open-source virtual interviewer dialog agent. Kirby Cofino, Vikram Ramanarayanan, Patrick L. Lange, David Pautler, David Suendermann-Oeft, Keelan Evanini. 520-521 [doi]
- Wearable interactive display for the local positioning system (LPS). Daniel M. Lofaro, Christopher Taylor, Ryan Tse, Donald Sofge. 522-523 [doi]
- From individual to group-level emotion recognition: EmotiW 5.0. Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, Tom Gedeon. 524-528 [doi]
- Multi-modal emotion recognition using semi-supervised learning and multiple neural networks in the wild. Dae Ha Kim, Min-Kyu Lee, Dong-Yoon Choi, Byung Cheol Song. 529-535 [doi]
- Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild. Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, Benoit Huet. 536-543 [doi]
- Group-level emotion recognition using transfer learning from face identification. Alexandr G. Rassadin, Alexey S. Gruzdev, Andrey V. Savchenko. 544-548 [doi]
- Group emotion recognition with individual facial emotion CNNs and global image based CNNs. Lianzhi Tan, Kaipeng Zhang, Kai Wang, Xiaoxing Zeng, Xiaojiang Peng, Yu Qiao. 549-552 [doi]
- Learning supervised scoring ensemble for emotion recognition in the wild. Ping Hu, Dongqi Cai, Shandong Wang, Anbang Yao, Yurong Chen. 553-560 [doi]
- Group emotion recognition in the wild by combining deep neural networks for facial expression classification and scene-context analysis. Asad Abbas, Stephan K. Chalup. 561-568 [doi]
- Temporal multimodal fusion for video emotion classification in the wild. Valentin Vielzeuf, Stéphane Pateux, Frédéric Jurie. 569-576 [doi]
- Audio-visual emotion recognition using deep transfer learning and multiple temporal models. Xi Ouyang, Shigenori Kawaai, Ester Gue Hua Goh, Shengmei Shen, Wan Ding, Huaiping Ming, Dong-Yan Huang. 577-582 [doi]
- Multi-level feature fusion for group-level emotion recognition. B. Balaji, V. Ramana Murthy Oruganti. 583-586 [doi]
- A new deep-learning framework for group emotion recognition. Qinglan Wei, Yijia Zhao, Qihua Xu, Liandong Li, Jun He, Lejun Yu, Bo Sun. 587-592 [doi]
- Emotion recognition in the wild using deep neural networks and Bayesian classifiers. Luca Surace, Massimiliano Patacchiola, Elena Battini Sönmez, William Spataro, Angelo Cangelosi. 593-597 [doi]
- Emotion recognition with multimodal features and temporal models. Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang, Yong Qin. 598-602 [doi]
- Group-level emotion recognition using deep models on image scene, faces, and skeletons. Xin Guo, Luisa F. Polania, Kenneth E. Barner. 603-608 [doi]
- Towards designing speech technology based assistive interfaces for children's speech therapy. Revathy Nayar. 609-613 [doi]
- Social robots for motivation and engagement in therapy. Katie Winkle. 614-617 [doi]
- Immersive virtual eating and conditioned food responses. Nikita Mae B. Tuanquin. 618-622 [doi]
- Towards edible interfaces: designing interactions with food. Tom Gayler. 623-627 [doi]
- Towards a computational model for first impressions generation. Béatrice Biancardi. 628-632 [doi]
- A decentralised multimodal integration of social signals: a bio-inspired approach. Esma Mansouri-Benssassi. 633-637 [doi]
- Human-centered recognition of children's touchscreen gestures. Alex Shaw. 638-642 [doi]
- Cross-modality interaction between EEG signals and facial expression. Soheil Rayatdoost. 643-646 [doi]
- Hybrid models for opinion analysis in speech interactions. Valentin Barrière. 647-651 [doi]
- Evaluating engagement in digital narratives from facial data. Rui Huan. 652-655 [doi]
- Social signal extraction from egocentric photo-streams. Maedeh Aghaei. 656-659 [doi]
- Multimodal language grounding for improved human-robot collaboration: exploring spatial semantic representations in the shared space of attention. Dimosthenis Kontogiorgos. 660-664 [doi]
- ISIAA 2017: 1st international workshop on investigating social interactions with artificial agents (workshop summary). Thierry Chaminade, Fabrice Lefèvre, Noël Nguyen, Magalie Ochs. 665-666 [doi]
- WOCCI 2017: 6th international workshop on child computer interaction (workshop summary). Keelan Evanini, Maryam Najafian, Saeid Safavi, Kay Berkling. 667-669 [doi]
- MIE 2017: 1st international workshop on multimodal interaction for education (workshop summary). Gualtiero Volpe, Monica Gori, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Paolo Alborno, Erica Volta. 670-671 [doi]
- Playlab: telling stories with technology (workshop summary). Julie Williamson, Tom Flint, Chris Speed. 672-673 [doi]
- MHFI 2017: 2nd international workshop on multisensorial approaches to human-food interaction (workshop summary). Carlos Velasco, Anton Nijholt, Marianna Obrist, Katsunori Okajima, Rick Schifferstein, Charles Spence. 674-676 [doi]