- Understanding people by tracking their word use (keynote). James W. Pennebaker. 1 [doi]
- Learning to generate images and their descriptions (keynote). Richard S. Zemel. 2 [doi]
- Embodied media: expanding human capacity via virtual reality and telexistence (keynote). Susumu Tachi. 3 [doi]
- Help me if you can: towards multiadaptive interaction platforms (ICMI awardee talk). Wolfgang Wahlster. 4 [doi]
- Trust me: multimodal signals of trustworthiness. Gale M. Lucas, Giota Stratou, Shari Lieblich, Jonathan Gratch. 5-12 [doi]
- Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction. Iolanda Leite, André Pereira, Allison Funkhouser, Boyang Li, Jill Fain Lehman. 13-20 [doi]
- Towards building an attentive artificial listener: on the perception of attentiveness in audio-visual feedback tokens. Catharine Oertel, José Lopes, Yu Yu, Kenneth Alberto Funes Mora, Joakim Gustafson, Alan W. Black, Jean-Marc Odobez. 21-28 [doi]
- Sequence-based multimodal behavior modeling for social agents. Soumia Dermouche, Catherine Pelachaud. 29-36 [doi]
- Adaptive review for mobile MOOC learning via implicit physiological signal sensing. Phuong Pham, Jingtao Wang. 37-44 [doi]
- Visuotactile integration for depth perception in augmented reality. Nina Rosa, Wolfgang Hürst, Peter J. Werkhoven, Remco C. Veltkamp. 45-52 [doi]
- Exploring multimodal biosignal features for stress detection during indoor mobility. Kyriaki Kalimeri, Charalampos Saitis. 53-60 [doi]
- An IDE for multimodal controls in smart buildings. Sebastian Peters, Jan Ole Johanssen, Bernd Bruegge. 61-65 [doi]
- Personalized unknown word detection in non-native language reading using eye gaze. Rui Hiraoka, Hiroki Tanaka, Sakriani Sakti, Graham Neubig, Satoshi Nakamura. 66-70 [doi]
- Discovering facial expressions for states of amused, persuaded, informed, sentimental and inspired. Daniel McDuff. 71-75 [doi]
- Do speech features for detecting cognitive load depend on specific languages? Rui Chen, Tiantian Xie, Yingtao Xie, Tao Lin, Ningjiu Tang. 76-83 [doi]
- Training on the job: behavioral analysis of job interviews in hospitality. Skanda Muralidhar, Laurent Son Nguyen, Denise Frauendorfer, Jean-Marc Odobez, Marianne Schmid Mast, Daniel Gatica-Perez. 84-91 [doi]
- Emotion spotting: discovering regions of evidence in audio-visual emotion expressions. Yelin Kim, Emily Mower Provost. 92-99 [doi]
- Semi-supervised model personalization for improved detection of learner's emotional engagement. Nese Alyüz, Eda Okur, Ece Oktay, Utku Genc, Sinem Aslan, Sinem Emine Mete, Bert Arnrich, Asli Arslan Esme. 100-107 [doi]
- Driving maneuver prediction using car sensor and driver physiological signals. Nanxiang Li, Teruhisa Misu, Ashish Tawari, Alexandre Miranda Anon, Chihiro Suga, Kikuo Fujimura. 108-112 [doi]
- On leveraging crowdsourced data for automatic perceived stress detection. Jonathan Aigrain, Arnaud Dapogny, Kevin Bailly, Séverine Dubuisson, Marcin Detyniecki, Mohamed Chetouani. 113-120 [doi]
- Investigating the impact of automated transcripts on non-native speakers' listening comprehension. Xun Cao, Naomi Yamashita, Toru Ishida. 121-128 [doi]
- Speaker impact on audience comprehension for academic presentations. Keith Curtis, Gareth J. F. Jones, Nick Campbell. 129-136 [doi]
- EmoReact: a multimodal approach and dataset for recognizing emotional responses in children. Behnaz Nojavanasghari, Tadas Baltrusaitis, Charles E. Hughes, Louis-Philippe Morency. 137-144 [doi]
- Bimanual input for multiscale navigation with pressure and touch gestures. Sébastien Pelurson, Laurence Nigay. 145-152 [doi]
- Intervention-free selection using EEG and eye tracking. Felix Putze, Johannes Popp, Jutta Hild, Jürgen Beyerer, Tanja Schultz. 153-160 [doi]
- Automated scoring of interview videos using Doc2Vec multimodal feature extraction paradigm. Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle Martin-Raugh, Harrison Kell, Chong Min Lee, Su-Youn Yoon. 161-168 [doi]
- Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets. Shogo Okada, Yoshihiko Ohtake, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Yutaka Takase, Katsumi Nitta. 169-176 [doi]
- Multi-sensor modeling of teacher instructional segments in live classrooms. Patrick J. Donnelly, Nathaniel Blanchard, Borhan Samei, Andrew McGregor Olney, Xiaoyi Sun, Brooke Ward, Sean Kelly, Martin Nystrand, Sidney K. D'Mello. 177-184 [doi]
- Meeting extracts for discussion summarization based on multimodal nonverbal information. Fumio Nihei, Yukiko I. Nakano, Yutaka Takase. 185-192 [doi]
- Getting to know you: a multimodal investigation of team behavior and resilience to stress. Catherine Neubauer, Joshua Woolley, Peter Khooshabeh, Stefan Scherer. 193-200 [doi]
- Measuring the impact of multimodal behavioural feedback loops on social interactions. Ionut Damian, Tobias Baur, Elisabeth André. 201-208 [doi]
- Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings. Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka. 209-216 [doi]
- Automatic recognition of self-reported and perceived emotion: does joint modeling help? Biqiao Zhang, Georg Essl, Emily Mower Provost. 217-224 [doi]
- Personality classification and behaviour interpretation: an approach based on feature categories. Sheng Fang, Catherine Achard, Séverine Dubuisson. 225-232 [doi]
- Multiscale kernel locally penalised discriminant analysis exemplified by emotion recognition in speech. Xinzhou Xu, Jun Deng, Maryna Gavryukova, Zixing Zhang, Li Zhao, Björn W. Schuller. 233-237 [doi]
- Estimating self-assessed personality from body movements and proximity in crowded mingling scenarios. Laura Cabrera Quiros, Ekin Gedik, Hayley Hung. 238-242 [doi]
- Deep learning driven hypergraph representation for image-based emotion recognition. Yuchi Huang, Hanqing Lu. 243-247 [doi]
- Towards a listening agent: a system generating audiovisual laughs and smiles to show interest. Kevin El Haddad, Hüseyin Çakmak, Emer Gilmartin, Stéphane Dupont, Thierry Dutoit. 248-255 [doi]
- Sound emblems for affective multimodal output of a robotic tutor: a perception study. Helen F. Hastie, Pasquale Dente, Dennis Küster, Arvid Kappas. 256-260 [doi]
- Automatic detection of very early stage of dementia through multimodal interaction with computer avatars. Hiroki Tanaka, Hiroyoshi Adachi, Norimichi Ukita, Takashi Kudo, Satoshi Nakamura. 261-265 [doi]
- MobileSSI: asynchronous fusion for social signal interpretation in the wild. Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André. 266-273 [doi]
- Language proficiency assessment of English L2 speakers based on joint analysis of prosody and native language. Yue Zhang, Felix Weninger, Anton Batliner, Florian Hönig, Björn W. Schuller. 274-278 [doi]
- Training deep networks for facial expression recognition with crowd-sourced label distribution. Emad Barsoum, Cha Zhang, Cristian Canton-Ferrer, Zhengyou Zhang. 279-283 [doi]
- Deep multimodal fusion for persuasiveness prediction. Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltrusaitis, Louis-Philippe Morency. 284-288 [doi]
- Comparison of three implementations of HeadTurn: a multimodal interaction technique with gaze and head turns. Oleg Spakov, Poika Isokoski, Jari Kangas, Jussi Rantala, Deepak Akkil, Roope Raisamo. 289-296 [doi]
- Effects of multimodal cues on children's perception of uncanniness in a social robot. Maike Paetzel, Christopher E. Peters, Ingela Nyström, Ginevra Castellano. 297-301 [doi]
- Multimodal feedback for finger-based interaction in mobile augmented reality. Wolfgang Hürst, Kevin Vriens. 302-306 [doi]
- Smooth eye movement interaction using EOG glasses. Murtaza Dhuliawala, Juyoung Lee, Junichi Shimizu, Andreas Bulling, Kai Kunze, Thad Starner, Woontack Woo. 307-311 [doi]
- Active speaker detection with audio-visual co-training. Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, Hugo Van Hamme. 312-316 [doi]
- Detecting emergent leader in a meeting environment using nonverbal visual features only. Cigdem Beyan, Nicoló Carissimi, Francesca Capozzi, Sebastiano Vascon, Matteo Bustreo, Antonio Pierro, Cristina Becchio, Vittorio Murino. 317-324 [doi]
- Stressful first impressions in job interviews. Ailbhe N. Finnerty, Skanda Muralidhar, Laurent Son Nguyen, Fabio Pianesi, Daniel Gatica-Perez. 325-332 [doi]
- Analyzing the articulation features of children's touchscreen gestures. Alex Shaw, Lisa Anthony. 333-340 [doi]
- Reach out and touch me: effects of four distinct haptic technologies on affective touch in virtual reality. Imtiaj Ahmed, Ville Harjunen, Giulio Jacucci, Eve E. Hoggan, Niklas Ravaja, Michiel M. A. Spapé. 341-348 [doi]
- Using touchscreen interaction data to predict cognitive workload. Philipp Mock, Peter Gerjets, Maike Tibus, Ulrich Trautwein, Korbinian Möller, Wolfgang Rosenstiel. 349-356 [doi]
- Exploration of virtual environments on tablet: comparison between tactile and tangible interaction techniques. Adrien Arnaud, Jean-Baptiste Corrégé, Céline Clavel, Michèle Gouiffès, Mehdi Ammi. 357-361 [doi]
- Understanding the impact of personal feedback on face-to-face interactions in the workplace. Afra J. Mashhadi, Akhil Mathur, Marc Van den Broeck, Geert Vanderhulst, Fahim Kawsar. 362-369 [doi]
- Asynchronous video interviews vs. face-to-face interviews for communication skill measurement: a systematic study. Sowmya Rasipuram, Pooja Rao S. B., Dinesh Babu Jayagopi. 370-377 [doi]
- Context and cognitive state triggered interventions for mobile MOOC learning. Xiang Xiao, Jingtao Wang. 378-385 [doi]
- Native vs. non-native language fluency implications on multimodal interaction for interpersonal skills training. Mathieu Chollet, Helmut Prendinger, Stefan Scherer. 386-393 [doi]
- Social signal processing for dummies. Ionut Damian, Michael Dietz, Frank Gaibler, Elisabeth André. 394-395 [doi]
- Metering "black holes": networking stand-alone applications for distributed multimodal synchronization. Michael Cohen, Yousuke Nagayama, Bektur Ryskeldiev. 396-397 [doi]
- Towards a multimodal adaptive lighting system for visually impaired children. Euan Freeman, Graham A. Wilson, Stephen A. Brewster. 398-399 [doi]
- Multimodal affective feedback: combining thermal, vibrotactile, audio and visual signals. Graham A. Wilson, Euan Freeman, Stephen A. Brewster. 400-401 [doi]
- Niki and Julie: a robot and virtual human for studying multimodal social interaction. Ron Artstein, David R. Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, Mikio Nakano. 402-403 [doi]
- A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission. Helen F. Hastie, Xingkun Liu, Pedro Patrón. 404-405 [doi]
- Laughter detection in the wild: demonstrating a tool for mobile social signal processing and visualization. Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André. 406-407 [doi]
- Multimodal system for public speaking with real time feedback: a positive computing perspective. Fiona Dermody, Alistair Sutherland. 408-409 [doi]
- Multimodal biofeedback system integrating low-cost easy sensing devices. Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, Mayu Yokoya. 410-411 [doi]
- A telepresence system using a flexible textile display. Kana Kushida, Hideyuki Nakanishi. 412-413 [doi]
- Large-scale multimodal movie dialogue corpus. Ryu Yasuhara, Masashi Inoue, Ikuya Suga, Tetsuo Kosaka. 414-415 [doi]
- Immersive virtual reality with multimodal interaction and streaming technology. Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, Min-Chun Hu. 416 [doi]
- Multimodal interaction with the autonomous Android ERICA. Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, Tatsuya Kawahara. 417-418 [doi]
- Ask Alice: an artificial retrieval of information agent. Michel F. Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew P. Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn W. Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, Jelte van Waterschoot. 419-420 [doi]
- Design of multimodal instructional tutoring agents using augmented reality and smart learning objects. Anmol Srivastava, Pradeep Yammiyavar. 421-422 [doi]
- AttentiveVideo: quantifying emotional responses to mobile video advertisements. Phuong Pham, Jingtao Wang. 423-424 [doi]
- Young Merlin: an embodied conversational agent in virtual reality. Iván Gris, Diego A. Rivera, Alex Rayon, Adriana Camacho, David G. Novick. 425-426 [doi]
- EmotiW 2016: video and group-level emotion recognition challenges. Abhinav Dhall, Roland Göcke, Jyoti Joshi, Jesse Hoey, Tom Gedeon. 427-432 [doi]
- Emotion recognition in the wild from videos using images. Sarah Adel Bargal, Emad Barsoum, Cristian Canton-Ferrer, Cha Zhang. 433-436 [doi]
- A deep look into group happiness prediction from images. Aleksandra Cerekovic. 437-444 [doi]
- Video-based emotion recognition using CNN-RNN and C3D hybrid networks. Yin Fan, Xiangju Lu, Dian Li, Yuanliu Liu. 445-450 [doi]
- LSTM for dynamic emotion and group emotion recognition in the wild. Bo Sun, Qinglan Wei, Liandong Li, Qihua Xu, Jun He, Lejun Yu. 451-457 [doi]
- Multi-clue fusion for emotion recognition in the wild. Jingwei Yan, Wenming Zheng, Zhen Cui, Chuangao Tang, Tong Zhang, Yuan Zong, Ning Sun. 458-463 [doi]
- Multi-view common space learning for emotion recognition in the wild. Jianlong Wu, Zhouchen Lin, Hongbin Zha. 464-471 [doi]
- HoloNet: towards robust emotion recognition in the wild. Anbang Yao, Dongqi Cai, Ping Hu, Shandong Wang, Liang Sha, Yurong Chen. 472-478 [doi]
- Group happiness assessment using geometric features and dataset balancing. Vassilios Vonikakis, Yasin Yazici, Viet-Dung Nguyen, Stefan Winkler. 479-486 [doi]
- Happiness level prediction with sequential inputs via multiple regressions. Jianshu Li, Sujoy Roy, Jiashi Feng, Terence Sim. 487-493 [doi]
- Video emotion recognition in the wild based on fusion of multimodal features. Shizhe Chen, Xinrui Li, Qin Jin, Shilei Zhang, Yong Qin. 494-500 [doi]
- Wild wild emotion: a multimodal ensemble approach. John Gideon, Biqiao Zhang, Zakaria Aldeneh, Yelin Kim, Soheil Khorram, Duc Le, Emily Mower Provost. 501-505 [doi]
- Audio and face video emotion recognition in the wild using deep neural networks and small datasets. Wan Ding, Mingyu Xu, Dong-Yan Huang, Weisi Lin, Minghui Dong, Xinguo Yu, Haizhou Li. 506-513 [doi]
- Automatic emotion recognition in the wild using an ensemble of static and dynamic representations. Mostafa Mehdipour-Ghazi, Hazim Kemal Ekenel. 514-521 [doi]
- The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans. Maike Paetzel. 522-526 [doi]
- Viewing support system for multi-view videos. Xueting Wang. 527-531 [doi]
- Engaging children with autism in a shape perception task using a haptic force feedback interface. Alix Pérusseau-Lambert. 532-535 [doi]
- Modeling user's decision process through gaze behavior. Kei Shimonishi. 536-540 [doi]
- Multimodal positive computing system for public speaking with real-time feedback. Fiona Dermody. 541-545 [doi]
- Prediction/Assessment of communication skill using multimodal cues in social interactions. Sowmya Rasipuram. 546-549 [doi]
- Player/Avatar body relations in multimodal augmented reality games. Nina Rosa. 550-553 [doi]
- Computational model for interpersonal attitude expression. Soumia Dermouche. 554-558 [doi]
- Assessing symptoms of excessive SNS usage based on user behavior and emotion. Ploypailin Intapong, Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura. 559-562 [doi]
- Kawaii feeling estimation by product attributes and biological signals. Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura. 563-566 [doi]
- Multimodal sensing of affect intensity. Shalini Bhatia. 567-571 [doi]
- Enriching student learning experience using augmented reality and smart learning objects. Anmol Srivastava. 572-576 [doi]
- Automated recognition of facial expressions authenticity. Krystian Radlak, Bogdan Smolka. 577-581 [doi]
- Improving the generalizability of emotion recognition systems: towards emotion recognition in the wild. Biqiao Zhang. 582-586 [doi]
- Emotion recognition in the wild challenge 2016. Abhinav Dhall, Roland Goecke, Jyoti Joshi, Tom Gedeon. 587-588 [doi]
- 1st international workshop on embodied interaction with smart environments (workshop summary). Patrick Holthaus, Thomas Hermann, Sebastian Wrede, Sven Wachsmuth, Britta Wrede. 589-590 [doi]
- ASSP4MI2016: 2nd international workshop on advancements in social signal processing for multimodal interaction (workshop summary). Khiet P. Truong, Dirk Heylen, Toyoaki Nishida, Mohamed Chetouani. 591-592 [doi]
- ERM4CT 2016: 2nd international workshop on emotion representations and modelling for companion systems (workshop summary). Kim Hartmann, Ingo Siegert, Albert Ali Salah, Khiet P. Truong. 593-595 [doi]
- International workshop on multimodal virtual and augmented reality (workshop summary). Wolfgang Hürst, Daisuke Iwai, Prabhakaran Balakrishnan. 596-597 [doi]
- International workshop on social learning and multimodal interaction for designing artificial agents (workshop summary). Mohamed Chetouani, Salvatore Maria Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, Gentiane Venture. 598-600 [doi]
- 1st international workshop on multi-sensorial approaches to human-food interaction (workshop summary). Anton Nijholt, Carlos Velasco, Kasun Karunanayaka, Gijs Huisman. 601-603 [doi]
- International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary). Ronald Böck, Francesca Bonin, Nick Campbell, Ronald Poppe. 604-605 [doi]