- A Brief History of Intelligence. Hsiao-Wuen Hon. 1 [doi]
- Challenges of Multimodal Interaction in the Era of Human-Robot Coexistence. Zhengyou Zhang. 2 [doi]
- Connecting Humans with Humans: Multimodal, Multilingual, Multiparty Mediation. Alexander Waibel. 3-4 [doi]
- Socially-Aware User Interfaces: Can Genuine Sensitivity Be Learnt at all? Elisabeth André. 5 [doi]
- Multi-modal Active Learning From Human Data: A Deep Reinforcement Learning Approach. Ognjen Rudovic, Meiru Zhang, Björn W. Schuller, Rosalind W. Picard. 6-15 [doi]
- Comparing Pedestrian Navigation Methods in Virtual Reality and Real Life. Gian-Luca Savino, Niklas Emanuel, Steven Kowalzik, Felix Kroll, Marvin C. Lange, Matthis Laudan, Rieke Leder, Zhanhua Liang, Dayana Markhabayeva, Martin Schmeißer, Nicolai Schütz, Carolin Stellmacher, Zihe Xu, Kerstin Bub, Thorsten Kluss, Jaime Leonardo Maldonado Cañón, Ernst Kruijff, Johannes Schöning. 16-25 [doi]
- Video and Text-Based Affect Analysis of Children in Play Therapy. Metehan Doyran, Batikan Türkmen, Eda Aydin Oktay, Sibel Halfon, Albert Ali Salah. 26-34 [doi]
- Facial Expression Recognition via Relation-based Conditional Generative Adversarial Network. Byung Cheol Song, Min-Kyu Lee, Dong-Yoon Choi. 35-39 [doi]
- Continuous Emotion Recognition in Videos by Fusing Facial Expression, Head Pose and Eye Gaze. Suowei Wu, Zhengyin Du, Weixin Li, Di Huang, Yunhong Wang. 40-48 [doi]
- Effect of Feedback on Users' Immediate Emotions: Analysis of Facial Expressions during a Simulated Target Detection Task. Md Abdullah Al Fahim, Mohammad Maifi Hasan Khan, Theodore Jensen, Yusuf Albayram, Emil Coman, Ross Buck. 49-58 [doi]
- Multimodal Analysis and Estimation of Intimate Self-Disclosure. Mohammad Soleymani, Kalin Stefanov, Sin-Hwa Kang, Jan Ondras, Jonathan Gratch. 59-68 [doi]
- A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities. Deepali Aneja, Daniel J. McDuff, Shital Shah. 69-73 [doi]
- To React or not to React: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations. Chaitanya Ahuja, Shugao Ma, Louis-Philippe Morency, Yaser Sheikh. 74-84 [doi]
- Multitask Prediction of Exchange-level Annotations for Multimodal Dialogue Systems. Yuki Hirano, Shogo Okada, Haruto Nishimoto, Kazunori Komatani. 85-94 [doi]
- Multimodal Learning for Identifying Opportunities for Empathetic Responses. Leili Tavabi, Kalin Stefanov, Setareh Nasihati Gilani, David R. Traum, Mohammad Soleymani. 95-104 [doi]
- Dynamic Adaptive Gesturing Predicts Domain Expertise in Mathematics. Abishek Sriramulu, Jionghao Lin, Sharon L. Oviatt. 105-113 [doi]
- VisualTouch: Enhancing Affective Touch Communication with Multi-modality Stimulation. Zhuoming Zhang, Robin Héron, Eric Lecolinet, Françoise Détienne, Stéphane Safin. 114-123 [doi]
- TouchPhoto: Enabling Independent Picture Taking and Understanding for Visually-Impaired Users. Jongho Lim, Yongjae Yoo, Hanseul Cho, Seungmoon Choi. 124-134 [doi]
- Creativity Support and Multimodal Pen-based Interaction. Ilhan Aslan, Katharina Weitz, Ruben Schlagowski, Simon Flutura, Susana Garcia Valesco, Marius Pfeil, Elisabeth André. 135-144 [doi]
- Motion Eavesdropper: Smartwatch-based Handwriting Recognition Using Deep Learning. Hao Jiang. 145-153 [doi]
- Predicting Cognitive Load in an Emergency Simulation Based on Behavioral and Physiological Measures. Tobias Appel, Natalia Sevcenko, Franz Wortha, Katerina Tsarava, Korbinian Moeller, Manuel Ninaus, Enkelejda Kasneci, Peter Gerjets. 154-163 [doi]
- Driving Anomaly Detection with Conditional Generative Adversarial Network using Physiological and CAN-Bus Data. Yuning Qiu, Teruhisa Misu, Carlos Busso. 164-173 [doi]
- Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning. Mimansa Jaiswal, Zakaria Aldeneh, Emily Mower Provost. 174-184 [doi]
- Multimodal Classification of EEG During Physical Activity. Yi Ding, Brandon Huynh, Aiwen Xu, Tom Bullock, Hubert Cecotti, Matthew Turk, Barry Giesbrecht, Tobias Höllerer. 185-194 [doi]
- "Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR. Erik Wolf, Sara Klüber, Chris Zimmerer, Jean-Luc Lugrin, Marc Erich Latoschik. 195-204 [doi]
- VCMNet: Weakly Supervised Learning for Automatic Infant Vocalisation Maturity Analysis. Najla Al Futaisi, Zixing Zhang, Alejandrina Cristià, Anne S. Warlaumont, Björn W. Schuller. 205-209 [doi]
- Evidence for Communicative Compensation in Debt Advice with Reduced Multimodality. Nicole Andelic, Aidan Feeney, Gary McKeown. 210-219 [doi]
- Speaker-Independent Speech-Driven Visual Speech Synthesis using Domain-Adapted Acoustic Models. Ahmed Hussen Abdelaziz, Barry-John Theobald, Justin Binder, Gabriele Fanelli, Paul Dixon, Nicholas Apostoloff, Thibaut Weise, Sachin Kajareker. 220-225 [doi]
- Smooth Turn-taking by a Robot Using an Online Continuous Model to Generate Turn-taking Cues. Divesh Lala, Koji Inoue, Tatsuya Kawahara. 226-234 [doi]
- Towards Automatic Detection of Misinformation in Online Medical Videos. Rui Hou, Verónica Pérez-Rosas, Stacy Loeb, Rada Mihalcea. 235-243 [doi]
- Modeling Team-level Multimodal Dynamics during Multiparty Collaboration. Lucca Eloy, Angela E. B. Stewart, Mary Jean Amon, Caroline Reinhardt, Amanda Michaels, Chen Sun, Valerie Shute, Nicholas D. Duran, Sidney D'Mello. 244-258 [doi]
- Smile and Laugh Dynamics in Naturalistic Dyadic Interactions: Intensity Levels, Sequences and Roles. Kevin El Haddad, Sandeep Nallan Chakravarthula, James Kennedy. 259-263 [doi]
- Task-independent Multimodal Prediction of Group Performance Based on Product Dimensions. Go Miura, Shogo Okada. 264-273 [doi]
- Emergent Leadership Detection Across Datasets. Philipp Matthias Müller, Andreas Bulling. 274-278 [doi]
- A Multimodal Robot-Driven Meeting Facilitation System for Group Decision-Making Sessions. Ameneh Shamekhi, Timothy W. Bickmore. 279-290 [doi]
- What's behind a choice? Understanding Modality Choices under Changing Environmental Conditions. Stephanie Arevalo, Stanislaw Miller, Martha Janka, Jens Gerken. 291-301 [doi]
- Modeling Emotion Influence Using Attention-based Graph Convolutional Recurrent Network. Yulan Chen, Jia Jia, Zhiyong Wu. 302-309 [doi]
- Evaluation of Ultrasound Haptics as a Supplementary Feedback Cue for Grasping in Virtual Environments. Maite Frutos Pascual, Jake Michael Harrison, Chris Creed, Ian Williams. 310-318 [doi]
- Understanding the Attention Demand of Touch and Tangible Interaction on a Composite Task. Yosra Rekik, Walid Merrad, Christophe Kolski. 319-328 [doi]
- TouchGazePath: Multimodal Interaction with Touch and Gaze Path for Secure Yet Efficient PIN Entry. Chandan Kumar, Daniyal Akbari, Raphael Menges, Scott MacKenzie, Steffen Staab. 329-338 [doi]
- WiBend: Wi-Fi for Sensing Passive Deformable Surfaces. Mira Sarkis, Céline Coutrix, Laurence Nigay, Andrzej Duda. 339-348 [doi]
- ElderReact: A Multimodal Dataset for Recognizing Emotional Response in Aging Adults. Kaixin Ma, Xinyu Wang, Xinru Yang, Mingtong Zhang, Jeffrey M. Girard, Louis-Philippe Morency. 349-357 [doi]
- Unsupervised Deep Fusion Cross-modal Hashing. Jiaming Huang, Chen Min, Liping Jing. 358-366 [doi]
- DIF: Dataset of Perceived Intoxicated Faces for Drunk Person Identification. Vineet Mehta, Sai Srinadhu Katta, Devendra Pratap Yadav, Abhinav Dhall. 367-374 [doi]
- Generative Model of Agent's Behaviors in Human-Agent Interaction. Soumia Dermouche, Catherine Pelachaud. 375-384 [doi]
- Improved Visual Focus of Attention Estimation and Prosodic Features for Analyzing Group Interactions. Lingyu Zhang, Mallory Morgan, Indrani Bhattacharya, Michael Foley, Jonas Braasch, Christoph Riedl, Brooke Foucault Welles, Richard J. Radke. 385-394 [doi]
- DeepReviewer: Collaborative Grammar and Innovation Neural Network for Automatic Paper Review. Youfang Leng, Li Yu, Jie Xiong. 395-403 [doi]
- CorrFeat: Correlation-based Feature Extraction Algorithm using Skin Conductance and Pupil Diameter for Emotion Recognition. Tianyi Zhang, Abdallah El-Ali, Chen Wang, Xintong Zhu, Pablo César. 404-408 [doi]
- Multimodal Behavioral Markers Exploring Suicidal Intent in Social Media Videos. Ankit Parag Shah, Vasu Sharma, Vaibhav Vaibhav, Mahmoud Alismail, Louis-Philippe Morency. 409-413 [doi]
- Estimating Uncertainty in Task-Oriented Dialogue. Dimosthenis Kontogiorgos, Andre Pereira, Joakim Gustafson. 414-418 [doi]
- Determining Iconic Gesture Forms based on Entity Image Representation. Fumio Nihei, Yukiko I. Nakano, Ryuichiro Higashinaka, Ryo Ishii. 419-425 [doi]
- Interaction Process Label Recognition in Group Discussion. Sixia Li, Shogo Okada, Jianwu Dang. 426-434 [doi]
- Exploring Transfer Learning between Scripted and Spontaneous Speech for Emotion Recognition. Qingqing Li, Theodora Chaspari. 435-439 [doi]
- Engagement Modeling in Dyadic Interaction. Soumia Dermouche, Catherine Pelachaud. 440-445 [doi]
- Detecting Temporal Phases of Anxiety in The Wild: Toward Continuously Adaptive Self-Regulation Technologies. Hashini Senaratne. 446-452 [doi]
- Multimodal Machine Learning for Interactive Mental Health Therapy. Leili Tavabi. 453-456 [doi]
- Tailoring Motion Recognition Systems to Children's Motions. Aishat Aloba. 457-462 [doi]
- Multi-modal Fusion Methods for Robust Emotion Recognition using Body-worn Physiological Sensors in Mobile Environments. Tianyi Zhang. 463-467 [doi]
- Communicative Signals and Social Contextual Factors in Multimodal Affect Recognition. Michel-Pierre Jansen. 468-472 [doi]
- Co-located Collaboration Analytics. Sambit Praharaj. 473-476 [doi]
- Coalescing Narrative and Dialogue for Grounded Pose Forecasting. Chaitanya Ahuja. 477-481 [doi]
- Attention-driven Interaction Systems for Augmented Reality. Lisa-Marie Vortmann. 482-486 [doi]
- Multimodal Driver Interaction with Gesture, Gaze and Speech. Abdul Rafey Aftab. 487-492 [doi]
- The Dyslexperience: Use of Projection Mapping to Simulate Dyslexia. Zi Fong Yong, Ai Ling Ng, Yuta Nakayama. 493-495 [doi]
- A Real-Time Scene Recognition System Based on RGB-D Video Streams. Yuyun Hua, Sixian Zhang, Xinhang Song, Jia'ning Li, Shuqiang Jiang. 496-498 [doi]
- Hang Out with the Language Assistant. Jin-hwan Oh, Sudhakar Sah, Jihoon Kim, Yoori Kim, Jeonghwa Lee, Wooseung Lee, Myeongsoo Shin, Jaeyon Hwang, Seongwon Kim. 499-500 [doi]
- A Searching and Automatic Video Tagging Tool for Events of Interest during Volleyball Training Sessions. Fahim A. Salim, Fasih Haider, Sena Busra Yengec Tasdemir, Vahid Naghashi, Izem Tengiz, Kubra Cengiz, Dees Postma, Robby van Delden, Dennis Reidsma, Saturnino Luz, Bert-Jan van Beijnum. 501-503 [doi]
- Seeing Is Believing but Feeling Is the Truth: Visualising Mid-Air Haptics in Oil Baths and Lightboxes. Abdenaceur Abdouni, Rory Clark, Orestis Georgiou. 504-505 [doi]
- Chemistry Pods: A Multimodal Real Time and Retrospective Tool for the Classroom. Khalil J. Anderson, Theodore Dubiel, Kenji Tanaka, Marcelo Worsley, Cody Poultney, Steve Brenneman. 506-507 [doi]
- A Proxemics Measurement Tool Integrated into VAIF and Unity. Aaron E. Rodriguez, Adriana I. Camacho, Laura J. Hinojos, Mahdokht Afravi, David Novick. 508-509 [doi]
- Transfer Learning Methods for Spoken Language Understanding. Xu Wang, Chengda Tang, Xiaotian Zhao, Xuancai Li, Zhuolin Jin, Dequan Zheng, Tiejun Zhao. 510-515 [doi]
- Streamlined Decoder for Chinese Spoken Language Understanding. Heyan Huang, Xianling Mao, Puhai Yang. 516-520 [doi]
- CATSLU: The 1st Chinese Audio-Textual Spoken Language Understanding Challenge. Su Zhu, Zijian Zhao, Tiejun Zhao, Chengqing Zong, Kai Yu. 521-525 [doi]
- Multi-Classification Model for Spoken Language Understanding. Chaohong Tan, Zhenhua Ling. 526-530 [doi]
- Robust Spoken Language Understanding with Acoustic and Domain Knowledge. Hao Li, Chen Liu, Su Zhu, Kai Yu. 531-535 [doi]
- Spotting Visual Keywords from Temporal Sliding Windows. Yue Yao, Tianyu Wang, Heming Du, Liang Zheng, Tom Gedeon. 536-539 [doi]
- Deep Audio-visual System for Closed-set Word-level Speech Recognition. Yougen Yuan, Wei Tang, Minhao Fan, Yue Cao, Peng Zhang, Lei Xie. 540-545 [doi]
- EmotiW 2019: Automatic Emotion, Engagement and Cohesion Prediction Tasks. Abhinav Dhall. 546-550 [doi]
- Bootstrap Model Ensemble and Rank Loss for Engagement Intensity Regression. Kai Wang, Jianfei Yang, Da Guo, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao. 551-556 [doi]
- Exploring Regularizations with Face, Body and Image Cues for Group Cohesion Prediction. Da Guo, Kai Wang, Jianfei Yang, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao. 557-561 [doi]
- Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition. Hengshun Zhou, Debin Meng, Yuanyuan Zhang, Xiaojiang Peng, Jun Du, Kai Wang, Yu Qiao. 562-566 [doi]
- Engagement Intensity Prediction with Facial Behavior Features. Van Thong Huynh, Soo-Hyung Kim, Gueesang Lee, Hyung Jeong Yang. 567-571 [doi]
- Group-level Cohesion Prediction using Deep Learning Models with A Multi-stream Hybrid Network. Tien Xuan Dang, Soo-Hyung Kim, Hyung Jeong Yang, Gueesang Lee, Thanh Hung Vo. 572-576 [doi]
- Automatic Group Cohesiveness Detection With Multi-modal Features. Bin Zhu, Xin Guo, Kenneth E. Barner, Charles Boncelet. 577-581 [doi]
- Multi-feature and Multi-instance Learning with Anti-overfitting Strategy for Engagement Intensity Prediction. Jianming Wu, Zhiguang Zhou, Yanan Wang, Yi Li, Xin Xu, Yusuke Uchida. 582-588 [doi]
- Bi-modality Fusion for Emotion Recognition in the Wild. Sunan Li, Wenming Zheng, Yuan Zong, Cheng Lu, Chuangao Tang, Xingxun Jiang, Jiateng Liu, Wanchuang Xia. 589-594 [doi]
- Multi-Attention Fusion Network for Video-based Emotion Recognition. Yanan Wang, Jianming Wu, Keiichiro Hoashi. 595-601 [doi]