- From Hands to Brains: How Does Human Body Talk, Think and Interact in Face-to-Face Language Use? Asli Özyürek. 1-2 [doi]
- Musical Multimodal Interaction: From Bodies to Ecologies. Atau Tanaka. 3 [doi]
- Human-centered Multimodal Machine Intelligence. Shrikanth S. Narayanan. 4-5 [doi]
- A Multi-modal System to Assess Cognition in Children from their Physical Movements. Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Ashish Jaiswal, Alexis Lueckenhoff, Maria Kyrarini, Fillia Makedon. 6-14 [doi]
- A Neural Architecture for Detecting User Confusion in Eye-tracking Data. Shane D. Sims, Cristina Conati. 15-23 [doi]
- Analysis of Face-Touching Behavior in Large Scale Social Interaction Dataset. Cigdem Beyan, Matteo Bustreo, Muhammad Shahid, Gian Luca Bailo, Nicolò Carissimi, Alessio Del Bue. 24-32 [doi]
- Attention Sensing through Multimodal User Modeling in an Augmented Reality Guessing Game. Felix Putze, Dennis Küster, Timo Urban, Alexander Zastrow, Marvin Kampen. 33-40 [doi]
- BreathEasy: Assessing Respiratory Diseases Using Mobile Multimodal Sensors. Md. Mahbubur Rahman, Mohsin Y. Ahmed, Tousif Ahmed, Bashima Islam, Viswam Nathan, Korosh Vatanparvar, Ebrahim Nemati, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao. 41-49 [doi]
- Bring the Environment to Life: A Sonification Module for People with Visual Impairments to Improve Situation Awareness. Angela Constantinescu, Karin Müller, Monica Haurilet, Vanessa Petrausch, Rainer Stiefelhagen. 50-59 [doi]
- Combining Auditory and Mid-Air Haptic Feedback for a Light Switch Button. Çisem Özkul, David Geerts, Isa Rutten. 60-69 [doi]
- Depression Severity Assessment for Adolescents at High Risk of Mental Disorders. Michal Muszynski, Jamie Zelazny, Jeffrey M. Girard, Louis-Philippe Morency. 70-78 [doi]
- Detecting Depression in Less Than 10 Seconds: Impact of Speaking Time on Depression Detection Sensitivity. Nujud Aloshban, Anna Esposito, Alessandro Vinciarelli. 79-87 [doi]
- Did the Children Behave?: Investigating the Relationship Between Attachment Condition and Child Computer Interaction. Dong-Bach Vo, Stephen A. Brewster, Alessandro Vinciarelli. 88-96 [doi]
- Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset. Huili Chen, Yue Zhang, Felix Weninger, Rosalind W. Picard, Cynthia Breazeal, Hae Won Park. 97-106 [doi]
- Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics. Andrew Emerson, Nathan L. Henderson, Jonathan P. Rowe, Wookhee Min, Seung Lee, James Minogue, James C. Lester. 107-116 [doi]
- Effects of Visual Locomotion and Tactile Stimuli Duration on the Emotional Dimensions of the Cutaneous Rabbit Illusion. Mounia Ziat, Katherine Chin, Roope Raisamo. 117-124 [doi]
- Eliciting Emotion with Vibrotactile Stimuli Evocative of Real-World Sensations. Shaun Alexander Macdonald, Stephen A. Brewster, Frank E. Pollick. 125-133 [doi]
- Enhancing Affect Detection in Game-Based Learning Environments with Multimodal Conditional Generative Modeling. Nathan L. Henderson, Wookhee Min, Jonathan P. Rowe, James C. Lester. 134-143 [doi]
- Estimating the Intensity of Facial Expressions Accompanying Feedback Responses in Multiparty Video-Mediated Communication. Ryosuke Ueno, Yukiko I. Nakano, Jie Zeng, Fumio Nihei. 144-152 [doi]
- Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions. Bernd Dudzik, Joost Broekens, Mark A. Neerincx, Hayley Hung. 153-162 [doi]
- Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations. Oswald Barral, Sébastien Lallé, Grigorii Guz, Alireza Iranpour, Cristina Conati. 163-173 [doi]
- Facial Electromyography-based Adaptive Virtual Reality Gaming for Cognitive Training. Lorcan Reidy, Dennis Chan, Charles Nduka, Hatice Gunes. 174-183 [doi]
- Facilitating Flexible Force Feedback Design with Feelix. Anke van Oosterhout, Miguel Bruns, Eve E. Hoggan. 184-193 [doi]
- FeetBack: Augmenting Robotic Telepresence with Haptic Feedback on the Feet. Brennan Jones, Jens Maiero, Alireza Mogharrab, Ivan A. Aguilar, Ashu Adhikari, Bernhard E. Riecke, Ernst Kruijff, Carman Neustaedter, Robert W. Lindeman. 194-203 [doi]
- Fifty Shades of Green: Towards a Robust Measure of Inter-annotator Agreement for Continuous Signals. Brandon M. Booth, Shrikanth S. Narayanan. 204-212 [doi]
- FilterJoint: Toward an Understanding of Whole-Body Gesture Articulation. Aishat Aloba, Julia Woodward, Lisa Anthony. 213-221 [doi]
- Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality. Chris Zimmerer, Erik Wolf, Sara Wolf, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik. 222-231 [doi]
- Force9: Force-assisted Miniature Keyboard on Smart Wearables. Lik Hang Lee, Ngo Yan Yeung, Tristan Braud, Tong Li, Xiang Su, Pan Hui. 232-241 [doi]
- Gesticulator: A framework for semantically-aware speech-driven gesture generation. Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexandersson, Iolanda Leite, Hedvig Kjellström. 242-250 [doi]
- Gesture Enhanced Comprehension of Ambiguous Human-to-Robot Instructions. Dulanga Weerakoon, Vigneshwaran Subbaraju, Nipuni Karumpulli, Tuan Tran, Qianli Xu, U-Xuan Tan, Joo-Hwee Lim, Archan Misra. 251-259 [doi]
- Going with our Guts: Potentials of Wearable Electrogastrography (EGG) for Affect Detection. Angela Vujic, Stephanie Tong, Rosalind W. Picard, Pattie Maes. 260-268 [doi]
- Hand-eye Coordination for Textual Difficulty Detection in Text Summarization. Jun Wang, Grace Ngai, Hong Va Leong. 269-277 [doi]
- How Good is Good Enough?: The Impact of Errors in Single Person Action Classification on the Modeling of Group Interactions in Volleyball. Lian Beenhakker, Fahim A. Salim, Dees Postma, Robby van Delden, Dennis Reidsma, Bert-Jan van Beijnum. 278-286 [doi]
- Incorporating Measures of Intermodal Coordination in Automated Analysis of Infant-Mother Interaction. Lauren Klein, Victor Ardulov, Yuhua Hu, Mohammad Soleymani, Alma Gharib, Barbara Thompson, Pat Levitt, Maja J. Mataric. 287-295 [doi]
- Influence of Electric Taste, Smell, Color, and Thermal Sensory Modalities on the Liking and Mediated Emotions of Virtual Flavor Perception. Nimesha Ranasinghe, Meetha Nesam James, Michael Gecawicz, Jonathan Bland, David Smith. 296-304 [doi]
- Introducing Representations of Facial Affect in Automated Multimodal Deception Detection. Leena Mathur, Maja J. Mataric. 305-314 [doi]
- Is She Truly Enjoying the Conversation?: Analysis of Physiological Signals toward Adaptive Dialogue Systems. Shun Katada, Shogo Okada, Yuki Hirano, Kazunori Komatani. 315-323 [doi]
- Job Interviewer Android with Elaborate Follow-up Question Generation. Koji Inoue, Kohei Hara, Divesh Lala, Kenta Yamamoto, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara. 324-332 [doi]
- LASO: Exploiting Locomotive and Acoustic Signatures over the Edge to Annotate IMU Data for Human Activity Recognition. Soumyajit Chatterjee, Avijoy Chakma, Aryya Gangopadhyay, Nirmalya Roy, Bivas Mitra, Sandip Chakraborty. 333-342 [doi]
- LDNN: Linguistic Knowledge Injectable Deep Neural Network for Group Cohesiveness Understanding. Yanan Wang, Jianming Wu, Jinfa Huang, Gen Hattori, Yasuhiro Takishima, Shinya Wada, Rui Kimura, Jie Chen, Satoshi Kurihara. 343-350 [doi]
- Mimicker-in-the-Browser: A Novel Interaction Using Mimicry to Augment the Browsing Experience. Riku Arakawa, Hiromu Yakura. 351-360 [doi]
- Mitigating Biases in Multimodal Personality Assessment. Shen Yan, Di Huang, Mohammad Soleymani. 361-369 [doi]
- MMGatorAuth: A Novel Multimodal Dataset for Authentication Interactions in Gesture and Voice. Sarah Morrison-Smith, Aishat Aloba, Hangwei Lu, Brett Benda, Shaghayegh Esmaeili, Gianne Flores, Jesse Smith, Nikita Soni, Isaac Wang, Rejin Joy, Damon L. Woodard, Jaime Ruiz, Lisa Anthony. 370-377 [doi]
- Modality Dropout for Improved Performance-driven Talking Faces. Ahmed Hussen Abdelaziz, Barry-John Theobald, Paul Dixon, Reinhard Knothe, Nicholas Apostoloff, Sachin Kajareker. 378-386 [doi]
- MORSE: MultimOdal sentiment analysis for Real-life SEttings. Yiqun Yao, Verónica Pérez-Rosas, Mohamed Abouelenien, Mihai Burzo. 387-396 [doi]
- MSP-Face Corpus: A Natural Audiovisual Emotional Database. Andrea Vidal, Ali Salman, Wei-Cheng Lin, Carlos Busso. 397-405 [doi]
- Multimodal Automatic Coding of Client Behavior in Motivational Interviewing. Leili Tavabi, Kalin Stefanov, Larry Zhang, Brian Borsari, Joshua D. Woolley, Stefan Scherer, Mohammad Soleymani. 406-413 [doi]
- Multimodal Data Fusion based on the Global Workspace Theory. Cong Bao, Zafeirios Fountas, Temitayo A. Olugbade, Nadia Bianchi-Berthouze. 414-422 [doi]
- Multimodal, Multiparty Modeling of Collaborative Problem Solving Performance. Shree Krishna Subburaj, Angela E. B. Stewart, Arjun Ramesh Rao, Sidney K. D'Mello. 423-432 [doi]
- PiHearts: Resonating Experiences of Self and Others Enabled by a Tangible Somaesthetic Design. Ilhan Aslan, Andreas Seiderer, Chi-Tai Dang, Simon Rädler, Elisabeth André. 433-441 [doi]
- Predicting Video Affect via Induced Affection in the Wild. Yi Ding, Radha Kumaran, Tianjiao Yang, Tobias Höllerer. 442-451 [doi]
- Preserving Privacy in Image-based Emotion Recognition through User Anonymization. Vansh Narula, Kexin Feng, Theodora Chaspari. 452-460 [doi]
- Purring Wheel: Thermal and Vibrotactile Notifications on the Steering Wheel. Patrizia Di Campli San Vito, Stephen A. Brewster, Frank E. Pollick, Simon Thompson, Lee Skrypchuk, Alexandros Mouzakitis. 461-469 [doi]
- SmellControl: The Study of Sense of Agency in Smell. Patricia Ivette Cornelio Martinez, Emanuela Maggioni, Giada Brianza, Sriram Subramanian, Marianna Obrist. 470-480 [doi]
- Speaker-Invariant Adversarial Domain Adaptation for Emotion Recognition. Yufeng Yin, Baiyu Huang, Yizhen Wu, Mohammad Soleymani. 481-490 [doi]
- StrategicReading: Understanding Complex Mobile Reading Strategies via Implicit Behavior Sensing. Wei Guo, Byeong-Young Cho, Jingtao Wang. 491-500 [doi]
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle. Amr Gomaa, Guillermo Reyes, Alexandra Alles, Lydia Rupp, Michael Feld. 501-509 [doi]
- Temporal Attention and Consistency Measuring for Video Question Answering. Lingyu Zhang, Richard J. Radke. 510-518 [doi]
- The eyes know it: FakeET - An Eye-tracking Database to Understand Deepfake Perception. Parul Gupta, Komal Chugh, Abhinav Dhall, Ramanathan Subramanian. 519-527 [doi]
- The WoNoWa Dataset: Investigating the Transactive Memory System in Small Group Interactions. Béatrice Biancardi, Lou Maisonnave-Couterou, Pierrick Renault, Brian Ravenet, Maurizio Mancini, Giovanna Varni. 528-537 [doi]
- Toward Adaptive Trust Calibration for Level 2 Driving Automation. Kumar Akash, Neera Jain, Teruhisa Misu. 538-547 [doi]
- Toward Multimodal Modeling of Emotional Expressiveness. Victoria Lin, Jeffrey M. Girard, Michael A. Sayette, Louis-Philippe Morency. 548-557 [doi]
- Towards Engagement Recognition of People with Dementia in Care Settings. Lars Steinert, Felix Putze, Dennis Küster, Tanja Schultz. 558-565 [doi]
- Understanding Applicants' Reactions to Asynchronous Video Interviews Through Self-reports and Nonverbal Cues. Skanda Muralidhar, Emmanuelle Patricia Kleinlogel, Eric Mayor, Adrian Bangerter, Marianne Schmid Mast, Daniel Gatica-Perez. 566-574 [doi]
- Using Emotions to Complement Multi-Modal Human-Robot Interaction in Urban Search and Rescue Scenarios. Sami Alperen Akgun, Moojan Ghafurian, Mark Crowley, Kerstin Dautenhahn. 575-584 [doi]
- "Was that successful?" On Integrating Proactive Meta-Dialogue in a DIY-Assistant using Multimodal Cues. Matthias Kraus, Marvin R. G. Schiller, Gregor Behnke, Pascal Bercher, Michael Dorna, Michael Dambier, Birte Glimm, Susanne Biundo, Wolfgang Minker. 585-594 [doi]
- You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing. Abdul Rafey Aftab, Michael von der Beeck, Michael Feld. 595-603 [doi]
- A Comparison between Laboratory and Wearable Sensors in the Context of Physiological Synchrony. Jasper J. van Beers, Ivo V. Stuldreher, Nattapong Thammasan, Anne-Marie Brouwer. 604-608 [doi]
- Analyzing Nonverbal Behaviors along with Praising. Toshiki Onishi, Arisa Yamauchi, Ryo Ishii, Yushi Aono, Akihiro Miyata. 609-613 [doi]
- Automated Time Synchronization of Cough Events from Multimodal Sensors in Mobile Devices. Tousif Ahmed, Mohsin Y. Ahmed, Md. Mahbubur Rahman, Ebrahim Nemati, Bashima Islam, Korosh Vatanparvar, Viswam Nathan, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao. 614-619 [doi]
- Conventional and Non-conventional Job Interviewing Methods: A Comparative Study in Two Countries. Kumar Shubham, Emmanuelle Patricia Kleinlogel, Anaïs Butera, Marianne Schmid Mast, Dinesh Babu Jayagopi. 620-624 [doi]
- Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice. Ronald Cumbal, José Lopes, Olov Engwall. 625-629 [doi]
- Effect of Modality on Human and Machine Scoring of Presentation Videos. Haley Lepp, Chee Wee Leong, Katrina Roohr, Michelle P. Martin-Raugh, Vikram Ramanarayanan. 630-634 [doi]
- Examining the Link between Children's Cognitive Development and Touchscreen Interaction Patterns. Ziyang Chen, Yu-Peng Chen, Alex Shaw, Aishat Aloba, Pavlo Antonenko, Jaime Ruiz, Lisa Anthony. 635-639 [doi]
- Gaze Tracker Accuracy and Precision Measurements in Virtual Reality Headsets. Jari Kangas, Olli Koskinen, Roope Raisamo. 640-644 [doi]
- Leniency to those who confess?: Predicting the Legal Judgement via Multi-Modal Analysis. Liang Yang, Jingjie Zeng, Tao Peng, Xi Luo, Jinghui Zhang, Hongfei Lin. 645-649 [doi]
- Multimodal Assessment of Oral Presentations using HMMs. Everlyne Kimani, Prasanth Murali, Ameneh Shamekhi, Dhaval Parmar, Sumanth Munikoti, Timothy W. Bickmore. 650-654 [doi]
- Multimodal Gated Information Fusion for Emotion Recognition from EEG Signals and Facial Behaviors. Soheil Rayatdoost, David Rudrauf, Mohammad Soleymani. 655-659 [doi]
- OpenSense: A Platform for Multimodal Data Acquisition and Behavior Perception. Kalin Stefanov, Baiyu Huang, Zongjian Li, Mohammad Soleymani. 660-664 [doi]
- Personalized Modeling of Real-World Vocalizations from Nonverbal Individuals. Jaya Narain, Kristina T. Johnson, Craig Ferguson, Amanda O'Brien, Tanya Talkar, Yue Zhang, Peter Wofford, Thomas F. Quatieri, Rosalind W. Picard, Pattie Maes. 665-669 [doi]
- Predicting the Effectiveness of Systematic Desensitization Through Virtual Reality for Mitigating Public Speaking Anxiety. Margaret von Ebers, Ehsanul Haque Nirjhar, Amir H. Behzadan, Theodora Chaspari. 670-674 [doi]
- Punchline Detection using Context-Aware Hierarchical Multimodal Fusion. Akshat Choube, Mohammad Soleymani. 675-679 [doi]
- ROSMI: A Multimodal Corpus for Map-based Instruction-Giving. Miltiadis Marios Katsakioris, Ioannis Konstas, Pierre Yves Mignotte, Helen Hastie. 680-684 [doi]
- The iCub Multisensor Datasets for Robot and Computer Vision Applications. Murat Kirtay, Ugo Albanese, Lorenzo Vannucci, Guido Schillaci, Cecilia Laschi, Egidio Falotico. 685-688 [doi]
- The Sensory Interactive Table: Exploring the Social Space of Eating. Roelof Anne Jelle de Vries, Juliet A. M. Haarman, Emiel C. Harmsen, Dirk K. J. Heylen, Hermie J. Hermens. 689-693 [doi]
- Touch Recognition with Attentive End-to-End Model. Wail El Bani, Mohamed Chetouani. 694-698 [doi]
- Automating Facilitation and Documentation of Collaborative Ideation Processes. Matthias Merk. 699-702 [doi]
- Detection of Micro-expression Recognition Based on Spatio-Temporal Modelling and Spatial Attention. Mengjiong Bai. 703-707 [doi]
- How to Complement Learning Analytics with Smartwatches?: Fusing Physical Activities, Environmental Context, and Learning Activities. George-Petru Ciordas-Hertel. 708-712 [doi]
- Multimodal Groups' Analysis for Automated Cohesion Estimation. Lucien Maman. 713-717 [doi]
- Multimodal Physiological Synchrony as Measure of Attentional Engagement. Ivo V. Stuldreher. 718-722 [doi]
- Personalised Human Device Interaction through Context aware Augmented Reality. Madhawa Perera. 723-727 [doi]
- Robot Assisted Diagnosis of Autism in Children. B. Ashwini. 728-732 [doi]
- Supporting Instructors to Provide Emotional and Instructional Scaffolding for English Language Learners through Biosensor-based Feedback. Heera Lee. 733-737 [doi]
- Towards a Multimodal and Context-Aware Framework for Human Navigational Intent Inference. Zhitian Zhang. 738-742 [doi]
- Towards Multimodal Human-Like Characteristics and Expressive Visual Prosody in Virtual Agents. Mireille Fares. 743-747 [doi]
- Towards Real-Time Multimodal Emotion Recognition among Couples. George Boateng. 748-753 [doi]
- Zero-Shot Learning for Gesture Recognition. Naveen Madapana. 754-757 [doi]
- Alfie: An Interactive Robot with Moral Compass. Cigdem Turan, Patrick Schramowski, Constantin A. Rothkopf, Kristian Kersting. 758-759 [doi]
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment. Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fiérrez. 760-761 [doi]
- LieCatcher: Game Framework for Collecting Human Judgments of Deceptive Speech. Sarah Ita Levitan, Xinyue Tan, Julia Hirschberg. 762-763 [doi]
- Spark Creativity by Speaking Enthusiastically: Communication Training using an E-Coach. Carla Viegas, Albert Lu, Annabel Su, Carter Strear, Yi Xu, Albert Topdjian, Daniel Limon, J. J. Xu. 764-765 [doi]
- The AI-Medic: A Multimodal Artificial Intelligent Mentor for Trauma Surgery. Edgar Rojas-Muñoz, Kyle Couperus, Juan P. Wachs. 766-767 [doi]
- A Multi-Modal Approach for Driver Gaze Prediction to Remove Identity Bias. Zehui Yu, Xiehe Huang, Xiubao Zhang, Haifeng Shen, Qun Li, Weihong Deng, Jian Tang, Yi Yang, Jieping Ye. 768-776 [doi]
- Advanced Multi-Instance Learning Method with Multi-features Engineering and Conservative Optimization for Engagement Intensity Prediction. Jianming Wu, Bo Yang, Yanan Wang, Gen Hattori. 777-783 [doi]
- EmotiW 2020: Driver Gaze, Group Emotion, Student Engagement and Physiological Signal based Challenges. Abhinav Dhall, Garima Sharma, Roland Goecke, Tom Gedeon. 784-789 [doi]
- Extract the Gaze Multi-dimensional Information Analysis Driver Behavior. Kui Lyu, Minghao Wang, Liyu Meng. 790-797 [doi]
- Fusical: Multimodal Fusion for Video Sentiment. Boyang Tom Jin, Leila Abdelrahman, Cong Kevin Chen, Amil Khanzada. 798-806 [doi]
- Group Level Audio-Video Emotion Recognition Using Hybrid Networks. Chuanhe Liu, Wenqiang Jiang, Minghao Wang, Tianhao Tang. 807-812 [doi]
- Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach. Anastasia Petrova, Dominique Vaufreydaz, Philippe Dessus. 813-820 [doi]
- Group-level Speech Emotion Recognition Utilising Deep Spectrum Features. Sandra Ottl, Shahin Amiriparian, Maurice Gerczuk, Vincent Karas, Björn W. Schuller. 821-826 [doi]
- Implicit Knowledge Injectable Cross Attention Audiovisual Model for Group Emotion Recognition. Yanan Wang, Jianming Wu, Panikos Heracleous, Shinya Wada, Rui Kimura, Satoshi Kurihara. 827-834 [doi]
- Multi-modal Fusion Using Spatio-temporal and Static Features for Group Emotion Recognition. Mo Sun, Jian Li, Hui Feng, Wei Gou, Haifeng Shen, Jian Tang, Yi Yang, Jieping Ye. 835-840 [doi]
- Multi-rate Attention Based GRU Model for Engagement Prediction. Bin Zhu, Xinjie Lan, Xin Guo, Kenneth E. Barner, Charles Boncelet. 841-848 [doi]
- Recognizing Emotion in the Wild using Multimodal Data. Shivam Srivastava, Saandeep Aathreya Sidhapur Lakshminarayan, Saurabh Hinduja, Sk Rahatul Jannat, Hamza Elhamdadi, Shaun J. Canavan. 849-857 [doi]
- X-AWARE: ConteXt-AWARE Human-Environment Attention Fusion for Driver Gaze Prediction in the Wild. Lukas Stappen, Georgios Rizos, Björn W. Schuller. 858-867 [doi]
- Bridging Social Sciences and AI for Understanding Child Behaviour. Heysem Kaya, Roy S. Hessels, Maryam Najafian, Sandra Hanekamp, Saeid Safavi. 868-870 [doi]
- International Workshop on Deep Video Understanding. Keith Curtis, George Awad, Shahzad Rajput, Ian Soboroff. 871-873 [doi]
- Face and Gesture Analysis for Health Informatics. Zakia Hammal, Di Huang, Kévin Bailly, Liming Chen, Mohamed Daoudi. 874-875 [doi]
- Workshop on Interdisciplinary Insights into Group and Team Dynamics. Hayley Hung, Gabriel Murray, Giovanna Varni, Nale Lehmann-Willenbrock, Fabiola H. Gerpott, Catharine Oertel. 876-877 [doi]
- Multisensory Approaches to Human-Food Interaction. Carlos Velasco, Anton Nijholt, Charles Spence, Takuji Narumi, Kosuke Motoki, Gijs Huisman, Marianna Obrist. 878-880 [doi]
- Multimodal Interaction in Psychopathology. Itir Önal Ertugrul, Jeffrey F. Cohn, Hamdi Dibeklioglu. 881-882 [doi]
- Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild. Dennis Küster, Felix Putze, Patricia Alves-Oliveira, Maike Paetzel, Tanja Schultz. 883-885 [doi]
- Speech, Voice, Text, and Meaning: A Multidisciplinary Approach to Interview Data through the use of digital tools. Arjan van Hessen, Silvia Calamai, Henk van den Heuvel, Stefania Scagliola, Norah Karrouche, Jeannine Beeken, Louise Corti, Christoph Draxler. 886-887 [doi]
- Multimodal Affect and Aesthetic Experience. Theodoros Kostoulas, Michal Muszynski, Theodora Chaspari, Panos Amelidis. 888-889 [doi]
- First Workshop on Multimodal e-Coaches. Leonardo Angelini, Mira El Kamali, Elena Mugellini, Omar Abou Khaled, Yordan Dimitrov, Vera Veleva, Zlatka Gospodinova, Nadejda Miteva, Richard Wheeler, Zoraida Callejas, David Griol, Kawtar Benghazi Akhlaki, Manuel Noguera, Panagiotis D. Bamidis, Evdokimos I. Konstantinidis, Despoina Petsani, Andoni Beristain Iraola, Dimitrios I. Fotiadis, Gérard Chollet, Inés Torres, Anna Esposito, Hannes Schlieter. 890-892 [doi]
- Social Affective Multimodal Interaction for Health. Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud. 893-894 [doi]
- The First International Workshop on Multi-Scale Movement Technologies. Eleonora Ceccaldi, Benoît G. Bardy, Nadia Bianchi-Berthouze, Luciano Fadiga, Gualtiero Volpe, Antonio Camurri. 895-896 [doi]