- Incorporating Haptics into the Theatre of Multimodal Experience Design: and the Ecosystem this Requires. Karon E. MacLean. 1-2 [doi]
- Theory Driven Approaches to the Design of Multimodal Assessments of Learning, Emotion, and Self-Regulation in Medicine. Susanne P. Lajoie. 3 [doi]
- Socially Interactive Artificial Intelligence: Past, Present and Future. Elisabeth André. 4 [doi]
- From Differentiable Reasoning to Self-supervised Embodied Active Learning. Russ R. Salakhutdinov. 5 [doi]
- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis. Wei Han, Hui Chen, Alexander F. Gelbukh, Amir Zadeh 0001, Louis-Philippe Morency, Soujanya Poria. 6-15 [doi]
- Exploiting the Interplay between Social and Task Dimensions of Cohesion to Predict its Dynamics Leveraging Social Sciences. Lucien Maman, Laurence Likforman-Sulem, Mohamed Chetouani, Giovanna Varni. 16-24 [doi]
- Dynamic Mode Decomposition with Control as a Model of Multimodal Behavioral Coordination. Lauren Klein, Victor Ardulov, Alma Gharib, Barbara Thompson, Pat Levitt, Maja J. Mataric. 25-33 [doi]
- A Contrastive Learning Approach for Compositional Zero-Shot Learning. Muhammad Umer Anwaar, Rayyan Ahmad Khan, Zhihui Pan, Martin Kleinsteuber. 34-42 [doi]
- Efficient Deep Feature Calibration for Cross-Modal Joint Embedding Learning. Zhongwei Xie, Ling Liu, Lin Li, Luo Zhong. 43-51 [doi]
- A Multimodal Dataset and Evaluation for Feature Estimators of Temporal Phases of Anxiety. Hashini Senaratne, Levin Kuhlmann, Kirsten Ellis, Glenn Melvin, Sharon L. Oviatt. 52-61 [doi]
- Inclusive Action Game Presenting Real-time Multimodal Presentations for Sighted and Blind Persons. Masaki Matsuo, Takahiro Miura, Ken-ichiro Yabu, Atsushi Katagiri, Masatsugu Sakajiri, Junji Onishi, Takeshi Kurata, Tohru Ifukube. 62-70 [doi]
- ViCA: Combining Visual, Social, and Task-oriented Conversational AI in a Healthcare Setting. George Pantazopoulos, Jeremy Bruyere, Malvina Nikandrou, Thibaud Boissier, Supun Hemanthage, Binha Kumar Sachish, Vidyul Shah, Christian Dondrup, Oliver Lemon. 71-79 [doi]
- Towards Sound Accessibility in Virtual Reality. Dhruv Jain, Sasa Junuzovic, Eyal Ofek, Mike Sinclair, John R. Porter, Chris Yoon, Swetha Machanavajhala, Meredith Ringel Morris. 80-91 [doi]
- Am I Allergic to This? Assisting Sight Impaired People in the Kitchen. Elisa Ramil Brick, Vanesa Caballero Alonso, Conor O'Brien, Sheron Tong, Emilie Tavernier, Amit Parekh, Angus Addlesee, Oliver Lemon. 92-102 [doi]
- MindfulNest: Strengthening Emotion Regulation with Tangible User Interfaces. Samantha Speer, Emily Hamner, Michael Tasota, Lauren Zito, Sarah K. Byrne-Houser. 103-111 [doi]
- A Systematic Cross-Corpus Analysis of Human Reactions to Robot Conversational Failures. Dimosthenis Kontogiorgos, Minh Tran, Joakim Gustafson, Mohammad Soleymani 0001. 112-120 [doi]
- Recognizing Perceived Interdependence in Face-to-Face Negotiations through Multimodal Analysis of Nonverbal Behavior. Bernd Dudzik, Simon Columbus, Tiffany Matej Hrkalovic, Daniel Balliet, Hayley Hung. 121-130 [doi]
- Modelling and Predicting Trust for Developing Proactive Dialogue Strategies in Mixed-Initiative Interaction. Matthias Kraus, Nicolas Wagner, Wolfgang Minker. 131-140 [doi]
- Recognizing Social Signals with Weakly Supervised Multitask Learning for Multimodal Dialogue Systems. Yuki Hirano, Shogo Okada, Kazunori Komatani. 141-149 [doi]
- Decision-Theoretic Question Generation for Situated Reference Resolution: An Empirical Study and Computational Model. Felix Gervits, Gordon Briggs, Antonio Roque, Genki A. Kadomatsu, Dean Thurston, Matthias Scheutz, Matthew Marge. 150-158 [doi]
- Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation. Riku Arakawa, Zendai Kashino, Shinnosuke Takamichi, Adrien Verhulst, Masahiko Inami. 159-167 [doi]
- Hierarchical Classification and Transfer Learning to Recognize Head Gestures and Facial Expressions Using Earbuds. Shkurta Gashi, Aaqib Saeed, Alessandra Vicini, Elena Di Lascio, Silvia Santini. 168-176 [doi]
- Integrated Speech and Gesture Synthesis. Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, Éva Székely. 177-185 [doi]
- Co-Verbal Touch: Enriching Video Telecommunications with Remote Touch Technology. Angela Chan, Francis K. H. Quek, Takashi Yamauchi, Jinsil Hwaryoung Seo. 186-194 [doi]
- HapticLock: Eyes-Free Authentication for Mobile Devices. Gloria Dhandapani, Jamie Ferguson, Euan Freeman. 195-202 [doi]
- The Impact of Prior Knowledge on the Effectiveness of Haptic and Visual Modalities for Teaching Forces. Kern Qi, David Borland, Emily Brunsen, James Minogue, Tabitha C. Peck. 203-211 [doi]
- Toddler-Guidance Learning: Impacts of Critical Period on Multimodal AI Agents. Junseok Park, Kwanyoung Park, Hyunseok Oh, Ganghun Lee, Min Su Lee, Youngki Lee, Byoung-Tak Zhang. 212-220 [doi]
- Attachment Recognition in School Age Children Based on Automatic Analysis of Facial Expressions and Nonverbal Vocal Behaviour. Huda Alsofyani, Alessandro Vinciarelli. 221-228 [doi]
- Characterizing Children's Motion Qualities: Implications for the Design of Motion Applications for Children. Aishat Aloba, Lisa Anthony. 229-238 [doi]
- Temporal Graph Convolutional Network for Multimodal Sentiment Analysis. Jian Huang, Zehang Lin, Zhenguo Yang, Wenyin Liu. 239-247 [doi]
- Conversational Group Detection with Graph Neural Networks. Sydney Thompson, Abhijit Gupta, Anjali W. Gupta, Austin Chen, Marynel Vázquez. 248-252 [doi]
- Self-supervised Contrastive Learning of Multi-view Facial Expressions. Shuvendu Roy, Ali Etemad. 253-257 [doi]
- What's Fair is Fair: Detecting and Mitigating Encoded Bias in Multimodal Models of Museum Visitor Attention. Halim Acosta, Nathan Henderson, Jonathan P. Rowe, Wookhee Min, James Minogue, James C. Lester. 258-267 [doi]
- Bias and Fairness in Multimodal Machine Learning: A Case Study of Automated Video Interviews. Brandon M. Booth, Louis Hickman, Shree Krishna Subburaj, Louis Tay, Sang Eun Woo, Sidney K. D'Mello. 268-277 [doi]
- Technology as Infrastructure for Dehumanization: Three Hundred Million People with the Same Face. Sharon L. Oviatt. 278-287 [doi]
- Investigating Trust in Human-Machine Learning Collaboration: A Pilot Study on Estimating Public Anxiety from Speech. Abdullah Aman Tutul, Ehsanul Haque Nirjhar, Theodora Chaspari. 288-296 [doi]
- Impact of the Size of Modules on Target Acquisition and Pursuit for Future Modular Shape-changing Physical User Interfaces. Laura Pruszko, Yann Laurillau, Benoît Piranda, Julien Bourgeois, Céline Coutrix. 297-307 [doi]
- Why Do I Have to Take Over Control? Evaluating Safe Handovers with Advance Notice and Explanations in HAD. Frederik Wiehr, Anke Hirsch, Lukas Schmitz, Nina Knieriemen, Antonio Krüger, Alisa Kovtunova, Stefan Borgwardt, Ernie Chang, Vera Demberg, Marcel Steinmetz, Jörg Hoffmann 0001. 308-317 [doi]
- ML-PersRef: A Machine Learning-based Personalized Multimodal Fusion Approach for Referencing Outside Objects From a Moving Vehicle. Amr Gomaa, Guillermo Reyes, Michael Feld. 318-327 [doi]
- Advances in Multimodal Behavioral Analytics for Early Dementia Diagnosis: A Review. Chathurika Palliya Guruge, Sharon L. Oviatt, Pari Delir Haghighi, Elizabeth Pritchard. 328-340 [doi]
- ConAn: A Usable Tool for Multimodal Conversation Analysis. Anna Penzkofer, Philipp Müller 0001, Felix Bühler, Sven Mayer, Andreas Bulling. 341-351 [doi]
- Prediction of Interlocutors' Subjective Impressions Based on Functional Head-Movement Features in Group Meetings. Shumpei Otsuchi, Yoko Ishii, Momoko Nakatani, Kazuhiro Otsuka. 352-360 [doi]
- Inflation-Deflation Networks for Recognizing Head-Movement Functions in Face-to-Face Conversations. Kazuki Takeda, Kazuhiro Otsuka. 361-369 [doi]
- Deep Transfer Learning for Recognizing Functional Interactions via Head Movements in Multiparty Conversations. Takashi Mori, Kazuhiro Otsuka. 370-378 [doi]
- Investigating the Effect of Polarity in Auditory and Vibrotactile Displays Under Cognitive Load. Jamie Ferguson, Euan Freeman, Stephen A. Brewster. 379-386 [doi]
- User Preferences for Calming Affective Haptic Stimuli in Social Settings. Shaun Alexander Macdonald, Euan Freeman, Stephen A. Brewster, Frank E. Pollick. 387-396 [doi]
- Improving the Movement Synchrony Estimation with Action Quality Assessment in Children Play Therapy. Jicheng Li, Anjana Bhat, Roghayeh Barmaki. 397-406 [doi]
- Learning Oculomotor Behaviors from Scanpath. Beibin Li, Nicholas Nuechterlein, Erin Barney, Claire E. Foster, Minah Kim, Monique Mahony, Adham Atyabi, Li Feng, Quan Wang 0003, Pamela Ventola, Linda G. Shapiro, Frederick Shic. 407-415 [doi]
- Multimodal Detection of Drivers Drowsiness and Distraction. Kapotaksha Das, Salem Sharak, Kais Riani, Mohamed Abouelenien, Mihai Burzo, Michalis Papakostas. 416-424 [doi]
- On the Transition of Social Interaction from In-Person to Online: Predicting Changes in Social Media Usage of College Students during the COVID-19 Pandemic based on Pre-COVID-19 On-Campus Colocation. Weichen Wang, Jialing Wu, Subigya Kumar Nepal, Alex daSilva, Elin Hedlund, Eilis Murphy, Courtney Rogers, Jeremy F. Huckins. 425-434 [doi]
- Head Matters: Explainable Human-centered Trait Prediction from Head Motion Dynamics. Surbhi Madan, Monika Gahalawat, Tanaya Guha, Ramanathan Subramanian. 435-443 [doi]
- An Automated Mutual Gaze Detection Framework for Social Behavior Assessment in Therapy for Children with Autism. Zhang Guo, Kangsoo Kim, Anjana Bhat, Roghayeh Barmaki. 444-452 [doi]
- Design and Development of a Low-cost Device for Weight and Center of Gravity Simulation in Virtual Reality. Diego Vilela Monteiro, Hai-Ning Liang, Xian Wang, Wenge Xu, Huawei Tu. 453-460 [doi]
- Inclusive Voice Interaction Techniques for Creative Object Positioning. Farkhandah Aziz, Chris Creed, Maite Frutos Pascual, Ian Williams. 461-469 [doi]
- Interaction Modalities for Notification Signals in Augmented Reality. May Jorella S. Lazaro, Sung-Ho Kim, Jaeyong Lee, Jaemin Chun, Myung Hwan Yun. 470-477 [doi]
- PARA: Privacy Management and Control in Emerging IoT Ecosystems using Augmented Reality. Carlos Bermejo Fernandez, Lik Hang Lee, Petteri Nurmi, Pan Hui 0001. 478-486 [doi]
- Feature Perception in Broadband Sonar Analysis - Using the Repertory Grid to Elicit Interface Designs to Support Human-Autonomy Teaming. Faye McCabe, Christopher Baber. 487-493 [doi]
- To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures. Pieter Wolfert, Jeffrey M. Girard, Taras Kucherenko, Tony Belpaeme. 494-502 [doi]
- Audiovisual Speech Synthesis using Tacotron2. Ahmed Hussen Abdelaziz, Anushree Prasanna Kumar, Chloe Seivwright, Gabriele Fanelli, Justin Binder, Yannis Stylianou, Sachin Kajareker. 503-511 [doi]
- What's This? A Voice and Touch Multimodal Approach for Ambiguity Resolution in Voice Assistants. Jaewook Lee, Sebastian S. Rodriguez, Raahul Natarrajan, Jacqueline Chen, Harsh Deep, Alex Kirlik. 512-520 [doi]
- Graph Capsule Aggregation for Unaligned Multimodal Sequences. Jianfeng Wu, Sijie Mai, Haifeng Hu 0001. 521-529 [doi]
- Cross-modal Assisted Training for Abnormal Event Recognition in Elevators. Xinmeng Chen, Xuchen Gong, Ming Cheng, Qi Deng, Ming Li. 530-538 [doi]
- Towards Automatic Narrative Coherence Prediction. Filip Bendevski, Jumana Ibrahim, Tina Krulec, Theodore Waters, Nizar Habash, Hanan Salam, Himadri Mukherjee, Christin Camia. 539-547 [doi]
- TaxoVec: Taxonomy Based Representation for Web User Profiling. Qinpei Zhao, Xiongbaixue Yan, Yingjia Zhang, Weixiong Rao, Jiangfeng Li, Chao Mi, Jessie Chen. 548-556 [doi]
- Approximating the Mental Lexicon from Clinical Interviews as a Support Tool for Depression Detection. Esaú Villatoro-Tello, Gabriela Ramírez-de-la-Rosa, Daniel Gática-Pérez, Mathew Magimai-Doss, Héctor Jiménez-Salazar. 557-566 [doi]
- Long-Term, in-the-Wild Study of Feedback about Speech Intelligibility for K-12 Students Attending Class via a Telepresence Robot. Matthew Rueben, Mohammad Syed, Emily London, Mark Camarena, Eunsook Shin, Yulun Zhang, Timothy S. Wang, Thomas R. Groechel, Rhianna Lee, Maja J. Mataric. 567-576 [doi]
- EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices. Andy Kong, Karan Ahuja, Mayank Goel, Chris Harrison 0001. 577-585 [doi]
- Multimodal User Satisfaction Recognition for Non-task Oriented Dialogue Systems. Wenqing Wei, Sixia Li, Shogo Okada, Kazunori Komatani. 586-594 [doi]
- Cross Lingual Video and Text Retrieval: A New Benchmark Dataset and Algorithm. Jayaprakash Akula, Abhishek Sharma, Rishabh Dabral, Preethi Jyothi, Ganesh Ramakrishnan. 595-603 [doi]
- Interaction Techniques for 3D-positioning Objects in Mobile Augmented Reality. Carl-Philipp Hellmuth, Miroslav Bachinski, Jörg Müller 0001. 604-612 [doi]
- Engagement Rewarded Actor-Critic with Conservative Q-Learning for Speech-Driven Laughter Backchannel Generation. Öykü Zeynep Bayramoglu, Engin Erzin, Tevfik Metin Sezgin, Yücel Yemez. 613-618 [doi]
- Knowing Where and What to Write in Automated Live Video Comments: A Unified Multi-Task Approach. Hao Wu, Gareth James Francis Jones, François Pitié. 619-627 [doi]
- Tomato Dice: A Multimodal Device to Encourage Breaks During Work. Marissa A. Thompson, Lynette Tan, Cecilia Soto, Jaitra Dixit, Mounia Ziat. 628-635 [doi]
- Looking for Laughs: Gaze Interaction with Laughter Pragmatics and Coordination. Chiara Mazzocconi, Vladislav Maraev, Vidya Somashekarappa, Christine Howes. 636-644 [doi]
- Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation. Sarala Padi, Seyed Omid Sadjadi, Ram D. Sriram, Dinesh Manocha. 645-652 [doi]
- Mass-deployable Smartphone-based Objective Hearing Screening with Otoacoustic Emissions. Nils Heitmann, Thomas Rosner, Samarjit Chakraborty. 653-661 [doi]
- ThermEarhook: Investigating Spatial Thermal Haptic Feedback on the Auricular Skin Area. Arshad Nasser, Kexin Zheng, Kening Zhu. 662-672 [doi]
- Gaze-based Multimodal Meaning Recovery for Noisy/Complex Environments. Özge Alaçam, Ganeshan Malhotra, Eugen Ruppert, Chris Biemann. 673-681 [doi]
- Semi-supervised Visual Feature Integration for Language Models through Sentence Visualization. Lisai Zhang, Qingcai Chen, Joanna Siebert, Buzhou Tang. 682-686 [doi]
- Speech Guided Disentangled Visual Representation Learning for Lip Reading. Ya Zhao, Cheng Ma, Zunlei Feng, Mingli Song. 687-691 [doi]
- Enhancing Ultrasound Haptics with Parametric Audio Effects. Euan Freeman. 692-696 [doi]
- Perception of Ultrasound Haptic Focal Point Motion. Euan Freeman, Graham A. Wilson. 697-701 [doi]
- Intra- and Inter-Contrastive Learning for Micro-expression Action Unit Detection. Yante Li, Guoying Zhao. 702-706 [doi]
- HEMVIP: Human Evaluation of Multiple Videos in Parallel. Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Gustav Eje Henter. 707-711 [doi]
- Knowledge- and Data-Driven Models of Multimodal Trajectories of Public Speaking Anxiety in Real and Virtual Settings. Ehsanul Haque Nirjhar, Amir H. Behzadan, Theodora Chaspari. 712-716 [doi]
- Predicting Gaze from Egocentric Social Interaction Videos and IMU Data. Sanket Kumar Thakur, Cigdem Beyan, Pietro Morerio, Alessio Del Bue. 717-722 [doi]
- An Interpretable Approach to Hateful Meme Detection. Tanvi Deshpande, Nitya Mani. 723-727 [doi]
- Human-Guided Modality Informativeness for Affective States. Torsten Wörtwein, Lisa B. Sheeber, Nicholas Allen, Jeffrey F. Cohn, Louis-Philippe Morency. 728-734 [doi]
- Direct Gaze Triggers Higher Frequency of Gaze Change: An Automatic Analysis of Dyads in Unstructured Conversation. Georgiana Cristina Dobre, Marco Gillies, Patrick Falk, Jamie A. Ward, Antonia F. de C. Hamilton, Xueni Pan. 735-739 [doi]
- Online Study Reveals the Multimodal Effects of Discrete Auditory Cues in Moving Target Estimation Task. Katsutoshi Masai, Akemi Kobayashi, Toshitaka Kimura. 740-744 [doi]
- DynGeoNet: Fusion Network for Micro-expression Spotting. Thuong-Khanh Tran, Quang Nhat Vo, Guoying Zhao. 745-749 [doi]
- Earthquake Response Drill Simulator based on a 3-DOF Motion base in Augmented Reality. NamKyoo Kang, SeungJoon Kwon, Jongchan Lee, Sang-Woo Seo. 750-752 [doi]
- States of Confusion: Eye and Head Tracking Reveal Surgeons' Confusion during Arthroscopic Surgery. Benedikt Hosp, Myat Su Yin, Peter Haddawy, Ratthaphum Watcharopas, Paphon Sa-Ngasoongsong, Enkelejda Kasneci. 753-757 [doi]
- Personality Prediction with Cross-Modality Feature Projection. Daisuke Kamisaka, Yuichi Ishikawa. 758-762 [doi]
- Attention-based Multimodal Feature Fusion for Dance Motion Generation. Kosmas Kritsis, Aggelos Gkiokas, Aggelos Pikrakis, Vassilis Katsouros. 763-767 [doi]
- Multimodal Approach for Assessing Neuromotor Coordination in Schizophrenia Using Convolutional Neural Networks. Yashish M. Siriwardena, Carol Y. Espy-Wilson, Chris Kitchen, Deanna L. Kelly. 768-772 [doi]
- M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations. Dushyant Singh Chauhan, Gopendra Vikram Singh, Navonil Majumder, Amir Zadeh 0001, Asif Ekbal, Pushpak Bhattacharyya, Louis-Philippe Morency, Soujanya Poria. 773-777 [doi]
- Optimized Human-AI Decision Making: A Personal Perspective. Alex Pentland. 778-780 [doi]
- Dependability and Safety: Two Clouds in the Blue Sky of Multimodal Interaction. Philippe A. Palanque, David Navarre. 781-787 [doi]
- Towards Sonification in Multimodal and User-friendly Explainable Artificial Intelligence. Björn W. Schuller, Tuomas Virtanen, Maria Riveiro, Georgios Rizos, Jing Han 0010, Annamaria Mesaros, Konstantinos Drossos. 788-792 [doi]
- Photogrammetry-based VR Interactive Pedagogical Agent for K12 Education. Laduona Dai. 793-796 [doi]
- Assisted End-User Robot Programming. Gopika Ajaykumar. 797-801 [doi]
- Using Generative Adversarial Networks to Create Graphical User Interfaces for Video Games. Christopher Acornley. 802-806 [doi]
- Natural Language Stage of Change Modelling for "Motivationally-driven" Weight Loss Support. Selina Meyer. 807-811 [doi]
- Understanding Personalised Auditory-Visual Associations in Multi-Modal Interactions. Patrick O'Toole. 812-816 [doi]
- Semi-Supervised Learning for Multimodal Speech and Emotion Recognition. Yuanchao Li. 817-821 [doi]
- Development of an Interactive Human/Agent Loop using Multimodal Recurrent Neural Networks. Jieyeon Woo. 822-826 [doi]
- What If I Interrupt You. Liu Yang. 827-831 [doi]
- Accessible Applications - Study and Design of User Interfaces to Support Users with Disabilities. Marianna Di Gregorio. 832-834 [doi]
- Web-ECA: A Web-based ECA Platform. Fumio Nihei, Yukiko I. Nakano. 835-836 [doi]
- Multimodal Interaction in the Production Line - An OPC UA-based Framework for Injection Molding Machinery. Ferdinand Fuhrmann, Anna Maria Weber, Stefan Ladstätter, Stefan Dietrich, Johannes Rella. 837-838 [doi]
- Haply 2diy: An Accessible Haptic Plateform Suitable for Remote Learning. Antoine Weill-Duflos, Nicholas Ong, Felix Desourdy, Benjamin Delbos, Steve Ding, Colin R. Gallacher. 839-840 [doi]
- Combining Visual and Social Dialogue for Human-Robot Interaction. Nancie Gunson, Daniel Hernández García, Jose L. Part, Yanchao Yu, Weronika Sieinska, Christian Dondrup, Oliver Lemon. 841-842 [doi]
- Introducing an Integrated VR Sensor Suite and Cloud Platform. Kai-min Kevin Chang, Yueran Yuan. 843-845 [doi]
- NLP-guided Video Thin-slicing for Automated Scoring of Non-Cognitive, Behavioral Performance Tasks. Chee Wee Leong, Xianyang Chen, Vinay Basheerabad, Chong Min Lee, Patrick Houghton. 846-847 [doi]
- The EMPATHIC Virtual Coach: a demo. Javier M. Olaso, Alain Vázquez, Leila Ben Letaifa, Mikel de Velasco, Aymen Mtibaa, Mohamed Amine Hmani, Dijana Petrovska-Delacrétaz, Gérard Chollet, César Montenegro, Asier López-Zorrilla, Raquel Justo, Roberto Santana, Jofre Tenorio-Laranga, Eduardo González-Fraile, Begoña Fernández-Ruanova, Gennaro Cordasco, Anna Esposito, Kristin Beck Gjellesvik, Anna Torp Johansen, Maria Stylianou Kornes, Colin Pickard, Cornelius Glackin, Gary Cahalane, Pau Buch, Cristina Palmero, Sergio Escalera, Olga Gordeeva, Olivier Deroo, Anaïs Fernández, Daria Kyslitska, José Antonio Lozano, María Inés Torres, Stephan Schlögl. 848-851 [doi]
- Automated Assessment of Pain. Zakia Hammal, Nadia Berthouze, Steffen Walter. 852 [doi]
- 2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH). Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud. 853-854 [doi]
- Insights on Group and Team Dynamics. Joseph Allen, Hayley Hung, Joann Keyton, Gabriel Murray, Catharine Oertel, Giovanna Varni. 855-856 [doi]
- CATS2021: International Workshop on Corpora And Tools for Social skills annotation. Beatrice Biancardi, Eleonora Ceccaldi, Chloé Clavel, Mathieu Chollet, Tanvi Dinkar. 857-859 [doi]
- 3rd Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild. Dennis Küster, Felix Putze, David St-Onge, Pascal E. Fortin, Nerea Urrestilla, Tanja Schultz. 860-861 [doi]
- 2nd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour. Saeid Safavi 0001, Heysem Kaya, Roy S. Hessels, Maryam Najafian, Sandra Hanekamp. 862-863 [doi]
- ASMMC21: The 6th International Workshop on Affective Social Multimedia Computing. Dongyan Huang, Björn Schuller, Jianhua Tao, Lei Xie, Jie Yang. 864-867 [doi]
- Workshop on Multimodal Affect and Aesthetic Experience. Michal Muszynski, Edgar Roman-Rangel, Leimin Tian, Theodoros Kostoulas, Theodora Chaspari, Panos Amelidis. 868-869 [doi]
- Empowering Interactive Robots by Learning Through Multimodal Feedback Channel. Cigdem Turan, Dorothea Koert, Karl David Neergaard, Rudolf Lioutikov. 870-871 [doi]
- GENEA Workshop 2021: The 2nd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents. Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Zerrin Yumak, Gustav Henter. 872-873 [doi]
- Socially Informed AI for Healthcare: Understanding and Generating Multimodal Nonverbal Cues. Oya Çeliktutan, Alexandra Livia Georgescu, Nicholas Cummins. 874-876 [doi]