- Multimodal Task Analysis in Wearable Contexts. Julien Epps. 1 [doi]
- Designing for Meaningful Oversight: Human and Organisational Agency in Multimodal AI Systems. Liming Zhu 0001. 2 [doi]
- Multimodal AI for Transforming Industries and Empowering Social Interaction. Fang Chen 0001. 3 [doi]
- Multimodal Behavioral Characterization of Dyadic Alliance in Support Groups. Kevin Hyekang Joo, Zongjian Li, Yunwen Wang, Yuanfeixue Nan, Mina J. Kian, Shriya Upadhyay, Maja J. Mataric, Lynn Carol Miller, Mohammad Soleymani 0001. 4-15 [doi]
- What makes you say yes? An investigation of mental state and personality in persuasion during a dyadic conversation. Siyuan Chen 0002. 16-24 [doi]
- Decoding Affective States without Labels: Bimodal Image-brain Supervision. Vadym Gryshchuk, Maria Maistro, Christina Lioma, Tuukka Ruotsalo. 25-34 [doi]
- Can Adaptive Interviewer Robot Based on Social Signals Make a Better Impression on Interviewees and Encourage Self-Disclosure? Fuminori Nagasawa, Shogo Okada. 35-43 [doi]
- Foundation Feature-Guided Hierarchical Fusion of EEG-Physiological for Emotion Estimation. Haifeng Zhang, Von Ralph Dane Marquez Herbuela, Yukie Nagai. 44-50 [doi]
- Evaluating the Efficacy of Pulse Transit Time between Palm and Forehead in Blood Pressure Estimation. Chuchu Qiu, Jing Wei Chin, Tsz Tai Chan, Kwan Long Wong, Richard Hau Yue So. 51-59 [doi]
- From Lab to Wrist: Bridging Metabolic Monitoring and Consumer Wearables for Heart Rate and Oxygen Consumption Modeling. Barak Gahtan, Sanketh Vedula, Gil Samuelly Leichtag, Einat Kodesh, Alex M. Bronstein. 60-77 [doi]
- SpikEy: Preventing Drink Spiking using Wearables. Zhigang Yin, Ngoc Thi Nguyen, Agustin Zuniga, Mohan Liyanage, Petteri Nurmi, Huber Flores. 78-86 [doi]
- From Speech and PPG to EDA: Stress Detection Based on Cross-Modal Fine-Tuning of Foundation Models. Alia Ahmed Al Dossary, Mathieu Chollet, Alessandro Vinciarelli. 87-95 [doi]
- Psychological and Neurophysiological Indicators of Stress and Relaxation in Immersive Virtual Reality Environments: A Multimodal Approach. Ankit Arvind Prasad, Shashank Laxmikant Bidwai, Ashutosh Jitendra Zawar, Diven Ashwani Ahuja, Apostolos Kalatzis, Vishnunarayan Girishan Prabhu. 96-105 [doi]
- Exploring the effects of force feedback on VR Keyboards with varying visual designs. Zhenxing Li, Jari Kangas 0001, Ahmed Farooq, Roope Raisamo. 106-115 [doi]
- Functional Near-Infrared Spectroscopy (fNIRS) Analysis of Interaction Techniques in Touchscreen-Based Educational Gaming. Shayla Sharmin, Elham Bakhshipour, Mohammad Fahim Abrar, Behdokht Kiafar, Pinar Kullu, Nancy Getchell, Roghayeh Leila Barmaki. 116-125 [doi]
- AirSpartOne: One-Handed Distal Pointing for Large Displays on Mobile Devices and in Midair. Martin Birlouez, Yosra Rekik, Laurent Grisoni. 126-134 [doi]
- StoryDiffusion: How to Support UX Storyboarding With Generative-AI. Zhaohui Liang, Xiaoyu Zhang, Kevin Ma, Zhao Liu, Xipei Ren, Kosa Goucher-Lambert, Can Liu. 135-144 [doi]
- A Scenario-Based Design Pack for Exploring Multimodal Human-GenAI Relations. Josh Andres, Chris Danta, Andrea Bianchi, Sahar Farzanfar, Gloria Milena Fernández Nieto, Alexa Becker, Tara Capel, Frances Liddell, Shelby Hagemann, Ned Cooper, Sungyeon Hong, Li Lin, Eduardo Benítez Sandoval, Anna Brynskov, Hubert Dariusz Zajac, Zhuying Li, Tianyi Zhang, Arngeir Berge. 145-154 [doi]
- Lightweight Transformers for Isolated Sign Language Recognition. Cristina Luna Jiménez, Lennart Eing, Annalena Bea Aicher, Fabrizio Nunnari, Elisabeth André. 155-163 [doi]
- All of That in 15 Minutes? Exploring Privacy Perceptions Across Cognitive Abilities via Ad-hoc LLM-Generated Profiles Inferred from Social Media Use. Kirill Kronhardt, Sebastian Hoffmann, Fabian Adelt, Max Pascher, Jens Gerken. 164-172 [doi]
- SignFlow: End-to-End Sign Language Generation for One-to-Many Modeling using Conditional Flow Matching. Nabeela Khan, Bowen Wu 0002, Sihan Tan, Carlos Toshinori Ishi, Kazuhiro Nakadai. 173-180 [doi]
- MENA: A Multimodal Framework for Analyzing Caregiver Emotions and Competencies in AR Geriatric Simulations. Behdokht Kiafar, Pavan Uttej Ravva, Salam Daher, Asif Ahmmed Joy, Roghayeh Leila Barmaki. 181-190 [doi]
- Multimodal LLM using Federated Visual Instruction Tuning for Visually Impaired. Ankith Bala, Alina Vereshchaka. 191-199 [doi]
- Enhancing Gaze Prediction in Multi-Party Conversations via Speaker-Aware Multimodal Adaptation. Meng Chen Lee, Zhigang Deng 0001. 200-208 [doi]
- Real-time Generation of Various Types of Nodding for Avatar Attentive Listening System. Kazushi Kato, Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara. 209-217 [doi]
- Converting Spatial to Social: Using Persistent Homology to Understand Social Groups. Valerie K. Chen, Claire Liang, Julie A. Shah, Sean Andrist. 218-227 [doi]
- Multimodal Analysis of Disagreement in Dyadic Conversations: An Approach Based on Emotion Recognition. Areej Buker, Emily Smith, Olga Perepelkina, Alessandro Vinciarelli. 228-237 [doi]
- Speech-to-Joy: Self-Supervised Features for Enjoyment Prediction in Human-Robot Conversation. Ricardo Santana, Bahar Irfan, Erik Lagerstedt, Gabriel Skantze, André Pereira 0001. 238-248 [doi]
- Multimodal Quantitative Measures for Multiparty Behavior Evaluation. Ojas Shirekar, Wim Pouw, Chenxu Hao, Vrushank Phadnis, Thabo Beeler, Chirag Raman. 249-264 [doi]
- Learning Multimodal Motion Cues for Online End-of-Turn Prediction in Multi-Party Dialogue. Meng Chen Lee, Zhigang Deng 0001. 265-274 [doi]
- Leveraging Pre-Trained Transformers and Facial Embeddings for Multimodal Hirability Prediction in Job Interviews. Eric Fithian, Theodora Chaspari. 275-283 [doi]
- Beyond Utterance: Understanding Group Problem Solving through Discussion Sequences. Zhuoxu Duan, Zhengye Yang, Brooke Foucault Welles, Richard J. Radke. 284-293 [doi]
- Using a Secondary Channel to Display the Internal Empathic Resonance of LLM-Driven Agents for Mental Health Support. Matthias Schmidmaier, Jonathan Rupp, Sven Mayer. 294-304 [doi]
- Adaptive Gen-AI Guidance in Virtual Reality: A Multimodal Exploration of Engagement in Neapolitan Pizza-Making. Ka Hei Carrie Lau, Sema Sen, Philipp Stark, Efe Bozkir, Enkelejda Kasneci. 305-316 [doi]
- Privileged Contrastive Pretraining for Multimodal Affect Modelling. Kosmas Pinitas, Konstantinos Makantasis, Georgios N. Yannakakis. 317-325 [doi]
- USER-VLM 360: Personalized Vision Language Models with User-aware Tuning for Social Human-Robot Interactions. Hamed Rahimi, Adil Bahaj, Mouad Abrini, Mahdi Khoramshahi, Mounir Ghogho, Mohamed Chetouani. 326-336 [doi]
- Demographic User Modeling for Social Robotics with Multimodal Pre-trained Models. Hamed Rahimi, Mouad Abrini, Jeanne Malecot, Ying Lai, Adrien Jacquet Crétides, Mahdi Khoramshahi, Mohamed Chetouani. 337-343 [doi]
- Disentangling Cross-Modal Interactions for Enhanced Multimodal Emotion Recognition in Conversation. Jian Ding, Bo Zhang, Dailin Li, Jian Wang, Hongfei Lin. 344-353 [doi]
- Exploring the Impact of Distance on XR Selection Techniques. Becky Spittle, Maite Frutos Pascual, Chris Creed, Ian Williams 0001. 354-363 [doi]
- A Multifaceted Multi-Agent Framework for Zero-Shot Emotion Analysis and Recognition of Symbolic Music. Jiahao Zhao, Yunjia Li, Kazuyoshi Yoshii. 364-371 [doi]
- Motion Diffusion Autoencoders: Enabling Attribute Manipulation in Human Motion Demonstrated on Karate Techniques. Anthony Richardson, Felix Putze. 372-380 [doi]
- Towards Audio Personalization for Accessible Digital Media. Dhruv Jain, Jason Miller. 381-386 [doi]
- WatchHAR: Real-time On-device Human Activity Recognition System for Smartwatches. Taeyoung Yeon, Vasco Xu, Henry Hoffmann, Karan Ahuja. 387-394 [doi]
- Team Dynamics in Human-AI Collaboration: Effects on Confidence, Satisfaction, and Accountability. Mamehgol Yousefi, Ahmad Shahi, Mos Sharifi, Alvaro J. Jorge Romera, Simon Hoermann, Thammathip Piumsomboon. 395-404 [doi]
- MERD-360VR: A Multimodal Emotional Response Dataset from 360° VR Videos Across Different Age Groups. Qiang Chen, Shikun Zhou, Yuming Fang, Dan Luo, Tingsong Lu. 405-414 [doi]
- When Robots Listen: Predicting Empathy Valence from Multimodal Storytelling Data. Jiayu Wang, Himadri Shekhar Mondal, Tom Gedeon, Md. Zakir Hossain. 415-423 [doi]
- Unobtrusive Universal Acoustic Adversarial Attacks on Speech Foundation Models in the Wild. Jayden Fassett, Anjila Budathoki, Jack Morris, Qin Hu 0001, Yi Ding. 424-433 [doi]
- Time-channel Adaptive Fusion and Hierarchical Attention Mechanism for Dynamic Hand Gesture Recognition. Longjie Huang, Jianhai Liu, Yong Gu, Kai Jiang, Haibo Li. 434-445 [doi]
- Predicting End-of-turn and Backchannel Based on Multimodal Voice Activity Prediction Model. Ryo Ishii, Shin'ichiro Eitoku, Ryota Yokoyama, Junichi Sawase. 446-455 [doi]
- Seeing, Hearing, Feeling: Designing Multimodal Alerts for Critical Drone Scenarios. Nina Knieriemen, Anke Hirsch, Muhammad Moiz Sakha, Florian Daiber, Hannah Kolb, Simone Hüning, Frederik Wiehr, Antonio Krüger. 456-465 [doi]
- Analyzing Character Representation in Media Content using Multimodal Foundation Model: Effectiveness and Trust. Evdoxia Taka, Debadyuti Bhattacharya, Joanne Garde-Hansen, Sanjay Sharma, Tanaya Guha. 466-474 [doi]
- Multimodal Behavioral Patterns Analysis with Eye-Tracking and LLM-Based Reasoning. Dongyang Guo, Yasmeen Abdrabou, Enkeleda Thaqi, Enkelejda Kasneci. 475-484 [doi]
- A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces. Ronja Heinrich, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik. 485-495 [doi]
- Please Let Me Think: The Influence of Conversational Fillers on Transparency and Perception of Waiting Time when Interacting with a Conversational AI in Virtual Reality. David Obremski, Paula Friedrich, Carolin Wienrich. 496-505 [doi]
- DiffusionCleft: Facial Anomaly Synthesis Guided by Text. Karen Rosero, Lucas M. Harrison, Alex A. Kane, Rami R. Hallac, Carlos Busso. 506-515 [doi]
- A Multimodal Classroom Video Question-Answering Framework for Automated Understanding of Collaborative Learning. Nithin Sivakumaran, Chia-Yu Yang, Abhay Zala, Shoubin Yu, Daeun Hong, Xiaotian Zou, Elias Stengel-Eskin, Dan Carpenter, Wookhee Min, Cindy E. Hmelo-Silver, Jonathan P. Rowe, James C. Lester, Mohit Bansal. 516-525 [doi]
- Investigating differences in Paramedic trainees' multimodal interaction during low and high physiological synchrony. Vasundhara Joshi, Surely Akiri, Sanaz Taherzadeh, Gary Williams 0002, Andrea Kleinsmith. 526-534 [doi]
- A multimodal Framework for exploring behavioural cues for automatic Stress Detection. Rebecca Valerio, Marwa Mahmoud. 535-539 [doi]
- Write! Draw! Move!: Investigating the Effects of Positive and Negative Self-Reflection on Emotion through Self-Expression Modalities. Golnaz Moharrer, Kavya Rajendran, Rowena Pinto, Andrea Kleinsmith. 540-549 [doi]
- When Words Fall Short: The Case for Conversational Interfaces that Don't Listen. James Simpson, Hamish Stening, Gaurav Patil, Patrick Nalepka, Mark Dras, Rachel W. Kallen, Simon G. Hosking, Michael J. Richardson, Deborah Richards 0001. 550-560 [doi]
- Large Language Models For Multimodal User Interaction in Virtual Environments. Ahmed Sayed, Kevin Pfeil. 561-569 [doi]
- Disentangling Perceptual Ambiguity in Multifunctional Nonverbal Behaviors in Conversations via Tensor Spectrum Decomposition. Issa Tamura, Momoka Tajima, Shiro Kumano, Kazuhiro Otsuka. 570-578 [doi]
- A Block-Level Fine-Graining Framework for Multimodal Fusion in Federated Learning. Guozhi Zhang, Mengying Jia, Shuyan Feng, Zixuan Liu. 579-587 [doi]
- Multimodal Synthetic Data Finetuning and Model Collapse: Insights from VLMs and Diffusion Models. Zizhao Hu, Mohammad Rostami, Jesse Thomason. 588-599 [doi]
- Understanding and Supporting Multimodal AI Chat Interactions of DHH College Students: an Empirical Study. Nan Zhuang, Yanni Ma, Xin Zhao, Wang Ying, Shaolong Chai, Shitong Weng, Mengru Xue, Yuxi Mao, Cheng Yao. 600-604 [doi]
- BiFuseNet: A Multimodal Network for Estimating Blood Alcohol Concentration via Bidirectional Hierarchical Fusion. Abdullah Tariq, Arooba Maqsood, Martin Masek, Syed Zulqarnain Gilani. 605-613 [doi]
- Punctual or Continuous? Analyzing Depression Traces in Language and Paralanguage with Multiple Instance Learning. Rawan Alsarrani, Anna Esposito, Alessandro Vinciarelli. 614-623 [doi]
- Pinching Visuo-haptic Display: Investigating Cross-Modal Effects of Visual Textures on Electrostatic Cloth Tactile Sensations. Takekazu Kitagishi, Chun Wei Ooi, Yuichi Hiroi, Jun Rekimoto. 624-633 [doi]
- Knowledge Graphs and Fine-Grained Visual Features: A Potent Duo Against Cheapfakes. Tuan-Vinh La, Minh Hieu Nguyen, Minh-Son Dao. 634-642 [doi]
- A Multilingual, Multimodal Dataset for Disinformation and Out-of-Context Analysis with Rich Supportive Information. Shuhan Cui, Hanrui Wang 0005, Ching-Chun Chang, Huy H. Nguyen, Isao Echizen. 643-651 [doi]
- Causal Explanation of the Quality of Parent-Child Interactions with Multimodal Behavioral Features. Katherine Guerrerio, Lujie Karen Chen, Lisa Berlin, Brenda Jones Harden. 652-662 [doi]
- Few-shot Fine-grained Image Classification with Interpretable Prompt Learning through Distribution Alignment. Dongliang Guo, Handong Zhao, Ryan Rossi, SungChul Kim, Nedim Lipka, Tong Yu, Sheng Li. 663-672 [doi]
- VitaStress: A Multimodal Dataset for Stress Detection. Paul Schreiber, Simon Burbach, Beyza Cinar, Lennart Mackert, Maria Maleshkova. 673-681 [doi]
- Talking-to-Build: How LLM-Assisted Interface Shapes Player Performance and Experience in Minecraft. Xin Sun, Lei Wang, Yue Li 0044, Jie Li, Massimo Poesio, Julian Frommel, Koen V. Hindriks, Jiahuan Pei. 682-692 [doi]
- Human Authenticity and Flourishing in an AI-Driven World: Edmund's Journey and the Call for Mindfulness. Sebastian Zepf, Mark Colley. 693-698 [doi]
- MUSE: A Multimodal, Generative, and Symbolic Framework for Human Experience Modeling. Mohammad Rashedul Hasan. 699-705 [doi]
- Designing and Evaluating Gen-AI for Cultural Resilience. Ka Hei Carrie Lau. 706-710 [doi]
- Cognitive Effort Analysis in Digital Learning Environments. Shayla Sharmin. 711-715 [doi]
- Multimodal Conversational Events Estimation in Complex Social Scenes. Litian Li. 716-720 [doi]
- Modeling Social Dynamics from Multimodal Cues in Natural Conversations. Kevin Hyekang Joo. 721-725 [doi]
- Multimodal Analysis of Caregiving Interactions in Simulation-Based Training. Behdokht Kiafar. 726-729 [doi]
- Towards Context-sensitive Emotion Recognition. Sayak Mukherjee. 730-734 [doi]
- Designing Multimodal Nonverbal Communication Cues for Multirobot Supervision Through Event Detection and Policy Mapping. Richard Attfield. 735-739 [doi]
- Developing Virtual Reality (VR) Simulations with Embedded User Analytics for Cognitive Rehabilitation in PTSD Veterans. Ravi Varman Selvakumaran. 740-744 [doi]
- Towards Seamless Interaction: Neuroadaptive Virtual Reality Interfaces for Target Selection. Jalynn Blu Nicoly. 745-748 [doi]
- Towards Intelligent Adaption in Cognitive Assistance Systems through Physiological Computing. Jordan Schneider. 749-753 [doi]
- Enhancing Accessibility in Animation: A Context-Aware Audio Description System for Visually Impaired Children. Md Fahad Bin Zamal. 754-758 [doi]
- Differentiating Frustration from Cognitive Workload in a Dual-task System. Heting Wang. 759-763 [doi]
- Decoding social interaction to understand traumatic behaviours in social dynamics. Pritesh Nalinbhai Contractor. 764-768 [doi]
- Simulated Insight, Real-World Impact: Enhancing Driving Safety with CARLA-Simulated Personalized Lessons and Eye-Tracking Risk Coaching. Wenbin Gan, Minh-Son Dao, Koji Zettsu. 769-771 [doi]
- The Crock of Shh: A Whispering Water Interface for Reshaping Reality. Brandon Waylan Ables. 772-774 [doi]
- The Human Record Needle: A Novel Interface for Embodied Music Interaction. Brandon Waylan Ables. 775-776 [doi]
- PoseDoc: An Interactive Tool for Efficient Annotation in Human Pose Estimation. Chengyu Fan, Tahiya Chowdhury. 777-780 [doi]
- SocialWise: LLM-Agentic Conversation Therapy for Individuals with Autism Spectrum Disorder to Enhance Communication Skills. Albert Tang. 781-784 [doi]
- Realtime Multimodal Emotion Estimation using Behavioral and Neurophysiological Data. Von Ralph Dane Marquez Herbuela, Yukie Nagai. 785-787 [doi]
- LayLens: Improving Deepfake Understanding through Simplified Explanations. Abhijeet Narang, Parul Gupta, Liuyijia Su, Abhinav Dhall. 788-790 [doi]
- Affective and Physiological Responses to Immersive Intangible Cultural Heritage Experiences in Extended Reality. Fasih Haider, Sofia de la Fuente Garcia, Alicia Núñez García, Saturnino Luz. 791-793 [doi]
- A Multilingual Telegram Chatbot for Mental Health Data Collection. Danila Mamontov, Alexey Karpov 0001, Wolfgang Minker. 794-796 [doi]
- ICMI'25 Grand Challenge: A Thermal and Spectral Multimodal Image Dataset for Contaminant Detection in Industrial Organic Food Waste. Matthew Vestal, James Ireland, Xing Wang, Ram Subramanian, Damith Herath. 797-802 [doi]
- mIoG: An Evaluation Metric for Multispectral Instance Segmentation in Robotics. Yue Peng, Yizheng Liu, Mengxuan Liang. 803-807 [doi]
- CCMI 2025: Cross-Cultural Multimodal Interaction. Koji Inoue, Shogo Okada, Divesh Lala, Sahba Zojaji, Nancy F. Chen, Tatsuya Kawahara. 808-810 [doi]
- The Fifth Edition of the Automated Assessment of Pain (AAP 2025). Zakia Hammal, Steffen Walter 0001, Nadia Bianchi-Berthouze. 811-813 [doi]
- HRAI 2025: The 1st Workshop on Holistic and Responsible Affective Intelligence. Yuanchao Li, Dimitrios Kollias, Guillaume Chanel, Marios A. Fanourakis, Michal Muszynski, Brandon M. Booth, Leimin Tian, Madhawa Perera, Catherine Lai, Huili Chen. 814-817 [doi]