- User-Defined Interaction for Very Low-Cost Head-Mounted Displays. Yuen C. Law, Harrison Mendieta-Dávila, Daniel García-Fallas, Rogelio González-Quirós, Mario Chacón-Rivas. 1-5 [doi]
- Effects of Incoherence in Multimodal Explanations of Robot Failures. Pradip Pramanick, Luca Raggioli, Alessandra Rossi, Silvia Rossi. 6-10 [doi]
- Design and Preliminary Evaluation of a Stress Reflection System for High-Stress Training Environments. Surely Akiri, Vasundhara Joshi, Sanaz Taherzadeh, Gary Williams, Helena M. Mentis, Andrea Kleinsmith. 11-15 [doi]
- Haptic Feedback to Reduce Individual Differences in Corrective Actions for Skill Learning. Shigeharu Ono, Noboru Ninomiya, Hideaki Kanai. 16-20 [doi]
- Towards Multimodality: Comparing Quantifications of Movement Coordination. Chengyu Fan, Verónica Romero, Alexandra Paxton, Tahiya Chowdhury. 21-25 [doi]
- The Potential of Multimodal Compositionality for Enhanced Recommendations through Sentiment Analysis. Saba Nazir, Mehrnoosh Sadrzadeh. 26-30 [doi]
- Enhancing Autism Spectrum Disorder Screening: Implementation and Pilot Testing of a Robot-Assisted Digital Tool. Alessandro G. Di Nuovo, Adam Kay. 31-35 [doi]
- Understanding LLMs Ability to Aid Malware Analysts in Bypassing Evasion Techniques. Miuyin Yong Wong, Kevin Valakuzhy, Mustaque Ahamad, Douglas M. Blough, Fabian Monrose. 36-40 [doi]
- "Is This It?": Towards Ecologically Valid Benchmarks for Situated Collaboration. Dan Bohus, Sean Andrist, Yuwei Bao, Eric Horvitz, Ann Paradiso. 41-45 [doi]
- An Audiotactile System for Accessible Graphs on a Coordinate Plane. Crystal Yang, Paul Taele. 46-50 [doi]
- Levels of Multimodal Interaction. Anoop K. Sinha, Chinmay Kulkarni, Alex Olwal. 51-55 [doi]
- Comparing Subjective Measures of Workload in Video Game Play: Evaluating the Test-Retest Reliability of the VGDS and NASA-TLX. Emma Jane Pretty, Renan Luigi Martins Guarese, Haytham M. Fayek, Fabio Zambetta. 56-60 [doi]
- Towards Investigating Biases in Spoken Conversational Search. Sachin Pathiyan Cherumanal, Falk Scholer, Johanne R. Trippas, Damiano Spina. 61-66 [doi]
- Crossmodal Correspondences between Piquancy/Spiciness and Visual Shape. Yukun Wang, Masaki Ohno, Takuji Narumi, Young Ah Seong. 67-71 [doi]
- The OpenVIMO Platform: A Tutorial on Building and Managing Large-scale Online Experiments involving Videoconferencing. Bernd Dudzik, José Vargas Quiros. 72-74 [doi]
- An LLM-powered Socially Interactive Agent with Adaptive Facial Expressions for Conversing about Health. Joaquin Molto, Jonathan Fields, Ubbo Visser, Christine L. Lisetti. 75-77 [doi]
- Bespoke: Using LLM agents to generate just-in-time interfaces by reasoning about user intent. Palash Nandy, Sigurdur O. Adalgeirsson, Anoop K. Sinha, Tanya Kraljic, Mike Cleron, Lei Shi, Angad Singh, Ashish Chaudhary, Ashwin Ganti, Chris Melancon, Shudi Zhang, David Robishaw, Horia Stefan Ciurdar, Justin Secor, Kenneth Aleksander Robertsen, Kirsten Climer, Madison Le, Mathangi Venkatesan, Peggy Chi, Peixin Li, Peter F. McDermott, Rachel Shim, Selcen Onsan, Shilp Vaishnav, Stephanie Guamán. 78-81 [doi]
- An AI-Powered Interactive Interface to Enhance Accessibility of Interview Training for Military Veterans. Rakesh Chowdary Yarlagadda, Pranjal Aggarwal, Vaibhav Jamadagni, Ghritachi Mahajani, Pavan Kumar Malasani, Ehsanul Haque Nirjhar, Theodora Chaspari. 82-84 [doi]
- ARCADE: An Augmented Reality Display Environment for Multimodal Interaction with Conversational Agents. Carolin Schindler, Daiki Mayumi, Yuki Matsuda, Niklas Rach, Keiichi Yasumoto, Wolfgang Minker. 85-87 [doi]
- Let's Dance Together! AI Dancers Can Dance to Your Favorite Music and Style. Ryo Ishii, Shin'ichiro Eitoku, Shohei Matsuo, Motohiro Makiguchi, Ayami Hoshi, Louis-Philippe Morency. 88-90 [doi]
- Enhancing Biodiversity Monitoring: An Interactive Tool for Efficient Identification of Species in Large Bioacoustics Datasets. Hannes Kath, Ilira Troshani, Bengt Lüers, Thiago S. Gouvêa, Daniel Sonntag. 91-93 [doi]
- Combining Generative and Discriminative AI for High-Stakes Interview Practice. Chee Wee Leong, Navaneeth Jawahar, Vinay Basheerabad, Torsten Wörtwein, Andrew Emerson, Guy Sivan. 94-96 [doi]
- Human Contact Annotator: Annotating Physical Contact in Dyadic Interactions. Metehan Doyran, Albert Ali Salah, Ronald Poppe. 97-99 [doi]
- Smart Compost Bin for Measurement of Consumer Food Waste. Aidan J. Beery, Daniel W. Eastman, Jake Enos, William Richards, Patrick J. Donnelly. 100-107 [doi]
- Towards Wine Tasting Activity Recognition for a Digital Sommelier. Mario O. Parra, Jesús Favela, Luís A. Castro, Daniel Gatica-Perez. 108-112 [doi]
- Computational Gastronomy and Eating with Acoustophoresis. Lei Gao, Yutaka Tokuda, Shubhi Bansal, Sriram Subramanian. 113-116 [doi]
- Automatic Recognition of Commensal Activities in Co-located and Online settings. Kheder Yazgi, Cigdem Beyan, Maurizio Mancini, Radoslaw Niewiadomski. 117-121 [doi]
- Do We Need Artificial Dining Companions? Exploring Human Attitudes Toward Robots in Commensality Settings. Albana Hoxha, Hunter Fong, Radoslaw Niewiadomski. 122-128 [doi]
- Analyzing Emotion Impact of Mukbang Viewing Through Facial Expression Recognition using Support Vector Machine. Annika Capada, Ryan Deculawan, Lauren Garcia, Sophia Oquias, Ron Resurreccion, Jocelynn Cu, Merlin Suarez. 129-133 [doi]
- How does red taste?: Exploring how colour-taste associations affect our experience of food In Real Life and Extended Reality. Haeji Shin, Christopher Dawes, Jing Xue, Marianna Obrist. 134-137 [doi]
- Towards interpretable co-speech gestures synthesis using STARGATE. Louis Abel, Vincent Colotte, Slim Ouni. 138-146 [doi]
- Qualitative study of gesture annotation corpus: Challenges and perspectives. Mickaëlla Grondin-Verdon, Domitille Caillat, Slim Ouni. 147-155 [doi]
- Gesture Evaluation in Virtual Reality. Axel Wiebe Werner, Jonas Beskow, Anna Deichler. 156-164 [doi]
- Gesture Area Coverage to Assess Gesture Expressiveness and Human-Likeness. Rodolfo Luis Tonoli, Paula Dornhofer Paro Costa, Leonardo Boulitreau de Menezes Martins Marques, Lucas Hideki Ueda. 165-169 [doi]
- Benchmarking Speech-Driven Gesture Generation Models for Generalization to Unseen Voices and Noisy Environments. Johsac Isbac Gomez Sanchez, Kevin Adier Inofuente Colque, Leonardo Boulitreau de Menezes Martins Marques, Paula Dornhofer Paro Costa, Rodolfo Luis Tonoli. 170-174 [doi]
- 3D Gaze Tracking for Studying Collaborative Interactions in Mixed-Reality Environments. Eduardo Davalos, Yike Zhang, Ashwin T. S, Joyce Horn Fonteles, Umesh Timalsina, Gautam Biswas. 175-183 [doi]
- Gaze-Informed Vision Transformers: Predicting Driving Decisions Under Uncertainty. Sharath C. Koorathota, Nikolas Papadopoulos, Jia-Li Ma, Shruti Kumar, Xiaoxiao Sun, Arunesh Mittal, Patrick Adelman, Paul Sajda. 184-194 [doi]
- Detecting when Users Disagree with Generated Captions. Omair Shahzad Bhatti, Harshinee Sriram, Abdulrahman Mohamed Selim, Cristina Conati, Michael Barz, Daniel Sonntag. 195-203 [doi]
- Investigating the Impact of Illumination Change on the Accuracy of Head-Mounted Eye Trackers: A Protocol and Initial Results. Mohammadhossein Salari, Roman Bednarik. 204-210 [doi]
- Enhancing Digital Agriculture with XAI: Case Studies on Tabular Data and Future Directions. Rui Pedro da Costa Porfirio, Pedro Albuquerque Santos, Rui Neves Madeira. 211-217 [doi]
- Coupling of Task and Partner Model: Investigating the Intra-Individual Variability in Gaze during Human-Robot Explanatory Dialogue. Amit Singh, Katharina J. Rohlfing. 218-224 [doi]
- Quote to Explain: Using Multimodal Metalinguistic Markers to Explain Large Language Models' Understanding Capabilities. Milena Belosevic, Hendrik Buschmeier. 225-227 [doi]
- Towards Multimodal Co-Construction of Explanations for Robots: Combining Inductive Logic Programming and Large Language Models to Explain Robot Faults. Youssef Mahmoud Youssef, Teena Hassan. 228-230 [doi]