- Multimodal information processing in communication: the nature of faces and voices. Sophie K. Scott. 1 [doi]
- A Robot Just for You: Multimodal Personalized Human-Robot Interaction and the Future of Work and Care. Maja Mataric. 2-3 [doi]
- Projecting life onto machines. Simone Natale. 4 [doi]
- A Multimodal Approach to Investigate the Role of Cognitive Workload and User Interfaces in Human-robot Collaboration. Apostolos Kalatzis, Saidur Rahman, Vishnunarayan Girishan Prabhu, Laura M. Stanley, Mike P. Wittie. 5-14 [doi]
- Acoustic and Visual Knowledge Distillation for Contrastive Audio-Visual Localization. Ehsan Yaghoubi, André Peter Kelm, Timo Gerkmann, Simone Frintrop. 15-23 [doi]
- AIUnet: Asymptotic inference with U2-Net for referring image segmentation. Jiangquan Li, Shimin Shan, Yu Liu, Kaiping Xu, Xiwen Hu, Mingcheng Xue. 24-32 [doi]
- Analyzing and Recognizing Interlocutors' Gaze Functions from Multimodal Nonverbal Cues. Ayane Tashiro, Mai Imamura, Shiro Kumano, Kazuhiro Otsuka. 33-41 [doi]
- Analyzing Synergetic Functional Spectrum from Head Movements and Facial Expressions in Conversations. Mai Imamura, Ayane Tashiro, Shiro Kumano, Kazuhiro Otsuka. 42-50 [doi]
- Annotations from speech and heart rate: impact on multimodal emotion recognition. Kaushal Sharma, Guillaume Chanel. 51-59 [doi]
- AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis. Hendric Voß, Stefan Kopp. 60-69 [doi]
- ASMRcade: Interactive Audio Triggers for an Autonomous Sensory Meridian Response. Silvan Mertes, Marcel Strobl, Ruben Schlagowski, Elisabeth André. 70-78 [doi]
- Augmented Immersive Viewing and Listening Experience Based on Arbitrarily Angled Interactive Audiovisual Representation. Toshiharu Horiuchi, Shota Okubo, Tatsuya Kobayashi. 79-83 [doi]
- Breathing New Life into COPD Assessment: Multisensory Home-monitoring for Predicting Severity. Zixuan Xiao, Michal Muszynski, Ricards Marcinkevics, Lukas Zimmerli, Adam Daniel Ivankay, Dario Kohlbrenner, Manuel Kuhn, Yves Nordmann, Ulrich Muehlner, Christian Clarenbach, Julia E. Vogt, Thomas Brunschwiler. 84-93 [doi]
- Can empathy affect the attribution of mental states to robots? Cristina Gena, Francesca Manini, Antonio Lieto, Alberto Lillo, Fabiana Vernero. 94-103 [doi]
- Classification of Alzheimer's Disease with Deep Learning on Eye-tracking Data. Harshinee Sriram, Cristina Conati, Thalia Shoshana Field. 104-113 [doi]
- Component attention network for multimodal dance improvisation recognition. Jia Fu, Jiarui Tan, Wenjie Yin, Sepideh Pashami, Mårten Björkman. 114-118 [doi]
- Computational analyses of linguistic features with schizophrenic and autistic traits along with formal thought disorders. Takeshi Saga, Hiroki Tanaka, Satoshi Nakamura. 119-124 [doi]
- Cross-Device Shortcuts: Seamless Attention-guided Content Transfer via Opportunistic Deep Links between Apps and Devices. Marilou Beyeler, Yi Fei Cheng, Christian Holz. 125-134 [doi]
- Crucial Clues: Investigating Psychophysiological Behaviors for Measuring Trust in Human-Robot Interaction. Muneeb Ahmad, Abdullah Alzahrani. 135-143 [doi]
- Deciphering Entrepreneurial Pitches: A Multimodal Deep Learning Approach to Predict Probability of Investment. Pepijn Van Aken, Merel M. Jung, Werner Liebregts, Itir Önal Ertugrul. 144-152 [doi]
- Deep Breathing Phase Classification with a Social Robot for Mental Health. Kayla Matheus, Ellie Mamantov, Marynel Vázquez, Brian Scassellati. 153-162 [doi]
- Detecting When the Mind Wanders Off Task in Real-time: An Overview and Systematic Review. Vishal Kuvar, Julia W. Y. Kam, Stephen Hutt, Caitlin Mills. 163-173 [doi]
- Do I Have Your Attention: A Large Scale Engagement Prediction Dataset and Baselines. Monisha Singh, Ximi Hoque, Donghuo Zeng, Yanan Wang, Kazushi Ikeda, Abhinav Dhall. 174-182 [doi]
- Early Classifying Multimodal Sequences. Alexander Cao, Jean Utke, Diego Klabjan. 183-189 [doi]
- EEG-based Cognitive Load Classification using Feature Masked Autoencoding and Emotion Transfer Learning. Dustin Pulver, Prithila Angkan, Paul Hungler, Ali Etemad. 190-197 [doi]
- Embracing Contact: Detecting Parent-Infant Interactions. Metehan Doyran, Ronald Poppe, Albert Ali Salah. 198-206 [doi]
- Enhancing Resilience to Missing Data in Audio-Text Emotion Recognition with Multi-Scale Chunk Regularization. Wei-Cheng Lin, Lucas Goncalves, Carlos Busso. 207-215 [doi]
- Estimation of Violin Bow Pressure Using Photo-Reflective Sensors. Yurina Mizuho, Riku Kitamura, Yuta Sugiura. 216-223 [doi]
- Ether-Mark: An Off-Screen Marking Menu For Mobile Devices. Hanaë Rateau, Yosra Rekik, Edward Lank. 224-233 [doi]
- Evaluating Outside the Box: Lessons Learned on eXtended Reality Multi-modal Experiments Beyond the Laboratory. Bernardo Marques, Samuel S. Silva, Rafael Maio, João Alves, Carlos Ferreira, Paulo Dias, Beatriz Sousa Santos. 234-242 [doi]
- Evaluating the Potential of Caption Activation to Mitigate Confusion Inferred from Facial Gestures in Virtual Meetings. Melanie Heck, Jinhee Jeong, Christian Becker. 243-252 [doi]
- Expanding the Role of Affective Phenomena in Multimodal Interaction Research. Leena Mathur, Maja J. Mataric, Louis-Philippe Morency. 253-260 [doi]
- Explainable Depression Detection via Head Motion Patterns. Monika Gahalawat, Raul Fernandez Rojas, Tanaya Guha, Ramanathan Subramanian, Roland Goecke. 261-270 [doi]
- Exploring Feedback Modality Designs to Improve Young Children's Collaborative Actions. Amy Melniczuk, Egesa Vrapi. 271-281 [doi]
- FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning. Kazi Injamamul Haque, Zerrin Yumak. 282-291 [doi]
- Frame-Level Event Representation Learning for Semantic-Level Generation and Editing of Avatar Motion. Ayaka Ideno, Takuhiro Kaneko, Tatsuya Harada. 292-300 [doi]
- Gait Event Prediction of People with Cerebral Palsy using Feature Uncertainty: A Low-Cost Approach. Saikat Chakraborty, Noble Thomas, Anup Nandy. 301-306 [doi]
- GCFormer: A Graph Convolutional Transformer for Speech Emotion Recognition. Yingxue Gao, Huan Zhao, Yufeng Xiao, Zixing Zhang. 307-313 [doi]
- HIINT: Historical, Intra- and Inter-personal Dynamics Modeling with Cross-person Memory Transformer. Yubin Kim, Dong-Won Lee, Paul Pu Liang, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park. 314-325 [doi]
- How Noisy is Too Noisy? The Impact of Data Noise on Multimodal Recognition of Confusion and Conflict During Collaborative Learning. Yingbo Ma, Mehmet Celepkolu, Kristy Elizabeth Boyer, Collin F. Lynch, Eric N. Wiebe, Maya Israel. 326-335 [doi]
- Identifying Interlocutors' Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features. Shumpei Otsuchi, Koya Ito, Yoko Ishii, Ryo Ishii, Shinichirou Eitoku, Kazuhiro Otsuka. 336-344 [doi]
- Implicit Search Intent Recognition using EEG and Eye Tracking: Novel Dataset and Cross-User Prediction. Mansi Sharma, Shuang Chen, Philipp Müller, Maurice Rekrut, Antonio Krüger. 345-354 [doi]
- Increasing Heart Rate and Anxiety Level with Vibrotactile and Audio Presentation of Fast Heartbeat. Ruoqi Wang, Haifeng Zhang, Shaun Alexander Macdonald, Patrizia Di Campli San Vito. 355-363 [doi]
- Influence of hand representation on a grasping task in augmented reality. Louis Lafuma, Guillaume Bouyer, Olivier Goguel, Jean-Yves Pascal Didier. 364-372 [doi]
- Interpreting Sign Language Recognition using Transformers and MediaPipe Landmarks. Cristina Luna Jiménez, Manuel Gil-Martín, Ricardo Kleinlein, Rubén San Segundo, Fernando Fernández-Martínez. 373-377 [doi]
- Large language models in textual analysis for gesture selection. Laura Birka Hensel, Nutchanon Yongsatianchot, Parisa Torshizi, Elena Minucci, Stacy Marsella. 378-387 [doi]
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation. Yasheng Sun, Qianyi Wu, Hang Zhou, Kaisiyuan Wang, Tianshu Hu, Chen-Chieh Liao, Shio Miyafuji, Ziwei Liu, Hideki Koike. 388-396 [doi]
- MMASD: A Multimodal Dataset for Autism Intervention Analysis. Jicheng Li, Vuthea Chheang, Pinar Kullu, Eli Brignac, Zhang Guo, Anjana Bhat, Kenneth E. Barner, Roghayeh Leila Barmaki. 397-405 [doi]
- Multimodal Analysis and Assessment of Therapist Empathy in Motivational Interviews. Trang Tran, Yufeng Yin, Leili Tavabi, Joannalyn Delacruz, Brian Borsari, Joshua D. Woolley, Stefan Scherer, Mohammad Soleymani. 406-415 [doi]
- Multimodal Bias: Assessing Gender Bias in Computer Vision Models with NLP Techniques. Abhishek Mandal, Suzanne Little, Susan Leavy. 416-424 [doi]
- Multimodal Fusion Interactions: A Study of Human and Automatic Quantification. Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency. 425-435 [doi]
- Multimodal Turn Analysis and Prediction for Multi-party Conversations. Meng Chen Lee, Mai Trinh, Zhigang Deng. 436-444 [doi]
- Neural Mixed Effects for Nonlinear Personalized Predictions. Torsten Wörtwein, Nicholas B. Allen, Lisa B. Sheeber, Randy P. Auerbach, Jeffrey F. Cohn, Louis-Philippe Morency. 445-454 [doi]
- On Head Motion for Recognizing Aggression and Negative Affect during Speaking and Listening. Siska Fitrianie, Iulia Lefter. 455-464 [doi]
- Out of Sight, ... How Asymmetry in Video-Conference Affects Social Interaction. Camille Sallaberry, Gwenn Englebienne, Jan B. F. Van Erp, Vanessa Evers. 465-469 [doi]
- Paying Attention to Wildfire: Using U-Net with Attention Blocks on Multimodal Data for Next Day Prediction. Jack FitzGerald, Ethan Seefried, James E. Yost, Sangmi Pallickara, Nathaniel Blanchard. 470-480 [doi]
- Performance Exploration of RNN Variants for Recognizing Daily Life Stress Levels by Using Multimodal Physiological Signals. Yekta Said Can, Elisabeth André. 481-487 [doi]
- Predicting Player Engagement in Tom Clancy's The Division 2: A Multimodal Approach via Pixels and Gamepad Actions. Kosmas Pinitas, David Renaudie, Mike Thomsen, Matthew Barthet, Konstantinos Makantasis, Antonios Liapis, Georgios N. Yannakakis. 488-497 [doi]
- Recognizing Intent in Collaborative Manipulation. Zhanibek Rysbek, Ki Hwan Oh, Milos Zefran. 498-506 [doi]
- ReNeLiB: Real-time Neural Listening Behavior Generation for Socially Interactive Agents. Daksitha Senel Withanage Don, Philipp Müller, Fabrizio Nunnari, Elisabeth André, Patrick Gebhard. 507-516 [doi]
- Representation Learning for Interpersonal and Multimodal Behavior Dynamics: A Multiview Extension of Latent Change Score Models. Alexandria K. Vail, Jeffrey M. Girard, Lauren M. Bylsma, Jay Fournier, Holly A. Swartz, Jeffrey F. Cohn, Louis-Philippe Morency. 517-526 [doi]
- Robot Duck Debugging: Can Attentive Listening Improve Problem Solving? Maria Teresa Parreira, Sarah Gillet, Iolanda Leite. 527-536 [doi]
- SHAP-based Prediction of Mother's History of Depression to Understand the Influence on Child Behavior. Maneesh Bilalpur, Saurabh Hinduja, Laura A. Cariola, Lisa Sheeber, Nicholas Allen, Louis-Philippe Morency, Jeffrey F. Cohn. 537-544 [doi]
- Synerg-eye-zing: Decoding Nonlinear Gaze Dynamics Underlying Successful Collaborations in Co-located Teams. G. S. Rajshekar Reddy, Lucca Eloy, Rachel Dickler, Jason G. Reitman, Samuel L. Pugh, Peter W. Foltz, Jamie C. Gorman, Julie L. Harrison, Leanne M. Hirshfield. 545-554 [doi]
- The Role of Audiovisual Feedback Delays and Bimodal Congruency for Visuomotor Performance in Human-Machine Interaction. Annika Dix, Clarissa Sabrina Arlinghaus, A. Marie Harkin, Sebastian Pannasch. 555-563 [doi]
- TongueTap: Multimodal Tongue Gesture Recognition with Head-Worn Devices. Tan Gemicioglu, R. Michael Winters, Yu-Te Wang, Thomas M. Gable, Ivan J. Tashev. 564-573 [doi]
- Toward Fair Facial Expression Recognition with Improved Distribution Alignment. Mojtaba Kolahdouzi, Ali Etemad. 574-583 [doi]
- Towards Autonomous Physiological Signal Extraction From Thermal Videos Using Deep Learning. Kapotaksha Das, Mohamed Abouelenien, Mihai G. Burzo, John Elson, Kwaku O. Prakah-Asante, Clay Maranville. 584-593 [doi]
- µGeT: Multimodal eyes-free text selection technique combining touch interaction and microgestures. Gauthier Robert Jean Faisandaz, Alix Goguey, Christophe Jouffrais, Laurence Nigay. 594-603 [doi]
- Understanding the Social Context of Eating with Multimodal Smartphone Sensing: The Role of Country Diversity. Nathan Kammoun, Lakmal Meegahapola, Daniel Gatica-Perez. 604-612 [doi]
- User Feedback-based Online Learning for Intent Classification. Kaan Gönç, Baturay Saglam, Onat Dalmaz, Tolga Çukur, Serdar S. Kozat, Hamdi Dibeklioglu. 613-621 [doi]
- Using Augmented Reality to Assess the Role of Intuitive Physics in the Water-Level Task. Romina Abadi, Laurie M. Wilcox, Robert Scott Allison. 622-630 [doi]
- Using Explainability for Bias Mitigation: A Case Study for Fair Recruitment Assessment. Gizem Sogancioglu, Heysem Kaya, Albert Ali Salah. 631-639 [doi]
- Using Speech Patterns to Model the Dimensions of Teamness in Human-Agent Teams. Emily Doherty, Cara A. Spencer, Lucca Eloy, Nitin Kumar, Rachel Dickler, Leanne M. Hirshfield. 640-648 [doi]
- Video-based Respiratory Waveform Estimation in Dialogue: A Novel Task and Dataset for Human-Machine Interaction. Takao Obi, Kotaro Funakoshi. 649-660 [doi]
- ViFi-Loc: Multi-modal Pedestrian Localization using GAN with Camera-Phone Correspondences. Hansi Liu, Hongsheng Lu, Kristin Dana, Marco Gruteser. 661-669 [doi]
- WiFiTuned: Monitoring Engagement in Online Participation by Harmonizing WiFi and Audio. Vijay Kumar Singh, Pragma Kar, Ayush Madhan-Sohini, Madhav Rangaiah, Sandip Chakraborty, Mukulika Maity. 670-678 [doi]
- A New Theory of Data Processing: Applying Artificial Intelligence to Cognition and Humanity. Jingwei Liu. 679-683 [doi]
- From Natural to Non-Natural Interaction: Embracing Interaction Design Beyond the Accepted Convention of Natural. Radu-Daniel Vatavu. 684-688 [doi]
- Towards Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems. Amr Gomaa, Michael Feld. 689-694 [doi]
- Bridging Multimedia Modalities: Enhanced Multimodal AI Understanding and Intelligent Agents. Sushant Gautam. 695-699 [doi]
- Come Fl.. Run with Me: Understanding the Utilization of Drones to Support Recreational Runners' Well Being. Aswin Balasubramaniam. 700-705 [doi]
- Conversational Grounding in Multimodal Dialog Systems. Biswesh Mohapatra. 706-710 [doi]
- Crowd Behaviour Prediction using Visual and Location Data in Super-Crowded Scenarios. Antonius Bima Murti Wijaya. 711-715 [doi]
- Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing. Arnaud Allemang-Trivalle. 716-720 [doi]
- Explainable Depression Detection using Multimodal Behavioural Cues. Monika Gahalawat. 721-725 [doi]
- Modeling Social Cognition and its Neurologic Deficits with Artificial Neural Networks. Laurent P. Mertens. 726-730 [doi]
- Recording multimodal pair-programming dialogue for reference resolution by conversational agents. Cecilia Domingo. 731-735 [doi]
- Smart Garments for Immersive Home Rehabilitation Using VR. Luz Alejandra Magre, Shirley Coyle. 736-740 [doi]
- Audio-Visual Group-based Emotion Recognition using Local and Global Feature Aggregation based Multi-Task Learning. Sunan Li, Hailun Lian, Cheng Lu, Yan Zhao, Chuangao Tang, Yuan Zong, Wenming Zheng. 741-745 [doi]
- EmotiW 2023: Emotion Recognition in the Wild Challenge. Abhinav Dhall, Monisha Singh, Roland Goecke, Tom Gedeon, Donghuo Zeng, Yanan Wang, Kazushi Ikeda. 746-749 [doi]
- Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features. Anderson Augusma, Dominique Vaufreydaz, Frédérique Letué. 750-754 [doi]
- Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation. Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow. 755-762 [doi]
- FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation. Leon Harz, Hendric Voß, Stefan Kopp. 763-771 [doi]
- Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment. Zeyu Zhao, Nan Gao, Zhi Zeng, Guixuan Zhang, Jie Liu, Shuwu Zhang. 772-778 [doi]
- The DiffuseStyleGesture+ entry to the GENEA Challenge 2023. Sicheng Yang, Haiwei Xue, Zhensong Zhang, Minglei Li, Zhiyong Wu, Xiaofei Wu, Songcen Xu, Zonghong Dai. 779-785 [doi]
- The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation. Vladislav Korzun, Anna Beloborodova, Arkady Ilin. 786-791 [doi]
- The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter. 792-801 [doi]
- The UEA Digital Humans entry to the GENEA Challenge 2023. Jonathan Windle, Iain A. Matthews, Ben Milner, Sarah Taylor. 802-810 [doi]
- 4th ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour. Heysem Kaya, Anouk Neerincx, Maryam Najafian, Saeid Safavi. 811-813 [doi]
- 4th International Workshop on Multimodal Affect and Aesthetic Experience. Michal Muszynski, Theodoros Kostoulas, Leimin Tian, Edgar Roman-Rangel, Theodora Chaspari, Panos Amelidis. 814-815 [doi]
- 4th Workshop on Social Affective Multimodal Interaction for Health (SAMIH). Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud. 816-817 [doi]
- ACE: how Artificial Character Embodiment shapes user behaviour in multi-modal interaction. Eleonora Ceccaldi, Béatrice Biancardi, Sara Falcone, Silvia Ferrando, Geoffrey Gorisse, Thomas Janssoone, Anna Martin Coesel, Pierre Raimbaud. 818-819 [doi]
- Automated Assessment of Pain (AAP). Zakia Hammal, Steffen Walter, Nadia Berthouze. 820-821 [doi]
- GENEA Workshop 2023: The 4th Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents. Youngwoo Yoon, Taras Kucherenko, Jieyeon Woo, Pieter Wolfert, Rajmund Nagy, Gustav Eje Henter. 822-823 [doi]
- Multimodal Conversational Agents for People with Neurodevelopmental Disorders. Fabio Catania, Tanya Talkar, Franca Garzotto, Benjamin R. Cowan, Thomas F. Quatieri, Satrajit S. Ghosh. 824-825 [doi]
- Multimodal, Interactive Interfaces for Education. Daniel C. Tozadore, Lise Aubin, Soizic Gauthier, Barbara Bruno, Salvatore Maria Anzalone. 826-827 [doi]
- The 5th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild (MSECP-Wild). Bernd Dudzik, Tiffany Matej Hrkalovic, Dennis Küster, David St-Onge, Felix Putze, Laurence Devillers. 828-829 [doi]