- A Portable Ball with Unity-based Computer Game for Interactive Arm Motor Control Exercise. Yuqing Zhou, Yijia An, Qisong Niu, Qinglei Bu, Yung C. Liang, Mark Leach, Jie Sun. 1-5 [doi]
- "Am I listening?", Evaluating the Quality of Generated Data-driven Listening Motion. Pieter Wolfert, Gustav Eje Henter, Tony Belpaeme. 6-10 [doi]
- ASAR Dataset and Computational Model for Affective State Recognition During ARAT Assessment for Upper Extremity Stroke Survivors. Tamim Ahmed, Thanassis Rikakis, Aisling Kelliher, Mohammad Soleymani. 11-15 [doi]
- Assessing Infant and Toddler Behaviors through Wearable Inertial Sensors: A Preliminary Investigation. Ayaka Onodera, Riku Ishioka, Yuuki Nishiyama, Kaoru Sezaki. 16-20 [doi]
- Characterization of collaboration in a virtual environment with gaze and speech signals. Aurélien Léchappé, Aurélien Milliat, Cédric Fleury, Mathieu Chollet, Cédric Dumas. 21-25 [doi]
- Detection of contract cheating in pen-and-paper exams through the analysis of handwriting style. Konstantin Kuznetsov, Michael Barz, Daniel Sonntag. 26-30 [doi]
- Developing a Generic Focus Modality for Multimodal Interactive Environments. Fábio Barros, António J. S. Teixeira, Samuel S. Silva. 31-35 [doi]
- Do Body Expressions Leave Good Impressions? - Predicting Investment Decisions based on Pitcher's Body Expressions. Merel M. Jung, Mark Van Vlierden, Werner Liebregts, Itir Önal Ertugrul. 36-40 [doi]
- Exploring Neurophysiological Responses to Cross-Cultural Deepfake Videos. Muhammad Riyyan Khan, Shahzeb Naeem, Usman Tariq, Abhinav Dhall, Malik Nasir Afzal Khan, Fares Al-shargie, Hasan Al-Nashash. 41-45 [doi]
- HEARD-LE: An Intelligent Conversational Interface for Wordle. Crystal Yang, Karen Arredondo, Jung-In Koh, Paul Taele, Tracy Hammond. 46-50 [doi]
- Insights Into the Importance of Linguistic Textual Features on the Persuasiveness of Public Speaking. Alisa Barkar, Mathieu Chollet, Béatrice Biancardi, Chloé Clavel. 51-55 [doi]
- Leveraging gaze for potential error prediction in AI-support systems: An exploratory analysis of interaction with a simulated robot. Björn Severitt, Nora Jane Castner, Olga Lukashova-Sanz, Siegfried Wahl. 56-60 [doi]
- LinLED: Low latency and accurate contactless gesture interaction. Stéphane Viollet, Martin Chauvet, Jean-Marc Ingargiola. 61-65 [doi]
- Multimodal Entrainment in Bio-Responsive Multi-User VR Interactives. Meehae Song, Steve DiPaola. 66-70 [doi]
- Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions. Setareh Nasihati Gilani, Kimberly A. Pollard, David R. Traum. 71-75 [doi]
- Multimodal Synchronization in Musical Ensembles: Investigating Audio and Visual Cues. Sutirtha Chakraborty, Joseph Timoney. 76-80 [doi]
- The Limitations of Current Similarity-Based Objective Metrics in the Context of Human-Agent Interaction Applications. Armand Deffrennes, Lucile Vincent, Marie Pivette, Kevin El Haddad, Jacqueline Deanna Bailey, Monica Perusquía-Hernández, Soraia M. Alarcão, Thierry Dutoit. 81-85 [doi]
- Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors. Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze. 86-90 [doi]
- Understanding the Physiological Arousal of Novice Performance Drivers for the Design of Intelligent Driving Systems. Everlyne Kimani, Alexandre L. S. Filipowicz, Hiroshi Yasuda. 91-95 [doi]
- Virtual Reality Music Instrument Playing Game for Upper Limb Rehabilitation Training. Muxiao Sun, Qinglei Bu, Ying Hou, Xiaowen Ju, Limin Yu, Eng Gee Lim, Jie Sun. 96-100 [doi]
- Tutorial on Multimodal Machine Learning: Principles, Challenges, and Open Questions. Paul Pu Liang, Louis-Philippe Morency. 101-104 [doi]
- Platform for Situated Intelligence and OpenSense: A Tutorial on Building Multimodal Interactive Applications for Research. Sean Andrist, Dan Bohus, Zongjian Li, Mohammad Soleymani. 105-106 [doi]
- A Versatile Finger-Interaction Device with Audio-Tactile Feedback. Stefano Papetti, Eric Larrieux, Martin Fröhlich. 107-108 [doi]
- An Adaptive Virtual Agent Platform for Automated Social Skills Training. Takeshi Saga, Jieyeon Woo, Alexis Gerard, Hiroki Tanaka, Catherine Achard, Satoshi Nakamura, Catherine Pelachaud. 109-111 [doi]
- Gesticulating with NAO: Real-time Context-Aware Co-Speech Gesture Generation for Human-Robot Interaction. Nguyen Tan Viet Tuyen, Viktor Schmuck, Oya Çeliktutan. 112-114 [doi]
- HAT3: The Human Autonomy Team Trust Toolkit. Catherine Neubauer. 115-118 [doi]
- Melody Slot Machine II: Sound Enhancement with Multimodal Interface. Masatoshi Hamanaka. 119-120 [doi]
- Pain Recognition Differences between Female and Male Subjects: An Analysis based on the Physiological Signals of the X-ITE Pain Database. Tobias B. Ricken, Peter Bellmann, Sascha Gruss, Hans A. Kestler, Steffen Walter, Friedhelm Schwenker. 121-130 [doi]
- Towards Automated Pain Assessment using Embodied Conversational Agents. Prasanth Murali, Mehdi Arjmand, Matias Volonte, Zixi Li, James Griffith, Michael K. Paasche-Orlow, Timothy W. Bickmore. 131-140 [doi]
- Do We Speak to Robots Looking Like Humans As We Speak to Humans? A Study of Pitch in French Human-Machine and Human-Human Interactions. Natalia Kalashnikova, Mathilde Hutin, Ioana Vasilescu, Laurence Devillers. 141-145 [doi]
- Expectations vs. Reality: The Impact of Adaptation Gap on Avatars in Social VR Platforms. Andrey Goncharov, Özge Nilay Yalçin, Steve DiPaola. 146-153 [doi]
- eXtended Reality of socio-motor interactions: Current Trends and Ethical Considerations for Mixed Reality Environments Design. Julia Ayache, Marta Bienkiewicz, Kathleen Richardson, Benoît G. Bardy. 154-158 [doi]
- Multimodal prompts effectively elicit robot-initiated social touch interactions. Spatika Sampath Gujran, Merel M. Jung. 159-163 [doi]
- A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents. Nada Alalyani, Nikhil Krishnaswamy. 164-173 [doi]
- Co-Speech Gesture Generation via Audio and Text Feature Engineering. Geunmo Kim, Jaewoong Yoo, Hyedong Jung. 174-178 [doi]
- DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models. Weiyu Zhao, Liangxiao Hu, Shengping Zhang. 179-185 [doi]
- Discrete Diffusion for Co-Speech Gesture Synthesis. Ankur Chemburkar, Shuhong Lu, Andrew Feng. 186-192 [doi]
- Gesture Generation with Diffusion Models Aided by Speech Activity Information. Rodolfo L. Tonoli, Leonardo B. de M. M. Marques, Lucas H. Ueda, Paula Dornhofer Paro Costa. 193-199 [doi]
- Look What I Made It Do - The ModelIT Method for Manually Modeling Nonverbal Behavior of Socially Interactive Agents. Anna Lea Reinwarth, Tanja Schneeberger, Fabrizio Nunnari, Patrick Gebhard, Uwe Altmann, Janet Wessler. 200-204 [doi]
- MultiFacet: A Multi-Tasking Framework for Speech-to-Sign Language Generation. Mounika Kanakanti, Shantanu Singh, Manish Shrivastava. 205-213 [doi]
- The KCL-SAIR team's entry to the GENEA Challenge 2023 Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker. Viktor Schmuck, Nguyen Tan Viet Tuyen, Oya Çeliktutan. 214-219 [doi]
- The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation. Gwantae Kim, Yuanming Li, Hanseok Ko. 220-227 [doi]
- Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent. Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache. 228-237 [doi]
- Affective gaming using adaptive speed controlled by biofeedback. Yann Frachi, Guillaume Chanel, Mathieu Barthet. 238-246 [doi]
- Art creation as an emergent multimodal journey in Artificial Intelligence latent space. Steve DiPaola, Suk Kyoung Choi. 247-253 [doi]
- Emotions and Gambling: Towards a Computational Model of Gambling Experience. Vasileios Tsampallas, Laura Renshaw-Vuillier, Fred Charles, Theodoros Kostoulas. 254-258 [doi]
- Design of Generative Multimodal AI Agents to Enable Persons with Learning Disability. Rajagopal A., Nirmala V., Immanuel Johnraja Jebadurai, Arun Muthuraj Vedamanickam, Prajakta Uthaya Kumar. 259-271 [doi]
- Using Implicit Measures to Assess User Experience in Children: A Case Study on the Application of the Implicit Association Test (IAT). Eleonora Aida Beccaluva, Marta Curreri, Giulia Da Lisca, Pietro Crovari. 272-281 [doi]
- A Reading Comprehension Interface for Students with Learning Disorders. Martina Galletti, Eleonora Pasqua, Francesca Bianchi, Manuela Calanca, Francesca Padovani, Daniele Nardi, Donatella Tomaiuoli. 282-287 [doi]
- Embodied edutainment experience in a museum: discovering glass-blowing gestures. Alina Glushkova, Dimitrios Makrygiannis, Sotirios Manitsaris. 288-291 [doi]
- Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability. Taichi Higasa, Keitaro Tanaka, Qi Feng, Shigeo Morishima. 292-296 [doi]
- The iReCheck project: using tablets and robots for personalised handwriting practice. Daniel C. Tozadore, Soizic Gauthier, Barbara Bruno, Chenyang Wang, Jianling Zou, Lise Aubin, Dominique Archambault, Mohamed Chetouani, Pierre Dillenbourg, David Cohen, Salvatore Maria Anzalone. 297-301 [doi]
- The TouchBox MK3: An Open-Source Device for Finger-Based Interaction with Advanced Auditory and Vibrotactile Feedback. Stefano Papetti, Eric Larrieux, Martin Fröhlich. 302-305 [doi]
- Toward a Tool Against Stereotype Threat in Math: Children's Perceptions of Virtual Role Models. Marjorie Armando, Isabelle Régner, Magalie Ochs. 306-310 [doi]
- A multi-task, multi-modal approach for predicting categorical and dimensional emotions. Alex-Razvan Ispas, Théo Deschamps-Berger, Laurence Devillers. 311-317 [doi]
- Combining Artificial Intelligence, Bio-Sensing and Multimodal Control for Bio-Responsive Interactives. Steve DiPaola, Meehae Song. 318-322 [doi]
- GraphITTI: Attributed Graph-based Dominance Ranking in Social Interaction Videos. Garima Sharma, Shreya Ghosh, Abhinav Dhall, Munawar Hayat, Jianfei Cai, Tom Gedeon. 323-329 [doi]
- Guidelines for designing and building an automated multimodal textual annotation system. Joshua Y. Kim, Kalina Yacef. 330-336 [doi]
- Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations. Théo Deschamps-Berger, Lori Lamel, Laurence Devillers. 337-343 [doi]
- SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal. Auriane Boudin, Roxane Bertrand, Stéphane Rauzy, Matthis Houlès, Thierry Legou, Magalie Ochs, Philippe Blache. 344-352 [doi]
- Emotion Expression Estimates to Measure and Improve Multimodal Social-Affective Interactions. Jeffrey A. Brooks, Vineet Tiruvadi, Alice Baird, Panagiotis Tzirakis, HaoQi Li, Chris Gagne, Moses Oh, Alan Cowen. 353-358 [doi]
- Engaging with an embodied conversational agent in a computerized cognitive training: an acceptability study with the elderly. Joan Fruitet, Mélodie Fouillen, Valentine Facque, Hanna Chainay, Stéphanie De Chalvron, Franck Tarpin-Bernard. 359-362 [doi]
- Investigating the Impact of a Virtual Audience's Gender and Attitudes on a Human Speaker. Marion Ristorcelli, Emma Gallego, Kévin Nguy, Jean-Marie Pergandi, Rémy Casanova, Magalie Ochs. 363-367 [doi]
- Towards Effective Automatic Evaluation of Generated Reflections for Motivational Interviewing. Zixiu Wu, Rim Helaoui, Diego Reforgiato Recupero, Daniele Riboni. 368-373 [doi]
- Automated Detection of Joint Attention and Mutual Gaze in Free Play Parent-Child Interactions. Peitong Li, Hui Lu, Ronald W. Poppe, Albert Ali Salah. 374-382 [doi]
- Automatic Detection of Gaze and Smile in Children's Video Calls. Dhia-Elhak Goumri, Thomas Janssoone, Leonor Becerra-Bonache, Abdellah Fourtassi. 383-388 [doi]
- Composite AI for Behavior Analysis in Social Interactions. Bruno Carlos Dos Santos Melício, Linyun Xiang, Emily Dillon, Latha Soorya, Mohamed Chetouani, Andras Sarkany, Peter Kun, Kristian Fenech, András Lörincz. 389-397 [doi]
- Exploring the Potential of Multimodal Emotion Recognition for Hearing-Impaired Children Using Physiological Signals and Facial Expressions. Seyma Takir, Elif Toprak, Pinar Uluer, Duygun Erol Barkana, Hatice Kose. 398-405 [doi]
- Speech Features of Children with Mild Intellectual Disabilities. Olga V. Frolova, Aleksandr Nikolaev, Platon Grave, Elena E. Lyakso. 406-413 [doi]
- The AI4Autism Project: A Multimodal and Interdisciplinary Approach to Autism Diagnosis and Stratification. Samy Tafasca, Anshul Gupta, Nada Kojovic, Mirko Gelsomini, Thomas Maillart, Michela Papandrea, Marie Schaer, Jean-Marc Odobez. 414-425 [doi]
- Towards early prediction of neurodevelopmental disorders: Computational model for Face Touch and Self-adaptors in Infants. Bruno Tafur, Staci Weiss, Marwa Mahmoud. 426-434 [doi]