- What is Multimodal? Louis-Philippe Morency. 1 [doi]
- Real Talk, Real Listening, Real Change. Deb Roy. 2 [doi]
- Focus on People: Five Questions from Human-Centered Computing. Daniel Gatica-Perez. 3 [doi]
- The Future of the Body in Tomorrow's Workplace. Justine Cassell. 4 [doi]
- Detecting Change Talk in Motivational Interviewing using Verbal and Facial Information. Yukiko I. Nakano, Eri Hirose, Tatsuya Sakato, Shogo Okada, Jean-Claude Martin. 5-14 [doi]
- Exploratory Study on the Perception of Intelligent Virtual Agents With Non-Native Accents Using Synthetic and Natural Speech in German. David Obremski, Helena Babette Hering, Paula Friedrich, Birgit Lugrin. 15-24 [doi]
- Commensality or Reverie in Eating? Exploring the Solo Dining Experience. Mimi Bocanegra, Mailin Lemke, Roelof Anne Jelle de Vries, Geke D. S. Ludden. 25-35 [doi]
- Structured Multimodal Fusion Network for Referring Image Segmentation. Mingcheng Xue, Yu Liu, Kaiping Xu, Haiyang Zhang, Chengyang Yu. 36-47 [doi]
- Does Audio help in deep Audio-Visual Saliency prediction models? Ritvik Agrawal, Shreyank Jyoti, Rohit Girmaji, Sarath Sivaprasad, Vineet Gandhi. 48-56 [doi]
- A Spatio-temporal Learning for Music Conditioned Dance Generation. Li Zhou, Yan Luo. 57-62 [doi]
- Emotions Matter: Towards Personalizing Human-System Interactions Using a Two-layer Multimodal Approach. Apostolos Kalatzis, Vishnunarayan Girishan Prabhu, Saidur Rahman, Mike P. Wittie, Laura M. Stanley. 63-72 [doi]
- Pose Uncertainty Aware Movement Synchrony Estimation via Spatial-Temporal Graph Transformer. Jicheng Li, Anjana Bhat, Roghayeh Barmaki. 73-82 [doi]
- Generalized Product-of-Experts for Learning Multimodal Representations in Noisy Environments. Abhinav Joshi, Naman Gupta, Jinang Shah, Binod Bhattarai, Ashutosh Modi, Danail Stoyanov. 83-93 [doi]
- Towards creating a conversational memory for long-term meeting support: predicting memorable moments in multi-party conversations through eye-gaze. Maria Tsfasman, Kristian Fenech, Morita Tarvirdians, András Lörincz, Catholijn J. Jonker, Catharine Oertel. 94-104 [doi]
- Keep in Touch: Combining Touch Interaction with Thumb-to-Finger µGestures for People with Visual Impairment. Gauthier Robert Jean Faisandaz, Alix Goguey, Christophe Jouffrais, Laurence Nigay. 105-116 [doi]
- Evaluating Just-In-Time Vibrotactile Feedback for Communication Anxiety. Jason Raether, Ehsanul Haque Nirjhar, Theodora Chaspari. 117-127 [doi]
- Towards using Breathing Features for Multimodal Estimation of Depression Severity. Francisca Pessanha, Heysem Kaya, Alkim Almila Akdag Salah, Albert Ali Salah. 128-138 [doi]
- Text-based Interpretable Depression Severity Modeling via Symptom Predictions. Floris Van Steijn, Gizem Sogancioglu, Heysem Kaya. 139-147 [doi]
- Frisson: Leveraging Metasomatic Interactions for Generating Aesthetic Chills. Abhinandan Jain, Felix Schoeller, Emilie Zhang, Pattie Maes. 148-158 [doi]
- Group Formation in Multi-Robot Human Interaction During Service Scenarios. Xiang Zhi Tan, Elizabeth Jeanne Carter, Prithu Pareek, Aaron Steinfeld. 159-169 [doi]
- Conversation Group Detection With Spatio-Temporal Context. Stephanie Tan, David M. J. Tax, Hayley Hung. 170-180 [doi]
- Unpretty Please: Ostensibly Polite Wakewords Discourage Politeness in both Robot-Directed and Human-Directed Communication. Ruchen Wen, Brandon Barton, Sebastian Fauré, Tom Williams. 181-190 [doi]
- Review of realistic behavior and appearance generation in embodied conversational agents: A comparison between traditional and modern approaches. Kumar Shubham, Anirban Mukherjee, Dinesh Babu Jayagopi. 191-197 [doi]
- The Effects of an Embodied Pedagogical Agent's Synthetic Speech Accent on Learning Outcomes. Tiffany D. Do, Mamtaj Akter, Zubin Choudhary, Roger Azevedo, Ryan P. McMahan. 198-206 [doi]
- Comfortability Recognition from Visual Non-verbal Cues. Maria Elena Lechuga Redondo, Radoslaw Niewiadomski, Rea Francesco, Alessandra Sciutti. 207-216 [doi]
- AffectPro: Towards Constructing Affective Profile Combining Smartphone Typing Interaction and Emotion Self-reporting Pattern. Satchit Hari, Ajay, Sayan Sarcar, Sougata Sen, Surjya Ghosh. 217-223 [doi]
- Evaluating Calibration-free Webcam-based Eye Tracking for Gaze-based User Modeling. Stephen Hutt, Sidney K. D'Mello. 224-235 [doi]
- Exploring the Detection of Spontaneous Recollections during Video-viewing In-the-Wild using Facial Behavior Analysis. Bernd Dudzik, Hayley Hung. 236-246 [doi]
- Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module. Yihe Liu, Ziqi Yuan, Huisheng Mao, Zhiyun Liang, Wanqiuyue Yang, Yuanzhe Qiu, Tie Cheng, Xiaoteng Li, Hua Xu, Kai Gao. 247-258 [doi]
- The Impact of Thermal Cues on Affective Responses to Emotionally Resonant Vibrations. Shaun Alexander Macdonald, Frank E. Pollick, Stephen Anthony Brewster. 259-269 [doi]
- Pull Gestures with Coordinated Graphics on Dual-Screen Devices. Vivian Shen, Chris Harrison. 270-277 [doi]
- All Birds Must Fly: The Experience of Multimodal Hands-free Gaming with Gaze and Nonverbal Voice Synchronization. Ramin Hedeshy, Chandan Kumar, Mike Lauer, Steffen Staab. 278-287 [doi]
- EdgeSelect: Smartwatch Data Interaction with Minimal Screen Occlusion. Ali Neshati, Aaron Salo, Shariff A. M. Faleel, Ziming Li, Hai-Ning Liang, Celine Latulipe, Pourang Irani. 288-298 [doi]
- Two-Step Gaze Guidance. Tiffany C. K. Kwok, Peter Kiefer, Martin Raubal. 299-309 [doi]
- Multi-level Fusion of Multi-modal Semantic Embeddings for Zero Shot Learning. Zhe Kong, Xin Wang, Neng Gao, Yifei Zhang, Yuhan Liu, Chenyang Tu. 310-318 [doi]
- WEDAR: Webcam-based Attention Analysis via Attention Regulator Behavior Recognition with a Novel E-reading Dataset. Yoon Lee, Haoyu Chen, Guoying Zhao, Marcus Specht. 319-328 [doi]
- RGBDGaze: Gaze Tracking on Smartphones with RGB and Depth Data. Riku Arakawa, Mayank Goel, Chris Harrison, Karan Ahuja. 329-336 [doi]
- Cognitive Workload Assessment via Eye Gaze and EEG in an Interactive Multi-Modal Driving Task. Ayca Aygun, Boyang Lyu, Thuan Nguyen, Zachary Haga, Shuchin Aeron, Matthias Scheutz. 337-348 [doi]
- Transformer-Based Physiological Feature Learning for Multimodal Analysis of Self-Reported Sentiment. Shun Katada, Shogo Okada, Kazunori Komatani. 349-358 [doi]
- Investigating the relationship between dialogue and exchange-level impression. Wenqing Wei, Sixia Li, Shogo Okada. 359-367 [doi]
- Is Lip Region-of-Interest Sufficient for Lipreading? Jing-Xuan Zhang, Genshun Wan, Jia Pan. 368-372 [doi]
- A Framework for Video-Text Retrieval with Noisy Supervision. Zahra Vaseqi, Pengnan Fan, James Clark, Martin Levine. 373-383 [doi]
- A cognitive knowledge-based system for hair and makeup recommendation based on facial features classification. Juhyun Lee, Joosun Yum, Marvin Lee, Ji-Hyun Lee. 384-394 [doi]
- Real-Time Multimodal Emotion Recognition in Conversation for Multi-Party Interactions. Sandratra Rasendrasoa, Alexandre Pauchet, Julien Saunier, Sébastien Adam. 395-403 [doi]
- Comparative Analysis of Entity Identification and Classification of Indian Epics. Shreya Sharma, Mukesh Mohania. 404-413 [doi]
- Neural Encoding of Songs is Modulated by Their Enjoyment. Gulshan Sharma, Pankaj Pandey, Ramanathan Subramanian, Krishna Prasad Miyapuram, Abhinav Dhall. 414-419 [doi]
- Multimodal Across Domains Gaze Target Detection. Francesco Tonini, Cigdem Beyan, Elisa Ricci. 420-431 [doi]
- DynaTags: Low-Cost Fiducial Marker Mechanisms. Cassandra Scheirer, Chris Harrison. 432-443 [doi]
- End-to-End Learning and Analysis of Infant Engagement During Guided Play: Prediction and Explainability. Marc Fraile, Christine Fawcett, Joakim Lindblad, Natasa Sladoje, Ginevra Castellano. 444-454 [doi]
- Unimodal vs. Multimodal Prediction of Antenatal Depression from Smartphone-based Survey Data in a Longitudinal Study. Mengyu Zhong, Vera van Zoest, Ayesha Mae Bilal, Fotios Papadopoulos, Ginevra Castellano. 455-467 [doi]
- Identification of Adaptive Driving Style Preference through Implicit Inputs in SAE L2 Vehicles. Zhaobo Zheng, Kumar Akash, Teruhisa Misu, Vidya Krishnamoorthy, Miaomiao Dong, Yuni Lee, Gaojian Huang. 468-475 [doi]
- Continual Learning about Objects in the Wild: An Interactive Approach. Dan Bohus, Sean Andrist, Ashley Feniello, Nick Saw, Eric Horvitz. 476-486 [doi]
- Toward Causal Understanding of Therapist-Client Relationships: A Study of Language Modality and Social Entrainment. Alexandria K. Vail, Jeffrey M. Girard, Lauren M. Bylsma, Jeffrey F. Cohn, Jay Fournier, Holly Swartz, Louis-Philippe Morency. 487-494 [doi]
- Privacy Preserving Personalization for Video Facial Expression Recognition Using Federated Learning. Ali N. Salman, Carlos Busso. 495-503 [doi]
- Improved Word-level Lipreading with Temporal Shrinkage Network and NetVLAD. Heng Yang, Tao Luo, Yakun Zhang, Mingwu Song, Liang Xie, Ye Yan, Erwei Yin. 504-508 [doi]
- Inclusive Multimodal Voice Interaction for Code Navigation. Bharat Paudyal, Chris Creed, Ian Williams, Maite Frutos Pascual. 509-519 [doi]
- POLLY: A Multimodal Cross-Cultural Context-Sensitive Framework to Predict Political Lying from Videos. Chongyang Bai, Maksim Bolonkin, Viney Regunath, V. S. Subrahmanian. 520-530 [doi]
- Supervised Contrastive Learning for Affect Modelling. Kosmas Pinitas, Konstantinos Makantasis, Antonios Liapis, Georgios N. Yannakakis. 531-539 [doi]
- CreativeBot: a Creative Storyteller robot to stimulate creativity in children. Maha Elgarf, Sahba Zojaji, Gabriel Skantze, Christopher E. Peters. 540-548 [doi]
- Towards Commensal Activities Recognition. Radoslaw Niewiadomski, Gabriele De Lucia, Gabriele Grazzi, Maurizio Mancini. 549-557 [doi]
- Influence of Passive Haptic and Auditory Feedback on Presence and Mindfulness in Virtual Reality Environments. Nadine Wagener, Alex Ackermann, Gian-Luca Savino, Bastian Dänekas, Jasmin Niess, Johannes Schöning. 558-569 [doi]
- Age Regression for Human Voices. Martin T. Schorradt, Douglas W. Cunningham. 570-578 [doi]
- Touchless touch with biosignal transfer for online communication. Daria Joanna Hemmerling, Maciej Stroinski, Kamil Kwarciak, Krzysztof Trusiak, Maciej Szymkowski, Weronika Celniak, William Frier, Orestis Georgiou, Mykola Maksymenko. 579-590 [doi]
- GazeScale: Towards General Gaze-Based Interaction in Public Places. Marco Porta, Antonino Caminiti, Piercarlo Dondi. 591-596 [doi]
- Multimodal classification of interruptions in humans' interaction. Liu Yang, Catherine Achard, Catherine Pelachaud. 597-604 [doi]
- X-Norm: Exchanging Normalization Parameters for Bimodal Fusion. Yufeng Yin, Jiashu Xu, Tianxin Zu, Mohammad Soleymani. 605-614 [doi]
- Assessing Multimodal Dynamics in Multi-Party Collaborative Interactions with Multi-Level Vector Autoregression. Robert G. Moulder, Nicholas D. Duran, Sidney K. D'Mello. 615-625 [doi]
- Towards Accessible Sign Language Assessment and Learning. Neha Tarigopula, Sandrine Tornay, Skanda Muralidhar, Mathew Magimai-Doss. 626-631 [doi]
- Personalized Productive Engagement Recognition in Robot-Mediated Collaborative Learning. Vetha Vikashini Chithrra Raghuram, Hanan Salam, Jauwairia Nasir, Barbara Bruno, Oya Çeliktutan. 632-641 [doi]
- A Deep Dive Into Neural Synchrony Evaluation for Audio-visual Translation. Shravan Nayak, Christian Schuler, Debjoy Saha, Timo Baumann. 642-647 [doi]
- Beyond the Blue Sky of Multimodal Interaction: A Centennial Vision of Interplanetary Virtual Spaces in Turn-based Metaverse. Lik Hang Lee, Carlos Bermejo Fernandez, Ahmad Yousef Alhilal, Tristan Braud, Simo Hosio, Esmée Henrieke Anne de Haas, Pan Hui. 648-652 [doi]
- On the Horizon: Interactive and Compositional Deepfakes. Eric Horvitz. 653-661 [doi]
- Decentralized, not Dehumanized in the Metaverse: Bringing Utility to NFTs through Multimodal Interaction. Anqi Wang, Ze Gao, Lik Hang Lee, Tristan Braud, Pan Hui. 662-667 [doi]
- Non-verbal Signals in Oral History Archives. Francisca Pessanha. 668-672 [doi]
- Effective Human-Robot Collaboration via Generalized Robot Error Management Using Natural Human Responses. Maia Stiber. 673-678 [doi]
- Designing Hybrid Intelligence Techniques for Facilitating Collaboration Informed by Social Science. Tiffany Matej Hrkalovic. 679-684 [doi]
- Towards Human-Machine Collaboration: Multimodal Group Potency Estimation. Nicola Corbellini. 685-689 [doi]
- Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces. Amr Gomaa. 690-695 [doi]
- Physiological Sensing for Media Perception & Activity Recognition. Gulshan Sharma. 696-700 [doi]
- Real-time Feedback for Developing Conversation Literacy. Khalil J. Anderson. 701-704 [doi]
- Interdisciplinary Corpus-based Approach for Exploring Multimodal Conversational Feedback. Auriane Boudin. 705-710 [doi]
- Mood-Emotion Interplay: A Computational Perspective. Soujanya Narayana. 711-716 [doi]
- Multimodal Representation Learning For Real-World Applications. Abhinav Joshi. 717-723 [doi]
- Multimodal Representations and Assessments of Emotional Fluctuations of Speakers in Call Centers Conversations. Yajing Feng. 724-729 [doi]
- Sound Scope Pad: Controlling a VR Concert with Natural Movement. Masatoshi Hamanaka. 730-732 [doi]
- MIDriveSafely: Multimodal Interaction for Drive Safely. Denis Ivanko, Alexey M. Kashevnik, Dmitry Ryumin, Andrey Kitenko, Alexandr Axyonov, Igor Lashkov, Alexey Karpov. 733-735 [doi]
- The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation. Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter. 736-747 [doi]
- Hybrid Seq2Seq Architecture for 3D Co-Speech Gesture Generation. Khaled Saleh. 748-752 [doi]
- TransGesture: Autoregressive Gesture Generation with RNN-Transducer. Naoshi Kaneko, Yuna Mitsubayashi, Geng Mu. 753-757 [doi]
- The ReprGesture entry to the GENEA Challenge 2022. Sicheng Yang, Zhiyong Wu, Minglei Li, Mengchen Zhao, Jiuxin Lin, Liyang Chen, Weihong Bao. 758-763 [doi]
- GestureMaster: Graph-based Speech-driven Gesture Generation. Chi Zhou, Tengyue Bian, Kang Chen. 764-770 [doi]
- UEA Digital Humans entry to the GENEA Challenge 2022. Jonathan Windle, David Greenwood, Sarah Taylor. 771-777 [doi]
- Exemplar-based Stylized Gesture Generation from Speech: An Entry to the GENEA Challenge 2022. Saeed Ghorbani, Ylva Ferstl, Marc-André Carbonneau. 778-783 [doi]
- The IVI Lab entry to the GENEA Challenge 2022 - A Tacotron2 Based Method for Co-Speech Gesture Generation With Locality-Constraint Attention Mechanism. Che-Jui Chang, Sen Zhang, Mubbasir Kapadia. 784-789 [doi]
- The DeepMotion entry to the GENEA Challenge 2022. Shuhong Lu, Andrew Feng. 790-796 [doi]
- Multimodal Affect and Aesthetic Experience. Theodoros Kostoulas, Michal Muszynski, Leimin Tian, Edgar Roman-Rangel, Theodora Chaspari, Panos Amelidis. 797-798 [doi]
- GENEA Workshop 2022: The 3rd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents. Pieter Wolfert, Taras Kucherenko, Carla Viegas, Zerrin Yumak, Youngwoo Yoon, Gustav Eje Henter. 799-800 [doi]
- Second International Workshop on Deep Video Understanding. Keith Curtis, George Awad, Shahzad Rajput, Ian Soboroff. 801-802 [doi]
- The 4th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild (MSECP-Wild). Bernd Dudzik, Dennis Küster, David St-Onge, Felix Putze. 803-804 [doi]
- 3rd Workshop on Social Affective Multimodal Interaction for Health (SAMIH). Hiroki Tanaka, Satoshi Nakamura, Kazuhiro Shidara, Jean-Claude Martin, Catherine Pelachaud. 805-806 [doi]
- 3rd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour. Anika van der Klis, Heysem Kaya, Maryam Najafian, Saeid Safavi. 807-809 [doi]