- Juri Yoneyama, Yuichiro Fujimoto, Kosuke Okazaki, Taishi Sawabe, Masayuki Kanbara, Hirokazu Kato. Augmented conversations: AR face filters for facilitating comfortable in-person interactions. J. Multimodal User Interfaces, 19(1):57-74, March 2025.
- Dimitra Anastasiou, Valérie Maquil. Pointing gestures accelerate collaborative problem-solving on tangible user interfaces. J. Multimodal User Interfaces, 19(1):75-92, March 2025.
- Anthony Basille, Élise Lavoué, Audrey Serna. Impact of communication modalities on social presence and regulation processes in a collaborative game. J. Multimodal User Interfaces, 19(1):101-118, March 2025.
- Natalia Karhu, Jussi Rantala, Ahmed Farooq, Antti Sand, Kyösti Pennanen, Jenni Lappi, Mohit Nayak, Nesli Sözer, Roope Raisamo. The effects of haptic, visual and olfactory augmentations on food consumed while wearing an extended reality headset. J. Multimodal User Interfaces, 19(1):37-55, March 2025.
- Wanjoo Park, Muhammad Hassan Jamil, Mohamad A. Eid. Vibration feedback reduces perceived difficulty of virtualized fine motor task. J. Multimodal User Interfaces, 19(1):93-99, March 2025.
- Carla Dei, Matteo Meregalli Falerni, Turgut Cilsal, Davide Felice Redaelli, Matteo Lavit Nicora, Mattia Chiappini, Fabio Alexander Storm, Matteo Malosio. Design and testing of (A)MICO: a multimodal feedback system to facilitate the interaction between cobot and human operator. J. Multimodal User Interfaces, 19(1):21-36, March 2025.
- Alfarabi Imashev, Nurziya Oralbayeva, Gulmira Baizhanova, Anara Sandygulova. Assessment of comparative evaluation techniques for signing agents: a study with deaf adults. J. Multimodal User Interfaces, 19(1):1-19, March 2025.
- Nicolas Leins, Jana Gonnermann-Müller, Malte Teichmann. Correction: Comparing head-mounted and handheld augmented reality for guided assembly. J. Multimodal User Interfaces, 18(4):329, December 2024.
- Zahra J. Muhsin, Rami Qahwaji, Faruque Ghanchi, Majid A. Al-Taee. Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects. J. Multimodal User Interfaces, 18(1):135-156, March 2024.
- Sophia C. Steinhaeusser, Albin Zehe, Peggy Schnetter, Andreas Hotho, Birgit Lugrin. Towards the development of an automated robotic storyteller: comparing approaches for emotional story annotation for non-verbal expression via body language. J. Multimodal User Interfaces, 18(4):1-23, December 2024.