Journal: J. Multimodal User Interfaces

Volume 14, Issue 1

1 -- 15: Vincenzo Lussu, Radoslaw Niewiadomski, Gualtiero Volpe, Antonio Camurri. The role of respiration audio in multimodal analysis of movement qualities
17 -- 23: Wei Wei, QingXuan Jia, Yongli Feng, Gang Chen, Ming Chu. Multi-modal facial expression feature based on deep-neural networks
25 -- 48: David Rudi, Peter Kiefer, Ioannis Giannopoulos, Martin Raubal. Gaze-based interactions in the cockpit of the future: a survey
49 -- 59: Ahmed Alsswey, Hosam Al-Samarraie. Elderly users' acceptance of mHealth user interface (UI) design-based culture: the moderator role of age
61 -- 72: Mriganka Biswas, Marta Romeo, Angelo Cangelosi, Ray Jones. Are older people any different from younger people in the way they want to interact with robots? Scenario based survey
73 -- 82: Hiroki Tanaka, Hidemi Iwasaka, Hideki Negoro, Satoshi Nakamura. Analysis of conversational listening skills toward agent-based social skills training
83 -- 100: Justin Mathew, Stéphane Huot, Brian F. G. Katz. Comparison of spatial and temporal interaction techniques for 3D audio trajectory authoring
101 -- 121: Gowdham Prabhakar, Aparna Ramakrishnan, Modiksha Madan, L. R. D. Murthy, Vinay Krishna Sharma, Sachin Deshmukh, Pradipta Biswas. Interactive gaze and finger controlled HUD for cars
123 -- 137: Hayoung Jeong, Taeho Kang, Jiwon Choi, Jong Kim. A comparative assessment of Wi-Fi and acoustic signal-based HCI methods on the practicality