pp. 1-15: Vincenzo Lussu, Radoslaw Niewiadomski, Gualtiero Volpe, Antonio Camurri. The role of respiration audio in multimodal analysis of movement qualities
pp. 17-23: Wei Wei, QingXuan Jia, Yongli Feng, Gang Chen, Ming Chu. Multi-modal facial expression feature based on deep-neural networks
pp. 25-48: David Rudi, Peter Kiefer, Ioannis Giannopoulos, Martin Raubal. Gaze-based interactions in the cockpit of the future: a survey
pp. 49-59: Ahmed Alsswey, Hosam Al-Samarraie. Elderly users' acceptance of mHealth user interface (UI) design-based culture: the moderator role of age
pp. 61-72: Mriganka Biswas, Marta Romeo, Angelo Cangelosi, Ray Jones. Are older people any different from younger people in the way they want to interact with robots? A scenario-based survey
pp. 73-82: Hiroki Tanaka, Hidemi Iwasaka, Hideki Negoro, Satoshi Nakamura. Analysis of conversational listening skills toward agent-based social skills training
pp. 83-100: Justin Mathew, Stéphane Huot, Brian F. G. Katz. Comparison of spatial and temporal interaction techniques for 3D audio trajectory authoring
pp. 101-121: Gowdham Prabhakar, Aparna Ramakrishnan, Modiksha Madan, L. R. D. Murthy, Vinay Krishna Sharma, Sachin Deshmukh, Pradipta Biswas. Interactive gaze and finger controlled HUD for cars
pp. 123-137: Hayoung Jeong, Taeho Kang, Jiwon Choi, Jong Kim. A comparative assessment of Wi-Fi and acoustic signal-based HCI methods on the practicality