115--131 | Benjamin Weiss, Ina Wechsung, Stefan Hillmann, Sebastian Möller. Multimodal HCI: exploratory studies on effects of first impression and single modality ratings in retrospective evaluation |
133--148 | Youngsun Kim, Jaedong Lee, Gerard Jounghyun Kim. Design and application of 2D illusory vibrotactile feedback for hand-held tablets |
149--172 | Alexy Bhowmick, Shyamanta M. Hazarika. An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends |
173--184 | Paola Salomoni, Catia Prandi, Marco Roccetti, Lorenzo Casanova, Luca Marchetti, Gustavo Marfia. Diegetic user interfaces for virtual environments with HMDs: a user experience study with oculus rift |
185--196 | Yuya Chiba, Takashi Nose, Akinori Ito. Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt |
197--210 | Jérémy Lacoche, Thierry Duval, Bruno Arnaldi, Eric Maisel, Jérôme Royan. Providing plasticity and redistribution for 3D user interfaces using the D3PART model |
211--225 | Julian Abich IV, Daniel J. Barber. The impact of human-robot multimodal communication on mental workload, usability preference, and expectations of robot behavior |
227--239 | Sunil Kumar, M. K. Bhuyan, Biplab Ketan Chakraborty. Extraction of texture and geometrical features from informative facial regions for sign language recognition |