Journal: J. Multimodal User Interfaces

Volume 11, Issue 4

301 -- 314  Radu-Daniel Vatavu. Characterizing gesture knowledge transfer across multiple contexts of use
315 -- 325  Youngwon R. Kim, Euijai Ahn, Gerard Jounghyun Kim. Evaluation of hand-foot coordinated quadruped interaction for mobile applications
327 -- 340  Hernán F. García, Mauricio A. Álvarez, Álvaro Á. Orozco. Dynamic facial landmarking selection for emotion recognition using Gaussian processes

Volume 11, Issue 3

241 -- 250  Hansol Kim, Kun Ha Suh, Eui Chul Lee. Multi-modal user interface combining eye tracking and hand gesture recognition
251 -- 265  Roman Hak, Tomás Zeman. Consistent categorization of multimodal integration patterns during human-computer interaction
267 -- 276  S. Devadethan, Geevarghese Titus. An ICA based head movement classification system using video signals
277 -- 287  Justin Mathew, Stéphane Huot, Brian F. G. Katz. Survey and implications for the design of new 3D audio production and authoring tools
289 -- 299  Jaedong Lee, Changhyeon Lee, Gerard Jounghyun Kim. Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions

Volume 11, Issue 2

115 -- 131  Benjamin Weiss, Ina Wechsung, Stefan Hillmann, Sebastian Möller. Multimodal HCI: exploratory studies on effects of first impression and single modality ratings in retrospective evaluation
133 -- 148  Youngsun Kim, Jaedong Lee, Gerard Jounghyun Kim. Design and application of 2D illusory vibrotactile feedback for hand-held tablets
149 -- 172  Alexy Bhowmick, Shyamanta M. Hazarika. An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends
173 -- 184  Paola Salomoni, Catia Prandi, Marco Roccetti, Lorenzo Casanova, Luca Marchetti, Gustavo Marfia. Diegetic user interfaces for virtual environments with HMDs: a user experience study with Oculus Rift
185 -- 196  Yuya Chiba, Takashi Nose, Akinori Ito. Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt
197 -- 210  Jérémy Lacoche, Thierry Duval, Bruno Arnaldi, Eric Maisel, Jérôme Royan. Providing plasticity and redistribution for 3D user interfaces using the D3PART model
211 -- 225  Julian Abich IV, Daniel J. Barber. The impact of human-robot multimodal communication on mental workload, usability preference, and expectations of robot behavior
227 -- 239  Sunil Kumar, M. K. Bhuyan, Biplab Ketan Chakraborty. Extraction of texture and geometrical features from informative facial regions for sign language recognition

Volume 11, Issue 1

1 -- 7  Xiao-Li Guo, Ting-Ting Yang. Gesture recognition based on HMM-FNN model using a Kinect
9 -- 23  Cristian A. Torres-Valencia, Mauricio A. Álvarez, Álvaro Orozco-Gutiérrez. SVM-based feature selection methods for emotion recognition from multimodal data
25 -- 38  Tim Vets, Luc Nijs, Micheline Lesaffre, Bart Moens, Federica Bressan, Pieter Colpaert, Peter Lambert, Rik Van de Walle, Marc Leman. Gamified music improvisation with BilliArT: a multimodal installation with balls
39 -- 55  Alan Del Piccolo, Davide Rocchesso. Non-speech voice for sonic interaction: a catalogue
57 -- 65  Marine Taffou, Jan Ondrej, Carol O'Sullivan, Olivier Warusfel, Isabelle Viaud-Delmon. Judging crowds' size by ear and by eye in virtual reality
67 -- 80  Ayoung Hong, Dong Gun Lee, Heinrich H. Bülthoff, Hyoung Il Son. Multimodal feedback for teleoperation of multiple mobile robots in an outdoor environment
81 -- 96  Merel M. Jung, Mannes Poel, Ronald Poppe, Dirk K. J. Heylen. Automatic recognition of touch gestures in the corpus of social touch
97 -- 111  Thi Thuong Huyen Nguyen, Charles Pontonnier, Simon Hilt, Thierry Duval, Georges Dumont. VR-based operating modes and metaphors for collaborative ergonomic design of industrial workstations
113 -- 114  Gérard Bailly. Critical review of the book "Gaze in Human-Robot Communication"