- Development of a High Definition Haptic Rendering for Stability and Fidelity. Katsuhito Akahane, Takeo Hamada, Takehiko Yamaguchi, Makoto Sato. 3-12 [doi]
- Designing a Better Morning: A Study on Large Scale Touch Interface Design. Onur Asan, Mark Omernick, Dain Peer, Enid N. H. Montague. 13-22 [doi]
- Experimental Evaluations of Touch Interaction Considering Automotive Requirements. Andreas Haslbeck, Severina Popova, Michael Krause, Katrina Pecot, Jürgen Mayer, Klaus Bengler. 23-32 [doi]
- More than Speed? An Empirical Study of Touchscreens and Body Awareness on an Object Manipulation Task. Rachelle Kristof Hippler, Dale S. Klopfer, Laura M. Leventhal, G. Michael Poor, Brandi A. Klein, Samuel D. Jaffee. 33-42 [doi]
- TiMBA - Tangible User Interface for Model Building and Analysis. Chih-Pin Hsiao, Brian R. Johnson. 43-52 [doi]
- Musical Skin: A Dynamic Interface for Musical Performance. Heng Jiang, Teng-Wen Chang, Cha-Lin Liu. 53-61 [doi]
- Analyzing User Behavior within a Haptic System. Steven L. Johnson, Yueqing Li, Chang Soo Nam, Takehiko Yamaguchi. 62-70 [doi]
- Usability Testing of the Interaction of Novices with a Multi-touch Table in Semi Public Space. Markus Jokisch, Thomas Bartoschek, Angela Schwering. 71-80 [doi]
- Niboshi for Slate Devices: A Japanese Input Method Using Multi-touch for Slate Devices. Gimpei Kimioka, Buntarou Shizuki, Jiro Tanaka. 81-89 [doi]
- An Investigation on Requirements for Co-located Group-Work Using Multitouch-, Pen-Based- and Tangible-Interaction. Karsten Nebe, Tobias Müller, Florian Klompmaker. 90-99 [doi]
- Exploiting New Interaction Techniques for Disaster Control Management Using Multitouch-, Tangible- and Pen-Based-Interaction. Karsten Nebe, Florian Klompmaker, Helge Jung, Holger Fischer. 100-109 [doi]
- Saving and Restoring Mechanisms for Tangible User Interfaces through Tangible Active Objects. Eckard Riedenklau, Thomas Hermann, Helge Ritter. 110-118 [doi]
- Needle Insertion Simulator with Haptic Feedback. SeungJae Shin, Wanjoo Park, Hyunchul Cho, Se Hyung Park, Laehyun Kim. 119-124 [doi]
- Measurement of Driver's Distraction for an Early Prove of Concepts in Automotive Industry at the Example of the Development of a Haptic Touchpad. Roland Spies, Andreas Blattner, Christian Lange, Martin Wohlfarter, Klaus Bengler, Werner Hamberger. 125-132 [doi]
- A Tabletop-Based Real-World-Oriented Interface. Hiroshi Takeda, Hidetoshi Miyao, Minoru Maruyama, David Asano. 133-139 [doi]
- What You Feel Is What I Do: A Study of Dynamic Haptic Interaction in Distributed Collaborative Virtual Environment. Sehat Ullah, Xianging Liu, Samir Otmane, Paul Richard, Malik Mallem. 140-147 [doi]
- A Framework Interweaving Tangible Objects, Surfaces and Spaces. Andy Wu, Jayraj Jog, Sam Mendenhall, Ali Mazalek. 148-157 [doi]
- The Effect of Haptic Cues on Working Memory in 3D Menu Selection. Takehiko Yamaguchi, Damien Chamaret, Paul Richard. 158-166 [doi]
- Eye-gaze Detection by Image Analysis under Natural Light. Kiyohiko Abe, Shoichi Ohi, Minoru Ohyama. 176-184 [doi]
- Multi-user Pointing and Gesture Interaction for Large Screen Using Infrared Emitters and Accelerometers. Leonardo Angelini, Maurizio Caon, Stefano Carrino, Omar Abou Khaled, Elena Mugellini. 185-193 [doi]
- Gesture Identification Based on Zone Entry and Axis Crossing. Ryosuke Aoki, Yutaka Karatsu, Masayuki Ihara, Atsuhiko Maeda, Minoru Kobayashi, Shingo Kagami. 194-203 [doi]
- Attentive User Interface for Interaction within Virtual Reality Environments Based on Gaze Analysis. Florin Barbuceanu, Csaba Antonya, Mihai Duguleana, Zoltán Rusak. 204-213 [doi]
- A Low-Cost Natural User Interaction Based on a Camera Hand-Gestures Recognizer. Mohamed-Ikbel Boulabiar, Thomas Burger, Franck Poirier, Gilles Coppin. 214-221 [doi]
- Head-Computer Interface: A Multimodal Approach to Navigate through Real and Virtual Worlds. Francesco Carrino, Julien Tscherrig, Elena Mugellini, Omar Abou Khaled, Rolf Ingold. 222-230 [doi]
- 3D-Position Estimation for Hand Gesture Interface Using a Single Camera. Seung-Hwan Choi, Ji-Hyeong Han, Jong-Hwan Kim. 231-237 [doi]
- Hand Gesture for Taking Self Portrait. Shaowei Chu, Jiro Tanaka. 238-247 [doi]
- Hidden-Markov-Model-Based Hand Gesture Recognition Techniques Used for a Human-Robot Interaction System. Chin-Shyurng Fahn, Keng-Yu Chu. 248-258 [doi]
- Manual and Accelerometer Analysis of Head Nodding Patterns in Goal-oriented Dialogues. Masashi Inoue, Toshio Irino, Nobuhiro Furuyama, Ryoko Hanada, Takako Ichinomiya, Hiroyasu Massaki. 259-267 [doi]
- Facial Expression Recognition Using AAMICPF. Jun-Sung Lee, Chi-Min Oh, Chil-Woo Lee. 268-274 [doi]
- Verification of Two Models of Ballistic Movements. Jui-Feng Lin, Colin G. Drury. 275-284 [doi]
- Gesture Based Automating Household Appliances. Wei Lun Ng, Ng Chee Kyun, Nor Kamariah Noordin, Borhanuddin Mohd Ali. 285-293 [doi]
- Upper Body Gesture Recognition for Human-Robot Interaction. Chi-Min Oh, Md. Zahidul Islam, Jun-Sung Lee, Chil-Woo Lee, In-So Kweon. 294-303 [doi]
- Gaze-Directed Hands-Free Interface for Mobile Interaction. Gie-seo Park, Jong-gil Ahn, Gerard Jounghyun Kim. 304-313 [doi]
- Eye-Movement-Based Instantaneous Cognition Model for Non-verbal Smooth Closed Figures. Yuzo Takahashi, Shoko Koshi. 314-322 [doi]
- VOSS - A Voice Operated Suite for the Barbadian Vernacular. David Byer, Colin Depradine. 325-330 [doi]
- New Techniques for Merging Text Versions. Darius Dadgari, Wolfgang Stuerzlinger. 331-340 [doi]
- Modeling the Rhetoric of Human-Computer Interaction. Iris K. Howley, Carolyn Penstein Rosé. 341-350 [doi]
- Recommendation System Based on Interaction with Multiple Agents for Users with Vague Intention. Itaru Kuramoto, Atsushi Yasuda, Mitsuru Minakuchi, Yoshihiro Tsujino. 351-357 [doi]
- A Review of Personality in Voice-Based Man Machine Interaction. Florian Metze, Alan Black, Tim Polzehl. 358-367 [doi]
- Can Indicating Translation Accuracy Encourage People to Rectify Inaccurate Translations? Mai Miyabe, Takashi Yoshino. 368-377 [doi]
- Design of a Face-to-Face Multilingual Communication System for a Handheld Device in the Medical Field. Shun Ozaki, Takuo Matsunobe, Takashi Yoshino, Aguri Shigeno. 378-386 [doi]
- Computer Assistance in Bilingual Task-Oriented Human-Human Dialogues. Sven Schmeier, Matthias Rebel, Renlong Ai. 387-395 [doi]
- Developing and Exploiting a Multilingual Grammar for Human-Computer Interaction. Xian Zhang, Rico Andrich, Dietmar Rösner. 396-405 [doi]
- Dancing Skin: An Interactive Device for Motion. Sheng-Han Chen, Teng-Wen Chang, Sheng-Cheng Shih. 409-416 [doi]
- A Hybrid Brain-Computer Interface for Smart Home Control. Günter Edlinger, Clemens Holzner, Christoph Guger. 417-426 [doi]
- Integrated Context-Aware and Cloud-Based Adaptive Home Screens for Android Phones. Tor-Morten Grønli, Jarle Hansen, Gheorghita Ghinea. 427-435 [doi]
- Evaluation of User Support of a Hemispherical Sub-display with GUI Pointing Functions. Shinichi Ike, Saya Yokoyama, Yuya Yamanishi, Naohisa Matsuuchi, Kazunori Shimamura, Takumi Yamaguchi, Haruya Shiba. 436-445 [doi]
- Uni-model Human System Interface Using sEMG. Srinivasan Jayaraman, Venkatesh Balasubramanian. 446-453 [doi]
- An Assistive Bi-modal User Interface Integrating Multi-channel Speech Recognition and Computer Vision. Alexey Karpov, Andrey Ronzhin, Irina S. Kipyatkova. 454-463 [doi]
- A Method of Multiple Odors Detection and Recognition. Dong-Kyu Kim, Yong-Wan Roh, Kwang-Seok Hong. 464-473 [doi]
- Report on a Preliminary Study Using Breath Control and a Virtual Jogging Scenario as Biofeedback for Resilience Training. Jacquelyn Ford Morie, Eric Chance, J. Galen Buckwalter. 474-480 [doi]
- Low Power Wireless EEG Headset for BCI Applications. Shrishail Patki, Bernard Grundlehner, Toru Nakada, Julien Penders. 481-490 [doi]
- Virtual Mouse: A Low Cost Proximity-Based Gestural Pointing Device. Sheng Kai Tang, Wen Chieh Tseng, Wei Wen Luo, Kuo Chung Chiu, Sheng Ta Lin, Yen Ping Liu. 491-499 [doi]
- Innovative User Interfaces for Wearable Computers in Real Augmented Environment. Yun Zhou, Bertrand David, René Chalon. 500-509 [doi]
- Influence of Prior Knowledge and Embodiment on Human-Agent Interaction. Yugo Hayashi, Victor V. Kryssanov, Kazuhisa Miwa, Hitoshi Ogawa. 513-522 [doi]
- The Effect of Physical Embodiment of an Animal Robot on Affective Prosody Recognition. Myounghoon Jeon, Infantdani A. Rayan. 523-532 [doi]
- Older User-Computer Interaction on the Internet: How Conversational Agents Can Help. Wi-Suk Kwon, Veena Chattaraman, Soo In Shim, Hanan Alnizami, Juan E. Gilbert. 533-536 [doi]
- An Avatar-Based Help System for Web-Portals. Helmut Lang, Christian Mosch, Bastian Boegel, David Michel Benoit, Wolfgang Minker. 537-546 [doi]
- mediRobbi: An Interactive Companion for Pediatric Patients during Hospital Visit. Szu-Chia Lu, Nicole Blackwell, Ellen Yi-Luen Do. 547-556 [doi]
- Design of Shadows on the OHP Metaphor-Based Presentation Interface Which Visualizes a Presenter's Actions. Yuichi Murata, Kazutaka Kurihara, Toshio Mochizuki, Buntarou Shizuki, Jiro Tanaka. 557-564 [doi]
- Web-Based Nonverbal Communication Interface Using 3D Agents with Natural Gestures. Toshiya Naka, Toru Ishida. 565-574 [doi]
- Taking Turns in Flying with a Virtual Wingman. Pim Nauts, Willem A. van Doesburg, Emiel Krahmer, Anita H. M. Cremers. 575-584 [doi]
- A Configuration Method of Visual Media by Using Characters of Audiences for Embodied Sport Cheering. Kentaro Okamoto, Michiya Yamamoto, Tomio Watanabe. 585-592 [doi]
- Introducing Animatronics to HCI: Extending Reality-Based Interaction. G. Michael Poor, Robert J. K. Jacob. 593-602 [doi]
- Development of Embodied Visual Effects Which Expand the Presentation Motion of Emphasis and Indication. Yuya Takao, Michiya Yamamoto, Tomio Watanabe. 603-612 [doi]
- Experimental Study on Appropriate Reality of Agents as a Multi-modal Interface for Human-Computer Interaction. Kaori Tanaka, Tatsunori Matsui, Kazuaki Kojima. 613-622 [doi]