- Multimodal user interfaces: who's the user? Anil K. Jain. 1 [doi]
- New techniques for evaluating innovative interfaces with eye tracking. Sandra P. Marshall. 2 [doi]
- Crossmodal attention and multisensory integration: implications for multimodal interface design. Charles Spence. 3 [doi]
- A system for fast, full-text entry for small electronic devices. Saied Bozorgui-Nesbat. 4-11 [doi]
- Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality. Edward C. Kaiser, Alex Olwal, David McGee, Hrvoje Benko, Andrea Corradini, Xiaoguang Li, Philip R. Cohen, Steven Feiner. 12-19 [doi]
- Learning and reasoning about interruption. Eric Horvitz, Johnson Apacible. 20-27 [doi]
- Providing the basis for human-robot-interaction: a multi-modal attention system for a mobile robot. Sebastian Lang, Marcus Kleinehagenbrock, Sascha Hohenner, Jannik Fritsch, Gernot A. Fink, Gerhard Sagerer. 28-35 [doi]
- Selective perception policies for guiding sensing and computation in multimodal systems: a comparative analysis. Nuria Oliver, Eric Horvitz. 36-43 [doi]
- Toward a theory of organized multimodal integration patterns during human-computer interaction. Sharon L. Oviatt, Rachel Coulston, Stefanie Tomko, Benfang Xiao, Rebecca Lunsford, R. Matthews Wesson, Lesley Carmichael. 44-51 [doi]
- TorqueBAR: an ungrounded haptic feedback device. Colin Swindells, Alex Unden, Tao Sang. 52-59 [doi]
- Towards tangibility in gameplay: building a tangible affective interface for a computer game. Ana Paiva, Rui Prada, Ricardo Chaves, Marco Vala, Adrian Bullock, Gerd Andersson, Kristina Höök. 60-67 [doi]
- Multimodal biometrics: issues in design and testing. Robert Snelick, Mike Indovina, James Yen, Alan Mink. 68-72 [doi]
- Sensitivity to haptic-audio asynchrony. Bernard D. Adelstein, Durand R. Begault, Mark R. Anderson, Elizabeth M. Wenzel. 73-76 [doi]
- A multi-modal approach for determining speaker location and focus. Michael Siracusa, Louis-Philippe Morency, Kevin Wilson, John W. Fisher III, Trevor Darrell. 77-80 [doi]
- Distributed and local sensing techniques for face-to-face collaboration. Ken Hinckley. 81-84 [doi]
- Georgia tech gesture toolkit: supporting experiments in gesture recognition. Tracy L. Westeyn, Helene Brashear, Amin Atrash, Thad Starner. 85-92 [doi]
- Architecture and implementation of multimodal plug and play. Christian Elting, Stefan Rapp, Gregor Möhler, Michael Strube. 93-100 [doi]
- SmartKom: adaptive and flexible multimodal access to multiple applications. Norbert Reithinger, Jan Alexandersson, Tilman Becker, Anselm Blocher, Ralf Engel, Markus Löckelt, Jochen Müller, Norbert Pfleger, Peter Poller, Michael Streit, Valentin Tschernomas. 101-108 [doi]
- A framework for rapid development of multimodal interfaces. Frans Flippo, Allan Meng Krebs, Ivan Marsic. 109-116 [doi]
- Capturing user tests in a multimodal, multidevice informal prototyping tool. Anoop K. Sinha, James A. Landay. 117-124 [doi]
- Large vocabulary sign language recognition based on hierarchical decision trees. Gaolin Fang, Wen Gao, Debin Zhao. 125-131 [doi]
- Hand motion gestural oscillations and multimodal discourse. Yingen Xiong, Francis K. H. Quek, David McNeill. 132-139 [doi]
- Pointing gesture recognition based on 3D-tracking of face, hands and head orientation. Kai Nickel, Rainer Stiefelhagen. 140-146 [doi]
- Untethered gesture acquisition and recognition for a multimodal conversational system. Teresa Ko, David Demirdjian, Trevor Darrell. 147-150 [doi]
- Where is "it"? Event synchronization in gaze-speech input systems. Manpreet Kaur, Marilyn Tremaine, Ning Huang, Joseph Wilder, Zoran Gacovski, Frans Flippo, Chandra Sekhar Mantravadi. 151-158 [doi]
- Eyetracking in cognitive state detection for HCI. Darrell S. Rudmann, George W. McConkie, Xianjun Sam Zheng. 159-163 [doi]
- A multimodal learning interface for grounding spoken language in sensory perceptions. Chen Yu, Dana H. Ballard. 164-171 [doi]
- A computer-animated tutor for spoken and written language learning. Dominic W. Massaro. 172-175 [doi]
- Augmenting user interfaces with adaptive speech commands. Peter Gorniak, Deb Roy. 176-179 [doi]
- Combining speech and haptics for intuitive and efficient navigation through image databases. Thomas Käster, Michael Pfeiffer, Christian Bauckhage. 180-187 [doi]
- Interactive skills using active gaze tracking. Rowel Atienza, Alexander Zelinsky. 188-195 [doi]
- Error recovery in a blended style eye gaze and speech interface. Yeow Kee Tan, Nasser Sherkat, Tony Allen. 196-202 [doi]
- Using an autonomous cube for basic navigation and input. Kristof Van Laerhoven, Nicolas Villar, Albrecht Schmidt, Gerd Kortuem, Hans-Werner Gellersen. 203-210 [doi]
- GWindows: robust stereo vision for gesture-based control of windows. Andrew Wilson, Nuria Oliver. 211-218 [doi]
- A visually grounded natural language interface for reference to spatial scenes. Peter Gorniak, Deb Roy. 219-226 [doi]
- Perceptual user interfaces using vision-based eye tracking. Ravikrishna Ruddarraju, Antonio Haro, Kris Nagel, Quan T. Tran, Irfan A. Essa, Gregory D. Abowd, Elizabeth D. Mynatt. 227-233 [doi]
- Sketching informal presentations. Yang Li, James A. Landay, Zhiwei Guan, Xiangshi Ren, Guozhong Dai. 234-241 [doi]
- Gestural communication over video stream: supporting multimodal interaction for remote collaborative physical tasks. Jiazhi Ou, Susan R. Fussell, Xilin Chen, Leslie D. Setlock, Jie Yang. 242-249 [doi]
- The role of spoken feedback in experiencing multimodal interfaces as human-like. Pernilla Qvarfordt, Arne Jönsson, Nils Dahlbäck. 250-257 [doi]
- Real time facial expression recognition in video using support vector machines. Philipp Michel, Rana El Kaliouby. 258-264 [doi]
- Modeling multimodal integration patterns and performance in seniors: toward adaptive processing of individual differences. Benfang Xiao, Rebecca Lunsford, Rachel Coulston, R. Matthews Wesson, Sharon L. Oviatt. 265-272 [doi]
- Auditory, graphical and haptic contact cues for a reach, grasp, and place task in an augmented environment. Mihaela A. Zahariev, Christine L. MacKenzie. 273-276 [doi]
- Mouthbrush: drawing and painting by hand and mouth. Chi-Ho Chan, Michael J. Lyons, Nobuji Tetsutani. 277-280 [doi]
- XISL: a language for describing multimodal interaction scenarios. Kouichi Katsurada, Yusaku Nakamura, Hirobumi Yamada, Tsuneo Nitta. 281-284 [doi]
- IRYS: a visualization tool for temporal analysis of multimodal interaction. Daniel Bauer, James D. Hollan. 285-288 [doi]
- Towards robust person recognition on handheld devices using face and speaker identification technologies. Timothy J. Hazen, Eugene Weinstein, Alex Park. 289-292 [doi]
- Algorithms for controlling cooperation between output modalities in 2D embodied conversational agents. Sarkis Abrilian, Jean-Claude Martin, Stéphanie Buisine. 293-296 [doi]
- Towards an attentive robotic dialog partner. Torsten Wilhelm, Hans-Joachim Böhme, Horst-Michael Gross. 297-300 [doi]
- Demo: a multi-modal training environment for surgeons. Shahram Payandeh, John Dill, Graham Wilson, Hui Zhang, Lilong Shi, Alan J. Lomax, Christine L. MacKenzie. 301-302 [doi]
- Demo: playing FantasyA with SenToy. Ana Paiva, Rui Prada, Ricardo Chaves, Marco Vala, Adrian Bullock, Gerd Andersson, Kristina Höök. 303-304 [doi]