- The puzzle of sensory perception: putting together multisensory information. Marc O. Ernst. 1 [doi]
- Integrating sketch and speech inputs using spatial information. Bee-Wah Lee, Alvin W. Yeo. 2-9 [doi]
- Distributed pointing for multimodal collaboration over sketched diagrams. Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David Demirdjian. 10-17 [doi]
- Contextual recognition of head gestures. Louis-Philippe Morency, Candace L. Sidner, Christopher Lee, Trevor Darrell. 18-24 [doi]
- Combining environmental cues & head gestures to interact with wearable devices. Marc Hanheide, Christian Bauckhage, Gerhard Sagerer. 25-31 [doi]
- Automatic detection of interaction groups. Oliver Brdiczka, Jérôme Maisonnasse, Patrick Reignier. 32-36 [doi]
- Meeting room configuration and multiple camera calibration in meeting analysis. Yingen Xiong, Francis K. H. Quek. 37-44 [doi]
- A multimodal perceptual user interface for video-surveillance environments. Giancarlo Iannizzotto, Carlo Costanzo, Francesco La Rosa, Pietro Lanzafame. 45-52 [doi]
- Inferring body pose using speech content. Sy Bor Wang, David Demirdjian. 53-60 [doi]
- A joint particle filter for audio-visual speaker tracking. Kai Nickel, Tobias Gehrig, Rainer Stiefelhagen, John W. McDonough. 61-68 [doi]
- The connector: facilitating context-aware communication. Maria Danninger, G. Flaherty, Keni Bernardin, Hazim Kemal Ekenel, T. Köhler, Robert Malkin, Rainer Stiefelhagen, Alex Waibel. 69-75 [doi]
- A user interface framework for multimodal VR interactions. Marc Erich Latoschik. 76-83 [doi]
- Multimodal output specification / simulation platform. Cyril Rousseau, Yacine Bellik, Frédéric Vernier. 84-91 [doi]
- Migratory MultiModal interfaces in MultiDevice environments. Silvia Berti, Fabio Paternò. 92-99 [doi]
- Exploring multimodality in the laboratory and the field. Lynne Baillie, Raimund Schatz. 100-107 [doi]
- Understanding the effect of life-like interface agents through users' eye movements. Helmut Prendinger, Chunling Ma, Jin Yingzi, Arturo Nakasone, Mitsuru Ishizuka. 108-115 [doi]
- Analyzing and predicting focus of attention in remote collaborative tasks. Jiazhi Ou, Lui Min Oh, Susan R. Fussell, Tal Blum, Jie Yang. 116-123 [doi]
- Gaze-based selection of standard-size menu items. Oleg Spakov, Darius Miniotas. 124-128 [doi]
- Region extraction of a gaze object using the gaze point and view image sequences. Norimichi Ukita, Tomohisa Ono, Masatsugu Kidode. 129-136 [doi]
- Interactive humanoids and androids as ideal interfaces for humans. Hiroshi Ishiguro. 137 [doi]
- Probabilistic grounding of situated speech using plan recognition and reference resolution. Peter Gorniak, Deb Roy. 138-143 [doi]
- Augmenting conversational dialogue by means of latent semantic googling. Robin Senior, Roel Vertegaal. 144-150 [doi]
- Human-style interaction with a robot for cooperative learning of scene objects. Shuyin Li, Axel Haasch, Britta Wrede, Jannik Fritsch, Gerhard Sagerer. 151-158 [doi]
- A look under the hood: design and development of the first SmartWeb system demonstrator. Norbert Reithinger, Simon Bergweiler, Ralf Engel, Gerd Herzog, Norbert Pfleger, Massimo Romanelli, Daniel Sonntag. 159-166 [doi]
- Audio-visual cues distinguishing self- from system-directed speech in younger and older adults. Rebecca Lunsford, Sharon L. Oviatt, Rachel Coulston. 167-174 [doi]
- Identifying the intended addressee in mixed human-human and human-computer interaction from non-verbal features. Koen van Turnhout, Jacques M. B. Terken, Ilse Bakx, Berry Eggen. 175-182 [doi]
- Multimodal multispeaker probabilistic tracking in meetings. Daniel Gatica-Perez, Guillaume Lathoud, Jean-Marc Odobez, Iain McCowan. 183-190 [doi]
- A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances. Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato. 191-198 [doi]
- Socially aware computation and communication. Alex Pentland. 199 [doi]
- Synthetic characters as multichannel interfaces. Elena Not, Koray Balci, Fabio Pianesi, Massimo Zancanaro. 200-207 [doi]
- XfaceEd: authoring tool for embodied conversational agents. Koray Balci. 208-213 [doi]
- A first evaluation study of a database of kinetic facial expressions (DaFEx). Alberto Battocchi, Fabio Pianesi, Dina Goren-Bar. 214-221 [doi]
- Hapticat: exploration of affective touch. Steve Yohanan, Mavis Chan, Jeremy Hopkins, Haibo Sun, Karon E. MacLean. 222-229 [doi]
- Using observations of real designers at work to inform the development of a novel haptic modeling system. Umberto Giraudo, Monica Bordegoni. 230-235 [doi]
- A comparison of two methods of scaling on form perception via a haptic interface. Mounia Ziat, Olivier Gapenne, John Stewart, Charles Lenay. 236-243 [doi]
- An initial usability assessment for symbolic haptic rendering of music parameters. Meghan Allen, Jennifer Gluck, Karon E. MacLean, Erwin Tang. 244-251 [doi]
- Tangible user interfaces for 3D clipping plane interaction with volumetric data: a case study. Wen Qi, Jean-Bernard Martens. 252-258 [doi]
- A transformational approach for multimodal web user interfaces based on UsiXML. Adrian Stanciulescu, Quentin Limbourg, Jean Vanderdonckt, Benjamin Michotte, Francisco Montero. 259-266 [doi]
- A pattern mining method for interpretation of interaction. Tomoyuki Morita, Yasushi Hirano, Yasuyuki Sumi, Shoji Kajita, Kenji Mase. 267-273 [doi]
- A study of manual gesture-based selection for the PEMMI multimodal transport management interface. Fang Chen, Eric H. C. Choi, Julien Epps, Serge Lichman, Natalie Ruiz, Yu Shi, Ronnie Taib, Mike Wu. 274-281 [doi]
- Recognition of sign language subwords based on boosted hidden Markov models. Liang-Guo Zhang, Xilin Chen, Chunli Wang, Yiqiang Chen, Wen Gao. 282-287 [doi]
- Gesture-driven American sign language phraselator. Jose L. Hernandez-Rebollar. 288-292 [doi]
- Interactive vision to detect target objects for helper robots. Md. Altab Hossain, Rahmadi Kurnia, Akio Nakamura, Yoshinori Kuno. 293-300 [doi]
- The contrastive evaluation of unimodal and multimodal interfaces for voice output communication aids. Melanie Baljko. 301-308 [doi]
- Agent-based architecture for implementing multimodal learning environments for visually impaired children. Rami Saarinen, Janne Järvi, Roope Raisamo, Jouni Salo. 309-316 [doi]
- Perceiving ordinal data haptically under workload. Anthony Tang, Peter McLachlan, Karen Lowe, Chalapati Rao Saka, Karon E. MacLean. 317-324 [doi]
- Virtual tangible widgets: seamless universal interaction with personal sensing devices. Eiji Tokunaga, Hiroaki Kimura, Nobuyuki Kobayashi, Tatsuo Nakajima. 325-332 [doi]