- Two-way eye contact between humans and robots. Yoshinori Kuno, Arihiro Sakurai, Dai Miyauchi, Akio Nakamura. 1-8 [doi]
- Another person's eye gaze as a cue in solving programming problems. Randy Stein, Susan Brennan. 9-15 [doi]
- EyePrint: support of document browsing with eye gaze trace. Takehiko Ohno. 16-23 [doi]
- A framework for evaluating multimodal integration by humans and a role for embodied conversational agents. Dominic W. Massaro. 24-31 [doi]
- From conversational tooltips to grounded discourse: head pose tracking in interactive dialog systems. Louis-Philippe Morency, Trevor Darrell. 32-37 [doi]
- Evaluation of spoken multimodal conversation. Niels Ole Bernsen, Laila Dybkjær. 38-45 [doi]
- Multimodal transformed social interaction. Matthew Turk, Jeremy N. Bailenson, Andrew C. Beall, Jim Blascovich, Rosanna E. Guadagno. 46-52 [doi]
- Multimodal interaction in an augmented reality scenario. Gunther Heidemann, Ingo Bax, Holger Bekel. 53-60 [doi]
- The ThreadMill architecture for stream-oriented human communication analysis applications. Paulo Barthelmess, Clarence A. Ellis. 61-68 [doi]
- TouchLight: an imaging touch screen and display for gesture-based interaction. Andrew D. Wilson. 69-76 [doi]
- Walking-pad: a step-in-place locomotion interface for virtual environments. Laroussi Bouguila, Florian Evéquoz, Michèle Courant, Béat Hirsbrunner. 77-81 [doi]
- Multimodal detection of human interaction events in a nursing home environment. Datong Chen, Robert Malkin, Jie Yang. 82-89 [doi]
- Elvis: situated speech and gesture understanding for a robotic chandelier. Joshua Juster, Deb Roy. 90-96 [doi]
- Towards integrated microplanning of language and iconic gesture for multimodal output. Stefan Kopp, Paul Tepper, Justine Cassell. 97-104 [doi]
- Exploiting prosodic structuring of coverbal gesticulation. Sanshzar Kettebekov. 105-112 [doi]
- Visual and linguistic information in gesture classification. Jacob Eisenstein, Randall Davis. 113-120 [doi]
- Multimodal model integration for sentence unit detection. Mary P. Harper, Elizabeth Shriberg. 121-128 [doi]
- When do we interact multimodally?: cognitive load and multimodal communication patterns. Sharon L. Oviatt, Rachel Coulston, Rebecca Lunsford. 129-136 [doi]
- Bimodal HCI-related affect recognition. Zhihong Zeng, Jilin Tu, Ming Liu, Tong Zhang, Nicholas Rizzolo, ZhenQiu Zhang, Thomas S. Huang, Dan Roth, Stephen E. Levinson. 137-143 [doi]
- Identifying the addressee in human-human-robot interactions based on head pose and speech. Michael Katzenmaier, Rainer Stiefelhagen, Tanja Schultz. 144-151 [doi]
- Articulatory features for robust visual speech recognition. Kate Saenko, Trevor Darrell, James R. Glass. 152-158 [doi]
- M/ORIS: a medical/operating room interaction system. Sébastien Grange, Terrence Fong, Charles Baur. 159-166 [doi]
- Modality fusion for graphic design applications. André D. Milota. 167-174 [doi]
- Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures. Hartwig Holzapfel, Kai Nickel, Rainer Stiefelhagen. 175-182 [doi]
- AROMA: ambient awareness through olfaction in a messaging application. Adam Bodnar, Richard Corbett, Dmitry Nekrasovski. 183-190 [doi]
- The virtual haptic back for palpatory training. Robert L. Williams II, Mayank Srivastava, John N. Howell, Robert R. Conatser Jr., David C. Eland, Janet M. Burns, Anthony G. Chila. 191-197 [doi]
- A vision-based sign language recognition system using tied-mixture density HMM. Liang-Guo Zhang, Yiqiang Chen, Gaolin Fang, Xilin Chen, Wen Gao. 198-204 [doi]
- Analysis of emotion recognition using facial expressions, speech and multimodal information. Carlos Busso, Zhigang Deng, Serdar Yildirim, Murtaza Bulut, Chul-Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich Neumann, Shrikanth Narayanan. 205-211 [doi]
- Support for input adaptability in the ICON toolkit. Pierre Dragicevic, Jean-Daniel Fekete. 212-219 [doi]
- User walkthrough of multimodal access to multidimensional databases. Myra P. van Esch-Bussemakers, Anita H. M. Cremers. 220-226 [doi]
- Multimodal interaction under exerted conditions in a natural field setting. Sanjeev Kumar, Philip R. Cohen, Rachel Coulston. 227-234 [doi]
- A segment-based audio-visual speech recognizer: data collection, development, and initial experiments. Timothy J. Hazen, Kate Saenko, Chia-Hao La, James R. Glass. 235-242 [doi]
- A model-based approach for real-time embedded multimodal systems in military aircrafts. Rémi Bastide, David Navarre, Philippe A. Palanque, Amélie Schyn, Pierre Dragicevic. 243-250 [doi]
- ICARE software components for rapidly developing multimodal interfaces. Jullien Bouchet, Laurence Nigay, Thierry Ganille. 251-258 [doi]
- MacVisSTA: a system for multimodal analysis. R. Travis Rose, Francis K. H. Quek, Yang Shi. 259-264 [doi]
- Context based multimodal fusion. Norbert Pfleger. 265-272 [doi]
- Emotional Chinese talking head system. Jianhua Tao, Tieniu Tan. 273-280 [doi]
- Experiences on haptic interfaces for visually impaired young children. Saija Patomäki, Roope Raisamo, Jouni Salo, Virpi Pasto, Arto Hippula. 281-288 [doi]
- Visual touchpad: a two-handed gestural input device. Shahzad Malik, Joseph Laszlo. 289-296 [doi]
- An evaluation of virtual human technology in informational kiosks. Curry I. Guinn, Robert C. Hubal. 297-302 [doi]
- Software infrastructure for multi-modal virtual environments. Brian F. Goldiez, Glenn A. Martin, Jason Daly, Donald Washburn, Todd Lazarus. 303-308 [doi]
- GroupMedia: distributed multi-modal interfaces. Anmol Madan, Ron Caneel, Alex Pentland. 309-316 [doi]
- Agent and library augmented shared knowledge areas (ALASKA). Eric R. Hamilton. 317-318 [doi]
- MULTIFACE: multimodal content adaptations for heterogeneous devices. Songsak Channarukul, Susan Weber McRoy, Syed S. Ali. 319-320 [doi]
- Command and control resource performance predictor (C²RP²). Joseph M. Dalton, Ali Ahmad, Kay M. Stanney. 321-322 [doi]
- A multi-modal architecture for cellular phones. Luca Nardelli, Marco Orlandi, Daniele Falavigna. 323-324 [doi]
- SlidingMap: introducing and evaluating a new modality for map interaction. Matthias Merdes, Jochen Häußler, Matthias Jöst. 325-326 [doi]
- Multimodal interaction for distributed collaboration. Levent Bolelli, Guoray Cai, Hongmei Wang, Bita Mortazavi, Ingmar Rauschert, Sven Fuhrmann, Rajeev Sharma, Alan M. MacEachren. 327-328 [doi]
- A multimodal learning interface for sketch, speak and point creation of a schedule chart. Edward C. Kaiser, David Demirdjian, Alexander Gruenstein, Xiaoguang Li, John Niekrasz, Matt Wesson, Sanjeev Kumar. 329-330 [doi]
- Real-time audio-visual tracking for meeting analysis. David Demirdjian, Kevin Wilson, Michael Siracusa, Trevor Darrell. 331-332 [doi]
- Collaboration in parallel worlds. Ashutosh Morde, Jun Hou, S. Kicha Ganapathy, Carlos D. Correa, Allan Meng Krebs, Lawrence Rabiner. 333-334 [doi]
- Segmentation and classification of meetings using multiple information streams. Paul E. Rybski, Satanjeev Banerjee, Fernando De la Torre, Carlos Vallespí, Alexander I. Rudnicky, Manuela M. Veloso. 335-336 [doi]
- A maximum entropy based approach for multimodal integration. Péter Pál Boda. 337-338 [doi]
- Multimodal interface platform for geographical information systems (GeoMIP) in crisis management. Pyush Agrawal, Ingmar Rauschert, Keerati Inochanon, Levent Bolelli, Sven Fuhrmann, Isaac Brewer, Guoray Cai, Alan M. MacEachren, Rajeev Sharma. 339-340 [doi]
- Adaptations of multimodal content in dialog systems targeting heterogeneous devices. Songsak Channarukul. 341 [doi]
- Utilizing gestures to better understand dynamic structure of human communication. Lei Chen. 342 [doi]
- Multimodal programming for dyslexic students. Dale-Marie Wilson. 343 [doi]
- Gestural cues for speech understanding. Jacob Eisenstein. 344 [doi]
- Using language structure for adaptive multimodal language acquisition. Rajesh Chandrasekaran. 345 [doi]
- Private speech during multimodal human-computer interaction. Rebecca Lunsford. 346 [doi]
- Projection augmented models: the effect of haptic feedback on subjective and objective human factors. Emily Bennett. 347 [doi]
- Multimodal interface design for multimodal meeting content retrieval. Agnes Lisowska. 348 [doi]
- Determining efficient multimodal information-interaction spaces for C² systems. Leah Reeves. 349 [doi]
- Using spatial warning signals to capture a driver's visual attention. Cristy Ho. 350 [doi]
- Multimodal interfaces and applications for visually impaired children. Saija Patomäki. 351 [doi]
- Multilayer architecture in sign language recognition system. Feng Jiang, Hongxun Yao, Guilin Yao. 352-353 [doi]
- Computer vision techniques and applications in human-computer interaction. Erno Mäkinen. 354 [doi]
- Multimodal response generation in GIS. Levent Bolelli. 355 [doi]
- Adaptive multimodal recognition of voluntary and involuntary gestures of people with motor disabilities. Ingmar Rauschert. 356 [doi]