- Weight, weight, don't tell me. Ted Warburton. 1 [doi]
- Movement and music: designing gestural interfaces for computer-based musical instruments. M. Sile O'Modhrain. 2 [doi]
- Mixing virtual and actual. Herbert H. Clark. 3 [doi]
- Collaborative multimodal photo annotation over digital paper. Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David McGee, Philip R. Cohen. 4-11 [doi]
- MyConnector: analysis of context cues to predict human availability for communication. Maria Danninger, Tobias Kluge, Rainer Stiefelhagen. 12-19 [doi]
- Human perception of intended addressee during computer-assisted meetings. Rebecca Lunsford, Sharon L. Oviatt. 20-27 [doi]
- Automatic detection of group functional roles in face to face interactions. Massimo Zancanaro, Bruno Lepri, Fabio Pianesi. 28-34 [doi]
- Speaker localization for microphone array-based ASR: the effects of accuracy on overlapping speech. Hari Krishna Maganti, Daniel Gatica-Perez. 35-38 [doi]
- Automatic speech recognition for webcasts: how good is good enough and what to do when it isn't. Cosmin Munteanu, Gerald Penn, Ronald Baecker, Yuecheng Zhang. 39-42 [doi]
- Cross-modal coordination of expressive strength between voice and gesture for personified media. Tomoko Yonezawa, Noriko Suzuki, Shinji Abe, Kenji Mase, Kiyoshi Kogure. 43-50 [doi]
- VirtualHuman: dialogic and affective interaction with virtual characters. Norbert Reithinger, Patrick Gebhard, Markus Löckelt, Alassane Ndiaye, Norbert Pfleger, Martin Klesen. 51-58 [doi]
- From vocal to multimodal dialogue management. Miroslav Melichar, Pavel Cenek. 59-67 [doi]
- Human-Robot dialogue for joint construction tasks. Mary Ellen Foster, Tomas By, Markus Rickert, Alois Knoll. 68-71 [doi]
- roBlocks: a robotic construction kit for mathematics and science education. Eric Schweikardt, Mark D. Gross. 72-75 [doi]
- GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications. Edward Tse, Saul Greenberg, Chia Shen. 76-83 [doi]
- Co-Adaptation of audio-visual speech and gesture classifiers. Chris Mario Christoudias, Kate Saenko, Louis-Philippe Morency, Trevor Darrell. 84-91 [doi]
- Towards the integration of shape-related information in 3-D gestures and speech. Timo Sowa. 92-99 [doi]
- Which one is better?: information navigation techniques for spatially aware handheld displays. Michael Rohs, Georg Essl. 100-107 [doi]
- Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis. Jennifer L. Burke, Matthew S. Prewett, Ashley A. Gray, Liuquin Yang, Frederick R. B. Stilson, Michael D. Coovert, Linda R. Elliot, Elizabeth Redden. 108-117 [doi]
- Multimodal estimation of user interruptibility for smart mobile telephones. Robert Malkin, Datong Chen, Jie Yang, Alex Waibel. 118-125 [doi]
- Short message dictation on Symbian series 60 mobile phones. E. Karpov, I. Kiss, J. Leppänen, J. Olsen, D. Oria, S. Sivadas, J. Tian. 126-127 [doi]
- The NIST smart data flow system II multimodal data transport infrastructure. Antoine Fillinger, Stéphane Degré, Imad Hamchi, Vincent Stanford. 128 [doi]
- A contextual multimodal integrator. Péter Pál Boda. 129-130 [doi]
- Collaborative multimodal photo annotation over digital paper. Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David McGee, Philip R. Cohen. 131-132 [doi]
- CarDialer: multi-modal in-vehicle cellphone control application. Vladimír Bergl, Martin Cmejrek, Martin Fanta, Martin Labský, Ladislav Serédi, Jan Sedivý, Lubos Ures. 133-134 [doi]
- Gender and age estimation system robust to pose variations. Erina Takikawa, Koichi Kinoshita, Shihong Lao, Masato Kawade. 135-136 [doi]
- A fast and robust 3D head pose and gaze estimation system. Koichi Kinoshita, Yong Ma, Shihong Lao, Masato Kawade. 137-138 [doi]
- Audio-visual emotion recognition in adult attachment interview. Zhihong Zeng, Yuxiao Hu, Yun Fu, Thomas S. Huang, Glenn I. Roisman, Zhen Wen. 139-145 [doi]
- Modeling naturalistic affective states via facial and vocal expressions recognition. George Caridakis, Lori Malatesta, Loïc Kessous, Noam Amir, Amaryllis Raouzaiou, Kostas Karpouzis. 146-154 [doi]
- A need to know system for group classification. Wen Dong, Jonathan Gips, Alex Pentland. 155-161 [doi]
- Spontaneous vs. posed facial behavior: automatic analysis of brow actions. Michel François Valstar, Maja Pantic, Zara Ambadar, Jeffrey F. Cohn. 162-170 [doi]
- Gaze-X: adaptive affective multimodal interface for single-user office scenarios. Ludo Maat, Maja Pantic. 171-178 [doi]
- Human computing, virtual humans and artificial imperfection. Zsófia Ruttkay, Dennis Reidsma, Anton Nijholt. 179-184 [doi]
- Using maximum entropy (ME) model to incorporate gesture cues for SU detection. Lei Chen, Mary P. Harper, Zhongqiang Huang. 185-192 [doi]
- Salience modeling based on non-verbal modalities for spoken language understanding. Shaolin Qu, Joyce Y. Chai. 193-200 [doi]
- EM detection of common origin of multi-modal cues. Athanasios K. Noulas, Ben J. A. Kröse. 201-208 [doi]
- Prototyping novel collaborative multimodal systems: simulation, data collection and analysis tools for the next decade. Alexander M. Arthur, Rebecca Lunsford, Matt Wesson, Sharon L. Oviatt. 209-216 [doi]
- Combining audio and video to predict helpers' focus of attention in multiparty remote collaboration on physical tasks. Jiazhi Ou, Yanxin Shi, Jeffrey Wong, Susan R. Fussell, Jie Yang. 217-224 [doi]
- The role of psychological ownership and ownership markers in collaborative working environment. QianYing Wang, Alberto Battocchi, Ilenia Graziola, Fabio Pianesi, Daniel Tomasini, Massimo Zancanaro, Clifford Nass. 225-232 [doi]
- Foundations of human computing: facial expression and emotion. Jeffrey F. Cohn. 233-238 [doi]
- Human computing and machine understanding of human behavior: a survey. Maja Pantic, Alex Pentland, Anton Nijholt, Thomas S. Huang. 239-248 [doi]
- Computing human faces for human viewers: automated animation in photographs and paintings. Volker Blanz. 249-256 [doi]
- Detection and application of influence rankings in small group meetings. Rutger Rienks, Dong Zhang, Daniel Gatica-Perez, Wilfried Post. 257-264 [doi]
- Tracking the multi person wandering visual focus of attention. Kevin Smith, Sileye O. Ba, Daniel Gatica-Perez, Jean-Marc Odobez. 265-272 [doi]
- Toward open-microphone engagement for multiparty interactions. Rebecca Lunsford, Sharon L. Oviatt, Alexander M. Arthur. 273-280 [doi]
- Tracking head pose and focus of attention with multiple far-field cameras. Michael Voit, Rainer Stiefelhagen. 281-286 [doi]
- Recognizing gaze aversion gestures in embodied conversational discourse. Louis-Philippe Morency, Chris Mario Christoudias, Trevor Darrell. 287-294 [doi]
- Explorations in sound for tilting-based interfaces. Matthias Rath, Michael Rohs. 295-301 [doi]
- Haptic phonemes: basic building blocks of haptic communication. Mario J. Enriquez, Karon E. MacLean, Christian Chita. 302-309 [doi]
- Toward haptic rendering for a virtual dissection. Nasim Melony Vafai, Shahram Payandeh, John Dill. 310-317 [doi]
- Embrace system for remote counseling. Osamu Morikawa, Sayuri Hashimoto, Tsunetsugu Munakata, Junzo Okunaka. 318-325 [doi]
- Enabling multimodal communications for enhancing the ability of learning for the visually impaired. Francis K. H. Quek, David McNeill, Francisco Oliveira. 326-332 [doi]
- The benefits of multimodal information: a meta-analysis comparing visual and visual-tactile feedback. Matthew S. Prewett, Liuquin Yang, Frederick R. B. Stilson, Ashley A. Gray, Michael D. Coovert, Jennifer L. Burke, Elizabeth Redden, Linda R. Elliot. 333-338 [doi]
- Word graph based speech recognition error correction by handwriting input. Peng Liu, Frank K. Soong. 339-346 [doi]
- Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations. Edward C. Kaiser. 347-356 [doi]
- Multimodal fusion: a new hybrid strategy for dialogue systems. Pilar Manchón Portillo, Guillermo Pérez García, Gabriel Amores Carredano. 357-363 [doi]
- Evaluating usability based on multimodal information: an empirical study. Tao Lin, Atsumi Imamiya. 364-371 [doi]
- A new approach to haptic augmentation of the GUI. Thomas N. Smyth, Arthur E. Kirkpatrick. 372-379 [doi]
- HMM-based synthesis of emotional facial expressions during speech in synthetic talking heads. Nadia Mana, Fabio Pianesi. 380-387 [doi]
- Embodiment and multimodality. Francis K. H. Quek. 388-390 [doi]