- Layered Representations for Human Activity Recognition. Nuria Oliver, Eric Horvitz, Ashutosh Garg. 3-8 [doi]
- Evaluating Integrated Speech- and Image Understanding. Christian Bauckhage, Jannik Fritsch, Katharina J. Rohlfing, Sven Wachsmuth, Gerhard Sagerer. 9-14 [doi]
- Techniques for Interactive Audience Participation. Dan Maynes-Aminzade, Randy Pausch, Steven M. Seitz. 15-20 [doi]
- Perceptual Collaboration in Neem. Paulo Barthelmess, Clarence A. Ellis. 21-26 [doi]
- A Tracking Framework for Collaborative Human Computer Interaction. Ediz Polat, Mohammed Yeasin, Rajeev Sharma. 27-32 [doi]
- A Structural Approach to Distance Rendering in Personal Auditory Displays. Federico Fontana, Davide Rocchesso, Laura Ottaviani. 33-38 [doi]
- A Multimodal Electronic Travel Aid Device. Andrea Fusiello, Antonello Panuccio, Vittorio Murino, Federico Fontana, Davide Rocchesso. 39-46 [doi]
- Lecture and Presentation Tracking in an Intelligent Meeting Room. Ivica Rogina, Thomas Schaaf. 47-52 [doi]
- Parallel Computing-Based Architecture for Mixed-Initiative Spoken Dialogue. Ryuta Taguma, Tatsuhiro Moriyama, Koji Iwano, Sadaoki Furui. 53-58 [doi]
- 3-D N-Best Search for Simultaneous Recognition of Distant-Talking Speech of Multiple Talkers. Satoshi Nakamura, Panikos Heracleous. 59-63 [doi]
- Integration of Tone Related Feature for Chinese Speech Recognition. Pui-Fung Wong, Man-Hung Siu. 64-68 [doi]
- Talking Heads: Which Matching between Faces and Synthetic Voices? Marc Mersiol, Noël Chateau, Valérie Maffiolo. 69-74 [doi]
- Robust Noisy Speech Recognition with Adaptive Frequency Bank Selection. Ye Tian, Ji Wu, Zuoying Wang, Dajin Lu. 75-80 [doi]
- Covariance-Tied Clustering Method in Speaker Identification. Ziqiang Wang, Yang Liu, Peng Ding, Xu Bo. 81-86 [doi]
- Context-Based Multimodal Input Understanding in Conversational Systems. Joyce Y. Chai, Shimei Pan, Michelle X. Zhou, Keith Houck. 87-92 [doi]
- Context-Sensitive Help for Multimodal Dialogue. Helen Wright Hastie, Michael Johnston, Patrick Ehlen. 93-98 [doi]
- Referring to Objects with Spoken and Haptic Modalities. Frédéric Landragin, Nadia Bellalem, Laurent Romary. 99-104 [doi]
- Towards Visually-Grounded Spoken Language Acquisition. Deb Roy. 105-110 [doi]
- Modeling Output in the EMBASSI Multimodal Dialog System. Christian Elting, Gregor Möhler. 111-116 [doi]
- Multimodal Dialogue Systems for Interactive TV Applications. Aseel Ibrahim, Pontus Johansson. 117-122 [doi]
- Human-Robot Interaction: Engagement between Humans and Robots for Hosting Activities. Candace L. Sidner, Myroslava Dzikovska. 123-128 [doi]
- Viewing and Analyzing Multimodal Human-Computer Tutorial Dialogue: A Database Approach. Jack Mostow, Joseph E. Beck, Raghu Chalasani, Andrew Cuneo, Peng Jia. 129-134 [doi]
- Adaptive Dialog Based upon Multimodal Language Acquisition. Sorin Dusan, James L. Flanagan. 135-140 [doi]
- Integrating Emotional Cues into a Framework for Dialogue Management. Hartwig Holzapfel, Christian Fügen, Matthias Denecke, Alex Waibel. 141-148 [doi]
- Data Driven Design of an ANN/HMM System for On-line Unconstrained Handwritten Character Recognition. Haifeng Li, Thierry Artières, Patrick Gallinari. 149-154 [doi]
- Gesture Patterns during Speech Repairs. Lei Chen, Mary P. Harper, Francis K. H. Quek. 155-160 [doi]
- Prosody Based Co-analysis for Continuous Recognition of Coverbal Gestures. Sanshzar Kettebekov, Mohammed Yeasin, Rajeev Sharma. 161-166 [doi]
- Purdue RVL-SLLL ASL Database for Automatic Recognition of American Sign Language. Aleix M. Martínez, Ronnie B. Wilbur, Robin Shay, Avinash C. Kak. 167-172 [doi]
- The Role of Gesture in Multimodal Referring Actions. Frédéric Landragin. 173-178 [doi]
- Hand Gesture Symmetric Behavior Detection and Analysis in Natural Conversation. Yingen Xiong, Francis K. H. Quek, David McNeill. 179-184 [doi]
- A Multi-Class Pattern Recognition System for Practical Finger Spelling Translation. Jose L. Hernandez-Rebollar, Robert W. Lindeman, Nicholas Kyriakopoulos. 185-190 [doi]
- A Map-Based System Using Speech and 3D Gestures for Pervasive Computing. Andrea Corradini, Richard M. Wesson, Philip R. Cohen. 191-196 [doi]
- Hand Tracking Using Spatial Gesture Modeling and Visual Feedback for a Virtual DJ System. Edward Lin, Andy Cassidy, Dan Hook, Avinash Baliga, Tsuhan Chen. 197-202 [doi]
- State Sharing in a Hybrid Neuro-Markovian On-Line Handwriting Recognition System through a Simple Hierarchical Clustering Algorithm. Haifeng Li, Thierry Artières, Patrick Gallinari. 203-210 [doi]
- An Automatic Speech Translation System on PDAs for Travel Conversation. Ryosuke Isotani, Kiyoshi Yamabana, Shinichi Ando, Ken Hanazawa, Shin-ya Ishikawa, Tadashi Emori, Ken-ichi Iso, Hiroaki Hattori, Akitoshi Okumura, Takao Watanabe. 211-216 [doi]
- A PDA-Based Sign Translator. Jing Zhang, Xilin Chen, Jie Yang, Alex Waibel. 217-222 [doi]
- The NESPOLE! Multimodal Interface for Cross-lingual Communication - Experience and Lessons Learned. Loredana Taddei, Erica Costantini, Alon Lavie. 223-228 [doi]
- Research of Machine Learning Method for Specific Information Recognition on the Internet. Dequan Zheng, Yi Hu, Tiejun Zhao, Hao Yu, Sheng Li. 229-234 [doi]
- The Added Value of Multimodality in the NESPOLE! Speech-to-Speech Translation System: An Experimental Study. Erica Costantini, Fabio Pianesi, Susanne Burger. 235-240 [doi]
- Multi-Modal Translation System and Its Evaluation. Shigeo Morishima, Satoshi Nakamura. 241-246 [doi]
- Towards Universal Speech Recognition. Zhirong Wang, Umut Topkara, Tanja Schultz, Alex Waibel. 247-252 [doi]
- Improved Named Entity Translation and Bilingual Named Entity Extraction. Fei Huang, Stephan Vogel. 253-260 [doi]
- Active Gaze Tracking for Human-Robot Interaction. Rowel Atienza, Alexander Zelinsky. 261-266 [doi]
- 3-D Articulated Pose Tracking for Untethered Deictic Reference. David Demirdjian, Trevor Darrell. 267-272 [doi]
- Tracking Focus of Attention in Meetings. Rainer Stiefelhagen. 273-280 [doi]
- A Probabilistic Dynamic Contour Model for Accurate and Robust Lip Tracking. Qiang Wang, Haizhou Ai, Guangyou Xu. 281-286 [doi]
- Attentional Object Spotting by Integrating Multimodal Input. Chen Yu, Dana H. Ballard, Shenghuo Zhu. 287-292 [doi]
- Lip Tracking for MPEG-4 Facial Animation. Zhilin Wu, Petar S. Aleksic, Aggelos K. Katsaggelos. 293-298 [doi]
- Achieving Real-Time Lip Synch via SVM-Based Phoneme Classification and Lip Shape Refinement. Taeyoon Kim, Yongsung Kang, Hanseok Ko. 299-304 [doi]
- Multi-Modal Temporal Asynchronicity Modeling by Product HMMs for Robust Audio-Visual Speech Recognition. Satoshi Nakamura, Ken-ichi Kumatani, Satoshi Tamura. 305-312 [doi]
- A Multi-Modal Interface for an Interactive Simulated Vascular Reconstruction System. Elena V. Zudilova, Peter M. A. Sloot, Robert G. Belleman. 313-318 [doi]
- Universal Interfaces to Multimedia Documents. Helen Petrie, Wendy Fisher, Ine Langer, Gerhard Weber, Keith Gladstone, Cathy Rundle, Liesbeth Pyfers. 319-324 [doi]
- A Video Based Interface to Textual Information for the Visually Impaired. Ali Zandifar, Ramani Duraiswami, Antoine Chahine, Larry S. Davis. 325-330 [doi]
- Modular Approach of Multimodal Integration in a Virtual Environment. Rajarathinam Arangarasan, George N. Phillips Jr. 331-336 [doi]
- Mobile Multi-Modal Data Services for GPRS Phones and Beyond. Georg Niklfeld, Michael Pucher, Robert Finan, Wolfgang Eckhart. 337-342 [doi]
- Flexi-Modal and Multi-Machine User Interfaces. Brad A. Myers, Robert Malkin, Michael Bett, Alex Waibel, Ben Bostwick, Robert C. Miller, Jie Yang, Matthias Denecke, Edgar Seemann, Jie Zhu, Choon Hong Peck, Dave Kong, Jeffrey Nichols, William L. Scherlis. 343-348 [doi]
- A Real-Time Framework for Natural Multimodal Interaction with Large Screen Displays. Nils Krahnstoever, Sanshzar Kettebekov, Mohammed Yeasin, Rajeev Sharma. 349-354 [doi]
- Embarking on Multimodal Interface Design. Anoop K. Sinha, James A. Landay. 355-360 [doi]
- Multi Modal User Interaction in an Automatic Pool Trainer. Lars Bo Larsen, Morten Damm Jensen, Wisdom Kobby Vodzi. 361-366 [doi]
- Multimodal Contextual Car-Driver Interface. Daniel P. Siewiorek, Asim Smailagic, Matthew Hornyak. 367-376 [doi]
- Requirements for Automatically Generating Multi-Modal Interfaces for Complex Appliances. Jeffrey Nichols, Brad A. Myers, Thomas K. Harris, Roni Rosenfeld, Stefanie Shriver, Michael Higgins, Joseph Hughes. 377-382 [doi]
- Articulated Model Based People Tracking Using Motion Models. Huazhong Ning, Liang Wang, Weiming Hu, Tieniu Tan. 383-388 [doi]
- Audiovisual Arrays for Untethered Spoken Interfaces. Kevin Wilson, Vibhav Rangarajan, Neal Checka, Trevor Darrell. 389-394 [doi]
- Fingerprint Classification by Directional Fields. Sen Wang, Wei Wei Zhang, Yang Sheng Wang. 395-399 [doi]
- Towards Vision-Based 3-D People Tracking in a Smart Room. Dirk Focken, Rainer Stiefelhagen. 400-405 [doi]
- Using TouchPad Pressure to Detect Negative Affect. Helena M. Mentis, Geri Gay. 406-410 [doi]
- Designing Transition Networks for Multimodal VR-Interactions Using a Markup Language. Marc Erich Latoschik. 411-416 [doi]
- Musically Expressive Doll in Face-to-Face Communication. Tomoko Yonezawa, Kenji Mase. 417-422 [doi]
- Towards Monitoring Human Activities Using an Omnidirectional Camera. Xilin Chen, Jie Yang. 423-428 [doi]
- Smart Platform - A Software Infrastructure for Smart Space (SISS). Weikai Xie, Yuanchun Shi, Guangyou Xu, Yanhua Mao. 429-436 [doi]
- Do Multimodal Signals Need to Come from the Same Place? Crossmodal Attentional Links Between Proximal and Distal Surfaces. Rob Gray, Hong Z. Tan, J. Jay Young. 437-441 [doi]
- CATCH-2004 Multi-Modal Browser: Overview Description with Usability Analysis. Jan Kleindienst, Ladislav Serédi, Pekka Kapanen, Janne Bergman. 442-447 [doi]
- Multimodal Interaction During Multiparty Dialogues: Initial Results. Philip R. Cohen, Rachel Coulston, Kelly Krout. 448-453 [doi]
- Multi-Modal Embodied Agents Scripting. Yasmine Arafa, Abe Mamdani. 454-459 [doi]
- A Methodology for Evaluating Multimodality in a Home Entertainment System. Jason Williams, Georg Michelitsch, Gregor Möhler, Stefan Rapp. 460-465 [doi]
- Body-Based Interfaces. Changseok Cho, Huichul Yang, Gerard Jounghyun Kim, Sung Ho Han. 466-472 [doi]
- Evaluation of the Command and Control Cube. Jérôme Grosjean, Jean-Marie Burkhardt, Sabine Coquillart, Paul Richard. 473-478 [doi]
- Interruptions as Multimodal Outputs: Which are the Less Disruptive? Ernesto Arroyo, Ted Selker, Alexandre Stouffs. 479-482 [doi]
- Experimentally Augmenting an Intelligent Tutoring System with Human-Supplied Capabilities: Adding Human-Provided Emotional Scaffolding to an Automated Reading Tutor that Listens. Gregory Aist, Barry Kort, Rob Reilly, Jack Mostow, Rosalind W. Picard. 483-490 [doi]
- Individual Differences in Facial Expression: Stability over Time, Relation to Self-Reported Emotion, and Ability to Inform Person Identification. Jeffrey F. Cohn, Karen L. Schmidt, Ralph Gross, Paul Ekman. 491-498 [doi]
- Training a Talking Head. Michael M. Cohen, Dominic W. Massaro, Rashid Clark. 499-504 [doi]
- Labial Coarticulation Modeling for Realistic Facial Animation. Piero Cosi, Emanuela Magno Caldognetto, Giulio Perin, Claudio Zmarich. 505-510 [doi]
- Improved Information Maximization Based Face and Facial Feature Detection from Real-time Video and Application in a Multi-Modal Person Identification System. Ziyou Xiong, Yunqiang Chen, Roy Wang, Thomas S. Huang. 511-516 [doi]
- Animating Arbitrary Topology 3D Facial Model Using the MPEG-4 FaceDefTables. Dalong Jiang, Wen Gao, Zhiguo Li, Zhaoqi Wang. 517-522 [doi]
- An Improved Active Shape Model for Face Alignment. Wei Wang, Shiguang Shan, Wen Gao, Bo Cao, Baocai Yin. 523-528 [doi]
- Head-Pose Invariant Facial Expression Recognition Using Convolutional Neural Networks. Beat Fasel. 529-534 [doi]
- An Improved Algorithm for Hairstyle Dynamics. Wenjun Lao, Dehui Kong, Baocai Yin. 535-540 [doi]