- Interfacing life: a year in the life of a research lab. Yuri Ivanov. 1 [doi]
- The great challenge of multimodal interfaces towards symbiosis of human and robots. Norihiro Hagita. 2 [doi]
- Just in time learning: implementing principles of multimodal processing and learning for education. Dominic W. Massaro. 3-8 [doi]
- The painful face: pain expression recognition using active appearance models. Ahmed Bilal Ashraf, Simon Lucey, Jeffrey F. Cohn, Tsuhan Chen, Zara Ambadar, Ken Prkachin, Patty Solomon, Barry-John Theobald. 9-14 [doi]
- Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain. Gwen Littlewort, Marian Stewart Bartlett, Kang Lee. 15-21 [doi]
- Visual inference of human emotion and behaviour. Shaogang Gong, Caifeng Shan, Tao Xiang. 22-29 [doi]
- Audiovisual recognition of spontaneous interest within conversations. Björn Schuller, Ronald Müller, Benedikt Hörnler, Anja Höthker, Hitoshi Konosu, Gerhard Rigoll. 30-37 [doi]
- How to distinguish posed from spontaneous smiles using geometric features. Michel François Valstar, Hatice Gunes, Maja Pantic. 38-45 [doi]
- Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder. Rana El Kaliouby, Alea Teeters. 46-53 [doi]
- Statistical segmentation and recognition of fingertip trajectories for a gesture interface. Kazuhiro Morimoto, Chiyomi Miyajima, Norihide Kitaoka, Katunobu Itou, Kazuya Takeda. 54-57 [doi]
- A tactile language for intuitive human-robot communication. Andreas J. Schmid, Martin Hoffmann, Heinz Wörn. 58-65 [doi]
- Simultaneous prediction of dialog acts and address types in three-party conversations. Yosuke Matsusaka, Mika Enomoto, Yasuharu Den. 66-73 [doi]
- Developing and analyzing intuitive modes for interactive object modeling. Alexander Kasper, Regine Becher, Peter Steinhaus, Rüdiger Dillmann. 74-81 [doi]
- Extraction of important interactions in medical interviews using nonverbal information. Yuichi Sawamoto, Yuichi Koyama, Yasushi Hirano, Shoji Kajita, Kenji Mase, Kimiko Katsuyama, Kazunobu Yamauchi. 82-85 [doi]
- Towards smart meeting: enabling technologies and a real-world application. Zhiwen Yu, Motoyuki Ozeki, Yohsuke Fujii, Yuichi Nakamura. 86-93 [doi]
- Multimodal cues for addressee-hood in triadic communication with a human information retrieval agent. Jacques M. B. Terken, Irene Joris, Linda De Valk. 94-101 [doi]
- The effect of input mode on inactivity and interaction times of multimodal systems. Manolis Perakakis, Alexandros Potamianos. 102-109 [doi]
- Positional mapping: keyboard mapping based on characters writing positions for mobile devices. Ye Kyaw Thu, Yoshiyori Urano. 110-117 [doi]
- Five-key text input using rhythmic mappings. Christine Szentgyorgyi, Edward Lank. 118-121 [doi]
- Toward content-aware multimodal tagging of personal photo collections. Paulo Barthelmess, Edward C. Kaiser, David McGee. 122-125 [doi]
- A survey of affect recognition methods: audio, visual and spontaneous expressions. Zhihong Zeng, Maja Pantic, Glenn I. Roisman, Thomas S. Huang. 126-133 [doi]
- Real-time expression cloning using appearance models. Barry-John Theobald, Iain A. Matthews, Jeffrey F. Cohn, Steven M. Boker. 134-139 [doi]
- Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking. Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe. 140-145 [doi]
- Map navigation with mobile devices: virtual versus physical movement with and without visual context. Michael Rohs, Johannes Schöning, Martin Raubal, Georg Essl, Antonio Krüger. 146-153 [doi]
- Can you talk or only touch-talk: A VoIP-based phone feature for quick, quiet, and private communication. Maria Danninger, Leila Takayama, QianYing Wang, Courtney Schultz, Jörg Beringer, Paul Hofmann, Frankie James, Clifford Nass. 154-161 [doi]
- Designing audio and tactile crossmodal icons for mobile devices. Eve E. Hoggan, Stephen A. Brewster. 162-169 [doi]
- A study on the scalability of non-preferred hand mode manipulation. Jaime Ruiz, Edward Lank. 170-177 [doi]
- Voicepen: augmenting pen input with simultaneous non-linguistic vocalization. Susumu Harada, T. Scott Saponas, James A. Landay. 178-185 [doi]
- A large-scale behavior corpus including multi-angle video data for observing infants' long-term developmental processes. Shinya Kiriyama, Goh Yamamoto, Naofumi Otani, Shogo Ishikawa, Yoichi Takebayashi. 186-192 [doi]
- The micole architecture: multimodal support for inclusion of visually impaired children. Thomas Pietrzak, Benoît Martin, Isabelle Pecci, Rami Saarinen, Roope Raisamo, Janne Järvi. 193-200 [doi]
- Interfaces for musical activities and interfaces for musicians are not the same: the case for codes, a web-based environment for cooperative music prototyping. Evandro Manara Miletto, Luciano Vargas Flores, Marcelo Soares Pimenta, Jérôme Rutily, Leonardo Santagada. 201-207 [doi]
- Totalrecall: visualization and semi-automatic annotation of very large audio-visual corpora. Rony Kubat, Philip DeCamp, Brandon Roy. 208-215 [doi]
- Extensible middleware framework for multimodal interfaces in distributed environments. Vitor Fernandes, Tiago João Vieira Guerreiro, Bruno Araújo, Joaquim A. Jorge, João Pereira. 216-219 [doi]
- Temporal filtering of visual speech for audio-visual speech recognition in acoustically and visually challenging environments. Jong-Seok Lee, Cheol Hoon Park. 220-227 [doi]
- Reciprocal attentive communication in remote meeting with a humanoid robot. Tomoyuki Morita, Kenji Mase, Yasushi Hirano, Shoji Kajita. 228-235 [doi]
- Password management using doodles. Naveen Sundar Govindarajulu, Sriganesh Madhvanath. 236-239 [doi]
- A computational model for spatial expression resolution. Andrea Corradini. 240-246 [doi]
- Disambiguating speech commands using physical context. Katherine Everitt, Susumu Harada, Jeff A. Bilmes, James A. Landay. 247-254 [doi]
- Automatic inference of cross-modal nonverbal interactions in multiparty conversations: who responds to whom, when, and how? from gaze, head gestures, and utterances. Kazuhiro Otsuka, Hiroshi Sawada, Junji Yamato. 255-262 [doi]
- Influencing social dynamics in meetings through a peripheral display. Janienke Sturm, Olga Houben-van Herwijnen, Anke Eyck, Jacques M. B. Terken. 263-270 [doi]
- Using the influence model to recognize functional roles in meetings. Wen Dong, Bruno Lepri, Alessandro Cappelletti, Alex Pentland, Fabio Pianesi, Massimo Zancanaro. 271-278 [doi]
- User impressions of a stuffed doll robot's facing direction in animation systems. Hiroko Tochigi, Kazuhiko Shinozawa, Norihiro Hagita. 279-284 [doi]
- Speech-driven embodied entrainment character system with hand motion input in mobile environment. Kouzi Osaki, Tomio Watanabe, Michiya Yamamoto. 285-290 [doi]
- Natural multimodal dialogue systems: a configurable dialogue and presentation strategies component. Meriam Horchani, Benjamin Caron, Laurence Nigay, Franck Panaget. 291-298 [doi]
- Modeling human interaction resources to support the design of wearable multimodal systems. Tobias Klug, Max Mühlhäuser. 299-306 [doi]
- Speech-filtered bubble ray: improving target acquisition on display walls. Edward Tse, Mark S. Hancock, Saul Greenberg. 307-314 [doi]
- Using pen input features as indices of cognitive load. Natalie Ruiz, Ronnie Taib, Yu (David) Shi, Eric H. C. Choi, Fang Chen. 315-318 [doi]
- Automated generation of non-verbal behavior for virtual embodied characters. Werner Breitfuss, Helmut Prendinger, Mitsuru Ishizuka. 319-322 [doi]
- Detecting communication errors from visual cues during the system's conversational turn. Sy Bor Wang, David Demirdjian, Trevor Darrell. 323-326 [doi]
- Multimodal interaction analysis in a smart house. Pilar Manchón Portillo, Carmen del Solar, Gabriel Amores Carredano, Guillermo Pérez. 327-334 [doi]
- A multi-modal mobile device for learning japanese kanji characters through mnemonic stories. Norman Lin, Shoji Kajita, Kenji Mase. 335-338 [doi]
- 3d augmented mirror: a multimodal interface for string instrument learning and teaching with gesture support. Kia C. Ng, Tillman Weyde, Oliver Larkin, Kerstin Neubarth, Thijs Koerselman, Bee Ong. 339-345 [doi]
- Interest estimation based on dynamic bayesian networks for visual attentive presentation agents. Boris Brandherm, Helmut Prendinger, Mitsuru Ishizuka. 346-349 [doi]
- On-line multi-modal speaker diarization. Athanasios K. Noulas, Ben J. A. Kröse. 350-357 [doi]
- Presentation sensei: a presentation training system using speech and image processing. Kazutaka Kurihara, Masataka Goto, Jun Ogata, Yosuke Matsusaka, Takeo Igarashi. 358-365 [doi]
- The world of mushrooms: human-computer interaction prototype systems for ambient intelligence. Yasuhiro Minami, Minako Sawaki, Kohji Dohsaka, Ryuichiro Higashinaka, Kentaro Ishizuka, Hideki Isozaki, Tatsushi Matsubayashi, Masato Miyoshi, Atsushi Nakamura, Takanobu Oba, Hiroshi Sawada, Takeshi Yamada, Eisaku Maeda. 366-373 [doi]
- Evaluation of haptically augmented touchscreen gui elements under cognitive load. Rock Leung, Karon E. MacLean, Martin Bue Bertelsen, Mayukh Saubhasik. 374-381 [doi]
- Multimodal interfaces in semantic interaction. Naoto Iwahashi, Mikio Nakano. 382 [doi]
- Workshop on tagging, mining and retrieval of human related activity information. Paulo Barthelmess, Edward C. Kaiser. 383-384 [doi]
- Workshop on massive datasets. Christopher Richard Wren, Yuri A. Ivanov. 385 [doi]