- Living better with robots. Cynthia Breazeal. 1-2 [doi]
- Discovering group nonverbal conversational patterns with topics. Dinesh Babu Jayagopi, Daniel Gatica-Perez. 3-6 [doi]
- Agreement detection in multiparty conversation. Sebastian Germesin, Theresa Wilson. 7-14 [doi]
- Multimodal floor control shift detection. Lei Chen, Mary P. Harper. 15-22 [doi]
- Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities. Stavros Petridis, Hatice Gunes, Sebastian Kaltwang, Maja Pantic. 23-30 [doi]
- Dialog in the open world: platform and applications. Dan Bohus, Eric Horvitz. 31-38 [doi]
- Towards adapting fantasy, curiosity and challenge in multimodal dialogue systems for preschoolers. Theofanis Kannetis, Alexandros Potamianos. 39-46 [doi]
- Building multimodal applications with EMMA. Michael Johnston. 47-54 [doi]
- A speaker diarization method based on the probabilistic fusion of audio-visual location information. Kentaro Ishizuka, Shoko Araki, Kazuhiro Otsuka, Tomohiro Nakatani, Masakiyo Fujimoto. 55-62 [doi]
- Dynamic robot autonomy: investigating the effects of robot decision-making in a human-robot team task. Paul W. Schermerhorn, Matthias Scheutz. 63-70 [doi]
- A speech mashup framework for multimodal mobile services. Giuseppe Di Fabbrizio, Thomas Okken, Jay G. Wilpon. 71-78 [doi]
- Detecting, tracking and interacting with people in a public space. Sunsern Cheamanunkul, Evan Ettinger, Matt Jacobsen, Patrick Lai, Yoav Freund. 79-86 [doi]
- Cache-based language model adaptation using visual attention for ASR in meeting scenarios. Neil Cooke, Martin J. Russell. 87-90 [doi]
- Multimodal end-of-turn prediction in multi-party meetings. Iwan de Kok, Dirk Heylen. 91-98 [doi]
- Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings. Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato. 99-106 [doi]
- Classification of patient case discussions through analysis of vocalisation graphs. Saturnino Luz, Bridget Kane. 107-114 [doi]
- Learning from preferences and selected multimodal features of players. Georgios N. Yannakakis. 115-118 [doi]
- Detecting user engagement with a robot companion using task and social interaction-based features. Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, Peter W. McOwan. 119-126 [doi]
- Multi-modal features for real-time detection of human-robot interaction categories. Ian R. Fasel, Masahiro Shiomi, Philippe-Emmanuel Chadutaud, Takayuki Kanda, Norihiro Hagita, Hiroshi Ishiguro. 127-134 [doi]
- Modeling culturally authentic style shifting with virtual peers. Justine Cassell, Kathleen Geraghty, Berto Gonzalez, John Borland. 135-142 [doi]
- Between linguistic attention and gaze fixations in multimodal conversational interfaces. Rui Fang, Joyce Y. Chai, Fernanda Ferreira. 143-150 [doi]
- Head-up interaction: can we break our addiction to the screen and keyboard? Stephen A. Brewster. 151-152 [doi]
- Fusion engines for multimodal input: a survey. Denis Lalanne, Laurence Nigay, Philippe A. Palanque, Peter Robinson, Jean Vanderdonckt, Jean-François Ladry. 153-160 [doi]
- A fusion framework for multimodal interactive applications. Hildeberto Mendonça, Jean-Yves Lionel Lawson, Olga Vybornova, Benoit M. Macq, Jean Vanderdonckt. 161-168 [doi]
- Benchmarking fusion engines of multimodal interactive systems. Bruno Dumas, Rolf Ingold, Denis Lalanne. 169-176 [doi]
- Temporal aspects of CARE-based multimodal fusion: from a fusion mechanism to composition components and WoZ components. Marcos Serrano, Laurence Nigay. 177-184 [doi]
- Formal description techniques to support the design, construction and evaluation of fusion engines for sure (safe, usable, reliable and evolvable) multimodal interfaces. Jean-François Ladry, David Navarre, Philippe A. Palanque. 185-192 [doi]
- Multimodal inference for driver-vehicle interaction. Tevfik Metin Sezgin, Ian Davies, Peter Robinson. 193-198 [doi]
- Multimodal integration of natural gaze behavior for intention recognition during object manipulation. Thomas Bader, Matthias Vogelgesang, Edmund Klaus. 199-206 [doi]
- Salience in the generation of multimodal referring acts. Paul Piwek. 207-210 [doi]
- Communicative gestures in coreference identification in multiparty meetings. Tyler Baldwin, Joyce Y. Chai, Katrin Kirchhoff. 211-218 [doi]
- Realtime meeting analysis and 3D meeting viewer based on omnidirectional multimodal sensors. Kazuhiro Otsuka, Shoko Araki, Dan Mikami, Kentaro Ishizuka, Masakiyo Fujimoto, Junji Yamato. 219-220 [doi]
- Guiding hand: a teaching tool for handwriting. Nalini Vishnoi, Cody Narber, Zoran Duric, Naomi Lynn Gerber. 221-222 [doi]
- A multimedia retrieval system using speech input. Andrei Popescu-Belis, Peter Poller, Jonathan Kilgour. 223-224 [doi]
- Navigation with a passive brain based interface. Jan B. F. Van Erp, Peter J. Werkhoven, Marieke E. Thurlings, Anne-Marie Brouwer. 225-226 [doi]
- A multimodal predictive-interactive application for computer assisted transcription and translation. Vicente Alabau, Daniel Ortiz, Verónica Romero, Jorge Ocampo. 227-228 [doi]
- Multi-modal communication system. Victor S. Finomore, Dianne K. Popik, Douglas Brungart, Brian D. Simpson. 229-230 [doi]
- HephaisTK: a toolkit for rapid prototyping of multimodal interfaces. Bruno Dumas, Denis Lalanne, Rolf Ingold. 231-232 [doi]
- State: an assisted document transcription system. David Llorens, Andrés Marzal, Federico Prat, Juan Miguel Vilar. 233-234 [doi]
- Demonstration: first steps in emotional expression of the humanoid robot Nao. Jérôme Monceaux, Joffrey Becker, Céline Boudier, Alexandre Mazel. 235-236 [doi]
- WiiNote: multimodal application facilitating multi-user photo annotation activity. Elena Mugellini, Maria Sokhn, Stefano Carrino, Omar Abou Khaled. 237-238 [doi]
- Are gesture-based interfaces the future of human computer interaction? Frédéric Kaplan. 239-240 [doi]
- Providing expressive eye movement to virtual agents. Zheng Li, Xia Mao, Lei Liu. 241-244 [doi]
- Mediated attention with multimodal augmented reality. Angelika Dierker, Christian Mertes, Thomas Hermann, Marc Hanheide, Gerhard Sagerer. 245-252 [doi]
- Grounding spatial prepositions for video search. Stefanie Tellex, Deb Roy. 253-260 [doi]
- Multi-modal and multi-camera attention in smart environments. Boris Schauerte, Jan Richarz, Thomas Plötz, Christian Thurau, Gernot A. Fink. 261-268 [doi]
- RVDT: a design space for multiple input devices, multiple views and multiple display surfaces combination. Rami Ajaj, Christian Jacquemin, Frédéric Vernier. 269-276 [doi]
- Learning and predicting multimodal daily life patterns from cell phones. Katayoun Farrahi, Daniel Gatica-Perez. 277-280 [doi]
- Visual based picking supported by context awareness: comparing picking performance using paper-based lists versus lists presented on a head mounted display with contextual support. Hendrik Iben, Hannes Baumann, Carmen Ruthenbeck, Tobias Klug. 281-288 [doi]
- Adaptation from partially supervised handwritten text transcriptions. Nicolás Serrano, Daniel Pérez, Alberto Sanchís, Alfons Juan. 289-292 [doi]
- Recognizing events with temporal random forests. David Demirdjian, Chenna Varri. 293-296 [doi]
- Activity-aware ECG-based patient authentication for remote health monitoring. Janani C. Sriram, Minho Shin, Tanzeem Choudhury, David Kotz. 297-304 [doi]
- GaZIR: gaze-based zooming interface for image retrieval. László Kozma, Arto Klami, Samuel Kaski. 305-312 [doi]
- Voice key board: multimodal Indic text input. Prasenjit Dey, Ramchandrula Sitaram, Rahul Ajmera, Kalika Bali. 313-318 [doi]
- Evaluating the effect of temporal parameters for vibrotactile saltatory patterns. Jukka Raisamo, Roope Raisamo, Veikko Surakka. 319-326 [doi]
- Mapping information to audio and tactile icons. Eve E. Hoggan, Roope Raisamo, Stephen A. Brewster. 327-334 [doi]
- Augmented reality target finding based on tactile cues. Teemu Tuomas Ahmaniemi, Vuokko Lantz. 335-342 [doi]
- Speaker change detection with privacy-preserving audio cues. Sree Hari Krishnan Parthasarathi, Mathew Magimai-Doss, Daniel Gatica-Perez, Hervé Bourlard. 343-346 [doi]
- MirrorTrack: tracking with reflection - comparison with top-down approach. Yannick Verdie, Bing Fang, Francis K. H. Quek. 347-350 [doi]
- A framework for continuous multimodal sign language recognition. Daniel Kelly, Jane Reilly Delannoy, John McDonald, Charles Markham. 351-358 [doi]