- Still looking at people. David A. Forsyth. 1-2 [doi]
- Mining multimodal sequential patterns: a case study on affect detection. Héctor Perez Martínez, Georgios N. Yannakakis. 3-10 [doi]
- Crowdsourced data collection of facial responses. Daniel McDuff, Rana El Kaliouby, Rosalind W. Picard. 11-18 [doi]
- A systematic discussion of fusion techniques for multi-modal affect recognition tasks. Florian Lingenfelser, Johannes Wagner, Elisabeth André. 19-26 [doi]
- Adaptive facial expression recognition using inter-modal top-down context. Ravi Kiran Sarvadevabhatla, Mitchel Benovoy, Sam Musallam, Victor Ng-Thow-Hing. 27-34 [doi]
- Brain-computer interaction: can multimodality help? Anton Nijholt, Brendan Z. Allison, Rob J. K. Jacob. 35-40 [doi]
- Modality switching and performance in a thought and speech controlled computer game. Hayrettin Gürkök, Gido Hakvoort, Mannes Poel. 41-48 [doi]
- An approach towards human-robot-human interaction using a hybrid brain-computer interface. Nils Hachmeister, Hannes Riechmann, Helge Ritter, Andrea Finke. 49-52 [doi]
- Towards multimodal error responses: a passive BCI for the detection of auditory errors. Thorsten O. Zander, Marius David Klippel, Reinhold Scherer. 53-56 [doi]
- Pseudo-haptics: from the theoretical foundations to practical system design guidelines. Andreas Pusch, Anatole Lécuyer. 57-64 [doi]
- 6th senses for everyone!: the value of multimodal feedback in handheld navigation aids. Martin Pielot, Benjamin Poppinga, Wilko Heuten, Susanne Boll. 65-72 [doi]
- Adding haptic feedback to touch screens at the right time. Yi Yang, Yuru Zhang, Zhu Hou, Betty Lemaire-Semail. 73-80 [doi]
- Robust user context analysis for multimodal interfaces. Prasenjit Dey, Muthuselvam Selvaraj, Bowon Lee. 81-88 [doi]
- The picture says it all!: multimodal interactions and interaction metadata. Ramadevi Vennelakanti, Prasenjit Dey, Ankit Shekhawat, Phanindra Pisupati. 89-96 [doi]
- Mudra: a unified multimodal interaction framework. Lode Hoste, Bruno Dumas, Beat Signer. 97-104 [doi]
- Humans and smart environments: a novel multimodal interaction approach. Stefano Carrino, Alexandre Péclat, Elena Mugellini, Omar Abou Khaled, Rolf Ingold. 105-112 [doi]
- Exploiting Petri-net structure for activity classification and user instruction within an industrial setting. Simon F. Worgan, Ardhendu Behera, Anthony G. Cohn, David C. Hogg. 113-120 [doi]
- JerkTilts: using accelerometers for eight-choice selection on mobile devices. Mathias Baglioni, Eric Lecolinet, Yves Guiard. 121-128 [doi]
- On multimodal interactive machine translation using speech recognition. Vicent Alabau, Luis Rodríguez-Ruiz, Alberto Sanchís, Pascual Martínez-Gómez, Francisco Casacuberta. 129-136 [doi]
- Multimodal segmentation of object manipulation sequences with product models. Alexandra Barchunova, Robert Haschke, Mathias Franzius, Helge Ritter. 137-144 [doi]
- Could a dialog save your life?: analyzing the effects of speech interaction strategies while driving. Akos Vetek, Saija Lemmelä. 145-152 [doi]
- Decisions about turns in multiparty conversation: from perception to action. Dan Bohus, Eric Horvitz. 153-160 [doi]
- Evaluation of user gestures in multi-touch interaction: a case study in pair-programming. Alessandro Soro, Samuel Aldo Iacolina, Riccardo Scateni, Selene Uras. 161-168 [doi]
- Towards multimodal sentiment analysis: harvesting opinions from the web. Louis-Philippe Morency, Rada Mihalcea, Payal Doshi. 169-176 [doi]
- The impact of unwanted multimodal notifications. David Warnock, Marilyn Rose McGee-Lennon, Stephen A. Brewster. 177-184 [doi]
- Freeform pen-input as evidence of cognitive load and expertise. Natalie Ruiz, Ronnie Taib, Fang Chen. 185-188 [doi]
- Acquisition of dynamically revealed multimodal targets. Teemu Tuomas Ahmaniemi. 189-192 [doi]
- Emotional responses to thermal stimuli. Katri Salminen, Veikko Surakka, Jukka Raisamo, Jani Lylykangas, Johannes Pystynen, Roope Raisamo, Kalle Mäkelä, Teemu Tuomas Ahmaniemi. 193-196 [doi]
- An active learning scenario for interactive machine translation. Jesús González-Rubio, Daniel Ortiz-Martínez, Francisco Casacuberta. 197-200 [doi]
- Move, and I will tell you who you are: detecting deceptive roles in low-quality data. Nimrod Raiman, Hayley Hung, Gwenn Englebienne. 201-204 [doi]
- Multimodal person independent recognition of workload related biosignal patterns. Jan-Philip Jarvis, Felix Putze, Dominic Heger, Tanja Schultz. 205-208 [doi]
- Study of different interactive editing operations in an assisted transcription system. Verónica Romero, Alejandro Hector Toselli, Enrique Vidal. 209-212 [doi]
- Dynamic perception-production oscillation model in human-machine communication. Igor Jauk, Ipke Wachsmuth, Petra Wagner. 213-216 [doi]
- The effect of clothing on thermal feedback perception. Martin Halvey, Graham Wilson, Yolanda Vazquez-Alvarez, Stephen A. Brewster, Stephen A. Hughes. 217-220 [doi]
- Comparing multi-touch interaction techniques for manipulation of an abstract parameter space. Sashikanth Damaraju, Andruid Kerne. 221-224 [doi]
- A general framework for incremental processing of multimodal inputs. Afshin Ameri Ekhtiarabadi, Batu Akan, Baran Çürüklü, Lars Asplund. 225-228 [doi]
- Learning in and from humans: recalibration makes (the) perfect sense. Marc O. Ernst. 229-230 [doi]
- Detecting F-formations as dominant sets. Hayley Hung, Ben J. A. Kröse. 231-238 [doi]
- Toward multimodal situated analysis. Chreston A. Miller, Francis K. H. Quek. 239-246 [doi]
- Finding audio-visual events in informal social gatherings. Xavier Alameda-Pineda, Vasil Khalidov, Radu Horaud, Florence Forbes. 247-254 [doi]
- Please, tell me about yourself: automatic personality assessment using short self-presentations. Ligia Maria Batrinca, Nadia Mana, Bruno Lepri, Fabio Pianesi, Nicu Sebe. 255-262 [doi]
- Gesture-aware remote controls: guidelines and interaction technique. Gilles Bailly, Dong-Bach Vo, Eric Lecolinet, Yves Guiard. 263-270 [doi]
- The effect of sampling rate on the performance of template-based gesture recognizers. Radu-Daniel Vatavu. 271-278 [doi]
- American sign language recognition with the Kinect. Zahoor Zafrulla, Helene Brashear, Thad Starner, Harley Hamilton, Peter Presti. 279-286 [doi]
- Perceived physicality in audio-enhanced force input. Chi-Hsia Lai, Matti Niinimäki, Koray Tahiroglu, Johan Kildal, Teemu Tuomas Ahmaniemi. 287-294 [doi]
- BeeParking: an ambient display to induce cooperative parking behavior. Silvia Gabrielli, Rosa Maimone, Michele Marchesoni, Jesús Muñoz. 295-298 [doi]
- Speech interaction in a multimodal tool for handwritten text transcription. María José Castro Bleda, Salvador España Boquera, David Llorens, Andrés Marzal, Federico Prat, Juan Miguel Vilar, Francisco Zamora-Martínez. 299-302 [doi]
- Digital pen in mammography patient forms. Daniel Sonntag, Marcus Liwicki, Markus Weber. 303-306 [doi]
- MozArt: a multimodal interface for conceptual 3D modeling. Anirudh Sharma, Sriganesh Madhvanath, Ankit Shekhawat, Mark Billinghurst. 307-310 [doi]
- Query refinement suggestion in multimodal image retrieval with relevance feedback. Luis A. Leiva, Mauricio Villegas, Roberto Paredes. 311-314 [doi]
- A multimodal music transcription prototype: first steps in an interactive prototype development. Tomás Pérez-García, José Manuel Iñesta Quereda, Pedro J. Ponce de León, Antonio Pertusa. 315-318 [doi]
- Socially assisted multi-view video viewer. Kenji Mase, Kosuke Niwa, Takafumi Marutani. 319-322 [doi]
- Long-term socially perceptive and interactive robot companions: challenges and future perspectives. Ruth Aylett, Ginevra Castellano, Bogdan Raducanu, Ana Paiva, Marc Hanheide. 323-326 [doi]
- Living with a robot companion: empirical study on the interaction with an artificial health advisor. Astrid M. von der Pütten, Nicole C. Krämer, Sabrina Eimler. 327-334 [doi]
- Child-robot interaction in the wild: advice to the aspiring experimenter. Raquel Ros, Marco Nalin, Rachel Wood, Paul Baxter, Rosemarijn Looije, Yiannis Demiris, Tony Belpaeme, Alessio Giusti, Clara Pozzi. 335-342 [doi]
- Characterization of coordination in an imitation task: human evaluation and automatically computable cues. Emilie Delaherche, Mohamed Chetouani. 343-350 [doi]
- The sounds of social life: observing humans in their natural habitat. Matthias R. Mehl. 351-352 [doi]
- Smartphone usage in the wild: a large-scale analysis of applications and context. Trinh Minh Tri Do, Jan Blom, Daniel Gatica-Perez. 353-360 [doi]
- Multimodal mobile interactions: usability studies in real world settings. Julie R. Williamson, Andrew Crossan, Stephen A. Brewster. 361-368 [doi]
- Service-oriented autonomic multimodal interaction in a pervasive environment. Pierre-Alain Avouac, Philippe Lalanda, Laurence Nigay. 369-376 [doi]
- Evaluation of graphical user-interfaces for order picking using head-mounted displays. Hannes Baumann, Thad Starner, Hendrik Iben, Anna Lewandowski, Patrick Zschaler. 377-384 [doi]
- Modeling parallel state charts for multithreaded multimodal dialogues. Gregor Mehlmann, Birgit Endraß, Elisabeth André. 385-392 [doi]
- Virtual worlds and active learning for human detection. David Vázquez, Antonio M. López, Daniel Ponsa, Javier Marín. 393-400 [doi]
- Making virtual conversational agent aware of the addressee of users' utterances in multi-user conversation using nonverbal information. Hung-Hsuan Huang, Naoya Baba, Yukiko I. Nakano. 401-408 [doi]
- Temporal binding of multimodal controls for dynamic map displays: a systems approach. Ellen C. Haas, Krishna S. Pillalamarri, Chris Stachowiak, Gardner McCullough. 409-416 [doi]