- The co-operative, transformative organization of human action and knowledge. Charles Goodwin. 1-2 [doi]
- Two people walk into a bar: dynamic multi-party social interaction with a robot agent. Mary Ellen Foster, Andre Gaschler, Manuel Giuliani, Amy Isard, Maria Pateraki, Ronald P. A. Petrick. 3-10 [doi]
- Changes in verbal and nonverbal conversational behavior in long-term interaction. Daniel Schulman, Timothy W. Bickmore. 11-18 [doi]
- I already know your answer: using nonverbal behaviors to predict immediate outcomes in a dyadic negotiation. Sunghyun Park, Jonathan Gratch, Louis-Philippe Morency. 19-22 [doi]
- Modeling dominance effects on nonverbal behaviors using granger causality. Kyriaki Kalimeri, Bruno Lepri, Oya Aran, Dinesh Babu Jayagopi, Daniel Gatica-Perez, Fabio Pianesi. 23-26 [doi]
- Multimodal human behavior analysis: learning correlation and interaction across modalities. Yale Song, Louis-Philippe Morency, Randall Davis. 27-30 [doi]
- Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies. Sidney K. D'Mello, Jacqueline Kory. 31-38 [doi]
- Multimodal recognition of personality traits in human-computer collaborative tasks. Ligia Maria Batrinca, Bruno Lepri, Nadia Mana, Fabio Pianesi. 39-46 [doi]
- Automatic detection of pain intensity. Zakia Hammal, Jeffrey F. Cohn. 47-52 [doi]
- FaceTube: predicting personality from facial expressions of emotion in online conversational video. Joan-Isaac Biel, Lucia Teijeiro-Mosquera, Daniel Gatica-Perez. 53-56 [doi]
- The blue one to the left: enabling expressive user interaction in a multimodal interface for object selection in virtual 3d environments. Pulkit Budhiraja, Sriganesh Madhvanath. 57-58 [doi]
- Pixene: creating memories while sharing photos. Ramadevi Vennelakanti, Sriganesh Madhvanath, Anbumani Subramanian, Ajith Sowndararajan, Arun David, Prasenjit Dey. 59-60 [doi]
- Designing multiuser multimodal gestural interactions for the living room. Sriganesh Madhvanath, Ramadevi Vennelakanti, Anbumani Subramanian, Ankit Shekhawat, Prasenjit Dey, Amit Rajan. 61-62 [doi]
- Using explanations for runtime dialogue adaptation. Florian Nothdurft, Frank Honold, Peter Kurzok. 63-64 [doi]
- NeuroDialog: an EEG-enabled spoken dialog interface. Seshadri Sridharan, Yun-Nung Chen, Kai-min Chang, Alexander I. Rudnicky. 65-66 [doi]
- Companion technology for multimodal interaction. Frank Honold, Felix Schüssel, Florian Nothdurft, Peter Kurzok. 67-68 [doi]
- IrisTK: a statechart-based toolkit for multi-party face-to-face interaction. Gabriel Skantze, Samer Al Moubayed. 69-76 [doi]
- Estimating conversational dominance in multiparty interaction. Yukiko Nakano, Yuki Fukuhara. 77-84 [doi]
- Learning relevance from natural eye movements in pervasive interfaces. Melih Kandemir, Samuel Kaski. 85-92 [doi]
- Fishing or a Z?: investigating the effects of error on mimetic and alphabet device-based gesture interaction. Abdallah El-Ali, Johan Kildal, Vuokko Lantz. 93-100 [doi]
- Structural and temporal inference search (STIS): pattern identification in multimodal data. Chreston A. Miller, Louis-Philippe Morency, Francis K. H. Quek. 101-108 [doi]
- Integrating word acquisition and referential grounding towards physical world interaction. Rui Fang, Changsong Liu, Joyce Yue Chai. 109-116 [doi]
- Effects of modality on virtual button motion and performance. Adam Faeth, Chris Harding. 117-124 [doi]
- Modeling multimodal integration with event logic charts. Gregor Ulrich Mehlmann, Elisabeth André. 125-132 [doi]
- Multimodal motion guidance: techniques for adaptive and dynamic feedback. Christian Schönauer, Kenichiro Fukushi, Alex Olwal, Hannes Kaufmann, Ramesh Raskar. 133-140 [doi]
- Multimodal detection of salient behaviors of approach-avoidance in dyadic interactions. Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth Narayanan. 141-144 [doi]
- Multimodal analysis of the implicit affective channel in computer-mediated textual communication. Joseph F. Grafsgaard, Robert M. Fulton, Kristy Elizabeth Boyer, Eric N. Wiebe, James C. Lester. 145-152 [doi]
- Towards sensing the influence of visual narratives on human affect. Mihai Burzo, Daniel McDuff, Rada Mihalcea, Louis-Philippe Morency, Alexis Narvaez, Verónica Pérez-Rosas. 153-160 [doi]
- Integrating video and accelerometer signals for nocturnal epileptic seizure detection. Kris Cuppens, Chih-Wei Chen, Kevin Bing-Yung Wong, Anouk Van de Vel, Lieven Lagae, Berten Ceulemans, Tinne Tuytelaars, Sabine Van Huffel, Bart Vanrumste, Hamid K. Aghajan. 161-164 [doi]
- GeoGazemarks: providing gaze history for the orientation on small display maps. Ioannis Giannopoulos, Peter Kiefer, Martin Raubal. 165-172 [doi]
- Lost in navigation: evaluating a mobile map app for a fair. Anders Bouwer, Frank Nack, Abdallah El-Ali. 173-180 [doi]
- An evaluation of game controllers and tablets as controllers for interactive tv applications. Dale Cox, Justin Wolford, Carlos Jensen, Dedrie Beardsley. 181-188 [doi]
- Towards multimodal deception detection - step 1: building a collection of deceptive videos. Rada Mihalcea, Mihai Burzo. 189-192 [doi]
- A portable audio/video recorder for longitudinal study of child development. Soroush Vosoughi, Matthew S. Goodwin, Bill Washabaugh, Deb Roy. 193-200 [doi]
- Integrating PAMOCAT in the research cycle: linking motion capturing and conversation analysis. Bernhard Andreas Brüning, Christian Schnier, Karola Pitsch, Sven Wachsmuth. 201-208 [doi]
- Motion retrieval based on kinetic features in large motion database. Tianyu Huang, Haiying Liu, Gangyi Ding. 209-216 [doi]
- Vision-based handwriting recognition for unrestricted text input in mid-air. Alexander Schick, Daniel Morlock, Christoph Amma, Tanja Schultz, Rainer Stiefelhagen. 217-220 [doi]
- Investigating the midline effect for visual focus of attention recognition. Samira Sheikhi, Jean-Marc Odobez. 221-224 [doi]
- Let's have dinner together: evaluate the mediated co-dining experience. Jun Wei, Adrian David Cheok, Ryohei Nakatsu. 225-228 [doi]
- Infusing the physical world into user interfaces. Ivan Poupyrev. 229-230 [doi]
- Child-computer interaction: ICMI 2012 special session. Anton Nijholt. 231-232 [doi]
- Knowledge gaps in hands-on tangible interaction research. Alissa Nicole Antle. 233-240 [doi]
- Evaluating artefacts with children: age and technology effects in the reporting of expected and experienced fun. Janet C. Read. 241-248 [doi]
- Measuring enjoyment of an interactive museum experience. Elisabeth M. A. G. van Dijk, Andreas Lingnau, Hub Kockelkorn. 249-256 [doi]
- Bifocal modeling: a study on the learning outcomes of comparing physical and computational models linked in real time. Paulo Blikstein. 257-264 [doi]
- Connecting play: understanding multimodal participation in virtual worlds. Yasmin B. Kafai, Deborah A. Fields. 265-272 [doi]
- Gestures as point clouds: a $P recognizer for user interface prototypes. Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock. 273-280 [doi]
- Influencing gestural representation of eventualities: insights from ontology. Magdalena Lis. 281-288 [doi]
- Using self-context for multimodal detection of head nods in face-to-face interactions. Laurent Nguyen, Jean-Marc Odobez, Daniel Gatica-Perez. 289-292 [doi]
- Multimodal multiparty social interaction with the furhat head. Samer Al Moubayed, Gabriel Skantze, Jonas Beskow, Kalin Stefanov, Joakim Gustafson. 293-294 [doi]
- An avatar-based help system for a grid computing web portal. Helmut Lang, Florian Nothdurft. 295-296 [doi]
- GamEMO: how physiological signals show your emotions and enhance your game experience. Guillaume Chanel, Konstantina Kalogianni, Thierry Pun. 297-298 [doi]
- Multimodal collaboration for crime scene investigation in mediated reality. Dragos Datcu, Thomas Swart, Stephan Lukosch, Zoltán Rusák. 299-300 [doi]
- PAMOCAT: linking motion capturing and conversation analysis. Bernhard Andreas Brüning, Christian Schnier. 301-302 [doi]
- Multimodal dialogue in mobile local search. Patrick Ehlen, Michael Johnston. 303-304 [doi]
- Toward an argumentation-based dialogue framework for human-robot collaboration. Mohammad Q. Azhar. 305-308 [doi]
- Timing multimodal turn-taking for human-robot cooperation. Crystal Chao. 309-312 [doi]
- My automated conversation helper (MACH): helping people improve social skills. Mohammed E. Hoque. 313-316 [doi]
- A touch of affect: mediated social touch and affect. Gijs Huisman. 317-320 [doi]
- Depression analysis: a multimodal approach. Jyoti Joshi. 321-324 [doi]
- Design space for finger gestures with hand-held tablets. Katrin Wolf. 325-328 [doi]
- Multi-modal interfaces for control of assistive robotic devices. Christopher McMurrough. 329-332 [doi]
- Space, speech, and gesture in human-robot interaction. Ross Mead. 333-336 [doi]
- Machine analysis and recognition of social contexts. Maria F. O'Connor. 337-340 [doi]
- Task-learning policies for collaborative task solving in human-robot interaction. Hae Won Park. 341-344 [doi]
- Simulating real danger?: validation of driving simulator test and psychological factors in brake response time to danger. Daniele Ruscio. 345-348 [doi]
- Virtual patients to teach cultural competency. Raghavi Sakpal. 349-352 [doi]
- Multimodal learning analytics: enabling the future of learning through multimodal data analysis and interfaces. Marcelo Worsley. 353-356 [doi]
- A hierarchical approach to continuous gesture analysis for natural multi-modal interaction. Ying Yin. 357-360 [doi]
- AVEC 2012: the continuous audio/visual emotion challenge - an introduction. Björn Schuller, Michel François Valstar, Roddy Cowie, Maja Pantic. 361-362 [doi]
- ICMI'12 grand challenge: haptic voice recognition. Khe Chai Sim, Shengdong Zhao, Kai Yu, Hank Liao. 363-370 [doi]
- Audio-visual robot command recognition: D-META'12 grand challenge. Jordi Sanchez-Riera, Xavier Alameda-Pineda, Radu Horaud. 371-378 [doi]
- Brain computer interfaces as intelligent sensors for enhancing human-computer interaction. Mannes Poel, Femke Nijboer, Egon L. van den Broek, Stephen H. Fairclough, Anton Nijholt. 379-382 [doi]
- Using psychophysical techniques to design and evaluate multimodal interfaces: psychophysics and interface design. Roberta L. Klatzky. 383-384 [doi]
- Reproducing materials of virtual elements on touchscreens using supplemental thermal feedback. Hendrik Richter, Doris Hausen, Sven Osterwald, Andreas Butz. 385-392 [doi]
- Feeling it: the roles of stiffness, deformation range and feedback in the control of deformable ui. Johan Kildal, Graham Wilson. 393-400 [doi]
- Audible rendering of text documents controlled by multi-touch interaction. Yasmine N. El-Glaly, Francis K. H. Quek, Tonya L. Smith-Jackson, Gurjot Dhillon. 401-408 [doi]
- Taste/IP: the sensation of taste for digital communication. Nimesha Ranasinghe, Adrian David Cheok, Ryohei Nakatsu. 409-416 [doi]
- Learning speaker, addressee and overlap detection models from multimodal streams. Oriol Vinyals, Dan Bohus, Rich Caruana. 417-424 [doi]
- Analysis of the correlation between the regularity of work behavior and stress indices based on longitudinal behavioral data. Shogo Okada, Yusaku Sato, Yuki Kamiya, Keiji Yamada, Katsumi Nitta. 425-432 [doi]
- Linking speaking and looking behavior patterns with group composition, perception, and performance. Dinesh Babu Jayagopi, Dairazalia Sanchez-Cortes, Kazuhiro Otsuka, Junji Yamato, Daniel Gatica-Perez. 433-440 [doi]
- Semi-automatic generation of multimodal user interfaces for dialogue-based interactive systems. Dominik Ertl, Hermann Kaindl. 441-444 [doi]
- Designing multimodal reminders for the home: pairing content with presentation. Julie Rico Williamson, Marilyn Rose McGee-Lennon, Stephen A. Brewster. 445-448 [doi]
- AVEC 2012: the continuous audio/visual emotion challenge. Björn Schuller, Michel Valstar, Florian Eyben, Roddy Cowie, Maja Pantic. 449-456 [doi]
- Facial emotion recognition with expression energy. Albert C. Cruz, Bir Bhanu, Ninad Thakoor. 457-464 [doi]
- Multiple classifier combination using reject options and markov fusion networks. Michael Glodek, Martin Schels, Günther Palm, Friedhelm Schwenker. 465-472 [doi]
- Audio-visual emotion challenge 2012: a simple approach. Laurens van der Maaten. 473-476 [doi]
- Step-wise emotion recognition using concatenated-HMM. Derya Ozkan, Stefan Scherer, Louis-Philippe Morency. 477-484 [doi]
- Combining video, audio and lexical indicators of affect in spontaneous conversation via particle filtering. Arman Savran, Houwei Cao, Miraj Shah, Ani Nenkova, Ragini Verma. 485-492 [doi]
- A multimodal fuzzy inference system using a continuous facial expression representation for emotion detection. Catherine Soladié, Hanan Salam, Catherine Pelachaud, Nicolas Stoiber, Renaud Séguier. 493-500 [doi]
- Robust continuous prediction of human emotions using multiscale dynamic cues. Jérémie Nicolle, Vincent Rapp, Kevin Bailly, Lionel Prevost, Mohamed Chetouani. 501-508 [doi]
- Elastic net for paralinguistic speech recognition. Pouria Fewzee, Fakhri Karray. 509-516 [doi]
- Improving generalisation and robustness of acoustic affect recognition. Florian Eyben, Björn Schuller, Gerhard Rigoll. 517-522 [doi]
- Preserving actual dynamic trend of emotion in dimensional speech emotion recognition. Wenjing Han, Haifeng Li, Florian Eyben, Lin Ma, Jiayin Sun, Björn Schuller. 523-528 [doi]
- Negative sentiment in scenarios elicit pupil dilation response: an auditory study. Serdar Baltaci, Didem Gokcay. 529-532 [doi]
- Design and implementation of the note-taking style haptic voice recognition for mobile devices. Seungwhan Moon, Khe Chai Sim. 533-538 [doi]
- Development of the 2012 SJTU HVR system. Hainan Xu, Yuchen Fan, Kai Yu. 539-544 [doi]
- Improving mandarin predictive text input by augmenting pinyin initials with speech and tonal information. Guangsen Wang, Bo Li, Shilin Liu, Xuancong Wang, Xiaoxuan Wang, Khe Chai Sim. 545-550 [doi]
- LUI: lip in multimodal mobile GUI interaction. Maryam Azh, Shengdong Zhao. 551-554 [doi]
- Speak-as-you-swipe (SAYS): a multimodal interface combining speech and gesture keyboard synchronously for continuous mobile text entry. Khe Chai Sim. 555-560 [doi]
- Interpersonal biocybernetics: connecting through social psychophysiology. Alan T. Pope, Chad L. Stephens. 561-566 [doi]
- Adaptive EEG artifact rejection for cognitive games. Olexiy Kyrgyzov, Antoine Souloumiac. 567-570 [doi]
- Construction of the biocybernetic loop: a case study. Stephen H. Fairclough, Kiel Mark Gilleade. 571-578 [doi]
- An interactive control strategy is more robust to non-optimal classification boundaries. Virginia R. de Sa. 579-586 [doi]
- Improving BCI performance after classification. Danny Plass-Oude Bos, Hayrettin Gürkök, Boris Reuderink, Mannes Poel. 587-594 [doi]
- Electroencephalographic detection of visual saliency of motion towards a practical brain-computer interface for video analysis. Matthew Weiden, Deepak Khosla, Matthew Keegan. 601-606 [doi]
- Workshop on speech and gesture production in virtually and physically embodied conversational agents. Ross Mead, Maha Salem. 607-608 [doi]
- 1st international workshop on multimodal learning analytics: extended abstract. Stefan Scherer, Marcelo Worsley, Louis-Philippe Morency. 609-610 [doi]
- 4th workshop on eye gaze in intelligent human machine interaction: eye gaze and multimodality. Yukiko I. Nakano, Kristiina Jokinen, Hung-Hsuan Huang. 611-612 [doi]
- The 3rd international workshop on social behaviour in music: SBM2012. Antonio Camurri, Donald Glowinski, Maurizio Mancini, Giovanna Varni, Gualtiero Volpe. 613-614 [doi]
- Smart material interfaces: a material step to the future. Anton Nijholt, Leonardo Giusti, Andrea Minuto, Patrizia Marti. 615-616 [doi]