- Behavior imaging and the study of autism. James M. Rehg. 1-2 [doi]
- On the relationship between head pose, social attention and personality prediction for unstructured and dynamic group interactions. Subramanian Ramanathan, Yan Yan, Jacopo Staiano, Oswald Lanz, Nicu Sebe. 3-10 [doi]
- One of a kind: inferring personality impressions in meetings. Oya Aran, Daniel Gatica-Perez. 11-18 [doi]
- Who is persuasive?: the role of perceived personality and communication modality in social multimedia. Gelareh Mohammadi, Sunghyun Park, Kenji Sagae, Alessandro Vinciarelli, Louis-Philippe Morency. 19-26 [doi]
- Going beyond traits: multimodal classification of personality states in the wild. Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi. 27-34 [doi]
- Implementation and evaluation of a multimodal addressee identification mechanism for multiparty conversation systems. Yukiko I. Nakano, Naoya Baba, Hung-Hsuan Huang, Yuki Hayashi. 35-42 [doi]
- Managing chaos: models of turn-taking in character-multichild interactions. Iolanda Leite, Hannaneh Hajishirzi, Sean Andrist, Jill Fain Lehman. 43-50 [doi]
- Speaker-adaptive multimodal prediction model for listener responses. Iwan de Kok, Dirk Heylen, Louis-Philippe Morency. 51-58 [doi]
- User experiences of mobile audio conferencing with spatial audio, haptics and gestures. Jussi Rantala, Sebastian Müller, Roope Raisamo, Katja Suhonen, Kaisa Väänänen-Vainio-Mattila, Vuokko Lantz. 59-66 [doi]
- A framework for multimodal data collection, visualization, annotation and learning. Anne Loomis Thompson, Dan Bohus. 67-68 [doi]
- Demonstration of sketch-thru-plan: a multimodal interface for command and control. Philip R. Cohen, M. Cecelia Buchanan, Edward C. Kaiser, Michael Corrigan, Scott Lind, Matt Wesson. 69-70 [doi]
- Robotic learning companions for early language development. Jacqueline M. Kory, Sooyeon Jeong, Cynthia Breazeal. 71-72 [doi]
- WikiTalk human-robot interactions. Graham Wilcock, Kristiina Jokinen. 73-74 [doi]
- Saliency-guided 3D head pose estimation on 3D expression models. Peng Liu, Michael Reale, Xing Zhang, Lijun Yin. 75-78 [doi]
- Predicting next speaker and timing from gaze transition patterns in multi-party meetings. Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato. 79-86 [doi]
- A semi-automated system for accurate gaze coding in natural dyadic interactions. Kenneth Alberto Funes Mora, Laurent Son Nguyen, Daniel Gatica-Perez, Jean-Marc Odobez. 87-90 [doi]
- Evaluating the robustness of an appearance-based gaze estimation method for multimodal interfaces. Nanxiang Li, Carlos Busso. 91-98 [doi]
- A gaze-based method for relating group involvement to individual engagement in multimodal multiparty dialogue. Catharine Oertel, Giampiero Salvi. 99-106 [doi]
- Leveraging the robot dialog state for visual focus of attention recognition. Samira Sheikhi, Vasil Khalidov, David Klotz, Britta Wrede, Jean-Marc Odobez. 107-110 [doi]
- CoWME: a general framework to evaluate cognitive workload during multimodal interaction. Davide Maria Calandra, Antonio Caso, Francesco Cutugno, Antonio Origlia, Silvia Rossi. 111-118 [doi]
- Hi YouTube!: personality impressions and verbal content in social video. Joan-Isaac Biel, Vagia Tsiminaki, John Dines, Daniel Gatica-Perez. 119-126 [doi]
- Cross-domain personality prediction: from video blogs to small group meetings. Oya Aran, Daniel Gatica-Perez. 127-130 [doi]
- Automatic detection of deceit in verbal communication. Rada Mihalcea, Verónica Pérez-Rosas, Mihai Burzo. 131-134 [doi]
- Audiovisual behavior descriptors for depression assessment. Stefan Scherer, Giota Stratou, Louis-Philippe Morency. 135-140 [doi]
- A Markov logic framework for recognizing complex events from multimodal data. Young Chol Song, Henry A. Kautz, James F. Allen, Mary Swift, Yuncheng Li, Jiebo Luo, Ce Zhang. 141-148 [doi]
- Interactive relevance search and modeling: support for expert-driven analysis of multimodal data. Chreston A. Miller, Francis K. H. Quek, Louis-Philippe Morency. 149-156 [doi]
- Predicting speech overlaps from speech tokens and co-occurring body behaviours in dyadic conversations. Costanza Navarretta. 157-164 [doi]
- Interaction analysis and joint attention tracking in augmented reality. Alexander Neumann, Christian Schnier, Thomas Hermann, Karola Pitsch. 165-172 [doi]
- Mo!Games: evaluating mobile gestures in the wild. Julie Rico Williamson, Stephen A. Brewster, Rama Vennelakanti. 173-180 [doi]
- Timing and entrainment of multimodal backchanneling behavior for an embodied conversational agent. Benjamin Inden, Zofia Malisz, Petra Wagner, Ipke Wachsmuth. 181-188 [doi]
- Video analysis of approach-avoidance behaviors of teenagers speaking with virtual agents. David Antonio Gómez Jáuregui, Léonor Philip, Céline Clavel, Stéphane Padovani, Mahin Bailly, Jean-Claude Martin. 189-196 [doi]
- A dialogue system for multimodal human-robot interaction. Lorenzo Lucignano, Francesco Cutugno, Silvia Rossi, Alberto Finzi. 197-204 [doi]
- The zigzag paradigm: a new P300-based brain computer interface. Qasem Obeidat, Tom Campbell, Jun Kong. 205-212 [doi]
- SpeeG2: a speech- and gesture-based interface for efficient controller-free text input. Lode Hoste, Beat Signer. 213-220 [doi]
- Interfaces for thinkers: computer input capabilities that support inferential reasoning. Sharon Oviatt. 221-228 [doi]
- Adaptive timeline interface to personal history data. Antti Ajanki, Markus Koskela, Jorma Laaksonen, Samuel Kaski. 229-236 [doi]
- Learning a sparse codebook of facial and body microexpressions for emotion recognition. Yale Song, Louis-Philippe Morency, Randall Davis. 237-244 [doi]
- Giving interaction a hand: deep models of co-speech gesture in multimodal systems. Stefan Kopp. 245-246 [doi]
- Five key challenges in end-user development for tangible and embodied interaction. Daniel Tetteroo, Iris Soute, Panos Markopoulos. 247-254 [doi]
- "How can I help you?": comparing engagement classification strategies for a robot bartender. Mary Ellen Foster, Andre Gaschler, Manuel Giuliani. 255-262 [doi]
- Comparing task-based and socially intelligent behaviour in a robot bartender. Manuel Giuliani, Ronald P. A. Petrick, Mary Ellen Foster, Andre Gaschler, Amy Isard, Maria Pateraki, Markos Sigalas. 263-270 [doi]
- A dynamic multimodal approach for assessing learners' interaction experience. Imene Jraidi, Maher Chaouachi, Claude Frasson. 271-278 [doi]
- Relative accuracy measures for stroke gestures. Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock. 279-286 [doi]
- LensGesture: augmenting mobile interactions with back-of-device finger gestures. Xiang Xiao, Teng Han, Jingtao Wang. 287-294 [doi]
- Aiding human discovery of handwriting recognition errors. Ryan Stedman, Michael A. Terry, Edward Lank. 295-302 [doi]
- Context-based conversational hand gesture classification in narrative interaction. Shogo Okada, Mayumi Bono, Katsuya Takanashi, Yasuyuki Sumi, Katsumi Nitta. 303-310 [doi]
- A haptic touchscreen interface for mobile devices. Jong-uk Lee, Jeong-Mook Lim, Heesook Shin, Ki-Uk Kyung. 311-312 [doi]
- A social interaction system for studying humor with the Robot NAO. Laurence Y. Devillers, Mariette Soury. 313-314 [doi]
- TaSST: affective mediated touch. Aduén Darriba Frederiks, Dirk Heylen, Gijs Huisman. 315-316 [doi]
- Talk ROILA to your Robot. Omar Mubin, Joshua Henderson, Christoph Bartneck. 317-318 [doi]
- NEMOHIFI: an affective HiFi agent. Syaheerah Lebai Lutfi, Fernando Fernández-Martínez, Jaime Lorenzo-Trueba, Roberto Barra-Chicote, Juan Manuel Montero. 319-320 [doi]
- Persuasiveness in social multimedia: the role of communication modality and the challenge of crowdsourcing annotations. Sunghyun Park. 321-324 [doi]
- Towards a dynamic view of personality: multimodal classification of personality states in everyday situations. Kyriaki Kalimeri. 325-328 [doi]
- Designing effective multimodal behaviors for robots: a data-driven perspective. Chien-Ming Huang. 329-332 [doi]
- Controllable models of gaze behavior for virtual agents and humanlike robots. Sean Andrist. 333-336 [doi]
- The nature of the bots: how people respond to robots, virtual agents and humans as multimodal stimuli. Jamy Li. 337-340 [doi]
- Adaptive virtual rapport for embodied conversational agents. Ivan Gris Sepulveda. 341-344 [doi]
- 3D head pose and gaze tracking and their application to diverse multimodal tasks. Kenneth Alberto Funes Mora. 345-348 [doi]
- Towards developing a model for group involvement and individual engagement. Catharine Oertel. 349-352 [doi]
- Gesture recognition using depth images. Bin Liang. 353-356 [doi]
- Modeling semantic aspects of gaze behavior while catalog browsing. Erina Ishikawa. 357-360 [doi]
- Computational behaviour modelling for autism diagnosis. Shyam Sundar Rajagopalan. 361-364 [doi]
- ChaLearn multi-modal gesture recognition 2013: grand challenge and workshop summary. Sergio Escalera, Jordi Gonzàlez, Xavier Baró, Miguel Reyes, Isabelle Guyon, Vassilis Athitsos, Hugo Jair Escalante, Leonid Sigal, Antonis Argyros, Cristian Sminchisescu, Richard Bowden, Stan Sclaroff. 365-368 [doi]
- Emotion recognition in the wild challenge (EmotiW) challenge and workshop summary. Abhinav Dhall, Roland Goecke, Jyoti Joshi, Michael Wagner, Tom Gedeon. 371-372 [doi]
- ICMI 2013 grand challenge workshop on multimodal learning analytics. Louis-Philippe Morency, Sharon Oviatt, Stefan Scherer, Nadir Weibel, Marcelo Worsley. 373-378 [doi]
- Hands and speech in space: multimodal interaction with augmented reality interfaces. Mark Billinghurst. 379-380 [doi]
- Evaluating dual-view perceptual issues in handheld augmented reality: device vs. user perspective rendering. Klen Copic Pucihar, Paul Coulton, Jason Alexander. 381-388 [doi]
- MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces. Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato. 389-396 [doi]
- Investigating appropriate spatial relationship between user and AR character agent for communication using AR WoZ system. Reina Aramaki, Makoto Murakami. 397-404 [doi]
- Inferring social activities with mobile sensor networks. Trinh Minh Tri Do, Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi, Daniel Gatica-Perez. 405-412 [doi]
- Effects of language proficiency on eye-gaze in second language conversations: toward supporting second language collaboration. Ichiro Umata, Seiichi Yamamoto, Koki Ijuin, Masafumi Nishida. 413-420 [doi]
- Predicting where we look from spatiotemporal gaps. Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama. 421-428 [doi]
- Automatic multimodal descriptors of rhythmic body movement. Marwa Mahmoud, Louis-Philippe Morency, Peter Robinson. 429-436 [doi]
- Multimodal analysis of body communication cues in employment interviews. Laurent Son Nguyen, Alvaro Marcos-Ramiro, Marta Marrón Romera, Daniel Gatica-Perez. 437-444 [doi]
- Multi-modal gesture recognition challenge 2013: dataset and results. Sergio Escalera, Jordi Gonzàlez, Xavier Baró, Miguel Reyes, Oscar Lopes, Isabelle Guyon, Vassilis Athitsos, Hugo Jair Escalante. 445-452 [doi]
- Fusing multi-modal features for gesture recognition. Jiaxiang Wu, Jian Cheng, Chaoyang Zhao, Hanqing Lu. 453-460 [doi]
- A multi modal approach to gesture recognition from audio and video data. Immanuel Bayer, Thierry Silbermann. 461-466 [doi]
- Online RGB-D gesture recognition with extreme learning machines. Xi Chen, Markus Koskela. 467-474 [doi]
- A multi-modal gesture recognition system using audio, video, and skeletal joint data. Karthik Nandakumar, Kong-Wah Wan, Siu Man Alice Chan, Wen Zheng Terence Ng, Jian-Gang Wang, Wei-Yun Yau. 475-482 [doi]
- ChAirGest: a challenge for multimodal mid-air gesture recognition for close HCI. Simon Ruffieux, Denis Lalanne, Elena Mugellini. 483-488 [doi]
- Gesture spotting and recognition using salience detection and concatenated hidden Markov models. Ying Yin, Randall Davis. 489-494 [doi]
- Multi-modal social signal analysis for predicting agreement in conversation settings. Víctor Ponce-López, Sergio Escalera, Xavier Baró. 495-502 [doi]
- Multi-modal descriptors for multi-class hand pose recognition in human computer interaction systems. Jordi Abella, Raúl Alcaide, Anna Sabaté, Joan Mas, Sergio Escalera, Jordi Gonzàlez, Coen Antens. 503-508 [doi]
- Emotion recognition in the wild challenge 2013. Abhinav Dhall, Roland Goecke, Jyoti Joshi, Michael Wagner, Tom Gedeon. 509-516 [doi]
- Multiple kernel learning for emotion recognition in the wild. Karan Sikka, Karmen Dykstra, Suchitra Sathyanarayana, Gwen Littlewort, Marian Stewart Bartlett. 517-524 [doi]
- Partial least squares regression on Grassmannian manifold for emotion recognition. Mengyi Liu, Ruiping Wang, Zhiwu Huang, Shiguang Shan, Xilin Chen. 525-530 [doi]
- Emotion recognition with boosted tree classifiers. Matthew Day. 531-534 [doi]
- Distribution-based iterative pairwise classification of emotions in the wild using LGBP-TOP. Timur R. Almaev, Anil Yüce, Alexandru Ghitulescu, Michel François Valstar. 535-542 [doi]
- Combining modality specific deep neural networks for emotion recognition in video. Samira Ebrahimi Kanou, Christopher J. Pal, Xavier Bouthillier, Pierre Froumenty, Çaglar Gülçehre, Roland Memisevic, Pascal Vincent, Aaron C. Courville, Yoshua Bengio, Raul Chandias Ferrari, Mehdi Mirza, Sébastien Jean, Pierre Luc Carrier, Yann Dauphin, Nicolas Boulanger-Lewandowski, Abhishek Aggarwal, Jeremie Zumer, Pascal Lamblin, Jean-Philippe Raymond, Guillaume Desjardins, Razvan Pascanu, David Warde-Farley, Atousa Torabi, Arjun Sharma, Emmanuel Bengio, Kishore Reddy Konda, Zhenzhou Wu. 543-550 [doi]
- Multi classifier systems and forward backward feature selection algorithms to classify emotional coloured speech. Sascha Meudt, Dimitri Zharkov, Markus Kächele, Friedhelm Schwenker. 551-556 [doi]
- Emotion recognition using facial and audio features. Tarun Krishna, Ayush Rai, Shubham Bansal, Shubham Khandelwal, Shubham Gupta, Dushyant Goel. 557-564 [doi]
- Multimodal learning analytics: description of math data corpus for ICMI grand challenge workshop. Sharon Oviatt, Adrienne Cohen, Nadir Weibel. 563-568 [doi]
- Problem solving, domain expertise and learning: ground-truth performance results for math data corpus. Sharon Oviatt. 569-574 [doi]
- Automatic identification of experts and performance prediction in the multimodal math data corpus through analysis of speech interaction. Saturnino Luz. 575-582 [doi]
- Expertise estimation based on simple multimodal features. Xavier Ochoa, Katherine Chiluiza, Gonzalo Méndez, Gonzalo Luzardo, Bruno Guamán, James Castells. 583-590 [doi]
- Using micro-patterns of speech to predict the correctness of answers to mathematics problems: an exercise in multimodal learning analytics. Kate Thompson. 591-598 [doi]
- Written and multimodal representations as predictors of expertise and problem-solving success in mathematics. Sharon Oviatt, Adrienne Cohen. 599-606 [doi]
- ERM4HCI 2013: the 1st workshop on emotion representation and modelling in human-computer-interaction-systems. Kim Hartmann, Ronald Böck, Christian Becker-Asano, Jonathan Gratch, Björn Schuller, Klaus R. Scherer. 607-608 [doi]
- Gazein'13: the 6th workshop on eye gaze in intelligent human machine interaction: gaze in multimodal interaction. Roman Bednarik, Hung-Hsuan Huang, Yukiko I. Nakano, Kristiina Jokinen. 609-610 [doi]
- Smart material interfaces: "another step to a material future". Manuel Kretzer, Andrea Minuto, Anton Nijholt. 611-612 [doi]