- Sharing Representations for Long Tail Computer Vision Problems. Samy Bengio. p. 1.
- Interaction Studies with Social Robots. Kerstin Dautenhahn. p. 3.
- Connections: 2015 ICMI Sustained Accomplishment Award Lecture. Eric Horvitz. p. 5.
- Combining Two Perspectives on Classifying Multimodal Data for Recognizing Speaker Traits. Moitreya Chatterjee, Sunghyun Park, Louis-Philippe Morency, Stefan Scherer. pp. 7-14.
- Personality Trait Classification via Co-Occurrent Multiparty Multimodal Event Discovery. Shogo Okada, Oya Aran, Daniel Gatica-Perez. pp. 15-22.
- Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring. Vikram Ramanarayanan, Chee Wee Leong, Lei Chen, Gary Feng, David Suendermann-Oeft. pp. 23-30.
- Gender Representation in Cinematic Content: A Multimodal Approach. Tanaya Guha, Che-Wei Huang, Naveen Kumar, Yan Zhu, Shrikanth S. Narayanan. pp. 31-34.
- Effects of Good Speaking Techniques on Audience Engagement. Keith Curtis, Gareth J. F. Jones, Nick Campbell. pp. 35-42.
- Multimodal Public Speaking Performance Assessment. Torsten Wörtwein, Mathieu Chollet, Boris Schauerte, Louis-Philippe Morency, Rainer Stiefelhagen, Stefan Scherer. pp. 43-50.
- I Would Hire You in a Minute: Thin Slices of Nonverbal Behavior in Job Interviews. Laurent Son Nguyen, Daniel Gatica-Perez. pp. 51-58.
- Deception Detection using Real-life Trial Data. Verónica Pérez-Rosas, Mohamed Abouelenien, Rada Mihalcea, Mihai Burzo. pp. 59-66.
- Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects. Gabriel Skantze, Martin Johansson, Jonas Beskow. pp. 67-74.
- Visual Saliency and Crowdsourcing-based Priors for an In-car Situated Dialog System. Teruhisa Misu. pp. 75-82.
- Leveraging Behavioral Patterns of Mobile Applications for Personalized Spoken Language Understanding. Yun-Nung Chen, Ming Sun, Alexander I. Rudnicky, Anatole Gershman. pp. 83-86.
- Who's Speaking?: Audio-Supervised Classification of Active Speakers in Video. Punarjay Chakravarty, Sayeh Mirzaei, Tinne Tuytelaars, Hugo Van Hamme. pp. 87-90.
- Predicting Participation Styles using Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning. Yukiko I. Nakano, Sakiko Nihonyanagi, Yutaka Takase, Yuki Hayashi, Shogo Okada. pp. 91-98.
- Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings. Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka. pp. 99-106.
- Deciphering the Silent Participant: On the Use of Audio-Visual Cues for the Classification of Listener Categories in Group Discussions. Catharine Oertel, Kenneth Alberto Funes Mora, Joakim Gustafson, Jean-Marc Odobez. pp. 107-114.
- Retrieving Target Gestures Toward Speech Driven Animation with Meaningful Behaviors. Najmeh Sadoughi, Carlos Busso. pp. 115-122.
- Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input. Konstantin Klamka, Andreas Siegel, Stefan Vogt, Fabian Göbel, Sophie Stellmach, Raimund Dachselt. pp. 123-130.
- Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions. Ishan Chatterjee, Robert Xiao, Chris Harrison. pp. 131-138.
- Digital Flavor: Towards Digitally Simulating Virtual Flavors. Nimesha Ranasinghe, Gajan Suthokumar, Kuan-Yi Lee, Ellen Yi-Luen Do. pp. 139-146.
- Different Strokes and Different Folks: Economical Dynamic Surface Sensing and Affect-Related Touch Recognition. Xi Laura Cang, Paul Bucci, Andrew Strang, Jeff Allen, Karon E. MacLean, H. Y. Sean Liu. pp. 147-154.
- MPHA: A Personal Hearing Doctor Based on Mobile Devices. Yu-Hao Wu, Jia Jia, Wai-Kim Leung, Yejun Liu, Lianhong Cai. pp. 155-162.
- Towards Attentive, Bi-directional MOOC Learning on Mobile Devices. Xiang Xiao, Jingtao Wang. pp. 163-170.
- An Experiment on the Feasibility of Spatial Acquisition using a Moving Auditory Cue for Pedestrian Navigation. Yeseul Park, Kyle Koh, Heonjin Park, Jinwook Seo. pp. 171-174.
- A Wearable Multimodal Interface for Exploring Urban Points of Interest. Antti Jylhä, Yi-Ta Hsieh, Valeria Orso, Salvatore Andolina, Luciano Gamberini, Giulio Jacucci. pp. 175-182.
- ECA Control using a Single Affective User Dimension. Fred Charles, Florian Pecune, Gabor Aranyi, Catherine Pelachaud, Marc Cavazza. pp. 183-190.
- Multimodal Interaction with a Bifocal View on Mobile Devices. Sebastien Pelurson, Laurence Nigay. pp. 191-198.
- NaLMC: A Database on Non-acted and Acted Emotional Sequences in HCI. Kim Hartmann, Julia Krüger, Jörg Frommer, Andreas Wendemuth. pp. 199-202.
- Exploiting Multimodal Affect and Semantics to Identify Politically Persuasive Web Videos. Behjat Siddiquie, Dave Chisholm, Ajay Divakaran. pp. 203-210.
- Toward Better Understanding of Engagement in Multiparty Spoken Interaction with Children. Samer Al Moubayed, Jill Lehman. pp. 211-218.
- Gestimator: Shape and Stroke Similarity Based Gesture Recognition. Yina Ye, Petteri Nurmi. pp. 219-226.
- Classification of Children's Social Dominance in Group Interactions with Robots. Sarah Strohkorb, Iolanda Leite, Natalie Warren, Brian Scassellati. pp. 227-234.
- Spectators' Synchronization Detection based on Manifold Representation of Physiological Signals: Application to Movie Highlights Detection. Michal Muszynski, Theodoros Kostoulas, Guillaume Chanel, Patrizia Lombardo, Thierry Pun. pp. 235-238.
- Implicit User-centric Personality Recognition Based on Physiological Responses to Emotional Videos. Julia Wache, Ramanathan Subramanian, Mojtaba Khomami Abadi, Radu-Laurentiu Vieriu, Nicu Sebe, Stefan Winkler. pp. 239-246.
- Detecting Mastication: A Wearable Approach. Abdelkareem Bedri, Apoorva Verlekar, Edison Thomaz, Valerie Avva, Thad Starner. pp. 247-250.
- Exploring Behavior Representation for Learning Analytics. Marcelo Worsley, Stefan Scherer, Louis-Philippe Morency, Paulo Blikstein. pp. 251-258.
- Multimodal Human Activity Recognition for Industrial Manufacturing Processes in Robotic Workcells. Alina Roitberg, Nikhil Somani, Alexander Clifford Perzylo, Markus Rickert, Alois Knoll. pp. 259-266.
- Accuracy vs. Availability Heuristic in Multimodal Affect Detection in the Wild. Nigel Bosch, Huili Chen, Sidney K. D'Mello, Ryan Shaun Baker, Valerie J. Shute. pp. 267-274.
- Dynamic Active Learning Based on Agreement and Applied to Emotion Recognition in Spoken Interactions. Yue Zhang, Eduardo Coutinho, Zixing Zhang, Caijiao Quan, Björn Schuller. pp. 275-278.
- Sharing Touch Interfaces: Proximity-Sensitive Touch Targets for Tablet-Mediated Collaboration. Ilhan Aslan, Thomas Meneweger, Verena Fuchsberger, Manfred Tscheligi. pp. 279-286.
- Analyzing Multimodality of Video for User Engagement Assessment. Fahim A. Salim, Fasih Haider, Owen Conlan, Saturnino Luz, Nick Campbell. pp. 287-290.
- Adjacent Vehicle Collision Warning System using Image Sensor and Inertial Measurement Unit. Asif Iqbal, Carlos Busso, Nicholas R. Gans. pp. 291-298.
- Automatic Detection of Mind Wandering During Reading Using Gaze and Physiology. Robert Bixler, Nathaniel Blanchard, Luke Garrison, Sidney K. D'Mello. pp. 299-306.
- Multimodal Detection of Depression in Clinical Interviews. Hamdi Dibeklioglu, Zakia Hammal, Ying Yang, Jeffrey F. Cohn. pp. 307-310.
- Spoken Interruptions Signal Productive Problem Solving and Domain Expertise in Mathematics. Sharon L. Oviatt, Kevin Hang, Jianlong Zhou, Fang Chen. pp. 311-318.
- Active Haptic Feedback for Touch Enabled TV Remote. Anton Treskunov, Mike Darnell, Rongrong Wang. pp. 319-322.
- A Visual Analytics Approach to Finding Factors Improving Automatic Speaker Identifications. Pierrick Bruneau, Mickaël Stefas, Hervé Bredin, Johann Poignant, Thomas Tamisier, Claude Barras. pp. 323-326.
- The Influence of Visual Cues on Passive Tactile Sensations in a Multimodal Immersive Virtual Environment. Nina Rosa, Wolfgang Hürst, Wouter Vos, Peter J. Werkhoven. pp. 327-334.
- Detection of Deception in the Mafia Party Game. Sergey Demyanov, James Bailey, Kotagiri Ramamohanarao, Christopher Leckie. pp. 335-342.
- Individuality-Preserving Voice Reconstruction for Articulation Disorders Using Text-to-Speech Synthesis. Reina Ueda, Tetsuya Takiguchi, Yasuo Ariki. pp. 343-346.
- Behavioral and Emotional Spoken Cues Related to Mental States in Human-Robot Social Interaction. Lucile Bechade, Guillaume Dubuisson Duplessis, Mohamed El Amine Sehili, Laurence Devillers. pp. 347-350.
- Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View. Sven Bambach, David J. Crandall, Chen Yu. pp. 351-354.
- A Multimodal System for Real-Time Action Instruction in Motor Skill Learning. Iwan de Kok, Julian Hough, Felix Hülsmann, Mario Botsch, David Schlangen, Stefan Kopp. pp. 355-362.
- The Application of Word Processor UI Paradigms to Audio and Animation Editing. André D. Milota. pp. 363-364.
- CuddleBits: Friendly, Low-cost Furballs that Respond to Touch. Laura Cang, Paul Bucci, Karon E. MacLean. pp. 365-366.
- Public Speaking Training with a Multimodal Interactive Virtual Audience Framework. Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer. pp. 367-368.
- A Multimodal System for Public Speaking with Real Time Feedback. Fiona Dermody, Alistair Sutherland. pp. 369-370.
- Model of Personality-Based, Nonverbal Behavior in Affective Virtual Humanoid Character. Maryam Saberi, Ulysses Bernardet, Steve DiPaola. pp. 371-372.
- AttentiveLearner: Adaptive Mobile MOOC Learning via Implicit Cognitive States Inference. Xiang Xiao, Phuong Pham, Jingtao Wang. pp. 373-374.
- Interactive Web-based Image Sonification for the Blind. Torsten Wörtwein, Boris Schauerte, Karin E. Müller, Rainer Stiefelhagen. pp. 375-376.
- Nakama: A Companion for Non-verbal Affective Communication. Christian J. A. M. Willemse, Gerald M. Munters, Jan B. F. Van Erp, Dirk Heylen. pp. 377-378.
- Wir im Kiez: Multimodal App for Mutual Help Among Elderly Neighbours. Sven Schmeier, Aaron Ruß, Norbert Reithinger. pp. 379-380.
- Interact: Tightly-coupling Multimodal Dialog with an Interactive Virtual Assistant. Ethan Selfridge, Michael Johnston. pp. 381-382.
- The UTEP AGENT System. David G. Novick, Iván Gris Sepulveda, Diego A. Rivera, Adriana Camacho, Alex Rayon, Mario Gutierrez. pp. 383-384.
- A Distributed Architecture for Interacting with NAO. Fabien Badeig, Quentin Pelorson, Soraya Arias, Vincent Drouard, Israel D. Gebru, Xiaofei Li, Georgios Evangelidis, Radu Horaud. pp. 385-386.
- Touch Challenge '15: Recognizing Social Touch Gestures. Merel M. Jung, Xi Laura Cang, Mannes Poel, Karon E. MacLean. pp. 387-390.
- The Grenoble System for the Social Touch Challenge at ICMI 2015. Viet Cuong Ta, Wafa Johal, Maxime Portaz, Eric Castelli, Dominique Vaufreydaz. pp. 391-398.
- Social Touch Gesture Recognition using Random Forest and Boosting on Distinct Feature Sets. Yona Falinie A. Gaus, Temitayo A. Olugbade, Asim Jan, Rui Qin, Jingxin Liu, Fan Zhang, Hongying Meng, Nadia Bianchi-Berthouze. pp. 399-406.
- Recognizing Touch Gestures for Social Human-Robot Interaction. Tugce Balli Altuglu, Kerem Altun. pp. 407-413.
- Detecting and Identifying Tactile Gestures using Deep Autoencoders, Geometric Moments and Gesture Level Features. Dana Hughes, Nicholas Farrow, Halley Profita, Nikolaus Correll. pp. 415-422.
- Video and Image based Emotion Recognition Challenges in the Wild: EmotiW 2015. Abhinav Dhall, O. V. Ramana Murthy, Roland Goecke, Jyoti Joshi, Tom Gedeon. pp. 423-426.
- Hierarchical Committee of Deep CNNs with Exponentially-Weighted Decision Fusion for Static Facial Expression Recognition. Bo-kyeong Kim, Hwaran Lee, Jihyeon Roh, Soo-Young Lee. pp. 427-434.
- Image based Static Facial Expression Recognition with Multiple Deep Network Learning. Zhiding Yu, Cha Zhang. pp. 435-442.
- Deep Learning for Emotion Recognition on Small Datasets using Transfer Learning. Hongwei Ng, Viet-Dung Nguyen, Vassilios Vonikakis, Stefan Winkler. pp. 443-449.
- Capturing AU-Aware Facial Features and Their Latent Relations for Emotion Recognition in the Wild. Anbang Yao, Junchao Shao, Ningning Ma, Yurong Chen. pp. 451-458.
- Contrasting and Combining Least Squares Based Learners for Emotion Recognition in the Wild. Heysem Kaya, Furkan Gürpinar, Sadaf Afshar, Albert Ali Salah. pp. 459-466.
- Recurrent Neural Networks for Emotion Recognition in Video. Samira Ebrahimi Kahou, Vincent Michalski, Kishore Reddy Konda, Roland Memisevic, Christopher Joseph Pal. pp. 467-474.
- Multiple Models Fusion for Emotion Recognition in the Wild. Jianlong Wu, Zhouchen Lin, Hongbin Zha. pp. 475-481.
- A Deep Feature based Multi-kernel Learning Approach for Video Emotion Recognition. Wei Li, Farnaz Abtahi, Zhigang Zhu. pp. 483-490.
- Transductive Transfer LDA with Riesz-based Volume LBP for Emotion Recognition in the Wild. Yuan Zong, Wenming Zheng, Xiaohua Huang, Jingwei Yan, Tong Zhang. pp. 491-496.
- Combining Multimodal Features within a Fusion Network for Emotion Recognition in the Wild. Bo Sun, Liandong Li, Guoyan Zhou, Xuewen Wu, Jun He, Lejun Yu, Dongxue Li, Qinglan Wei. pp. 497-502.
- Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns. Gil Levi, Tal Hassner. pp. 503-510.
- Quantification of Cinematography Semiotics for Video-based Facial Emotion Recognition in the EmotiW 2015 Grand Challenge. Albert C. Cruz. pp. 511-518.
- Affect Recognition using Key Frame Selection based on Minimum Sparse Reconstruction. Mehmet Kayaoglu, Cigdem Eroglu Erdem. pp. 519-524.
- 2015 Multimodal Learning and Analytics Grand Challenge. Marcelo Worsley, Katherine Chiluiza, Joseph F. Grafsgaard, Xavier Ochoa. pp. 525-529.
- Providing Real-time Feedback for Student Teachers in a Virtual Rehearsal Environment. Roghayeh Barmaki, Charles E. Hughes. pp. 531-537.
- Presentation Trainer, your Public Speaking Multimodal Coach. Jan Schneider, Dirk Börner, Peter van Rosmalen, Marcus Specht. pp. 539-546.
- Utilizing Depth Sensors for Analyzing Multimodal Presentations: Hardware, Software and Toolkits. Chee Wee Leong, Lei Chen, Gary Feng, Chong Min Lee, Matthew Mulholland. pp. 547-556.
- Multimodal Capture of Teacher-Student Interactions for Automated Dialogic Analysis in Live Classrooms. Sidney K. D'Mello, Andrew McGregor Olney, Nathaniel Blanchard, Borhan Samei, Xiaoyi Sun, Brooke Ward, Sean Kelly. pp. 557-566.
- Multimodal Selfies: Designing a Multimodal Recording Device for Students in Traditional Classrooms. Federico Domínguez, Katherine Chiluiza, Vanessa Echeverría, Xavier Ochoa. pp. 567-574.
- Temporal Association Rules for Modelling Multimodal Social Signals. Thomas Janssoone. pp. 575-579.
- Detecting and Synthesizing Synchronous Joint Action in Human-Robot Teams. Tariq Iqbal, Laurel D. Riek. pp. 581-585.
- Micro-opinion Sentiment Intensity Analysis and Summarization in Online Videos. Amir Zadeh. pp. 587-591.
- Attention and Engagement Aware Multimodal Conversational Systems. Zhou Yu. pp. 593-597.
- Implicit Human-Computer Interaction: Two Complementary Approaches. Julia Wache. pp. 599-603.
- Instantaneous and Robust Eye-Activity Based Task Analysis. Hoe Kin Wong. pp. 605-609.
- Challenges in Deep Learning for Multimodal Applications. Sayan Ghosh. pp. 611-615.
- Exploring Intent-driven Multimodal Interface for Geographical Information System. Feng Sun. pp. 617-621.
- Software Techniques for Multimodal Input Processing in Realtime Interactive Systems. Martin Fischbach. pp. 623-627.
- Gait and Postural Sway Analysis: A Multi-Modal System. Hafsa Ismail. pp. 629-633.
- A Computational Model of Culture-Specific Emotion Detection for Artificial Agents in the Learning Domain. Ganapreeta R. Naidu. pp. 635-639.
- Record, Transform & Reproduce Social Encounters in Immersive VR: An Iterative Approach. Jan Kolkmeier. pp. 641-644.
- Multimodal Affect Detection in the Wild: Accuracy, Availability, and Generalizability. Nigel Bosch. pp. 645-649.
- Multimodal Assessment of Teaching Behavior in Immersive Rehearsal Environment - TeachLivE. Roghayeh Barmaki. pp. 651-655.