- Bursting our Digital Bubbles: Life Beyond the App. Yvonne Rogers. 1 [doi]
- um... Hesitations. Dan Bohus, Eric Horvitz. 2-9 [doi]
- Written Activity, Representations and Fluency as Predictors of Domain Expertise in Mathematics. Sharon Oviatt, Adrienne Cohen. 10-17 [doi]
- Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings. Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato. 18-25 [doi]
- A Multimodal In-Car Dialogue System That Tracks The Driver's Attention. Spyros Kousidis, Casey Kennington, Timo Baumann, Hendrik Buschmeier, Stefan Kopp, David Schlangen. 26-33 [doi]
- Deep Multimodal Fusion: Combining Discrete Events and Continuous Signals. Héctor Perez Martínez, Georgios N. Yannakakis. 34-41 [doi]
- The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring. Joseph F. Grafsgaard, Joseph B. Wiggins, Alexandria Katarina Vail, Kristy Elizabeth Boyer, Eric N. Wiebe, James C. Lester. 42-49 [doi]
- Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach. Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, Louis-Philippe Morency. 50-57 [doi]
- Deception detection using a multimodal approach. Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, Mihai Burzo. 58-65 [doi]
- Multimodal Interaction for Future Control Centers: An Interactive Demonstrator. Ferdinand Fuhrmann, Rene Kaiser. 66-67 [doi]
- Emotional Charades. Stefano Piana, Alessandra Staglianò, Francesca Odone, Antonio Camurri. 68-69 [doi]
- Glass Shooter: Exploring First-Person Shooter Game Control with Google Glass. Chun-Yen Hsu, Ying-Chao Tung, Han-Yu Wang, Silvia Chyou, Jer-Wei Lin, Mike Y. Chen. 70-71 [doi]
- Orchestration for Group Videoconferencing: An Interactive Demonstrator. Wolfgang Weiss, Rene Kaiser, Manolis Falelakis. 72-73 [doi]
- Integrating Remote PPG in Facial Expression Analysis Framework. H. Emrah Tasli, Amogh Gudi, Marten den Uyl. 74-75 [doi]
- Context-Aware Multimodal Robotic Health Assistant. Vidyavisal Mangipudi, Raj Tumuluri. 76-77 [doi]
- WebSanyog: A Portable Assistive Web Browser for People with Cerebral Palsy. Tirthankar Dasgupta, Manjira Sinha, Gagan Kandra, Anupam Basu. 78-79 [doi]
- The hybrid Agent MARCO. Nicolas Riesterer, Christian Werner Becker-Asano, Julien Hué, Christian Dornhege, Bernhard Nebel. 80-81 [doi]
- Towards Supporting Non-linear Navigation in Educational Videos. Kuldeep Yadav, Kundan Shrivastava, Om Deshmukh. 82-83 [doi]
- Detecting conversing groups with a single worn accelerometer. Hayley Hung, Gwenn Englebienne, Laura Cabrera Quiros. 84-91 [doi]
- Identification of the Driver's Interest Point using a Head Pose Trajectory for Situated Dialog Systems. Young-Ho Kim, Teruhisa Misu. 92-95 [doi]
- An Explorative Study on Crossmodal Congruence Between Visual and Tactile Icons Based on Emotional Responses. Taekbeom Yoo, Yongjae Yoo, Seungmoon Choi. 96-103 [doi]
- Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News. Joseph G. Ellis, Brendan Jou, Shih-Fu Chang. 104-111 [doi]
- Dyadic Behavior Analysis in Depression Severity Assessment Interviews. Stefan Scherer, Zakia Hammal, Ying Yang, Louis-Philippe Morency, Jeffrey F. Cohn. 112-119 [doi]
- Touching the Void - Introducing CoST: Corpus of Social Touch. Merel M. Jung, Ronald Poppe, Mannes Poel, Dirk Heylen. 120-127 [doi]
- Unsupervised Domain Adaptation for Personalized Facial Emotion Recognition. Gloria Zen, Enver Sangineto, Elisa Ricci, Nicu Sebe. 128-135 [doi]
- Predicting Influential Statements in Group Discussions using Speech and Head Motion Information. Fumio Nihei, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Shogo Okada. 136-143 [doi]
- The Relation of Eye Gaze and Face Pose: Potential Impact on Speech Recognition. Malcolm Slaney, Andreas Stolcke, Dilek Z. Hakkani-Tür. 144-147 [doi]
- Speech-Driven Animation Constrained by Appropriate Discourse Functions. Najmeh Sadoughi, Yang Liu, Carlos Busso. 148-155 [doi]
- Many Fingers Make Light Work: Non-Visual Capacitive Surface Exploration. Martin Halvey, Andrew Crossan. 156-163 [doi]
- Multimodal Interaction History and its use in Error Detection and Recovery. Felix Schüssel, Frank Honold, Miriam Schmidt, Nikola Bubalo, Anke Huckauf, Michael Weber. 164-171 [doi]
- Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations. Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock. 172-179 [doi]
- Personal Aesthetics for Soft Biometrics: A Generative Multi-resolution Approach. Cristina Segalin, Alessandro Perina, Marco Cristani. 180-187 [doi]
- Synchronising Physiological and Behavioural Sensors in a Driving Simulator. Ronnie Taib, Benjamin Itzstein, Kun Yu. 188-195 [doi]
- Data-Driven Model of Nonverbal Behavior for Socially Assistive Human-Robot Interactions. Henny Admoni, Brian Scassellati. 196-199 [doi]
- Towards Automated Assessment of Public Speaking Skills Using Multimodal Cues. Lei Chen, Gary Feng, Jilliam Joe, Chee Wee Leong, Christopher Kitchen, Chong Min Lee. 200-203 [doi]
- Increasing Customers' Attention using Implicit and Explicit Interaction in Urban Advertisement. Matthias Wölfel, Luigi Bucchino. 204-207 [doi]
- System for Presenting and Creating Smell Effects to Video. Risa Suzuki, Shutaro Homma, Eri Matsuura, Ken-ichi Okada. 208-215 [doi]
- CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association. Andrew D. Wilson, Hrvoje Benko. 216-223 [doi]
- Statistical Analysis of Personality and Identity in Chats Using a Keylogging Platform. Giorgio Roffo, Cinzia Giorgetta, Roberta Ferrario, Walter Riviera, Marco Cristani. 224-231 [doi]
- Understanding Users' Perceived Difficulty of Multi-Touch Gesture Articulation. Yosra Rekik, Radu-Daniel Vatavu, Laurent Grisoni. 232-239 [doi]
- A Multimodal Context-based Approach for Distress Assessment. Sayan Ghosh, Moitreya Chatterjee, Louis-Philippe Morency. 240-246 [doi]
- Exploring a Model of Gaze for Grounding in Multimodal HRI. Gregor Mehlmann, Markus Häring, Kathrin Janowski, Tobias Baur, Patrick Gebhard, Elisabeth André. 247-254 [doi]
- Predicting Learning and Engagement in Tutorial Dialogue: A Personality-Based Model. Alexandria Katarina Vail, Joseph F. Grafsgaard, Joseph B. Wiggins, James C. Lester, Kristy Elizabeth Boyer. 255-262 [doi]
- Eye Gaze for Spoken Language Understanding in Multi-modal Conversational Interactions. Dilek Hakkani-Tür, Malcolm Slaney, Asli Çelikyilmaz, Larry P. Heck. 263-266 [doi]
- SoundFLEX: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces. Koray Tahiroglu, Thomas Svedström, Valtteri Wikström, Simon Overstall, Johan Kildal, Teemu Tuomas Ahmaniemi. 267-274 [doi]
- Investigating Intrusiveness of Workload Adaptation. Felix Putze, Tanja Schultz. 275-281 [doi]
- Smart Multimodal Interaction through Big Data. Cafer Tosun. 282 [doi]
- Natural Communication about Uncertainties in Situated Interaction. Tomislav Pejsa, Dan Bohus, Michael F. Cohen, Chit W. Saw, James Mahoney, Eric Horvitz. 283-290 [doi]
- The SWELL Knowledge Work Dataset for Stress and User Modeling Research. Saskia Koldijk, Maya Sappelli, Suzan Verberne, Mark A. Neerincx, Wessel Kraaij. 291-298 [doi]
- Rhythmic Body Movements of Laughter. Radoslaw Niewiadomski, Maurizio Mancini, Yu Ding, Catherine Pelachaud, Gualtiero Volpe. 299-306 [doi]
- Automatic Blinking Detection towards Stress Discovery. Alvaro Marcos-Ramiro, Daniel Pizarro-Perez, Marta Marrón Romera, Daniel Gatica-Perez. 307-310 [doi]
- Mid-air Authentication Gestures: An Exploration of Authentication Based on Palm and Finger Motions. Ilhan Aslan, Andreas Uhl, Alexander Meschtscherjakov, Manfred Tscheligi. 311-318 [doi]
- Automatic Detection of Naturalistic Hand-over-Face Gesture Descriptors. Marwa Mahmoud, Tadas Baltrusaitis, Peter Robinson. 319-326 [doi]
- Capturing Upper Body Motion in Conversation: An Appearance Quasi-Invariant Approach. Alvaro Marcos-Ramiro, Daniel Pizarro-Perez, Marta Marrón Romera, Daniel Gatica-Perez. 327-334 [doi]
- User Independent Gaze Estimation by Exploiting Similarity Measures in the Eye Pair Appearance Eigenspace. Nanxiang Li, Carlos Busso. 335-338 [doi]
- Exploring multimodality for translator-computer interaction. Julián Zapata. 339-343 [doi]
- Towards Social Touch Intelligence: Developing a Robust System for Automatic Touch Recognition. Merel M. Jung. 344-348 [doi]
- Facial Expression Analysis for Estimating Pain in Clinical Settings. Karan Sikka. 349-353 [doi]
- Realizing Robust Human-Robot Interaction under Real Environments with Noises. Takaaki Sugiyama. 354-358 [doi]
- Speaker- and Corpus-Independent Methods for Affect Classification in Computational Paralinguistics. Heysem Kaya. 359-363 [doi]
- The Impact of Changing Communication Practices. Ailbhe Finnerty. 364-368 [doi]
- Multi-Resident Human Behaviour Identification in Ambient Assisted Living Environments. Hande Özgür Alemdar. 369-373 [doi]
- Gaze-Based Proactive User Interface for Pen-Based Systems. Çagla Çig. 374-378 [doi]
- Appearance based user-independent gaze estimation. Nanxiang Li. 379-383 [doi]
- Affective Analysis of Abstract Paintings Using Statistical Analysis and Art Theory. Andreza Sartori. 384-388 [doi]
- The Secret Language of Our Body: Affect and Personality Recognition Using Physiological Signals. Julia Wache. 389-393 [doi]
- Perceptions of Interpersonal Behavior are Influenced by Gender, Facial Expression Intensity, and Head Pose. Jeffrey M. Girard. 394-398 [doi]
- Authoring Communicative Behaviors for Situated, Embodied Characters. Tomislav Pejsa. 399-403 [doi]
- Multimodal Analysis and Modeling of Nonverbal Behaviors during Tutoring. Joseph F. Grafsgaard. 404-408 [doi]
- Computation of Emotions. Peter Robinson. 409-410 [doi]
- Non-Visual Navigation Using Combined Audio Music and Haptic Cues. Emily Fujimoto, Matthew Turk. 411-418 [doi]
- Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions. Euan Freeman, Stephen A. Brewster, Vuokko Lantz. 419-426 [doi]
- Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile Data. Andrey Bogomolov, Bruno Lepri, Jacopo Staiano, Nuria Oliver, Fabio Pianesi, Alex Pentland. 427-434 [doi]
- Impact of Coordinate Systems on 3D Manipulations in Mobile Augmented Reality. Philipp Tiefenbacher, Steven Wichert, Daniel Merget, Gerhard Rigoll. 435-438 [doi]
- Digital Reading Support for The Blind by Multimodal Interaction. Yasmine N. El-Glaly, Francis K. H. Quek. 439-446 [doi]
- Measuring Child Visual Attention using Markerless Head Tracking from Color and Depth Sensing Cameras. Jonathan Bidwell, Irfan A. Essa, Agata Rozga, Gregory D. Abowd. 447-454 [doi]
- Bi-Modal Detection of Painful Reaching for Chronic Pain Rehabilitation Systems. Temitayo A. Olugbade, M. S. Hane Aung, Nadia Bianchi-Berthouze, Nicolai Marquardt, Amanda C. de C. Williams. 455-458 [doi]
- A World without Barriers: Connecting the World across Languages, Distances and Media. Alexander H. Waibel. 459-460 [doi]
- Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol. Abhinav Dhall, Roland Goecke, Jyoti Joshi, Karan Sikka, Tom Gedeon. 461-466 [doi]
- Neural Networks for Emotion Recognition in the Wild. Michal Grosicki. 467-472 [doi]
- Emotion Recognition in the Wild: Incorporating Voice and Lip Activity in Multimodal Decision-Level Fusion. Fabien Ringeval, Shahin Amiriparian, Florian Eyben, Klaus R. Scherer, Björn Schuller. 473-480 [doi]
- Combining Multimodal Features with Hierarchical Classifier Fusion for Emotion Recognition in the Wild. Bo Sun, Liandong Li, Tian Zuo, Ying Chen, Guoyan Zhou, Xuewen Wu. 481-486 [doi]
- Combining Modality-Specific Extreme Learning Machines for Emotion Recognition in the Wild. Heysem Kaya, Albert Ali Salah. 487-493 [doi]
- Combining Multiple Kernel Methods on Riemannian Manifold for Emotion Recognition in the Wild. Mengyi Liu, Ruiping Wang, Shaoxin Li, Shiguang Shan, Zhiwu Huang, Xilin Chen. 494-501 [doi]
- Enhanced Autocorrelation in Real World Emotion Recognition. Sascha Meudt, Friedhelm Schwenker. 502-507 [doi]
- Emotion Recognition in the Wild with Feature Fusion and Multiple Kernel Learning. JunKai Chen, Zenghai Chen, Zheru Chi, Hong Fu. 508-513 [doi]
- Improved Spatiotemporal Local Monogenic Binary Pattern for Emotion Recognition in The Wild. Xiaohua Huang, Qiuhai He, Xiaopeng Hong, Guoying Zhao, Matti Pietikäinen. 514-520 [doi]
- Emotion Recognition in Real-world Conditions with Acoustic and Visual Features. Maxim Sidorov, Wolfgang Minker. 521-524 [doi]
- ERM4HCI 2014: The 2nd Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems. Kim Hartmann, Björn Schuller, Ronald Böck. 525-526 [doi]
- Gaze-in 2014: the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction. Hung-Hsuan Huang, Roman Bednarik, Kristiina Jokinen, Yukiko I. Nakano. 527-528 [doi]
- MAPTRAITS 2014 - The First Audio/Visual Mapping Personality Traits Challenge - An Introduction: Perceived Personality and Social Dimensions. Oya Çeliktutan, Florian Eyben, Evangelos Sariyanidi, Hatice Gunes, Björn Schuller. 529-530 [doi]
- MLA'14: Third Multimodal Learning Analytics Workshop and Grand Challenges. Xavier Ochoa, Marcelo Worsley, Katherine Chiluiza, Saturnino Luz. 531-532 [doi]
- ICMI 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction. Mary Ellen Foster, Manuel Giuliani, Ronald P. A. Petrick. 533-534 [doi]
- An Outline of Opportunities for Multimodal Research. Dirk Heylen, Alessandro Vinciarelli. 535-536 [doi]
- UM3I 2014: International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions. Samer Al Moubayed, Dan Bohus, Anna Esposito, Dirk Heylen, Maria Koutsombogera, Harris Papageorgiou, Gabriel Skantze. 537-538 [doi]