- A Multimodal Approach to Understanding Human Vocal Expressions and Beyond. Shrikanth Narayanan. 1 [doi]
- Using Technology for Health and Wellbeing. Mary Czerwinski. 2 [doi]
- Reinforcing, Reassuring, and Roasting: The Forms and Functions of the Human Smile. Paula M. Niedenthal. 3 [doi]
- Put That There: 20 Years of Research on Multimodal Interaction. James L. Crowley. 4 [doi]
- Multimodal Dialogue Management for Multiparty Interaction with Infants. Setareh Nasihati Gilani, David R. Traum, Arcangelo Merla, Eugenia Hee, Zoey Walker, Barbara Manini, Grady Gallagher, Laura-Ann Petitto. 5-13 [doi]
- Predicting Group Performance in Task-Based Interaction. Gabriel Murray, Catharine Oertel. 14-20 [doi]
- Multimodal Modeling of Coordination and Coregulation Patterns in Speech Rate during Triadic Collaborative Problem Solving. Angela E. B. Stewart, Zachary A. Keirn, Sidney K. D'Mello. 21-30 [doi]
- Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level. Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita. 31-39 [doi]
- Automated Affect Detection in Deep Brain Stimulation for Obsessive-Compulsive Disorder: A Pilot Study. Jeffrey F. Cohn, László A. Jeni, Itir Onal Ertugrul, Donald Malone, Michael S. Okun, David A. Borton, Wayne K. Goodman. 40-44 [doi]
- Smell-O-Message: Integration of Olfactory Notifications into a Messaging Application to Improve Users' Performance. Emanuela Maggioni, Robert Cobden, Dmitrijs Dmitrenko, Marianna Obrist. 45-54 [doi]
- Generating fMRI-Enriched Acoustic Vectors using a Cross-Modality Adversarial Network for Emotion Recognition. Gao-Yi Chao, Chun-Min Chang, Jeng-Lin Li, Ya-Tse Wu, Chi-Chun Lee. 55-62 [doi]
- Adaptive Review for Mobile MOOC Learning via Multimodal Physiological Signal Sensing - A Longitudinal Study. Phuong Pham, Jingtao Wang. 63-72 [doi]
- Olfactory Display Prototype for Presenting and Sensing Authentic and Synthetic Odors. Katri Salminen, Jussi Rantala, Poika Isokoski, Marko Lehtonen, Philipp Müller, Markus Karjalainen, Jari Väliaho, Anton Kontunen, Ville Nieminen, Joni Leivo, Anca A. Telembeci, Jukka Lekkala, Pasi Kallio, Veikko Surakka. 73-77 [doi]
- Evaluation of Real-time Deep Learning Turn-taking Models for Multiple Dialogue Scenarios. Divesh Lala, Koji Inoue, Tatsuya Kawahara. 78-86 [doi]
- Ten Opportunities and Challenges for Advancing Student-Centered Multimodal Learning Analytics. Sharon Oviatt. 87-94 [doi]
- If You Ask Nicely: A Digital Assistant Rebuking Impolite Voice Commands. Michael Bonfert, Maximilian Spliethöver, Roman Arzaroli, Marvin Lange, Martin Hanci, Robert Porzel. 95-102 [doi]
- Detecting User's Likes and Dislikes for a Virtual Negotiating Agent. Caroline Langlet, Chloé Clavel. 103-110 [doi]
- Attention-based Audio-Visual Fusion for Robust Automatic Speech Recognition. George Sterpu, Christian Saam, Naomi Harte. 111-115 [doi]
- Smart Arse: Posture Classification with Textile Sensors in Trousers. Sophie Skach, Rebecca Stewart, Patrick G. T. Healey. 116-124 [doi]
- !FTL, an Articulation-Invariant Stroke Gesture Recognizer with Controllable Position, Scale, and Rotation Invariances. Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina. 125-134 [doi]
- Pen + Mid-Air Gestures: Eliciting Contextual Gestures. Ilhan Aslan, Tabea Schmidt, Jens Woehrle, Lukas Vogel, Elisabeth André. 135-144 [doi]
- Hand, Foot or Voice: Alternative Input Modalities for Touchless Interaction in the Medical Domain. Benjamin Hatscher, Christian Hansen 0001. 145-153 [doi]
- How to Shape the Humor of a Robot - Social Behavior Adaptation Based on Reinforcement Learning. Klaus Weber, Hannes Ritschel, Ilhan Aslan, Florian Lingenfelser, Elisabeth André. 154-162 [doi]
- Using Interlocutor-Modulated Attention BLSTM to Predict Personality Traits in Small Group Interaction. Yun-Shao Lin, Chi-Chun Lee. 163-169 [doi]
- Toward Objective, Multifaceted Characterization of Psychotic Disorders: Lexical, Structural, and Disfluency Markers of Spoken Language. Alexandria K. Vail, Elizabeth S. Liebson, Justin T. Baker, Louis-Philippe Morency. 170-178 [doi]
- Multimodal Interaction Modeling of Child Forensic Interviewing. Victor Ardulov, Madelyn Mendlen, Manoj Kumar, Neha Anand, Shanna Williams, Thomas D. Lyon, Shrikanth Narayanan. 179-185 [doi]
- Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs. Matthew Roddy, Gabriel Skantze, Naomi Harte. 186-190 [doi]
- Estimating Visual Focus of Attention in Multiparty Meetings using Deep Convolutional Neural Networks. Kazuhiro Otsuka, Keisuke Kasuga, Martina Köhler. 191-199 [doi]
- Detecting Deception and Suspicion in Dyadic Game Interactions. Jan Ondras, Hatice Gunes. 200-209 [doi]
- Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements. Abhinav Shukla, Harish Katti, Mohan S. Kankanhalli, Ramanathan Subramanian. 210-219 [doi]
- Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals. Reshmashree Kantharaju, Fabien Ringeval, Laurent Besacier. 220-228 [doi]
- Population-specific Detection of Couples' Interpersonal Conflict using Multi-task Learning. Aditya Gujral, Theodora Chaspari, Adela C. Timmons, Yehsong Kim, Sarah Barrett, Gayla Margolin. 229-233 [doi]
- I Smell Trouble: Using Multiple Scents To Convey Driving-Relevant Information. Dmitrijs Dmitrenko, Emanuela Maggioni, Marianna Obrist. 234-238 [doi]
- "Honey, I Learned to Talk": Multimodal Fusion for Behavior Analysis. Shao-Yen Tseng, HaoQi Li, Brian R. Baucom, Panayiotis G. Georgiou. 239-243 [doi]
- TapTag: Assistive Gestural Interactions in Social Media on Touchscreens for Older Adults. Shraddha Pandya, Yasmine N. El-Glaly. 244-252 [doi]
- Gazeover - Exploring the UX of Gaze-triggered Affordance Communication for GUI Elements. Ilhan Aslan, Michael Dietz, Elisabeth André. 253-257 [doi]
- Dozing Off or Thinking Hard?: Classifying Multi-dimensional Attentional States in the Classroom from Video. Felix Putze, Dennis Küster, Sonja Annerer-Walcher, Mathias Benedek. 258-262 [doi]
- Sensing Arousal and Focal Attention During Visual Interaction. Oludamilare Matthews, Markel Vigo, Simon Harper. 263-267 [doi]
- Path Word: A Multimodal Password Entry Method for Ad-hoc Authentication Based on Digits' Shape and Smooth Pursuit Eye Movements. Almoctar Hassoumi, Pourang Irani, Vsevolod Peysakhovich, Christophe Hurter. 268-277 [doi]
- Towards Attentive Speed Reading on Small Screen Wearable Devices. Wei Guo, Jingtao Wang. 278-287 [doi]
- Understanding Mobile Reading via Camera Based Gaze Tracking and Kinematic Touch Modeling. Wei Guo, Jingtao Wang. 288-297 [doi]
- Inferring User Intention using Gaze in Vehicles. Yu-Sian Jiang, Garrett Warnell, Peter Stone. 298-306 [doi]
- EyeLinks: A Gaze-Only Click Alternative for Heterogeneous Clickables. Pedro Figueiredo, Manuel J. Fonseca. 307-314 [doi]
- EEG-based Evaluation of Cognitive Workload Induced by Acoustic Parameters for Data Sonification. Maneesh Bilalpur, Mohan S. Kankanhalli, Stefan Winkler 0001, Ramanathan Subramanian. 315-323 [doi]
- A Multimodal Approach for Predicting Changes in PTSD Symptom Severity. Adria Mallol-Ragolta, Svati Dhamija, Terrance E. Boult. 324-333 [doi]
- Floor Apportionment and Mutual Gazes in Native and Second-Language Conversation. Ichiro Umata, Koki Ijuin, Tsuneo Kato, Seiichi Yamamoto. 334-341 [doi]
- Estimating Head Motion from Egocentric Vision. Satoshi Tsutsui, Sven Bambach, David J. Crandall, Chen Yu. 342-346 [doi]
- A Multimodal-Sensor-Enabled Room for Unobtrusive Group Meeting Analysis. Indrani Bhattacharya, Michael Foley, Ni Zhang, Tongtao Zhang, Christine Ku, Cameron Mine, Heng Ji, Christoph Riedl, Brooke Foucault Welles, Richard J. Radke. 347-355 [doi]
- Multimodal Analysis of Client Behavioral Change Coding in Motivational Interviewing. Chanuwas Aswamenakul, Lixing Liu, Kate B. Carey, Joshua Woolley, Stefan Scherer, Brian Borsari. 356-360 [doi]
- End-to-end Learning for 3D Facial Animation from Speech. Hai Xuan Pham, YuTing Wang, Vladimir Pavlovic. 361-365 [doi]
- Joint Discrete and Continuous Emotion Prediction Using Ensemble and End-to-End Approaches. Ehab Albadawy, Yelin Kim. 366-375 [doi]
- The Multimodal Dataset of Negative Affect and Aggression: A Validation Study. Iulia Lefter, Siska Fitrianie. 376-383 [doi]
- Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface. David A. Robb, Francisco Javier Chiyah Garcia, Atanas Laskov, Xingkun Liu, Pedro Patrón, Helen F. Hastie. 384-392 [doi]
- Simultaneous Multimodal Access to Wheelchair and Computer for People with Tetraplegia. Md. Nazmus Sahadat, Nordine Sebkhi, Maysam Ghovanloo. 393-399 [doi]
- Introducing WESAD, a Multimodal Dataset for Wearable Stress and Affect Detection. Philip Schmidt, Attila Reiss, Robert Dürichen, Claus Marberger, Kristof Van Laerhoven. 400-408 [doi]
- Enhancing Multiparty Cooperative Movements: A Robotic Wheelchair that Assists in Predicting Next Actions. Hisato Fukuda, Keiichi Yamazaki, Akiko Yamazaki, Yosuke Saito, Emi Iiyama, Seiji Yamazaki, Yoshinori Kobayashi, Yoshinori Kuno, Keiko Ikeda. 409-417 [doi]
- Multimodal Representation of Advertisements Using Segment-level Autoencoders. Krishna Somandepalli, Victor R. Martinez, Naveen Kumar 0004, Shrikanth Narayanan. 418-422 [doi]
- Survival at the Museum: A Cooperation Experiment with Emotionally Expressive Virtual Characters. Ilaria Torre, Emma Carrigan, Killian McCabe, Rachel McDonnell, Naomi Harte. 423-427 [doi]
- Human, Chameleon or Nodding Dog? Leshao Zhang, Patrick G. T. Healey. 428-436 [doi]
- A Generative Approach for Dynamically Varying Photorealistic Facial Expressions in Human-Agent Interactions. Yuchi Huang, Saad M. Khan. 437-445 [doi]
- Predicting ADHD Risk from Touch Interaction Data. Philipp Mock, Maike Tibus, Ann-Christine Ehlis, R. Harald Baayen, Peter Gerjets. 446-454 [doi]
- Exploring the Design of Audio-Kinetic Graphics for Education. Annika Muehlbradt, Madhur Atreya, Darren Guinness, Shaun K. Kane. 455-463 [doi]
- RainCheck: Overcoming Capacitive Interference Caused by Rainwater on Smartphones. Ying-Chao Tung, Mayank Goel, Isaac Zinda, Jacob O. Wobbrock. 464-471 [doi]
- Multimodal Local-Global Ranking Fusion for Emotion Recognition. Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency. 472-476 [doi]
- Improving Object Disambiguation from Natural Language using Empirical Models. Daniel Prendergast, Daniel Szafir. 477-485 [doi]
- Tactile Sensitivity to Distributed Patterns in a Palm. Bukun Son, Jaeyoung Park. 486-491 [doi]
- Listening Skills Assessment through Computer Agents. Hiroki Tanaka, Hideki Negoro, Hidemi Iwasaka, Satoshi Nakamura 0001. 492-496 [doi]
- Using Data-Driven Approach for Modeling Timing Parameters of American Sign Language. Sedeeq Al-khazraji. 497-500 [doi]
- Unobtrusive Analysis of Group Interactions without Cameras. Indrani Bhattacharya. 501-505 [doi]
- Multimodal and Context-Aware Interaction in Augmented Reality for Active Assistance. Damien Brun. 506-510 [doi]
- Interpretable Multimodal Deception Detection in Videos. Hamid Karimi. 511-515 [doi]
- Attention Network for Engagement Prediction in the Wild. Amanjot Kaur. 516-519 [doi]
- Data Driven Non-Verbal Behavior Generation for Humanoid Robots. Taras Kucherenko. 520-523 [doi]
- Multi-Modal Multi-Sensor Interaction between Human and Heterogeneous Multi-Robot System. S. M. al Mahi. 524-528 [doi]
- Responding with Sentiment Appropriate for the User's Current Sentiment in Dialog as Inferred from Prosody and Gaze Patterns. Anindita Nath. 529-533 [doi]
- Strike A Pose: Capturing Non-Verbal Behaviour with Textile Sensors. Sophie Skach. 534-537 [doi]
- Large Vocabulary Continuous Audio-Visual Speech Recognition. George Sterpu. 538-541 [doi]
- Multimodal Teaching and Learning Analytics for Classroom and Online Educational Settings. Chinchu Thomas. 542-545 [doi]
- Modeling Empathy in Embodied Conversational Agents: Extended Abstract. Özge Nilay Yalçin. 546-550 [doi]
- EVA: A Multimodal Argumentative Dialogue System. Niklas Rach, Klaus Weber, Louisa Pragst, Elisabeth André, Wolfgang Minker, Stefan Ultes. 551-552 [doi]
- Online Privacy-Safe Engagement Tracking System. Cheng Zhang, Cheng Chang, Lei Chen, Yang Liu. 553-554 [doi]
- Multimodal Control of Lighter-Than-Air Agents. Daniel M. Lofaro, Donald Sofge. 555-556 [doi]
- MIRIAM: A Multimodal Interface for Explaining the Reasoning Behind Actions of Remote Autonomous Systems. Helen F. Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Atanas Laskov, Pedro Patrón. 557-558 [doi]
- EAT - The ICMI 2018 Eating Analysis and Tracking Challenge. Simone Hantke, Maximilian Schmitt, Panagiotis Tzirakis, Björn W. Schuller. 559-563 [doi]
- SAAMEAT: Active Feature Transformation and Selection Methods for the Recognition of User Eating Conditions. Fasih Haider, Senja Pollak, Eleni Zarogianni, Saturnino Luz. 564-568 [doi]
- Exploring A New Method for Food Likability Rating Based on DT-CWT Theory. Yanan Guo, Jing Han 0010, Zixing Zhang 0001, Björn W. Schuller, Yide Ma. 569-573 [doi]
- Deep End-to-End Representation Learning for Food Type Recognition from Speech. Benjamin Sertolli, Nicholas Cummins, Abdulkadir Sengür, Björn W. Schuller. 574-578 [doi]
- Functional-Based Acoustic Group Feature Selection for Automatic Recognition of Eating Condition. Dara Pir. 579-583 [doi]
- Video-based Emotion Recognition Using Deeply-Supervised Neural Networks. Yingruo Fan, Jacqueline C. K. Lam, Victor O. K. Li. 584-588 [doi]
- An Occam's Razor View on Learning Audiovisual Emotion Recognition with Small Training Sets. Valentin Vielzeuf, Corentin Kervadec, Stéphane Pateux, Alexis Lechervy, Frédéric Jurie. 589-593 [doi]
- Deep Recurrent Multi-instance Learning with Spatio-temporal Features for Engagement Intensity Prediction. Jianfei Yang, Kai Wang, Xiaojiang Peng, Yu Qiao 0001. 594-598 [doi]
- Automatic Engagement Prediction with GAP Feature. Xuesong Niu, Hu Han, Jiabei Zeng, Xuran Sun, Shiguang Shan, Yan Huang, Songfan Yang, Xilin Chen. 599-603 [doi]
- Predicting Engagement Intensity in the Wild Using Temporal Convolutional Network. Chinchu Thomas, Nitin Nair, Dinesh Babu Jayagopi. 604-610 [doi]
- An Attention Model for Group-Level Emotion Recognition. Aarush Gupta, Dakshit Agrawal, Hardik Chauhan, Jose Dolz, Marco Pedersoli. 611-615 [doi]
- An Ensemble Model Using Face and Body Tracking for Engagement Detection. Cheng Chang, Cheng Zhang, Lei Chen, Yang Liu. 616-622 [doi]
- Group-Level Emotion Recognition using Deep Models with A Four-stream Hybrid Network. Ahmed-Shehab Khan, Zhiyuan Li, Jie Cai, Zibo Meng, James O'Reilly, Yan Tong. 623-629 [doi]
- Multi-Feature Based Emotion Recognition for Video Clips. Chuanhe Liu, Tianhao Tang, Kui Lv, Minghao Wang. 630-634 [doi]
- Group-Level Emotion Recognition Using Hybrid Deep Models Based on Faces, Scenes, Skeletons and Visual Attentions. Xin Guo, Bin Zhu, Luisa F. Polanía, Charles Boncelet, Kenneth E. Barner. 635-639 [doi]
- Cascade Attention Networks For Group Emotion Recognition with Face, Body and Image Cues. Kai Wang, Xiaoxing Zeng, Jianfei Yang, Debin Meng, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao 0001. 640-645 [doi]
- Multiple Spatio-temporal Feature Learning for Video-based Emotion Recognition in the Wild. Cheng Lu, Wenming Zheng, Chaolong Li, Chuangao Tang, Suyuan Liu, Simeng Yan, Yuan Zong. 646-652 [doi]
- EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction. Abhinav Dhall, Amanjot Kaur, Roland Goecke, Tom Gedeon. 653-656 [doi]
- 3rd International Workshop on Multisensory Approaches to Human-Food Interaction. Anton Nijholt, Carlos Velasco, Marianna Obrist, Katsunori Okajima, Charles Spence. 657-659 [doi]
- Group Interaction Frontiers in Technology. Gabriel Murray, Hayley Hung, Joann Keyton, Catherine Lai, Nale Lehmann-Willenbrock, Catharine Oertel. 660-662 [doi]
- Modeling Cognitive Processes from Multimodal Signals. Felix Putze, Jutta Hild, Akane Sano, Enkelejda Kasneci, Erin Solovey, Tanja Schultz. 663 [doi]
- Human-Habitat for Health (H3): Human-habitat Multimodal Interaction for Promoting Health and Well-being in the Internet of Things Era. Theodora Chaspari, Angeliki Metallinou, Leah I. Stein Duker, Amir Behzadan. 664-665 [doi]
- International Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction (Workshop Summary). Ronald Böck, Francesca Bonin, Nick Campbell 0001, Ronald Poppe. 666-667 [doi]