- Automated mood-aware engagement prediction. Svati Dhamija, Terrance E. Boult. 1-8 [doi]
- How different identities affect cooperation. Wasif Khan, Jesse Hoey. 9-14 [doi]
- What really matters - An information gain analysis of questions and reactions in automated PTSD screenings. Torsten Wörtwein, Stefan Scherer. 15-20 [doi]
- NAA: A multimodal database of negative affect and aggression. Iulia Lefter, Catholijn M. Jonker, Stephanie Klein Tuente, Wim Veling, Stefan Bogaerts. 21-27 [doi]
- Recognizing induced emotions of movie audiences: Are induced and perceived emotions the same? Leimin Tian, Michal Muszynski, Catherine Lai, Johanna D. Moore, Theodoros Kostoulas, Patrizia Lombardo, Thierry Pun, Guillaume Chanel. 28-35 [doi]
- Effects of valence and arousal on working memory performance in virtual reality gaming. Daniel Gábana Arellano, Laurissa N. Tokarchuk, Emily Hannon, Hatice Gunes. 36-41 [doi]
- Computational model of idiosyncratic perception of others' emotions. Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka. 42-49 [doi]
- Comparing empathy perceived by interlocutors in multiparty conversation and external observers. Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka. 50-57 [doi]
- Towards modeling agent negotiators by analyzing human negotiation behavior. Yuyu Xu, Pedro Sequeira, Stacy Marsella. 58-64 [doi]
- Robust emotion recognition from low quality and low bit rate video: A deep learning approach. Bowen Cheng, Zhangyang Wang, Zhaobin Zhang, Zhu Li, Ding Liu, Jianchao Yang, Shuai Huang, Thomas S. Huang. 65-70 [doi]
- Vocal markers of motor, cognitive, and depressive symptoms in Parkinson's disease. Kara M. Smith, James R. Williamson, Thomas F. Quatieri. 71-78 [doi]
- Processing negative emotions through social communication: Multimodal database construction and analysis. Nurul Lubis, Michael Heck, Sakriani Sakti, Koichiro Yoshino, Satoshi Nakamura. 79-85 [doi]
- The effect of personality trait, age, and gender on the performance of automatic speech valence recognition. Hesam Sagha, Jun Deng, Björn W. Schuller. 86-91 [doi]
- Multiple users' emotion recognition: Improving performance by joint modeling of affective reactions. Guillaume Chanel, Sunny Avry, Gaëlle Molinari, Mireille Bétrancourt, Thierry Pun. 92-97 [doi]
- Smiling from adolescence to old age: A large observational study. Daniel McDuff. 98-104 [doi]
- Noninvasive estimation of cognitive status in mild traumatic brain injury using speech production and facial expression. Adam C. Lammert, James R. Williamson, Austin R. Hess, Tejash Patel, Thomas F. Quatieri, HuiJun Liao, Alexander Lin, Kristin J. Heaton. 105-110 [doi]
- Local-global ranking for facial expression intensity estimation. Tadas Baltrusaitis, Liandong Li, Louis-Philippe Morency. 111-118 [doi]
- Discovering gender differences in facial emotion recognition via implicit behavioral cues. Maneesh Bilalpur, Seyed Mostafa Kia, Tat-Seng Chua, Ramanathan Subramanian. 119-124 [doi]
- Emotion detection using noninvasive low cost sensors. Daniela Girardi, Filippo Lanubile, Nicole Novielli. 125-130 [doi]
- Assessing personality through objective behavioral sensing. Hui Wang, Stacy Marsella. 131-137 [doi]
- Comparing models for gesture recognition of children's bullying behaviors. Michael Tsang, Vadim Korolik, Stefan Scherer, Maja J. Mataric. 138-145 [doi]
- Evaluating effectiveness of smartphone typing as an indicator of user emotion. Surjya Ghosh, Niloy Ganguly, Bivas Mitra, Pradipta De. 146-151 [doi]
- Stress measurement from tongue color imaging. Javier Hernandez, Craig Ferguson, Akane Sano, Weixuan Chen, Weihui Li, Albert S. Yeung, Rosalind W. Picard. 152-157 [doi]
- RankTrace: Relative and unbounded affect annotation. Philip L. Lopes, Georgios N. Yannakakis, Antonios Liapis. 158-163 [doi]
- Computational analysis of valence and arousal in virtual reality gaming using lower arm electromyograms. Ilia Shumailov, Hatice Gunes. 164-169 [doi]
- Modeling doctor-patient communication with affective text analysis. Taylan K. Sen, Mohammad Rafayet Ali, Mohammed (Ehsan) Hoque, Ronald Epstein, Paul Duberstein. 170-177 [doi]
- Response to name: A dataset and a multimodal machine learning framework towards autism study. Wenbo Liu, Tianyan Zhou, Chenghao Zhang, Xiaobing Zou, Ming Li. 178-183 [doi]
- Toward affect-sensitive virtual human tutors: The influence of facial expressions on learning and emotion. Nicholas V. Mudrick, Michelle Taub, Roger Azevedo, Jonathan P. Rowe, James C. Lester. 184-189 [doi]
- Segment-based speech emotion recognition using recurrent neural networks. Efthymios Tzinis, Alexandros Potamianos. 190-195 [doi]
- Emo-soundscapes: A dataset for soundscape emotion recognition. Jianyu Fan, Miles Thorogood, Philippe Pasquier. 196-201 [doi]
- Multimodal autoencoder: A deep learning approach to filling in missing sensor data and enabling better mood prediction. Natasha Jaques, Sara Taylor, Akane Sano, Rosalind W. Picard. 202-208 [doi]
- Hand2Face: Automatic synthesis and recognition of hand over face occlusions. Behnaz Nojavanasghari, Charles E. Hughes, Tadas Baltrusaitis, Louis-Philippe Morency. 209-215 [doi]
- Automatic action unit detection in infants using convolutional neural network. Zakia Hammal, Wen-Sheng Chu, Jeffrey F. Cohn, Carrie Heike, Matthew L. Speltz. 216-221 [doi]
- Are you stressed? Your eyes and the mouse can tell. Jun Wang, Michael Xuelin Huang, Grace Ngai, Hong Va Leong. 222-228 [doi]
- Manual and automatic measures confirm - Intranasal oxytocin increases facial expressivity. Catherine Neubauer, Sharon Mozgai, Brandon Chuang, Joshua Woolley, Stefan Scherer. 229-235 [doi]
- Weighted geodesic flow kernel for interpersonal mutual influence modeling and emotion recognition in dyadic interactions. Zhaojun Yang, Boqing Gong, Shrikanth Narayanan. 236-241 [doi]
- Perceptual enhancement of emotional mocap head motion: An experimental study. Yu Ding, Lei Shi, Zhigang Deng. 242-247 [doi]
- The ordinal nature of emotions. Georgios N. Yannakakis, Roddy Cowie, Carlos Busso. 248-255 [doi]
- Improved facial expression recognition method based on ROI deep convolutional neutral network. Xiao Sun, Man Lv, Changqin Quan, Fuji Ren. 256-261 [doi]
- Speech emotion recognition in noisy and reverberant environments. Panikos Heracleous, Keiji Yasuda, Fumiaki Sugaya, Akio Yoneyama, Masayuki Hashimoto. 262-266 [doi]
- Exploring sparse representation measures of physiological synchrony for romantic couples. Theodora Chaspari, Adela C. Timmons, Brian R. Baucom, Laura Perrone, Katherine J. W. Baucom, Panayiotis G. Georgiou, Gayla Margolin, Shrikanth S. Narayanan. 267-272 [doi]
- Emotional responses of vibrotactile-thermal stimuli: Effects of constant-temperature thermal stimuli. Yongjae Yoo, Hojin Lee, Hyejin Choi, Seungmoon Choi. 273-278 [doi]
- The ABC of MOOCs: Affect and its inter-play with behavior and cognition. Shazia Afzal, Bikram Sengupta, Munira Syed, Nitesh V. Chawla, G. Alex Ambrose, Malolan Chetlur. 279-284 [doi]
- Affect recognition in an interactive gaming environment using eye tracking. Ashwaq Al-Hargan, Neil Cooke, Tareq Binjammaz. 285-291 [doi]
- NNIME: The NTHU-NTUA Chinese interactive multimodal emotion corpus. Huang-Cheng Chou, Wei-Cheng Lin, Lien-Chiang Chang, Chyi-Chang Li, Hsi-Pin Ma, Chi-Chun Lee. 292-298 [doi]
- Aggression recognition using overlapping speech. Iulia Lefter, Catholijn M. Jonker. 299-304 [doi]
- Emotion-augmented machine learning: Overview of an emerging domain. Harald Stromfelt, Yue Zhang, Björn W. Schuller. 305-312 [doi]
- Embedding stacked bottleneck vocal features in a LSTM architecture for automatic pain level classification during emergency triage. Fu-Sheng Tsai, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee. 313-318 [doi]
- Automatic emotional spoken language text corpus construction from written dialogs in fictions. Jinkun Chen, Cong Liu, Ming Li. 319-324 [doi]
- Objective assessment of depressive symptoms with machine learning and wearable sensors data. Asma Ghandeharioun, Szymon Fedor, Lisa Sangermano, Dawn Ionescu, Jonathan Alpert, Chelsea Dale, David Sontag, Rosalind W. Picard. 325-332 [doi]
- Towards general models of player affect. Elizabeth Camilleri, Georgios N. Yannakakis, Antonios Liapis. 333-339 [doi]
- CAST a database: Rapid targeted large-scale big data acquisition via small-world modelling of social media platforms. Shahin Amiriparian, Sergey Pugachevskiy, Nicholas Cummins, Simone Hantke, Jouni Pohjalainen, Gil Keren, Björn W. Schuller. 340-345 [doi]
- Designing opportune stress intervention delivery timing using multi-modal data. Akane Sano, Paul Johns, Mary Czerwinski. 346-353 [doi]
- Toward automatic detection of acute stress: Relevant nonverbal behaviors and impact of personality traits. David Antonio Gómez Jáuregui, Carole Castanier, Bingbing Chang, Michael Val, François Cottin, Christine Le Scanff, Jean-Claude Martin. 354-361 [doi]
- Exploring affection-oriented virtual pet game design strategies in VR attachment, motivations and expectations of users of pet games. Chaolan Lin, Travis Faas, Erin Brady. 362-369 [doi]
- Photorealistic facial expression synthesis by the conditional difference adversarial autoencoder. YuQian Zhou, Bertram Emil Shi. 370-376 [doi]
- A bootstrapped multi-view weighted Kernel fusion framework for cross-corpus integration of multimodal emotion recognition. Chun-Min Chang, Bo-Hao Su, Shih-Chen Lin, Jeng-Lin Li, Chi-Chun Lee. 377-382 [doi]
- Learning spectro-temporal features with 3D CNNs for speech emotion recognition. Jaebok Kim, Khiet P. Truong, Gwenn Englebienne, Vanessa Evers. 383-388 [doi]
- Multimodal classification of driver glance. Daniel Baumann, Marwa Mahmoud, Peter Robinson, Eduardo Dias, Lee Skrypchuk. 389-394 [doi]
- Facial action units detection under pose variations using deep regions learning. Asem M. Ali, Islam Alkabbany, Amal Farag, Ian Bennett, Aly A. Farag. 395-400 [doi]
- Facial action unit intensity estimation and feature relevance visualization with random regression forests. Philipp Werner, Sebastian Handrich, Ayoub Al-Hamadi. 401-406 [doi]
- Exploring moral conflicts in speech: Multidisciplinary analysis of affect and stress. Minha Lee, Jaebok Kim, Khiet P. Truong, Yvonne de Kort, Femke Beute, Wijnand A. IJsselsteijn. 407-414 [doi]
- Formulating emotion perception as a probabilistic model with application to categorical emotion classification. Reza Lotfian, Carlos Busso. 415-420 [doi]
- A taxonomy of mood research and its applications in computer science. Helma Torkamaan, Jürgen Ziegler. 421-426 [doi]
- Refactoring facial expressions: An automatic analysis of natural occurring facial expressions in iterative social dilemma. Giota Stratou, Job Van Der Schalk, Rens Hoegen, Jonathan Gratch. 427-433 [doi]
- Predicting speaker recognition reliability by considering emotional content. Srinivas Parthasarathy, Carlos Busso. 434-439 [doi]
- An investigation into three visual characteristics of complex scenes that evoke human emotion. Xin Lu, Reginald B. Adams Jr., Jia Li, Michelle G. Newman, James Z. Wang. 440-447 [doi]
- Decoding the perception of sincerity in written dialogues. Codruta Gîrlea, Roxana Girju. 448-455 [doi]
- DeepBreath: Deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings. Youngjun Cho, Nadia Bianchi-Berthouze, Simon J. Julier. 456-463 [doi]
- Comparing virtual reality with computer monitors as rating environments for affective dimensions in social interactions. Gary McKeown, Christine Spencer, Alex Patterson, Thomas Creaney, Damien Dupré. 464-469 [doi]
- Toward active and unobtrusive engagement assessment of distance learners. Brandon M. Booth, Asem M. Ali, Shrikanth S. Narayanan, Ian Bennett, Aly A. Farag. 470-476 [doi]
- Grounded emotions. Vicki Liu, Carmen Banea, Rada Mihalcea. 477-483 [doi]
- DCNN and DNN based multi-modal depression recognition. Le Yang, Dongmei Jiang, Wenjing Han, Hichem Sahli. 484-489 [doi]
- Visual attention in schizophrenia: Eye contact and gaze aversion during clinical interactions. Alexandria Katarina Vail, Tadas Baltrusaitis, Luciana Pennant, Elizabeth S. Liebson, Justin T. Baker, Louis-Philippe Morency. 490-497 [doi]
- Heart rate estimation from facial videos for depression analysis. Aamir Mustafa, Shalini Bhatia, Munawar Hayat, Roland Goecke. 498-503 [doi]
- Automated video interview judgment on a large-sized corpus collected online. Lei Chen, Ru Zhao, Chee Wee Leong, Blair Lehman, Gary Feng, Mohammed (Ehsan) Hoque. 504-509 [doi]
- GIFGIF+: Collecting emotional animated GIFs with clustered multi-task learning. Weixuan Chen, Ognjen Oggi Rudovic, Rosalind W. Picard. 510-517 [doi]
- Modeling variable length phoneme sequences - A step towards linguistic information for speech emotion recognition in wider world. Kalani Wataraka Gamage, Vidhyasaharan Sethu, Eliathamby Ambikairajah. 518-523 [doi]
- An exploratory study of population differences based on massive database of physiological responses to music. Wei Huang, R. Benjamin Knapp. 524-530 [doi]
- Investigating gender differences in temporal dynamics during an iterated social dilemma: An automatic analysis using networks. Giota Stratou, Rens Hoegen, Gale M. Lucas, Jonathan Gratch. 531-536 [doi]
- Spontaneous and posed smile recognition based on spatial and temporal patterns of facial EMG. Monica Perusquía-Hernández, Masakazu Hirokawa, Kenji Suzuki. 537-541 [doi]
- Using natural language processing tools to develop complex models of student engagement. Stefan Slater, Jaclyn Ocumpaugh, Ryan S. Baker, Ma. Victoria Almeda, Laura Allen, Neil T. Heffernan. 542-547 [doi]
- The dance of emotion: Demonstrating ubiquitous understanding of human motion and emotion in support of human computer interaction. Caitlin Sikora, Winslow Burleson. 548-555 [doi]
- CNN based 3D facial expression recognition using masking and landmark features. Huiyuan Yang, Lijun Yin. 556-560 [doi]
- Reminiscence therapy improvement using emotional information. Soraia M. Alarcão. 561-565 [doi]
- Perceived emotion from images through deep neural networks. Alex Hernandez-Garcia. 566-570 [doi]
- Avatar and participant gender differences in the perception of uncanniness of virtual humans. Jacqueline Deanna Bailey. 571-575 [doi]
- Building a generalized model for multi-lingual vocal emotion conversion. Susmitha Vekkot. 576-580 [doi]
- Learning based visual engagement and self-efficacy. Svati Dhamija. 581-585 [doi]
- Automatic personality assessment in the wild. Amanjot Kaur. 586-590 [doi]
- Wear your heart on your sleeve: Visible psychophysiology for contextualized relaxation. Adam Hair. 591-595 [doi]
- Automated mental stress recognition through mobile thermal imaging. Youngjun Cho. 596-600 [doi]
- Nonverbal conversation expressions processing for human-agent interactions. Kevin El Haddad. 601-605 [doi]
- Dynamic emotion transitions based on emotion hysteresis. Yu Hao. 606-610 [doi]
- Towards more meaningful interactive narrative with intelligent affective characters. Kenneth Chen. 611-615 [doi]
- Temporal patterns of facial expression in deceptive and honest communication. Taylan Kartal Sen. 616-620 [doi]