- Context aware addressee estimation for human robot interaction. Samira Sheikhi, Dinesh Babu Jayagopi, Vasil Khalidov, Jean-Marc Odobez. 1-6 [doi]
- The acoustics of eye contact: detecting visual attention from conversational audio cues. Florian Eyben, Felix Weninger, Lucas Paletta, Björn W. Schuller. 7-12 [doi]
- A dominance estimation mechanism using eye-gaze and turn-taking information. Misato Yatsushiro, Naoya Ikeda, Yuki Hayashi, Yukiko I. Nakano. 13-18 [doi]
- Finding the timings for a guide agent to intervene inter-user conversation in considering their gaze behaviors. Shochi Otogi, Hung-Hsuan Huang, Ryo Hotta, Kyoji Kawagoe. 19-24 [doi]
- Situated multi-modal dialog system in vehicles. Teruhisa Misu, Antoine Raux, Ian Lane, Joan Devassy, Rakesh Gupta. 25-28 [doi]
- Agent-assisted multi-viewpoint video viewer and its gaze-based evaluation. Takatsugu Hirayama, Takafumi Marutani, Daishi Tanoue, Shogo Tokai, Sidney Fels, Kenji Mase. 29-34 [doi]
- Mutual disambiguation of eye gaze and speech for sight translation and reading. Rucha Kulkarni, Kritika Jain, Himanshu Bansal, Srinivas Bangalore, Michael Carl. 35-40 [doi]
- Learning aspects of interest from gaze. Kei Shimonishi, Hiroaki Kawashima, Ryo Yonetani, Erina Ishikawa, Takashi Matsuyama. 41-44 [doi]
- Feature selection for gaze, pupillary, and EEG signals evoked in a 3D environment. David C. Jangraw, Paul Sajda. 45-50 [doi]
- Lying through the eyes: detecting lies through eye movements. Kai Keat Lim, Max Friedrich, Jenni Radun, Kristiina Jokinen. 51-56 [doi]
- Unravelling the interaction strategies and gaze in collaborative learning with online video lectures. Roman Bednarik, Marko Kauppinen. 57-62 [doi]