- Developing qualitative metrics for visual analytic environments. Jean Scholtz. 1-7 [doi]
- Many roads lead to Rome: mapping users' problem solving strategies. Eva Mayr, Michael Smuc, Hanna Risku. 8-15 [doi]
- Exploring information visualization: describing different interaction patterns. Margit Pohl, Sylvia Wiltner, Silvia Miksch. 16-23 [doi]
- Towards information-theoretic visualization evaluation measure: a practical example for Bertin's matrices. Innar Liiv. 24-28 [doi]
- Learning-based evaluation of visual analytic systems. Remco Chang, Caroline Ziemkiewicz, Roman Pyzh, Joseph Kielman, William Ribarsky. 29-34 [doi]
- A descriptive model of visual scanning. Stéphane Conversy, Christophe Hurter, Stéphane Chatty. 35-42 [doi]
- Generating a synthetic video dataset. Mark A. Whiting, Jereme Haack, Carrie Varley. 43-48 [doi]
- Is your user hunting or gathering insights?: identifying insight drivers across domains. Michael Smuc, Eva Mayr, Hanna Risku. 49-54 [doi]
- Comparing benchmark task and insight evaluation methods on timeseries graph visualizations. Purvi Saraiya, Chris North, Karen Duca. 55-62 [doi]
- Do Mechanical Turks dream of square pie charts? Robert Kosara, Caroline Ziemkiewicz. 63-70 [doi]
- Comparing information graphics: a critical look at eye tracking. Joseph H. Goldberg, Jonathan Helfman. 71-78 [doi]
- Evaluating information visualization in large companies: challenges, experiences and recommendations. Michael Sedlmair, Petra Isenberg, Dominikus Baur, Andreas Butz. 79-86 [doi]