- Visualizing dimensionally-reduced data: interviews with analysts and a characterization of task sequences. Matthew Brehmer, Michael Sedlmair, Stephen Ingram, Tamara Munzner. pp. 1-8
- User tasks for evaluation: untangling the terminology throughout visualization design and development. Alexander Rind, Wolfgang Aigner, Markus Wagner, Silvia Miksch, Tim Lammarsch. pp. 9-15
- Considerations for characterizing domain problems. Kirsten M. Winters, Denise Lach, Judith Bayard Cushing. pp. 16-22
- Navigating reductionism and holism in evaluation. Michael Correll, Eric C. Alexander, Danielle Albers, Alper Sarikaya, Michael Gleicher. pp. 23-26
- Evaluation methodology for comparing memory and communication of analytic processes in visual analytics. Eric D. Ragan, John R. Goodall. pp. 27-34
- Just the other side of the coin?: from error- to insight-analysis. Michael Smuc. pp. 35-40
- Evaluating user behavior and strategy during visual exploration. Khairi Reda, Andrew E. Johnson, Jason Leigh, Michael E. Papka. pp. 41-45
- Value-driven evaluation of visualizations. John T. Stasko. pp. 46-53
- Benchmark data for evaluating visualization and analysis techniques for eye tracking for video stimuli. Kuno Kurzhals, Cyrill Fabian Bopp, Jochen Bässler, Felix Ebinger, Daniel Weiskopf. pp. 54-60
- Evaluating visual analytics with eye tracking. Kuno Kurzhals, Brian Fisher, Michael Burch, Daniel Weiskopf. pp. 61-69
- Towards analyzing eye tracking data for evaluating interactive visualization systems. Tanja Blascheck, Thomas Ertl. pp. 70-77
- Gamification as a paradigm for the evaluation of visual analytics systems. Nafees Ahmed, Klaus Mueller. pp. 78-86
- Crowdster: enabling social navigation in web-based visualization using crowdsourced evaluation. Yuet Ling Wong, Niklas Elmqvist. pp. 87-94
- Repeated measures design in crowdsourcing-based experiments for visualization. Alfie Abdul-Rahman, Karl J. Proctor, Brian Duffy, Min Chen. pp. 95-102
- Evaluation of information visualization techniques: analysing user experience with reaction cards. Tanja Mercun. pp. 103-109
- Toward visualization-specific heuristic evaluation. Alvin Tarrell, Ann L. Fruhling, Rita Borgo, Camilla Forsell, Georges G. Grinstein, Jean Scholtz. pp. 110-117
- Experiences and challenges with evaluation methods in practice: a case study. Simone Kriglstein, Margit Pohl, Nikolaus Suchy, Johannes Gärtner, Theresia Gschwandtner, Silvia Miksch. pp. 118-125
- More bang for your research buck: toward recommender systems for visual analytics. Leslie M. Blaha, Dustin Arendt, Fairul Mohd-Zaid. pp. 126-133
- Sanity check for class-coloring-based evaluation of dimension reduction techniques. Michaël Aupetit. pp. 134-141
- Oopsy-daisy: failure stories in quantitative evaluation studies for visualizations. Sung-Hee Kim, Ji Soo Yi, Niklas Elmqvist. pp. 142-146
- Pre-design empiricism for information visualization: scenarios, methods, and challenges. Matthew Brehmer, Sheelagh Carpendale, Bongshin Lee, Melanie Tory. pp. 147-151
- Field experiment methodology for pair analytics. Linda T. Kaastra, Brian Fisher. pp. 152-159
- Utility evaluation of models. Jean Scholtz, Oriana Love, Mark A. Whiting, Duncan Hodges, Lia Emanuel, Danaë Stanton Fraser. pp. 160-167