- Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance. Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, Eric Horvitz. 2-11.
- Not Everyone Writes Good Examples but Good Examples Can Come from Anywhere. Shayan Doroudi, Ece Kamar, Emma Brunskill. 12-21.
- Who Is in Your Top Three? Optimizing Learning in Elections with Many Candidates. Nikhil Garg, Lodewijk Gelauff, Sukolsak Sakshuwong, Ashish Goel. 22-31.
- Interpretable Image Recognition with Hierarchical Prototypes. Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin. 32-40.
- Crowdsourced PAC Learning under Classification Noise. Shelby Heinecke, Lev Reyzin. 41-49.
- Testing Stylistic Interventions to Reduce Emotional Impact of Content Moderation Workers. Sowmya Karunakaran, Rashmi Ramakrishan. 50-58.
- Human Evaluation of Models Built for Interpretability. Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel J. Gershman, Finale Doshi-Velez. 59-67.
- Learning to Predict Population-Level Label Distributions. Tong Liu, Akash Venkatachalam, Pratik Sanjay Bongale, Christopher M. Homan. 68-76.
- Progression in a Language Annotation Game with a Purpose. Chris Madge, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Silviu Paun, Massimo Poesio. 77-85.
- Second Opinion: Supporting Last-Mile Person Identification with Crowdsourcing and Face Recognition. Vikram Mohanty, Kareem Abdol-Hamid, Courtney Ebersohl, Kurt Luther. 86-96.
- The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems. Mahsan Nourani, Samia Kabir, Sina Mohseni, Eric D. Ragan. 97-105.
- How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions. Jahna Otterbacher, Pinar Barlas, Styliani Kleanthous, Kyriakos Kyriakou. 106-114.
- AI-Based Request Augmentation to Increase Crowdsourcing Participation. Junwon Park, Ranjay Krishna, Pranav Khadpe, Li Fei-Fei, Michael S. Bernstein. 115-124.
- What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring. Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, Siddharth Suri, Ece Kamar. 125-134.
- Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks. Rehab K. Qarout, Alessandro Checco, Gianluca Demartini, Kalina Bontcheva. 135-143.
- Understanding the Impact of Text Highlighting in Crowdsourcing Tasks. Jorge Ramírez, Marcos Báez, Fabio Casati, Boualem Benatallah. 144-152.
- Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval. Arijit Ray, Yi Yao, Rakesh Kumar, Ajay Divakaran, Giedrius Burachas. 153-161.
- Going against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis. Yan Shvartzshnaider, Noah J. Apthorpe, Nick Feamster, Helen Nissenbaum. 162-170.
- Studying the "Wisdom of Crowds" at Scale. Camelia Simoiu, Chiraag Sumanth, Alok Shankar Mysore, Sharad Goel. 171-179.
- A Hybrid Approach to Identifying Unknown Unknowns of Predictive Models. Colin Vandenhof. 180-187.
- Gamification of Loop-Invariant Discovery from Code. Andrew T. Walter, Benjamin Boskin, Seth Cooper, Panagiotis Manolios. 188-196.
- Fair Work: Crowd Work Minimum Wage with One Line of Code. Mark E. Whiting, Grant Hugh, Michael S. Bernstein. 197-206.