- State Detection Using Adaptive Human Sensor Sampling. Ioannis Boutsis, Vana Kalogeraki, Dimitrios Gunopulos. 2-11 [doi]
- Crowdsourcing Accurate and Creative Word Problems and Hints. Yvonne Chen, Travis Mandel, Yun-En Liu, Zoran Popovic. 12-21 [doi]
- Efficient Techniques for Crowdsourced Top-k Lists. Luca de Alfaro, Vassilis Polychronopoulos, Neoklis Polyzotis. 22-31 [doi]
- MicroTalk: Using Argumentation to Improve Crowdsourcing Accuracy. Ryan Drapeau, Lydia B. Chilton, Jonathan Bragg, Daniel S. Weld. 32-41 [doi]
- Extending Workers' Attention Span Through Dummy Events. Avshalom Elmalech, David Sarne, Esther David, Chen Hajaj. 42-51 [doi]
- Understanding Crowdsourcing Workflow: Modeling and Optimizing Iterative and Parallel Processes. Shinsuke Goto, Toru Ishida, Donghui Lin. 52-58 [doi]
- Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System. Danna Gurari, Mehrnoosh Sameki, Margrit Betke. 59-68 [doi]
- Leveraging the Contributions of the Casual Majority to Identify Appealing Web Content. Tad Hogg, Kristina Lerman. 69-78 [doi]
- "Is There Anything Else I Can Help You With?" Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent. Ting-Hao Kenneth Huang, Walter S. Lasecki, Amos Azaria, Jeffrey P. Bigham. 79-88 [doi]
- Click Carving: Segmenting Objects in Video with Point Clicks. Suyog Dutt Jain, Kristen Grauman. 89-98 [doi]
- Studying the Effects of Task Notification Policies on Participation and Outcomes in On-the-go Crowdsourcing. YongSung Kim, Emily Harburg, Shana Azria, Aaron Shaw, Elizabeth Gerber, Darren Gergle, Haoqi Zhang. 99-108 [doi]
- Crowdclass: Designing Classification-Based Citizen Science Learning Modules. Doris Jung Lin Lee, Joanne Lo, Moonhyok Kim, Eric Paulos. 109-118 [doi]
- Validating the Quality of Crowdsourced Psychometric Personality Test Items. Bao Sheng Loe, Francis Smart, Lenka Firtova, Corinna Brauner, Laura Lueneborg, David Stillwell. 119-128 [doi]
- Crowdsourcing Relevance Assessments: The Unexpected Benefits of Limiting the Time to Judge. Eddy Maddalena, Marco Basaldella, Dario De Nart, Dante Degl'Innocenti, Stefano Mizzaro, Gianluca Demartini. 129-138 [doi]
- Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments. Tyler McDonnell, Matthew Lease, Mucahid Kutlu, Tamer Elsayed. 139-148 [doi]
- Probabilistic Modeling for Crowdsourcing Partially-Subjective Ratings. An Thanh Nguyen, Matthew Halpern, Byron C. Wallace, Matthew Lease. 149-158 [doi]
- Learning and Feature Selection under Budget Constraints in Crowdsourcing. Besmira Nushi, Adish Singla, Andreas Krause 0001, Donald Kossmann. 159-168 [doi]
- Quality Estimation of Workers in Collaborative Crowdsourcing Using Group Testing. Prakhar Ojha, Partha P. Talukdar. 169-178 [doi]
- Learning to Scale Payments in Crowdsourcing with PropeRBoost. Goran Radanovic, Boi Faltings. 179-188 [doi]
- CRQA: Crowd-Powered Real-Time Automatic Question Answering System. Denis Savenkov, Eugene Agichtein. 189-198 [doi]
- Practical Peer Prediction for Peer Assessment. Victor Shnayder, David C. Parkes. 199-208 [doi]
- Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms. Yan Shvartzshnaider, Schrasing Tong, Thomas Wies, Paula Kift, Helen Nissenbaum, Lakshminarayanan Subramanian, Prateek Mittal. 209-218 [doi]
- Much Ado About Time: Exhaustive Annotation of Temporal Data. Gunnar A. Sigurdsson, Olga Russakovsky, Ali Farhadi, Ivan Laptev, Abhinav Gupta. 219-228 [doi]
- Evaluating Task-Dependent Taxonomies for Navigation. Yuyin Sun, Adish Singla, Tori Qiao Yan, Andreas Krause 0001, Dieter Fox. 229-238 [doi]
- Interactive Consensus Agreement Games for Labeling Images. Paul Upchurch, Daniel Sedra, Andrew Mullen, Haym Hirsh, Kavita Bala. 239-248 [doi]
- Modeling Task Complexity in Crowdsourcing. Jie Yang 0028, Judith Redi, Gianluca Demartini, Alessandro Bozzon. 249-258 [doi]
- Predicting Crowd Work Quality under Monetary Interventions. Ming Yin, Yiling Chen. 259 [doi]