- nEmesis: Which Restaurants Should You Avoid Today? Adam Sadilek, Sean Padraig Brennan, Henry A. Kautz, Vincent Silenzio. [doi]
- Ontology Quality Assurance with the Crowd. Jonathan Mortensen, Mark A. Musen, Natalya Fridman Noy. [doi]
- HiveMind: Tuning Crowd Response with a Single Value. Preetjot Singh, Walter S. Lasecki, Paulo Barelli, Jeffrey P. Bigham. [doi]
- Crowdsourcing Spatial Phenomena Using Trust-Based Heteroskedastic Gaussian Processes. Matteo Venanzi, Alex Rogers, Nicholas R. Jennings. [doi]
- Frenzy: A Platform for Friendsourcing. Lydia B. Chilton, Felicia Cordeiro, Daniel S. Weld, James A. Landay. [doi]
- Winner-Take-All Crowdsourcing Contests with Stochastic Production. Ruggiero Cavallo, Shaili Jain. [doi]
- LabelBoost: An Ensemble Model for Ground Truth Inference Using Boosted Trees. Siamak Faridani, Georg Buscher. [doi]
- Cobi: Community-Informed Conference Scheduling. Juho Kim, Haoqi Zhang, Paul André, Lydia B. Chilton, Anant Bhardwaj, David R. Karger, Steven P. Dow, Robert C. Miller. [doi]
- Community Clustering: Leveraging an Academic Crowd to Form Coherent Conference Sessions. Paul André, Haoqi Zhang, Juho Kim, Lydia B. Chilton, Steven P. Dow, Robert C. Miller. [doi]
- A Ground Truth Inference Model for Ordinal Crowd-Sourced Labels Using Hard Assignment Expectation Maximization. Siamak Faridani, Georg Buscher, Ya Xu. [doi]
- Manipulating Social Roles in a Tagging Environment. Mieke H. R. Leyssen, Jacco van Ossenbruggen, Arjen P. de Vries, Lynda Hardman. [doi]
- Improving Your Chances: Boosting Citizen Science Discovery. Yexiang Xue, Bistra N. Dilkina, Theodoros Damoulas, Daniel Fink, Carla P. Gomes, Steve Kelling. [doi]
- Automated Support for Collective Memory of Conversational Interactions. Walter Stephen Lasecki, Jeffrey Philip Bigham. [doi]
- Crowdsourcing Objective Answers to Subjective Questions Online. Ravi Iyer. [doi]
- Designing a Crowdsourcing Tool to Analyze Relationships Among Jazz Musicians: The Case of Linked Jazz 52nd Street. Hilary K. Thorsen, Maria Cristina Pattuelli. [doi]
- Aggregating Human-Expert Opinions for Multi-Label Classification. Evgueni N. Smirnov, Hua Zhang, Ralf Peeters, Nikolay I. Nikolaev, Maike Imkamp. [doi]
- Crowdsourcing Translation by Leveraging Tournament Selection and Lattice-Based String Alignment. Julien Bourdaillet, Shourya Roy, Gueyoung Jung, Yu-An Sun. [doi]
- Sponsors [doi]
- A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks. Omar Alonso, Catherine C. Marshall, Marc A. Najork. [doi]
- Inferring Users' Preferences from Crowdsourced Pairwise Comparisons: A Matrix Completion Approach. Jinfeng Yi, Rong Jin, Shaili Jain, Anil K. Jain. [doi]
- OnDroad Planner: Building Tourist Plans Using Traveling Social Network Information. Isabel Cenamor, Tomás de la Rosa, Daniel Borrajo. [doi]
- Leveraging Collaboration: A Methodology for the Design of Social Problem-Solving Systems. Lucas M. Tabajara, Marcelo O. R. Prates, Diego Noble, Luís C. Lamb. [doi]
- Personalized Human Computation. Peter Organisciak, Jaime Teevan, Susan T. Dumais, Robert C. Miller, Adam Tauman Kalai. [doi]
- Using Visibility to Control Collective Attention in Crowdsourcing. Kristina Lerman, Tad Hogg. [doi]
- Crowdsourcing a HIT: Measuring Workers' Pre-Task Interactions on Microtask Markets. Jason T. Jacques, Per Ola Kristensson. [doi]
- Lottery-Based Payment Mechanism for Microtasks. L. Elisa Celis, Shourya Roy, Vivek Mishra. [doi]
- Real-Time Drawing Assistance through Crowdsourcing. Alex Limpaecher, Nicolas Feltman, Adrien Treuille, Michael Cohen. [doi]
- Joint Crowdsourcing of Multiple Tasks. Andrey Kolobov, Mausam, Daniel S. Weld. [doi]
- Curio: A Platform for Supporting Mixed-Expertise Crowdsourcing. Edith Law, Conner Dalton, Nick Merrill, Albert Young, Krzysztof Z. Gajos. [doi]
- Human Stigmergy in Augmented Environments. Kshanti Greene, Thomas Young. [doi]
- A Framework for Adaptive Crowd Query Processing. Beth Trushkowsky, Tim Kraska, Michael J. Franklin. [doi]
- Incentives for Privacy Tradeoff in Community Sensing. Adish Singla, Andreas Krause. [doi]
- Effect of Task Presentation on the Performance of Crowd Workers - A Cognitive Study. Harini Alagarai Sampath, Rajeev Rajeshuni, Bipin Indurkhya, Saraschandra Karanam, Koustuv Dasgupta. [doi]
- What Will Others Choose? How a Majority Vote Reward Scheme Can Improve Human Computation in a Spatial Location Identification Task. Huaming Rao, Shih-Wen Huang, Wai-Tat Fu. [doi]
- The Crowd-Median Algorithm. Hannes Heikinheimo, Antti Ukkonen. [doi]
- DataSift: An Expressive and Accurate Crowd-Powered Search Toolkit. Aditya G. Parameswaran, Ming Han Teh, Hector Garcia-Molina, Jennifer Widom. [doi]
- Crowdsourcing Multi-Label Classification for Taxonomy Creation. Jonathan Bragg, Mausam, Daniel S. Weld. [doi]
- Crowd, the Teaching Assistant: Educational Assessment Crowdsourcing. Pallavi Manohar, Shourya Roy. [doi]
- Dwelling on the Negative: Incentivizing Effort in Peer Prediction. Jens Witkowski, Yoram Bachrach, Peter Key, David C. Parkes. [doi]
- Towards a Language for Non-Expert Specification of POMDPs for Crowdsourcing. Christopher H. Lin, Mausam, Daniel S. Weld. [doi]
- Transcribing and Annotating Speech Corpora for Speech Recognition: A Three-Step Crowdsourcing Approach with Quality Control. Annika Hämäläinen, Fernando Pinto Moreira, Jairo Avelar, Daniela Braga, Miguel Sales Dias. [doi]
- Assessing the Viability of Online Interruption Studies. Sandy J. J. Gould, Anna Louise Cox, Duncan P. Brumby, Sarah Wiseman. [doi]
- Why Stop Now? Predicting Worker Engagement in Online Crowdsourcing. Andrew Mao, Ece Kamar, Eric Horvitz. [doi]
- SQUARE: A Benchmark for Research on Computing Crowd Consensus. Aashish Sheshadri, Matthew Lease. [doi]
- Using Human and Machine Processing in Recommendation Systems. Eric Colson. [doi]
- Boosting OCR Accuracy Using Crowdsourcing. Shuo-Yang Wang, Ming-Hung Wang, Kuan-Ta Chen. [doi]
- Reducing Error in Context-Sensitive Crowdsourced Tasks. Daniel Haas, Matthew Greenstein, Kainar Kamalov, Adam Marcus, Marek Olszewski, Marc Piette. [doi]
- CrowdBand: An Automated Crowdsourcing Sound Composition System. Mary Pietrowicz, Danish Chopra, Amin Sadeghi, Puneet Chandra, Brian P. Bailey, Karrie Karahalios. [doi]
- Depth-Workload Tradeoffs for Workforce Organization. Hoda Heidari, Michael Kearns. [doi]
- Understanding Potential Microtask Workers for Paid Crowdsourcing. Ming-Hung Wang, Kuan-Ta Chen, Shuo-Yang Wang, Chin-Laung Lei. [doi]
- Wanted: More Nails for the Hammer - An Investigation Into the Application of Human Computation. Elizabeth Brem, Tyler Bick, Andrew W. Schriner, Daniel B. Oerther. [doi]
- Preface [doi]
- Ability Grouping of Crowd Workers via Reward Discrimination. Yuko Sakurai, Tenda Okimoto, Masaaki Oka, Masato Shinoda, Makoto Yokoo. [doi]
- Using Crowdsourcing to Generate an Evaluation Dataset for Name Matching Technologies. Alya Asarina, Olga Simek. [doi]
- Herding the Crowd: Automated Planning for Crowdsourced Planning. Kartik Talamadupula, Subbarao Kambhampati, Yuheng Hu, Tuan-Anh Nguyen, Hankz Hankui Zhuo. [doi]
- Volunteering Versus Work for Pay: Incentives and Tradeoffs in Crowdsourcing. Andrew Mao, Ece Kamar, Yiling Chen, Eric Horvitz, Megan E. Schwamb, Chris J. Lintott, Arfon M. Smith. [doi]
- An Introduction to the Zooniverse. Arfon M. Smith, Stuart Lynn, Chris J. Lintott. [doi]
- Scalable Preference Aggregation in Social Networks. Swapnil Dhamal, Y. Narahari. [doi]
- Inserting Micro-Breaks into Crowdsourcing Workflows. Jeffrey M. Rzeszotarski, Ed Chi, Praveen Paritosh, Peng Dai. [doi]
- An Initial Study of Automatic Curb Ramp Detection with Crowdsourced Verification Using Google Street View Images. Kotaro Hara, Jin Sun, Jonah Chazan, David Jacobs, Jon Froehlich. [doi]
- Interpretation of Crowdsourced Activities Using Provenance Network Analysis. Trung Dong Huynh, Mark Ebden, Matteo Venanzi, Sarvapali D. Ramchurn, Stephen J. Roberts, Luc Moreau. [doi]
- Crowdsourcing Quality Control for Item Ordering Tasks. Toshiko Matsui, Yukino Baba, Toshihiro Kamishima, Hisashi Kashima. [doi]
- The Work Exchange: Peer-to-Peer Enterprise Crowdsourcing. Stephen Dill, Robert Kern, Erika Flint, Melissa Cefkin. [doi]
- 99designs: An Analysis of Creative Competition in Crowdsourced Design. Ricardo Matsumura de Araújo. [doi]
- Statistical Quality Estimation for General Crowdsourcing Tasks. Yukino Baba, Hisashi Kashima. [doi]
- EM-Based Inference of True Labels Using Confidence Judgments. Satoshi Oyama, Yukino Baba, Yuko Sakurai, Hisashi Kashima. [doi]
- Task Redundancy Strategy Based on Volunteers' Credibility for Volunteer Thinking Projects. Lesandro Ponciano, Francisco Vilar Brasileiro, Guilherme Gadelha. [doi]
- TrailView: Combining Gamification and Social Network Voting Mechanisms for Useful Data Collection. Michael Peter Weingert, Kate Larson. [doi]
- Frequency and Duration of Self-Initiated Task-Switching in an Online Investigation of Interrupted Performance. Sandy J. J. Gould, Anna Louise Cox, Duncan P. Brumby. [doi]
- Two Methods for Measuring Question Difficulty and Discrimination in Incomplete Crowdsourced Data. Sarah K. K. Luger, Jeff Bowles. [doi]
- On the Verification Complexity of Group Decision-Making Tasks. Ofra Amir, Yuval Shahar, Ya'akov Gal, Litan Ilani. [doi]
- CASTLE: Crowd-Assisted System for Text Labeling and Extraction. Sean Louis Goldberg, Daisy Zhe Wang, Tim Kraska. [doi]
- Making Crowdwork Work: Issues in Crowdsourcing for Organizations. Obinna Anya, Melissa Cefkin, Steve Dill, Robert Moore, Susan U. Stucky, Osarieme Omokaro. [doi]
- HCOMP-13 Organization [doi]
- English to Hindi Translation Protocols for an Enterprise Crowd. Srinivasan Iyengar, Shirish Subhash Karande, Sachin Lodha. [doi]
- In-HIT Example-Guided Annotation Aid for Crowdsourcing UI Components. Yi-Ching Huang, Chun-I. Wang, Shih-Yuan Yu, Jane Yung-jen Hsu. [doi]
- GameLab: A Tool Suit to Support Designers of Systems with Homo Ludens in the Loop. Markus Krause. [doi]
- Task Sequence Design: Evidence on Price and Difficulty. Ming Yin, Yiling Chen, Yu-An Sun. [doi]
- Automating Crowdsourcing Tasks in an Industrial Environment. Vasilis Kandylas, Omar Alonso, Shiroy Choksey, Kedar Rudre, Prashant Jaiswal. [doi]