Crowdsourcing Government: Lessons from Multiple Disciplines
Abstract
Crowdsourcing has proliferated across disciplines and professional fields. Implementers in the public sector face practical challenges, however, in the execution of crowdsourcing. This review synthesizes prior crowdsourcing research and practices from a variety of disciplines and focuses to identify lessons for meeting the practical challenges of crowdsourcing in the public sector. It identifies three distinct categories of crowdsourcing: organizations, products and services, and holistic systems. Lessons about the fundamental logic of process design—alignment, motivation, and evaluation—identified across the three categories are discussed. Conclusions drawn from past studies and the resulting evidence can help public managers better design and implement crowdsourcing in the public sector.
Practitioner Points
- Crowdsourcing studies in the public sector show that properly designed crowdsourcing platforms can empower citizens, create legitimacy for the government with the people, and enhance the effectiveness of public services and goods.
- Research suggests that crowdsourcing decisions should be based both on the solutions needed to resolve public problems and on tasks that are appropriate to participants' knowledge and skills.
- Evidence shows that prizes and rewards can increase participation rates, but opportunities for learning and skill building are essential for enhancing the quality of participants’ contributions.
- Studies indicate that a crowdsourcing approach empowers participants through peer review by adopting constructive competition and supportive cooperation designs in the review process.
- Studies illustrate that the establishment of an effective reputation system in the crowdsourcing process can ensure legitimate evaluation.
Howe (2006) first defined crowdsourcing as the act of taking a function once performed by an organization's own employees and outsourcing it to people outside the organization (the crowd) through an open call online. Government administrators and public managers have begun to recognize the potential value of crowdsourcing. Evidence suggests that governments can utilize crowdsourcing to generate better public services with lower costs, produce policy innovations, and engage larger numbers of public participants (Dutil 2015). Governments have evolved quickly to master these new online platforms with proper management and coordination through a trial-and-error process (Brabham 2015). This article is an initial step toward examining the global exponential growth of crowdsourcing by governments.
Existing studies discuss four types of functions that can be crowdsourced in the public sector: (1) information generation, (2) service coproduction, (3) solution creation, and (4) policy making (Nam 2012). Governments can generate information from citizens to improve public services through crowdsourcing. For instance, the Citizen Science Alliance's Galaxy Zoo, in collaboration with NASA (Tokarchuk, Cuel, and Zamarian 2012), engaged the public to provide information about the classification of galaxies. Another important goal of crowdsourcing is to involve citizens in the production of public services, such as Peer to Patent (Noveck 2009), which involves lay stakeholders in the research and review of patent applications. Furthermore, government agencies adopt crowdsourcing for solution creation, such as Challenge.gov (Mergel and Desouza 2013) and Next Stop Design (Brabham 2012), which send open calls for proposals to solve specific public problems. Finally, crowdsourcing is applied in the policy-making process (Prpić, Taeihagh, and Melton 2015). For instance, governments incorporate public participation into policy making, as in Future Melbourne (Liu 2016), crowdsourced law reform in Finland (Aitamurto and Landemore 2015), and the eRulemaking Initiative in the United States (Epstein, Newhart, and Vernon 2014).
Although crowdsourcing is relatively new, calling on citizens to help solve public problems through technology is not. John Dewey (1927) argued that technology can facilitate better communication among citizens and improve their ability to solve public problems that directly affect their lives. The belief that technology can enhance the public's ability to solve public problems is an enduring one. For instance, the literature on citizen engagement and participation has focused on redesigning technology's role in public engagement (Bryson et al. 2013; Fung 2015; Linders 2012). In addition, crowdsourcing shares fundamental elements with public engagement. Crowdsourcing is a form of citizen participation, although the two are not exactly alike (Nabatchi and Amsler 2014). Both ask for information, insights, and solutions from crowds. However, crowdsourcing in the private sector “does not rely on the attitudes of anybody but the sponsors to confer legitimacy on solutions” (Seltzer and Mahmoudi 2013, 10). Citizen participation, depending on its purpose, is expected to give voice to and be inclusive of those who might be affected by public policy decisions (Seltzer and Mahmoudi 2013). Thus, this article addresses questions that are similar to those studied in the public engagement literature, namely, how can government crowdsourcing both empower people to solve public problems that directly influence their lives and increase the government's effectiveness and legitimacy with the people?
The literature shows that crowdsourcing may facilitate relationships between public professionals and citizens, but implementers are confronted by practical challenges in the execution of crowdsourcing (Brabham 2015; Clark and Logan 2011; Garcia, Vivacqua, and Tavares 2011; Hansson et al. 2016; Linders 2012; Mergel 2015; Nam 2012; Prpić, Taeihagh, and Melton 2015; Robinson and Johnson 2016; Seltzer and Mahmoudi 2013). Such challenges include marshaling convincing evidence for the adoption of new technology, identifying appropriate information and communication technologies (ICT) (Mergel and Desouza 2013), designing effective incentives to produce public goods and services, aggregating an overwhelming amount of input from citizens (Benouaret, Valliyur-Ramalingam, and Charoy 2013), and evaluating outcomes (Liu 2016; Nam 2012).
This article responds to the practical challenges of implementing crowdsourcing in the public sector by synthesizing prior research to derive lessons for practitioners. The next section reviews literature that identifies three areas of crowdsourcing applications: organizations, products/services, and systems. The emerging design logic of crowdsourcing alignment, motivation, and evaluation is revealed through the review of these three areas. Six lessons—synthesized from theory, cases, and other empirical evidence for the design of alignment, motivation, and evaluation of crowdsourcing—are presented next. The discussion of each lesson addresses the application of crowdsourcing in the public sector and what makes crowdsourcing effective so that the government can empower people to solve public problems. Each lesson also addresses the use of crowdsourcing to increase government legitimacy and effectiveness. The article concludes with a discussion of next steps and suggestions for future studies.
Crowdsourcing in the Literature
This review is intended to both evaluate the practices and foundational theories of crowdsourcing across disciplines and generate lessons. A systematic literature review was conducted following procedures described by Cooper (2010). A search was conducted of the Web of Science database with a cross-check of Google Scholar for published articles on crowdsourcing. A search using the keywords “crowdsourcing,” “crowdsource,” “crowdsourced,” and “crowd source” returned 1,123 articles published between 2008 and 2015. For the purpose of this review, I selected articles that directly discuss the management and governance of crowdsourcing instead of articles on the improvement of technology or other technical aspects of crowdsourcing. Ultimately, 173 articles on crowdsourcing were selected for the review from the fields of social science, business management, marketing, law, communications, science, computer science, medicine and health, planning, and engineering. (See Appendix A in the supporting information online for details of the review methodology.)
This review shows that crowdsourcing is applied in different contexts, including at the organizational level, for different types of services or products and in different ecosystems. Thus, the review synthesizes the development of crowdsourcing in these three contexts, namely, organizations, services/products, and systems. Figure 1 illustrates the growth in the crowdsourcing literature from 2008 to 2015 within these three areas. Table 1 shows that these three areas have different focuses, units of analysis, and theoretical development paths. The underlying theories and practices highlighted by these three perspectives can help in systematically examining similar concerns and issues in the public sector at various levels.

Category | Description | Level of Analysis | Major Theories | Major Journals | Main Research |
---|---|---|---|---|---|
Organizations (N = 32, 18.50%) | Crowdsourcing as a substitute for or a complement to the traditional functions of an organization. | Organization | Distant search theory; decision-making theories | Academy of Management Review; Government Information Quarterly; Harvard Business Review; Management Science; MIT Sloan Management Review; Public Administration Review | Afuah and Tucci (2012); Bayus (2013); Bonabeau (2009); Brabham (2012); Mergel and Desouza (2013); Nam (2012) |
Services/products (N = 107, 61.85%) | Crowdsourcing as an online participatory activity that produces services and/or products through a series of well-designed steps, including how to motivate crowds to contribute, how to aggregate and select information contributed by the crowds, and how to evaluate the outcomes. | Services; Products; Industry | Motivation theories; group dynamic theory; collaboration theories; process design | American Behavioral Scientist; International Journal of Research in Marketing; Innovation: Management, Policy and Practice; Journal of Economic Behavior and Organization; Journal of Information Science; Organization Science; PLOS ONE; MIS Quarterly; Technological Forecasting and Social Change | Battistella and Nonino (2012); Chandler and Kapelner (2013); Franke, Keinz, and Klausberger (2013); Crump, McDonnell, and Gureckis (2013); Poetz and Schreier (2012) |
Systems (commonly referred to as ecosystems) (N = 34, 19.65%) | Crowdsourcing as the result of social or economic changes, to be understood in its specific context. | Industry; System | Systems theory; social-technical interaction network; network theory | Academy of Management Perspectives; Government Information Quarterly; Industrial Marketing Management; Information Systems; Journal of Theoretical and Applied Electronic Commerce Research | Meyer (2014); Ellis (2014); Linders (2012); Simula and Ahola (2014); Lampel, Jha, and Bhalla (2012) |
Organizations
The literature focuses on the comparison of a traditional decision-making model with a new alternative: crowdsourcing. Most of the studies come from the management literature and are rooted in decision-making theory. Thus, discussions within this category focus on the substitutability of crowdsourcing for traditional operations such as research and development, market research, and marketing, along with other labor-intensive tasks such as translation, product review, and evaluation. Previous studies have asked the fundamental question of whether an organization should adopt crowdsourcing by examining the costs of generating innovative ideas (Afuah and Tucci 2012), outcomes (Bonabeau 2009), information quality (Blohm, Leimeister, and Krcmar 2013), or better decision-making models (Nam 2012).
Services and Products
Services and products are the most frequently studied area in the crowdsourcing literature. Crowdsourcing in this category is defined as “a type of participative online activity” with well-defined initiators, purposes, goals, incentives, process designs, and openness (Estellés-Arolas and González-Ladrón-de-Guevara 2012). The articles in this category come from fields such as marketing, strategic management, information management and science, computer science, and other scientific disciplines. This category has built its theoretical foundation on process design and behavioral science. The literature in this category redefines the relationship between companies and consumers as “prosumers” because consumers participate in the production process in a service or product industry (Kozinets, Hemetsberger, and Schau 2008). Similarly, coproduction theory was applied in the public sector to conceptualize crowdsourcing and define the public as government “partners” (in addition to “citizens” and government “customers”) (Clark, Brudney, and Jang 2013; Thomas 2013). According to the coproduction theory, citizens participate in the planning and implementation of public policies (Thomas 2013). Examples include crime prevention (Graham and Mehmood 2014) and the 311 hotline (Clark, Brudney, and Jang 2013; Minkoff 2016). Crowdsourcing studies show that products or services that include consumer participation in the production process can increase brand recognition, customer satisfaction, and loyalty (Baron and Warnaby 2011; Djelassi and Decoopman 2013; Kozinets, Hemetsberger, and Schau 2008).
Systems
This category considers the development of technology in the social context by exploring the relevant actors and sociotechnical systems. System studies primarily emanate from the fields of information and systems management. These studies evaluate the social interaction between technology and actors (Almirall, Lee, and Majchrzak 2014) to assess the effectiveness of crowdsourcing in a particular industry or system. Studies that examine crowdsourcing from a systems perspective echo Meyer's (2014) framework. Meyer (2014) adopts the social-technical interaction network from Kling, McKim, and King (2003) to analyze relevant actors within a crowdsourcing system, their associated interactions, resources and communication flows, and the system's architectural choice points. By understanding the coordinating role of technology in the crowdsourcing movement, these studies recognize crowdsourcing as a community-building process or an ecosystem that is embedded both in networks and culture (Kozinets, Hemetsberger, and Schau 2008; Lampel, Jha, and Bhalla 2012; Simula and Ahola 2014).
Designing Crowdsourcing for the Public Sector: Alignment, Motivation, and Evaluation
Together, the three categories of crowdsourcing research represent a continuum from a predominantly organizational focus to a complex systems focus. The review of the crowdsourcing literature shows a progression toward systems thinking, as evidenced by the increasing number of studies in this category. Whereas studies on organization crowdsourcing have emphasized the costs and benefits of crowdsourcing adoption, studies on services/products and systems increasingly emphasize market transformation through strategies, leadership changes, and the involvement of different stakeholders. Thus, in the past 10 years, the crowdsourcing literature has adopted an increasingly strategic design perspective and has focused on the implications of crowdsourcing as a process of engaging different stakeholders.
Despite differences in the theoretical foundations of the three categories of research, the design logic of each emphasizes three common elements: alignment, motivation, and evaluation (see figure 2).1 Analysis indicates that approximately 38 percent (N = 65) of the reviewed crowdsourcing studies explicitly address the alignment of crowdsourcing with the mission of an organization and discuss the optimal implementation conditions. Approximately 21 percent (N = 37) of the reviewed studies investigate motivational factors that influence participation in and contribution to crowdsourcing initiatives. A substantial amount of attention has been devoted to the evaluation of crowdsourcing outcomes and processes (44 percent, N = 76). Thus, this article generates lessons for practitioners to utilize crowdsourcing and achieve desired public policy or service objectives. The lessons focus on three aspects of crowdsourcing process design: alignment, motivation, and evaluation, as illustrated in figure 3, which maps the lessons onto each important design process.


Lesson 1: Aligning Crowdsourcing Adoption with Solutions for Public Problems
Studies show that successful crowdsourcing begins with agencies clearly defining their problems and objectives (Mergel and Bretschneider 2013; Ye and Kankanhalli 2015). This review highlights the following criteria for crowdsourcing adoption: justifications for how crowdsourcing is a better choice than internal production or outsourcing to solicit solutions to problems; the segments of public services or goods that can be crowdsourced (Afuah and Tucci 2012; Piezunka and Dahlander 2015); and the types of crowdsourcing that are available for different functions and tasks (Brabham 2015; Nam 2012).
Organizations outsource a segment of their service or product to the public through an online open call because crowdsourcing enables the organizations to find cheaper, more efficient solutions. Afuah and Tucci (2012) propose a list of conditions in which crowdsourcing can reduce the costs of a distant search by engaging outside participants to conduct a local search. Thus, distant search theory is often used to justify an organization's adoption of crowdsourcing (Afuah and Tucci 2012; Piezunka and Dahlander 2015). Afuah and Tucci's distant search theory shows that crowdsourcing is more efficient and effective than in-house production or outsourcing when the following conditions are met: (1) problems can be easily broken down into smaller tasks, (2) the required knowledge can be found outside the organization, (3) crowdsourced participants are highly motivated to solve problems, and (4) the solution is based on user experience and can be evaluated by the users themselves. In addition, it is important to further decide which segment of public service and policy decisions can be crowdsourced. Following the same logic, studies suggest that deciding which segment of services can be crowdsourced can be based on whether public services or problems can be broken down into reasonable tasks to be performed by participants (Afuah and Tucci 2012; Prpić, Taeihagh, and Melton 2015).
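Where helpful, these conditions can be treated as a simple screening checklist. The following minimal sketch (purely illustrative; the class, field names, and example task are assumptions rather than anything drawn from the reviewed studies) records Afuah and Tucci's (2012) four conditions explicitly so that a candidate task can be tested against them. A task is recommended for crowdsourcing here only when all four conditions hold, mirroring the conjunctive logic of the theory.

```python
from dataclasses import dataclass

@dataclass
class CandidateTask:
    """Illustrative screening attributes for a task an agency considers crowdsourcing."""
    decomposable: bool           # (1) problem can be broken into small, well-defined tasks
    knowledge_is_external: bool  # (2) required knowledge sits outside the organization
    crowd_is_motivated: bool     # (3) likely participants are motivated to solve the problem
    user_evaluable: bool         # (4) solutions rest on user experience and can be judged by users

def crowdsourcing_recommended(task: CandidateTask) -> bool:
    """Return True only when all four of Afuah and Tucci's (2012) conditions hold."""
    return all([
        task.decomposable,
        task.knowledge_is_external,
        task.crowd_is_motivated,
        task.user_evaluable,
    ])

# Hypothetical example: a 311-style pothole-reporting task plausibly satisfies all four conditions.
pothole_reporting = CandidateTask(True, True, True, True)
print(crowdsourcing_recommended(pothole_reporting))  # True
```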
The adoption of 311 services by city governments, for instance, demonstrates the application of the distant search theory in crowdsourcing. Citizen 311 hotlines allow residents to report nonemergency issues to relevant city departments with low transaction costs and simple steps (Minkoff 2016). Citizens experience public services firsthand and can easily detect and report problems such as potholes and graffiti (Clark, Brudney, and Jang 2013; Thomas 2013). This user-experience-based information about public services can be best reported by the citizens themselves, and the 311 hotline provides such a communication channel. Scholars have shown that 311 effectively engages citizens to coproduce public services (Thomas 2013) and has transformed citizens into “sensors,” “detectors,” or “reporters” of city-level problems (Clark, Brudney, and Jang 2013).
Furthermore, this review focuses on the decision to adopt crowdsourcing in terms of problem characteristics, available crowdsourcing options, and the segment of the service or policy-making process. In the public sector, for example, the U.S. Office of Science and Technology Policy (2015) instructed federal agencies to adopt citizen science and crowdsourcing that contributes directly to a goal or need that is relevant to the agencies’ missions. This instruction has yielded effective practices by agencies exploring the appropriate crowdsourcing activities for a specific segment of services. For instance, the National Archives and Records Administration (NARA) initiated the Citizen Archivist, which engages participants to transcribe documents, thus making archived records publicly accessible online (Bowser and Shanley 2013). Transcribing documents is a type of crowdsourcing activity that involves information creation. This type of activity is effectively crowdsourced because it can be broken down into small tasks for participants to manage. Approximately 170,000 participants indexed 132 million names from the 1940 Census Community Indexing Project within five months (Bowser and Shanley 2013). NARA justified this use of crowdsourcing by stating that the participants’ transcription efforts were “not a substitute for the duties of the professional archivists, rather, the collaboration with the public allows NARA to conduct activities in support of its mission that would not otherwise be possible” (Bowser and Shanley 2013, 23). The next lesson moves on from the decision of adopting crowdsourcing based on problems and objectives and discusses the design of appropriate tasks for achieving the desired outcomes.
Lesson 2: Aligning Crowdsourcing Tasks with the Capacity of the Participants
The literature suggests that one important aspect of crowdsourcing is its focus on how to design the process to allocate appropriate tasks to appropriate participants (Boudreau 2012; Kozinets, Hemetsberger, and Schau 2008; Surowiecki 2005). An extensive body of literature examines the quality of crowdsourcing outcomes based on group and innovation theories (Majchrzak and Malhotra 2013; Poetz and Schreier 2012). To update our knowledge about aligning tasks with the capacity of crowdsourcing participants, this review evaluates two types of crowdsourcing outcomes: (1) task-oriented outcomes, including information and coproduction activities, and (2) innovation-oriented outcomes, including solution-based and policy preference activities.
With regard to task-oriented outcomes, the crowdsourcing literature shows that when average participants are asked to perform technical tasks with specific instructions and detailed job classifications, their performance is equal to or better than the performance of experts (Behrend et al. 2011; See et al. 2013). Evidence shows that lay participants can provide useful information that contributes to the sciences, such as information on new galaxies (Clery 2011), translation (Anastasiou and Gupta 2011), medical information (for public health) (Riedl and Riedl 2013), and science data (for public research institutions) (See et al. 2013). For instance, in the public sector, U.S. Department of Agriculture entomologist Lee Cohnstaedt established the Invasive Mosquito Project to ask volunteers to collect mosquito eggs in their communities and then upload data to an online map, which provides real-time information about problem areas and helps professionals to locate and control infected areas.
Although previous studies have shown that crowds might produce noisier outputs than experts (Callison-Burch 2009; Kittur, Chi, and Suh 2008; Snow et al. 2008), See et al.’s (2013) more recent review and experiments found that information provided by nonexpert participants was as reliable as the information provided by experts. Furthermore, the quality of the information provided by nonexpert participants improved more quickly than the quality of the information provided by experts (See et al. 2013). A comparison of See et al.’s (2013) experiments with these earlier studies suggests that successful scientific crowdsourcing projects provide simple and clear instructions that help individuals find the tasks best suited to them. Thus, when designing tasks, it is essential to understand the nature of the data in order to match crowds with appropriate tasks (Callison-Burch 2009; Snow et al. 2008) and to recognize the limitations of a crowdsourcing site's instructions (Kittur, Chi, and Suh 2008).
Nevertheless, studies show mixed results when participants are asked to generate innovative ideas through open calls (Nishikawa, Schreier, and Ogawa 2013; Poetz and Schreier 2012). First, studies show that the larger the crowd, the higher the likelihood of obtaining innovative ideas (Prpić, Taeihagh, and Melton 2015), although size does not guarantee the quality of the contributions (Siddharthan et al. 2016). However, because the outcomes of innovation-driven projects are notably difficult to measure and because it is problematic to determine what qualifies as an innovative idea, studies have found that ideas generated by crowdsourced participants often require more effort by the outsourcers (Poetz and Schreier 2012).
Additionally, Majchrzak and Malhotra (2013) note that crowdsourcing limits innovation because although the evolution of ideas takes time, participants spend little time and have short attention spans. Moreover, although group creativity requires familiarity among collaborators, groups of crowdsourced participants are often strangers to one another. These studies highlight the constraints on adopting crowdsourcing for innovation-oriented outcomes caused by the capacity of the participants, which reflects their relationships with other participants, their knowledge, and their skills. For instance, Mergel et al. (2014) analyzed 203 Challenge.gov projects and found that most projects were about “information and education campaigns that help them better understand how to improve their service delivery, but not necessarily the service itself” (2082). A few agencies, such as NASA, solicit solutions to complex problems but require “elite problem solvers with expert knowledge” (2081).
These findings have implications for a parallel debate in the public sector with regard to the fundamental question of who should govern (deLeon 1997): educated elites (Lippmann 1922) or average people (Dewey 1927). Emphasizing the quality of information, Lippmann believes that public decisions should be made by experts with sound information. He argues that decisions that are made through public debate or based on public opinion are dangerous because elites can easily persuade or manipulate opinions. Emphasizing the role of the public, Dewey (1927) believes that daily discourse by citizens, not scientific evidence from experts, is the knowledge that is fundamental to democratic governance.
In their discussion of the “harmful consequences of participation,” Bryer and Cooper (2012) note that the cost of engaging citizens might drain resources from professional administrative work. However, they also argue that administrators should incorporate institutional designs to integrate citizen education because “low-quality participation may be attributable not to the capacities and ability of the citizen but to the design and implementation itself” (S111). Therefore, when designing crowdsourcing tasks, public managers must consider the capacity of their potential participants. In the following two lessons, this review will turn to the incentives and evaluation designs that can further improve the quantity, quality, and reliability of crowdsourcing outcomes.
Lesson 3: Incentivizing Participants with Prizes and Rewards
Prizes and rewards have frequently been discussed in the crowdsourcing literature. Evidence shows that monetary rewards can increase participation for two reasons. First, participants treat crowdsourcing projects as jobs and expect rewards for their effort (Kaufmann, Schulze, and Veit 2011). Second, a sufficient number of prizes and rewards can attract the public's attention and create a willingness to participate (Tokarchuk, Cuel, and Zamarian 2012). Fundamental theories, such as self-determination theory (Zhao and Zhu 2014), cognitive evaluation theory and general interest theory (Borst 2010), expectancy theory (Sun et al. 2012), and equity theory (Zou, Zhang, and Liu 2015), have been applied in the existing literature to understand monetary and extrinsic incentives.
Monetary incentives can increase participation because participants treat crowdsourcing projects as employment. Kaufmann, Schulze, and Veit (2011) find that more than 50 percent of the crowd on Amazon's Mechanical Turk spends more than eight hours per week working on the platform, and their survey shows that payment is one of the most important incentives for participation. The crowd on that site can search for and choose different types of tasks according to the payment rate, which ranges from USD $0.01 to USD $50.00. The site attracts talented and skilled workers, such as those who can read product details in another language. These studies on the payments and rewards of Amazon's Mechanical Turk provide a market price system for rewarding crowdsourced participants.
Increasing the monetary compensation may attract more participants and result in an increased probability of obtaining a successful solution because crowds primarily value remuneration (Martinez and Walton 2014). As an example, Threadless creates online contests for amateur T-shirt designers to test their designs before producing a T-shirt for the market. In this case, the data show a strong correlation between designers who participate in contests and the prizes that are offered. The winner of the design contest receives cash and prizes that are worth USD $2,500, and his or her design is mass-produced for sale, with an earnings system in which the designer can earn approximately 30 percent of the sale price of a T-shirt.
In the public sector, prizes and rewards are also adopted as a mechanism to create incentives to crowdsource innovative solutions for government agencies. On December 8, 2009, the director of the U.S. Office of Management and Budget (2009) issued the Open Government Directive, which directed the increased use of prizes and open challenges to promote open government and innovation in the public sector. The memo included brief guidelines for federal agencies to consider offering different types of prizes (e.g., exemplar prizes and point solution prizes) to create incentives for participants to achieve the agency's goals. Refined payment and reward systems for different types of prizes must be studied further to determine the correct prize amount required to attract participation.
Public officials can experiment with different compensation and reward mechanisms to determine whether money is a success factor in public initiative crowdsourcing projects. For instance, the Manor Labs (Newsom 2013) platform was established by the city of Manor, Texas, to engage people in solving city problems. Innobucks, a type of virtual commodity, were awarded to people who submitted ideas or whose ideas were implemented. Innobucks could be used to purchase or receive discounts from local shops and restaurants (Newsom 2013). Systematic studies could be implemented with these experimental government projects to determine whether different levels of compensation attract different types of crowds and increase participation.
Lesson 4: Enhancing the Quality of Contributions by Creating Learning and Skill-Building Opportunities
Despite the benefits of using prizes and rewards as incentives, this review found that monetary incentives have limitations because money can suppress the intrinsic motivating factors that attract productive members (Martinez and Walton 2014). Monetary awards can increase the number of participants but cannot directly affect the quality of their contributions. Because of the relationship between increased crowd size and the likelihood that a good idea will be generated, monetary awards can only indirectly influence the quality of ideas (Martinez and Walton 2014). To improve the quality of contributions, studies show that it is essential to build participants’ skills and capacities by designing a variety of manageable tasks (e.g., Crump, McDonnell, and Gureckis 2013).
Theories on motivations for crowdsourcing suggest that perceived meaningfulness (Chandler and Kapelner 2013) and fairness expectations (Franke, Keinz, and Klausberger 2013) play key roles in improving the quality of contributions and maintaining the sustainability of crowdsourcing projects. Numerous studies suggest improving task design by increasing task autonomy and skill variety (Crump, McDonnell, and Gureckis 2013). Crowdsourcing projects that encourage participants to make repeated contributions allow those participants to experience self-improvement by devoting effort and time to the projects. Participants will stop contributing when they feel that their efforts are only used to increase the profits of private companies (Franke, Keinz, and Klausberger 2013). Studies also have found that participants continue to contribute because the tasks challenge them and improve their skills as they revisit them (Tokarchuk, Cuel, and Zamarian 2012). Amazon's Mechanical Turk, for example, allows agents to post jobs with various skill levels, from finding business contacts for a firm to writing a review for a tour site. Some tasks require specific training and job qualifications.
Furthermore, because crowdsourcing occurs in an open and transparent environment in which employers can evaluate potential employees’ performance prior to participation, a project can demonstrate the participants’ skill and knowledge levels and serve as a signal to future employers. This provides a strong incentive for participants to consistently contribute high-quality work (Lerner and Tirole 2002). Lerner and Tirole (2002) find that having a relevant audience, requiring specific skills, and adopting good performance measures are three key conditions to ensure the success of open-source projects such as Linux. These crowdsourcing platforms not only provide alternative employment opportunities, but they also allow for flexibility in the choice of task from which the crowd can gain different skills (Kaufmann, Schulze, and Veit 2011).
Similarly, in the public sector, volunteer reviewers for Peer to Patent initiatives devote time and effort to conduct patent research without monetary compensation because they can learn about innovations and increase their knowledge in a specialized industry (Noveck 2009). Real-world practices in public sector crowdsourcing have confirmed Lerner and Tirole's (2002) theory. Peer to Patent allows peer reviews of participants’ contributions on the platform. Noveck (2009) argues that participants, especially college students, treat peer reviews as a learning and creative skill-improvement process that can benefit their career advancement despite the fact that they are not paid for their work.
One major drawback occurs when the government fails to implement the ideas or proposals received during the crowdsourcing process. For instance, hackathons in the public sector, which are multiday competition events for developing software programs or apps, are unsustainable without government support to standardize and sponsor data sets across cities. The reality is that very few apps created from hackathons continue to be actively downloaded or provide monetization for the initiator (Porway 2013; Schrier 2013). To overcome this problem, state governments can standardize these data to build national tools instead of leaving incompatible data sets spread across various cities (Badger 2013). For example, in Bloomington, Indiana, Open311 has created open-source civic reporting apps that allow other cities to adopt and modify useful apps. With the support of local governments and the close involvement of government officials, this model works through both dedicated community 311 and Open311 programs (Robinson and Johnson 2016).
In sum, the crowdsourcing literature shows how a simple design offering a variety of tasks can be matched with different skill sets, improving the quality of citizen participation. Breaking complex projects down into simple, manageable tasks for participants can help them build their skills incrementally. However, it is time-consuming to foster learning and skill building, which are essential to improving contribution quality.
Lesson 5: Empowering Participants through Peer Review
This review shows that by designing a fair selection and review process, the government could benefit by empowering participants to select and review their own contributions. A fair process enhances the legitimacy of participants’ contributions and improves the substance of content by aggregating the wisdom of the crowd (Ellis 2014). An unfair process could be harmful to crowdsourcing's sustainability and could prevent participants from returning. This review shows three approaches that empower participants to select and review their own work.
First, the most commonly adopted method is a rating system that incorporates voting by and comments from participants. In many open innovation contests, such as Challenge.gov and Next Stop Design, the participants not only contribute their ideas but also vote for the best designs. An enhanced rating system is a good tool both for analyzing the composition of the participants and for demonstrating different individuals’ skills (Dellarocas 2010) by presenting raw activity data, scores, or leaderboards and rankings. However, this method may have a “rich-get-richer” effect and invite cheating. For example, participants may attempt to individually or collaboratively promote or demote an idea based on a specific agenda. Studies show that the incorporation of a mechanism to detect unfair ratings from participants or the establishment of clear rules on voting and rating can prevent bias (Allahbakhsh et al. 2014; Karger, Oh, and Shah 2014; Wan 2015).
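As a hedged illustration of such a detection mechanism (not the method used by any of the cited platforms; the data, names, and threshold are hypothetical), the sketch below averages participants' ratings per submission and flags raters whose scores consistently deviate far from the consensus, so that their votes can be reviewed or down-weighted before winners are selected.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical ratings: (rater_id, submission_id, score on a 1-5 scale).
ratings = [
    ("alice", "idea-1", 4), ("bob", "idea-1", 5), ("carol", "idea-1", 4),
    ("alice", "idea-2", 2), ("bob", "idea-2", 3), ("carol", "idea-2", 5),
    ("dave",  "idea-1", 1), ("dave", "idea-2", 1),  # dave always rates 1: possible demotion campaign
]

def consensus_scores(ratings):
    """Mean score per submission across all raters."""
    by_submission = defaultdict(list)
    for _, submission, score in ratings:
        by_submission[submission].append(score)
    return {s: mean(scores) for s, scores in by_submission.items()}

def flag_outlier_raters(ratings, max_mean_deviation=2.0):
    """Flag raters whose average distance from the consensus exceeds a threshold."""
    consensus = consensus_scores(ratings)
    deviations = defaultdict(list)
    for rater, submission, score in ratings:
        deviations[rater].append(abs(score - consensus[submission]))
    return [r for r, devs in deviations.items() if mean(devs) > max_mean_deviation]

print(consensus_scores(ratings))
print(flag_outlier_raters(ratings))  # with this sample data, only "dave" is flagged
```

A production system would refine this in many ways (for example, excluding a rater's own votes from the consensus or weighting by rater history), but the sketch shows how clear, published rules for ratings can be checked automatically.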
Second, prior crowdsourcing experiences suggest that collaboration could be a constructive strategy for the public sector to empower participants, as seen in the adoption of a wiki framework that allows participants to create and edit content collectively. The wiki framework allows participants to share information and helps facilitate consensus, as seen in Future Melbourne 2020, which developed a long-term strategic plan for the city of Melbourne, Australia (Liu 2016). Additionally, the wiki approach has been adopted for connecting internal experts or employees with distant locations, as seen in the NASA Wiki (Clark et al. 2016) and Intellipedia, which developed an intranet for sharing data and information within the U.S. intelligence community (Knox 2016). NASA's internal wiki allows experts or employees from different NASA centers to exchange information relevant to common projects, thus reducing transaction costs (Clark et al. 2016) and enhancing communication and collaboration (Verville, Jones, and Rober 2011). A collaborative process helps create consensus among participants because wiki editing forces individuals with different opinions to read and learn opposing views before they can change content.
Third, the decentralization of information control to either citizens or frontline officers helps remove the barriers of asymmetrical information (Mergel and Bretschneider 2013). Studies show that community building is essential to sustain crowdsourcing platforms because participants contribute when they see themselves as community members and feel a sense of belonging through their meaningful contributions. Recognition as a valuable and contributing community member is found to be effective for improving the quality of contributions and encouraging participants to return. For instance, studies of SeeClickFix (Ashby et al. 2015), Look at Linz in Austria (Schmidthuber and Hilgers 2017), and Yelp (Askay 2017) show that factors such as a sense of community belonging and social identity play important roles.
However, the collaborative approach can suffer from information redundancy and quality control problems (Mergel and Bretschneider 2013). From a systems perspective, several studies argue that an effective design should combine competition and collaboration to build a crowdsourcing community (Almirall, Lee, and Majchrzak 2014; Hutter et al. 2011; Lampel, Jha, and Bhalla 2012; Majchrzak and Malhotra 2013). Hutter et al. (2011) find that the best process design is one that enables competitive participation in a cooperative climate that allows users to improve the quality of submitted ideas through constructive commenting. Such a community-based approach could optimize openness, enable the negotiation of necessary resources among participants (such as data sharing), and internalize priority setting within the system (Almirall, Lee, and Majchrzak 2014).
Evidence from the literature shows that empowering participants requires designs that combine constructive competition and supportive cooperation in a crowdsourcing community (Hutter et al. 2011). Bayus (2013) emphasizes the importance of providing feedback about the ideas being implemented or considered, which allows participants to feel that their input is being taken seriously and helps them understand what constitutes a feasible solution. However, governments are bound by bureaucratic rules and procedures and may not be able to respond to participants as quickly as is usually expected in social media contexts (Mergel and Desouza 2013). Furthermore, the overwhelming amount of output from participants requires substantial time and effort from government officials (Benouaret, Valliyur-Ramalingam, and Charoy 2013). Thus, faced with such constraints in its operating environment, the government can benefit by engaging and empowering participants to select and review their own contributions.
Lesson 6: Legitimating Evaluation by Integrating a Reputation System into the Crowdsourcing Process
The literature suggests that a well-designed reputation system that transparently links participants’ activities with their contributions is essential to successful crowdsourcing (Dellarocas 2010; Lampe 2006; Saxton, Oh, and Kishore 2013). Studies show that effective reputation systems can refine crowdsourcing outputs, reveal participants’ preferences, and enhance the fairness of the reward system to the participants (Agerfalk and Fitzgerald 2008; Dellarocas 2010). A well-designed reputation system integrates outputs from peer review, such as ratings, voting, scores, and editing, and openly displays those aggregated reviews. Common ways of displaying reputation information include raw statistics that summarize participants’ activity history, leaderboards that display participants’ rankings, and scores that indicate the levels and tiers of participants’ contributions. The literature shows that the key to an effective reputation system is to balance sufficient incentives for the participants to “earn prestige” in the community with the prevention of “too much competition” that leads participants to cheat the system (Agerfalk and Fitzgerald 2008; Dellarocas 2010; Saxton, Oh, and Kishore 2013).
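To make the aggregation step concrete, the minimal sketch below (the signal names, weights, and sample data are assumptions for illustration rather than a documented reputation system) combines several peer-review signals into a single reputation score and prints a leaderboard-style ranking of the kind described above.

```python
# Hypothetical per-participant activity signals gathered from peer review.
activity = {
    "participant_a": {"accepted_ideas": 3, "helpful_votes_received": 25, "flags_received": 0},
    "participant_b": {"accepted_ideas": 1, "helpful_votes_received": 40, "flags_received": 2},
    "participant_c": {"accepted_ideas": 0, "helpful_votes_received": 5,  "flags_received": 0},
}

# Illustrative weights: reward accepted contributions and helpful votes, penalize flags.
WEIGHTS = {"accepted_ideas": 10, "helpful_votes_received": 1, "flags_received": -5}

def reputation_score(signals):
    """Weighted sum of peer-review signals for one participant."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

def leaderboard(activity):
    """Rank participants by reputation score, highest first."""
    scored = {p: reputation_score(s) for p, s in activity.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

for rank, (participant, score) in enumerate(leaderboard(activity), start=1):
    print(f"{rank}. {participant}: {score}")
```

The choice of weights is exactly where the balance described in the literature is struck: generous weights make prestige easier to earn, while penalties for flagged behavior discourage gaming the rankings.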
Making the performance of crowdsourcing contributors transparent creates legitimacy for peer reviews. For instance, Slashdot, a technology and science news site written by amateurs, has adopted a scoring system to select moderators from among its contributors to filter and evaluate comments on the news. This system posts comments based on ratings provided by moderators instead of listing them by date. The process of becoming a moderator requires both an effort to “earn privileges” and the establishment of meta-moderators, who are selected as the moderators of moderators (Lampe et al. 2014). Slashdot allows the crowd not only to comment directly on the quality of the comments but also to decide who should moderate the process through a meta-moderator selection process. This process encourages participants to care about their contributions and to take responsibility for the discussion process.
However, when scores and rankings promote unnecessary competition or cheating, a reputation system may damage trust among members of the crowdsourcing community (Brabham 2012). For instance, Brabham (2012) found that cheating in contest ratings and voting was a major concern in an open competition for a bus stop design in Salt Lake City, Utah. He found that 27.6 percent of all votes cast in the competition hosted by Next Stop Design came from a handful of users who created several dummy accounts. Thus, a rigorous method is needed to detect fraudulent accounts because preventing cheating is an important step in maintaining the legitimacy of the evaluation system in a competition that provides public goods and services.
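A rigorous detection method would draw on richer signals than this review specifies (registration timestamps, network addresses, voting behavior over time). As one hedged sketch, the code below flags clusters of accounts that share a sign-up source and vote for the same entry, the dummy-account pattern Brabham (2012) describes; all field names and the threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical vote records: (account_id, voted_for, signup_ip).
votes = [
    ("user01", "design-7", "10.0.0.5"),
    ("user02", "design-7", "10.0.0.5"),
    ("user03", "design-7", "10.0.0.5"),
    ("user04", "design-2", "192.168.1.9"),
    ("user05", "design-7", "172.16.4.2"),
]

def suspicious_vote_clusters(votes, min_cluster_size=3):
    """Group votes by (sign-up IP, chosen entry) and flag unusually large clusters,
    which may indicate dummy accounts created to promote a single submission."""
    clusters = defaultdict(list)
    for account, entry, ip in votes:
        clusters[(ip, entry)].append(account)
    return {key: accounts for key, accounts in clusters.items()
            if len(accounts) >= min_cluster_size}

print(suspicious_vote_clusters(votes))
# With this sample data: {('10.0.0.5', 'design-7'): ['user01', 'user02', 'user03']}
```

Flagged clusters would still need human review before any votes are discarded, which keeps the evaluation both rigorous and accountable.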
In the public sector, the establishment of a legitimate reputation system is bounded by accountability constraints and by the existing legal framework that regulates government use of online platforms and the privacy of users’ data. For instance, it remains essential for governments to ensure not only the privacy and safety of user-generated content but also the quality of crowdsourcing outcomes. Since April 2010, the Office of Management and Budget has issued three memos on federal agency use of and interaction with social media technologies (Bertot, Jaeger, and Hansen 2012). Memo M-10-22 promotes website analytics and customization of the user experience and ensures the protection of data privacy for users. Although these guidelines help improve governments' use of online platforms, a more comprehensive legal framework is necessary for crowdsourcing to be effective (Bertot, Jaeger, and Hansen 2012; Bowser and Shanley 2013).
Conclusions
Crowdsourcing enables governments to outsource public services and policy innovation through well-designed projects. This process empowers citizens by increasing their capacity to solve public problems. Within a short time, the government sector has made enormous progress in implementing various crowdsourcing projects in different government agencies. Approximately 22 percent of the reviewed articles (N = 38) address crowdsourcing practices in the public sector. Building on important reviews of crowdsourcing in the public sector (Brabham 2015; Clark and Logan 2011; Linders 2012; Nam 2012; Prpić, Taeihagh, and Melton 2015), this review highlights the gaps that future studies could further address.
Future studies could address the transferability of private sector crowdsourcing experiences and practices to the public sector in the area of legal constraints. The potential of crowdsourcing for public purposes requires a comprehensive legal framework to ensure not only the accountability of the responsible agencies but also the privacy and rights of the participants. In addition, existing studies on motivation are based on private outcomes and services. Only three of the reviewed articles focus on motivation in public crowdsourcing projects (see figure 4). Additional studies of motivation for participants and contributors are needed in the crowdsourcing environment that produces public goods and services. Finally, continued efforts to document ongoing crowdsourcing cases and best practices are needed because these reviews can inform public managers about how to align possible types of crowdsourcing and tasks to fulfill different agencies’ objectives in different contexts and at different levels (Brabham 2015; Prpić, Taeihagh, and Melton 2015). Further reviews of the evolution of crowdsourcing cases in the public sector are essential to identify key stakeholders and factors that sustain the crowdsourcing community in the public sector (Prpić, Taeihagh, and Melton 2015).

The field can also be advanced by additional collaborations between scholars and practitioners. For instance, Brabham (2012) both implemented and studied the Next Stop Design project with a government partner in Salt Lake City. Lutz (2011) suggests that marketing journals could allow the public to comment on research and invite practitioners with practical problems to call for solutions through a rigorous review process. To move forward, public administrators can also adopt techniques and technologies from crowdsourcing to make a stronger research contribution and integrate practitioners into the process of creating public administration knowledge (Schweik et al. 2011).
Acknowledgments
The author would like to thank the anonymous reviewers and journal editor for their constructive comments. This research was funded by Seed Funding for Basic Research from the University of Hong Kong and Public Policy Research Fund from the Central Policy Unit (2016.A8.052.17A). The author would like to acknowledge the contributions made by the research assistants. This is an independent research project.
Note
Biography
Helen K. Liu is assistant professor in the Department of Politics and Public Administration at the University of Hong Kong. She received her PhD in public administration from Indiana University, Bloomington. Her research focuses on crowdsourcing adaptation, collaborative governance, and nonprofit management. E-mail: [email protected]