Concept Paper

Justice for the Crowd: Organizational Justice and Turnover in Crowd-Based Labor

1 Business Department, Misericordia University, Dallas, PA 18612, USA
2 Michael A. Leven School of Management, Entrepreneurship and Hospitality, Kennesaw State University, Kennesaw, GA 30144, USA
3 Department of Management, University of Alabama, Tuscaloosa, AL 35487, USA
* Author to whom correspondence should be addressed.
Adm. Sci. 2020, 10(4), 93; https://doi.org/10.3390/admsci10040093
Submission received: 4 November 2020 / Revised: 12 November 2020 / Accepted: 14 November 2020 / Published: 23 November 2020

Abstract

Crowd-based labor has been widely implemented to solve human resource shortages cost-effectively and creatively. However, while investigations into the benefits of crowd-based labor for organizations exist, our understanding of how crowd-based labor practices influence crowd-based worker justice perceptions and worker turnover is notably underdeveloped. To address this issue, we review the extant literature concerning crowd-based labor platforms and propose a conceptual model detailing the relationship between justice perceptions and turnover within the crowd-based work context. Furthermore, we identify antecedents and moderators of justice perceptions that are specific to the crowd-based work context, as well as identify two forms of crowd-based turnover as a result of justice violations: requester and platform turnover. In doing so, we provide a novel conceptual model for advancing nascent research on crowd-based worker perceptions and turnover.

1. Introduction

Organizations are faced with an ever-growing challenge to acquire human resources (Chambers et al. 1998). One recent development in addressing this challenge has been to eschew traditional human resource acquisition models and employ a more task-based approach to acquiring human resources; that is, instead of hiring permanent employees, organizations contract temporary workers to complete particular tasks or projects (e.g., Segal and Sullivan 1997). The development of the internet and online payment systems has made it possible for organizations to tap into the temporary workforce by going beyond conventional sources, such as staffing agencies or employment centers (Sundararajan 2016). In particular, the “crowd” has become a new source for organizations to capitalize on flexible labor exchange, where the crowd consists of individuals who work outside of organizational boundaries (Barnes et al. 2015), and organizations that assign work to the crowd (i.e., crowdwork) are considered to be utilizing “crowd-based labor” (Howe 2006).
Despite the promise associated with using crowd-based labor, the influence of crowd-based labor practices and policies on crowd-based workers has rarely been investigated. This is particularly true regarding crowd-based labor turnover, which is considered an especially high risk associated with utilizing the crowd as a substitute for the traditional workforce (Chandler et al. 2013). In this review, we contribute to the crowd-based labor literature by detailing the nature of this new human resource acquisition technique and discussing the implications of its application for the perceptions and behaviors of individuals working within crowd-based labor. We accomplish this by employing an organizational justice framework, where organizational justice refers to individuals’ perceptions of and attitudes toward the policies, practices, and activities that are initiated and implemented within an organization (Greenberg 1987; Cropanzano et al. 2001b; Colquitt 2001). Using this framework, we discuss how to integrate prior research and theory in the organizational justice domain with crowd-based workers’ experiences and their interactions with requesters, as well as how these interactions influence crowd-based labor turnover.

2. Literature Review

Traditional human resource acquisition techniques include employee referrals, direct applications, college placement office/employment agencies, job fairs, and media advertisements (Breaugh et al. 2003). More recently, organizations have started to acquire human resources via online platforms that enable access to the crowd (Kittur et al. 2013), a term used to describe a large network of people with varying levels of knowledge, skills, and abilities that operate outside of the hiring organization (Howe 2006; Nakatsu et al. 2014). Internet-enhanced work systems and online payment methods have enabled organizations to capitalize on the crowd as a new source of flexible labor exchange to acquire human resources (Barnes et al. 2015). The combination of the crowd with labor exchange is generally defined as “crowd-based labor”, which represents a workforce that organizations can use to solve labor shortages by transferring tasks traditionally performed by their employees to a network of people outside of the organization (Howe 2006).
In order to adequately evaluate crowd-based labor as a means of supplementing traditional human resource practices, it is imperative to first define several key terms in the crowd-based labor domain, including crowdsourcing, crowdwork, requesters, workers, and gig work. Crowdsourcing describes the process of acquiring labor from the crowd, the term used to describe the general external network of individuals available for hire (Howe 2006). In this way, crowdsourcing provides access to the broad knowledge base of the crowd in an open call format; furthermore, those participating in the crowd can choose to work spontaneously upon the receipt of these open calls (Dissanayake et al. 2015; Howe 2006). Crowdsourcing can thus be considered a process by which external human intelligence is harnessed (Zheng et al. 2011) to perform diverse tasks (Mao et al. 2013).
As a subset of crowdsourcing in the business domain, crowdwork represents the application of crowdsourcing within the business setting (Schulte et al. 2020). It relies on online platforms as virtual locations that allow requesting organizations (i.e., requesters) and the participating workforce (i.e., workers) to engage in labor exchanges: requesters post tasks on platforms, and workers take and finish those tasks in return for financial compensation (Boons et al. 2015). By utilizing crowdsourcing, crowdwork allows requesters to arrange their work over the internet by assigning jobs to a crowd of individuals who are not formally affiliated with the organization (Boons et al. 2015), for the purpose of leveraging workers’ dispersed knowledge, skills, and abilities via internet-mediated platforms (Brabham 2008, 2013; Gassenheimer et al. 2013).
As the application of crowdsourcing has expanded to multiple types of tasks (Nakatsu et al. 2014) and multiple business sectors across nations (Mandl et al. 2015), crowdwork has been split into two general categories based on the way services are delivered (Fernandez-Macias 2017). These categories are online-based service delivery operated and controlled by online platforms (e.g., Amazon MTurk) and offline-based service delivery operated and controlled by online platforms (e.g., TaskRabbit). Noteworthy in Fernandez-Macias’s (2017) categorization is that the author used “crowd work” to describe online-based service delivery and “gig work” to describe offline-based service delivery.
More recently, instead of describing offline-based service delivery only, Duggan and colleagues (Duggan et al. 2020) used the term “gig work” to describe all types of crowdwork, both online- and offline-based. Specifically, these authors provided three categories based on platforms: (1) capital platform work, which includes work related to selling products or leasing physical assets via platforms (e.g., Airbnb and Etsy); (2) crowdwork, which includes work assigned to a geographically dispersed crowd via platforms (e.g., Amazon MTurk and Fiverr); and (3) app work, which includes offline work provided on-demand, with requesters and workers connected via platforms (e.g., TaskRabbit, Uber, and Lyft). Compared to Fernandez-Macias’s (2017) categorization, which termed online-based service “crowd work” and offline-based service “gig work,” Duggan et al. (2020) considered “gig work” to be an overarching term that covers both online-based service (e.g., crowdwork) and offline-based service (e.g., capital platform work and app work). However, despite the different definitions of “gig work” provided by these authors, they converge in that online- and offline-based services represent the two main types of crowdwork.
In a similar vein, Howcroft and Bergvall-Kåreborn (2019) expanded crowdwork categorization by introducing four types of crowdwork based on compensation and the initiating party: Type A, online task crowdwork, in which workers finish microtasks (e.g., smaller pieces of a major project) and receive pre-specified compensation; Type B, “playbour” crowdwork, in which workers finish tasks for fun and enjoyment instead of financial compensation; Type C, asset-based services, in which workers deliver services offline by utilizing assets/equipment they own; and Type D, profession-based freelance crowdwork, in which workers in certain specializations provide professional services, usually involving a higher level of knowledge, skills, and abilities. This categorization aligns with the two general types of crowdwork provided by Fernandez-Macias (2017) as well, such that Types A and B belong to online-based service, Type C belongs to offline-based service, and Type D can be either online- or offline-based service. Table 1 compares these authors’ categorizations.
Importantly, crowd-based labor is different from traditional outsourced labor in that the participating workforce within crowd-based labor comes from the general public with varying levels of knowledge, skills, and abilities (Howe 2006), whereas the workforce associated with traditional outsourced labor comes from one or multiple specific parties that are identified either through open competition or a bidding process (Lankford and Parsa 1999; Marjanovic et al. 2012). Examining crowd-based labor in terms of its source and purpose helps both academics and practitioners develop a stronger understanding of crowd-based activities and their procedures.

2.1. Key Elements of Crowd-Based Labor

Recent literature has extensively examined crowd-based labor regarding who should be considered the crowd, how requesters and workers communicate, how workers receive compensation, and how work outcomes are verified (e.g., Estellés-Arolas and González-Ladrón-de-Guevara 2012; Hetmank 2013; Nakatsu et al. 2014). These studies, taken together, suggest the following key elements of crowd-based labor.
The first element is crowd-based labor platforms, which are mediating platforms that connect workers and requesters. Since requesters and workers are dispersed, they need a connecting platform that spans beyond temporal, geographical, and organizational boundaries (Alberghini et al. 2013; Gregg 2010). As an alternative way to tap into external human resources, crowd-based labor platforms are virtual places that allow requesters and workers to connect virtually (Boons et al. 2015; Zhao and Zhu 2012) and aggregate human intelligence by utilizing the internet and communication technologies (Barnes et al. 2015).
The second element is the worker’s professional status. As noted earlier, crowd-based labor refers to a workforce that finishes tasks issued by requesters without having a formal employment contract. While debates and legal actions exist regarding how to classify crowd-based labor (Keith et al. 2020), according to the rules of behavioral control, financial control, and relationship type (Topic Number 762—Independent Contractor vs. Employee, Internal Revenue Service 2020), crowd-based workers are currently categorized as independent contractors, as they do not receive direct supervision or work-related materials from requesters, nor do they have formal employment contracts with requesters. These rules are applicable to and have been widely used by crowd-based labor platforms.
The third element is collective human intelligence. As an alternative way to tap into external human resources, crowd-based labor platforms establish connections between requesting organizations and external human resources (Alberghini et al. 2013) by utilizing communication technologies (Barnes et al. 2015). By tapping into external human resources, crowdsourcing can generate a collective human intelligence that goes beyond an organization’s boundaries (Gregg 2010).
The fourth element is the open call. As indicated earlier, it is the crowd that makes crowdwork distinct from conventional human resource acquisition techniques, because crowdwork is a practice that taps into the wisdom of a large crowd of diversified people (Howe 2009). As such, crowdwork is accessible to a broad participating workforce that has the potential to contribute in unique and various ways. Importantly, platforms can also set varying threshold levels for participating in a task (e.g., Amazon MTurk) so that workers take tasks based on their level of qualification or past achievement. With these elements in mind, we conducted a comprehensive review of crowd-based labor platforms to inform our review and to identify areas in need of additional theorizing and research.

2.2. Review of Crowd-Based Labor Platforms

Mediating platforms are the basis of any crowdwork because they connect workers and requesters (Boons et al. 2015; Zhao and Zhu 2012). Here, we examine existing business crowd-based labor platforms by reviewing their founding year, business model, compensation policy, payment procedure, performance evaluation methods, and platform-supported communication. Aguinis and Lawal (2013) reviewed several crowd-based labor platforms, such as eLance, oDesk, and Freelancer. However, changes such as mergers and acquisitions are ubiquitous among crowd-based labor platforms, coupled with rapid expansion and development in this emergent sector (e.g., oDesk merged with Elance and relaunched as Upwork in 2015; Freelancer acquired multiple platforms during the past decade, including GetAFreelancer, EUFreelance, LimeExchange, Webmaster-talk, vWorker, and Escrow). Therefore, our review revisits the extant platforms and provides an update to previous studies.
To better understand crowd-based labor platforms, we conducted an inductive exploratory study on extant platforms. Inductive studies have been widely used by business researchers to form abstractions based on observing reality (Locke 2007), so that generalizable results and new patterns can be detected (Jebb et al. 2017) and new knowledge can be discovered beyond observation (Woo et al. 2017). By following best practices for inductive research (Woo et al. 2017), we examined existing business crowd-based labor platforms by comprehensively reviewing a variety of platform characteristics, including founding year, business model, compensation policy, payment procedure, performance evaluation methods, and platform-supported communication.
To conduct the platform review, we performed a comprehensive search for existing crowd-based platforms. As indicated earlier, the works of Fernandez-Macias (2017), Duggan et al. (2020), and Howcroft and Bergvall-Kåreborn (2019) provided comprehensive categorizations and clear direction for identifying platforms. Additionally, we primarily focused on general platforms (e.g., Fiverr, Upwork, and Freelancer) because these platforms have a higher level of worker representativeness (i.e., workers with a large variety of backgrounds, knowledge, skills, and abilities) and task comprehensiveness (i.e., a large variety of tasks posted by requesters).
We conducted a platform search using keywords provided by Fernandez-Macias (2017), Duggan et al. (2020), and Howcroft and Bergvall-Kåreborn (2019), such as “freelancing”, “freelancer”, “crowd”, “crowdwork”, “crowd-based”, “crowdsource”, and “crowdsourcing”. Our search effort yielded over 100 platforms. After carefully reviewing these platforms, we excluded platforms based on the following exclusion criteria: (1) platforms that have recently been merged or purchased, such as CrowdFlower, oDesk, and Figure Eight; (2) non-profit platforms, such as Seed Company and Global Solution Networks, because these platforms do not represent the labor exchange between requesters and workers; (3) platforms that provide workers with no direct financial compensation, such as Toluna, because these platforms reward their participants with “reward points” instead of financial compensation, which is an important element of labor exchange (Barnes et al. 2015); (4) platforms that are not relevant to business crowd-based labor, such as GoFundMe; and (5) platforms that only provide offline-based services, such as Uber, Lyft, Airbnb, and Foodora, as these are outside the research focus of this review. Using these criteria, we identified 41 crowdsource-based labor platforms, which are reported in Table 2.

2.3. Platform Review Results

Along with information provided in Table 2, some noteworthy characteristics warrant mentioning. First, the majority of the platforms were established after 2006, the year that the term “crowdsourcing” was coined by Jeff Howe. The platforms that were established before 2006 mainly focused on computer science-related work, such as Guru (formerly eMoonlighter), NineSigma, and TopCoder, whereas after 2006, the type of work available on platforms became more diversified, such as design, microtask, problem-solving, etc.
Second, with regard to delivery methods, some platforms have extended to offline-based services or maintain both online- and offline-based services simultaneously. These types of platforms require workers to go to local locations to finish tasks per requesters’ needs. Examples of this type of platform are TaskRabbit and Thumbtack.
Third, with regard to tasks, the majority of the platforms focus on design, programming/coding, and professional freelancing work, which includes business-related and technology-related work. Moreover, most tasks are marketplace-based, such that requesters post the tasks on the platform, and workers pick and choose the tasks that they are willing to work on; once a task is taken by a worker, it is no longer available to others. Some platforms take a contest-based approach, such that a task can be taken by multiple workers simultaneously, and the requester picks and pays for the single worker’s submission that they approve of, disregarding the remaining submissions. It can be inferred that the platforms that take a contest-based approach are exposed to more uncertainty, for their registered workers are less likely to have guaranteed compensation from requesters. Therefore, as shown in Table 2, many of the platforms with the contest-based approach also maintain a marketplace-based approach simultaneously.
Fourth, with regard to earnings and profit, the vast majority of platforms maintain their operations and make profits by charging their clients (requesters) a percentage-based commission, ranging from 2% (iJobDesk) to 40% (ClickWorker) of the compensation paid to workers by requesters. For example, Upwork charges 2.75% of the compensation that requesters pay to workers; this rate is 10% at Freelancer and 20–40% at Amazon MTurk, which charges an extra 20% when there are ten or more assignments within a task. Meanwhile, a small number of platforms pay workers on an hourly basis (e.g., PeoplePerHour). Moreover, some platforms charge requesters fees for posting tasks on platforms. In addition, some platforms do not charge any commission or fee from workers or requesters; instead, they require requesters, workers, or both parties to purchase subscriptions in order to gain access to the platform.
Fifth, the majority of platforms use escrow accounts, such that requesters need to pre-pay a certain amount before posting tasks (i.e., an upfront payment to platforms). The funds are deposited into an escrow account set up by the platform and then transferred to workers once the task is finished by the worker and verified by the requester. A minimal sketch of this commission-and-escrow flow appears below.
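To make these payment mechanics concrete, the following sketch models the typical commission-and-escrow flow under the fee figures cited above. The rates, function names, and payout split are illustrative assumptions for exposition only, not any platform’s actual implementation.

```python
# Illustrative sketch of the percentage-commission and escrow flow described
# above. The rates are figures cited in this review (e.g., Upwork 2.75%,
# Freelancer 10%); real fee schedules vary by platform and change over time.

COMMISSION_RATES = {"Upwork": 0.0275, "Freelancer": 0.10, "ClickWorker": 0.40}

def requester_prepayment(platform: str, worker_pay: float) -> float:
    """Amount the requester deposits into escrow when posting a task:
    the worker's compensation plus the platform's commission on it."""
    return round(worker_pay * (1 + COMMISSION_RATES[platform]), 2)

def release_escrow(platform: str, worker_pay: float, verified: bool) -> dict:
    """Escrowed funds are released to the worker only after the requester
    verifies the finished task; otherwise they remain held by the platform."""
    total = requester_prepayment(platform, worker_pay)
    if not verified:
        return {"escrowed": total, "worker_payout": 0.0, "platform_fee": 0.0}
    return {"escrowed": 0.0,
            "worker_payout": worker_pay,
            "platform_fee": round(total - worker_pay, 2)}

print(release_escrow("Freelancer", 100.0, verified=True))
# {'escrowed': 0.0, 'worker_payout': 100.0, 'platform_fee': 10.0}
```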
Sixth, concerning work verification and evaluation, work is usually evaluated by either a case-based or a rule-based evaluation method (Prentzas and Hatzilygeroudis 2009). Specifically, the case-based method is an evaluation method based on the specific circumstances of the task. For instance, a logo design task is evaluated by the extent to which the finished logo design reflects the requester’s special needs (e.g., for a business or a special event), and the evaluation is based on the requester’s previous experiences. In comparison, the rule-based method is an evaluation method based on a well-defined, universally accepted knowledge base that results in standardized rules and requirements that are widely accepted within a certain field (Dutta and Bonissone 1993). For instance, transcription tasks have a general, universal criterion (the number of typos in the finished transcription), and all requesters requesting transcription tasks would take this evaluation approach.
Seventh, a small portion of the platforms (e.g., Amazon MTurk and CrowdFlower) support the decomposition of jobs. Specifically, on these platforms, tasks can be decomposed into multiple smaller pieces (i.e., distributed work, Brabham 2008; or microtasks, Howcroft and Bergvall-Kåreborn 2019), making it possible for multiple crowdsourcing workers to work on the same task simultaneously. This is what crowdsourcing researchers have termed “modularity” or “granularity” (Baldwin and Von Hippel 2010; Cullina et al. 2015), which is important for collaboration because modularized work can be completed by multiple workers independently and in parallel, thus decreasing the complexity of each piece of work. The sketch below illustrates this decomposition.
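As a minimal illustration of modularity, the following sketch decomposes a hypothetical job into independent microtasks that separate workers could claim in parallel; the class and function names are ours, invented for exposition, and do not reflect any platform’s data model.

```python
# Minimal sketch of task decomposition ("modularity"/"granularity"): a large
# job is split into independent microtasks so many workers can complete them
# in parallel. All names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Microtask:
    job_id: str
    index: int
    payload: str                    # one small, self-contained unit of work
    assigned_to: str | None = None  # worker who has claimed this piece

def decompose(job_id: str, items: list[str]) -> list[Microtask]:
    """Turn a job (e.g., many images to label) into one microtask per item;
    each piece can then be claimed and finished independently."""
    return [Microtask(job_id, i, item) for i, item in enumerate(items)]

tasks = decompose("labeling-job-001", ["img_001.png", "img_002.png", "img_003.png"])
tasks[0].assigned_to = "worker_A"   # different workers claim different pieces
tasks[1].assigned_to = "worker_B"
```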
Eighth, as the middle person connecting requesters and workers, platforms take different approaches to facilitating interactions between requesters and workers. For instance, the majority of platforms provide built-in, multi-media messaging systems with integrated voice and video communication for requesters and workers to discuss job specifications, work progress, work quality, and compensation. Some platforms (e.g., 99Designs and Thumbtack) allow requesters and workers to have direct real-time communication, others (e.g., CrowdSpring and Guru) offer direct but not real-time communication, and some platforms (e.g., Chaordix and SPIGIT) do not provide any direct communication channel. One interesting finding from the communication mechanisms is that direct communication between requesters and workers seems related to the performance evaluation method, such that direct communication is more likely to take place when a task is evaluated using a case-based approach and less likely when a task is evaluated using a rule-based approach. One possible explanation for this finding is that case-based evaluation is less likely to include universally accepted criteria, making direct communication more necessary to supplement the evaluation.
Ninth, many platforms (e.g., Prolific and iJobDesk) employ pre-screening processes for both requesters and workers to make sure qualified workers are recruited and tasks do not introduce risk to workers. Furthermore, some platforms have gone a step further by integrating quality control mechanisms into work evaluation (e.g., Appen, which is not included in Table 2), such that platforms can detect worker errors and notify requesters about potential quality issues.
Lastly, many platforms offer to function as arbitrators, such that when there is a dispute between a requester and a worker about task evaluation or compensation, the platform intervenes to investigate and resolve the dispute.

2.4. Benefits and Concerns of Crowd-Based Labor

As demonstrated in the breadth of platforms identified and discussed above, crowd-based labor in the business context has attracted considerable attention over recent years (Simula and Ahola 2014). This is often attributed to the various benefits associated with using crowd-based labor, such as gaining access to diversified knowledge and to new perspectives that would otherwise be absent (Gassenheimer et al. 2013; Surowiecki 2005), acquiring information from a highly diversified and representative workforce (Behrend et al. 2011; Buhrmester et al. 2011; Gassenheimer et al. 2013; Paolacci et al. 2010), leveraging previously unattainable resources and building competitive advantages (Prpić et al. 2015), decreasing the possibility of making decisions based on groupthink and common information (Surowiecki 2005), and having a cost-effective method for dealing with human resource shortages (Acosta et al. 2013; Chaisiri 2013; Buhrmester et al. 2011; Irani 2013; Yuen et al. 2011). Additionally, from the workers’ standpoint, crowdwork makes it possible for workers to have a higher level of flexibility and latitude regarding working hours and work location, so that a balance between work and life can be more easily maintained. Crowdwork also brings a higher level of job variety, allowing workers to pick and choose their preferred work from platforms.
Despite these benefits, concerns surrounding crowd-based labor are generally overlooked. These concerns have surfaced more recently as the use of crowd-based labor has continued to build in the business context. First, the use of monitoring and the feedback given to workers are relatively limited. As indicated earlier, workers are dispersed in different locations and connect to requesters through internet-mediated platforms. Because of this, it is difficult for requesters to monitor these workers’ activities effectively and provide timely feedback, reinforcing the possibility of low-quality work and limiting potential improvements in workers’ task effectiveness and efficiency (Askay 2017; Mao et al. 2013). Second, due to the absence of effective monitoring, some crowdsourcing workers may take advantage of the crowdsourcing system by engaging in character/identity misrepresentation (Sharpe-Wessling et al. 2017) and intentionally performing sloppy work (Eickhoff et al. 2012) to maximize compensation. These types of workers are referred to as “malicious workers” (Eickhoff et al. 2012). Third, many workers have limited access to regular jobs owing to a variety of internal and external circumstances (e.g., job market landscape, financial condition, flexibility, personal preference; Keith et al. 2020), which means that they may have few options other than taking on crowdwork (Jäger et al. 2019). Consequently, under-compensation is a common issue within the business crowdwork environment. For instance, a large number of requesters are reported to take advantage of workers by underpaying them, such as paying below the minimum wage (Semuels 2018), or even refusing to pay by citing various reasons (e.g., qualification issues, response time, failing to pass attention checks) after workers have submitted completed tasks (Milland 2016). Compounding this, there are currently no laws or regulations to protect the rights of the participating workforce, making under-compensation and non-compensation almost a “common practice” implemented by many requesters and turning crowdwork into a precarious undertaking (Keith et al. 2020).

2.5. Crowd-Based Labor Concerns and Their Relation to Human Resource Management (HRM)

Following our review of crowd-based labor and its characteristics, we posit that the concerns surrounding the participating workforce within crowdwork are primarily reflective of HRM issues because the participating workforce, in essence, is a human resource that goes beyond organizations’ boundaries. More specifically, from the standpoint of HRM, we propose these concerns can be categorized into three areas.
First, workers in the traditional work context have agreed-upon pay rates that are clearly specified in employment contracts and protected by laws and regulations (e.g., minimum wage laws). However, as noted earlier, workers in the crowdwork context have neither formal employment contracts nor existing laws that protect their rights. In fact, Aguinis and Lawal (2013) posited that in the crowdwork context, workers’ compensation, to a large extent, is market-oriented, such that the pay rate is subject to requesters and crowd-based labor market conditions, making it favorable to requesters but unfavorable to workers.
Second, within the crowd-based work context, it becomes challenging to implement performance evaluations in a way that mirrors the traditional work context. Extant literature has suggested that performance evaluation is a continuous process of identifying, measuring, benchmarking, and developing the performance of working individuals (Aguinis 2009; Aguinis et al. 2011). However, a continuous evaluation is difficult to maintain in the crowdwork context, due to the limited interaction between requesters and workers, as well as requesters’ limited presence in the crowd-based labor process. This can make it difficult for workers to receive and heed feedback from requesters in a timely manner. Moreover, the shifting nature of tasks and clients means that it is difficult to build expertise based on feedback provided by requesters, with such expertise leading to better pay or more desirable crowd-based tasks from the requester in the future.
Third, and compounding the difficulties surrounding continual evaluation, is the limited communication associated with the use of outcome-based evaluations, which often leads to dissatisfaction and a lack of organizational commitment in traditional work contexts (e.g., Campbell and Wiernik 2015) and is amplified in the crowdwork context. Taking Amazon’s Mechanical Turk (i.e., MTurk) as an example, requesters post tasks on the online platform, from which workers can pick various tasks from multiple requesters. When workers complete tasks, they submit their work so that requesters can verify and evaluate the results, and then decide whether to pay the worker based on job quality. Workers may either receive financial compensation as the “reward” for desirable outcomes or receive no payment, or even be blocked by requesters, as “punishment” for undesirable outcomes. This outcome-oriented process further limits the number of available performance evaluation criteria that can be utilized for determining compensation, making performance evaluation in the crowd-based work context a single-sourced, outcome-based evaluation (Aguinis and Lawal 2013). The limited feedback and overemphasis on outcome-based evaluations also demonstrate the general lack of communication between requesters and workers, which can easily lead to confusion and ambiguity from the worker’s perspective. In a similar vein, from the standpoint of communication, crowdwork poses a challenge to human resource management, as crowdwork is built upon physical distance, which negatively impacts platform-mediated communication between workers and requesters because the frequency of interaction decreases as physical distance increases (Latané et al. 1995). Moreover, miscommunication or misunderstanding increases when a mediating medium (e.g., an online platform) is present (Vukovic and Natarajan 2013).

2.6. Review of Organizational Justice

The aforementioned HRM-related issues associated with compensation, performance evaluation, and communication influence workers’ perceptions of organizational justice. This is because organizational justice is based on the transaction-based relationship between workers and employers, with workers’ work inputs (e.g., energy, time, effort) being compared to the outputs given in exchange by the employer (e.g., compensation and benefits). From the workers’ perspective, this transaction-based relationship reflects the transactional contract that “involves specific monetizable exchanges” (Rousseau 1990, p. 391) between workers and employers. Taking the completion of job tasks as an example, workers spend time, energy (e.g., manual labor or brainpower), and effort to finish job tasks, and receive economic and quasi-economic compensation when job tasks are completed (Cropanzano et al. 2001a). This exchange occurs because the completion of specific job tasks is specified in the employment contract, implying that completing certain job tasks and receiving corresponding compensation are built upon a transactional relationship with mutual, contractual consent (Opsahl and Dunnette 1966). However, when employees perceive a violation of this transaction-based relationship in terms of compensation results, the compensation determination process, or the communication/explanation of compensation, it leads to psychological contract violation and the experience of injustice.
Research in organizational justice dates back to the early 1980s. For instance, Dworkin (1986) posited that justice is understood as the basis for societal and organizational legitimacy. More recently, Goldman and Cropanzano (2015) suggested that justice describes normative standards and how these standards are implemented, such that justice not only denotes conduct that is morally expected, but also refers to whether a decision-maker adheres to norms and rules. Similarly, Colquitt and Rodell (2015) suggested that justice is best considered to be adherence to rules that reflect appropriateness, and the degree to which an organization or its top management is perceived to act consistently, equitably, respectfully, and truthfully when it comes to decision making.
Generally speaking, organizational justice refers to individuals’ perceptions of and attitudes toward the policies, practices, and activities that are implemented within organizations (Greenberg 1987; Cropanzano et al. 2001b; Byrne and Cropanzano 2001; Colquitt 2001). Dimensionally, organizational justice can be summarized into three main aspects: (1) distributive justice, a justice perception associated with the distribution of tangible or intangible outcomes as the result of certain behaviors or activities (e.g., working activities); in other words, it embodies the outcome-based justice attribute (Adams 1963; Colquitt 2001; Cropanzano and Rupp 2003; Greenberg 1987); (2) procedural justice, a process-based justice perception associated with decision-making procedures, reflecting the process-based justice attribute (Blader and Tyler 2003; Greenberg 1987; Leventhal 1980); and (3) interactional justice, which describes the perception of the degree to which certain decisions or outcomes are adequately explained to the target individual with respect and propriety (Bies and Moag 1986; Colquitt 2001; Sitkin and Bies 1993). Notably, some organizational justice researchers (e.g., Bies 2001; Greenberg 1993; Colquitt 2001) have divided interactional justice into two sub-dimensions: interpersonal justice, which reflects the quality, dignity, and respect of interpersonal treatment received from others (Bies and Moag 1986), and informational justice, which reflects the presence of explanations provided by decision-makers (Greenberg 1993; Shapiro et al. 1994). Despite the controversy about whether interactional justice should be one integral dimension or two interrelated sub-dimensions, taken together, interactional justice broadly attempts to represent organizational justice from the perspective of between-person interaction, indicating the social aspect of organizational justice (Cropanzano and Ambrose 2001; Tyler and Blader 2000).
Extant literature also indicates that one vital factor behind organizational justice is fairness, which refers to a person’s assessment or evaluation of the extent to which a process/decision is undertaken morally (Goldman and Cropanzano 2015) and appropriately (Colquitt and Rodell 2015). Indeed, as Goldman (2015) indicated, organizational justice is assumed to be largely synonymous with fairness; or, more specifically, fairness is one possible conception of justice (e.g., Colquitt and Rodell 2015; Chiaburu and Marinova 2006; Cugueró-Escofet and Rosanas 2013; Heponiemi et al. 2008).

2.7. Review of Organizational Justice and Turnover in Crowdwork Literature

Organizational justice is an indispensable component of HRM (Folger and Cropanzano 1998), such that when employees consider their relationship with their employer to be jeopardized (e.g., through perceived failures or inappropriate actions regarding performance evaluation), this leads to a perceived organizational justice violation or justice issue.
Previous studies have indicated the necessity and importance of organizational justice with regard to worker outcomes and perceptions. As indicated by Cropanzano et al. (2001a), in the organizational context, people have multiple justice-related needs, such as the needs for control, belonging, meaning, and positive self-regard, and organizational justice plays the role of satisfying each of these needs. Similarly, building on Kelman’s (1958, 2017) three-pathway social influence (i.e., the compliance, identification, and internalization pathways), Cropanzano et al. (2001b) pointed out that organizational justice researchers have “independently re-discovered Kelman’s key insight” (p. 9) by having “a long journey through conceptually varied terrain” (p. 9), and have figured out why organizational justice matters: these authors suggest that the instrumental motive, relational motive, and moral motive precipitate concerns about distributive justice, procedural justice, and interactional justice, respectively.
In traditional workplaces, organizational justice predicts employees’ work-related behaviors, such as organizational citizenship behaviors, organizational commitment, and turnover (Colquitt et al. 2001). In the crowdwork context, workers can have concerns about organizational justice issues as well, because the labor exchange and the transaction-based relationship between requester and worker remain key components, similar to more traditional work contexts.
In their recent review of organizational justice, Ryan and Wessel (2015) shed light on the new challenges of technology-mediated working contexts. Specifically, the authors pointed out that in a technology-mediated work environment, workers expect more consistent and bias-free HRM practices, as well as explanations associated with those practices. This is because the technology-mediated working context creates a more challenging environment for workers to identify the outcomes and processes that undermine justice. Even though the authors did not explicitly address the crowdwork context, these concerns largely extend to it, because mediating platforms are an integral part of crowdwork.
To date, some reviews and studies have shed light on organizational justice issues in the crowdwork context. For instance, compensation provided by requesters is related to distributive justice issues (Gleibs 2016; Irani 2013; Porter et al. 2019); performance evaluations and pricing procedures are related to procedural justice issues (Faradani et al. 2011; Kamar et al. 2012; Porter et al. 2019); and the correspondence between requesters and workers is related to interactional justice (Porter et al. 2019). To better understand justice issues in the crowdwork context, we conducted a literature review of studies that specifically summarize the current understanding of organizational justice in the crowdwork context.
Specifically, we conducted a literature search using keywords such as “freelancing”, “freelancer”, “crowdsourcing”, “crowd”, “crowdsource”, and “crowd-based” in multiple databases, including ScienceDirect, Escudos, Emerald, JSTOR, Sage Journals, Springer, Wiley, and Google Scholar. To be inclusive, these keywords were applied to all search fields, including the title, abstract, keyword list, and main text. In particular, we reviewed studies that examined the antecedents and outcomes of organizational justice in the crowdwork context and the specific dimensions of organizational justice that were discussed. Following our search, we identified ten studies that specifically discussed organizational justice, and we reviewed these articles in terms of the antecedents and outcomes of organizational justice in the crowdwork context. A summary of this review is shown in Table 3.

2.8. Literature Review Results

In terms of notable findings, studies investigating the antecedents of organizational justice in depth are somewhat lacking in the crowdwork context. The antecedents identified include value distribution, system transparency (Franke et al. 2013), workload (Ma et al. 2016), point rewarding, and feedback provision (Weng et al. 2019; Yang et al. 2018). However, these studies do not take moderators or boundary conditions into account.
Moreover, with regard to outcomes, although all identified studies discussed at least one outcome of organizational justice, only Ma et al. (2016, 2018) discussed turnover intentions, which they defined as the intention to switch from one crowd-based labor platform to another (i.e., platform turnover). In a similar vein, Brawley and Pury (2016) operationalized crowd-based workers’ turnover intentions as their unwillingness to continue to work for the same requester (i.e., requester turnover). Since there is a triadic relationship among worker, requester, and platform when it comes to crowdwork (Fieseler et al. 2019), we posit that, because of the unique circumstance that workers face (i.e., having both the requester and platform involved in the work process), both types of turnover exist and should be considered in crowdwork: requester turnover and platform turnover.
Additionally, because organizational justice is an integral concept that covers three dimensions (e.g., Sitkin and Bies 1993; Colquitt 2001), all three dimensions should be considered simultaneously, as organizational justice is a comprehensive concept consisting of outcome, process, and explanation.
Based on our review of the platforms and literature, in the next few sections, we propose antecedents of justice perceptions as well as moderators that influence the relationship between antecedents and justice perception. Additionally, we posit that workers’ justice perceptions have a direct effect on requester turnover.

3. Conceptual Work—Antecedents of Crowd-Based Workers’ Organizational Justice Perception

3.1. Compensation Policy and Distributive Justice

Compensation policy, also known as pay policy, refers to organizational policies designed to provide compensation that is commensurate with workers’ jobs. As noted earlier, distributive justice reflects perceived fairness related to outcome and resource allocation (Colquitt 2001; Cropanzano and Rupp 2003). In the crowdwork context, distributive justice is important to workers because this type of justice reflects outcome-based gains (Cropanzano et al. 2001b), which are largely related to acquiring foreseeable and tangible benefits, such as concrete economic and quasi-economic gains (Cropanzano et al. 2001a; Thibaut and Walker 1978), to satisfy one’s self-needs (Gond et al. 2017). Therefore, seeing distributive justice through a crowdwork lens, what can be inferred is that one way to promote workers’ distributive justice is to have a compensation policy in place that ensures commensurate compensation reflecting workers’ inputs.
Furthermore, Equity Theory (Adams 1963) provides a useful theoretical framework to demonstrate the importance of compensation policy. According to Equity Theory (Adams 1963), working individuals seek to maintain equitable transactions between the input that they invest in the work and the outcome they receive from it (i.e., intrapersonal equity), and between the treatment they receive and that received by equity referents (i.e., interpersonal equity). This is because commensurate and proportionate compensation reflects the fundamental basis of employment relations (Opsahl and Dunnette 1966). By comparing inputs and outcomes, a worker can determine whether he/she is under-compensated or equitably compensated, leading to evaluations about justice and fairness (Adams and Freedman 1976). An equitable compensation policy can ensure commensurate compensation and minimize the occurrence of under-compensation. Therefore, we propose that an equitable compensation policy is positively associated with workers’ distributive justice perceptions.
Proposition 1.
Requesters’ equitable compensation policy is positively related to workers’ distributive justice perceptions toward requesters.
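The equity comparison behind Proposition 1 can be rendered schematically as a ratio test; the notation below is ours, added for illustration of Adams’s (1963) logic. A worker compares the ratio of outcomes O_w (e.g., compensation) to inputs I_w (e.g., time, energy, effort) against the same ratio for an equity referent, with under-compensation perceived when the worker’s ratio falls short:

```latex
\frac{O_w}{I_w} = \frac{O_r}{I_r} \quad \text{(equity)}, \qquad
\frac{O_w}{I_w} < \frac{O_r}{I_r} \quad \text{(under-compensation)}, \qquad
\frac{O_w}{I_w} > \frac{O_r}{I_r} \quad \text{(over-compensation)}
```

The intrapersonal form of the comparison works the same way, with the referent ratio replaced by the worker’s own prior input-outcome ratio.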

3.2. Compensation Policy and Motivation

Similar to the traditional working context, motivation is an important factor that affects work outcomes in crowdwork as well (Smith et al. 2013). Motivation refers to the dynamic personal energy by which an action is performed, and reflects individuals’ willingness to perform tasks (Cummings and Schwab 1973; Rothschild 1999; Siemsen et al. 2007) as a result of combining psychological processes that target the wanting and attempting to execute certain behaviors (Mitchell 1997). Similarly, Campbell and Pritchard (1976) defined motivation as “a set of independent/dependent variable relationships that explain the direction, amplitude, and persistence of an individual’s behavior, and holding constant the effects of aptitude, skill, and understanding of the task, and the constraints operating in the situation.” (p. 65).
Two main types of motivation described in the literature are extrinsic and intrinsic motivation (Hossain 2012). On the one hand, extrinsic motivation is the motivation to work for an outcome that is apart from and external to the work itself, such as reward or recognition from other people (Deci 1975). On the other hand, intrinsic motivation is defined as the drive to engage in work for its own sake because the work itself is interesting, satisfying, or enjoyable (Deci 1975; Smith et al. 2013). The extrinsic-intrinsic motivational orientation is helpful when researchers seek to explain how a specific motivational orientation influences the completion of a particular task.
Recent studies (e.g., Wexler 2011) have discussed Self-Determination Theory (SDT; Deci and Ryan 2000) within the crowdwork context. Broadly, SDT posits that, instead of a unitary or bipolar construct, there is a motivation continuum with external and internal motivation at either end, whereby someone can move between being externally motivated (e.g., motivated by financial rewards) and internally motivated (e.g., motivated by mastery of a skill or by achievement) (Deci and Ryan 2000). This movement depends on the extent to which people establish a sense of emotional involvement, that is, the degree of being psychologically involved in the process of reaching desired goals (Allen and Meyer 1996), an adequate level of work engagement (Schaufeli et al. 2002), and satisfaction of the needs for competence, autonomy, and relatedness (Deci and Ryan 2000).
Seeing crowd-based work through an SDT lens, workers are likely motivated differently (Alam and Campbell 2017; Zhao and Zhu 2012), based on where they land on the external-internal continuum. When workers are relatively extrinsically motivated (e.g., motivated by external factors), their focus will lean toward instrumental outcomes, which include outcomes extrinsic to requested tasks, such as economic gains (Deci and Ryan 2000; Gassenheimer et al. 2013; Smith et al. 2013) and quasi-economic gains (Cropanzano et al. 2001a). On the other hand, when workers are relatively intrinsically motivated (e.g., motivated by satisfying the need for competence, autonomy, or relatedness), their focus will lean toward the inherently interesting characteristics of requested tasks, such as work-related enjoyment and satisfaction, rather than the need for external reinforcement (e.g., financial compensation) to maintain their work (Smith et al. 2013).
Similarly, Brawley (2017) indicated that workers who participate in crowdwork consider themselves to be paid workers and are motivated by financial interests, such that higher payment encourages them to put more effort into their work. Taken together, given that distributive justice perceptions are directly linked to instrumental outcomes, we suggest that workers who are more extrinsically motivated will have a stronger response to distributive justice violations stemming from compensation policy issues than workers who are more intrinsically motivated. In this way, we propose that in the crowdwork context, the extent to which an equitable compensation policy predicts workers’ distributive justice perceptions depends on workers’ work-related motivation.
Proposition 2.
The positive relationship between requesters’ equitable compensation policy and workers’ distributive justice perceptions is moderated by workers’ motivation, such that the relationship will be stronger when workers are extrinsically motivated and will be weaker when workers are intrinsically motivated.

3.3. Performance Evaluation Methods and Procedural Justice

Performance evaluation refers to how certain work outcomes are evaluated. Performance evaluation is closely related to procedural justice, which refers to the extent to which the procedures used for determining resource/reward allocation within an organization result in consistent evaluation approaches (Barrett-Howard and Tyler 1986). Procedural justice is important to working individuals because it signals that decision-makers make compensation-related decisions based on a process that embodies transparency and consistency (Tyler and Bies 1990; Tyler and Blader 2000).
Establishing and maintaining involvement with the individuals being evaluated is an important factor that contributes to performance evaluation fairness. Greenberg (1986) posited that soliciting employees’ input before a performance evaluation is a vital factor in ensuring employees’ perceived fairness of the evaluation process. In the crowdwork context, an increase in workers’ feelings of involvement and inclusion with a requester increases their procedural justice perceptions because it leads workers to believe their voices are respected and heard by requesters. Specifically, a good way to promote workers’ justice perceptions of performance evaluation is to involve them in the establishment and/or revision of performance evaluation practices.
In fact, Greenberg and Folger (1983) indicated that organizational policies that allow employees to increase their influence and control over their work contribute to increased procedural justice perceptions. Therefore, seeing procedural justice from a crowdwork perspective through the lenses of participation and influence, what can be inferred is that promoting workers’ procedural justice requires performance evaluation practices that emphasize workers’ participation, allowing for worker input, feedback, and the opportunity to conduct work revisions. Taken together, we propose that performance evaluation practices that emphasize workers’ participation will contribute to workers’ procedural justice perceptions in the crowdwork context.
Proposition 3.
Requesters’ performance evaluation practices that emphasize workers’ participation and involvement are positively related to workers’ procedural justice perceptions toward requesters.

3.4. Case-Based vs. Rule-Based Performance Evaluation

In the traditional working context, workers can be evaluated in multiple ways (Campbell and Wiernik 2015). However, when it comes to the internet-mediated crowdwork context, the availability of evaluation methods is severely restricted (Aguinis and Lawal 2013). Instead, two types of evaluation systems have been broadly utilized in the computer-based work context: rule-based reasoning (RBR) and case-based reasoning (CBR) (Dutta and Bonissone 1993).
RBR and CBR have been broadly used as part of computerized systems in different industries, such as auditing (e.g., Lee et al. 2008), healthcare (e.g., Marling et al. 1999; Rossille et al. 2005), and business (e.g., Golding and Rosenbloom 1996). As indicated by Chi and Kiang (1993), an RBR system takes a deductive reasoning approach, such that the system is represented by an objective, universally accepted knowledge base (i.e., a series of rules) that defines logical relations among concepts of the problem domain; the system evaluates work by searching, screening, and matching appropriate knowledge-based rules to the work that needs to be evaluated. A CBR system, on the other hand, takes an inductive reasoning approach, such that the system retrieves previous cases from a case library (i.e., a set of previous cases), subjectively matches the attributes and essential features of the work that needs to be evaluated with those of similar previous cases, which include episodic knowledge, memory organization, and learning (Slade 1991), and adapts the solutions from previous cases to the new cases (Dutta and Bonissone 1993; Watson and Marir 1994).
In the crowdwork context, requested tasks can be evaluated by either rule-based or case-based evaluation (Prentzas and Hatzilygeroudis 2009). RBR evaluations are used for tasks such as proofreading, transcribing, computation, and coding/programming, as evaluation of these tasks is based on a well-defined, universally accepted knowledge base (e.g., spelling, grammar, mathematics, etc.). On the other hand, CBR evaluations are used for tasks such as logo design and writing, as these tasks are evaluated based on previous cases (e.g., similar tasks that were evaluated previously), such that requesters retrieve their memories of evaluating previous cases and apply them to current tasks. A simplified sketch of the two approaches follows.
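To illustrate the contrast, the following sketch shows, in deliberately simplified form, how an RBR evaluation applies a fixed, objective rule while a CBR evaluation retrieves and adapts the most similar previous case. The rules, cases, and similarity measure are hypothetical examples of ours, not any platform’s actual evaluation logic.

```python
# Simplified contrast between rule-based (RBR) and case-based (CBR)
# evaluation. All rules, cases, and thresholds are illustrative only.

def rbr_evaluate(transcript: str, reference: str) -> bool:
    """RBR: apply a universal, objective rule. The illustrative rule here
    is 'at most two character mismatches versus a reference transcript'."""
    mismatches = sum(a != b for a, b in zip(transcript, reference))
    mismatches += abs(len(transcript) - len(reference))
    return mismatches <= 2

def cbr_evaluate(new_task: dict, case_library: list[dict]) -> bool:
    """CBR: retrieve the most similar past case and adapt its outcome, so
    the judgment depends on the relevance (and, in practice, the recency
    and saliency) of prior cases rather than on a fixed rule."""
    def similarity(case: dict) -> int:
        # Count shared features between the new task and a stored case.
        return len(new_task["features"] & case["features"])
    best_match = max(case_library, key=similarity)
    return best_match["accepted"]

past_cases = [
    {"features": {"minimal", "blue", "sans-serif"}, "accepted": True},
    {"features": {"ornate", "red", "serif"}, "accepted": False},
]
print(rbr_evaluate("the quikc brown fox", "the quick brown fox"))
# True (two mismatches, within the rule's tolerance)
print(cbr_evaluate({"features": {"minimal", "blue"}}, past_cases))
# True (the most similar past case was accepted)
```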
Therefore, unlike an RBR evaluation that uses universally accepted knowledge and objective rules, a CBR evaluation relies on a requester’s subjective rules and accumulated experiences. This makes the accuracy and reliability of the evaluation contingent on the recency, relevancy, and saliency of similar previous cases. Given that participating in the establishment/revision of performance evaluation helps workers learn more about requesters’ needs and expectations (making it more likely for workers to yield work outcomes that align with requester expectations), participative performance evaluation is likely to have a stronger influence on procedural justice perceptions for CBR evaluations. This is because CBR evaluations are malleable and generally more ambiguous compared to RBR evaluations. Therefore, by participating in the evaluation, workers can gain additional clarity as to how and why they receive a particular performance outcome.
Proposition 4.
The positive relationship between performance evaluations and workers’ procedural justice perceptions is moderated by the type of evaluation policy, such that CBR evaluations will strengthen the relationship, while RBR evaluations will weaken the relationship.

3.5. Considerate Communication and Interactional Justice

Interactional justice refers to the perception of the degree to which certain decisions or outcomes are adequately explained to target individuals with respect and propriety (Bies and Moag 1986; Sitkin and Bies 1993), and it is considered to be a combination of interpersonal justice (Bies and Moag 1986) and informational justice (Shapiro et al. 1994). Moral motive contributes to interactional justice (Cropanzano et al. 2001a). Moral motive originates from the idea of egalitarianism, which posits that people tend to consider both self-interests and others’ interests simultaneously by engaging in egalitarian behaviors and distributing wealth fairly (Rawls 2005). Based upon egalitarianism, moral motive emphasizes workers’ expectations for being treated morally by employers, manifested in employers’ consideration of workers’ interests and maintaining a good moral standing (Cropanzano and Rupp 2003).
In the crowdwork context, considerate communication between requester and worker can attend to workers’ concerns and promote their perceptions of interactional justice. When a requester explains compensation outcomes, and how compensation is determined, with consideration for workers’ circumstances and concerns, it allows workers to utilize social information processing to formulate positive perceptions of their relationship with the requester (Thomas and Griffin 1989). Specifically, workers are likely to view requesters who take this approach to communication as considerate, understanding, and willing to address workers’ concerns. Furthermore, this type of social information processing facilitates workers’ rationalization of requesters’ explanatory behaviors, helping workers perceive stronger interactional justice.
Proposition 5.
Considerate and moral communication between requesters and workers is positively related to workers’ interactional justice perceptions toward requesters.

3.6. Communication Quality

Communication quality influences the relationship between considerate communication and workers’ interactional justice perceptions because it determines how effectively requesters can transmit and explain compensation-related information. We suggest that two factors contribute to communication quality between requester and worker: (1) the media richness of the communication, based on Media Richness Theory (Daft and Lengel 1986), and (2) communication interactivity, based on the idea of presence (Steuer 1992).
According to Media Richness Theory (Daft and Lengel 1986), different communication methods can be placed on a continuum (i.e., from lower to higher richness) based on their ability to adequately convey a message, such that a higher level of media richness makes it easier for requesters to make evaluations and to determine and communicate outcomes clearly and effectively. For instance, video calls have a relatively high level of adequacy for conveying information, whereas a bulletin or handbook has a relatively low level of adequacy due to limited multi-media representation (Herr et al. 1991). Recent studies have indicated that media richness can positively influence an organization’s effectiveness in virtual workplaces. For instance, Hambley et al. (2007) showed that communication with higher media richness is better able than communication with lower media richness to facilitate performance in virtual work teams. In the crowdwork context, requesters can establish communication channels that transmit considerate messages with an adequate level of information richness so that respect and propriety are clearly conveyed to workers. For example, most platforms we identified in Table 2 have established multi-media communication channels (i.e., channels with relatively high richness) between requesters and workers so that both parties can discuss work and compensation with visual support.
Meanwhile, interactivity refers to the extent to which real-time, person-to-person communication can be established between message senders (e.g., requesters) and receivers (e.g., workers). A higher level of interactivity increases the receiver’s perception of telepresence, a specific type of presence that reflects the extent to which a person is perceived to be present (i.e., “being there”) in a technology-mediated context (Sheridan 1992; Steuer 1992) and an important indicator of effective interaction in technology-mediated communication (Steuer 1992). Interactivity can influence how workers perceive justice-related messages from requesters because it affects both the efficacy of the information transmitted from requesters to workers and workers’ perceptions of requesters’ telepresence: interactivity promotes the requester’s telepresence, which enables the worker to perceive the requester as “being there” and ready to help (Steuer 1992). For example, high interactivity occurs when real-time communication between a requester and a worker can be established, such as through instant messaging. In contrast, low interactivity occurs when the requester and the worker can only communicate through a broker or intermediary, such as a platform representative. Given that interactivity depends on real-time interaction and helps requesters provide considerate and moral communication promptly, it can play a significant role in requester–worker communication, as it determines how closely computer-mediated communication mirrors requesters’ consideration.
Proposition 6.
The positive relationship between considerate and moral communication and workers’ interactional justice perceptions is moderated by communication quality (including media richness and interactivity), such that the relationship will be stronger when communication quality is higher and weaker when communication quality is lower.
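To illustrate the proposed moderator, the sketch below (purely illustrative; the channel scores and the equal weighting are hypothetical assumptions, not values drawn from Media Richness Theory or the presence literature) treats communication quality as a joint function of the two factors identified in Section 3.6.

```python
# Illustrative sketch only: communication quality as a joint function of
# media richness and interactivity. All scores below are hypothetical.
CHANNEL_PROFILES = {
    # channel: (media_richness, interactivity), each on a 0-1 scale
    "handbook or bulletin": (0.2, 0.0),               # static text, no real-time exchange
    "relay via platform representative": (0.4, 0.2),  # mediated and delayed
    "instant messaging": (0.5, 0.9),                  # lean text, but real-time
    "video call": (0.9, 1.0),                         # multi-cue and real-time
}

def communication_quality(channel: str) -> float:
    """Average the two factors; the equal weighting is an assumption."""
    richness, interactivity = CHANNEL_PROFILES[channel]
    return (richness + interactivity) / 2

# Per Proposition 6, the justice effect of considerate communication should
# strengthen as this score rises (e.g., from handbook to video call).
for channel in CHANNEL_PROFILES:
    print(f"{channel}: {communication_quality(channel):.2f}")
```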

4. Conceptual Work—Outcomes of Organizational Justice Issues

4.1. Turnover

Turnover is a typical withdrawal behavior, and previous studies have suggested that organizational justice perceptions play an important role in predicting turnover intention. For instance, Colquitt et al. (2001) reviewed the extant organizational justice literature and posited that organizational justice issues significantly contribute to workers’ turnover intention. Previous studies have also suggested a negative relationship between fairness perceptions and turnover in the traditional workplace context (e.g., Cropanzano et al. 2003; Jones and Skarlicki 2003; Tekleab et al. 2005). Building on Equity Theory (Adams 1963), Brashear et al. (2005) indicated that justice issues jeopardize the equity between workers’ inputs and returns from work and consequently contribute to intentions to engage in withdrawal behaviors. More recently, Fortin and colleagues (Fortin et al. 2014) conducted a comprehensive review of organizational justice and posited that organizational justice is a critical determinant of employees’ attitudes and behaviors in the workplace, with turnover being one consequence of injustice.
Extending this to the crowdwork context, there are even fewer barriers preventing worker turnover, due to the absence of formal employment contracts between requesters and workers and the ease with which workers can sign up for and sign off from crowdsourcing jobs (Brawley and Pury 2016). Ma et al. (2016) suggested that workers’ perceptions of fairness are negatively related to turnover despite differences in working processes and context. Previous studies have begun to explicate turnover in the crowd-based context; for instance, Brawley and Pury (2016) defined crowd-based workers’ turnover as refusing to accept new jobs from a particular requester (i.e., requester turnover). Building on previous studies that provide evidence of the association between organizational justice and turnover, in a crowdwork context, justice perceptions in three aspects (i.e., distributive, procedural, and interactional justice) can influence requester turnover. This idea is also supported by target-similarity effects (Cropanzano et al. 2001a; Skarlicki et al. 2016), which posit that workers usually direct their reactions towards the “source” of the antecedents. In the crowdwork environment, when the source of an organizational justice issue is the requester (i.e., the requester’s compensation policy, the requester’s performance evaluation practices, and the considerateness of communication between requester and worker), injustice perceptions toward the requester should lead to requester turnover.
Proposition 7.
Workers’ justice perceptions toward requesters regarding (a) distributive justice, (b) procedural justice, and (c) interactional justice will be negatively related to workers’ requester turnover.

4.2. Job Mobility

Job mobility is defined as a worker’s perception of available alternative job opportunities and has been shown to have a significant influence on turnover (Wheeler et al. 2007). In the crowdwork context, workers’ perceived requester-related job mobility can likewise influence requester turnover because it reflects the extent to which workers feel free to move from one requester to another. When job mobility is high, workers believe there are ample job opportunities from other requesters, so they can easily find alternative work when they perceive injustice from the requester(s) they currently work for. In comparison, when job mobility is low, workers believe job opportunities from other requesters are limited, so it is difficult to find alternative work when they perceive injustice. Therefore, we propose that workers’ perceived job mobility moderates the relationship between justice perceptions and turnover.
Proposition 8.
The negative relationship between workers’ justice perceptions and requester turnover is moderated by workers’ perceived crowd-based job mobility, such that the relationships between workers’ (a) distributive justice perceptions, (b) procedural justice perceptions, and (c) interactional justice perceptions and workers’ requester turnover will be stronger when perceived crowd-based job mobility is higher.

4.3. Escalation of Crowd-Based Turnover

As indicated earlier, in the crowdwork context, the unique “triadic” relationship among workers, requesters, and platforms (Fieseler et al. 2019) implies that workers’ turnover can move from requester turnover to platform turnover. The co-existence of requester turnover and platform turnover is supported by recent studies: Brawley and Pury (2016) defined crowd-based workers’ turnover as refusing to accept new jobs from a particular requester (i.e., requester turnover), whereas Ma et al. (2016) defined it as discontinuing work on a particular platform (i.e., platform turnover).
From the perspective of transaction costs (Dahlman 1979), crowd-based workers’ turnover is not without corresponding costs. When workers turn over, transaction costs are incurred in seeking and identifying prospective requesters (i.e., information search costs; Dahlman 1979), as well as in learning about and negotiating with prospective requesters (i.e., bargaining costs; Dahlman 1979). Moreover, transaction costs rise as workers move from requester turnover to platform turnover, as both information search costs and bargaining costs increase when moving from one platform to another. For instance, low transaction costs are incurred when workers engage in requester turnover by switching from one requester to another within the same platform, as this requires few changes in task norms, working processes, and compensation processes; higher costs are incurred when workers switch from one platform to another, as this requires workers to search for new requesters, bear the costs of starting over with a new platform’s registration and verification process, and learn how to work in a new crowdwork system.
Therefore, viewing crowd-based turnover through the lens of transaction costs, requester turnover incurs relatively low costs to workers, whereas platform turnover incurs higher costs. When workers perceive injustice in crowdwork, they will address the issue rationally by pursuing the option that incurs the lower cost (i.e., requester turnover), consistent with the Economic Man Principle (Camerer and Fehr 2006). This suggests that workers will usually engage in requester turnover first, as it incurs relatively low transaction costs compared with platform turnover. Beyond that point, requester turnover will escalate to platform turnover if switching to other requesters within the same platform fails to address the injustice adequately.
Proposition 9.
Workers’ requester turnover can escalate to platform turnover, such that if workers perceive that justice issues still exist after requester turnover, requester turnover will escalate to platform turnover.
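The escalation logic of Proposition 9 can be expressed as a simple decision rule. The sketch below (purely illustrative; the numeric cost values are hypothetical placeholders) encodes the low-to-high transaction-cost ordering argued above.

```python
from dataclasses import dataclass

# Illustrative sketch only: the low-to-high cost escalation implied by the
# transaction-cost argument. All numeric values are hypothetical placeholders.

@dataclass
class TransactionCosts:
    information_search: float  # cost of finding prospective requesters/platforms
    bargaining: float          # cost of learning norms and negotiating terms

    @property
    def total(self) -> float:
        return self.information_search + self.bargaining

# Assumption: switching requesters within a platform is cheaper than switching
# platforms (new registration, verification, and a new crowdwork system).
REQUESTER_TURNOVER = TransactionCosts(information_search=1.0, bargaining=1.0)
PLATFORM_TURNOVER = TransactionCosts(information_search=3.0, bargaining=4.0)

def respond_to_injustice(injustice_persists_after_requester_switch: bool) -> str:
    """Pursue the lower-cost exit first; escalate only if injustice persists."""
    assert REQUESTER_TURNOVER.total < PLATFORM_TURNOVER.total
    if not injustice_persists_after_requester_switch:
        return "requester turnover"  # the lower-cost exit resolved the issue
    return "platform turnover"       # escalation despite higher costs

print(respond_to_injustice(False))  # requester turnover
print(respond_to_injustice(True))   # platform turnover
```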
All propositions have been included in the proposed conceptual model illustrated in Figure 1.

5. Discussion

5.1. General Discussion

Human resource management scholars have suggested that there is an ongoing “war for talent” (Chambers et al. 1998) and that organizations face more challenges than ever before regarding how to capitalize on external human resources. The war for talent implies that it is not just about finding new ways to attract and acquire human resources; it also involves being mindful of traditional concerns such as motivating and retaining the workforce after acquisition (Chambers et al. 1998). This is because human resources have a higher level of mobility than other types of resources (Boxall 1998), and this mobility allows them to move from one organization to another. This review details the nature of a new human resource acquisition technique—crowdsourcing—and discusses the implications of this technique for acquiring and retaining talent from the crowd.
From a justice perspective, and based on previous studies of organizational justice in the crowdwork context, we proposed that all three components of organizational justice should be taken into consideration when seeking to understand workers’ justice perceptions. Specifically, we proposed a theoretical model that details the roles of policies, practices, justice, and turnover within the crowd-based work context, such that workers’ distributive, procedural, and interactional justice perceptions toward requesters are influenced by requesters’ equitable compensation policies, participative performance evaluation, and considerate communication.
Moreover, moderators such as workers’ motivation, case- versus rule-based evaluation, and communication quality are proposed to influence the relationships between the antecedents (i.e., equitable compensation policy, participative performance evaluation, and considerate communication) and the three types of justice perceptions toward requesters, while workers’ job mobility can influence the relationship between workers’ justice perceptions and requester turnover.
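As a compact textual companion to Figure 1, the following sketch (purely illustrative, with labels taken from the propositions above) lists the model’s proposed paths and moderators as plain data.

```python
# Illustrative summary of the proposed conceptual model's paths and moderators.
ANTECEDENT_PATHS = [
    # (antecedent, justice perception influenced, proposed moderator)
    ("equitable compensation policy", "distributive justice", "worker motivation"),
    ("participative performance evaluation", "procedural justice", "CBR vs. RBR evaluation"),
    ("considerate and moral communication", "interactional justice", "communication quality"),
]

OUTCOME_PATHS = [
    # (predictor, outcome, proposed moderator)
    ("distributive/procedural/interactional justice perceptions",
     "requester turnover", "perceived job mobility"),
    ("requester turnover with persisting injustice",
     "platform turnover", None),
]
```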
Lastly, from a transaction costs perspective, we further suggested that there is an escalation effect across the two types of turnover (requester and platform turnover) in the crowdwork context, such that turnover starts with requester turnover and can then progress to platform turnover. Below, we detail how our review offers contributions, theoretical implications, practical implications, and numerous directions for future research.

5.2. Contributions

The proposed model presents an avenue for understanding the mechanisms that shape crowd-based workers’ experiences and perceptions when engaging in crowd-based work. This was done by reviewing extant crowd-based labor platforms and integrating the organizational justice and turnover literatures. By introducing relationships and concepts that uniquely exist in the crowdwork context, our proposed model advances our understanding of crowdwork by explicating the links between workers’ justice perceptions and turnover behaviors.
Specifically, by examining crowd-based labor platforms from both a human resource management perspective and a justice perspective, we extend the boundaries of organizational justice research to the internet-based workforce in the virtual domain. Our review suggests that policies regarding worker compensation, performance evaluation, and communication quality can influence crowdworkers’ distributive, procedural, and interactional justice perceptions—the “three roads to justice” (Cropanzano et al. 2001a).
Second, by viewing business crowdwork through a motivational lens, our review suggests that individuals participating in crowdwork are motivated differently, such that workers who participate in contest-based crowdwork, such as Type B crowdwork in Howcroft and Bergvall-Kåreborn’s (2019) categorization, are more likely to be intrinsically motivated, as financial compensation is not their main target; whereas workers who participate in marketplace-based crowdwork, such as Type A, C, and D crowdwork in the same categorization, are more likely to be extrinsically motivated. Therefore, workers who engage in compensation-oriented (i.e., marketplace-based) crowdwork are more likely to be influenced by distributive justice-related issues. Recent studies have suggested that there is a need to integrate motivation frameworks into crowdsourcing research (e.g., Buettner 2015; Kaufmann et al. 2011). This review responds to that call and contributes to the crowdsourcing literature by integrating Self-Determination Theory (Deci and Ryan 1985, 2000; Ryan and Deci 2000) into crowdwork.
Third, viewing crowdwork from an evaluation perspective, tasks can be evaluated through either rule-based evaluation, which uses universally accepted knowledge and objective rules, or case-based evaluation, which relies on the requester’s subjective rules and accumulated experience. Comparing these two types of evaluation, we suggest that participative performance evaluations, which invite workers to be part of the evaluation process, are likely to have a stronger influence on procedural justice perceptions for case-based evaluations, as these are generally more subject to requesters’ discretionary judgment than to universal rules, making workers’ participation more important for ensuring the fairness and consistency of performance evaluations.
Furthermore, concerning requester–worker communication quality, interactivity can influence how workers perceive justice-related messages from requesters: it influences the efficacy of the information transmitted from requesters to workers and thus workers’ perceptions of presence, which enables workers to perceive requesters as “being there” and ready to help (Steuer 1992) and helps requesters provide considerate and moral communication promptly, thereby strengthening the relationship between considerate communication and workers’ interactional justice perceptions toward the requester.
Finally, from the standpoint of job mobility, we suggest that crowdworkers’ perceived job mobility can influence their requester turnover, such that the relationships between organizational justice perceptions and requester turnover are moderated by workers’ perceived job mobility, which reflects the extent to which workers feel free to move from one requester to another.

5.3. Theoretical Implications

Previous studies discussed crowdwork turnover in general terms, viewing it as either requester turnover or platform turnover (e.g., Ma et al. 2016; Brawley and Pury 2016). Our conceptual model indicates that, because of the triadic relationship among worker, requester, and platform (Fieseler et al. 2019), it is imperative for crowdwork research to take both types of turnover into account.
Furthermore, building upon the consideration of both types of turnover in the crowdwork context (i.e., requester turnover and platform turnover), our model is the first to suggest an escalation effect of worker turnover, moving from requester to platform. This effect is based on the concept of transaction costs, such that turnover follows a low-to-high cost pattern in which requester turnover escalates to platform turnover as workers respond to injustice perceptions. We suggest that it is important for requesters and platforms to resolve turnover-related issues as early as possible (i.e., before they escalate to the platform level), especially when crowd-based labor demand exceeds supply or when turnover incurs high costs.
Additionally, while some researchers remain hesitant about utilizing the crowd (e.g., Harms and DeSimone 2015; Keith and Harms 2016), increased task quality from crowd-based workers (e.g., Amazon’s Mturkers) may make it possible for researchers in HRM and other social science disciplines to conduct empirical research using crowd-based samples (Landers and Behrend 2015). We suggest that researchers consider how the participating workforce is likely to perceive requested tasks with respect to the various forms of justice and ensure that appropriate policies, practices, and transparency are in place to prevent unnecessary requester turnover.

5.4. Practical Implications

Our conceptual model suggests that when organizations consider tapping into the human resources within the crowd, they should be aware of the central role that justice perceptions play in workers’ turnover. Because workers in the crowdwork context perform tasks within virtual workplaces, unfair treatment can impact them more severely than those working in traditional settings (Vander Elst et al. 2013). Therefore, instead of viewing workers as interchangeable mechanical components within working systems or as marginalized parties, requesters should view and treat them similarly to workers in a traditional work context (Deng et al. 2016). Furthermore, workers’ turnover is primarily driven by issues regarding whether the amount of compensation commensurately reflects their perceived level of contribution, how the compensation amount is determined, and whether adequate explanations are provided to support compensation decisions (e.g., Keith et al. 2017). Therefore, we recommend that requesters ensure their crowdwork policies reflect all aspects of organizational justice.
Second, given the steady increase in people pursuing crowdwork and gig work, more attention should be paid to understanding how best to manage workers and maintain a stable workforce (Colbert et al. 2016). To this end, we suggest that actions be taken to establish quality communication channels between requesters and workers to resolve organizational justice issues by providing clear and adequate explanations of crowd-based compensation. This will further promote workers’ job security and psychological contract with the requester and decrease turnover at the early stage (i.e., the requester turnover stage), as turnover is likely to escalate to platform turnover if justice issues are not resolved properly.
Third, while attracting and retaining workers is less of an issue in the crowdwork context than in traditional organizations, crowd-based labor platforms still depend on registered users to maintain business and operations (e.g., by charging commissions or subscription fees). Therefore, workers’ turnover can negatively impact platforms’ revenues by decreasing their ability to offer requesters a reliable workforce. Moreover, since turnover in the crowdwork context includes both requester turnover and platform turnover, it can create losses for both requesters and platforms. For requesters, turnover decreases the supply of potential crowdsourcing workers; this is particularly true for requesters working on platforms that have active third-party websites and discussion boards (e.g., turkopticon.ucsd.edu). If a particular requester receives unfavorable reviews from workers on such a forum, this can create a negative image and further prevent other workers from taking tasks from that requester. For platforms, turnover reduces the platform’s main source of revenue, which comes from charging commissions on the payments made by requesters to workers. When a large number of workers leave, a platform becomes less attractive to potential requesters, which can ultimately lead to platform closure. Therefore, while crowdworkers’ job mobility is higher than that of their counterparts in traditional organizations (Brawley and Pury 2016), attracting and retaining crowdworkers who actively participate in crowdsourcing jobs remains important for the survival of platforms (Boons et al. 2015). Platforms need a stable supply of workers to maintain their attractiveness and competitiveness, especially in today’s crowd-based labor market, with its ever-increasing number of platforms.
Fourth, although the crowd-based workforce consists of a large number of working individuals accessed through open calls, the actual number of active workers is quite limited, implying that reputation is a major issue for requesters and platforms. For example, even though there are over 100,000 unique registered workers on Amazon’s Mturk around the world, the population of workers who actively complete tasks is estimated to be fewer than 7500 (Stewart et al. 2015). Reputation issues easily escalate because workers’ reviews and ratings of requesters and platforms can spread on a large scale. For example, Amazon’s Mturkers share comments about requesters on Turkopticon, where comments become instantly available to all Mturkers once posted, implying that once a bad reputation is widely known within a group of workers, it will be difficult for a requester or platform to recruit new workers.
Finally, as an effective way to minimize justice issues in the crowdwork context, requesters should provide equitable and commensurate compensation to workers and adhere to minimum wage levels whenever possible (Silberman et al. 2018). Meanwhile, it is also important for platforms to take responsibility for maintaining an accommodating virtual workplace for workers. In fact, escrow accounts, pre-screening processes, built-in quality control mechanisms, and dispute investigation are examples of platforms’ attempts to move beyond simply hosting requesters’ task postings.

5.5. Future Research

Future research should continue to emphasize not only the importance of distributive justice in the crowdwork context but all aspects of organizational justice. By identifying the unique effects of all three aspects of justice, future research can guide both platforms and requesters in implementing appropriate policies, improving existing crowdwork processes, and addressing justice issues across the three aspects (i.e., distributive, procedural, and interactional justice).
Another potentially fruitful avenue for future research is to explore the possibility of integrating more diverse HRM functions into the crowdwork context, such as the recruitment and selection of workers, as recruitment and selection practices contribute to perceived organizational justice in the traditional work context and likely have a similar influence on justice perceptions in the crowd-based work context (Gilliland 1993).
Finally, future research should examine whether workers from different countries perceive organizational justice in the same way. As more crowd-based labor platforms start recruiting workers from around the world, workers from different countries could have distinct attitudes towards compensation policies and performance evaluation due to different cultural backgrounds and different national economic statuses (Litman et al. 2014).

6. Conclusions

Crowdwork brings both opportunities and challenges for organizations that attempt to tap into the human resources found in the crowd. Compared to traditional employees, crowd-based workers may be even more susceptible to justice issues: the nature of crowd-based work processes, such as single-sourced, outcome-based performance evaluations and limited communication, increases workers’ susceptibility to justice issues surrounding compensation, performance evaluation, and communication. More specifically, we suggest that marketplace-based workers are more concerned than contest-based workers about distributive justice, as these workers are more motivated by factors external to their work, such as financial compensation. Furthermore, we identify and propose two types of turnover, requester turnover and platform turnover, and suggest that both requesters and platforms should take workers’ turnover seriously, as it could escalate quickly if justice issues are not properly resolved. Taken together, by providing a conceptual model, we offer a novel understanding of crowd-based workers’ justice perceptions, the factors that influence those perceptions, and the behavioral outcomes of perceived injustice.

Author Contributions

All three authors contributed to the following activities: conceptualization, methodology, investigation, resources, original draft preparation, review and editing, and visualization. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Conflicts of Interest

The authors report no conflict of interest.

References

  1. Acosta, Maribel, Amrapali Zaveri, Elena Simperl, Dimitris Kontokostas, Sören Auer, and Jens Lehmann. 2013. Crowdsourcing Linked Data Quality Assessment. In The Semantic Web. Edited by Harith Alani, Lalana Kagal, Achille Fokoue, Paul Groth, Chris Biemann, Josiane X. Parreira, Lora Aroyo, Natasha Noy, Chris Welty and Krzysztof Janowicz. Berlin/Heidelberg: Springer, pp. 260–76. [Google Scholar] [CrossRef] [Green Version]
  2. Adams, Stacy. 1963. Towards an understanding of inequity. The Journal of Abnormal and Social Psychology 67: 422–36. [Google Scholar] [CrossRef]
  3. Adams, Stacy, and Sara Freedman. 1976. Equity theory revisited: Comments and annotated bibliography. Advances in Experimental Social Psychology 9: 43–90. [Google Scholar] [CrossRef]
  4. Aguinis, Herman. 2009. Performance Management. Upper Saddle River: Pearson Prentice Hall. [Google Scholar]
  5. Aguinis, Herman, and Sola O. Lawal. 2013. Elancing: A review and research agenda for bridging the science–practice gap. Human Resource Management Review 23: 6–17. [Google Scholar] [CrossRef]
  6. Aguinis, Herman, Harry Joo, and Ryan K. Gottfredson. 2011. Why we hate performance management—And why we should love it. Business Horizons 54: 503–7. [Google Scholar] [CrossRef]
  7. Alam, Sultana L., and John Campbell. 2017. Temporal motivations of volunteers to participate in cultural Crowdsourcing work. Information Systems Research 28: 744–59. [Google Scholar] [CrossRef]
  8. Alberghini, Elena, Livio Cricelli, and Michele Grimaldi. 2013. KM versus enterprise 2.0: A framework to tame the clash. International Journal of Information Technology and Management 12: 320–36. [Google Scholar] [CrossRef]
  9. Allen, Natalie J., and John P. Meyer. 1996. Affective, continuance, and normative commitment to the organization: An examination of construct validity. Journal of Vocational Behavior 49: 252–76. [Google Scholar] [CrossRef]
  10. Askay, David. 2017. A conceptual framework for investigating organizational control and resistance in crowd-based platforms. Paper presented at the 50th Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, January 4–7. [Google Scholar]
  11. Baldwin, Carliss, and Eric Von Hippel. 2010. Modeling a paradigm shift: From producer innovation to user and open collaborative innovation. SSRN Electronic Journal. [Google Scholar] [CrossRef] [Green Version]
  12. Barnes, Sally-Anne, Anne Green, and Maria de Hoyos. 2015. Crowdsourcing and work: Individual factors and circumstances influencing employability. New Technology, Work and Employment 30: 16–31. [Google Scholar] [CrossRef]
  13. Barrett-Howard, Edith, and Tom R. Tyler. 1986. Procedural justice as a criterion in allocation decisions. Journal of Personality and Social Psychology 50: 296–304. [Google Scholar] [CrossRef]
  14. Behrend, Tara S., David J. Sharek, Adam W. Meade, and Eric N. Wiebe. 2011. The viability of crowdsourcing for survey research. Behavior Research Methods 43: 800–13. [Google Scholar] [CrossRef]
  15. Bies, Robert J. 2001. International (in)justice: The sacred and the profane. In Advances in Organization Justice. Edited by Jerald Greenberg and Russell Cropanzano. Stanford: Stanford University Press, pp. 89–118. [Google Scholar]
  16. Bies, Robert J., and Joseph S. Moag. 1986. Interactional communication criteria of fairness. Research in Organizational Behavior 9: 289–319. [Google Scholar]
  17. Blader, Steven L., and Tom R. Tyler. 2003. What constitutes fairness in work settings? A four-component model of procedural justice. Human Resource Management Review 13: 107–26. [Google Scholar] [CrossRef]
  18. Boons, Mark, Daan Stam, and Harry G. Barkema. 2015. Feelings of pride and respect as drivers of ongoing member activity on Crowdsourcing platforms. Journal of Management Studies 52: 717–41. [Google Scholar] [CrossRef]
  19. Boxall, Peter. 1998. Achieving competitive advantage through human resource strategy: Towards a theory of industry dynamics. Human Resource Management Review 8: 265–88. [Google Scholar] [CrossRef]
  20. Brabham, Daren C. 2008. Crowdsourcing as a model for problem solving. Convergence: The International Journal of Research into New Media Technologies 14: 75–90. [Google Scholar] [CrossRef]
  21. Brabham, Daren C. 2013. Crowdsourcing. Cambridge: MIT Press. [Google Scholar]
  22. Brashear, Thomas G., Chris Manolis, and Charles M. Brooks. 2005. The effects of control, trust, and justice on salesperson turnover. Journal of Business Research 58: 241–49. [Google Scholar] [CrossRef]
  23. Brawley, Alice M. 2017. The big, gig picture: We can’t assume the same constructs matter. Industrial and Organizational Psychology 10: 687–96. [Google Scholar] [CrossRef] [Green Version]
  24. Brawley, Alice M., and Cynthia L. Pury. 2016. Work experiences on MTurk: Job satisfaction, turnover, and information sharing. Computers in Human Behavior 54: 531–46. [Google Scholar] [CrossRef]
  25. Breaugh, James A., Leslie A. Greising, James W. Taggart, and Helen Chen. 2003. The relationship of recruiting sources and pre-hire outcomes: Examination of yield ratios and applicant quality. Journal of Applied Social Psychology 33: 2267–87. [Google Scholar] [CrossRef]
  26. Buettner, Ricardo. 2015. A systematic literature review of Crowdsourcing research from a human resource management perspective. Paper presented at the 2015 48th Hawaii International Conference on System Sciences, Kauai, HI, USA, January 5–8. [Google Scholar]
  27. Buhrmester, Michael, Tracy Kwang, and Samuel D. Gosling. 2011. Amazon’s mechanical turk. Perspectives on Psychological Science 6: 3–5. [Google Scholar] [CrossRef] [PubMed]
  28. Byrne, Zinta S., and Russell Cropanzano. 2001. The history of organizational justice: The founders speak. Justice in the Workplace: From Theory to Practice 2: 3–26. [Google Scholar]
  29. Camerer, Colin F., and Ernst Fehr. 2006. When does “Economic man” Dominate social behavior? Science 311: 47–52. [Google Scholar] [CrossRef] [Green Version]
  30. Campbell, John P., and Robert D. Pritchard. 1976. Motivation theory in industrial and organizational psychology. In Handbook of Industrial and Organizational Psychology. Edited by Marvin D. Dunnette. Chicago: Rand McNally College, pp. 63–130. [Google Scholar]
  31. Campbell, John P., and Brenton M. Wiernik. 2015. The modeling and assessment of work performance. Annual Review of Organizational Psychology and Organizational Behavior 2: 47–74. [Google Scholar] [CrossRef] [Green Version]
  32. Chaisiri, Sivadon. 2013. Utilizing human intelligence in a Crowdsourcing marketplace for big data processing. Paper presented at the 2013 International Conference on Parallel and Distributed Systems, Seoul, Korea, December 15–18. [Google Scholar]
  33. Chambers, Elizabeth G., Mark Foulon, Helen Handfield-Jones, Steven M. Hankin, and Edward G. Michaels. 1998. The War for Talent. The McKinsey Quarterly 1: 44–57. [Google Scholar]
  34. Chandler, Jesse, Gabriele Paolacci, and Pam Mueller. 2013. Risks and rewards of Crowdsourcing marketplaces. In Handbook of Human Computation. New York: Springer, pp. 377–92. [Google Scholar] [CrossRef] [Green Version]
  35. Chi, Robert, and Melody Y. Kiang. 1993. Reasoning by coordination: An integration of case-based and rule-based reasoning systems. Knowledge-Based Systems 6: 103–13. [Google Scholar] [CrossRef]
  36. Chiaburu, Dan S., and Sophia V. Marinova. 2006. Employee role enlargement. Leadership and Organization Development Journal 27: 168–82. [Google Scholar] [CrossRef]
  37. Colbert, Amy, Nick Yee, and Gerard George. 2016. The digital workforce and the workplace of the future. Academy of Management Journal 59: 731–39. [Google Scholar] [CrossRef] [Green Version]
  38. Colquitt, Jason A. 2001. On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology 86: 386–400. [Google Scholar] [CrossRef] [Green Version]
  39. Colquitt, Jason A., Donald E. Conlon, Michael J. Wesson, Christopher O. Porter, and Yee Ng. 2001. Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology 86: 425–45. [Google Scholar] [CrossRef] [Green Version]
  40. Colquitt, Jason A., and Jessica B. Rodell. 2015. Measuring justice and fairness. In The Oxford Handbook of Justice in the Workplace. Edited by Russell Cropanzano and Maureen L. Ambrose. Oxford: Oxford University Press, pp. 187–202. [Google Scholar] [CrossRef]
  41. Cropanzano, Russell, and Maureen L. Ambrose. 2001. Procedural and distributive justice are more similar than you think: A monistic perspective and a research agenda. In Advances in Organization Justice. Edited by Jerald Greenberg and Russell Cropanzano. Stanford: Stanford University Press, pp. 119–51. [Google Scholar]
  42. Cropanzano, Russell, Zinta S. Byrne, Ramona Bobocel, and Deborah E. Rupp. 2001a. Moral virtues, fairness heuristics, social entities, and other denizens of organizational justice. Journal of Vocational Behavior 58: 164–209. [Google Scholar] [CrossRef]
  43. Cropanzano, Russell, and Deborah E. Rupp. 2003. An overview of organizational justice: Implications for work motivation. In Motivation and Work Behavior. Edited by Lyman W. Porter, Gregory Bigley and Richard M. Steers. New York: McGraw-Hill Irwin, pp. 82–95. [Google Scholar]
  44. Cropanzano, Russell, Deborah E. Rupp, Carolyn J. Mohler, and Marshall Schminke. 2001b. Three roads to organizational justice. In Research in Personnel and Human Resources Management. Edited by Ronald M. Buckley, Jonathon R. B. Halbesleben and Anthony R. Wheeler. Bingley: Emerald Group Publishing Limited, pp. 1–113. [Google Scholar]
  45. Cropanzano, Russell, Deborah E. Rupp, and Zinta S. Byrne. 2003. The relationship of emotional exhaustion to work attitudes, job performance, and organizational citizenship behaviors. Journal of Applied Psychology 88: 160–69. [Google Scholar] [CrossRef] [PubMed]
  46. Cugueró-Escofet, Natàlia, and Josep M. Rosanas. 2013. The just design and use of management control systems as requirements for goal congruence. Management Accounting Research 24: 23–40. [Google Scholar] [CrossRef] [Green Version]
  47. Cullina, Eoin, Kieran Conboy, and Lorraine Morgan. 2015. Measuring the crowd: A preliminary taxonomy of crowdsourcing metrics. Paper presented at the 11th International Symposium on Open Collaboration, San Francisco, CA, USA, August 19–21. [Google Scholar]
  48. Cummings, Larry L., and Donald P. Schwab. 1973. Performance in Organizations: Determinants and Appraisal. Culver City: Good Year Books. [Google Scholar]
  49. Daft, Richard L., and Robert H. Lengel. 1986. Organizational information requirements, media richness and structural design. Management Science 32: 554–71. [Google Scholar] [CrossRef] [Green Version]
  50. Dahlman, Carl J. 1979. The problem of externality. The Journal of Law and Economics 22: 141–62. [Google Scholar] [CrossRef] [Green Version]
  51. Deci, Edward L. 1975. Intrinsic Motivation. New York: Plenum Press. [Google Scholar]
  52. Deci, Edward L., and Richard M. Ryan. 1985. The general causality orientations scale: Self-determination in personality. Journal of Research in Personality 19: 109–34. [Google Scholar]
  53. Deci, Edward L., and Richard M. Ryan. 2000. The “What” and “Why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry 11: 227–68. [Google Scholar] [CrossRef]
  54. Deng, Xuefei, Kshiti D. Joshi, and Robert D. Galliers. 2016. The duality of empowerment and marginalization in Microtask Crowdsourcing: Giving voice to the less powerful through value sensitive design. MIS Quarterly 40: 279–302. [Google Scholar] [CrossRef]
  55. Dissanayake, Indika, Jie Zhang, and Bin Gu. 2015. Task division for team success in Crowdsourcing contests: Resource allocation and alignment effects. Journal of Management Information Systems 32: 8–39. [Google Scholar] [CrossRef]
  56. Duggan, James, Ultan Sherman, Ronan Carbery, and Anthony McDonnell. 2020. Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal 30: 114–32. [Google Scholar] [CrossRef] [Green Version]
  57. Dutta, Soumitra, and Piero P. Bonissone. 1993. Integrating case- and rule-based reasoning. International Journal of Approximate Reasoning 8: 163–203. [Google Scholar] [CrossRef] [Green Version]
  58. Dworkin, Ronald. 1986. Law’s Empire. Cambridge: Harvard University Press. [Google Scholar]
  59. Eickhoff, Carsten, Christopher G. Harris, Arjen P. de Vries, and Padmini Srinivasan. 2012. Quality through flow and immersion: Gamifying crowdsourced relevance assessments. Paper presented at the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, Portland, OR, USA, August 12–16. [Google Scholar]
  60. Estellés-Arolas, Enrique, and Fernando González-Ladrón-de-Guevara. 2012. Towards an integrated crowdsourcing definition. Journal of Information Science 38: 189–200. [Google Scholar] [CrossRef] [Green Version]
  61. Faradani, Siamak, Björn Hartmann, and Panagiotis G. Ipeirotis. 2011. What’s the right price? Pricing tasks for finishing on time. Human Computation 11: 26–31. [Google Scholar]
  62. Faullant, Rita, Johann Füller, and Katja Hutter. 2017. Fair play: Perceived fairness in crowdsourcing competitions and the customer relationship-related consequences. Management Decision 55: 1924–41. [Google Scholar]
  63. Fernández-Macías, Enrique. 2017. Automation, Digitisation and Platforms: Implications for Work and Employment. Loughlinstown: Eurofound, Available online: https://www.eurofound.europa.eu/publications/report/2018/automation-digitisation-and-platforms-implications-for-work-and-employment (accessed on 11 November 2020).
  64. Fieseler, Christian, Eliane Bucher, and Christian P. Hoffmann. 2019. Unfairness by design? The perceived fairness of digital labor on Crowdworking platforms. Journal of Business Ethics 156: 987–1005. [Google Scholar] [CrossRef]
  65. Folger, Robert G., and Russell Cropanzano. 1998. Organizational Justice and Human Resource Management. Thousand Oaks: SAGE. [Google Scholar]
  66. Fortin, Marion, Irina Cojuharenco, David Patient, and Hayley German. 2014. It is time for justice: How time changes what we know about justice judgments and justice effects. Journal of Organizational Behavior 37: S30–S56. [Google Scholar] [CrossRef]
  67. Franke, Nikolaus, Peter Keinz, and Katharina Klausberger. 2013. Does this sound like a fair deal: Antecedents and consequences of fairness expectations in the individual’s decision to participate in firm innovation. Organization Science 24: 1495–516. [Google Scholar] [CrossRef] [Green Version]
  68. Gassenheimer, Jule B., Judy A. Siguaw, and Gary L. Hunter. 2013. Exploring motivations and the capacity for business crowdsourcing. AMS Review 3: 205–16. [Google Scholar] [CrossRef]
  69. Gilliland, Stephen W. 1993. The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review 18: 694–734. [Google Scholar] [CrossRef]
  70. Gleibs, Ilka H. 2016. Are all “research fields” equal? Rethinking practice for the use of data from crowdsourcing market places. Behavior Research Methods 49: 1333–42. [Google Scholar] [CrossRef] [Green Version]
  71. Golding, Andrew R., and Paul S. Rosenbloom. 1996. Improving accuracy by combining rule-based and case-based reasoning. Artificial Intelligence 87: 215–54. [Google Scholar] [CrossRef] [Green Version]
  72. Goldman, Alan H. 2015. Justice and Reverse Discrimination. Princeton: Princeton University Press. [Google Scholar]
  73. Goldman, Barry, and Russell Cropanzano. 2015. “Justice” and “fairness” are not the same thing. Journal of Organizational Behavior 36: 313–18. [Google Scholar] [CrossRef]
  74. Gond, Jean-Pascal, Assâad El Akremi, Valérie Swaen, and Nishat Babu. 2017. The psychological microfoundations of corporate social responsibility: A person-centric systematic review. Journal of Organizational Behavior 38: 225–46. [Google Scholar] [CrossRef]
  75. Greenberg, Jerald. 1986. Determinants of perceived fairness of performance evaluations. Journal of Applied Psychology 71: 340–42. [Google Scholar] [CrossRef]
  76. Greenberg, Jerald. 1987. A taxonomy of organizational justice theories. Academy of Management Review 12: 9–22. [Google Scholar] [CrossRef] [Green Version]
  77. Greenberg, Jerald. 1993. The Social Side of Fairness: Interpersonal and Informational Classes of Organizational Justice. In Justice in the Workplace: Approaching Fairness in Human Resource Management. Edited by Russell Cropanzano. Mahwah: Lawrence Erlbaum Associates, pp. 79–103. [Google Scholar]
  78. Greenberg, Jerald, and Robert Folger. 1983. Procedural justice, participation, and the fair process effect in groups and organizations. In Basic Group Processes. New York: Springer, pp. 235–56. [Google Scholar] [CrossRef]
  79. Gregg, Dawn G. 2010. Designing for collective intelligence. Communications of the ACM 53: 134–38. [Google Scholar] [CrossRef]
  80. Hambley, Laura A., Thomas A. O’Neill, and Theresa J. Kline. 2007. Virtual team leadership: The effects of leadership style and communication medium on team interaction styles and outcomes. Organizational Behavior and Human Decision Processes 103: 1–20. [Google Scholar] [CrossRef]
  81. Harms, Peter D., and Justin A. DeSimone. 2015. Caution! MTurk workers ahead-Fines doubled. Industrial and Organizational Psychology 8: 183–90. [Google Scholar] [CrossRef]
  82. Heponiemi, Tarja, Marko Elovainio, Laura Pekkarinen, Timo Sinervo, and Anne Kouvonen. 2008. The effects of job demands and low job control on work–family conflict: The role of fairness in decision making and management. Journal of Community Psychology 36: 387–98. [Google Scholar] [CrossRef]
  83. Herr, Paul M., Frank R. Kardes, and John Kim. 1991. Effects of word-of-Mouth and product-attribute information on persuasion: An accessibility-diagnosticity perspective. Journal of Consumer Research 17: 454–62. [Google Scholar] [CrossRef]
  84. Hetmank, Lars. 2013. Components and Functions of Crowdsourcing Systems-A Systematic Literature Review. Wirtschaftsinformatik 4: 55–69. [Google Scholar] [CrossRef]
  85. Hossain, Mokter. 2012. Users’ motivation to participate in online crowdsourcing platforms. Paper presented at the 2012 International Conference on Innovation Management and Technology Research, Malacca, Malaysia, May 21–22. [Google Scholar]
  86. Howcroft, Debra, and Birgitta Bergvall-Kåreborn. 2019. A typology of Crowdwork platforms. Work, Employment and Society 33: 21–38. [Google Scholar] [CrossRef]
  87. Howe, Jeff. 2006. The Rise of Crowdsourcing. WIRED. June 1. Available online: https://www.wired.com/2006/06/crowds/ (accessed on 11 November 2020).
  88. Howe, Jeff. 2009. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business. Fort Collins: Crown Pub. [Google Scholar]
  89. Internal Revenue Service. 2020. Topic No. 762 Independent Contractor vs. Employee. September 22. Available online: https://www.irs.gov/taxtopics/tc762 (accessed on 11 November 2020).
  90. Irani, Lilly. 2013. The cultural work of microwork. New Media and Society 17: 720–39. [Google Scholar] [CrossRef] [Green Version]
  91. Jäger, Georg, Laura S. Zilian, Christian Hofer, and Manfred Füllsack. 2019. Crowdworking: Working with or against the crowd? Journal of Economic Interaction and Coordination 14: 761–88. [Google Scholar] [CrossRef] [Green Version]
  92. Jebb, Andrew T., Scott Parrigon, and Sang Eun Woo. 2017. Exploratory data analysis as a foundation of inductive research. Human Resource Management Review 27: 265–76. [Google Scholar] [CrossRef]
  93. Jones, David A., and Daniel P. Skarlicki. 2003. The relationship between perceptions of fairness and voluntary turnover among retail employees. Journal of Applied Social Psychology 33: 1226–43. [Google Scholar] [CrossRef]
  94. Kamar, Ece, Severin Hacker, and Eric Horvitz. 2012. Combining human and machine intelligence in large-scale crowdsourcing. Paper presented at the 11th International Conference on Autonomous Agents and Multiagent Systems, Valencia, Spain, June 4–8. [Google Scholar]
  95. Kaufmann, Nicolas, Thimo Schulze, and Daniel Veit. 2011. More than fun and money. Worker Motivation in Crowdsourcing-A Study on Mechanical Turk. Paper presented at the 2011 Conference on Computer Supported Cooperative Work, Hangzhou, China, March 19–23. [Google Scholar]
  96. Keith, Melissa G., and Peter D. Harms. 2016. Is Mechanical Turk the answer to our sampling woes? Industrial and Organizational Psychology: Perspectives on Science and Practice 9: 162–67. [Google Scholar]
  97. Keith, Melissa G., Peter D. Harms, and Alexander C. Long. 2020. Worker health and well-being in the gig economy: A proposed framework and research agenda. In Research in Occupational Stress and Well Being. Bingley: Emerald Publishing Limited, pp. 1–33. [Google Scholar] [CrossRef]
  98. Keith, Melissa G., Louis Tay, and Peter D. Harms. 2017. Systems perspective of Amazon mechanical turk for organizational research: Review and recommendations. Frontiers in Psychology 8: 1359. [Google Scholar] [CrossRef]
  99. Kelman, Herbert C. 1958. Compliance, identification, and internalization three processes of attitude change. Journal of Conflict Resolution 2: 51–60. [Google Scholar] [CrossRef]
  100. Kelman, Herbert C. 2017. Further thoughts on the processes of compliance, identification, and internalization. In Social Power and Political Influence. Abingdon-on-Thames: Routledge, pp. 125–71. [Google Scholar]
  101. Kittur, Aniket, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt Lease, and John Horton. 2013. The future of crowd work. Paper presented at the 2013 Conference on Computer Supported Cooperative Work, San Antonio, TX, USA, February 23–27. [Google Scholar]
  102. Landers, Richard N., and Tara S. Behrend. 2015. An inconvenient truth: Arbitrary distinctions between organizational, mechanical turk, and other convenience samples. Industrial and Organizational Psychology 8: 142–64. [Google Scholar] [CrossRef] [Green Version]
  103. Lankford, William M., and Faramarz Parsa. 1999. Outsourcing: A primer. Management Decision 37: 310–16. [Google Scholar] [CrossRef] [Green Version]
  104. Latané, Bibb, James H. Liu, Andrzej Nowak, Michael Bonevento, and Long Zheng. 1995. Distance matters: Physical space and social impact. Personality and Social Psychology Bulletin 21: 795–805. [Google Scholar] [CrossRef]
  105. Lee, Moohun, Sunghoon Cho, Hyokyung Chang, Junghee Jo, Hoiyoung Jung, and Euiin Choi. 2008. Auditing system using rule-based reasoning in ubiquitous computing. Paper presented at the 2008 International Conference on Computational Sciences and Its Applications, Perugia, Italy, June 30–July 3. [Google Scholar]
  106. Leventhal, Gerald S. 1980. What should be done with equity theory? New approaches to the study of fairness in social relationships. In Social Exchange: Advances in Theory and Research. Edited by Kenneth J. Gergen, Martin S. Greenberg and Richard H. Willis. New York: Plenum, pp. 27–55. [Google Scholar] [CrossRef]
  107. Leung, Gabriel, and Vincent Cho. 2018. An Empirical Study of Motivation, Justice and Self-Efficacy in Solvers’ Continued Participation Intention in Microtask Crowdsourcing. Paper presented at the American Conference on Information Systems, New Orleans, LA, USA, August 16–18. [Google Scholar]
  108. Litman, Leib, Jonathan Robinson, and Cheskie Rosenzweig. 2014. The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on mechanical turk. Behavior Research Methods 47: 519–28. [Google Scholar] [CrossRef]
  109. Liu, Ying, and Yongmei Liu. 2019. The effect of workers’ justice perception on continuance participation intention in the crowdsourcing market. Internet Research 29: 1485–508. [Google Scholar] [CrossRef]
  110. Locke, Edwin A. 2007. The case for inductive theory Building. Journal of Management 33: 867–90. [Google Scholar] [CrossRef] [Green Version]
  111. Ma, Xiao, Lara Khansa, and Jinghui Hou. 2016. Toward a contextual theory of turnover intention in online crowd working. Paper presented at the International Conference on Information Systems, Dublin, Ireland, December 11–14. [Google Scholar]
  112. Ma, Xiao, Lara Khansa, and Sung S. Kim. 2018. Active community participation and Crowdworking turnover: A longitudinal model and empirical test of three mechanisms. Journal of Management Information Systems 35: 1154–87. [Google Scholar] [CrossRef]
  113. Marjanovic, Sonja, Caroline Fry, and Joanna Chataway. 2012. Crowdsourcing based business models: In search of evidence for innovation 2.0. Science and Public Policy 39: 318–32. [Google Scholar] [CrossRef]
  114. Mandl, Irene, Maurizio Curtarelli, Sara Riso, Oscar Vargas-Llave, and Elias Georgiannis. 2015. New Forms of Employment. Loughlinstown: Eurofound, Available online: https://www.eurofound.europa.eu/publications/report/2015/working-conditions-labour-market/new-forms-of-employment (accessed on 11 November 2020).
  115. Mao, Andrew, Ece Kamar, Yiling Chen, Eric Horvitz, Megan E. Schwamb, Chris J. Lintott, and Arfon M. Smith. 2013. Volunteering versus work for pay: Incentives and tradeoffs in crowdsourcing. Paper presented at the First AAAI Conference on Human Computation and Crowdsourcing, Palm Springs, CA, USA, November 7–9. [Google Scholar]
  116. Marling, Cynthia R., Grace J. Petot, and Leon S. Sterling. 1999. Integrating case-based and rule-based reasoning to meet multiple design constraints. Computational Intelligence 15: 308–32. [Google Scholar]
  117. Milland, Kristy. 2016. Crowdwork: The fury and the fear. In The Digital Economy and the Single Market. Edited by Werner Wobbe, Elva Bova and Catalin Dragomirescu-Gaina. Brussels: Foundation for European Progressive Studies, pp. 83–92. [Google Scholar]
  118. Mitchell, Terrence R. 1997. Matching motivational strategies with organizational contexts. In Research in Organizational Behavior. Edited by Larry L. Cummings and Barry M. Staw. Greenwich: JAI, pp. 57–149. [Google Scholar]
  119. Nakatsu, Robbie T., Elissa B. Grossman, and Charalambos L. Iacovou. 2014. A taxonomy of crowdsourcing based on task complexity. Journal of Information Science 40: 823–34. [Google Scholar]
  120. Opsahl, Robert L., and Marvin D. Dunnette. 1966. Role of financial compensation in industrial motivation. Psychological Bulletin 66: 94–118. [Google Scholar]
  121. Paolacci, Gabriele, Jesse Chandler, and Panagiotis G. Ipeirotis. 2010. Running experiments on Amazon Mechanical Turk. Judgment and Decision Making 5: 411–19. [Google Scholar]
  122. Porter, Christopher O., Ryan Outlaw, Jake P. Gale, and Thomas S. Cho. 2019. The use of online panel data in management research: A review and recommendations. Journal of Management 45: 319–44. [Google Scholar]
  123. Prentzas, Jim, and Ioannis Hatzilygeroudis. 2009. Combinations of case-based reasoning with other intelligent methods. International Journal of Hybrid Intelligent Systems 6: 189–209. [Google Scholar] [CrossRef] [Green Version]
  124. Prpić, John, Prashant P. Shukla, Jan H. Kietzmann, and Ian P. McCarthy. 2015. How to work a crowd: Developing crowd capital through crowdsourcing. Business Horizons 58: 77–85. [Google Scholar] [CrossRef] [Green Version]
  125. Rawls, John. 2005. A Theory of Justice. Cambridge: Harvard University Press. [Google Scholar]
  126. Rossille, Delphine, Jean-François Laurent, and Anita Burgun. 2005. Modelling a decision-support system for oncology using rule-based and case-based reasoning methodologies. International Journal of Medical Informatics 74: 299–306. [Google Scholar] [CrossRef] [PubMed]
  127. Rothschild, Michael L. 1999. Carrots, sticks, and promises: A conceptual framework for the management of public health and social issue behaviors. Journal of Marketing 63: 24–37. [Google Scholar] [CrossRef]
  128. Rousseau, Denise M. 1990. New hire perceptions of their own and their employer’s obligations: A study of psychological contracts. Journal of Organizational Behavior 11: 389–400. [Google Scholar] [CrossRef]
  129. Ryan, Richard M., and Edward L. Deci. 2000. The darker and brighter sides of human existence: Basic psychological needs as a unifying concept. Psychological Inquiry 11: 319–38. [Google Scholar] [CrossRef]
  130. Ryan, Ann Marie, and Jennifer L. Wessel. 2015. Implications of a changing workforce and workplace for justice perceptions and expectations. Human Resource Management Review 25: 162–75. [Google Scholar]
  131. Schaufeli, Wilmar B., Isabel M. Martinez, Alexandra Marques Pinto, Marisa Salanova, and Arnold B. Bakker. 2002. Burnout and engagement in University students. Journal of Cross-Cultural Psychology 33: 464–81. [Google Scholar] [CrossRef] [Green Version]
  132. Schulte, Julian, Katharina D. Schlicher, and Günter W. Maier. 2020. Working everywhere and every time? Chances and risks in crowdworking and crowdsourcing work design. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 51: 59–69. [Google Scholar] [CrossRef] [Green Version]
  133. Segal, Lewis M., and Daniel G. Sullivan. 1997. The growth of temporary services work. Journal of Economic Perspectives 11: 117–36. [Google Scholar] [CrossRef]
  134. Semuels, Alana. 2018. The Internet Is Enabling a New Kind of Poorly Paid Hell. The Atlantic. February 5. Available online: https://www.theatlantic.com/business/archive/2018/01/amazon-mechanical-turk/551192/ (accessed on 11 November 2020).
  135. Shapiro, Debra L., Holly Buttner, and Bruce Barry. 1994. Explanations: What factors enhance their perceived adequacy? Organizational Behavior and Human Decision Processes 58: 346–68. [Google Scholar] [CrossRef] [Green Version]
  136. Sharpe-Wessling, Kathryn, Joel Huber, and Oded Netzer. 2017. MTurk character misrepresentation: Assessment and solutions. Journal of Consumer Research 44: 211–330. [Google Scholar] [CrossRef] [Green Version]
  137. Sheridan, Thomas B. 1992. Musings on telepresence and virtual presence. Presence: Teleoperators and Virtual Environments 1: 120–26. [Google Scholar] [CrossRef]
  138. Siemsen, Enno, Aleda V. Roth, and Sridhar Balasubramanian. 2007. How motivation, opportunity, and ability drive knowledge sharing: The constraining-factor model. Journal of Operations Management 26: 426–45. [Google Scholar] [CrossRef]
  139. Silberman, Six, Bill Tomlinson, Rochelle LaPlante, Joel Ross, Lilly Irani, and Andrew Zaldivar. 2018. Responsible research with crowds. Communications of the ACM 61: 39–41. [Google Scholar] [CrossRef]
  140. Simula, Henri, and Tuomas Ahola. 2014. A network perspective on idea and innovation crowdsourcing in industrial firms. Industrial Marketing Management 43: 400–8. [Google Scholar] [CrossRef]
  141. Sitkin, Sim B., and Robert J. Bies. 1993. Social accounts in conflict situations: Using explanations to manage conflict. Human Relations 46: 349–70. [Google Scholar] [CrossRef]
  142. Skarlicki, Daniel P., Danielle D. van Jaarsveld, Ruodan Shao, Young Ho Song, and Mo Wang. 2016. Extending the multifoci perspective: The role of supervisor justice and moral identity in the relationship between customer justice and customer-directed sabotage. Journal of Applied Psychology 101: 108–21. [Google Scholar] [CrossRef]
  143. Slade, Stephen. 1991. Case-based reasoning: A research paradigm. Artificial Intelligence Magazine 12: 42–55. [Google Scholar] [CrossRef] [Green Version]
  144. Smith, Derek, Mohammad M. G. Manesh, and Asrar Alshaikh. 2013. How can entrepreneurs motivate Crowdsourcing participants? Technology Innovation Management Review 3: 23–30. [Google Scholar] [CrossRef]
  145. Steuer, Jonathan. 1992. Defining virtual reality: Dimensions determining telepresence. Journal of Communication 42: 73–93. [Google Scholar] [CrossRef]
  146. Stewart, Neil, Christoph Ungemach, Adam J. Harris, Daniel M. Bartels, Ben R. Newell, Gabriele Paolacci, and Jesse Chandler. 2015. The average laboratory samples a population of 7300 Amazon Mechanical Turk workers. Judgment and Decision Making 10: 479–91. [Google Scholar]
  147. Sundararajan, Arun. 2016. The Sharing Economy: The End of Employment and the Rise of Crowd-Based Capitalism. Cambridge: MIT Press. [Google Scholar]
  148. Surowiecki, James. 2005. The Wisdom of Crowds. New York: Anchor. [Google Scholar]
  149. Tekleab, Amanuel G., Kathryn M. Bartol, and Wei Liu. 2005. Is it pay levels or pay raises that matter to fairness and turnover? Journal of Organizational Behavior 26: 899–921. [Google Scholar] [CrossRef]
  150. Thibaut, John, and Laurens Walker. 1978. A theory of procedure. California Law Review 66: 541–66. [Google Scholar] [CrossRef]
  151. Thomas, Joe G., and Ricky W. Griffin. 1989. The power of social information in the workplace. Organizational Dynamics 18: 63–75. [Google Scholar] [CrossRef]
  152. Tyler, Tom R., and Robert J. Bies. 1990. Beyond Formal Procedures: The Interpersonal Context of Procedural Justice. In Applied Social Psychology and Organizational Settings. Edited by Jason S. Carroll. Mahwah: Erlbaum Associates, pp. 77–98. [Google Scholar] [CrossRef]
  153. Tyler, Tom R., and Steven L. Blader. 2000. Cooperation in Groups: Procedural Justice, Social Identity, and Behavioral Engagement. East Sussex: Psychology Press. [Google Scholar]
  154. Vander Elst, Tinne, Jacqueline Bosman, Nele De Cuyper, Jeroen Stouten, and Hans De Witte. 2013. Does positive affect buffer the associations between job insecurity and work engagement and psychological distress? A test among South African workers. Applied Psychology: An International Review 62: 558–70. [Google Scholar]
  155. Vukovic, Maja, and Arjun Natarajan. 2013. Operational excellence in IT services using enterprise Crowdsourcing. Paper presented at the 2013 IEEE International Conference on Services Computing, Santa Clara, CA, USA, June 28–July 3. [Google Scholar]
  156. Wang, Nan, Yongqiang Sun, Xiao-Liang Shen, and Xi Zhang. 2018. A value-justice model of knowledge integration in wikis: The moderating role of knowledge equivocality. International Journal of Information Management 43: 64–75. [Google Scholar]
  157. Watson, Ian, and Farhi Marir. 1994. Case-based reasoning: A review. The Knowledge Engineering Review 9: 327–54. [Google Scholar] [CrossRef]
  158. Weng, Jiaxiong, Huimin Xie, Yuanyue Feng, Ruoqing Wang, Yi Ye, Peiying Huang, and Xizhi Zheng. 2019. Effects of gamification elements on crowdsourcing participation: The mediating role of justice perceptions. Paper presented at the International Conference on Electronic Business, Newcastle upon Tyne, UK, December 8–12. [Google Scholar]
  159. Wexler, Mark N. 2011. Reconfiguring the sociology of the crowd: Exploring crowdsourcing. International Journal of Sociology and Social Policy 31: 6–20. [Google Scholar] [CrossRef]
  160. Wheeler, Anthony R., Vickie C. Gallagher, Robyn L. Brouer, and Chris J. Sablynski. 2007. When person-organization (mis)fit and (dis)satisfaction lead to turnover. Journal of Managerial Psychology 22: 203–19. [Google Scholar] [CrossRef]
  161. Woo, Sang Eun, Ernest H. O’Boyle, and Paul E. Spector. 2017. Best practices in developing, conducting, and evaluating inductive research. Human Resource Management Review 27: 255–64. [Google Scholar] [CrossRef]
  162. Yang, Congcong, Yuanyue Feng, Xizhi Zheng, Ye Feng, Ying Yu, Ben Niu, and Pianpian Yang. 2018. Fair or Not: Effects of Gamification Elements on Crowdsourcing Participation. Paper presented at the International Conference on Electronic Business, Guilin, China, December 2–6. [Google Scholar]
  163. Yuen, Man-Ching, Irwin King, and Kwong-Sak Leung. 2011. A survey of crowdsourcing systems. Paper presented at the 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, Boston, MA, USA, October 9–11. [Google Scholar]
  164. Zhao, Yuxiang, and Qinghua Zhu. 2012. Exploring the motivation of participants in crowdsourcing contest. Paper presented at the 33rd International Conference on Information Systems, Melbourne, Australia, June 28–July 2. [Google Scholar]
  165. Zheng, Haichao, Dahui Li, and Wenhua Hou. 2011. Task design, motivation, and participation in Crowdsourcing contests. International Journal of Electronic Commerce 15: 57–88. [Google Scholar] [CrossRef]
  166. Zou, Lingfei, Jinlong Zhang, and Wenxing Liu. 2015. Perceived justice and creativity in crowdsourcing communities: Empirical evidence from China. Social Science Information 54: 253–79. [Google Scholar] [CrossRef]
Figure 1. Conceptual Model.
Table 1. Categorization of Crowdwork.

| Fernandez-Macias (2017) (the author termed online-based work "crowd work" and offline-based work "gig work") | Duggan et al. (2020) (the authors used the term "gig work" to describe all three types below) | Howcroft and Bergvall-Kåreborn (2019) |
| --- | --- | --- |
| Online-based work | Crowdwork—tasks are assigned to and finished by a geographically dispersed crowd, with requesters and workers connected by online platforms. | Type A work—tasks assigned to and finished by workers online. Type B work—"playbour" tasks assigned to and finished by workers online; workers finish tasks primarily for fun and joy, instead of being compensated. |
| Online- and/or offline-based work (a) | (none) | Type D work—profession-based freelance work, with requesters and workers connected by online platforms. Workers deliver services either online or offline. |
| Offline-based work | Capital platform work—products sold or leased offline, with buyers and sellers connected by online platforms. App work—tasks deployed to workers and finished offline, with requesters and workers connected by online platforms. | Type C work—asset-based services, with requesters and workers connected by online platforms. Workers deliver services offline by utilizing assets/equipment owned by workers. |

(a) This category was not originally from Fernandez-Macias (2017).
Table 2. Summary of Extant Crowd-Based General Labor Platforms (in alphabetical order).

| Platform (Founding Year) | Platform's Business Model (a) | Compensation Policy | Payment Procedure | Performance Evaluation Process | Case-Based vs. Rule-Based Evaluation | Platform-Supported Communication (b) |
| --- | --- | --- | --- | --- | --- | --- |
| Aileensoul (2017) | No requester commission | Paid upon completion of tasks posted by requesters | Direct (no escrow account) | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| ClickWorker (2005) | Requesters pay a 40% commission; the platform sets a minimum compensation rate | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| CloudPeeps (2015) | Requesters pay a 5–15% commission plus 2.9% processing fees; a monthly subscription can reduce the commission percentage | Paid upon task completion; hourly compensation also available | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Expert 360 (2013) | No requester commission; 15% of the total payment is deducted from workers' earnings | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Fiverr (2010) | No requester commission; 20% of the total payment is deducted from workers' earnings | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| FlexJobs (2007) | No requester commission; requesters and workers each pay a monthly subscription | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Freelancer (2009) | No requester commission; the platform collects 3% of the compensation or $3 (or its approximate equivalent in other currencies), whichever is greater, when workers are paid | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Freelancermap (2011) | No requester commission; workers pay a monthly subscription | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | In-site text messaging |
| FreeUp (2015) | Requesters pay a 15% commission | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Giggrabbers (2015) | No requester commission; 9.5% of the total payment is deducted from workers' earnings | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Guru (1998) | No requester commission; an undisclosed fee is deducted from workers' earnings; the platform sets a minimum compensation rate | Paid upon task completion; hourly compensation also available | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| IdeaConnection (2007) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid upon solving the posted problem | Escrow | Requester evaluates and decides compensation | Case-based | In-site multimedia messaging |
| iJobDesk (2018) | Requesters pay a 2% commission; the platform sets a minimum compensation rate | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site text messaging |
| InnoCentive (2001) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Case-based | In-site multimedia messaging |
| LocalLancers (2013) | No requester commission | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | None |
| LocalSolo (2014) | No requester commission; requesters pay a monthly subscription | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Case-based | In-site multimedia messaging |
| Mechanical Turk (2005) | Requesters pay a 20% commission | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| MediaBistro (1999) | No requester commission; requesters pay per task posting and workers pay a monthly subscription | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Micro Job Market (2018) | No requester commission | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| MyRemoteTeam (2017) | No requester commission; requesters pay a monthly subscription | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Nexxt (1996) | No requester commission; requesters pay a monthly subscription | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| NineSigma (2000) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid when the proposal is accepted by the client | Escrow | Requester evaluates and decides compensation | Case-based | In-site multimedia messaging |
| Oridle (2008) | No requester commission; requesters pay a monthly subscription | Paid when the proposal is accepted by the client | Escrow | Requester evaluates and decides compensation | Rule- and case-based | None |
| PeoplePerHour (2007) | Requesters pay a 10% commission | Paid upon task completion, on an hourly basis | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Project4Hire (2009) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid when the proposal is accepted by the client | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Prolific (2014) | Requesters pay a 25% commission; the platform sets a minimum compensation rate | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site text messaging |
| Rat Race Rebellion (1999) | No requester commission | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | None |
| ServiceScape (2000) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid upon task completion | Requester adds a valid payment method before work starts; payment upon completion | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Skip the Drive (2013) | No requester commission; requesters pay per task posting | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | None |
| Soshace (2013) | Requesters pay a 10–13% commission | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | None |
| Speedlancer (2014) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Thumbtack (2009) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Toogit (2016) | No requester commission; an 8% "facilitator fee" is deducted from workers' earnings | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Toptal (2010) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Transformify (2015) | No requester commission; requesters pay either a monthly subscription or a one-time fee per job posting | Paid upon task completion | Requester adds a valid payment method before work starts; payment upon completion | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Truelancer (2014) | No requester commission; an undisclosed fee is deducted from workers' earnings | Paid upon task completion; the platform sets a minimum compensation rate | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site text messaging |
| UpWork (2015) | No requester commission; a 20% commission and 2.75% processing fees are deducted from workers' earnings | Paid upon task completion; hourly compensation also available | Escrow | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Virtual Vocations (2008) | No requester commission; workers pay a subscription to receive task information | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| WeWork Remotely (2010) | No requester commission; requesters pay a one-time fee per job posting | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| Working Nomads (2014) | No requester commission; requesters pay a one-time fee per job posting | Paid upon task completion | Direct | Requester evaluates and decides compensation | Rule- and case-based | In-site multimedia messaging |
| YunoJuno (2012) | Requesters pay a fee on top of the compensation paid to workers; the fee rate depends on the requester's subscription | Paid upon task completion | Escrow | Requester evaluates and decides compensation | Rule- and case-based | None |

Note: Escrow = the requester makes an upfront payment to an escrow account held by the platform, and the funds are released to the worker upon completion of the requested task. Direct = the requester pays the worker directly upon task completion; the platform is not involved in payment. (a) We made our best attempt to acquire commission and fee information from multiple sources; however, some platforms do not disclose this information. For these platforms, the table notes "an undisclosed fee." (b) This column indicates the communication methods provided by the platform. Requesters and workers may also communicate directly without utilizing any platform-mediated methods (e.g., private chat).
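To make the fee structures in Table 2 concrete, the sketch below computes a worker's take-home pay under three fee models that recur in the table: a worker-side percentage deduction (e.g., Fiverr's 20%), a percentage-or-flat minimum (Freelancer's 3% or $3, whichever is greater), and a percentage deduction plus a processing fee (UpWork's 20% plus 2.75%). This is purely illustrative: the percentages come from the table, but the function names and the simplified assumption that all fees apply to the gross task price are ours, not any platform's documented policy or API.

```python
# Illustrative take-home pay calculator for the fee models summarized in
# Table 2. The rates come from the table; everything else (function names,
# applying all fees to the gross amount) is a simplifying assumption.

def worker_side_percentage(gross: float, rate: float) -> float:
    """Platform deducts a flat percentage from the worker's earnings
    (e.g., Fiverr-style 20%)."""
    return gross * (1 - rate)

def percentage_or_flat_minimum(gross: float, rate: float, minimum: float) -> float:
    """Platform collects a percentage or a flat fee, whichever is greater
    (e.g., Freelancer-style 3% or $3)."""
    fee = max(gross * rate, minimum)
    return gross - fee

def percentage_plus_processing(gross: float, rate: float, processing: float) -> float:
    """Platform deducts a commission plus a payment-processing fee
    (e.g., UpWork-style 20% commission and 2.75% processing)."""
    return gross * (1 - rate - processing)

if __name__ == "__main__":
    gross = 50.00  # hypothetical $50 task
    print(f"20% worker-side deduction:  ${worker_side_percentage(gross, 0.20):.2f}")
    print(f"3% or $3, whichever greater: ${percentage_or_flat_minimum(gross, 0.03, 3.00):.2f}")
    print(f"20% plus 2.75% processing:   ${percentage_plus_processing(gross, 0.20, 0.0275):.2f}")
```

On a hypothetical $50 task, the flat-minimum model bites hardest at small amounts (the $3 floor exceeds 3% of $50), which is one reason per-task fee structures matter more for microtask workers than the headline percentage suggests.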
Table 3. Literature Review of Organizational Justice in the Crowd-Based Context (in alphabetical order).

| Author(s) and Year | Type | Antecedent(s) | Mediator(s) | Outcome(s) |
| --- | --- | --- | --- | --- |
| Faullant et al. (2017) | Empirical | Distributive justice; procedural justice | No mediator discussed | Evoked product interest; perceived innovativeness; loyalty intentions |
| Franke et al. (2013) | Empirical | Value distribution; system transparency; ex-ante identification with the requesting organization | Distributive justice; procedural justice | Willingness to contribute; ex-post identification with the requesting organization |
| Leung and Cho (2018) | Empirical | Intrinsic motivation; distributive justice | Self-efficacy | Continued participation intention |
| Liu and Liu (2019) | Empirical | Distributive justice; interpersonal justice; informational justice | Trust in task requester; trust in intermediary management | Continuance participation intention in the crowdsourcing market |
| Ma et al. (2016) | Empirical | Workload; distributive justice | Job satisfaction | Turnover intention (platform turnover) |
| Ma et al. (2018) | Empirical | Distributive justice | No mediator discussed | Turnover intention (platform turnover) |
| Wang et al. (2018) | Empirical | Distributive justice; procedural justice; interactional justice | Knowledge integration | Knowledge quality |
| Weng et al. (2019) | Empirical | Gamification elements (points, feedback, network) | Distributive justice; informational justice; interactional justice | Crowdsourcing participation |
| Yang et al. (2018) | Empirical | Point rewarding; feedback giving | Distributive justice; interactional justice | Workers' participation |
| Zou et al. (2015) | Empirical | Distributive justice; procedural justice; interactional justice | Idea cooperation (e.g., giving feedback on others' ideas or integrating knowledge from different participants, which helps with novelty and usefulness); idea generation | Creative performance |