A Financial Incentive Mechanism for Truthful Reporting Assurance in Online Crowdsourcing Platforms
Abstract
1. Introduction
2. Related Works
3. Methodology
3.1. Mechanism Description
- Workers answer crowdsourcing tasks by reporting a number in a given range. The answer reported by worker $i$ to task $k$ is denoted $r_{ik}$.
- The answer that worker $i$ considers true for task $k$, i.e., the right answer from the viewpoint of worker $i$, is denoted $b_{ik}$.
- The goal is for each worker to report the answer they believe in, i.e., $r_{ik} = b_{ik}$.
- For each task, a set of workers report their answers, and the average of these reports is computed. The average of the answers provided by the $n$ workers participating in task $k$ is $\bar{r}_k = \frac{1}{n}\sum_{i=1}^{n} r_{ik}$.
- The utility of worker $i$ from answering task $k$ is denoted $u_{ik}$.
- For each task, a worker receives utility according to the following rules (see the sketch after this list):
  - If the total distance of the answers from their average, $\sum_{i=1}^{n} |r_{ik} - \bar{r}_k|$, is less than a threshold $\varepsilon$, this indicates collusion or an excessively simple question; in that case, one worker is selected uniformly at random and receives only the reduced utility $L$ for that question, while the others receive nothing.
  - Otherwise, each worker's utility $u_{ik}$ is a decreasing function of the distance $|r_{ik} - \bar{r}_k|$ between their report and the average, so reports closer to the average earn higher utility.
- Upon completion of the crowdsourcing process, the earnings of worker $i$ are computed by accumulating the utilities $u_{ik}$ earned over all tasks, yielding the payoff $P_i$.
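To make the payment rule concrete, the following minimal sketch implements it in Python. The threshold $\varepsilon$, the factor $L$, and the linear shape of the distance-based utility are illustrative assumptions; the mechanism only requires that utility decreases with distance from the average.

```python
import random

EPSILON = 5.0   # assumed collusion/easy-question threshold
L = 0.1         # assumed reduced-utility factor for the collusion case
SCALE = 100.0   # reports are scores in the range 1-100

def task_utilities(reports):
    """Per-worker utilities u_ik for one task, given all sealed reports r_ik."""
    n = len(reports)
    mean = sum(reports) / n                      # average report
    total_distance = sum(abs(r - mean) for r in reports)
    if total_distance < EPSILON:
        # Suspected collusion or a trivial question: one randomly chosen
        # worker receives the reduced utility L; the others receive nothing.
        winner = random.randrange(n)
        return [L if i == winner else 0.0 for i in range(n)]
    # Normal case: utility decreases with distance from the average
    # (a linear decrease is used here as one plausible choice).
    return [max(0.0, 1.0 - abs(r - mean) / SCALE) for r in reports]

def payoff(per_task_reports, worker, price=1.0):
    """Earnings P_i of one worker after the whole crowdsourcing run."""
    return price * sum(task_utilities(reports)[worker]
                       for reports in per_task_reports)
```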
3.2. Proposed Mechanism Equilibrium
- Rule 1. Each worker may be selfish and willing to maximize their earnings from participating in crowdsourcing; hence, they seek to maximize their utility on each task.
- Rule 2. Each worker knows the correct answer, or at least an answer they believe to be correct or close to the correct one.
- Rule 3. Answers are sealed: workers are not aware of each other's answers to a task.
- Rule 4. Each worker understands that the other workers also know these rules.
- In the case of $\sum_{i=1}^{n} |r_{ik} - \bar{r}_k| \geq \varepsilon$ (the collusion check passes):
The utility of worker $i$ from answering task $k$ is a decreasing function of $|r_{ik} - \bar{r}_k|$. According to Rule 1, every worker is willing to maximize their utility; therefore, the goal of each worker is to provide the report closest to the average. Because answers are sealed (Rule 3) and each worker believes their own answer is correct or close to correct (Rule 2), and expects the same of the other workers (Rule 4), a worker's best available estimate of the average report is their own believed answer; hence reporting truthfully, $r_{ik} = b_{ik}$, is the best response.
- In the case of $\sum_{i=1}^{n} |r_{ik} - \bar{r}_k| < \varepsilon$ (suspected collusion or an excessively simple question):
In this case, the utility of worker $i$ for task $k$ takes one of only two values: 0, or, with probability $\frac{1}{n}$, the amount $L$. The random selection does not depend on the reports, so choosing either strategy, $r_{ik} = b_{ik}$ or $r_{ik} \neq b_{ik}$, does not influence the expected utility $E[u_{ik}] = L/n$.
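The best-response claim in the first case can be checked numerically. The sketch below reuses the assumed linear utility from the earlier sketch and compares the average utility of a worker who reports their believed answer against one who deviates from it, while the other workers report near their own believed answers; the noise level and deviation size are illustrative.

```python
import random
import statistics

def utility(report, all_reports, scale=100.0):
    """Assumed distance-based utility: higher when closer to the average."""
    mean = statistics.mean(all_reports)
    return max(0.0, 1.0 - abs(report - mean) / scale)

def average_utility(my_report, beliefs_of_others, noise=3.0, trials=10_000):
    total = 0.0
    for _ in range(trials):
        others = [random.gauss(b, noise) for b in beliefs_of_others]
        total += utility(my_report, [my_report] + others)
    return total / trials

b = 70.0             # worker's believed answer b_ik
others = [70.0] * 9  # nine other workers who believe the same answer

print(f"truthful  (r = b):      {average_utility(b, others):.4f}")
print(f"deviating (r = b + 15): {average_utility(b + 15.0, others):.4f}")
# The truthful report consistently yields the higher average utility.
```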
3.3. Proposed Mechanism for Budget and Pricing Estimation
3.3.1. Fixed Budget Methods
3.3.2. Bidding Price Method
3.4. Reliability and Validity
3.5. Adopting the Proposed Mechanism to Another Form of Crowdsourcing
3.6. Experimental Results
- 1–20: Very Negative
- 21–40: Negative
- 41–60: Neutral
- 61–80: Positive
- 81–100: Very Positive
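For reference, the score-to-category mapping above can be written as a small helper; the function name and structure are illustrative, not from the paper.

```python
def sentiment_category(score: int) -> str:
    """Map a 1-100 sentiment score to its category label."""
    if not 1 <= score <= 100:
        raise ValueError("score must be in the range 1-100")
    if score <= 20:
        return "Very Negative"
    if score <= 40:
        return "Negative"
    if score <= 60:
        return "Neutral"
    if score <= 80:
        return "Positive"
    return "Very Positive"
```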
- Five emails per category were labeled with the output required by the requesters (the authors of this paper) and used to train crowdworkers. Hence, before crowdsourcing, each crowdworker received 25 emails (5 per category) along with the employer's opinion on the sentiment category to which each email must be allocated.
- Three emails per category were used in an entrance test for the crowdsourcing process. In this test, performed after the training, each crowdworker was asked to read 15 emails (3 per category) and rate the sentiment implicit in each email with a score from 1 to 100. The resulting outputs were then compared with those produced by the employer. Only persons with at least 12 correct outputs and at most 1 incorrect output per category were allowed to participate in the main crowdsourcing process; a sketch of this filter follows this list. Note that prospective crowdworkers took this test only to prove their eligibility for the main task and were not financially compensated for this participation.
- Twenty emails per category were used as the main crowdsourcing tasks, so each participant had to rate 100 emails in total (20 per category). This stage was similar to the previous one except that crowdworkers were paid to participate. The outputs from this stage were used to evaluate the efficiency of the proposed mechanism.
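A sketch of the entrance-test filter is shown below. It assumes that an output counts as correct when the candidate's category matches the employer's category for that email; the function and variable names are illustrative.

```python
from collections import Counter

def is_eligible(candidate_labels, employer_labels):
    """Entrance test over 15 emails (3 per category): pass with at least 12
    correct outputs and at most 1 incorrect output per category."""
    assert len(candidate_labels) == len(employer_labels) == 15
    correct = sum(c == e for c, e in zip(candidate_labels, employer_labels))
    # Count mistakes against the employer's category for each email.
    wrong_per_category = Counter(
        e for c, e in zip(candidate_labels, employer_labels) if c != e
    )
    return correct >= 12 and all(v <= 1 for v in wrong_per_category.values())
```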
4. Discussion
5. Conclusions and Future Works
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
References
- Edelman, B.; Ostrovsky, M.; Schwarz, M. Internet Advertising and the Generalized Second-Price Auction: Selling Billions of Dollars' Worth of Keywords. Am. Econ. Rev. 2007, 97, 242–259.
- Varian, H.R. Position Auctions. Int. J. Ind. Organ. 2007, 25, 1163–1178.
- Lahaie, S.; Pennock, D.M.; Saberi, A.; Vohra, R.V. Sponsored Search Auctions. In Algorithmic Game Theory; Cambridge University Press: Cambridge, UK, 2007; pp. 699–716.
- Benkler, Y. The Wealth of Networks: How Social Production Transforms Markets and Freedom; Yale University Press: New Haven, CT, USA, 2006.
- Malone, T.W.; Laubacher, R.; Dellarocas, C. Harnessing Crowds: Mapping the Genome of Collective Intelligence; MIT Sloan Research Paper No. 4732-09; 2009. Available online: https://ssrn.com/abstract=1381502 (accessed on 24 July 2021).
- von Ahn, L. Games with a Purpose. Computer 2006, 39, 92–94.
- Chatzimilioudis, G.; Konstantinidis, A.; Laoudias, C.; Zeinalipour-Yazti, D. Crowdsourcing with Smartphones. IEEE Internet Comput. 2012, 16, 36–44.
- Howe, J. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business; Crown Publishing Group: New York, NY, USA, 2008.
- Yang, D.; Xue, G.; Fang, X.; Tang, J. Crowdsourcing to Smartphones: Incentive Mechanism Design for Mobile Phone Sensing. In Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, Istanbul, Turkey, 22 August 2012.
- Zhang, X.; Xue, G.; Yu, R.; Yang, D.; Tang, J. Truthful Incentive Mechanisms for Crowdsourcing. In Proceedings of the IEEE Conference on Computer Communications, Hong Kong, 26 April 2015.
- Zhao, D.; Li, X.-Y.; Ma, H. How to Crowdsource Tasks Truthfully without Sacrificing Utility: Online Incentive Mechanisms with Budget Constraint. In Proceedings of the IEEE INFOCOM 2014-IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014.
- Mohan, P.; Padmanabhan, V.; Ramjee, R. Nericell: Rich Monitoring of Road and Traffic Conditions Using Mobile Smartphones. In Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems, Raleigh, NC, USA, 5 November 2008.
- Thiagarajan, A.; Ravindranath, L.; LaCurts, K.; Madden, S.; Balakrishnan, H.; Toledo, S.; Eriksson, J. VTrack: Accurate, Energy-Aware Road Traffic Delay Estimation Using Mobile Phones. In Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, Berkeley, CA, USA, 4 November 2009.
- Mun, M.; Reddy, S.; Shilton, K.; Yau, N.; Burke, J.; Estrin, D.; Hansen, M.; Howard, E.; West, R.; Boda, P. PEIR, the Personal Environmental Impact Report, as a Platform for Participatory Sensing Systems Research. In Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services, Kraków, Poland, 22 June 2009.
- Rana, R.; Chou, C.; Kanhere, S.; Bulusu, N.; Hu, W. Ear-Phone: An End-to-End Participatory Urban Noise Mapping System. In Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks, Stockholm, Sweden, 12 April 2010.
- von Ahn, L.; Dabbish, L. Designing Games with a Purpose. Commun. ACM 2008, 51, 58–67.
- Jain, S.; Chen, Y.; Parkes, D.C. Designing Incentives for Online Question and Answer Forums. In Proceedings of the 10th ACM Conference on Electronic Commerce, Stanford, CA, USA, 6 July 2009.
- Cooper, S.; Khatib, F.; Makedon, I.; Hao, L.; Barbero, J.; Baker, D.; Fogarty, J.; Popovic, Z.; Foldit Players. Analysis of Social Game-Play Macros in the Foldit Cookbook. In Proceedings of the 6th International Conference on Foundations of Digital Games, Bordeaux, France, 29 June 2011.
- Singla, A.; Krause, A. Truthful Incentives in Crowdsourcing Tasks Using Regret Minimization Mechanisms. In Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, 13 May 2013.
- Shaw, A.D.; Horton, J.J.; Chen, D.L. Designing Incentives for Inexpert Human Raters. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, Hangzhou, China, 19 March 2011.
- Kamar, E.; Horvitz, E. Incentives and Truthful Reporting in Consensus-Centric Crowdsourcing. 2012. Available online: https://www.microsoft.com/en-us/research/publication/incentives-and-truthful-reporting-in-consensus-centric-crowdsourcing/ (accessed on 24 July 2021).
- Feldman, M.; Papadimitriou, C.; Chuang, J.; Stoica, I. Free-Riding and Whitewashing in Peer-to-Peer Systems. In Proceedings of the ACM SIGCOMM Workshop on Practice and Theory of Incentives in Networked Systems, Portland, OR, USA, 3 September 2004.
- Mason, W.; Watts, D. Financial Incentives and the Performance of Crowds. ACM SIGKDD Explor. Newsl. 2009, 11, 100–108.
- Goel, G.; Nikzad, A.; Singla, A. Allocating Tasks to Workers with Matching Constraints: Truthful Mechanisms for Crowdsourcing Markets. In Proceedings of the 23rd International Conference on World Wide Web, Seoul, Korea, 7 April 2014.
- Chawla, S.; Hartline, J.D.; Malec, D.L.; Sivan, B. Multi-Parameter Mechanism Design and Sequential Posted Pricing. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, Cambridge, MA, USA, 5 June 2010.
- Singer, Y.; Mittal, M. Pricing Mechanisms in Online Labor Markets. In Proceedings of the Human Computation: AAAI Workshop, San Francisco, CA, USA, 8 August 2011.
- Zhang, Y.; van der Schaar, M. Reputation-Based Incentive Protocols in Crowdsourcing Applications. In Proceedings of the IEEE INFOCOM, Orlando, FL, USA, 25–30 March 2012; pp. 2140–2148.
- Singer, Y.; Mittal, M. Pricing Mechanisms for Crowdsourcing Markets. In Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, 13 May 2013.
- Devanur, N.R.; Hayes, T.P. The AdWords Problem: Online Keyword Matching with Budgeted Bidders under Random Permutations. In Proceedings of the 10th ACM Conference on Electronic Commerce, Stanford, CA, USA, 6 July 2009.
- DiPalantino, D.; Vojnovic, M. Crowdsourcing and All-Pay Auctions. In Proceedings of the 10th ACM Conference on Electronic Commerce, Stanford, CA, USA, 6 July 2009.
Ref. # | Main Approach | Worker Type | Answer Type | Incentives for Requesters | Incentives for Workers | Worker Best Answer (Truthful Reporting) | Case Study
---|---|---|---|---|---|---|---
[21] | Consensus prediction payment rules | Human | Single/set of answers | N/A | Fairness | No (promotes truthful reporting in some cases) | Simulation
[19] | Mechanism design for regret minimization | Human | Single/set of answers | Budget feasible, near-optimal utility | Profitable | No | Simulation/real world (Amazon MTurk)
[24] | Incentive-compatible mechanism design (TM-Uniform) | Human | Single answer | Budget feasible, near-optimal utility | Profitable, matching constraints | No | Simulation/real world (Wikipedia translation on Amazon MTurk)
[10] | Platform-centric and user-centric incentive mechanisms (a Stackelberg game for the platform-centric model; an auction-based incentive mechanism for the user-centric model) | Human (mobile phone sensing) | Single answer | Budget balance, utility | Profitable | No | Simulation
[11] | Online incentive mechanism design (very similar to (Site 10) in Appendix A and [13]) | Human (mobile phone sensing) | Single answer | Budget balance | Profitable | No | Simulation
[29] | An algorithm motivated by PAC learning | Human | Single answer | Maximize revenue | - | No | -
[28] | Design of multiple mechanisms | Human | Single/set of answers | Maximizing the number of tasks performed under a budget; minimizing payments for a given number of tasks | Profitable | No | Implementation of the Mechanical Perk framework
[27] | Mechanism design (incentive protocols; worker-requester interaction as a repeated game) | Human | Single/set of answers | No free-riding, maximize revenue | No false reporting | No | Simulation
[30] | Modeling contests as all-pay auctions | Human | Single/set of answers | - | - | No | Simulation/test on Taskcn.com
[26] | Mechanism design for optimal task pricing | Human | Single/set of answers | Budget feasibility, competitive-ratio performance | Profitable | No | Test on Mechanical Turk
Symbol | Meaning
---|---
$r_{ik}$ | Answer reported by worker $i$ to question $k$
$b_{ik}$ | Answer that worker $i$ believes to be true for question $k$
$\bar{r}_k$, $n$ | Average of the answers given to question $k$ by the $n$ participating workers
$u_{ik}$ | Utility of worker $i$ from question $k$
$L$ | Utility factor in the case of collusion or an easy question
$P_i$ | Payoff of worker $i$ after performing tasks in the crowdsourcing process
Budget | Budget of the requester for crowdsourcing, which will be paid to the workers
Price | Price of each task performed by a worker during the crowdsourcing process
Number of emails in each category labeled by participants (rows) vs. number of emails in each category labeled by requesters (columns):

 | Very Negative | Negative | Neutral | Positive | Very Positive
---|---|---|---|---|---
Very Negative | 20 | 18 | 19 | 17 | 18
Negative | 0 | 1 | 0 | 0 | 0
Neutral | 0 | 1 | 1 | 2 |
Positive | 0 | 0 | 0 | 1 | 1
Very Positive | 0 | 0 | 0 | 0 | 1
Number of emails in each category labeled by participants (rows) vs. number of emails in each category labeled by the expert (columns):

 | Very Negative | Negative | Neutral | Positive | Very Positive
---|---|---|---|---|---
Very Negative | 19 | 1 | 0 | 0 | 0
Negative | 1 | 18 | 1 | 0 | 0
Neutral | 0 | 1 | 19 | 2 | 0
Positive | 0 | 0 | 0 | 18 | 1
Very Positive | 0 | 0 | 0 | 0 | 19
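From the second table, the overall agreement between the participants' labels and the expert's labels can be computed directly: the diagonal counts the emails placed in the same category by both sides. The snippet below assumes the row/column orientation stated in the table caption.

```python
# Rows: participants' labels; columns: expert's labels (second table above).
confusion = [
    [19,  1,  0,  0,  0],   # Very Negative
    [ 1, 18,  1,  0,  0],   # Negative
    [ 0,  1, 19,  2,  0],   # Neutral
    [ 0,  0,  0, 18,  1],   # Positive
    [ 0,  0,  0,  0, 19],   # Very Positive
]

total = sum(sum(row) for row in confusion)          # 100 emails
matches = sum(confusion[i][i] for i in range(5))    # 93 emails on the diagonal
print(f"overall agreement: {matches / total:.0%}")  # 93%
```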