A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring
Abstract
1. Introduction
- (a) Employing a comprehensive literature review approach, this study outlines the factors contributing to algorithmic biases in the hiring process. The review reveals a fragmented understanding of algorithmic biases and their mitigation strategies, and our study seeks to address these gaps and offer solutions for rectifying such biases.
- (b) Despite the growing reliance on algorithmic solutions to enhance the hiring process, hiring algorithms remain vulnerable to bias. This study provides AI-powered solutions designed to mitigate such biases, thereby helping businesses improve workforce diversity.
- (c) We advocate collaboration between humans and AI to address algorithmic biases, contending that rectifying these biases requires humans to work alongside AI-powered tools.
2. Research Background
3. Research Motivation, Aims and Significance
3.1. Research Motivation
3.2. Research Aim
- To analyze case studies on AI in hiring. This entails exploring specific case studies, covering both successful implementations and instances of bias. By examining the real-world impact of AI on hiring, the study provides insights into the challenges and opportunities associated with AI-based recruitment.
- To evaluate the impact of algorithmic bias. The study assesses how algorithmic bias affects AI-driven hiring processes and its potential implications for organizations, helping to uncover the multifaceted effects of bias on organizational dynamics.
- To investigate bias mitigation strategies. The paper investigates AI techniques designed to mitigate algorithmic biases in hiring, empowering organizations to leverage AI-powered tools to foster diversity and inclusivity.
3.3. Research Significance
- Academic implications. The study moves beyond the prevalent focus on applying AI in hiring to the often-overlooked problem of algorithmic biases. By providing a comprehensive understanding of these biases through historical perspectives, case studies, and advanced techniques, the study seeks to fill a significant gap in existing research.
- Social implications. Addressing algorithmic biases in hiring gives all individuals access to employment opportunities irrespective of factors such as complexion, sex, religion, race, or social status. This is imperative for promoting equality and fostering an inclusive society.
- Economic implications. Unbiased hiring can lead to greater diversity within organizations using AI-based tools. This diversity positively influences organizational performance by fostering a culture of innovation and creativity.
- Ethical implications. The study provides insights into the emergence of biases in hiring and the strategies to rectify them. The study acts as a catalyst for responsible AI practices.
4. Methodology
4.1. Search Strategy
- “Artificial Intelligence-powered tools” OR “Artificial Intelligence systems”;
- “algorithmic bias” OR “algorithmic unfairness”;
- “transparency” OR “ethical”;
- “recruitment” OR “hiring”;
- (“Artificial Intelligence-powered tools” OR “Artificial Intelligence systems”) AND (“algorithmic bias” OR “algorithmic unfairness”);
- (“algorithmic bias” OR “algorithmic unfairness”) AND (“recruitment” OR “hiring”).
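The combined strings map directly onto simple Boolean logic. Purely as an illustration, the sketch below applies the two combined queries to a local list of candidate records (for example, exported titles and abstracts); the record fields and helper names are illustrative assumptions, not part of the published search protocol.

```python
# Minimal sketch (assumed record structure): applying the two combined
# Boolean queries above to locally stored titles/abstracts.
TOOL_TERMS = ["artificial intelligence-powered tools", "artificial intelligence systems"]
BIAS_TERMS = ["algorithmic bias", "algorithmic unfairness"]
HIRING_TERMS = ["recruitment", "hiring"]

def contains_any(text: str, phrases: list[str]) -> bool:
    """True if any phrase occurs in the text (case-insensitive substring match)."""
    text = text.lower()
    return any(phrase in text for phrase in phrases)

def matches_combined_queries(record: dict) -> bool:
    """(AI tools OR AI systems) AND (bias terms), or (bias terms) AND (recruitment OR hiring)."""
    text = f"{record.get('title', '')} {record.get('abstract', '')}"
    query_1 = contains_any(text, TOOL_TERMS) and contains_any(text, BIAS_TERMS)
    query_2 = contains_any(text, BIAS_TERMS) and contains_any(text, HIRING_TERMS)
    return query_1 or query_2

records = [
    {"title": "Algorithmic bias in recruitment", "abstract": "Artificial intelligence systems ..."},
    {"title": "Deep learning for vision", "abstract": "Convolutional networks ..."},
]
shortlist = [r for r in records if matches_combined_queries(r)]  # keeps only the first record
```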
4.2. Inclusion and Exclusion Criteria
4.3. Screening Process
5. Understanding Bias in Hiring
5.1. HR Bias in Ranking CVs
5.2. Non-Technical Solutions in Ranking CVs
6. Applications of AI in Hiring
7. Applications of AI in Eliminating Bias
7.1. AI Approaches to Bias Mitigation
7.2. Step-by-Step Approach to Algorithm Bias
7.3. Case Studies of AI Eliminating Biases
8. Limitations of AI in Eliminating Bias
Privacy and Ethical Implications of AI Systems
9. Comparing and Contrasting Studies
9.1. Comparing Research Articles
9.2. Contrasting Research Articles
9.3. AI Fairness and Accuracy in Hiring
10. Broader Implications
11. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
AI | Artificial Intelligence |
ATS | Applicant Tracking System |
CV | Curriculum Vitae |
HR | Human Resource |
IAT | Implicit Association Test |
ML | Machine Learning |
NLP | Natural Language Processing |
RoBERTa | Robustly optimised Bidirectional Encoder Representations from Transformers approach |
UK | United Kingdom |
US | United States |
Criteria | Inclusion | Exclusion |
---|---|---|
Publication type | Peer-reviewed journals | Sources that have not been peer-reviewed |
Publication date | Sources must be published between 2017 and 2023 | Sources published earlier than 2017 |
Content | Studies related to hiring algorithms, artificial intelligence, and how they impact the hiring process | Studies not related to hiring algorithms, artificial intelligence, and hiring |
Language | Studies published in English | Studies published in languages other than English and not having an English translation |
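The inclusion and exclusion criteria in the table above can be expressed as a simple record filter. The sketch below illustrates the logic only; the field names are assumptions and do not describe the authors' actual screening tooling.

```python
# Illustrative filter for the inclusion/exclusion criteria above.
# Record fields (peer_reviewed, year, language, title) are assumed for the sketch.
TOPIC_KEYWORDS = ("hiring", "recruitment", "hiring algorithm", "artificial intelligence")

def meets_inclusion_criteria(record: dict) -> bool:
    peer_reviewed = bool(record.get("peer_reviewed", False))
    in_date_range = 2017 <= record.get("year", 0) <= 2023
    in_english = record.get("language", "").lower() == "english"
    on_topic = any(k in record.get("title", "").lower() for k in TOPIC_KEYWORDS)
    return peer_reviewed and in_date_range and in_english and on_topic

records = [
    {"title": "Algorithmic bias in recruitment", "year": 2021,
     "language": "English", "peer_reviewed": True},
    {"title": "AI for breast cancer screening", "year": 2020,
     "language": "English", "peer_reviewed": True},
]
included = [r for r in records if meets_inclusion_criteria(r)]  # keeps only the first record
```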
No | Application | Description |
---|---|---|
1 | CV screening | Automating the CV screening process to identify the best candidate match |
2 | Personality and behavior assessment | Analyzing data from social media profiles and other online forums |
3 | Overcoming language barriers | Recognizing different languages, allowing hiring professionals to assess candidates from different parts of the world |
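To make the first application concrete, the sketch below ranks CV texts against a job description using TF-IDF cosine similarity. This is a generic, illustrative baseline rather than the matching method of any specific tool discussed in the review; similarity-based screening of this kind is exactly the sort of pipeline in which the biases examined in Sections 5 through 7 can arise.

```python
# Generic illustration of automated CV screening: rank CVs against a job
# description by TF-IDF cosine similarity (not any vendor's actual algorithm).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst with Python, SQL and dashboarding experience."
cvs = [
    "Built SQL pipelines and Python dashboards for retail analytics.",
    "Ten years of experience in civil engineering project management.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + cvs)

# Row 0 is the job description; rows 1..n are the CVs.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, cv in sorted(zip(scores, cvs), reverse=True):
    print(f"{score:.2f}  {cv}")
```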
Name | Core Ideas | Deficiencies |
---|---|---|