How to Mitigate the Risk of Using AI in the Hiring Process
The rapid advancement of AI has transformed many aspects of our lives, including the way businesses operate. One area AI has significantly affected is recruitment. From streamlining workflows to transforming team dynamics, AI is reshaping industries and driving a shift in hiring practices. This article explores AI's influence on the hiring process, highlighting its benefits and potential risks, with a focus on the implications for discrimination and privacy laws.
The infusion of AI into workplaces has led to enhanced efficiency, informed decision-making, and improved employee experiences. From finance to healthcare, education to manufacturing, AI has made its mark across diverse industries. Its influence on recruitment, in particular, has been transformative. By automating repetitive tasks, predicting candidate success, and optimizing talent acquisition strategies, AI has become an invaluable tool for recruiters.
AI’s Role in the Hiring Process
AI-driven tools have reshaped how recruiters identify, assess, and engage potential candidates. Organizations now leverage AI-powered algorithms and tools to efficiently screen resumes, analyze candidate data, reduce unconscious bias, attract diverse candidates, and match and rank candidates against open roles. By automating routine tasks, recruiters can save time and effort, allowing them to focus on higher-value activities like building relationships and engaging candidates.
The benefits of AI tools are multi-faceted. For instance, natural language processing (NLP) algorithms can analyze resumes to match qualifications with job descriptions, shortlisting the most suitable applicants. Chatbots and virtual assistants can engage with candidates, answering their queries and providing real-time updates. Additionally, predictive analytics can help recruiters identify the most promising candidates based on historical data, leading to more informed hiring decisions.
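To make the matching idea concrete, here is a minimal sketch of resume-to-job matching using term-frequency vectors and cosine similarity. This is an illustration only, not any vendor's actual algorithm: production NLP systems use far richer language models, and the job text, resumes, and function names below are hypothetical.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two token-count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def match_score(resume, job_description):
    """Score how closely a resume's wording overlaps the job description."""
    return cosine_similarity(Counter(tokenize(resume)),
                             Counter(tokenize(job_description)))

# Hypothetical job posting and candidate resumes.
job = "Seeking a data analyst with SQL, Python, and reporting experience."
resumes = {
    "A": "Data analyst experienced in SQL, Python, and dashboard reporting.",
    "B": "Retail manager with ten years of customer service leadership.",
}

# Rank candidates by descending similarity to the job description.
ranked = sorted(resumes, key=lambda k: match_score(resumes[k], job),
                reverse=True)
```

Even this toy example shows why audits matter: the score rewards surface wording, so a qualified candidate who phrases experience differently can rank below a weaker one who echoes the posting.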
Other examples of AI recruitment tools include:
- Sourcing tools: These tools leverage machine learning algorithms to automate top-of-funnel tasks and find candidates who match certain parameters such as job titles, skills, keywords, and locations.
- Resume screening tools: Employing machine learning algorithms, resume screening tools analyze resumes and job applications to identify the most qualified candidates for a particular role.
- Video interview tools: Candidates complete pre-recorded or live interviews online. The videos are then scored or evaluated by an algorithm, which assesses the candidates' communication skills, body language, and other factors.
- Systems that rank candidates: These systems use algorithms to rank candidates based on their suitability for a particular position or how well they meet specific criteria. The systems may use data from resumes, applications, interviews, or other sources to generate a ranking.
Legal Risks Associated with AI Adoption in the Hiring Process
While AI offers significant advantages in the hiring process, it also raises legitimate legal concerns. Discrimination and privacy laws, as well as new regulations, come into play when AI systems are used to evaluate candidates. One of the primary risks is algorithmic bias, where the AI model inadvertently favors or disfavors certain groups of candidates, leading to unfair treatment. Furthermore, privacy laws such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) require organizations to handle personal data responsibly. AI systems that process candidate information must adhere to these regulations to prevent unauthorized data usage or breaches.
At the state and local level, new laws and regulations addressing the use of AI tools in the hiring process are emerging and being enforced, with notable examples including:
- New York City: As of July 5, 2023, NYC began enforcing a law regulating automated employment decision tools (AEDTs). Employers using these tools must have an independent bias audit conducted; responsibility for the audit lies with the employer, and the results must be published on the employer's website before the tool is put into use. The audit examines disparate impact on individuals in Equal Employment Opportunity (EEO) categories, and employers must also notify applicants that an AI tool is being used.
- Illinois: The Artificial Intelligence Video Interview Act, effective from January 2020, mandates that employers using AI to analyze applicant-submitted videos must notify applicants about AI usage, provide detailed information on AI functionality and evaluation criteria, obtain applicant consent for AI evaluation, and collect and report demographic data.
- Maryland: A law passed in May 2020 in Maryland prohibits the use of "facial recognition service" to create "facial templates" during applicant interviews unless applicants provide consent by signing a waiver.
- California: In California, employers using, administering, or creating AI tools that impact applicants or employees can potentially face liability under the Fair Employment and Housing Act (FEHA). However, this liability can be mitigated if the selection criteria are proven to be job-related for the position and consistent with business necessity.
In addition to these regulations, there have been lawsuits involving AI tools in the hiring process:
- In August 2023, an English tutoring company settled a lawsuit with the EEOC that alleged its AI-powered recruitment software was programmed to automatically reject female applicants aged 55 or older and male applicants aged 60 or older.
- A lawsuit involving Workday's AI screening tools emerged in February 2023. Many companies use Workday's platform to screen and preselect applications; in this lawsuit, the plaintiff accused Workday of facilitating discrimination on the basis of race, age, and disability.
- In November 2019, the Electronic Privacy Information Center (EPIC) filed a complaint against HireVue with the FTC, alleging that HireVue was engaging in unfair and deceptive practices by using facial recognition in its hiring assessment software. HireVue later removed facial recognition from its software. The complaint also "alleged that HireVue's claims regarding measuring cognitive ability, psychological traits, emotional intelligence, and social aptitudes were unproven, invasive, and prone to bias."
These cases underscore the importance of vigilance and compliance with legal requirements when utilizing AI tools in the hiring process.
Ensuring Fairness and Compliance
Best Practices to Mitigate Legal Risks
When evaluating AI vendors for recruitment purposes, it's essential to take deliberate steps to ensure that the system avoids disparate impact on protected classes. To achieve fairness and compliance, consider the following best practices for mitigating legal risk:
- Thorough Vendor Assessment: Begin by conducting a comprehensive assessment of the vendor's system, ensuring transparency and auditability. While AI vendors often highlight their ability to minimize unconscious bias, inherent bias can persist in training data. For instance, Amazon's AI tool once downgraded resumes containing terms related to women. Evaluate the vendor's track record in addressing bias and compliance concerns, and inquire about their data handling practices, decision-making process, and training data.
Ask questions about how the AI tool works, such as:
- How frequently does the algorithm change?
- What data was the tool trained on? Ensure that the AI model has been trained on a diverse dataset that accurately represents the demographics of the candidate pool.
- How does the tool keep the AI system's training data up-to-date and reflective of the evolving candidate landscape?
- What are the AI system's decision-making processes? How does the system rank and select candidates based on different criteria?
- Bias Detection and Mitigation: Choose tools that offer built-in bias detection and mitigation mechanisms, ensuring a fair evaluation of all candidates.
- Regular Audits: Conduct periodic audits of the AI system to identify and rectify bias that may have emerged over time.
- Human Oversight: Integrate a layer of human oversight into the AI-powered hiring process. This ensures that algorithmic decisions can be reviewed and challenged when necessary, preventing overreliance on AI and preserving the value of discretionary human judgment.
- Continuous Monitoring: Implement mechanisms to continuously monitor the AI system's performance and its impact on candidate selection. The monitoring process should be ongoing, not just a one-time assessment. Regularly revisit the system's performance and adapt strategies as needed.
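The disparate-impact checks behind the audit and monitoring practices above often start from the EEOC's "four-fifths rule" of thumb: a group whose selection rate falls below 80% of the highest group's rate is a common screening signal of possible adverse impact. The sketch below illustrates that calculation only; the group labels and counts are invented, and a real bias audit also involves statistical significance testing and legal review.

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants if applicants else 0.0

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate.

    outcomes maps group label -> (selected, applicants). Under the EEOC
    four-fifths guideline, a ratio below 0.8 is a common screening
    signal of possible disparate impact.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical audit data: (number selected, number of applicants).
outcomes = {"group_a": (48, 80), "group_b": (12, 40)}

ratios = impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here group_a is selected at a 60% rate and group_b at 30%, giving group_b an impact ratio of 0.5, well under the 0.8 threshold, so a monitoring pipeline would flag it for human review.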
Conclusion
AI's integration into the recruitment landscape has revolutionized the hiring process, offering efficiency, consistency, and, when carefully governed, fairer decisions. From automating routine tasks to reducing human biases, AI tools are transforming how organizations identify and select candidates. While the benefits are significant, organizations must also be vigilant about potential legal risks and ensure that their AI systems adhere to discrimination and privacy laws. By adopting a comprehensive approach to AI implementation, recruiters can harness its power to create a more inclusive and effective hiring process.
Source: https://www.capitalpersonnel.com.au/blog

