Much digital ink has been spilled on artificial intelligence taking over jobs, but what about AI shaking up the hiring process in the meantime?
In the age of digital transformation, Artificial Intelligence (AI) has swiftly become a cornerstone of organizational operations. Recruitment – a process that all organizations of any size will have to undertake at some point – is no exception.
However, the talent acquisition landscape is a bit of a minefield: with an average of over 250 applicants per corporate job opening, busy recruiters typically spend only 6-8 seconds looking at each CV. When the right people can make such a difference to a company’s culture and performance, an ineffective recruitment process is costly: companies lose time and money finding replacements for poor hires and undoing any damage they may have caused in the interim.
For recruiters, AI offers an exciting alternative to sifting through countless resumes, writing job descriptions, and managing a never-ending loop of daily admin chores. AI-powered tools and algorithms are changing, and in some cases replacing, the whole recruitment process, resulting in speedier hires and more efficient experiences for both candidates and recruiters. While this shift towards AI brings numerous benefits, it also raises critical questions about fairness, bias, and privacy.
We’ve previously looked at how businesses can avoid exposing their data when using large language models (LLMs). This time, let’s consider the wider implications of using AI to streamline their recruitment processes.
The AI recruitment revolution
HR professionals know just how time-consuming it is to recruit a new candidate. First, the job description needs to be written; this alone takes time, as the right people must be brought in to identify the role’s key tasks and responsibilities. It then needs to be approved internally before it is published on the relevant job-seeking platforms or shared with potential candidates. Once all the applications have been submitted, the recruiter needs to review and shortlist them before any interviews can even begin.
Enter AI, and with it a new, streamlined recruitment process. Already, around 85% of recruiters believe that AI is a useful technology that will replace some parts of the hiring process, and in many cases it already has. Back in 2019, a Unilever spokeswoman said that the company’s AI recruitment tool had saved over 100,000 hours and $1 million in global recruitment costs that year. It’s easy to see why: used to its full potential, AI can deliver significant benefits for busy recruiters needing to fill a vacant role.
1. Speedier candidate vetting
AI models can automate repetitive tasks such as resume screening and candidate matching. Instead of reading through hundreds of applications for a single vacancy, recruiters can feed them into an AI model, which identifies keywords that match the job description and the criteria they’re looking for. The model then automatically shortlists candidates based on how closely they align with those criteria. As a result, recruiters can focus on more strategic aspects of talent acquisition, or simply crack on with everything else on their growing to-do lists.
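The screening step described above can be sketched as a simple keyword-scoring pass. This is a hypothetical illustration only, not any vendor’s actual algorithm; the keyword list, threshold, and sample resumes are all assumptions invented for the example (real screening tools use far more sophisticated models).

```python
# Hypothetical sketch of keyword-based resume screening.
# Keywords, threshold, and data are invented for illustration.

def score_resume(resume_text: str, keywords: list[str]) -> float:
    """Fraction of job-description keywords found in the resume."""
    text = resume_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)

def shortlist(resumes: dict[str, str], keywords: list[str],
              threshold: float = 0.5) -> list[str]:
    """Return candidates whose score meets the threshold, best match first."""
    scored = {name: score_resume(text, keywords)
              for name, text in resumes.items()}
    return sorted((n for n, s in scored.items() if s >= threshold),
                  key=lambda n: scored[n], reverse=True)

keywords = ["python", "security", "incident response"]
resumes = {
    "Alice": "Python developer with incident response experience",
    "Bob": "Retail manager with customer service background",
}
print(shortlist(resumes, keywords))  # ['Alice'] - Bob misses the threshold
```

Even this toy version shows why keyword matching alone is fragile: a strong candidate who phrases their experience differently scores zero, a weakness the article returns to below.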
2. Enhanced candidate experience
Ever hesitated to apply for a job because the recruiter didn’t answer your question about the role? Well, no longer: AI-powered chatbots and virtual assistants provide immediate responses to candidates’ queries, ensuring a smoother and more engaging experience throughout the recruitment journey. Personalized interactions and prompt feedback contribute to a positive employer brand, increasing the number of people wanting to work for the company, and therefore increasing the talent pool from which the recruiters can select.
3. Data-driven decision making
AI tools can use predictive analytics to identify top candidates based on historical data and performance metrics. By analyzing patterns in successful hires, organizations can make more informed decisions based on previous recruitment performance.
4. Improved diversity and inclusion
Some AI platforms claim to mitigate unconscious bias in recruitment by anonymizing candidate information, focusing solely on qualifications and skills. By removing identifying information such as name, gender, or ethnicity, these tools may promote diversity and inclusivity in hiring.
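The anonymization these platforms describe can be sketched as stripping identifying fields from a candidate record before screening. The field names below are assumptions for illustration; real platforms must also redact names and other identifiers embedded in free text, which is a much harder problem.

```python
# Hypothetical sketch of candidate anonymization before screening.
# Field names are illustrative, not any real platform's schema.

IDENTIFYING_FIELDS = {"name", "gender", "ethnicity", "date_of_birth", "photo"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields removed,
    keeping only qualifications and skills."""
    return {k: v for k, v in candidate.items()
            if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "female",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
print(anonymize(candidate))
# {'skills': ['Python', 'SQL'], 'years_experience': 6}
```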
AI risks and challenges
Sold on the impressive list of benefits? Not so fast: involving AI in the hiring process also opens up a host of security risks and challenges that organizations must address if they are to use this new tool effectively and responsibly.
1. Algorithmic bias
If a model is trained on a historical dataset, historical biases may carry through to its output. For example, suppose a company uses AI to screen resumes for a doctor’s position, and 80% of the doctors in the model’s training data were male. The model may then be more likely to favor male applicants over female ones, even when they are equally suitable for the role.
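A toy example makes the mechanism concrete. The data below is invented, and the “model” is nothing more than a per-group historical hire rate, but it shows how a system that learns from skewed history will score equally qualified applicants differently:

```python
# Toy illustration of historical bias leaking into a model's output.
# The dataset is invented; each row is (gender, was_hired).

historical_hires = [
    ("male", True), ("male", True), ("male", True), ("male", True),
    ("male", False),
    ("female", True), ("female", False), ("female", False),
]

def hire_rate(data: list[tuple[str, bool]], gender: str) -> float:
    """Historical hire rate for one group."""
    rows = [hired for g, hired in data if g == gender]
    return sum(rows) / len(rows)

# A naive model that scores candidates by their group's historical
# hire rate would rank two equally qualified applicants differently:
print(hire_rate(historical_hires, "male"))    # 0.8
print(hire_rate(historical_hires, "female"))  # ~0.33
```

Nothing in the code mentions suitability for the role; the disparity comes entirely from the data the model was fed.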
As well as the internal consequence of overlooking suitable candidates, this can have significant financial and reputational repercussions. Consider the real-life case of a tutoring company that paid a $365,000 settlement after its AI automatically disqualified applicants based on age as a result of the data it was fed.
Additionally, AI may overvalue keywords and metrics when reviewing submitted resumes. Unlike a human, an AI system might not pick up on soft skills or other experience and character traits that would make someone a more desirable candidate for the role.
The automated process that AI models use may even favor applicants who have used AI to tailor their resumes to the posted job description. The result is a submission that looks perfect for the role on paper, yet is not an authentic or honest representation of the candidate’s suitability.
2. Lack of transparency
Many AI algorithms operate as black boxes, meaning the decision-making process is opaque and difficult to understand. This raises questions about accountability and the ability to challenge or correct biased outcomes. If companies don’t know that their AI’s input is biased or ‘poisoned’, how can they know to rectify it, let alone how to go about doing so? Opacity also gives sneaky candidates an opportunity to find loopholes in the system that push their resumes to the top of the list.
3. Data privacy and security
To use AI in recruitment, the models need to be fed vast amounts of personal data provided by candidates and the organization itself. Ensuring the confidentiality and security of this data with sufficient cybersecurity measures is paramount, both to protect the company’s and individuals’ privacy rights and to comply with regulations such as the General Data Protection Regulation (GDPR).
4. Human oversight and accountability
While AI can enhance efficiency, human oversight is still essential to prevent the misuse or misinterpretation of AI-generated insights. Organizations must establish clear accountability frameworks and mechanisms for addressing algorithmic errors or ethical breaches.
5. Legal and regulatory compliance
The use of AI in recruitment is subject to various legal and regulatory frameworks, including anti-discrimination laws and data protection regulations. Failure to comply with these requirements can result in legal repercussions and reputational damage.
How can your organization harness AI for recruitment in a safe and effective manner?
To realize the benefits of AI while mitigating associated risks, organizations must adopt a holistic approach to AI. This includes:
1. Ethical AI design
Prioritize fairness, transparency, and accountability in the development and deployment of AI across IT systems. This can be done by implementing measures such as bias detection algorithms and regular fairness assessments to identify and address discriminatory patterns.
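A fairness assessment of the kind mentioned above can be sketched by comparing selection rates across groups. The check below uses the “four-fifths” rule of thumb (the lowest group’s selection rate should be at least 80% of the highest group’s); that specific threshold and the sample data are assumptions chosen for illustration, and real assessments would look at many more metrics.

```python
# Hypothetical sketch of a fairness assessment: compare selection
# rates per group and apply a four-fifths (80%) rule of thumb.
# Data and threshold are illustrative assumptions.

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group from (group, selected) pairs."""
    totals: dict[str, int] = {}
    selected: dict[str, int] = {}
    for group, picked in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes: list[tuple[str, bool]]) -> bool:
    """True if the lowest group's rate is at least 80% of the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) >= 0.8 * max(rates.values())

outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(selection_rates(outcomes))   # A: ~0.67, B: ~0.33
print(passes_four_fifths(outcomes))  # False - B's rate is half of A's
```

Running a check like this on each shortlist the model produces is one concrete way to turn “regular fairness assessments” into an automated, auditable step.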
2. Continuous monitoring and evaluation
Regularly assess the performance of AI algorithms to identify and mitigate biases or errors. Establish feedback mechanisms for candidates to report concerns or provide input on their experiences with AI-driven recruitment processes. With this constant oversight, if something does go wrong with the AI system, it can be identified and rectified before negative consequences accumulate.
3. Insights from teams with mixed expertise
Encourage collaboration between HR professionals, data scientists, ethicists, and legal experts to ensure a multidisciplinary approach to AI operation. A range of expertise and insight overseeing the AI models and programs supports the development of comprehensive, robust AI policies and practices.
4. Education and training
Provide training to recruiters and hiring managers on the ethical use of AI in recruitment, including awareness of bias mitigation strategies and the importance of data privacy and security. Cultivate a culture of responsible AI adoption across the organization with transparency and guidelines on how best to use it.
5. Regulatory compliance
Stay ahead of evolving legal and regulatory requirements surrounding AI in recruitment and proactively adapt company policies and practices to ensure full compliance. Regularly engaging with regulatory authorities and industry associations will keep you informed about emerging risks and any loopholes in AI systems that cybercriminals might exploit.
To conclude…
AI presents immense opportunities to transform recruitment processes, enabling organizations to identify and attract top talent more effectively in less time. However, the widespread adoption of AI in recruitment also creates risks surrounding bias, privacy, and accountability. By engaging in the best practices listed above, organizations can navigate these challenges and leverage AI responsibly to achieve their hiring goals while upholding principles of fairness, inclusion, and authenticity.
by Imogen Byers, ESET