In 2025, AI bias persists in HR tech

From hiring to performance reviews, discrimination persists in digital form.
The promise of AI in human resources (HR) has long been efficiency: streamlined hiring, quicker candidate screening, and data-driven performance management. But cracks in that promise are becoming more visible. The very systems designed to cut through human bias are, paradoxically, codifying and reproducing it. Despite major strides in machine learning and natural language processing, AI bias remains a thorn in the side of HR technology.
A 2024 study from the University of Washington Information School revealed uncomfortable findings about the state of AI resume screening. It found that massive text embedding (MTE) models – a sophisticated class of large language models (LLMs) – disproportionately favoured names associated with white and male candidates. In 85.1% of resume comparisons, names perceived as white were ranked higher, while male-associated names prevailed in 51.9% of cases. Most damning of all, the systems never once preferred names associated with black men over those associated with white men.
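To make the mechanism concrete, here is a minimal sketch of how an embedding-based screener ranks resumes against a job description. The toy embed() function merely stands in for a real MTE model, and rank_resumes, the names, and the resume text are all hypothetical; the point is that when two resumes differ only in the candidate’s name, any gap in score is driven by the name alone.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a massive text embedding (MTE) model: hashes
    each token into a fixed-size vector. A real screener would call an
    LLM embedding model here instead."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def rank_resumes(job_description: str, resumes: dict[str, str]) -> list[tuple[str, float]]:
    """Rank resumes by cosine similarity to the job description -
    the retrieval pattern the study probed by varying only names."""
    jd = embed(job_description)
    scores = {name: float(embed(text) @ jd) for name, text in resumes.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Two resumes identical except for the candidate's name: any score
# gap is driven purely by the name tokens the embedding encodes.
base = "software engineer, 5 years python, distributed systems"
ranking = rank_resumes(
    "senior python engineer",
    {"Candidate A": "John Smith " + base,
     "Candidate B": "Lakisha Washington " + base},
)
print(ranking)
```

Swap the toy embed() for a production embedding model and the same ranking loop – and the same vulnerability to name tokens – carries over unchanged.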
These systems, trained on vast historical data, were parroting the patterns of systemic discrimination deeply woven into past hiring decisions. They weren’t breaking biases; they were reproducing them.
When it came to intersectionality – the compound effects of multiple marginalised identities – the results were even more concerning. The study confirmed three core hypotheses, chief among them that intersectional biases are not simply additive but operate on their own terms. In some scenarios, resumes associated with black men were disadvantaged in 100% of comparisons. The message was clear: AI was replicating discrimination with cold precision.
Details make the difference (and the damage)
It wasn’t just the names that tipped the algorithm’s scales. The study also uncovered that even low-level textual features – resume length, formatting, and word choice – had significant effects on outcomes. Shorter documents, for instance, produced higher levels of bias: a 2.5% increase in race-related bias and a whopping 22.2% uptick in gender bias when title-only documents were compared with full resumes. These subtleties, largely invisible to jobseekers, wielded disproportionate influence over outcomes.
Even strategies like removing names – long considered a standard anti-bias tactic – were revealed to be insufficient. Other elements in the text can subtly encode identity, acting as proxies for race, gender, or socioeconomic background. This creates an HR minefield where even good intentions can backfire.
In response, jobseekers have entered an AI arms race of their own. Many now use AI tools to write and optimise resumes – sometimes embellishing achievements or skills beyond reality. Recruiters, already pressed for time, are left trying to separate wheat from artificially generated chaff. But this isn’t a level playing field. Those with access to premium AI services have an edge. Candidates from lower-income backgrounds, who may rely on free tools or none at all, risk being left behind – yet another layer of bias in a process meant to be meritocratic.
Automation in performance management: promising but dangerous
Bias isn't limited to hiring. AI-powered performance management systems are increasingly used to predict employee potential and track performance. While the intention is to boost productivity and identify high-performing employees, the execution can fall short, especially for employees who don't fit traditional patterns of success.
Neurodivergent employees or those with disabilities may be rated unfairly by AI systems trained on normative behavioural data. Worse still, these algorithms rarely offer a clear view of how assessments are made, leaving employees little room to understand, or challenge, the results.
Regulatory response and best practices
Fortunately, regulators are starting to catch up. The United States is one example. New York City's Local Law 144 now requires employers to annually audit AI-based hiring tools and inform candidates when such tools are used. Meanwhile, states such as Colorado and Illinois have passed laws requiring AI transparency and candidate notification. Across the Atlantic, the Trades Union Congress has pressured the British government to regulate AI in the workplace before it gets out of hand.
The checklist for human resources departments includes the following strategies:
- Conduct regular bias audits of AI systems (a minimal audit sketch follows this list).
- Maintain clear and up-to-date information on the use of AI in recruitment.
- Ensure human judgment is part of all final decisions.
- Offer candidates the option to opt out of AI assessments and request explanations when decisions are made by machines.
- Create internal AI governance teams.
- Limit the use of company data by third-party providers to train unrelated models.
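For the first item on that checklist, a bias audit can start with something as simple as a disparate-impact check: each group’s selection rate divided by the highest group’s rate, the impact-ratio style of metric that Local Law 144 audits report. Below is a minimal sketch under hypothetical assumptions – impact_ratios, the group labels, and the outcome data are all invented for illustration, and a real audit would run on actual screening logs across the legally specified categories.

```python
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute each group's selection rate relative to the
    best-performing group's rate. outcomes: (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values()) or 1.0  # guard against all-zero selection
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (demographic group, advanced to interview?)
data = ([("group_a", True)] * 40 + [("group_a", False)] * 60
        + [("group_b", True)] * 20 + [("group_b", False)] * 80)
for group, ratio in impact_ratios(data).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # EEOC four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

The 0.8 cut-off follows the EEOC’s four-fifths rule of thumb: it flags a disparity for human review rather than proving discrimination.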
A data problem or a design problem?
Many of the problems stem from flawed training data. When AI is trained on historically biased resumes or assessments, it simply replicates those patterns at scale. For example, a model trained on a decade of hires from a male-dominated tech company will likely favour male candidates, no matter how sophisticated the algorithm.
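A toy simulation illustrates the dynamic. Everything below is synthetic – the gender, skill, and proxy variables, and the biased hiring labels – but it shows how a model fitted to tilted historical decisions learns to lean on a “name-blind” feature that correlates with group membership, and then scores two equally skilled candidates differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic "decade of hires": skill is what should matter, but
# historical decisions also favoured one group (gender == 1).
gender = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0  # biased labels

# A "name-blind" feature that still correlates with gender, e.g. a
# hobby or club mention acting as a proxy for identity.
proxy = gender * 0.9 + rng.normal(0, 0.3, n)

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Two equally skilled candidates, differing only in the proxy value:
candidates = np.array([[1.0, 0.9], [1.0, 0.0]])
print(model.predict_proba(candidates)[:, 1])  # favoured-proxy candidate scores higher
```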
But the problem isn't just the data. It's also the design philosophy. Too many AI tools focus on technical definitions of equality without addressing deeper structural inequalities. Creating ethical AI for HR requires rethinking what “merit” and “suitability” really mean.
So, the challenge for HR leaders is not simply to choose the right software. It is to rebuild trust in hiring and performance systems. That means collaborating with ethicists, technologists, regulators, and the employees who are themselves affected by these systems.
Some companies are already experimenting with algorithmic designs that prioritise fairness, such as introducing exploration bonuses to favour underrepresented candidates. Others are incorporating ethical reviews into AI development, ensuring that diverse voices are heard from design to implementation.
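As one possible reading of “exploration bonuses” – an interpretation for illustration, not a description of any specific vendor’s method – a ranker can add an upper-confidence-bound-style bonus that grows for groups the system has rarely selected, pushing it to gather evidence about underrepresented candidates instead of endlessly exploiting historical patterns. The function and figures below are hypothetical.

```python
import math
from collections import Counter

def score_with_exploration(base_score: float, group: str,
                           selections: Counter, total: int,
                           c: float = 0.5) -> float:
    """Add a UCB-style exploration bonus that grows when a group has
    rarely been selected, nudging the ranker toward underrepresented
    candidates until more evidence is gathered."""
    seen = selections[group] + 1  # +1 avoids division by zero
    bonus = c * math.sqrt(math.log(total + 1) / seen)
    return base_score + bonus

selections = Counter({"group_a": 90, "group_b": 10})  # past selections
total = sum(selections.values())
for group, base in [("group_a", 0.72), ("group_b", 0.70)]:
    print(group, round(score_with_exploration(base, group, selections, total), 3))
```

In this sketch, the candidate from the rarely selected group overtakes a rival with a slightly higher base score; as the system selects more such candidates, the bonus decays and base scores dominate again.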