Can AI eliminate bias and promote diversity and inclusion?

AI-driven tools and technologies have a lot of potential to help you tackle bias and make your recruitment practices more inclusive, but it's important to remember that they are shaped by human input.

In the search for an inclusive workplace, organisations are relying on Artificial Intelligence more than ever. Given the inherent bias in human beings, intentional or otherwise, it is widely believed that AI can make decisions in areas such as candidate selection and the assessment of employee competencies more objectively than traditional systems.

But does it work that way, really? 

In evaluating the role of AI in the workplace as a means to promote diversity and inclusion (D&I), some critical questions arise: Can we trust AI to make the right decisions? If the AI system itself is flawed, how do we rectify it? And how do we remove the bias that creeps into AI?

Inherent bias in AI? Yes, it’s real…

When Amazon introduced an AI-based recruitment tool, it trained the system on applications submitted over the previous 10 years, most of which came from men. The resulting model was heavily biased in favour of male candidates; for example, it downgraded applications containing terms like "women's" or the names of women's colleges. Amazon eventually had to discontinue the tool after the bias came to light.

AI is not 'intelligent' in the real sense; it is essentially an efficient way of categorising data. More specifically, it is built to analyse patterns in data. So, if a dataset is already biased, AI will absorb and reproduce those biases. Data accuracy, in fact, is no guarantee of impartiality.
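
To see how this plays out, here is a minimal, hypothetical sketch in Python using synthetic data and scikit-learn (the data, numbers and scenario are illustrative, not drawn from any real vendor's system): a model trained on historical decisions that favoured one group simply learns to reproduce that favouritism.

```python
# Hypothetical sketch: a classifier trained on skewed "historical" hiring data
# reproduces the skew, even though the two groups have identical skill levels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)            # protected attribute (0 or 1)
skill = rng.normal(0, 1, size=n)              # genuinely job-relevant signal
# Past decisions favoured group 0 regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([skill, group])           # the model can "see" the group
model = LogisticRegression().fit(X, hired)

# At the same (average) skill level, the model recommends group 0 far more often.
for g in (0, 1):
    same_skill = np.column_stack([np.zeros(1000), np.full(1000, g)])
    print(f"predicted hire rate at average skill, group {g}: "
          f"{model.predict(same_skill).mean():.2f}")
```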

AI systems contain bias for two main reasons. The first is cognitive bias – unintended or unconscious errors in thinking that affect individuals' judgements and decisions, and that enter the system through how data is obtained, how algorithms are designed, and how AI outputs are interpreted. The second is incomplete data, which may not be fully representative of the whole population.

The biggest drawback is the lack of human judgement. If an organisation intends to diversify its workforce, AI-based hiring may not serve the purpose. Candidates with atypical work experience could be the best fit because of their individual personality, interests, character and work ethic. AI, devoid of these human attributes, will miss such traits.

The remedy for AI bias? AI itself…

There are multiple ways to reduce AI bias. From the perspective of AI development, these range from designing applications with fairness in mind, to collecting data in a relatively unbiased way, to designing mathematical algorithms that minimise bias. While it's easy to blame the science of AI for any bias, it should be noted that such shortcomings emerge only because AI solutions are developed, created and refined by human beings.
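
As a hedged illustration of one such technique, the sketch below implements a simple "reweighing" idea (the function name, toy data and setup are illustrative, not taken from any specific fairness toolkit): the protected attribute is excluded from the model's inputs and training samples are weighted so that every group-and-outcome combination carries equal influence.

```python
# Illustrative "reweighing" sketch: the protected attribute is used only to
# compute sample weights, never as a model input.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(group, label):
    """Per-sample weights that give each (group, outcome) cell equal influence."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            if cell.any():
                expected = (group == g).mean() * (label == y).mean()
                weights[cell] = expected / cell.mean()
    return weights

# Toy usage with made-up data: train only on the job-relevant feature and let
# the weights counteract the historical skew.
skill = np.array([[2.1], [1.8], [0.4], [1.9], [0.3], [0.5]])  # job-relevant feature
group = np.array([0, 0, 0, 1, 1, 1])                          # protected attribute
hired = np.array([1, 1, 0, 1, 0, 0])                          # historical decisions
model = LogisticRegression().fit(skill, hired, sample_weight=reweigh(group, hired))
```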

And AI itself may be a key part of the remedy for AI bias. Deep neural networks – AI algorithms that emulate the human ability to spot patterns in data – are especially helpful in uncovering hidden preferences. For instance, AI-based recruitment chatbots like Mya can help reveal whether candidates with lighter skin tones are being favoured. AI is even helping to identify biases in past hiring decisions.

It also helps to develop "algorithmic hygiene". By conducting regular AI audits and testing, we can check that the data used in algorithmic decision-making is equitable. This is especially valuable in the modern workplace, where organisations want to use AI not only to enable an objective recruitment process, but also to ensure an equitable system of employee training, rewards and promotions that builds real diversity and inclusiveness.
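
As one illustration of what such an audit can look like in practice, the sketch below (with hypothetical data and a hypothetical function name) computes selection rates by group from a model's recommendations and flags adverse impact using the widely cited four-fifths rule.

```python
# Sketch of a routine "algorithmic hygiene" check: compare selection rates
# across groups and flag adverse impact when the ratio of the lowest to the
# highest rate falls below the commonly used four-fifths (80%) threshold.
import numpy as np

def adverse_impact_ratio(predictions, group):
    """Ratio of the lowest group selection rate to the highest."""
    predictions, group = np.asarray(predictions), np.asarray(group)
    rates = [predictions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Example audit on hypothetical screening recommendations (1 = shortlisted).
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
grp   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
ratio = adverse_impact_ratio(preds, grp)
print(f"adverse impact ratio: {ratio:.2f}",
      "(review needed)" if ratio < 0.8 else "(within threshold)")
```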

Inclusive AI teams drive inclusive AI … and vice versa

In the larger context of technology being essentially non-diverse, it's important to have more diverse design teams. A recent study found that less than 6 percent of Google employees were Latino and only 3.3 percent were Black, and the representation of women in AI – and in tech teams overall – remains strikingly low. Just as more diverse, inclusive and equitable organisations tend to outperform in business, we need to build inclusivity into AI and then let AI drive inclusivity.

Even while designing AI systems, less diverse and inclusive teams can suffer from blind spots that hold them back from being completely objective. Diversity of thought brings completeness to any design, and AI is no exception; the difference is that when you bring that diversity to designing AI systems, AI rewards you by driving future diversity and inclusion.

Microsoft set up its "Fairness, Accountability, Transparency, and Ethics in AI" (FATE) team to uncover biases creeping into the data used by its AI systems. The company also built inclusive design teams to better address the needs of diverse people, including women and those with disabilities. That's the way to go!

AI has already helped us make fairer decisions for a more diverse and inclusive world. Ultimately, though, we need to combine the human factor with technology to arrive at the best decisions. If developed and designed objectively, AI can help under-represented people break through professional ceilings created by bias, and help organisations reap the benefits of a diverse workforce.

That could well and truly lead us to the ideal workplace of the future!
