AI bias is already harming businesses and there’s significant appetite for more regulation to help counter the problem.
The findings come from the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. The report involved responses from over 350 organisations across industries.
Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum, said:
“DataRobot’s research shows what many in the artificial intelligence field have long known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long.
The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”
Just over half (54%) of respondents have “deep concerns” around the risk of AI bias, while a much higher percentage (81%) want more government regulation to prevent it.
Given the still relatively limited adoption of AI across most organisations at this stage, a concerning number are already reporting harm from bias.
Over a third (36%) of organisations experienced challenges or a direct negative business impact from AI bias in their algorithms. This includes:
Lost revenue (62%)
Lost customers (61%)
Lost employees (43%)
Incurred legal fees due to a lawsuit or legal action (35%)
Damaged brand reputation/media backlash (6%)
Ted Kwartler, VP of Trusted AI at DataRobot, commented:
“The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place.
Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted, and explainable.”
The report identified four key challenges behind organisations’ struggles to counter bias:
Understanding why an AI made a specific decision
Comprehending patterns between input values and AI decisions
Developing trustworthy algorithms
Determining what data is used to train AI
Fortunately, a growing number of solutions are becoming available to help counter or reduce AI bias as the industry matures.
“The market for responsible AI solutions will double in 2022,” wrote Forrester VP and Principal Analyst Brandon Purcell in his Predictions 2022: Artificial Intelligence (paywall) report.
“Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.”