Who’s Keeping an Eye on Your AI?
The global artificial intelligence (AI) market has gained rapid traction over the past few years. Consider this growing behemoth by the numbers:
The global AI market was valued at over $87 billion in 2021 and is expected to surpass $1.5 trillion by 2030.i
The number of AI startups has grown 14 times over the past two decades.ii
Today, 77% of the devices we use incorporate some form of AI.iii
Yes, the AI business is growing, but so are related ethical concerns. One of the greatest of these concerns is AI bias. Let’s take a look at two cases.
Who’s Really Sick?
Optum, a subsidiary of UnitedHealth Group, created an AI system that would spot high-risk patients who should receive follow-up care. The AI, however, was alerting medical professionals to pay more attention to white patients than to black ones. Specifically, only 18% of the people identified by the AI were black, while 82% were white. Based on who was actually the sickest, the split should have been 46% black to 53% white.iv
Let’s be clear: the data scientists and executives involved in developing the algorithm didn’t set out to discriminate against black patients. They did, however, make a significant mistake by training the AI on historical healthcare costs. Because less money has historically been spent caring for black patients than for white ones, the algorithm mistakenly inferred that black patients needed less help, producing biased output for a system applied to at least 100 million patients.v
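The mechanism is easy to demonstrate. Below is a minimal, hypothetical sketch in Python (with invented numbers; this is not Optum’s actual model or data) of how training on cost as a proxy for need reproduces a historical spending gap:

```python
# Hypothetical sketch of cost-as-proxy bias; all numbers are invented
# for illustration and are not Optum's actual data.

# Two groups with identical true illness severity (1..10), but group B
# has historically received less spending per unit of severity.
patients = []
for severity in range(1, 11):
    patients.append({"group": "A", "severity": severity, "cost": severity * 100})
    patients.append({"group": "B", "severity": severity, "cost": severity * 60})

# A naive "risk model": flag patients whose historical cost is high,
# treating cost as a stand-in for medical need.
flagged = [p for p in patients if p["cost"] > 500]

# Ground truth: the sickest patients (severity > 5) split evenly.
truly_sick = [p for p in patients if p["severity"] > 5]
share_sick_b = sum(p["group"] == "B" for p in truly_sick) / len(truly_sick)
share_flagged_b = sum(p["group"] == "B" for p in flagged) / len(flagged)

print(f"Group B share of the truly sick:   {share_sick_b:.0%}")   # 50%
print(f"Group B share of flagged patients: {share_flagged_b:.0%}")  # ~29%
```

Even though medical need is identical in this toy setup, the cost-trained flag under-selects group B, which is the same shape of error the Optum study describes.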
Arrested by Proxy
Northpointe developed an algorithm called COMPAS, which was designed to predict the likelihood that Broward County, Florida, defendants would commit additional crimes within two years of arrest. The program was meant to help judges determine bail and sentencing, but when ProPublica conducted an audit of the AI, it found that the scores were quite unreliable. In fact, only 20% of the people who were predicted to commit violent offenses actually did so. To make matters worse, the algorithm was also twice as likely to erroneously flag black defendants as future criminals than white ones.vi
Reid Blackman, founder and CEO of Virtue, an ethical risk consultancy, writes in the Harvard Business Review that this case also stems from a flawed design choice: the developers used arrest data as a proxy for the actual incidence of crime. Blackman argues that black and white populations may be committing crimes at the same rate, but if black populations are policed more heavily than white ones, they will have a higher arrest rate despite the equal crime rates.
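Blackman’s proxy argument can be made concrete with a small Python sketch (the rates below are invented for illustration; this is not COMPAS’s actual method or data): if two populations offend at the same rate but are policed at different intensities, the arrest labels a model trains on will differ anyway.

```python
# Toy illustration of arrest data as a biased proxy for crime.
# All rates below are invented for demonstration.

crime_rate = 0.05  # identical underlying offense rate in both populations

# Policing intensity: the fraction of offenses that lead to an arrest.
policing_intensity = {"population A": 0.2, "population B": 0.6}

# Observed arrest rates -- the labels a model would actually train on.
arrest_rates = {pop: crime_rate * hit for pop, hit in policing_intensity.items()}

for pop, rate in arrest_rates.items():
    print(f"{pop}: arrest rate = {rate:.1%}")
# population A: arrest rate = 1.0%
# population B: arrest rate = 3.0%
```

A model trained on these labels would conclude that population B is three times riskier, even though the underlying behavior in this toy example is identical.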
AI Monitoring
Companies that want to take advantage of the explosion in AI use can do so more safely by hiring AI ethicists, whose job it is to monitor the dark side of algorithms and ensure a corporate culture that embraces safeguards. According to Vox, it’s important for these AI ethicists to work closely with AI scientists as new algorithms are developed or acquired, rather than be siloed or isolated.vii
Some companies may also choose to form AI committees composed of lawyers, business strategists, technologists, ethics professionals, and subject matter experts. These committees can identify and mitigate risks associated with AI products that are either developed inside the company or purchased. It’s important, however, that such committees have the authority to review all programs and the power to veto proposals. Otherwise, their impact will be limited.viii
Sources
i “Artificial Intelligence Market Size to Surpass Around US$ 1,597.1 Bn By 2030,” GlobeNewswire, April 19, 2022,
ii “101 Artificial Intelligence Statistics,” TechJury, June 2, 2022, https://techjury.net/blog/ai-statistics/#gref.
iii “101 Artificial Intelligence Statistics,” TechJury, June 2, 2022, https://techjury.net/blog/ai-statistics/#gref.
iv R. Blackman, “Why You Need an AI Ethics Committee,” Harvard Business Review, July-August 2022, https://hbr.org/2022/07/why-you-need-an-ai-ethics-committee.
v S. Morse, “Study finds racial bias in Optum algorithm,” Healthcare Finance, October 25, 2019, https://www.healthcarefinancenews.com/news/study-finds-racial-bias-optum-algorithm; R. Blackman, “Why You Need an AI Ethics Committee,” Harvard Business Review, July-August 2022, https://hbr.org/2022/07/why-you-need-an-ai-ethics-committee.
vi R. Blackman, “Why You Need an AI Ethics Committee,” Harvard Business Review, July-August 2022, https://hbr.org/2022/07/why-you-need-an-ai-ethics-committee.
vii K. Piper, “There are two factions working to prevent AI dangers. Here’s why they’re deeply divided,” Vox, August
viii R. Blackman, “Why You Need an AI Ethics Committee,” Harvard Business Review, July-August 2022, https://hbr.org/2022/07/why-you-need-an-ai-ethics-committee.