Are AI investors shorting Black lives?

Posted on February 11, 2021 by admin

Artificial intelligence often doesn’t work the same for Black people as it does for white people. Sometimes it’s a matter of vastly different user experiences, like when voice assistants struggle to understand words from Black voices. Other times, such as when cancer detection systems don’t account for race, it’s a matter of life and death.

So whose fault is it?

Setting aside intentionally malicious uses of AI software, such as facial recognition and crime prediction systems for law enforcement, we can assume the problem is with bias.

When we think about bias in AI, we’re usually reminded of incidents such as Google’s algorithm mislabeling images of Black people as animals or Amazon’s Rekognition system misidentifying several sitting Black members of US Congress as criminals.

But bias isn’t just an obviously racist idea hidden inside an algorithm. It usually manifests unintentionally. It’s a safe bet that, barring sabotage, the people in Amazon’s AI department aren’t trying to build racist facial recognition software. But they do, and it took the company’s leadership far too long to admit it.

Amazon argues that its software works the same for all faces when users set it to the proper threshold for accuracy. Unfortunately, the higher the accuracy threshold is set in a facial recognition system, the lower the odds the system will match faces in the wild with faces in a database.

Cops use these systems set at a threshold low enough to get a hit when they scan a face, even if that means setting it lower than Amazon’s peer-reviewed parameters for minimum acceptable accuracy.
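The trade-off described above can be sketched with a toy example. The confidence scores below are invented for illustration; real face-matching systems return a similarity score for each candidate face in the database, and the operator chooses the cutoff.

```python
# Toy illustration of the confidence-threshold trade-off in face matching.
# The scores are hypothetical; lowering the threshold yields more "hits,"
# including more false matches.
scores = [0.99, 0.97, 0.92, 0.85, 0.81, 0.74]

def matches(scores, threshold):
    """Return the candidate scores that clear the threshold."""
    return [s for s in scores if s >= threshold]

strict = matches(scores, 0.99)      # recommended-style strict setting: 1 hit
permissive = matches(scores, 0.80)  # permissive setting: 5 hits

print(len(strict), len(permissive))
```

Set strictly, the system rarely matches anyone; set permissively, it nearly always finds someone, which is exactly the incentive the article describes.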

But we already knew facial recognition was inherently biased against Black faces. And we know that cops in the US and other nations still use it, which means our governments are funding the research on the front end and purchasing the product on the back end.

This means that, under the current status quo, false arrests of Black people are treated as an acceptable risk as long as the system produces some valid arrests too. That’s a shitty business model.

Basically, the rules of engagement in the global business world dictate that you can’t build a car that’s been proven to be less safe for Black people. But you can program a car with a computer vision system that’s been proven less reliable at recognizing Black pedestrians than white ones and regulators won’t bat an eye.

The question is why? And the answer’s simple: because it makes money.

Even when every human in the loop has good intentions, bias can manifest at an unstoppable scale in almost any AI project that deals with data related to humans.

Google and other companies have released AI-powered mammogram screening systems that don’t work as well on Black breasts as white ones. Think about that for a second.

The developers, doctors, and researchers who worked on those programs almost certainly did so with the best interests of their clients, patients, and the general public at heart. Let’s assume we all really hate cancer. But it still works better for white people.

And that’s because the threshold for commercialization in the artificial intelligence community is set far too low all the way around. We need to invest heavily in cancer research, but we don’t need to commercialize biased AI: research and business are two different things. 

The doctor using a cancer screening system has to trust the marketing and sales team from the company selling it. The sales and marketing team have to trust the management team. The management team has to take the word of the development team. The dev team has to take it on good faith that the research team accounted for bias. And the research team has to take it on faith that the company they bought the datasets from (or the publicly available dataset they downloaded) used diverse sources.

And nobody has any receipts because of the privacy issues involved when you’re dealing with human data.

Now, this isn’t always the case. Very rarely, you can trace the datasets all the way back to real people and see exactly how diverse the training data really is. But here’s the problem: those verifiable datasets are almost always too small to train a system robust enough to, for example, detect the demographic nuances of cancer distributions or understand how to differentiate shadows from features in Black faces.
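When a dataset really can be traced back to real people, checking its demographic balance is straightforward; the hard part, as noted above, is that such traceable datasets are almost always too small. A minimal sketch of such an audit (the records, field names, and labels here are entirely invented):

```python
from collections import Counter

# Hypothetical traceable dataset: each record carries a self-reported
# demographic label. Labels and counts are invented for illustration.
records = [
    {"id": 1, "group": "white"},
    {"id": 2, "group": "white"},
    {"id": 3, "group": "white"},
    {"id": 4, "group": "Black"},
]

counts = Counter(r["group"] for r in records)
total = len(records)
shares = {group: n / total for group, n in counts.items()}

print(shares)  # a 75/25 split in this toy example
```

An audit like this only works when every record has a verifiable provenance; with scraped, anonymized, or purchased data, there is often no label to count, which is why "nobody has any receipts."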

That’s why, for example, when the FDA decides whether an AI system is ethical to use, it just requires companies to provide small-batch studies showing the software in use; it doesn’t require them to prove the diversity of the data used to train the AI.

Any AI team worth their salt can come up with a demo that shows their product working under the best of circumstances. Then all they have to do is support the demo with the results of previous peer-review (where other researchers use the same datasets to come to the same conclusions). Meanwhile, in many cases, the developers themselves have no clue what’s actually in the datasets other than what they’ve been told – much less the regulators.

In my experience as an AI journalist – that is, someone who has been pitched tens of thousands of stories – the vast majority of commercial AI entities claim to check for bias. Yet scarcely an hour can pass without a social media company, big tech firm, or government having to admit it has somehow deployed racially biased algorithms and is working to solve the problem.

But they aren’t. Because the problem is that those entities have commercialized a product that works better for white people than Black people. 

From inception to production, everyone involved in bringing an AI product to life can be focused on building something for the greater good. But the moment a human decides to sell, buy, or use an AI system for non-research purposes knowing it works better for one race than another, they’ve decided that there is an acceptable amount of racial bias. That’s the definition of systemic racism derived from racist privilege.

But what’s the real harm?

Hearkening back to the mammogram AI problem: when one class or race of people gets better treatment than others because of inherent privilege, it creates an unjust economy. In other words, if the bar for commercial acceptability is “it works for whites but not Blacks,” and it’s easier to develop systems with bias than without, then it becomes more lucrative to ship systems that don’t work well for Black people than to develop systems that work equally well for everyone. This is the current state of commercial artificial intelligence.

And it will remain that way as long as VCs, big tech, and governments continue to set the bar for commercialization so low. Until things change, they’re effectively “shorting” Black lives by profiting from systems that work better for whites.

Published February 11, 2021 — 19:14 UTC
