Scientists invented an AI to detect racist people

Posted on February 3, 2021 by admin

A team of researchers at the University of Virginia has developed an AI system that attempts to detect and quantify the physiological signs associated with racial bias. In other words, they're building a wearable device that tries to identify when you're having racist thoughts.

Up front: Nope. Machines can’t tell if a person is a racist. They also can’t tell if something someone has said or done is racist. And they certainly can’t determine if you’re thinking racist thoughts just by taking your pulse or measuring your O2 saturation levels with an Apple Watch-style device.

That being said, this is fascinating research that could pave the way to a greater understanding of how unconscious bias and systemic racism fit together.

How does it work?

The current standard for identifying implicit racial bias uses something called the Implicit Association Test. Basically, you look at a series of images and words and try to associate them with “light skin,” “dark skin,” “good,” and “bad” as quickly as possible. You can try it yourself here on Harvard’s website.
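For context on how tests like this are scored: IAT-style measures typically convert reaction-time differences between "compatible" and "incompatible" word–image pairings into a standardized score. The sketch below is a simplified illustration of that idea only; the function name and the scoring details are assumptions, not the actual algorithm behind Harvard's test:

```python
import statistics

def simplified_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT-style score: the difference in mean reaction
    times (ms) between pairing conditions, divided by the pooled
    standard deviation. Illustrative sketch, not the real scoring."""
    all_rts = compatible_rts + incompatible_rts
    pooled_sd = statistics.stdev(all_rts)
    mean_diff = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)
    return mean_diff / pooled_sd

# Slower responses on "incompatible" pairings push the score up,
# which is read as evidence of a stronger implicit association.
compat = [650, 700, 620, 680, 640]
incompat = [820, 900, 850, 780, 880]
print(round(simplified_d_score(compat, incompat), 2))
```

A score near zero would mean the two pairing conditions took about the same time; a large positive score means the "incompatible" pairings were consistently slower.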

There’s also research indicating that learned threat responses to outsiders can often be measured physiologically. In other words, some people have a physical response to people who look different from them, and that response can be measured when it occurs.

The UVA team combined these two ideas. They took a group of 76 volunteer students and had them take the Implicit Association Test while measuring their physiological responses with a wearable device.

Finally, the meat of the study involved developing a machine learning system to evaluate the data and make inferences. Can identifying a specific combination of physiological responses really tell us if someone is, for lack of a better way to put it, experiencing involuntary feelings of racism?

The answer’s a muddy maybe.
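As a rough sketch of what a pipeline like this could look like, one can train a simple classifier to map physiological features to an IAT-derived label. Everything below is an illustrative assumption: the features (heart-rate change, skin conductance), the synthetic data, and the nearest-centroid model are stand-ins, not the team's actual system.

```python
import random

# Synthetic (heart_rate_change, skin_conductance) features paired with a
# binary "high implicit bias" label. Purely illustrative: we assume higher
# arousal for label 1 so the classes are separable at all.
random.seed(0)

def make_sample(label):
    base = (4.0, 0.8) if label else (1.0, 0.3)
    return ([base[0] + random.gauss(0, 1.5),
             base[1] + random.gauss(0, 0.4)], label)

data = [make_sample(i % 2) for i in range(76)]  # 76 participants, as in the study
train, test = data[:60], data[60:]

def centroid(samples):
    """Average each feature column across the given samples."""
    xs = [s[0] for s in samples]
    return [sum(col) / len(col) for col in zip(*xs)]

c0 = centroid([s for s in train if s[1] == 0])
c1 = centroid([s for s in train if s[1] == 1])

def predict(x):
    """Nearest-centroid rule: pick the class whose centroid is closer."""
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return int(d1 < d0)

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the problem, not the model: physiological signals go in, a bias label comes out, and "accuracy" is just how often the two match on held-out participants.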

According to the team’s research paper:

Our machine learning and statistical analysis show that implicit bias can be predicted from physiological signals with 76.1% accuracy.

But that’s not necessarily the bottom line. 76% accuracy is a low bar for success in most machine learning contexts. And flashing images of cartoon faces with different skin tones isn’t a one-to-one analog for real-world interactions with people of other races.
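To see why a headline accuracy figure needs context, compare it with a trivial baseline: a model that always predicts the majority class. On an imbalanced label set, that baseline can land close to such a number without learning anything. The 53/23 split below is a hypothetical illustration, not the study's actual label distribution:

```python
# Hypothetical 76-person label set where ~70% of participants share one label.
labels = [1] * 53 + [0] * 23

# A "model" that always guesses the most common label...
majority = max(set(labels), key=labels.count)

# ...still scores respectably on raw accuracy.
baseline_acc = sum(l == majority for l in labels) / len(labels)
print(f"majority-class baseline: {baseline_acc:.1%}")  # ~69.7%
```

This is why accuracy numbers are usually reported alongside chance level or class balance; 76.1% means something very different against a 50% baseline than against a 70% one.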

Quick take: Any notion the general public might have of some kind of wand-style gadget for detecting racists should be dismissed outright. The UVA team’s important work has nothing to do with developing a wearable that pings you every time you or someone around you experiences implicit bias. It’s more about understanding the link between mental associations of dark skin with badness and the accompanying physiological manifestations.

In that respect, this novel research has the potential to help illuminate the subconscious thought processes behind, for example, radicalization and paranoia. It also has the potential to finally demonstrate how racism can be the result of unintended implicit bias from people who may even believe themselves to be allies.

You don’t have to feel like you’re being racist to actually be racist, and this system could help researchers better understand and explain these concepts.

But it absolutely doesn’t detect bias; it predicts it, and that’s different. And it certainly can’t tell if someone’s a racist. It shines a light on some of the physiological effects associated with implicit bias, much like a diagnostician might initially read a cough and a fever as signs of certain diseases while still requiring further testing to confirm a diagnosis. This AI doesn’t label racism or bias; it just points to some of the associated side effects.

You can check out the whole pre-print paper here on arXiv.

Published February 3, 2021 — 20:11 UTC
