{"id":3001,"date":"2021-02-11T19:14:57","date_gmt":"2021-02-11T19:14:57","guid":{"rendered":"https:\/\/thenextweb.com\/?p=1338675"},"modified":"2021-02-11T19:14:57","modified_gmt":"2021-02-11T19:14:57","slug":"are-ai-investors-shorting-black-lives","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=3001","title":{"rendered":"Are AI investors shorting Black lives?"},"content":{"rendered":"\n<div><img decoding=\"async\" src=\"https:\/\/img-cdn.tnwcdn.com\/image\/neural?filter_last=1&amp;fit=1280%2C640&amp;url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2020%2F03%2FScreenshot-2020-03-16-at-12.02.50.png&amp;signature=fd23740a9fed5527cd26c272bcdc402b\" class=\"ff-og-image-inserted\"><\/div>\n<p>Artificial intelligence often doesn\u2019t work the same for Black&nbsp;people as it does for white people. Sometimes it\u2019s a matter of vastly different user experiences, like when <span>voice assistants <a href=\"https:\/\/www.nytimes.com\/2020\/03\/23\/technology\/speech-recognition-bias-apple-amazon-google.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">struggle to understand words from Black voices<\/a>. Other times, such as when cancer detection systems <a href=\"https:\/\/qz.com\/1781123\/googles-ai-for-mammograms-doesnt-account-for-race\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">don\u2019t account for race<\/a>, it\u2019s a matter of life and death. <\/span><\/p>\n<p><strong>So who\u2019s fault is it? <\/strong><\/p>\n<p><span>Setting aside intentionally malicious uses of AI software, such as <a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2020\/01\/31\/why-amazons-ring-and-facial-recognition-technology-are-a-clear-and-present-danger-to-society\/\">facial recognition<\/a> and <a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2019\/02\/21\/predictive-policing-is-a-scam-that-perpetuates-systemic-bias\/\">crime prediction systems<\/a> for law enforcement, we can assume the problem is with <\/span><i>bias.<\/i><span> <\/span><\/p>\n<p><span>When we think about bias in AI, we\u2019re usually reminded of incidents such as Google\u2019s algorithm <a href=\"https:\/\/www.theverge.com\/2018\/1\/12\/16882408\/google-racist-gorillas-photo-recognition-algorithm-ai\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">mislabeling images of Black persons as animals<\/a> or Amazon\u2019s Rekognition system <a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2018\/07\/26\/amazons-facial-recognition-ai-confuses-politicians-with-criminals\/\">misidentifying several sitting Black members of US Congress as criminals<\/a>. <\/span><\/p>\n<p><span>But bias isn\u2019t just obviously&nbsp;racist ideations hidden inside the algorithm. It usually manifests unintentionally. It\u2019s a safe bet to assume, barring sabotage, the people at Amazon\u2019s AI department aren\u2019t trying to build racist facial recognition software. <a href=\"https:\/\/www.independent.co.uk\/news\/world\/americas\/amazon-police-facial-recognition-ban-racism-a9560016.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><em>But they do<\/em><\/a>, and it took the company\u2019s leadership far too long to admit it. <\/span><\/p>\n<p><span>Amazon argues that its software works the same for all faces <\/span><i>when users set it to the proper threshold for accuracy. 
<\/i><span>Unfortunately, the higher the accuracy threshold is set in a facial recognition system, the lower the odds the system will match faces in the wild with faces in a database. <\/span><\/p>\n<p><span>Cops use these systems set at a threshold low enough to get a hit when they scan a face, even if that means setting it lower than Amazon\u2019s peer-reviewed parameters for minimum acceptable accuracy.<\/span><\/p>\n<p><span>But, we already new facial recognition was <a href=\"http:\/\/sitn.hms.harvard.edu\/flash\/2020\/racial-discrimination-in-face-recognition-technology\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">inherently biased against Black faces<\/a>. And we know that cops in the US and other nations still use it, which means our governments are funding the research on the front end and purchasing it on the back end. <\/span><\/p>\n<p><span>This means the reality of false arrests for Black people is, in current status quo and practice, an acceptable risk as long as it results in some valid ones too. That\u2019s a shitty business model. <\/span><\/p>\n<p><span>Basically, the rules of engagement in the global business world dictate that you can\u2019t build a car that\u2019s been proven to be less safe for Black people. But you can program a car with a computer vision system that\u2019s been proven less reliable at recognizing Black pedestrians than white ones and regulators won\u2019t bat an eye. <\/span><\/p>\n<p><span><strong>The question is why?<\/strong> And the answer\u2019s simple: because it makes money. <\/span><\/p>\n<p><span>Even when every human in the loop has good intentions, bias can manifest at an unstoppable scale in almost any AI project that deals with data related to humans. <\/span><\/p>\n<p><span>Google and other companies have released AI-powered <a href=\"https:\/\/qz.com\/1781123\/googles-ai-for-mammograms-doesnt-account-for-race\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">mammogram screening systems<\/a> that don\u2019t work as well on Black breasts as white ones. Think about that for a second. <\/span><\/p>\n<p><span>The developers, doctors, and researchers who worked on those programs almost certainly did so with the best interests of their clients, patients, and the general public at heart. Let\u2019s assume we all really hate cancer. <em>But it still works better for white people<\/em>. <\/span><\/p>\n<p><span>And that\u2019s because the threshold for commercialization in the artificial intelligence community is set far too low all the way around. We need to invest heavily in cancer research, but we don\u2019t need to commercialize biased AI: research and business are two different things.&nbsp;<\/span><\/p>\n<p><span>The doctor using a cancer screening system has to trust the marketing and sales team from the company selling it. The sales and marketing team have to trust the management team. The management team has to take the word of development team. The dev team has to take it on good faith that the research team accounted for bias. The research team has to also take it on faith that the company they bought the datasets from (or the publicly available dataset they downloaded) used diverse sources. <\/span><\/p>\n<p><span>And nobody has any receipts because of the privacy issues involved when you\u2019re dealing with human data.<\/span><\/p>\n<p><span>Now, this isn\u2019t always the case. Very rarely, you can trace the datasets all the way back to real people and see exactly how diverse the training data really is. 
But here\u2019s the problem: those verifiable datasets are almost always too small to train a system robust enough to, for example, detect the demographic nuances of cancer distributions or understand how to differentiate shadows from features in Black faces.<\/span><\/p>\n<p><span>That\u2019s why, for example, when the FDA decides whether an AI system is ethical to use, <a href=\"https:\/\/www.statnews.com\/2021\/02\/11\/breast-cancer-disparities-artificial-intelligence-fda\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">it just requires companies to provide small batch studies<\/a> showing the software in use, not prove the diversity of the data&nbsp;used to train the AI. <\/span><\/p>\n<p><span>Any AI team worth their salt can come up with a demo that shows their product working under the best of circumstances. Then all they have to do is support the demo with the results of previous peer-review (where other researchers use the same datasets to come to the same conclusions). Meanwhile, in many&nbsp;cases, the developers themselves have no clue what\u2019s actually in the datasets other than what they\u2019ve been told \u2013 much less the regulators. <\/span><\/p>\n<p><span>In my experience as an AI journalist \u2013 that being, someone who has been pitched tens of thousands of stories \u2013 the vast majority of all commercial AI entities claim to check for bias. Yet, scant an hour can pass without a social media company, big tech, or government having to admit they\u2019ve somehow managed to use algorithms that were racially biased and are working to solve the problem.<\/span><\/p>\n<p><span>But they aren\u2019t. Because the problem is that those entities have commercialized a product that works better for white people than Black people.&nbsp;<\/span><\/p>\n<p><span>From inception to production, everyone involved in bringing an AI product to life can be focused on building something for the greater good, but the moment a human decides to sell, buy, or use an AI system for non-research purposes that they know works better for one race than another: they\u2019ve decided that there is an acceptable amount of racial bias. That\u2019s the definition of systemic racism derived from racist privilege. <\/span><\/p>\n<p><strong>But what\u2019s the real harm? <\/strong><\/p>\n<p><span>Hearkening back to <a href=\"https:\/\/www.statnews.com\/2021\/02\/11\/breast-cancer-disparities-artificial-intelligence-fda\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">the mammogram AI problem<\/a>, when one class or race of people get better treatment than others because of inherent privilege, it creates an unjust economy. In other words: if the bar for commercial acceptability is \u201cif it works for whites but not Blacks,\u201d and it\u2019s easier to develop systems with bias than without, then it becomes more lucrative to focus on developing systems that don\u2019t work well for Black people than it does to develop systems that work equally well for Black people. This is the current state of commercial artificial intelligence. <\/span><\/p>\n<p><span>And it will remain that way as long as VC\u2019s, big tech, and governments continue to set the bar for commercialization so low. 
Until things change, they're effectively "shorting" Black lives by profiting from systems that work better for white people.

*Published February 11, 2021, 19:14 UTC. [Source](https://thenextweb.com/neural/2021/02/11/are-ai-investors-shorting-black-lives/)*