{"id":9049,"date":"2021-11-19T22:36:52","date_gmt":"2021-11-19T22:36:52","guid":{"rendered":"http:\/\/TheNextWeb=1373745"},"modified":"2021-11-19T22:36:52","modified_gmt":"2021-11-19T22:36:52","slug":"ai-cant-tell-if-youre-lying-anyone-who-says-otherwise-is-selling-something","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=9049","title":{"rendered":"AI can\u2019t tell if you\u2019re lying \u2013 anyone who says otherwise is selling something"},"content":{"rendered":"\n<div><img decoding=\"async\" src=\"https:\/\/img-cdn.tnwcdn.com\/image\/neural?filter_last=1&amp;fit=1280%2C640&amp;url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2021%2F11%2Flie_detector.jpg&amp;signature=2447937ef86e4d8c209a2304c2c062fe\" class=\"ff-og-image-inserted\"><\/div>\n<p>Another day, another problematic AI study.&nbsp;Today\u2019s snake oil special comes via Tel Aviv University where a team of researchers have unveiled a so-called \u201clie-detection system.\u201d<\/p>\n<p>Let\u2019s be really clear right up front: AI can\u2019t do anything a person, given an equivalent amount of time to work on the problem, couldn\u2019t do themselves. And no human can tell if any given human is lying. Full stop.<\/p>\n<p>The simple fact of the matter is that some of us can tell when some people are lying some of the time. Nobody can tell when anybody is lying all of the time.<\/p>\n<p>The university makes the following claim via <a href=\"https:\/\/www.eurekalert.org\/news-releases\/935222\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">press release<\/a>:<\/p>\n<blockquote readability=\"6\">\n<p>Researchers at Tel Aviv University detected 73% of the lies told by trial participants based on the contraction of their facial muscles \u2013 achieving a higher rate of detection than any known method.<\/p>\n<\/blockquote>\n<p>That\u2019s a really weird statement. 
The idea that \u201c73%\u201d accuracy at detecting lies is indicative of a particular paradigm\u2019s success is arguable at best.<\/p>\n<h2>What exactly is accuracy?<\/h2>\n<p>Pure luck gives any system making a binary choice a 50\/50 shot. And, <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2211368119301597\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">traditionally<\/a>, that\u2019s about how well humans perform at guessing lies. Interestingly, they perform much better at guessing truths. Some studies claim humans achieve about the same \u201caccuracy\u201d <a href=\"https:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/01463379809370090?journalCode=rcqu20#:~:text=For%20example%2C%20a%20series%20of,be%20around%2070%2D80%25.\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">at determining truth statements<\/a> as the Tel Aviv team\u2019s \u201clie-detection system\u201d does at detecting lies.<\/p>\n<p>The Tel Aviv University team\u2019s paper even mentions that polygraphs aren\u2019t admissible in courts because they\u2019re unreliable. But they fail to point out that polygraph devices (which have been around <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/23744006.2015.1060080\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">since 1921<\/a>) beat their own system in so-called \u201caccuracy\u201d \u2014 polygraphs average an 80\u201390% accuracy rate in studies.<\/p>\n<p>But let\u2019s take a deeper look at the Tel Aviv team\u2019s <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/brb3.2386\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">study<\/a> anyway. 
The team started with 48 participants, 35 of whom were identified as \u201cfemale.\u201d Six participants were cut because of technical issues, two got dropped for \u201cnever lying,\u201d and one participated in \u201conly 40 out of 80 trials when monetary incentives were not presented.\u201d<\/p>\n<p>So, the data for this study was generated from two sources: a proprietary AI system and 39-40 human participants. Of those participants, an overwhelming majority were identified as \u201cfemale,\u201d and there\u2019s no mention of racial, cultural, or religious diversity.<\/p>\n<p>Furthermore, the median age of participants was 23, and there\u2019s no way to determine whether the team considered financial backgrounds, mental health, or any other concerns.<\/p>\n<p>All we can tell is that a small group of people with a median age of 23, mostly \u201cfemale,\u201d paired off to participate in this study.<\/p>\n<p>There was also compensation involved. Not only were participants paid for their time, which is standard in academic research, but they were also paid for successfully lying to humans.<\/p>\n<p>That\u2019s a red flag. Not because it\u2019s unethical to pay for study data (it isn\u2019t), but because it adds unnecessary parameters that, intentionally or not, muddy up the study.<\/p>\n<p>The researchers explain this by claiming it was part of the experiment to determine whether incentivization changed people\u2019s ability to lie.<\/p>\n<p>But, with such a tiny study sample, it seems ludicrous to cram the experiment full of needless parameters. Especially ones that are so half-baked they couldn\u2019t possibly be codified without solid background data.<\/p>\n<p>How much impact does a financial incentive have on the efficacy of a truth-telling study? 
That sounds like something that needs its own <i>large-scale<\/i><span> study to determine.<\/span><\/p>\n<h2>Let\u2019s just move on to the methodology<\/h2>\n<p><span> The researchers paired off participants into liars and receivers. The liars put on headphones and listened for either the word \u201ctree\u201d or \u201cline\u201d and then were directed to either tell the truth or lie about which they\u2019d heard. Their partner\u2019s job was to guess if they were being lied to.<\/span><\/p>\n<p><span>The twist here is that the researchers created their own electrode arrays and attached them to the liars\u2019 faces and then developed an AI to interpret the outputs. The researchers operated under an initial assumption that twitches in our facial muscles are a window to the ground-truth.<\/span><\/p>\n<p><span>This assumption is purely theoretical and, frankly, ridiculous. Stroke victims exist. Bell\u2019s Palsy exists. Neurodiverse communication exists.&nbsp; Scars and loss of muscle strength exist. At least 1 billion people in the world currently live with some form of physical disability and nearly as many live with a diagnosed mental disorder.<\/span><\/p>\n<p><span>Yet, the researchers expect us to believe they\u2019ve invented a one-size-fits-all algorithm for understanding humans. They\u2019re claiming they\u2019ve stumbled across a human trait that inextricably links the mental act of deceit with a singular universal physical expression. And they accomplished this by measuring muscle twitches in the faces of just 40 humans?<\/span><\/p>\n<p><span>Per the aforementioned press release:<\/span><\/p>\n<blockquote readability=\"9\">\n<p><span>The researchers believe that their results can have dramatic implications in many spheres of our lives. 
In the future, the electrodes may become redundant, with video software trained to identify lies based on the actual movements of facial muscles.<\/span><\/p>\n<\/blockquote>\n<p>So the big idea here is to generate data with one experimental paradigm (physical electrodes) in order to develop a methodology for a completely different experimental paradigm (computer vision)? And we\u2019re supposed to believe that this particular mashup of disparate inputs will result in a system that can determine a human\u2019s truthfulness to such a degree that its outputs are admissible in court?<\/p>\n<p>That\u2019s a bold leap to make!&nbsp;<span>The team may as well be claiming it\u2019s solved AGI with black box deep learning. Computer vision already exists. Either the data from the electrodes is necessary or it isn\u2019t.<\/span><\/p>\n<p><span>What\u2019s worse, they apparently&nbsp;intend to<\/span><span> develop this into a snake oil solution for governments and big businesses.<\/span><\/p>\n<p>The press release continues with a quote:<\/p>\n<blockquote readability=\"16\">\n<p>[Team member Dino Levy] predicts: \u201cIn the bank, in police interrogations, at the airport, or in online job interviews, high-resolution cameras trained to identify movements of facial muscles will be able to tell truthful statements from lies. Right now, our team\u2019s task is to complete the experimental stage, train our algorithms and do away with the electrodes. Once the technology has been perfected, we expect it to have numerous, highly diverse applications.\u201d<\/p>\n<\/blockquote>\n<h2><span>Police interrogations? Airports? What?&nbsp;<\/span><\/h2>\n<p><span>Exactly what percentage of those 40 study participants were Black, Latino, disabled, autistic, or queer? 
How can anyone, in good faith and conscience, make such grandiose scientific claims about AI based on such a tiny sprinkling of data?&nbsp;<\/span><\/p>\n<p><span>If this \u201cAI solution\u201d were to actually become a product, people could potentially be falsely arrested, detained at airports, denied loans, and passed over for jobs because they don\u2019t look, sound, and act exactly like the people who participated in that study.&nbsp;<\/span><\/p>\n<p><span>This AI system determined whether someone was lying with only 73% accuracy, in an experiment where the lies were only <em>one word long<\/em>, meant <em>nothing<\/em> to the person saying them, and had <em>no real effect<\/em> on the person hearing them. <\/span><\/p>\n<p><span>There\u2019s no real-world scenario analogous to this experiment. And that \u201c73% accuracy\u201d is as meaningless as a Tarot card spread or a Magic 8-Ball\u2019s output.<\/span><\/p>\n<p>Simply put: a 73% accuracy rate over fewer than 200 iterations of a study involving a maximum of 20 data groups (the participants were paired off) is a result that indicates your experiment is a failure.<\/p>\n<p>The world needs more research like this, don\u2019t get me wrong. It\u2019s important to test the boundaries of technology. 
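The sample-size objection can be put in numbers. Even taking the headline figure at face value, a 73% hit rate over so few trials carries a wide margin of error. Here is a minimal Python sketch using a standard Wilson score interval; the 146-of-200 split is an illustrative assumption chosen only to roughly match the article's figure of under 200 trials, not a number from the paper:

```python
import math

def wilson_ci(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z ** 2 / trials
    center = (p + z ** 2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2))
    return center - margin, center + margin

# Illustrative assumption: 146 correct calls out of 200 trials (73%).
low, high = wilson_ci(146, 200)
print(f"95% CI: {low:.1%} to {high:.1%}")  # spans roughly 66% to 79%
```

Even under these generous assumptions the interval spans more than twelve percentage points, and the calculation treats every trial as independent; with participants paired into at most 20 groups, the effective sample is smaller still.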
But the claims made by the researchers are entirely outlandish and clearly aimed at an eventual product launch.<\/p>\n<p>Sadly, there\u2019s about a 100% chance that this gets developed and ends up in use by US police officers.<\/p>\n<p><span> Just like <a href=\"https:\/\/thenextweb.com\/news\/predictive-policing-ai-is-a-bigger-scam-than-psychic-detectives\">predictive-policing<\/a>, <a href=\"https:\/\/thenextweb.com\/news\/stanford-team-behind-bs-gaydar-ai-says-facial-recognition-can-expose-political-orientation\">Gaydar<\/a>, <a href=\"https:\/\/thenextweb.com\/news\/why-using-ai-to-screen-job-applicants-is-almost-always-a-bunch-of-crap\">hiring AI<\/a>, and all the other <a href=\"https:\/\/thenextweb.com\/news\/why-flat-earthers-clear-present-threat-ai-powered-society\">snake oil AI solutions<\/a> out there, this is absolutely harmful. <\/span><\/p>\n<p><span>But, by all means, don\u2019t take my word for it: read the entire paper and the researchers\u2019 own conclusions <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/brb3.2386\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">here<\/a>. 
<\/span><\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/ai-cant-tell-youre-lying-anyone-who-says-otherwise-selling-something\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Another day, another problematic AI study.&nbsp;Today\u2019s snake oil special comes via Tel Aviv University where a team of researchers have unveiled a so-called \u201clie-detection system.\u201d Let\u2019s be really clear right up front:&#8230;<\/p>\n","protected":false},"author":1,"featured_media":9050,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/9049"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9049"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/9049\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/9050"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9049"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9049"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9049"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}