{"id":15690,"date":"2024-09-19T08:28:16","date_gmt":"2024-09-19T08:28:16","guid":{"rendered":"http:\/\/TheNextWeb=1410369"},"modified":"2024-09-19T08:28:16","modified_gmt":"2024-09-19T08:28:16","slug":"ai-doesnt-hallucinate-why-attributing-human-traits-to-tech-is-users-biggest-pitfall","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=15690","title":{"rendered":"AI doesn\u2019t hallucinate \u2014 why attributing human traits to tech is users\u2019 biggest pitfall"},"content":{"rendered":"\n<div><img decoding=\"async\" src=\"https:\/\/img-cdn.tnwcdn.com\/image\/tnw-blurple?filter_last=1&amp;fit=1280%2C640&amp;url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2024%2F09%2FUntitled-design-8.jpg&amp;signature=9f6039e9b3355fdb91493619292300cd\" class=\"ff-og-image-inserted\"><\/div>\n<p>This year, Air Canada lost a lawsuit against a <a href=\"https:\/\/www.theguardian.com\/world\/2024\/feb\/16\/air-canada-chatbot-lawsuit\" target=\"_blank\" rel=\"nofollow noopener\">customer who was misled by an AI chatbot<\/a> into purchasing full-price plane tickets, being assured they would later be refunded under the company\u2019s bereavement policy. The airline tried to claim the bot was \u201cresponsible for its own actions.\u201d This line of argumentation was rejected by the court and the company not only had to pay compensation, it also received public criticism for attempting to distance itself from the situation. It\u2019s clear companies are liable for AI models, even when they make mistakes beyond our control.<\/p>\n<p>The rapidly advancing world of <a href=\"https:\/\/thenextweb.com\/artificial-intelligence\" target=\"_blank\" rel=\"noopener\">AI<\/a>, and particularly generative AI, is looked at with a mixture of awe and apprehension by businesses. 
Seen as a double-edged sword, AI is a catalyst with the power to speed up productivity, allowing you to do far more with less, but one with kinks that can lead to issues ranging from customer dissatisfaction to lawsuits.<\/p>\n<p>These kinks are what\u2019s become popularly known as \u2018AI hallucinations\u2019: instances when an AI model provides answers that are incorrect, irrelevant, or nonsensical.<\/p>\n<p>\u201cLuckily, it\u2019s not a very widespread problem. It only happens between 2% to maybe 10% of the time at the high end. But still, it can be very dangerous in a <a href=\"https:\/\/thenextweb.com\/business\" target=\"_blank\" rel=\"noopener\">business<\/a> environment. Imagine asking an AI system to diagnose a patient or land an aeroplane,\u201d says Amr Awadallah, an AI expert who\u2019s set to give a talk at <a href=\"https:\/\/vds.tech\/\" target=\"_blank\" rel=\"nofollow noopener\">VDS2024<\/a> on <em>How Gen-AI is Transforming Business &amp; Avoiding the Pitfalls.<\/em><\/p>\n<p>But most AI experts dislike this term. The terminology, and the misunderstanding it reflects about how these errors actually occur, can lead to pitfalls with ripple effects into the future.<\/p>\n<p>As former VP of Product Intelligence Engineering at Yahoo! 
and VP of Developer Relations for Google Cloud, Awadallah has seen the technology evolve throughout his career and has since founded <a href=\"https:\/\/vectara.com\/\" target=\"_blank\" rel=\"nofollow noopener\">Vectara<\/a>, a company that uses AI and neural network technologies for natural language processing to help businesses reap the benefits of better search relevance.<\/p>\n<p>We spoke with him to get some clarity on why this term is so controversial, what businesses need to understand about \u2018AI hallucinations,\u2019 and whether they can be solved.<\/p>\n<h2><strong>Why AI models don\u2019t \u2018hallucinate\u2019<\/strong><\/h2>\n<p>Using the term hallucination implies that, when an AI model provides the wrong information, it\u2019s seeing or feeling something that isn\u2019t there. But that\u2019s not what\u2019s happening behind the lines of code that put these models into operation.<\/p>\n<p>We humans very commonly fall into this type of trap. Anthropomorphism, or the innate tendency to attribute human traits, emotions, or intentions to non-human entities, is a mechanism we use to grapple with the unknown by viewing it through a human lens. The ancient Greeks used it to attribute human characteristics to deities; today, we\u2019re most likely to use it to interpret our pets\u2019 actions.<\/p>\n<p>The danger of falling into this trap is particularly acute with AI: the technology has become pervasive in our society in a very short time, yet very few people actually understand what it is and how it works. For our minds to comprehend such a complex topic, we use shortcuts.<\/p>\n<p>\u201cI think the media played a big role in that because it\u2019s an attractive term that creates a buzz. 
So they latched onto it and it\u2019s become the standard way we refer to it now,\u201d Awadallah says.<\/p>\n<p>But just as assuming a wagging tail always means a friendly animal can lead us astray, so can misinterpreting the outputs an AI gives.<\/p>\n<p>\u201cIt\u2019s really attributing more to the AI than it is. It\u2019s not thinking in the same way we\u2019re thinking. All it\u2019s doing is trying to predict what the next word should be given all the previous words that have been said,\u201d Awadallah explains.<\/p>\n<p>If he had to give this occurrence a name, he would call it a \u2018confabulation.\u2019 Confabulations are essentially the addition of words or sentences that fill in the blanks in a way that makes the information look credible, even if it\u2019s incorrect.<\/p>\n<p>\u201c[AI models are] highly incentivised to answer any question. It doesn\u2019t want to tell you, \u2018I don\u2019t know\u2019,\u201d says Awadallah.<\/p>\n<p>The danger here is that while some confabulations are easy to detect because they border on the absurd, most of the time an AI will present information that is very believable. And the more we rely on AI to help us speed up productivity, the more we may take its seemingly believable responses at face value. This means companies need to be vigilant about including human oversight for every task an AI completes, dedicating more, not less, time and resources to the job.<\/p>\n<p>The answers an AI model provides are only as good as the data it has access to and the scope of your prompt. Since AI relies on patterns in its training data rather than on reasoning, its responses can falter in two ways: the available data may let it down (the information is incorrect, or the model has little data on that particular query), or the nature and context of your query or task may be at fault. 
For example, cultural context can result in different perspectives and responses to the same query.<\/p>\n<p>In the case of narrow domain knowledge systems, or internal AI models that are built to retrieve information within a specific set of data, such as a business\u2019 internal system, an AI only has space for a certain amount of memory. Although this is far more than a human can retain, it\u2019s not unlimited. When you ask it questions beyond the scope of its memory, it will still be incentivised to answer by predicting what the next words could be.<\/p>\n<h2><strong>Can AI misinformation be solved?<\/strong><\/h2>\n<p>There\u2019s been a lot of talk about whether or not \u2018confabulations\u2019 can be solved.<\/p>\n<p>Awadallah and his team at Vectara are developing a method to combat confabulations in narrow domain knowledge systems. They do this by grounding a model\u2019s answers in information retrieved from a trusted dataset, a technique known as Retrieval Augmented Generation (RAG), paired with an AI model whose specific task is fact-checking the output of other AI models.<\/p>\n<p>Of course, Awadallah admits, just like with human fact checkers, there is always the possibility that something will slip past an AI fact checker; this is known as a false negative.<\/p>\n<p>For open domain AI models, like ChatGPT, which are built to retrieve information about any topic across the wide expanse of the world wide web, dealing with confabulations is a bit trickier. Some researchers recently published a promising paper on the <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" target=\"_blank\" rel=\"nofollow noopener\">use of \u201csemantic entropy\u201d to detect AI misinformation<\/a>. This method involves asking an AI the same question multiple times and assigning a score based on how widely the answers vary in meaning.<\/p>\n<p>As we edge closer and closer to eliminating AI confabulations, an interesting question to consider is: do we actually want AI to be factual and correct 100% of the time? 
Could limiting their responses also limit our ability to use them for creative tasks?<\/p>\n<p><em>Join Amr Awadallah at the seventh edition of VDS to find out more about how businesses can harness the power of generative AI, while avoiding the risks, at <a href=\"https:\/\/vds.tech\/\" target=\"_blank\" rel=\"nofollow noopener\">VDS2024<\/a> taking place October 23-24 in Valencia. <\/em><\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/ai-hallucinate-why-human-tech-users-pitfall\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This year, Air Canada lost a lawsuit against a customer who was misled by an AI chatbot into purchasing full-price plane tickets, being assured they would later be refunded under the company\u2019s&#8230;<\/p>\n","protected":false},"author":1,"featured_media":15691,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/15690"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=15690"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/15690\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/15691"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=15690"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.c
om\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=15690"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=15690"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}