{"id":13993,"date":"2023-11-20T16:00:21","date_gmt":"2023-11-20T16:00:21","guid":{"rendered":"http:\/\/TheNextWeb=1401664"},"modified":"2023-11-20T16:00:21","modified_gmt":"2023-11-20T16:00:21","slug":"ai-hallucinations-pose-direct-threat-to-science-oxford-study-warns","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=13993","title":{"rendered":"AI hallucinations pose \u2018direct threat\u2019 to science, Oxford study warns"},"content":{"rendered":"\n<div><img decoding=\"async\" src=\"https:\/\/img-cdn.tnwcdn.com\/image\/tnw-blurple?filter_last=1&amp;fit=1280%2C640&amp;url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2023%2F11%2FUntitled-design-19-4.jpg&amp;signature=9139c6c6b51f0c537bb595a8c6344130\" class=\"ff-og-image-inserted\"><\/div>\n<p>Large Language Models (LLMs) \u2014 such as those used in chatbots \u2014 have an alarming <a href=\"https:\/\/thenextweb.com\/news\/ai-hallucinations-solution-iris-ai\" target=\"_blank\" rel=\"noopener\">tendency to hallucinate<\/a>. That is, to generate false content that they present as accurate. 
These <a href=\"https:\/\/thenextweb.com\/topic\/ai\" target=\"_blank\" rel=\"noopener\">AI<\/a> hallucinations pose, among other risks, a direct threat to science and scientific truth, researchers at the Oxford Internet Institute warn.<\/p>\n<p>According to their paper, published in<a href=\"https:\/\/u7061146.ct.sendgrid.net\/ls\/click?upn=4tNED-2FM8iDZJQyQ53jATUTOg-2Bj-2BgJ-2Fbn0TaqTIjqgfrsPma310aGfxugFNaOMn8gPkO5K94zHlRHUyB35uce5A-3D-3DbUfS_cGe9W5K-2FCqFb2N07halzpAEvxBhOYRd-2BEyhwAaxkmepQvFkAqKPWpw1VrPTgfhsisBwzWpYasssovNXWxk7heDLWkf670GFde1Xzk4DvwFKuGExvudbBCynWHlUHjY8GQRmpOGWISEpbsOayFNQ04I2Y7mk-2FuwVYhxDyGGU9Zn1q6Lp8pKUQr-2BVkAdTefaSYtM7LSAXRnHDGsM-2BeqVUMRJcdOAB4X-2BnnolTNOampNTY8BhJrKA-2B6ttEg4YsXh8tyC3bpQkp2qLLspOWZUxsVCtJKFm0JkXfA4jnlh7D2YWqOA5JtEPpvg61VwuAUHjiDxX3oXnIa1S09WkMXBJlTERc9U9U6H26R84-2FzgP10yo8-3D\" target=\"_blank\" rel=\"nofollow noopener\"> Nature Human Behaviour<\/a>, \u201cLLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact.\u201d<\/p>\n<p>LLMs are currently treated as knowledge sources and generate information in response to questions or prompts. But the data they\u2019re trained on isn\u2019t necessarily factually correct. One reason behind this is that these models often use online sources, which can contain false statements, opinions, and inaccurate information.<\/p>\n<p>\u201cPeople using LLMs often anthropomorphise the technology, where they trust it as a human-like information source,\u201d explained Professor Brent Mittelstadt, co-author of the paper.<\/p>\n<p>\u201cThis is, in part, due to the design of LLMs as helpful, human-sounding agents that converse with users and answer seemingly any question with confident sounding, well-written text. 
The result of this is that users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.\u201d<\/p>\n<p>When it comes to science and education, information accuracy is of vital importance, and the researchers urge the scientific community to use LLMs as \u201czero-shot translators.\u201d This means that users should provide the model with the appropriate data and ask it to transform that data into a conclusion or code, for instance, rather than relying on the model itself as a source of knowledge.<\/p>\n<p>This way, it becomes easier to check that the output is factually correct and in line with the provided input.<\/p>\n<p>LLMs will \u201cundoubtedly\u201d assist with scientific workflows, according to the Oxford professors. But it\u2019s crucial for the community to use them responsibly and to maintain clear expectations of how they can actually contribute.<\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/ai-hallucinations-pose-direct-threat-science\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Large Language Models (LLMs) \u2014 such as those used in chatbots \u2014 have an alarming tendency to hallucinate. That is, to generate false content that they present as accurate. 
These AI hallucinations&#8230;<\/p>\n","protected":false},"author":1,"featured_media":13994,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/13993"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=13993"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/13993\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/13994"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=13993"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=13993"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=13993"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}