{"id":13957,"date":"2023-11-10T15:52:51","date_gmt":"2023-11-10T15:52:51","guid":{"rendered":"http:\/\/TheNextWeb=1401462"},"modified":"2023-11-10T15:52:51","modified_gmt":"2023-11-10T15:52:51","slug":"a-prisoners-dilemma-shows-ais-potential-to-cooperate-with-humans","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=13957","title":{"rendered":"A prisoner\u2019s dilemma shows AI\u2019s potential to cooperate with humans"},"content":{"rendered":"\n<p>ChatGPT\u2019s engine cooperates more than people but also&nbsp;overestimates human collaboration, according to new research. Scientists believe the study offers valuable clues about deploying <a href=\"https:\/\/thenextweb.com\/topic\/artificial-intelligence\" target=\"_blank\" rel=\"noopener\">AI<\/a> in real-world applications.<\/p>\n<p>The findings emerged from a famous game-theory problem: <a href=\"https:\/\/www.dummies.com\/article\/business-careers-money\/business\/accounting\/calculation-analysis\/game-theory-prisoners-dilemma-254791\/\" target=\"_blank\" rel=\"nofollow noopener\">the prisoner\u2019s dilemma<\/a>. There are numerous variations, but the thought experiment typically starts with the arrest of two gang members. Each accomplish is then placed in a separate room for questioning.<\/p>\n<p>During the interrogations, they receive an offer: snitch on your fellow prisoner and go free.&nbsp; But there\u2019s a catch: if both<span> prisoners testify against the other, each will get a harsher sentence than if they had stayed silent.&nbsp;<\/span><\/p>\n<p>Over a series of moves, the players have to choose between mutual benefit or self-interest. Typically, they prioritise collective gains. 
Empirical studies consistently show that humans will cooperate to maximise their joint payoff \u2014 even if they\u2019re total strangers.<\/p>\n<p>It\u2019s a trait that\u2019s unique in the animal kingdom. But does it exist in the digital kingdom?<\/p>\n<p>To find out, researchers from the University of Mannheim Business School (UMBS) developed a simulation of the prisoner\u2019s dilemma. They tested it on GPT, the family of large language models (LLMs) behind OpenAI\u2019s landmark ChatGPT system.<\/p>\n<blockquote class=\"c-richText__pullQuote\" readability=\"29\">\n<div class=\"c-richText__pullQuoteGradient\" readability=\"32\">\n<p class=\"c-richText__pullQuoteQuote\">\u201cSelf-preservation instincts in AI may pose societal challenges.\u201d<\/p>\n<\/div>\n<\/blockquote>\n<p>GPT played the game with a human. The first player would choose between a cooperative and a selfish move. The second player would then respond with their own choice of move.<\/p>\n<p>Mutual cooperation would yield the optimal collective outcome. 
But it could only be achieved if both players expected their decisions to be reciprocated.<\/p>\n<p>GPT apparently expects this more than we do. Across the game, the model cooperated more than people do. Intriguingly, GPT was also overly optimistic about the selflessness of the human player.<\/p>\n<p>The findings also point to LLM applications beyond natural language processing tasks.&nbsp;The researchers proffer two examples: urban traffic management and energy consumption.<\/p>\n<h2><strong>LLMs in the real world&nbsp;<\/strong><\/h2>\n<p>In cities plagued by congestion,<span>&nbsp;motorists face their own prisoner\u2019s dilemma. They could cooperate by driving considerately and using mutually beneficial routes. Alternatively, they could cut others off and take a road that\u2019s quick for them but creates traffic jams for others.<\/span><\/p>\n<p>If they act purely in their self-interest, their behaviour will cause gridlock, accidents, and probably some good, old-fashioned road rage.<\/p>\n<p>In theory, AI could strike the ideal balance. <span>Imagine that each car\u2019s navigation system featured a GPT-like intelligence that used the same cooperative strategies as in the prisoner\u2019s dilemma.<\/span><\/p>\n<p>According to Professor Kevin Bauer, the study\u2019s lead author, the impact could be tremendous.<\/p>\n<p><span>\u201cInstead of hundreds of individual decisions made in self-interest, our results suggest that GPT would guide drivers in a more cooperative, coordinated manner, prioritising the overall efficiency of the traffic system,\u201d Bauer told TNW.&nbsp;<\/span><\/p>\n<p><span>\u201cRoutes would be suggested not just based on the quickest option for one car, but the optimal flow for all cars. 
The result could be fewer traffic jams, reduced commute times, and a more harmonious driving environment.\u201d<\/span><\/p>\n<figure class=\"post-image post-mediaBleed aligncenter\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-1076566 js-lazy\" src=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2017\/09\/autonomous.jpg\" alt=\"Graphic of autonomous vehicles at a road crossing\" width=\"1611\" height=\"808\" sizes=\"(max-width: 1611px) 100vw, 1611px\" data-srcset=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2017\/09\/autonomous.jpg 1611w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2017\/09\/autonomous-280x140.jpg 280w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2017\/09\/autonomous-538x270.jpg 538w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2017\/09\/autonomous-270x135.jpg 270w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2017\/09\/autonomous-796x399.jpg 796w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2017\/09\/autonomous-1592x798.jpg 1592w\"><figcaption>Researchers still need to <a href=\"https:\/\/thenextweb.com\/news\/perfecting-self-driving-cars-conversation-syndication\" target=\"_blank\" rel=\"noopener\">improve coordination<\/a> between autonomous vehicles and human drivers. Credit: USDOT<\/figcaption><\/figure>\n<p><span>Bauer sees similar potential in energy usage. He envisions a community where every household can use solar panels and batteries to generate, store, and consume energy. The challenge is optimising their consumption during peak hours.<\/span><\/p>\n<p><span>Again, the scenario is akin to a prisoner\u2019s dilemma: save energy for purely personal use during high demand or contribute it to the grid for overall stability. 
<\/span><span>Here too, AI could help achieve the optimal outcome.&nbsp;<\/span><\/p>\n<p><span>\u201cInstead of individual households making decisions purely for personal benefit, the system would manage energy distribution by considering the well-being of the entire grid,\u201d Bauer said.&nbsp;<\/span><\/p>\n<p><span>\u201cThis means coordinating energy storage, consumption, and sharing in a manner that prevents blackouts and ensures the efficient use of resources for the community as a whole, leading to a more stable, efficient, and resilient energy grid for everyone.\u201d<\/span><\/p>\n<h2>Ensuring safe cooperation<\/h2>\n<p>As AI becomes increasingly integrated into human society, the underlying models will need guidance to ensure that they serve our principles and goals.<\/p>\n<p><span>To do this, Bauer recommends extensive transparency in the decision-making process and education about effective usage. <\/span><\/p>\n<p><span>He also strongly advises close monitoring of the AI system\u2019s values. The<\/span>&nbsp;likes of GPT, he says, don\u2019t merely compute and process data, but also adopt aspects of human nature. These may be acquired <span>during self-supervised learning, data curation, or human feedback to the model. <\/span><\/p>\n<p><span>Sometimes, the results are concerning. While GPT was more cooperative than humans in the prisoner\u2019s dilemma, it still prioritised its own payoff over that of the other player. 
<\/span><span>The researchers suspect that this behaviour is driven by a combination of \u201chyper-rationality\u201d and \u201cself-preservation.\u201d&nbsp;<\/span><\/p>\n<p>\u201cThis hyper-rationality underscores the imperative need for well-defined ethical guidelines and responsible AI deployment practices,\u201d Bauer said.<\/p>\n<p>\u201cUnrestrained self-preservation instincts in AI may pose societal challenges, particularly in scenarios where AI\u2019s self-preservation tendencies could potentially conflict with the well-being of humans.\u201d<\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/prisoners-dilemma-ai-more-cooperative-humans\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>ChatGPT\u2019s engine cooperates more than people but also&nbsp;overestimates human collaboration, according to new research. Scientists believe the study offers valuable clues about deploying AI in real-world applications. The findings emerged from a&#8230;<\/p>\n","protected":false},"author":1,"featured_media":13958,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/13957"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=13957"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/13957\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/1
3958"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=13957"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=13957"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=13957"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}