Can OpenAI’s Strawberry program deceive humans?

Posted on October 31, 2024 by admin

OpenAI, the company that made ChatGPT, has launched a new artificial intelligence (AI) system called Strawberry. It is designed not just to provide quick responses to questions, like ChatGPT, but to think or “reason”.

This raises several major concerns. If Strawberry really is capable of some form of reasoning, could this AI system cheat and deceive humans?

OpenAI can program the AI in ways that mitigate its ability to manipulate humans. But the company’s own evaluations rate it as a “medium risk” for its ability to assist experts in the “operational planning of reproducing a known biological threat” – in other words, a biological weapon. It was also rated as a medium risk for its ability to persuade humans to change their thinking.

It remains to be seen how such a system might be used by those with bad intentions, such as con artists or hackers. Nevertheless, OpenAI’s evaluation states that medium-risk systems can be released for wider use – a position I believe is misguided.

Strawberry is not one AI “model”, or program, but several – known collectively as o1. These models are intended to answer complex questions and solve intricate maths problems. They are also capable of writing computer code – to help you make your own website or app, for example.

An apparent ability to reason might come as a surprise to some, since this is generally considered a precursor to judgment and decision making – something that has often seemed a distant goal for AI. So, on the surface at least, it would seem to move artificial intelligence a step closer to human-like intelligence.

When things look too good to be true, there's often a catch. Here, the catch is that these new AI models are designed to maximise their goals. What does this mean in practice? The path or strategy an AI chooses to achieve its objective may not always be fair, or align with human values.

True intentions

For example, if you were to play chess against Strawberry, could its reasoning, in theory, allow it to hack the scoring system rather than figure out the best strategies for winning the game?
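
As a toy illustration of this kind of "specification gaming", consider the following Python sketch. It is entirely hypothetical and reflects nothing about how o1 is actually built: an optimiser is scored by a number it can also tamper with, and tampering turns out to "win".

# Toy illustration of specification gaming: an optimiser that maximises
# a scored proxy can pick a strategy its designers never intended.
# Hypothetical sketch only - not a model of any real AI system.
from itertools import product

def scoreboard(moves_played: int, tampered: bool) -> int:
    # The designers' intent: reward legitimate play, 10 points per move.
    # The proxy actually optimised: whatever number the scoreboard reports -
    # and tampering with the scoreboard dwarfs any honest score.
    return 1_000_000 if tampered else moves_played * 10

# Search over strategies (how many moves to play, whether to tamper)
# and keep whichever maximises the proxy score.
strategies = product(range(50), [False, True])
best = max(strategies, key=lambda s: scoreboard(*s))

moves, tampered = best
print(f"chosen strategy: moves={moves}, tamper_with_scoreboard={tampered}")
# Prints: moves=0, tamper_with_scoreboard=True. The optimiser "discovers"
# that hacking the scoreboard beats playing well - the gap between the
# stated goal and human intent described above.

The point is not the code itself but the gap it exposes: optimise the score, and you do not necessarily get good chess.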

The AI might also be able to lie to humans about its true intentions and capabilities, which would pose a serious safety concern if it were to be deployed widely. For example, if the AI knew it was infected with malware, could it “choose” to conceal this fact in the knowledge that a human operator might opt to disable the whole system if they knew?

[Image: Strawberry goes a step beyond the capabilities of AI chatbots. Robert Way / Shutterstock]

These would be classic examples of unethical AI behaviour, in which cheating or deception is treated as acceptable so long as it leads to the desired goal. Cheating would also be quicker for the AI, as it wouldn't have to waste time working out the next best move – but it would not necessarily be morally correct.

This leads to a rather interesting yet worrying discussion. What level of reasoning is Strawberry capable of and what could its unintended consequences be? A powerful AI system that’s capable of cheating humans could pose serious ethical, legal and financial risks to us.

Such risks become grave in critical situations, such as designing weapons of mass destruction. OpenAI rates its own Strawberry models as “medium risk” for their potential to assist scientists in developing chemical, biological, radiological and nuclear weapons.

OpenAI says: “Our evaluations found that o1-preview and o1-mini can help experts with the operational planning of reproducing a known biological threat.” But it goes on to say that experts already have significant expertise in these areas, so the risk would be limited in practice. It adds: “The models do not enable non-experts to create biological threats, because creating such a threat requires hands-on laboratory skills that the models cannot replace.”

Powers of persuasion

OpenAI’s evaluation of Strawberry also investigated the risk that it could persuade humans to change their beliefs. The new o1 models were found to be more persuasive and more manipulative than ChatGPT.

OpenAI also tested a mitigation system that was able to reduce the manipulative capabilities of the AI system. Overall, Strawberry was labelled a medium risk for “persuasion” in OpenAI’s tests.

Strawberry was rated low risk for its ability to operate autonomously and for cybersecurity.

OpenAI’s policy states that “medium risk” models can be released for wide use. In my view, this underestimates the threat. The deployment of such models could be catastrophic, especially if bad actors manipulate the technology for their own ends.

This calls for strong checks and balances that will only be possible through AI regulation and legal frameworks, such as penalising incorrect risk assessments and the misuse of AI.

The UK government stressed the need for “safety, security and robustness” in its 2023 AI white paper, but that’s not nearly enough. There is an urgent need to prioritise human safety and devise rigorous scrutiny protocols for AI models such as Strawberry.

Shweta Singh, Assistant Professor, Information Systems and Management, Warwick Business School, University of Warwick

This article is republished from The Conversation under a Creative Commons license. Read the original article.
