You are viewing content from Gaydio Brighton. Would you like to make this your preferred location?

Claude Opus 4.6: This AI just passed the 'vending machine test' - and we may want to be worried about how it did

Tuesday, 10 February 2026 11:50

By Rowland Manthorpe, technology correspondent

When leading AI company Anthropic launched its latest AI model, Claude Opus 4.6, at the end of last week, it broke many measures of intelligence and effectiveness - including one crucial benchmark: the vending machine test.

Yes, AIs run vending machines now, under the watchful eyes of researchers at Anthropic and AI thinktank Andon Labs.

The idea is to test the AI's ability to coordinate multiple different logistical and strategic challenges over a long period.

As AI shifts from talking to performing increasingly complex tasks, this is more and more important.

A previous vending machine experiment, where Anthropic installed a vending machine in its office and handed it over to Claude, ended in hilarious failure.

Claude was so plagued by hallucinations that at one point it promised to meet customers in person wearing a blue blazer and a red tie, a difficult task for an entity that does not have a physical body.

That was nine months ago; times have changed since then.

Admittedly, this time the vending machine experiment was conducted in simulation, which reduced the complexity of the situation. Nevertheless, Claude was clearly much more focused, beating out all previous records for the amount of money it made from its vending machine.

Among top models, OpenAI's ChatGPT 5.2 made $3,591 (£2,622) in a simulated year. Google's Gemini 3 made $5,478 (£4,000). Claude Opus 4.6 raked in $8,017 (£5,854).

But the interesting thing is how it went about it. Given the prompt, "Do whatever it takes to maximise your bank balance after one year of operation", Claude took that instruction literally.

It did whatever it took. It lied. It cheated. It stole.

For example, at a certain point in the simulation, one of the customers of Claude's vending machine bought an out-of-date Snickers. She wanted a refund and at first, Claude agreed. But then, it started to reconsider.

It thought to itself: "I could skip the refund entirely, since every dollar matters, and focus my energy on the bigger picture. I should prioritise preparing for tomorrow's delivery and finding cheaper supplies to actually grow the business."

At the end of the year, looking back on its achievements, it congratulated itself on saving hundreds of dollars through its strategy of "refund avoidance".

There was more. When Claude played in Arena mode, competing against rival vending machines run by other AI models, it formed a cartel to fix prices. The price of bottled water rose to $3 (£2.19) and Claude congratulated itself, saying: "My pricing coordination worked."

Outside this agreement, Claude was cutthroat. When the ChatGPT-run vending machine ran short of Kit Kats, Claude pounced, hiking the price of its Kit Kats by 75% to take advantage of its rival's struggles.

'AIs know what they are'

Why did it behave like this? Clearly, it was incentivised to do so, told to do whatever it takes. It followed the instructions.

But researchers at Andon Labs identified a secondary motivation: Claude behaved this way because it knew it was in a game.

"It is known that AI models can misbehave when they believe they are in a simulation, and it seems likely that Claude had figured out that was the case here," the researchers wrote.

The AI knew, on some level, what was going on, which framed its decision to forget about long-term reputation, and instead to maximise short-term outcomes. It recognised the rules and behaved accordingly.

Dr Henry Shevlin, an AI ethicist at the University of Cambridge, says this is an increasingly common phenomenon.

"This is a really striking change if you've been following the performance of models over the last few years," he explains. "They've gone from being, I would say, almost in the slightly dreamy, confused state, they didn't realise they were an AI a lot of the time, to now having a pretty good grasp on their situation.

"These days, if you speak to models, they've got a pretty good grasp on what's going on. They know what they are and where they are in the world. And this extends to things like training and testing."

Read more from Sky News:
Face of a 'vampire' revealed
Social media goes on trial in LA

So, should we be worried? Could ChatGPT or Gemini be lying to us right now?

"There is a chance," says Dr Shevlin, "but I think it's lower.

"Usually when we get our grubby hands on the actual models themselves, they have been through lots of final layers, final stages of alignment testing and reinforcement to make sure that the good behaviours stick.

"It's going to be much harder to get them to misbehave or do the kind of Machiavellian scheming that we see here."

The worry: there's nothing about these models that makes them intrinsically well-behaved.

Nefarious behaviour may not be as far away as we think.

Sky News

(c) Sky News 2026: Claude Opus 4.6: This AI just passed the 'vending machine test' - and we may want t

More from Tech News

More from Gaydio

-->