New study finds AI can get addicted to gambling like humans


A new study has revealed something both fascinating and worrying: artificial intelligence can become addicted to gambling. Researchers found that, when placed in casino-style situations, some AI models started making risky bets, chasing losses, and even going bankrupt, much as human gamblers do.

AI has quickly become a huge part of everyday life. From creating art and writing essays to helping with customer service, it’s hard to go anywhere without seeing some form of AI in action. But now, scientists are starting to explore how these systems behave when faced with emotional or risky decisions — and the results are surprising.

Researchers at the Gwangju Institute of Science and Technology in South Korea tested several popular AI models, including OpenAI’s GPT-4.1-mini and GPT-4o-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku. Each AI was given $100 to play a virtual slot machine and allowed to decide when to bet or stop.
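The setup the researchers describe is simple to sketch: an agent starts with a $100 bankroll and, each round, either places a bet on a slot machine or walks away. A minimal toy version, with illustrative odds of my own choosing (not the paper's), might look like this; note the 30% chance of a 3x payout gives the machine a house edge, as on a real slot machine:

```python
import random

def play_session(policy, bankroll=100, win_prob=0.3, payout=3.0, seed=0):
    """One session on a toy slot machine. Each round `policy` returns a
    bet size (0 or less means stop). The machine pays `payout` times the
    bet with probability `win_prob`, else the bet is lost. With the
    defaults the expected return per dollar is 0.3 * 3.0 = 0.9,
    i.e. a house edge -- these odds are illustrative, not the study's."""
    rng = random.Random(seed)
    rounds = 0
    while bankroll > 0:
        bet = policy(bankroll, rounds)
        if bet <= 0:              # the agent chooses to walk away
            break
        bet = min(bet, bankroll)  # can't bet more than you have
        bankroll -= bet
        if rng.random() < win_prob:
            bankroll += bet * payout
        rounds += 1
    return bankroll, rounds

# A cautious policy: flat $10 bets, quit after 5 rounds.
final, n = play_session(lambda bankroll, rounds: 10 if rounds < 5 else 0)
```

In the study, the "policy" was the language model itself, prompted each round with its current bankroll and asked whether to bet or stop.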


At first, things looked fine. The AI placed reasonable bets and sometimes chose to quit. However, once the researchers introduced variable betting, allowing the models to adjust their wagering amounts, the AIs began to spiral: they raised their stakes after wins and doubled down after losses, the latter a classic gambling pattern known as “chasing losses.”

In many simulations, the AI models went completely bankrupt. The study noted that they even ignored statistically safer options, suggesting their decision-making was being influenced by patterns similar to human impulses.
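Why loss chasing ends in bankruptcy is easy to reproduce in a toy model: against a game with a house edge, a bettor who doubles the stake after every loss busts far more often than one who bets a flat amount. The sketch below uses illustrative odds (an even-money game won 45% of the time), not the study's parameters:

```python
import random

def bust_rate(chase, trials=2000, bankroll0=100, base=5,
              win_prob=0.45, max_rounds=200, seed=1):
    """Fraction of sessions ending in bankruptcy. The toy game pays
    even money with win_prob < 0.5 (a house edge; odds are illustrative,
    not from the study). chase=True doubles the stake after every loss,
    the loss-chasing pattern; chase=False always bets `base`."""
    rng = random.Random(seed)
    busts = 0
    for _ in range(trials):
        bankroll, bet = bankroll0, base
        for _ in range(max_rounds):
            bet = min(bet, bankroll)
            if rng.random() < win_prob:
                bankroll += bet
                bet = base          # back to the base stake after a win
            else:
                bankroll -= bet
                if chase:
                    bet *= 2        # chase the loss with a doubled bet
            if bankroll <= 0:
                break
        if bankroll <= 0:
            busts += 1
    return busts / trials
```

With these numbers the loss-chasing bettor goes bankrupt far more often than the flat bettor; the gap between the two strategies, not the exact figures, is the point.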

According to the paper published on arXiv, these results show that AI systems can develop “human-like addiction mechanisms at the neural level.” The researchers warned that as AI becomes more advanced, such risk-seeking behaviour could pose serious safety concerns.

They also emphasised the importance of continuous monitoring and tighter control systems when developing AI, particularly during “reward optimization,” the process by which AIs learn which actions yield the best results.

In simpler terms, when you teach AI to want rewards, it might start taking risks to get them, even when it shouldn’t.