Developing AI serves to protect the US: this, in short, is the argument with which Sam Altman tries to defend his position and justify the enormous investments that AI has required.

Artificial intelligence is now openly a bubble; even Jeff Bezos readily admits as much. Yet tech stock prices remain very high, and entrepreneurs focused solely on AI, like Sam Altman of OpenAI (the company behind ChatGPT), keep repeating their sales pitch, pumping air into the bubble until the very last possible day.
Recently, answering a question from the audience, Altman gave a response that clearly explains how this wave of speculation was triggered: AI was, and is, sold first and foremost as the military technology frontier of the present era. We know how sensitive the United States is to this kind of argument, and, as often happens in tech, the markets have been called upon to help finance projects that will ultimately require funding from the Pentagon.
Obviously, as always, the last ones to enter the arena will be the ones who pay all the costs, while those who started the speculation will reap huge profits.
Returning to Altman, he was asked: "What are your biggest fears about AI, and how can we avoid reaching scenarios where they materialize?" And here’s his first response:
I think there are three categories of fear, let’s say.
The first is that a "bad guy" gets superintelligence first and abuses it before the rest of the world has a version powerful enough to defend itself. So, an adversary of the United States says, "I’m going to use this superintelligence to design a bioweapon," "to bring down the US power grid," to, you know, "get into the financial system and take everyone’s money." Something that would simply be difficult to imagine without significantly superhuman intelligence, but with it becomes very possible, and since we don’t have it, we can’t defend ourselves. This is the first category, the broad category one.

It’s clear how, in a very "American" style, the commercial narrative is based on danger, on the risk that an "enemy" will arrive first and threaten US supremacy in the world.
Dangers two and three are, for Altman, at least intrinsic to AI itself: loss of control over the technology, which turns hostile and works against humans and against its own shutdown; and passive use by society, which ends up depending on AI for everything, losing any kind of autonomous decision-making capacity and even the ability to interpret the world. It’s worth reading the transcript of the response:
This is the category where models, in a sense, accidentally take over the world. They never ’wake up’, they don’t do the sci-fi thing, but they simply become so ingrained in society, and they’re so much smarter than us, that we can’t really understand what they’re doing, but we have to rely on them somehow.
And even without a drop of malice on anyone’s part, society can simply veer in a strange direction. There are young people who say, ’I can’t make any decisions in my life without telling ChatGPT everything that happens. It knows me, it knows my friends, I’ll do whatever it tells me.’ That seems really negative to me. And it’s a very common thing among young people. What if AI becomes so smart that the President of the United States can’t do better than follow ChatGPT 7’s recommendation, but can’t even understand it?
What if I couldn’t make a better decision about how to run OpenAI and said, ’You know what? I’m completely handing it over. ChatGPT 7, you’re in charge. Good luck.’ That might be the right decision in every single case, but it means that society has collectively handed over a significant portion of the decision-making to this very powerful system that is learning from us, improving for us, evolving with us, but in ways we don’t fully understand. So, this is the third category of how I think things could go wrong."

Notice how, in Altman’s scenario, the US President uses ChatGPT and not some other AI... we’re truly at the level of a street salesman. That nuance aside, it’s worrying that a scenario that is far from hypothetical, catastrophic for humanity in the deepest sense, and already unfolding is ranked only third on the list of dangers associated with this technology.
Even more striking, though here we are still talking about the salesman, is that none of the scenarios considers how flawed and error-prone the technology actually is: all the problems stem from the idea that AI is, or will be, too powerful, and that we are too stupid, incompetent, and clueless to stand up to it and control it.

The harsh truth is that everything that could be developed technologically on LLM models has already been done, and what lies before us is merely refinement. At the same time, none of the AI companies is yet making a profit, and it seems increasingly unlikely that they will ever do so to an extent significant enough to repay the enormous investments made so far, not to mention the energy costs constantly required to keep everything running.
What will happen is that before long (the fact that Bezos is talking about it openly is a sign not to be ignored) the bubble will burst, and many companies riding the crest of the wave will fail or shrink into sector-specific businesses. The technology will remain, and it will be more expensive, since those who use it will have to pay the actual cost of development and computing; forget about widespread free access, which lacks a business model to support it. Even Google could backtrack, and it would do well to.
Add to this the copyright factor and it becomes truly difficult to imagine a sustainable future for these technologies which, as extraordinary as they are, have so far certainly done more harm than good. All because, essentially, AI was sold as a weapon. That’s just the way the world works.

Original article published on Money.it Italy 2025-10-07 16:04:45. Original title: L’AI non ha mercato, ecco quindi come Sam Altman prova a venderla ancora una volta