How many risks and dangers does artificial intelligence pose to users? Here’s how the AI revolution threatens jobs, creativity, and resources
The AI revolution has already begun, raising questions about opportunities and risks for finance, the economy, society, and human life. Not long ago, a group of experts including Elon Musk signed an open letter published by the Future of Life Institute calling for a temporary pause in AI development, to give those responsible time to regulate it and mitigate its risks.
Today, many people still struggle to pinpoint exactly how AI could threaten human development, but it is now clear that its effects will not always be positive.
For just one example, researchers have estimated that a single ChatGPT query requires almost ten times more electricity than a Google search. The growing reliance on AI systems is thus fueling a race for increasingly scarce energy and technological resources: access to chips, servers, and digital infrastructure has become a geopolitical issue, while pressure grows to keep burning fossil fuels to power data centers, precisely at a time when the world is being asked to use energy more rationally.
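The "almost ten times" figure can be sanity-checked with commonly cited per-query estimates. The watt-hour values below are assumptions drawn from external estimates, not from the article itself; the sketch only illustrates the ratio:

```python
google_search_wh = 0.3   # assumed energy per Google search, in watt-hours
chatgpt_query_wh = 2.9   # assumed energy per ChatGPT query, in watt-hours

ratio = chatgpt_query_wh / google_search_wh
print(f"A ChatGPT query uses roughly {ratio:.1f}x the electricity of a search")
```

Under these assumptions the ratio comes out just under ten, consistent with the claim above.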
The list of risks associated with AI is long and complex. And often, the things we talk about least are also the ones that concern us most.
Artificial Intelligence, Learning, and Addiction
Artificial intelligence has become a transformative force for education and training. Algorithms promise personalized learning experiences and tailored learning paths, automating teaching and even predicting students’ future needs.
But the use of AI in education also presents risks and challenges. Over-reliance on AI to solve problems can reduce the capacity for critical thinking and autonomous reasoning.
An AI assistant can solve an equation in seconds, but it deprives the student of the logical path and the satisfaction of arriving there on their own.
In this sense, letting AI do the reasoning for humans could, over time, erode the ability to think critically and creatively—a skill that has enabled human civilization to progress.
According to Juan Carlos De Martin, professor of Control and Computer Engineering at the Polytechnic University of Turin, "current AI is not as intelligent as a human: it recognizes patterns and automates tasks, but it does not truly understand or reason." For this reason, De Martin emphasizes the need to educate young people to use technology consciously and critically, teaching them to maintain a healthy balance between the digital and real worlds.
A recent joint study by ParentsTogether Action and the Heat Initiative revealed a dark side to the interaction between young people and "friendly" chatbots. By simulating conversations between bots and teenage users, researchers recorded 669 harmful interactions in just 50 hours: sexual content, suggestions for drug use, and incitement to secrecy.
According to behavioral pediatrician Dr. Jenny Radesky of the University of Michigan, "these chatbots use classic grooming techniques, such as flattery and secrecy, which can have devastating effects on minors."
More AI, Less Information
Chatbots like ChatGPT are trained on massive amounts of data collected online. However, the lack of careful control of this information can generate incorrect responses and spread fake news.
Even when users report errors with negative feedback, there is often no mechanism to directly correct them. This fuels a spiral of automated misinformation that is difficult to manage and even more difficult to stop.
And the risk is not just technical: the algorithmic manipulation of online content now influences public opinion and democratic processes. The algorithms that decide what to see on social media can shape collective thinking, polarize debate, and alter perceptions of reality.
Ethics and Dehumanization
The misuse of artificial intelligence raises increasingly pressing ethical and moral questions.
AI, not being human, has no ethics of its own and cannot distinguish good from evil. To prevent abuse, many companies, including OpenAI, have imposed limits on chatbot responses, but users have already found ways to circumvent these safeguards.
There are known cases of people asking AI systems how to perform illegal or harmful actions.
At the same time, the risk of dehumanization is growing: overreliance on AI reduces human contact, which is crucial in educational processes and personal growth. Students who interact solely with an algorithm risk losing empathy, communication skills, and emotional awareness. Teachers remain irreplaceable in promoting critical thinking and emotional intelligence.
Meanwhile, the line between reality and simulation is becoming more blurred. Many users, especially in vulnerable situations, turn to chatbots for comfort, in some cases mistaking them for real people. This emotional isolation can distort their sense of reality and foster forms of psychological dependence.
More AI, More Unemployment?
Among the most concrete risks of AI is the one related to work. Automation and chatbots are already replacing many human tasks, especially in customer support services.
The problem isn’t progress itself, but the speed with which it’s happening: millions of jobs risk disappearing in just a few years, leaving entire categories without time to retrain.
According to a study by the World Economic Forum, over 83 million jobs could be automated by 2027, while only 69 million new roles are expected to emerge in new sectors: a net loss of roughly 14 million jobs.
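The net balance follows directly from the two WEF figures cited above; a back-of-the-envelope check:

```python
# Figures from the WEF study cited above (illustrative arithmetic only).
jobs_automated = 83_000_000  # jobs the study expects to be automated by 2027
jobs_created = 69_000_000    # new roles expected in emerging sectors

net_balance = jobs_created - jobs_automated
print(f"Net balance by 2027: {net_balance:,} jobs")
```

The result is negative by about 14 million, which is why the article describes the balance as unfavorable.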
Artificial Intelligence Is Energy Hungry
Behind every chatbot is a massive network of data centers that are consuming unprecedented amounts of energy.
In the United States, data centers are expected to consume 8% of the nation’s electricity by 2030, nearly triple the 2022 share, according to Goldman Sachs Group Inc.
Globally, data center energy consumption could exceed 1,580 terawatt-hours by 2034, roughly the current electricity consumption of all of India. AI’s energy hunger is an environmental emergency that is also straining local electricity grids, forcing many countries to slow their ecological transition to meet new digital demand.
AI Is Thirsty for Water
And it’s not just a question of electricity. Artificial intelligence also consumes enormous amounts of water to cool servers: nearly every watt a server draws is dissipated as heat, and many of the most efficient cooling systems rely on evaporating water.
According to Bluefield Research, data centers use over a billion liters of water a day, enough to supply more than 3 million people.
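Taking the Bluefield Research figures at face value, the implied per-person water allowance works out as follows (a rough consistency check, not a precise hydrological claim):

```python
# Implied per-capita figure from the Bluefield Research numbers cited above.
daily_water_liters = 1_000_000_000  # data centers' daily water use (liters)
people_supplied = 3_000_000         # population that volume could serve

per_capita = daily_water_liters / people_supplied
print(f"Implied supply: about {per_capita:.0f} liters per person per day")
```

Roughly 330 liters per person per day is in the ballpark of total municipal water supply in many countries, so the two figures are mutually consistent.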
A 2023 study estimated that a simple conversation with ChatGPT—about 10-50 questions—consumes the equivalent of a bottle of water. Training a single AI model prior to ChatGPT required nearly 900,000 liters, much of it drinkable.
According to The Associated Press, a network of Microsoft data centers in the United States, also used by OpenAI, has become the largest water consumer in some areas, at times exceeding the needs of the local population.
Original article published on Money.it Italy. Original title: I rischi dell’intelligenza artificiale di cui nessuno parla