Henry Kissinger: ChatGPT could Jeopardize Democracy

Money.it

9 March 2023 - 08:00


In a long editorial in the Wall Street Journal, the former Secretary of State discusses the risks of AI and the next phase of human evolution: Homo technicus.


In recent years, fear has begun to spread among Silicon Valley tycoons that artificial intelligence, once it reaches the singularity, could become a danger to humanity.

Can artificial intelligence become a threat to humanity?

Numerous and disparate currents of thinkers, including critical voices within transhumanism, are starting to worry about the future of humanity.

Bill Gates, for example, following the alarm raised by Elon Musk and the astrophysicist Stephen Hawking, weighed in on the debate over the scenarios that the development of AI could open up, and expressed his concern:

"They are among those worried about super intelligence [...] In the beginning, machines will do a lot of work for us and they won’t be super intelligent. This will be good if well managed […] A few decades later, however, intelligence will be strong enough to become a concern."

Nick Bostrom’s appeal

The transhumanist philosopher Nick Bostrom, too, in his book Superintelligence, predicts what superpowers and strategies a superintelligent AI could use against us: from hacking its way out of the control of its human "guardians" to an outright attack in which it could eliminate "the human species and any automatic system created by human beings that can intelligently oppose the execution of its plans. This goal could be achieved by activating some advanced weapon system that the AI has devised using the superpower of technological research and secretly installed during the clandestine preparation phase."

In short, a technological challenge could become a survival challenge.

Henry Kissinger’s editorial

To the critical voices on the future of AI has now been added that of the 99-year-old Henry Kissinger, who, in an editorial published in the Wall Street Journal together with Eric Schmidt and Daniel Huttenlocher, felt the need to warn the world of the danger posed by artificial intelligence.

"Artificial intelligences pose practical and philosophical challenges that humanity has not faced since the Renaissance", writes the former Secretary of State who continues: "If we are to successfully navigate this transformation, it will be necessary to develop new concepts of human thought and interaction with machines. This is the essential challenge of the age of artificial intelligence".

In the long, dense article, Kissinger ventures comparisons between the arrival of artificial intelligence and the spread of other technologies that have changed the way humans experience the world around them (above all, Gutenberg’s movable-type press).

There are many concrete dangers of AI, starting with the practical ones:

"We need to include a caveat to this prediction: What if this technology can’t be fully controlled? What if there were always ways to generate falsehoods, fake images and fake videos, and people would never learn to disbelieve what they see and hear? Humans are taught from birth to believe what they see and hear, and this may no longer be true due to generative AI."

Mystical-apocalyptic concerns

Kissinger not only examines these possible problems with the use of AI but also dedicates space to mystical-apocalyptic concerns. We need to start worrying, warns Kissinger, about possible techno-reactionary cults that could begin to worship AIs as pagan gods and push the world back to the time of the wars of religion:

"The arrival of an unknowable and apparently omniscient tool, capable of altering reality, can trigger a resurgence of mystical religiosity. The potential of group obedience to an authority whose reasoning is largely inaccessible to his subjects has been seen from time to time in human history, perhaps most dramatically and recently in the 20th century subjugation of whole masses of humanity under the slogan of ideologies on both sides of the political spectrum. A third way of knowing the world could emerge, which is neither human reason nor faith. What becomes democracy in such a world?".

A threat to democracy?

Furthermore, Kissinger also sees the risk for the stability of democracies:

"Without guiding principles, humanity runs the risk of domination or anarchy, of unlimited authority or nihilistic freedom. The need to relate major societal changes to ethical justifications and new visions for the future will appear in a new form. If the axioms expressed by ChatGPT are not translated into a humanly recognizable effort, alienation of society and even revolution could become probable.
Without proper moral and intellectual foundations, the machines used in governance could control rather than amplify our humanity and trap us forever. In such a world, AI could amplify human freedom and transcend limitless challenges.
"

We need to start worrying, Kissinger intimates, about how the use and abuse of artificial intelligence could change human nature in its essence and transform us into a new species, a disturbing biotechnological hybrid.

Is this a dig at those who, like Elon Musk, think they can stem the danger of artificial intelligence by hybridizing humans with machines?

What to do?

How do we save the world from itself, and how do we avoid handing the Earth over to machines? Not even Kissinger knows the answer. But he has already thought of a name for the next phase of human evolution: Homo technicus.

"When we become Homo technicus, we have the imperative to define the purpose of our species. It’s up to us to provide the real answers."

Before the transformation of the human being takes place, writes the former Secretary of State, we must answer the question that defines this era more than any other: what is the purpose of our species? The editorial does not answer the question, but it does suggest adopting parameters for the responsible use of AI as soon as possible, before it is too late:

"Trust in artificial intelligence requires improvements on multiple levels of reliability: in the accuracy and safety of the machine, in the alignment of AI goals with human goals, and in the accountability of the humans who govern the machine. But even as AI systems become more technically robust, humans will still need to find new, simple, and accessible ways to understand and critically challenge the structures, processes, and outcomes of AI systems."

"It is necessary to establish parameters for responsible use of AI, with variations based on the type of technology and the context of implementation. Language models like ChatGPT require limits to its conclusions. ChatGPT needs to know and transmit what it does not know and cannot transmit".

Original article published on Money.it Italy 2023-03-09 08:00:00. Original title: ChatGPT potrebbe mettere a repentaglio la democrazia: parola di Henry Kissinger
