Underpaid Kenyan workers and a sophisticated prediction algorithm are what truly lie behind ChatGPT. Artificial Intelligence is not what you think it is.
Artificial Intelligence was presented to the world amid a fanfare of excitement and fear. It was promised as the groundbreaking new technology that would disrupt the market. AI could destroy jobs and create new ones, change art and creativity as we know them, and change society as a whole.
Billions of dollars have already been poured into artificial intelligence. Microsoft closed a $10 billion deal with OpenAI, making it one of the sector’s most valuable companies. Google too plunged into the new craze and announced Bard, its own AI chatbot.
The AI craze started last year when OpenAI, the company recently backed by Microsoft’s multibillion-dollar investment, released ChatGPT. This is artificial intelligence software capable of generating text from just a few prompts. ChatGPT can write letters, essays, resumes and summaries after a simple request from the user. It can also change the style of a text and correct the user’s mistakes when asked.
ChatGPT is seemingly an unprecedented technology with incredible potential. Except this potential hides a sad, much less glamorous reality. ChatGPT is a text generator, not a creator. Its technology is so-called “generative AI”. And, in fact, it is much less “artificial” or “intelligent” than one might think.
Neither intelligent…
A “generative AI” is essentially an algorithm that generates sequences of output. It does not know how to do anything else; it just generates whatever it is programmed to generate. ChatGPT is an AI programmed to generate text; DALL-E (another OpenAI product) is an AI programmed to generate images.
This means that ChatGPT takes a vast amount of the text available on the internet, processes it, and “replies” with a string of words. ChatGPT does not even understand grammar; it just “predicts” which word is most likely to come next, based on patterns in human-produced writing.
Predicting and generating is all that ChatGPT does. Having been trained on an enormous body of existing text, it can use a probabilistic model to guess which word comes next in the text it is generating.
ChatGPT does not understand language, creativity, humankind or intelligence of any sort. It just predicts and generates.
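To make the idea concrete, here is a minimal sketch in Python of what “predicting the next word” means. It uses a toy bigram model over a tiny made-up corpus; the corpus and the `predict_next` and `generate` functions are purely illustrative assumptions, nothing like OpenAI’s actual system, which relies on huge neural networks trained on vastly more text. The core mechanic, though, is the same kind of predicting and generating described above: count which words tend to follow which, then sample the next word accordingly.

```python
import random
from collections import defaultdict, Counter

# Toy illustration of next-word prediction with a bigram model.
# Real systems like ChatGPT use neural networks trained on vastly more
# text, but the basic idea is the same: estimate which word is likely
# to follow the words seen so far, then pick one.

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Sample a likely next word, in proportion to how often it
    followed `word` in the training text."""
    counts = follow_counts[word]
    if not counts:
        return "."
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

def generate(start, length=8):
    """Generate text by repeatedly predicting the next word."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))
# Possible output: "the cat sat on the rug . the dog"
```

The toy model never “understands” cats or rugs; it only reproduces statistical patterns in the text it was fed, which is the point of the analogy.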
This is how ChatGPT and other AI-based text generators get so many things wrong. It is also why Bard gave a plainly wrong answer to a very simple question, wiping roughly $100 billion off Google’s market value.
… nor artificial
Naturally, for ChatGPT to predict its text accurately, someone has to feed it the vast body of human writing that exists on the internet. And, unfortunately, the internet can be a rather nasty place.
A TIME Magazine exclusive report found that OpenAI outsources to Kenyan workers the job of filtering toxic content out of ChatGPT. These workers are subjected daily to descriptions of torture, sexual abuse and violence against children, all for pay of around $2 an hour.
Therefore, ChatGPT cannot “artificially” select the content it predicts from. It needs “adult supervision”: filtering done by underpaid workers, hidden far away from the lights and glitter of San Francisco.
In conclusion, generative AI is an undeniably fascinating technology. It will surely change some industries, perhaps even revolutionize some markets. It will become an incredibly useful tool for human workers in many fields.
But that is what generative AI is. A tool that works only with human supervision.
Tech companies have to ride the wave of every new, amazing technology; their stock prices depend on it. Sometimes they are right, most of the time they are wrong. Any statement coming from a tech giant should always be taken with a grain of salt.