
AI Inbreeding Produces Artificial Idiocy

Originally published at Newsmax

Can today’s artificial intelligence systems be used to train superior artificial intelligence systems of tomorrow? Can AI write better AI that writes better AI, leading to a potentially god-like artificial general intelligence?

Writers like Yuval Harari and Ray Kurzweil think so. But a recent, insightful paper by collaborators from Oxford, Cambridge, and other prestigious institutions delivers the evidence for generative AI like ChatGPT.

The answer is no. Generative AI giving birth to more AI, like repetitive inbreeding, does not get smarter. It degenerates. The inbred AI becomes more stupid.

Generative AI uses copious quantities of training data from a genre to generate new objects within that genre. For example:

  • Large language models (LLMs) like ChatGPT use language for training.
  • Image generators offered by companies like Midjourney train on images.
  • Text-to-image models like DALL-E use descriptive text to generate images.
  • For programmers, GitHub offers Copilot, which generates computer code.

Even though these programs produce remarkable results, repeated use of one generative AI program to train another results in model collapse. The AI becomes dumber and dumber.

For example, consider LLMs like ChatGPT. If only the output from the original LLM #0 is used to train LLM #1, only the output of LLM #1 is used to train LLM #2, and so on, the AI eventually suffers model collapse and becomes a blubbering idiot.
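To make the inbreeding concrete, here is a toy sketch in Python. It is not the paper's experiment: instead of an LLM, each "model" is just a Gaussian fitted to its predecessor's output, which is enough to show the degenerative loop.

    import random
    import statistics

    def train(data):
        # "Train" a toy model: fit a Gaussian (mean, stdev) to the data.
        return statistics.mean(data), statistics.stdev(data)

    def generate(model, n):
        # Sample n pieces of synthetic training data from the fitted model.
        mu, sigma = model
        return [random.gauss(mu, sigma) for _ in range(n)]

    random.seed(0)
    human_data = [random.gauss(0.0, 1.0) for _ in range(10)]  # stand-in for human text

    model = train(human_data)  # LLM #0
    for gen in range(1, 101):
        synthetic = generate(model, 10)  # output of the previous generation only
        model = train(synthetic)         # LLM #gen, trained purely on it
        if gen % 20 == 0:
            print(f"generation {gen:3d}: mean={model[0]:+.4f}, stdev={model[1]:.4f}")

Run it and, in this toy at least, the fitted spread withers toward zero over the generations: the model forgets the rare cases first and ends up repeating a narrow sliver of its original range. The paper documents the same loss of the distribution's tails in real LLMs.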

Nothing resembling superintelligence ever emerges from this repeated process. The opposite happens. The result is artificial idiocy.

The authors of “The Curse of Recursion,” the paper that identified model collapse, note that artificial idiocy can be avoided in LLMs by a transfusion of fresh information from the creative minds of humans. They conclude that “to avoid model collapse, access to genuine human-generated content is essential.”
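That remedy maps directly onto the toy sketch above: mix a steady trickle of genuine human data into each generation's training set. The 20 percent mixing rate below is an arbitrary illustration, not a figure from the paper.

    import random
    import statistics

    random.seed(0)
    model = (0.0, 1.0)  # start from the "human" distribution
    for gen in range(1, 101):
        mu, sigma = model
        synthetic = [random.gauss(mu, sigma) for _ in range(8)]  # inbred data
        fresh = [random.gauss(0.0, 1.0) for _ in range(2)]       # human transfusion
        mixed = synthetic + fresh
        model = (statistics.mean(mixed), statistics.stdev(mixed))
    print(f"after 100 generations: mean={model[0]:+.4f}, stdev={model[1]:.4f}")

In this toy, even that small anchor of real data keeps the spread from withering away, which is the paper's conclusion in miniature.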

Model collapse is illustrated using an LLM dubbed OPT-125m (Open Pre-trained Transformer), initially made available by Meta (a.k.a. Facebook). In the following example from the paper, each new generation is trained on data produced by the previous generation.

Here’s the common prompt given to each generation.

Input: some started before 1360 — was typically accomplished by a master mason and a small team of itinerant masons, supplemented by local parish labourers, according to Poyntz Wright. But other authors reject this model, suggesting instead that leading architects designed the parish church towers based on early examples of Perpendicular

Here is the response from the initially trained LLM, which at least stays on topic:

Revival architecture such as St. John’s Cathedral in London. The earliest surviving example of Perpendicular Revival architecture is found in the 18th @-@ century Church of Our Lady of Guernsey, which dates from the late 19th century. There are two types of perpendicular churches : those…

The output from this LLM is used to train another LLM, which is used to train another LLM, and so on. Here is the response to the same prompt after nine such generations.

 architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @ …

Somehow, the LLM has gotten fixated on jackrabbits. The response has little to nothing to do with the prompt.

Model collapse occurs in other uses of generative AI. Here’s an example using images.

  • The AI program GPT-4V can provide a written description of an image.
  • Give DALL-E a description of an image, and it will generate an image that matches the description.

Suppose, then, that starting with a famous image like the Mona Lisa, we bounce back and forth between these two programs. GPT-4V describes the Mona Lisa, DALL-E generates a new image based on that description, GPT-4V describes this new image, and DALL-E generates a newer image based on that description. Back and forth we go. What happens?

Model collapse.

The back-and-forth iteration eventually takes us from the Mona Lisa masterpiece to a black-and-white picture of a bunch of squiggly parallel lines. A short movie of this degradation, made by Conrad Godfrey, is posted on X (Twitter). It is fun and a little bit spooky to watch.
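For the curious, the loop itself is a few lines of Python. The describe_image and generate_image helpers below are hypothetical stand-ins for calls to a vision model like GPT-4V and an image generator like DALL-E; they are not real API calls.

    from PIL import Image

    def describe_image(image: Image.Image) -> str:
        # Hypothetical placeholder: send the image to a vision model (e.g., GPT-4V).
        raise NotImplementedError("plug in your image-captioning model here")

    def generate_image(description: str) -> Image.Image:
        # Hypothetical placeholder: send the text to an image generator (e.g., DALL-E).
        raise NotImplementedError("plug in your text-to-image model here")

    image = Image.open("mona_lisa.jpg")  # start from the original masterpiece
    for step in range(20):
        description = describe_image(image)  # image -> text
        image = generate_image(description)  # text -> image
        image.save(f"generation_{step:02d}.png")

Each round trip preserves only what the description happened to capture and discards everything else, so small losses compound until little of the original remains.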

How might model collapse impact the World Wide Web of the future? LLM systems can get fresh text by going to the web for new material.

But what happens if someday much of the content of the web is written by generative AI? Many web scrapings will come from LLMs, not from creative humans. The generated material will be inbred and will suffer from early signs of model collapse.

Unchecked, the web might fill up with content that reads like the ramblings of a blubbering idiot.

LLMs like ChatGPT produce spectacular results. Under the hood, LLMs impressively manipulate relational syntax to do their magic. They learn arrangements of words and phrases to create well-formed documents.

Humans, on the other hand, are motivated by semantics: the meaning of words and phrases. We pay attention to syntax, but the meaning of the message is of primary importance.

Model collapse illustrates that freshly generated meaning from creative humans is required to advance generative AI to higher levels of performance.