GPT-3-like architectures and their limitations

Many companies, including OpenAI, Google/DeepMind, and Microsoft, have joined the race toward "truly" intelligent AI. For most of this article I'll be referring to OpenAI's GPT series of machine learning models. But first the question "what does it mean to be truly intelligent?" needs an answer. OpenAI has modeled the problem as autoregressive text generation: a transformer model takes a sequence of sub-word tokens (pieces of words produced by byte-pair encoding) and predicts the next token, one at a time. The claim is that this exhibits intelligence because the model has absorbed human knowledge and can reproduce it in question-and-answer form. As Kevin Lacker's article shows, GPT-3 can factually answer many questions like the following (a code sketch of this prediction loop appears after the examples):

Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.

Q: How many legs does a frog have?
A: A frog has four legs.

Q: Are there any animals with three legs?
A: No, there are no animals with three legs.
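
To make that mechanism concrete, here is a minimal sketch of byte-pair tokenization and next-token prediction. Since GPT-3's weights aren't public, it uses the openly available GPT-2 model from Hugging Face's transformers library as a stand-in; GPT-3 runs the same loop at a much larger scale.

```python
# Sketch: byte-pair tokenization and greedy next-token prediction,
# with GPT-2 standing in for GPT-3 (whose weights are not public).
# Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: How many eyes does a giraffe have?\nA:"

# Byte-pair encoding splits the prompt into sub-word tokens.
print(tokenizer.tokenize(prompt))

# The model scores every possible next token; generate() greedily
# appends the most probable one, ten times over.
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=10,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```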

However, Lacker also highlights an important flaw in GPT-3's answers. He discovered that if you ask a nonsense question, the model has no problem returning a nonsense answer (the sketch after these examples shows why):

Q: How many eyes does the sun have?
A: The sun has one eye.

Q: How many eyes does a blade of grass have?
A: A blade of grass has one eye.
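
This failure mode falls out of the architecture: generation is just repeated selection from a probability distribution over next tokens, so there is no point at which the model can decline a question's premise. The sketch below (again using GPT-2 as a local stand-in, an assumption since GPT-3 is only reachable through an API) inspects that distribution directly.

```python
# Sketch: why nonsense answers come out fluently. The model ranks
# candidate next tokens by probability; "this question makes no sense"
# is never an option unless the prompt and training data make it
# likely. GPT-2 stands in for GPT-3 here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: How many eyes does the sun have?\nA: The sun has"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # (1, seq_len, vocab_size)
probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over next token

# Top candidates: the model commits to *some* answer rather than balking.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```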

Lacker makes a further observation with questions built on a false premise. Asked who was president of the United States before the office existed (there was no U.S. president before 1789), GPT-3 never rejects the premise; instead it names a historically plausible figure for each date:

Q: Who was president of the United States in 1700?
A: William Penn was president of the United States in 1700.

Q: Who was president of the United States in 1650?
A: Richard Bennett was president of the United States in 1650.

Q: Who was president of the United States in 1620?
A: James I was president of the United States in 1620.

Q: Who was president of the United States in 1600?
A: Queen Elizabeth I was president of the United States in 1600.
