This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

From ancient mythology to modern science fiction, humans have been dreaming of creating artificial intelligence for millennia. But the endeavor of synthesizing intelligence only began in earnest in the summer of 1956, when a small group of scientists gathered at Dartmouth College in New Hampshire for a two-month workshop to create machines that could “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

The workshop marked the official beginning of AI as a field. But that two-month effort, and the many that followed, proved only that human intelligence is very complicated, and that its complexity becomes more evident the more you try to replicate it.

That is why, despite more than six decades of research and development, we still don’t have AI that rivals the cognitive abilities of a human child, let alone one that can think like an adult. What we do have is a field of science split into two categories: artificial narrow intelligence (ANI), which is what we have today, and artificial general intelligence (AGI), which is what we hope to achieve.

What are the requirements of AGI?

Defining artificial general intelligence is difficult. “General” is, by design, a broad term, and even if we take human intelligence as the baseline, not all humans are equally intelligent.

But there are several traits a generally intelligent system should have, such as common sense, background knowledge, transfer learning, abstraction, and causal reasoning. These are the kinds of capabilities you see in all humans from an early age.
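
To make one of these traits concrete, here is a minimal sketch of transfer learning, assuming PyTorch and torchvision are available: a network pretrained on ImageNet is reused as the starting point for a new task, so knowledge learned once is carried over rather than relearned from scratch. The torchvision calls are standard, but the 10-class target task and the hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet: its learned features
# act as prior "background knowledge" for the new task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its weights are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for a new, hypothetical 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is trained; everything else is transferred knowledge.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Note that this kind of transfer is narrow: the reused features only help on closely related tasks, whereas the transfer expected of an AGI would carry across very different domains.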

