In 1975, Herbert A. Simon was awarded the Turing Award by the Association for Computing Machinery. This award, given to “an individual selected for contributions of a technical nature made to the computing community”, is widely considered the Nobel Prize of computing.

Simon and co-recipient Allen Newell made basic contributions to artificial intelligence, the psychology of human cognition, and list processing.

It is interesting to note that, alongside his contributions to artificial intelligence and list processing, he was also recognised for his work on the psychology of human cognition. At first glance, one would think that understanding how humans think is about as far from computer science as you can get!

However, there are two key arguments for why understanding human cognition matters for advances in computer science, and especially in AI.

Imitating Humans

In his seminal 1950 paper “Computing Machinery and Intelligence,” Alan Turing introduced what became known as the Turing test. In this “imitation game,” a human interrogator holds a written dialogue with a computer, and the computer tries to fool the interrogator into thinking it is human by devising the responses it thinks a human would give.
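To make the idea concrete, here is a minimal, purely conceptual sketch of that set-up in Python. It is not Turing's formal protocol: both respondent functions and the interrogator are hypothetical stand-ins, scripted only so the example runs end to end.

```python
# Conceptual sketch of the imitation game: an interrogator exchanges written
# questions with two hidden respondents, one human and one machine, and must
# guess which is which. Both respondents below are hypothetical stand-ins.

import random

def human_respondent(question: str) -> str:
    # Stand-in for a person typing replies; scripted so the sketch runs.
    return f"Honestly, I'd have to think about '{question}' for a moment."

def machine_respondent(question: str) -> str:
    # Stand-in for a chatbot trying to answer the way a human would.
    return f"Good question. My gut feeling about '{question}' is... maybe?"

def imitation_game(questions, interrogator_guess) -> bool:
    # Hide the two identities behind labels A and B in a random order.
    pair = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(pair)
    labels = dict(zip("AB", pair))

    # Collect each respondent's written answers to every question.
    transcript = [(label, q, respond(q))
                  for q in questions
                  for label, (_, respond) in labels.items()]

    guess = interrogator_guess(transcript)  # interrogator returns "A" or "B"
    machine_label = next(l for l, (kind, _) in labels.items() if kind == "machine")
    # The machine "wins" the round if the interrogator picks the wrong label.
    return guess != machine_label

# Example: an interrogator guessing at random is fooled about half the time,
# which is the baseline a convincing machine is trying to reach.
fooled = imitation_game(["Are you human?", "What did you have for breakfast?"],
                        lambda transcript: random.choice("AB"))
print("Machine fooled the interrogator this round:", fooled)
```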

One of the key aims of AI is to train computers to make decisions the way humans do, whether that means labelling pictures or responding to questions. Even if the aim is task-specific rather than replicating human intelligence in its entirety, it is crucial that developers of AI have some understanding of human cognition, so that they know what they are trying to replicate.
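As a small illustration of that task-specific case, the sketch below (assuming scikit-learn is installed) trains a model to label pictures of handwritten digits. The targets it learns from were assigned by people, so the model is, in effect, being trained to reproduce human judgements on one narrow task.

```python
# Minimal sketch of a task-specific system: labelling pictures of handwritten
# digits. The labels were assigned by humans, so "accuracy" here measures how
# often the model agrees with human judgement.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 images plus human-assigned labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)   # simple baseline classifier
model.fit(X_train, y_train)                 # fit to the human-provided labels

print(f"Agreement with human labels on held-out images: "
      f"{model.score(X_test, y_test):.2%}")
```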

