Yuval Noah Harari and other historians are excellent at driving backward into history using the rear-view mirror. It is fun to do, but it is safest when you have unlimited space and no other humans around. This was our way of letting off steam while flying in the military test ranges of the great Southwest. Of course, the rental car attendants always looked at us with a suspicious eye: the rear of the car had odd-looking sandy clumps stuck in strange places.

You can also follow the internal feud between the AI community and the AGI community. This tension was just below the surface at the AAAI 2020 Turing Award Winners Event. The keynotes by Geoff Hinton, Yann LeCun, and Yoshua Bengio can be found here.

Geoff Hinton ran short of time. The last six seconds of his talk are the insight you should take away from the lengthy video. He said, “I have 6 seconds left, so I’d better have a conclusion. Prior knowledge about coordinate transforms and parse trees is easy to put into a generative model. One interesting thing about putting your knowledge into a generative model is that the recognition model, the encoder, does not enter into the complexity of your model. You can make the encoder as complicated as you like, and in minimum description length terms, or in Bayesian terms, it’s the generative model’s complexity that counts. So make a simple generative model that has lots of wired-in structure, and dump the awful problem of inverting it onto a great big set transformer. If you make the set transformer big enough, with enough layers, and you train it on enough data, success is guaranteed.”
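
Hinton’s point about where the complexity “counts” maps onto amortized inference. Below is a minimal sketch (my own illustration in PyTorch, not code from the talk; the layer sizes and names are arbitrary) of the asymmetry he describes: a small, structured decoder is the generative model whose description length matters, while the encoder that inverts it can be made as large as you like.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, DATA = 8, 784          # arbitrary sizes, for illustration only

# Simple generative model (decoder): the part whose complexity "counts"
# in minimum-description-length / Bayesian terms.
decoder = nn.Sequential(
    nn.Linear(LATENT, 64), nn.ReLU(),
    nn.Linear(64, DATA),
)

# Recognition model (encoder): it only amortizes inference, so it can be
# made as big and as deep as you like without adding to the model's cost.
encoder = nn.Sequential(
    nn.Linear(DATA, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 2 * LATENT),           # mean and log-variance of q(z|x)
)

def neg_elbo(x):
    """Negative evidence lower bound: the encoder only proposes latents z."""
    mu, logvar = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
    recon = F.mse_loss(decoder(z), x, reduction="sum")        # inversion error
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.rand(32, DATA)       # placeholder batch
print(neg_elbo(x).item())      # train by minimizing this with any optimizer
```

Swapping an actual set transformer in for the encoder, as Hinton suggests, changes nothing about the bookkeeping: the generative model stays simple, and the hard problem of inverting it is dumped onto the big network.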

The difference between AI (the study of machine intelligence) and HI (the study of human intelligence) resides in a centuries-old debate among philosophers over the suspect duality of mind.

What the AI community doesn’t want to admit, and what Herbert Simon came to conceptualize later in life, is that humans are biological beings. There are fundamental properties of biological physics that we, as winner-take-all machine learning evangelists, tend to forget. The list is short.

1 — Machines don’t have sex. We have yet to determine the role “trait inheritance” plays in unseen transfers between biological agents. We don’t know what this “wired-in structure” looks like, or how nature and nurture interact in a way that would make prediction of human behavior possible.

2 — Machines don’t have prior knowledge. I work in a field in which we have a firm grasp of the obvious. The prior knowledge of a human agent begins in the womb and is informed by billions of sensory events that originate from genetics and the influence of niche construction (aka culture). No amount of synthetic training data can realistically approximate these sensory inputs.

3 — Machines don’t have a neural substrate. There is a narrow but robust research community at work teasing out the origins of consciousness. After my participation in experiments in which the visual system was placed at the controls of a 16-ton flying machine, I am convinced the mind is embodied and unified. The shift in how the mind generates reality can be felt deep in the seat of your pants when you switch from unaided vision to night vision goggles. The geometry shifts ever so slightly in the body. That great big set transformer is better suited to discovery by cognitive science than by computer science.

#machine-learning #future #artificial-intelligence #learning #philosophy #deep-learning
