Un-Hyping The Artificial Intelligence

Let’s demystify AI once and for all. What are its roots? Is AI really here? If not, then what is the current reality?

I’ve been an editorial associate for TDS for a few years, and I’ve been inspired by the quality of the content I’ve had the pleasure of reviewing and, most of all, learning from. What I’ve also realized is that for all the technical expertise in this evolving discipline, we still need to answer some fundamental questions about AI. I will address those questions in “Ask Us Anything”. I welcome your comments. Enjoy!

Is Artificial Intelligence Really Here?

In the purest definition, Artificial Intelligence has not yet arrived.

Geoffrey Hinton of Google Brain believed that computers could think just like humans, using intuition rather than rules. He said this:

We’re machines… We’re just produced biologically. Most people doing AI don’t have doubt that we’re machines. We’re just extremely fancy machines. And I shouldn’t say just. We’re special, wonderful machines.

Here is how John McCarthy, the father of AI, defined it:

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs.

In other words, computer-enabled systems process information to produce outcomes in a manner similar to the human thought process in learning, decision making and problem solving. Here are the cornerstones of AI:

  • Learns from experience
  • Uses that learning to reason
  • Senses — that is, recognizes images, sounds, etc.
  • Solves complex problems
  • Understands language and its nuances
  • Creates perspectives
  • And over time, reduces its error rate

AI, in fact, is a broad term that surfaced in the 1950s amid attempts to get computers to perform human tasks. While we are moving toward that goal, we are still far from reaching it.

Most of what we identify as AI today involves massive number crunching: taking into account more variables than a human possibly could in order to categorize incoming data and make fairly good predictions.

Here are the stages of AI:

  • Narrow AI (ANI) — dedicated to assisting with or taking over specific tasks.
  • General AI (AGI) — machines are self-aware and have the ability to perform general intelligent action. General AI would, effectively, be able to handle any problem you threw at it.
  • Super AI (ASI) — machines that are an order of magnitude smarter than humans.
  • Singularity — at this stage, the operative word is transcendence, and the “exponential development path enabled by ASI could lead to a massive expansion in human capability.”

Today, machines are trained; i.e., humans write code to create a system capable of learning one thing. We are squarely in a time when we are teaching machines to do just one or a few things as well as, or better than, humans.

Robot Thinker — AI Progress via Deposit Photos

What Are the Roots of AI?

The Biological Neuron and How Humans Process Information

To understand how AI evolves, we also need to understand the human brain, all the way down to the individual neuron — which is at the heart of AI. “Neurons are specialized cells of the brain and nervous system that transmit signals throughout the body.” Their interactions define who we are as people. Neurons sense both internal and external stimuli; they process information and transmit signals throughout the body. They send these signals in the form of commands to our muscles, which direct our actions.

Every stimulus we receive, internal or external, sends these signals to other neurons within our system. Consider that the average human has 48.6 thoughts per minute, which translates into over 70,000 thoughts per day. If even 10% of those thoughts transmit signals throughout the body, over 7,000 signals can direct commands that prompt the body’s muscles to carry out functions like going to the store, calling a friend, or researching a piece of information. Through AI, the approach is to replicate the behavior of a biological neuron with computers: each node in a network represents a neuron that fires off signals to other nodes. Those signals carry information across the network at unimaginable speeds and activate other neurons in turn, and so on.
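To make that analogy concrete, here is a minimal sketch of a single artificial neuron in Python. The input values, weights, and the sigmoid activation are illustrative assumptions, not measurements of any real biological cell:

```python
import math

def sigmoid(x):
    """Squash the weighted sum into a 0-to-1 'firing strength'."""
    return 1.0 / (1.0 + math.exp(-x))

def artificial_neuron(inputs, weights, bias):
    """One node: combine incoming signals, then decide how strongly to fire.

    Each incoming signal is scaled by a weight (how much this neuron
    'listens' to that input), summed together with a bias, and passed
    through an activation function that produces the outgoing signal.
    """
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Illustrative values only: three incoming signals and hand-picked weights.
signal = artificial_neuron(inputs=[0.9, 0.1, 0.4],
                           weights=[0.8, -0.5, 0.3],
                           bias=0.1)
print(f"outgoing signal strength: {signal:.3f}")  # ~0.709
```

In a real network, the weights are not hand-picked: they start out random and are adjusted as the network learns from data.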

The human brain consists of billions of neurons (roughly 86 billion by most estimates). Each neuron forms about 1,000 connections to other neurons, amounting to trillions of connections.

According to Scientific American:

If each neuron could only help store a single memory, running out of space would be a problem. You might have only a few gigabytes of storage space, similar to the space in an iPod or a USB flash drive. Yet neurons combine, so that each one helps with many memories at a time, exponentially increasing the brain’s memory storage capacity to something closer to around 2.5 petabytes (or a million gigabytes). If your brain worked like a PVR, 2.5 petabytes would be enough to hold three million hours of TV shows. You would have to leave the TV running continuously for more than 300 years to use up all that storage.
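That back-of-the-envelope comparison is easy to sanity-check. Assuming roughly 1 GB of storage per hour of standard-definition video (an assumption for illustration; real bitrates vary):

```python
# Rough check of the Scientific American comparison.
# Assumption: ~1 GB of storage per hour of standard-definition video.
PETABYTE_IN_GB = 1_000_000            # 1 PB = 1 million GB
storage_gb = 2.5 * PETABYTE_IN_GB     # 2.5 PB = 2.5 million GB
hours_of_tv = storage_gb / 1.0        # ~2.5 million hours at 1 GB/hour
years_of_tv = hours_of_tv / (24 * 365)
print(f"{hours_of_tv:,.0f} hours ≈ {years_of_tv:.0f} years of continuous TV")
# -> 2,500,000 hours ≈ 285 years, the same ballpark as the quoted
#    "three million hours" and "more than 300 years"
```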


To What Extent Can Computers Think?

Reality check: computers can only perform as well as the data on which they are trained. Both Machine Learning and Deep Learning have made significant strides.

Machine Learning — can be defined as an approach to achieving artificial intelligence through systems that learn from experience to find patterns in a set of data. This means teaching a computer to recognize patterns by example rather than programming it with specific rules. In common examples, machine learning is used to train computers to identify and categorize images (computer vision) and to understand and analyze human language, both text and voice (Natural Language Processing).
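As a minimal sketch of “learning by example rather than rules”, the snippet below trains a classifier on a toy dataset with scikit-learn. The features and labels are invented purely for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy dataset: each example is [weight_in_grams, smoothness_0_to_1],
# and the label says whether the fruit is an "apple" or an "orange".
# The numbers are invented purely for illustration.
X_train = [[150, 0.9], [170, 0.8], [140, 0.2], [130, 0.3]]
y_train = ["apple", "apple", "orange", "orange"]

# No hand-written rules: the model infers the pattern from the examples.
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

print(model.predict([[160, 0.85]]))  # -> ['apple']
```

The same fit/predict pattern scales from this four-example toy up to the image and language tasks mentioned above; only the data and the model grow.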

Understanding how a neuron functions contributed substantially to Hinton’s pioneering work on neural networks in the practice of Deep Learning. By definition, a neural network is a “set of algorithms, modeled loosely after the human brain, designed to recognize patterns.” A neural network usually involves a large number of processors operating in parallel and arranged in tiers:

  • input layer — the first tier receives the raw input information; this is similar to how the optic nerves function in human visual processing.
  • hidden layers — the middle tiers are built from the many combinations and connections of neurons. Each successive tier receives the output from the tier preceding it; there may be many hidden layers, depending on the volume and combination of neuron connections. The role of a hidden layer is to transform the inputs into something the output layer can use.
  • output layer — the last tier produces the output of the system, i.e., the result (or action) arising from the interactions of the input and hidden layers. Through backpropagation, prediction errors flow backward through the tiers, and the layers modify their connection weights as they learn and take in more information about the world. A minimal sketch of such a tiered network appears after the next paragraph.

In this mechanistic way we are attempting to mimic human brain function. Our brains are spread across a vast network of cells “linked by an endless map of neurons, firing and connecting and transmitting along a billion paths.”
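Here is a minimal sketch of the input/hidden/output tiers described above, using the Keras API from TensorFlow. The layer sizes and the toy task are illustrative assumptions, not a prescribed architecture:

```python
import numpy as np
import tensorflow as tf

# Input tier: 4 raw features per example (the "optic nerve" end).
# Hidden tiers: intermediate layers that transform the inputs.
# Output tier: a single value the rest of the system can act on.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),    # hidden tier 1
    tf.keras.layers.Dense(8, activation="relu"),    # hidden tier 2
    tf.keras.layers.Dense(1, activation="sigmoid"), # output tier
])

# Backpropagation happens inside fit(): prediction errors flow backward
# through the tiers and nudge each connection's weight.
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy data, invented for illustration: learn whether the sum of the
# features exceeds 2 (a pattern inferred from examples, not a coded rule).
X = np.random.rand(256, 4).astype("float32")
y = (X.sum(axis=1) > 2).astype("float32")
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(X[:3], verbose=0))  # firing strengths between 0 and 1
```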

To answer the question: computers are unable to think, at least at the level of humans. Computers are able to look for patterns in data and surface insights as more information is taken in. What computers cannot do is accurately determine context. Human cognition can easily differentiate between Tom Cruise, the actor, and a cruise ship; computers require massive volumes of input data to make these simple distinctions consistently over time. To get there, they still need time to mature and to be reared by their predecessors — yes, the “special, wonderful machines” Hinton deemed us humans to be. :)
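To see why context is hard for machines, here is a deliberately naive sketch that tries to disambiguate the word “cruise” using hand-picked cue words. All of the word lists are invented for illustration; a real system would have to learn such cues from massive amounts of data, and still fails when context is thin:

```python
# Hand-picked cue words, invented purely for illustration.
ACTOR_CUES = {"tom", "actor", "film", "movie", "starred"}
SHIP_CUES = {"ship", "ocean", "deck", "voyage", "sailed"}

def disambiguate_cruise(sentence: str) -> str:
    """Guess the sense of 'cruise' from surrounding words."""
    words = set(sentence.lower().replace(".", "").split())
    actor_score = len(words & ACTOR_CUES)
    ship_score = len(words & SHIP_CUES)
    if actor_score == ship_score:
        return "unknown"  # no usable context: the hard case
    return "actor" if actor_score > ship_score else "ship"

print(disambiguate_cruise("Tom Cruise starred in the film"))  # actor
print(disambiguate_cruise("The cruise ship sailed at dawn"))  # ship
print(disambiguate_cruise("I saw Cruise yesterday"))          # unknown
```

A human resolves that last sentence instantly from lived experience; the machine, lacking context, can only shrug.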
