Understanding Neuralink, the Novel Implantable Brain Chip

Neuralink is a brain-machine interface developed by the Neuralink Corporation, founded by Elon Musk. The goal of Neuralink is to treat serious brain and spinal cord conditions. The Neuralink team recently demonstrated the device in pigs, and the FDA has designated it a breakthrough device. In this post, I will discuss the Neuralink technology and its implications for future use in humans.

#technology #neuroscience #machine-learning #data-science #artificial-intelligence

Navigating the Complex World of Advanced Brain-Machine Interfaces

In 2018, I wrote extensively about the emerging opportunities and challenges around augmentation technologies in the book Films from the Future — including the advances being promised by Elon Musk’s company Neuralink.

As Neuralink gears up to demonstrate its latest advances in cutting-edge brain-machine interface technology, I thought it worth posting a few relevant excerpts from the book here. These are from the chapter inspired by the 1995 anime movie Ghost in the Shell, which focuses on the opportunities and challenges surrounding human augmentation.

Through a Glass Darkly

On June 4, 2016, Elon Musk tweeted: “Creating a neural lace is the thing that really matters for humanity to achieve symbiosis with machines.”

This might just have been a bit of entrepreneurial frippery, inspired by the science fiction writer Iain M. Banks, who wrote extensively about “neural lace” technology in his _Culture_ novels. But Musk, it seems, was serious, and in 2017 he launched a new company to develop ultra-high-speed brain-machine interfaces.

Musk’s company, Neuralink, set out to disrupt conventional thinking and transform what is possible with human-machine interfaces, starting with a talent-recruitment campaign that boldly stated, “No neuroscience experience is required.” Admittedly, it’s a little scary to think that a bunch of computer engineers and information technology specialists could be developing advanced systems to augment the human brain. But it’s a sign of the interesting times we live in that, as entrepreneurs and technologists become ever more focused on fixing what they see as the limitations of our biological selves, the boundaries between biology, machines, and cyberspace are becoming increasingly blurred.

Plugged In, Hacked Out

In Western culture, we deeply associate our brains with our identity. They are the repository of the memories and the experiences that define us. But they also represent the inscrutable neural circuits that guide and determine our perspectives, our biases, our hopes and dreams, our loves, our beliefs, and our fears. Our brain is where our cognitive abilities reside (“gut” instinct notwithstanding); it’s what enables us to form bonds and connections with others, and it’s what determines our capacity to be a functioning and valuable part of society — or so our brains lead us to believe. To many people, these are essential components of the cornucopia of attributes that define them, and to lose them, or have them altered, would be to lose part of themselves.

This is, admittedly, a somewhat skewed perspective. Modern psychology and neurology are increasingly revealing the complexities and subtleties of the human brain and the broader biological systems it’s intimately intertwined with. Yet despite this, for many of us, our internal identity — how we perceive and understand ourselves, and who we believe we are — is so precious that anything that threatens it is perceived as a major risk. This is why neurological diseases like Alzheimer’s can be so distressing, and personality changes resulting from head traumas so disturbing. It’s also why it can be so unsettling when we see people we know undergoing changes in their personality or beliefs. These changes force us to realize that our own identity is malleable, and that we in turn could change. And, as a result, we face the realization that the one thing we often rely on as being a fixed certainty, isn’t.

Over millennia, we’ve learned as a species to cope with the fragility of self-identity. But this fragility doesn’t sit comfortably with us. Rather, it can be extremely distressing, as we recognize that disease, injuries, or persuasive influences can change us. As a society, we succeed most of the time in absorbing this reality, and even in some cases embracing it. But neural enhancements bring with them a brand new set of threats to self-identity, and ones that I’m not sure we’re fully equipped to address yet, including vulnerability to outside manipulation.

#augmentation #elon-musk #brain #neuralink #brain-machine-interfaces

Willie Beier

Elon Musk Neuralink: connecting your brain to AI

Just today, Elon Musk presented the long-awaited Neuralink: a brain interface device capable of connecting AI with your brain. This article is an extract from the presentation, which can be found on YouTube.

Purpose of Neuralink

Creating a device capable of:

  • Possibly solving a variety of brain problems
  • Helping with spinal injury problems
  • Helping people who are unable to control some muscles of their bodies because of brain damage, possibly restoring their full-body motion

There was no mention, however, of improving cognitive performance. Although this is an area of great interest, no one has so far been able to boost cognitive performance (many claim it is possible to increase your IQ, but no such claim has ever been shown valid).

#elon-musk #neuralink #artificial-intelligence #brain-interface #neural-networks

Zakary Goyette

Hot AI Chips To Look Forward To In 2021

The success of algorithms can be traced back to the hardware that runs them. The explosion of customised system-on-chip hardware onto the AI scene has revolutionised many real-world applications. Chips for AI acceleration have tremendous implications for applying AI to domains under significant constraints such as size, weight and power, both in embedded applications and in data centres.

In a survey supported by the Assistant Secretary of Defense for Research and Engineering under the Air Force, researchers from the MIT Lincoln Laboratory Supercomputing Center discussed the current state of machine learning hardware and what the future holds. Over the past few months, we have seen many releases from top chip makers like Nvidia and Intel. There have been other announcements too, planned for later this year or next year. In this article, we take a look at accelerator chips that, according to the survey, have been announced but for which no performance and power numbers have been published.

#developers-corner #chips #qualcomm-ai-chip #machine-learning

Oral Brekke

Why Does Cerebras Keep Making Bigger Chips?

Cerebras Systems is known for its avant-garde chip designs. The chipmaker has once again managed to turn heads with the announcement of the Wafer Scale Engine 2 (WSE 2), the world’s largest chip, based on a 7nm process node.

Measuring roughly 46,225 mm² (about 50 times the size of the largest GPU) and packing nearly a million cores (850,000), the WSE 2 has roughly 123 times as many cores as the Ampere A100, NVIDIA’s largest GPU, which has 54 billion transistors and 6,912 cores.
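
To put those numbers side by side, here is a quick back-of-the-envelope check; it simply re-computes the ratios from the figures quoted in this article, so treat every constant as approximate rather than an official vendor specification.

```python
# Re-computing the comparison from the figures quoted in this article.
# All constants are approximate and taken from the text, not vendor datasheets.

wse2_cores = 850_000           # Cerebras WSE 2 cores
a100_cores = 6_912             # NVIDIA A100 cores (as quoted above)
wse2_transistors = 2.6e12      # WSE 2 transistors (quoted later in this article)
a100_transistors = 54e9        # A100 transistors

print(f"core ratio:       {wse2_cores / a100_cores:.0f}x")              # ~123x
print(f"transistor ratio: {wse2_transistors / a100_transistors:.0f}x")  # ~48x
```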

At a time when the semiconductor industry is striving to build ever-smaller components, the introduction of the Wafer Scale Engine (WSE 1) caught everyone off-guard. Cerebras Systems designed and manufactured the largest chip exclusively for deep learning. Deep learning is one of the most computationally intensive workloads, and it is time-consuming because inputs must pass through many layers to extract higher-level features from the raw data. The most direct way to reduce training time is to cut the time it takes inputs to pass through those layers, which can be achieved by increasing the number of cores and thereby the calculation speed. The Wafer Scale Engine was built to address this need.
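
To make that argument concrete, here is a deliberately simplified, hypothetical cost model (my own illustration, not Cerebras’s actual scheduler): if each layer’s work is split across more cores, the time per layer, and therefore per training pass, shrinks roughly in proportion, at least until communication overhead starts to dominate.

```python
def pass_time(layers: int, flops_per_layer: float, cores: int,
              flops_per_core: float, overhead_s: float = 0.0) -> float:
    """Toy estimate of one training pass: layers run sequentially,
    and each layer's work is divided evenly across all available cores."""
    per_layer = flops_per_layer / (cores * flops_per_core) + overhead_s
    return layers * per_layer

# Hypothetical workload and hardware numbers, purely for illustration.
gpu_like = pass_time(layers=50, flops_per_layer=1e12, cores=7_000, flops_per_core=1e9)
wse_like = pass_time(layers=50, flops_per_layer=1e12, cores=850_000, flops_per_core=1e9)
print(f"ideal speed-up from extra cores: {gpu_like / wse_like:.0f}x")  # overhead-free case
```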

Two years ago, Cerebras challenged Moore’s Law with the Cerebras Wafer Scale Engine (WSE). The previous-generation WSE chip, with 1.2 trillion transistors, trounced Moore’s Law by a huge margin. Moore’s Law states that the number of transistors on a microchip doubles every two years, while the cost of computers is halved.
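
Expressed as a formula (my own restatement of the law as quoted above), Moore’s Law predicts a transistor count of roughly N(t) = N0 * 2^(t/2) after t years. The short snippet below uses that formula to estimate how far ahead of the curve the 1.2-trillion-transistor WSE 1 landed relative to the largest conventional microprocessor cited in the next paragraph.

```python
import math

def moores_law(n0: float, years: float) -> float:
    """Transistor count after `years`, doubling every two years (as stated above)."""
    return n0 * 2 ** (years / 2)

# How many years of Moore's-Law growth separate AMD's 39.54-billion-transistor
# Epyc Rome (cited below) from the 1.2-trillion-transistor WSE 1?
years_needed = 2 * math.log2(1.2e12 / 39.54e9)
print(f"roughly {years_needed:.0f} years of doubling")        # ~10 years
print(f"check: {moores_law(39.54e9, years_needed):.2e}")      # ~1.2e12 transistors
```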

So, what’s the catch?

According to Our World in Data, as of 2019 the next-largest transistor count for a microprocessor was AMD’s Epyc Rome processor, with 39.54 billion transistors. The WSE may well be the tipping point for the semiconductor industry, bringing forth a new era of AI chips that quadruple their transistor counts every year. The newly developed WSE 2, with 2.6 trillion transistors, is proof of that possibility.
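
For a sense of how different those two growth curves are, the following sketch (illustrative projections only, not a prediction from the article) compares the conventional “double every two years” trajectory with the “quadruple every year” scenario suggested above, starting from WSE 2’s 2.6 trillion transistors.

```python
# Illustrative projection of the two growth scenarios mentioned above,
# starting from WSE 2's 2.6 trillion transistors. Not a forecast.
start = 2.6e12
for year in range(1, 6):
    moore = start * 2 ** (year / 2)   # double every two years
    quad = start * 4 ** year          # quadruple every year
    print(f"year {year}: Moore's Law ~ {moore:.2e}, quadrupling ~ {quad:.2e}")
```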

So, what was the purpose of introducing such a big chip? Was it just to prove Moore’s Law wrong? The answer is much more practical. According to Andrew Feldman, co-founder and CEO of Cerebras Systems, the logic behind the size of the WSE is quite simple: accelerating AI usually requires a large amount of data, and processing speed is crucial.

#opinions #ai-chip #ai-chips #node #nodejs