Neuralink: Is Symbiosis Between Human Brain & AI Worth the Pain and Risk?
17:03 GMT, 6 December 2022
Elon Musk's medical-device firm Neuralink has been caught in the crosshairs of the Inspector General for the US Department of Agriculture (USDA) over complaints about a hasty animal-testing process. What's Musk's brain implant project about and what is its real goal?
Musk's brainchild, Neuralink, was founded in 2016 and seeks to enable instant interaction between the human brain and artificial intelligence (AI).
Currently, the firm is developing two pieces of equipment. One, as small as a coin, would be embedded in a human skull with tiny wires – comprising 1,024 electrodes – fanning out into the brain. Each wire is said to be about 20 times thinner than a human hair. The device is supposed to monitor and record brain activity, and then transmit this data wirelessly via Bluetooth-like radio waves to computers. Two years ago, Neuralink held a live demo showing off the chip's ability to read the brain activity of a pig and transmit the data.
The other is an eight-foot-tall robot that would surgically insert a Neuralink AI brain chip while avoiding damage to brain tissue or blood vessels. Musk promised that the process would take hours and leave only a small scar. "Our goal is to record from and stimulate spikes in neurons in a way that is orders of magnitude more than anything that has been done to date and safe and good enough that it is not like a major operation," the billionaire said in his 2019 presentation, comparing the procedure to laser eye surgery.
Meanwhile, critics say that Neuralink has not done anything particularly innovative: neuroscientists and bioengineers have been working on this for decades.
For instance, in 2019, a team of researchers from Carnegie Mellon University, in collaboration with the University of Minnesota, used a noninvasive brain-computer interface (BCI) to develop the first-ever successful mind-controlled robotic arm. The goal was to create a technology that would allow paralyzed patients to control robotic limbs using their own "thoughts."
In May 2022, Neuralink competitor Synchron Inc. enrolled the first patient in its US clinical trial of Stentrode, its own BCI. The device is intended to help people suffering from paralysis control digital devices hands-free. Over five million people are affected by the condition in the US alone, according to researchers at the Centers for Disease Control.
AI: Challenges and Ethical Issues
However, Musk is determined to go further than that. The Tesla and Twitter CEO wants to expand people's cognitive abilities: for example, to enable superhuman vision or telepathy, and eventually change the world. The businessman warned that at AI's present rate of advancement, humans will soon lag far behind machines, triggering a vast variety of ethical and societal issues. The World Economic Forum (WEF) outlined at least nine of them.
The first is unemployment: the automation of physical labor could soon be followed by AI taking over cognitive work as well. Inequality is the second issue: How would people distribute the wealth created by machines? There is a fear that individuals who hold ownership in AI-driven companies will make all the money.
Third, it's unclear how machines will affect human behavior and interaction. WEF observers note that machines can already trigger the reward centers in the human brain, warning against "tech addiction." Those who still believe it's not that serious should look at click-bait headlines optimized with A/B testing (also known as split testing) to capture our attention. Humans once programmed computers; now supercomputers could, in effect, program us.
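The split testing mentioned above is a simple statistical procedure. The sketch below shows how a publisher might compare two headlines with a two-proportion z-test on click-through rates; the function name and all the numbers are made up for illustration:

```python
import math

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: does headline B's click-through
    rate differ significantly from headline A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click-through rate under the null hypothesis (no difference)
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical traffic: 5,000 impressions per headline
p_a, p_b, z = ab_test(clicks_a=120, views_a=5000, clicks_b=165, views_b=5000)
print(f"CTR A: {p_a:.1%}, CTR B: {p_b:.1%}, z = {z:.2f}")
# |z| > 1.96 means the difference is significant at the 95% level,
# so the "winning" headline gets served to everyone
```

Run at scale and repeated continuously, this loop is exactly the attention-optimization machinery the WEF observers warn about.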
Fourth, "artificial stupidity": no matter how "smart" AI is, it's not guaranteed it won't make stupid mistakes. One should mitigate risks associated with AI's "glitches" when it comes to labor, security, and efficiency. Fifth, AI systems can be "biased and judgmental," because they are created and programmed by humans.
Sixth, needless to say, AI systems do not have a built-in conscience and could therefore act "maliciously" and cause damage if they are hacked or designed as weapons in the first place. Seventh, some researchers still believe in the possibility of a Terminator-style rebellion of machines, and regard future advanced AI systems as a "genie in a bottle."
Remarkably, renowned physicist Stephen Hawking also worried about the emergence of artificial intelligence, calling it the "worst event in the history of our civilization" in November 2017. "[W]e cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," Hawking said at the Web Summit technology conference in Lisbon, Portugal. He insisted that creators of AI should "employ best practice and effective management" to avoid this potential reality.
The eighth issue – and this is what Musk is especially concerned about – is the question as to how people would stay in control of AI systems which one day could become much smarter than humans.
The ninth issue that concerns the international community is… "robot rights." As early as the 20th century, American writer Isaac Asimov described robots with emotions, feelings, and even ambitions. If one day people begin to regard advanced AI machines as subjects that can perceive, feel, and act, the question of their legal status will immediately arise.
Merging Humans With AI
Should humans really keep up with machines and even merge with them one day? Musk believes that we ought to: as AI grows ever more capable and widespread, humans could soon find themselves useless, the billionaire argues.
"Over time I think we will probably see a closer merger of biological intelligence and digital intelligence," Musk told the World Government Summit in Dubai in February 2017. "It's mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output."
To illustrate his point, Musk noted that computers can communicate at "a trillion bits per second," while humans, who mostly communicate through typing, manage only about 10 bits per second. According to the businessman, a symbiosis between human and machine intelligence could solve both the control problem and the usefulness problem.
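For scale, the gap Musk describes can be put in concrete terms. The sketch below takes his two figures at face value; the 1 MB message size is an arbitrary assumption for illustration:

```python
# Bandwidth figures as cited by Musk (illustrative only)
machine_bps = 1e12   # "a trillion bits per second"
human_bps = 10       # human output via typing, per Musk's estimate

message_bits = 8 * 1_000_000  # a 1 MB message (assumed size)

machine_seconds = message_bits / machine_bps   # a few microseconds
human_days = message_bits / human_bps / 86_400 # roughly nine days
print(f"machine: {machine_seconds:.0e} s, human: {human_days:.1f} days")
```

By these (very rough) numbers, a message a machine exchanges in microseconds would take a typing human over a week, which is the "bandwidth" bottleneck the neural lace is meant to remove.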
To that end, he proposed developing a "neural lace" that connects the brain directly to computers. As a result, the empowered human brain would be able to tap into artificial intelligence instantly through digital devices or directly to the cloud, where massive computing power is available.
However, some scientists warn that "merging" a human brain with artificial intelligence could bring more harm than good. Moreover, it would be "suicide for the human mind" if so-called transhumanists went so far as to replace parts of the brain with AI components.
Furthermore, after radical enhancement, the individual who remains may not even be the same person. Their behavior could change dramatically. What if the "upgraded" person loses their sense of "self"?
On the other hand, who would guarantee that a human connected through a "neural lace" to the "cloud" would not be "hacked" or "zombified" one day? "Bio-conservatives" argue that Musk's neurotechnology project could erase the frontier between natural and artificial, human and machine, living and non-living.
Other members of the scientific community suggest that the merged combination of a human and a machine could give rise to a higher form of AI-powered intelligence. Building hybrid collaborative systems could end up creating new types of robots, designed to look and behave like humans. Is this potential scenario fascinating or frightening?
Is the Musk-championed human-machine "symbiosis" worth the risk?