
Flat White

Elon Musk, Neuralink and the existential risks of human enhancement

4 September 2020

1:30 PM

Elon Musk’s Neuralink update on 28 August will be remembered by most of the public for clickbait news headlines involving pigs and memes referencing “cypork.” This drives more web traffic than “wireless brain-computer interface makes progress.” 

Cognitive neuroscientists and other leading medical technology researchers highlighted that the medical world already uses implants for some conditions.  

And the caricature of Musk as a cross between Victor Frankenstein and Howard Hughes distracts from the existential concerns that Neuralink simultaneously seeks to address and itself triggers.

Context is everything in understanding why Musk is funding Neuralink.

Apart from the late Stephen Hawking, no public figure has sounded the warning bells on AI louder than Musk over the past decade.

The motivation for funding Neuralink goes well beyond the short-term development of any neurological treatments. 

To quote Musk from a lengthy interview he gave on AI: 

 We want to have a human-brain interface before the Singularity, or at least not long after it, to minimise existential risk for humanity and consciousness as we know it. 

Musk has cited Isaac Asimov’s Foundation series as a key influence. For those unfamiliar with the series, it is to sci-fi what The Lord of the Rings is to fantasy fiction. He even included a copy of it on an “Arch disc” in the Tesla he launched into space in 2018. Key backstories in the Foundation universe include humankind colonising other planets, the banning of robots on Earth, and ecological and cultural destruction.

Technology is inexorably shifting from being the enabler of culture war debates to becoming the subject of culture war debates. As technological risks become increasingly existential, the debates and societal disruption will intensify. 


As I wrote in The Spectator on 29 July: “Existential debates are never conducive to compromise, but are tailor-made for polarisation.”

Neuralink will be a company at the centre of these existential debates. Elon Musk knows this. You don’t become a centibillionaire by missing trends. 

For readers of The Spectator, the existential questions stemming from the future direction of Neuralink will see old political alliances end and new ones form. The additional complicating factor is that Neuralink needs to be seen in the context of its interoperability with other emerging technologies such as nanotechnology.

The long-term direction of Neuralink raises many existential issues that will require deep thinking from our political classes over the next two decades. Among them:

At what point does correcting medical issues with technology move from corrective action to human enhancement? 

What regulatory mechanism, if any, should the state employ to manage human enhancement risk? 

What happens to a meritocratic society if human enhancement is priced like any consumer product? 

Can a nation stay together if a significant proportion of the population elects, for religious and/or ethical reasons, not to be enhanced?

Are liberal democracies capable of surviving against authoritarian states which mandate human enhancement? 

How do you define free will when your decisions are now partially directed by code?

Unfortunately, our politics for the next year will be focused on the alcohol content of hand sanitisers and on drawing arbitrary lines in the sand to determine the holiday movements of Australians.

If Germans are storming the Reichstag over COVID in 2020, consider the societal atomisation we will see in a future divided between those who want enhancement and those who do not.

Some may think that Musk’s reference to potentially being able to download memories into a new body or a robot body sounds outlandish. This again shows the disconnect between what is happening within AI research and public understanding. Ray Kurzweil, former Director of Engineering at Google and author of The Singularity is Near, is one of many technologists who belong to the “back up the brain” school of thinking.

Public policymaking is all about COVID in 2020.  

COVID, though, has supercharged the growth of many technology companies, which are accelerating their investment in areas such as AI.

The risk for the federal government is that by the time they turn their attention to these existential issues, it will be too late. 

Australian governments had been aware of the pandemic risk for the past decade, yet did little.

Let us not repeat the same mistake. 
