Tim Urban, on his website waitbutwhy.com, just posted a discussion of artificial intelligence and Elon Musk’s new company Neuralink. It’s a fairly long article (but illustrated!) that lays out what Musk views as one of the greatest threats to humanity and how he plans to fight it: by becoming it. I’ve snipped a bit of the article and pasted it below. If you like this excerpt, have a scan through the full article. Or just read the whole thing. I hope we don’t look back on this someday as the warning we dismissed as crackpot nonsense at the time (or just the plot line of an Avengers movie).
Could it be that a creation that’s better at thinking than any human on Earth might not be fully content to serve as a human extension, even if that’s what it was built to do?
We don’t know how issues will actually manifest—but it seems pretty safe to say that yes, these possibilities could be.
And if what could be turns out to actually be, we may have a serious problem on our hands.
Because, as the human history case study suggests, when there’s something on the planet way smarter than everyone else, it can be a really bad thing for everyone else. And if AI becomes the new thing on the planet that’s way smarter than everyone else, and it turns out not to clearly belong to us—it means that it’s its own thing. Which drops us into the category of “everyone else.”
So people gaining monopolistic control of AI is its own problem—and one that OpenAI is hoping to solve. But it’s a problem that may pale in comparison to the prospect of AI being uncontrollable.
This is what keeps Elon up at night. He sees it as only a matter of time before superintelligent AI rises up on this planet—and when that happens, he believes that it’s critical that we don’t end up as part of “everyone else.”
That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option:
To be AI.