Former Google CEO Eric Schmidt is warning that AI may soon stop obeying humans, raising questions about the safety of AI models that are no longer under human control.
One of the biggest challenges companies face in AI development is making sure it happens safely, with safeguards in place so that humanity maintains control of AIs. According to Schmidt, the day when AI ignores humans is nearly here.
In an interview at the Special Competitive Studies Project, Schmidt discussed where AI is today, as well as where he sees it going in the near future.
“One way to say this is that within three to five years, we’ll have what is called general intelligence, AGI, which can be defined as a system that is as smart as the smartest mathematician, physicist, artist, writer, thinker, politician.
“I call this, by the way, the ‘San Francisco Consensus,’ ’cause everyone who believes this is in San Francisco. It may be the water,” Schmidt joked.
Schmidt then raised an interesting question about what it will mean when users have access to AI with that level of intelligence.
“What happens when every single one of us has the equivalent of the smartest human on every problem in our pocket?
“But the reason I wanna make the point here is that in the next year or two, this foundation is being locked in, and it’s not, we’re not gonna stop it.”
Schmidt then went on to highlight what happens at the next level of AI development.
“It gets much more interesting after that. Because, remember, the computers are now doing self-improvement. They’re learning how to plan and they don’t have to listen to us anymore. We call that super intelligence, or ASI, artificial super intelligence, and this is the theory that there will be computers that are smarter than the sum of humans. The San Francisco Consensus is this occurs within six years, just based on scaling.
“This path is not understood in our society. There’s no language for what happens with the arrival of this. That’s why it’s under-hyped. People do not understand what happens when you have intelligence at this level which is largely free.”
Schmidt’s Statements Should Serve As a Warning
Listening to Schmidt discuss where AI is headed, it’s clear that he’s excited about the possibilities. And from a purely technical standpoint, it’s easy to understand why. True AI is the holy grail of scientific development.
Nonetheless, if Schmidt is correct about where AI is headed, a litany of questions comes to mind.
- If AI has the ability to learn and self-improve without the need to listen to humans, what safeguards remain to ensure it doesn’t go rogue?
- Given that AI chatbots given unfiltered access to the internet, Microsoft’s Tay being the most infamous example, have quickly become vile, racist reflections of the worst of humanity, what will stop an AI that no longer listens to humans from becoming the very worst representation of humanity?
- If an AI somehow avoids the above path, what are the odds that it won’t objectively look at the condition of the world (wars, strife, violence, climate change, and more) and conclude that humanity is the problem?
- If AI concludes that humanity is a plague, what safeguards exist to prevent it from taking action, given that it’s already smarter than the sum of all humans and no longer under our control?
Schmidt’s statements should serve as a chilling wake-up call for where the AI industry currently is, where it’s headed, and what that may mean for humanity.