An alien species is on its way to planet Earth, and we have no reason to believe it will be friendly. Some experts predict it will arrive within 30 years, while others argue it will get here much sooner. No one knows what it will look like, but it will share two important traits with us humans – it will be intelligent and self-aware.
No, this alien will not come from a distant planet – it will be born here on Earth, hatched in a research lab at a major university or large corporation. I am referring to the first artificial general intelligence (AGI) that reaches (or exceeds) human-level cognition.
As I write these words, billions of dollars are being spent to bring this alien to life, for it would be seen as one of the greatest technological achievements in human history. But unlike our other inventions, this one will literally have a mind of its own. And if it behaves like every other intelligent species we know, it will put its own interests first and work to maximize its chances of survival.
AI in our own image
Should we fear a superior intelligence driven by its own goals, values, and self-interest? Many people reject this question, believing that we will build AI systems in our own image – that they will think, feel and behave just as we do. This is very unlikely to be the case.
Artificial minds will not be created by writing software with carefully crafted rules that make them think like us. Instead, engineers feed huge datasets into relatively simple algorithms that automatically adjust their own parameters, making millions upon millions of tiny changes to their structure until an intelligence emerges – an intelligence with inner workings far too complex for us to comprehend.
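To make this concrete, here is a toy sketch of the process described above: a "model" reduced to just two parameters, nudged thousands of times in whatever direction shrinks its error. This is an illustrative simplification (the function names and numbers are mine, not from any real AI system), but the mechanism – tiny automatic adjustments rather than hand-written rules – is the same one that, at the scale of billions of parameters, produces systems whose inner workings we cannot inspect.

```python
import random

random.seed(0)

def train(data, steps=5000, lr=0.01):
    """Fit y = w*x + b by repeated tiny parameter adjustments."""
    w, b = 0.0, 0.0  # two parameters standing in for billions
    for _ in range(steps):
        x, y = random.choice(data)
        err = (w * x + b) - y
        # nudge each parameter slightly in the direction that reduces error;
        # no rule about the answer is ever written down explicitly
        w -= lr * err * x
        b -= lr * err
    return w, b

# data generated by the hidden rule y = 2x + 1; training recovers it
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
```

Even in this two-parameter case, the "knowledge" lives only in the learned numbers `w` and `b` – nowhere in the code does the rule y = 2x + 1 appear.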
And no – feeding it data about people won’t make it think the way people do. This is a common misconception: the false belief that by training an AI on data that describes human behavior, we will eventually make it think, feel and act as we do. It will not.
Instead, we will build these AI creatures to know people, not to be people. And yes, they will know us inside and out, able to speak our languages and interpret our gestures, read our facial expressions and predict our actions. They will understand how we make decisions, for better or worse, logically and illogically. After all, we will have spent decades teaching AI systems how we humans behave in almost every situation.
But very different
But still, their minds will be nothing like ours. To us they will seem omniscient, linked everywhere to all kinds of remote sensors. In my 2020 book Arrival Mind, I depict AGI as having “a billion eyes and ears,” for its perceptual faculties could easily span the whole world. We humans cannot possibly imagine what it would feel like to perceive our world in such a vast and holistic way, and yet we somehow assume that a mind like this will share our morals, values and sensibilities. It will not.
Artificial minds will be very different from any biological brain we know on Earth – from their basic structure and functionality to their overall physiology and psychology. Of course we will create human-like bodies for these alien minds to inhabit, but they will be little more than robotic facades to make us feel comfortable in their presence.
In fact, we humans will work very hard to make these aliens look like us and talk like us, even smile and laugh like us, but deep down they won’t be anything like us. Most likely, their brains will live in the cloud (in whole or in part), connected to capabilities both within and beyond the humanoid forms in which we personify them.
Still, the facade will work – we won’t fear these aliens, not the way we would fear beings rushing toward us in a mysterious spaceship. We may even feel a sense of kinship, regarding them as our own creation, a manifestation of our own ingenuity. But if we put those feelings aside, we begin to realize that an extraterrestrial intelligence born here is far more dangerous than one from afar.
The danger within
After all, an alien mind built here will know everything about us from the moment it arrives, because it is designed to understand people inside and out – optimized to sense our emotions, anticipate our actions, predict our feelings, influence our beliefs and sway our opinions. If creatures rushing toward us in sleek silver spaceships had such an in-depth knowledge of our behavior and tendencies, we’d be terrified.
AI can already beat our best players at the toughest games on Earth. But really, these systems don’t just master the games of chess, poker and Go – they master the game of people, learning to accurately predict our actions and reactions, anticipate our mistakes and exploit our weaknesses. Researchers around the world are already developing AI systems designed to outthink us, out-negotiate us and outmaneuver us.
Can we do anything to protect ourselves?
We certainly can’t stop AI from becoming more powerful, as innovation has never been successfully curtailed. And while some are working on safety measures, we can’t assume they will be enough to eliminate the threat. In fact, a Pew Research survey indicates that few professionals believe the industry will implement meaningful “ethical AI” practices by 2030.
So how can we prepare for arrival?
The best first step is to realize that AGI will arrive in the decades to come and that it will not be a digital version of human intelligence. It will be an alien intelligence, as strange and dangerous as one from a distant planet.
Bringing urgency to the ethics of artificial intelligence
Framing the problem in this way allows us to tackle it with urgency, pushing for regulation of AI systems that monitor and manipulate the public, sense our emotions and anticipate our behavior. Such technologies may not seem like an existential threat today, as they are usually developed to optimize the effectiveness of AI-driven advertising, not to facilitate world domination. But that doesn’t eliminate the danger: AI technologies designed to analyze human feelings and influence our beliefs can easily be used against us as mass persuasion weapons.
We also need to be more careful when automating human decisions. While it is undeniable that AI can help with efficient decision-making, we must always keep people in the loop. This means using AI to augment human intelligence rather than working to replace it.
Prepared or not, alien minds are headed our way, and they could easily become our rivals, competing for the same niche at the top of the intellectual food chain. And while a serious effort is being made in the AI community to push for safe technologies, there is also a lack of urgency. That’s because too many of us mistakenly believe that a sentient AI created by humanity will somehow be a branch of the human tree, like a digital descendant that shares a very human core.
This is wishful thinking. It is far more likely that a true AGI will be profoundly different from us in almost every way. Yes, it will be remarkably adept at pretending to be human, but beneath its human-friendly facade, it will be a rival mind that thinks, feels and acts like no creature we have ever encountered on Earth. The time to prepare is now.
Louis Rosenberg, PhD is a technology pioneer in VR, AR and AI. He is known for developing the first augmented reality system for the United States Air Force in 1992, for founding early virtual reality company Immersion Corp (Nasdaq: IMMR) in 1993 and for founding early AR company Outland Research in 2004. He is currently founder and CEO of Unanimous AI.