Judgment Day. A wave of nuclear missiles is followed by a swarm of robots that hunt and kill all human survivors. This is the subject of the classic hit movie The Terminator. Judgment Day comes when a military-industrial computer network called Skynet becomes self-aware—malevolently so.
This sort of nightmare outcome is what many opponents of artificial intelligence (AI) research fear. There are others who argue, as Stephen Hawking did, that AI might not be malevolent but simply indifferent to human beings, squashing us whenever we get in the way. Elon Musk is another famous figure who thinks AI could result in the extinction of the human race.
But there’s another vision. Many futurists believe that AI could be designed to be benevolent. Others think AIs will play nice simply because that’s the smart thing to do.
A well-known movie example of a friendly AI is J.A.R.V.I.S. in the Iron Man films. The machine is not just a hyper-intelligent assistant but also a friend. Proponents of this vision argue that AI will usher in an age of plenty: no shortages, no traffic accidents, and almost no crime or disease.
I should clarify that by AI I don’t simply mean smart systems or machines capable of learning. Really good self-driving cars—even ones that talk to us—are not the kind of AI I’m talking about. Nor do I mean systems like the new Google assistant that can fool people into thinking that it’s human as it makes restaurant reservations.
No. I mean fully self-aware, autonomous machine intelligence—a digital person. This is sometimes called artificial general intelligence (AGI).
Will AGI be good or bad?
Frankly, I have no idea.
This is a theoretical debate I doubt anyone can win. We’ll only know when a true AGI comes into existence. By then it will be too late to change anything, but we’ll know.
The good news for those concerned is that it’s by no means certain that a true AGI is even possible.
A humanoid robot that can fool people into thinking it’s human would not necessarily be an AGI. It would need true self-awareness, autonomous control, and the ability to learn and choose its own goals. Without this, it would still be just a machine following a predetermined program.
What is consciousness? How can such a thing be coded when we’re not even sure what it is or how it works?
I don’t know the answers to these questions. I know smart people who know much more about this field than I do, and they think that if AGI is possible, it’s decades away.
The really good news—whether you’re concerned about AI or not—is that smarter machines are already improving our quality of life. This means there’s money to be made for speculators who back winning technologies as they’re developed.
That’s no great revelation. There are many newsletters, venture funds, and media channels dedicated to this idea.
What’s news (perhaps) for my readers is that I’m interested in diversifying into this space.
Here’s why:
- I remain absolutely convinced that we’re heading into a resource bull market for the record books. But putting all my eggs in one basket is just too dangerous—no matter how good the basket looks. This is simple common sense.
- Many people don’t understand AI, and there’s a lot of fear going around. That’s a recipe for contrarian opportunities in this space.
- The market almost always rewards an obviously better mousetrap. This is true even in a bad market. It’s also true in an economic recession—or even a depression. In fact, since people tend to resist change when times are good, hard times that force them to economize could actually speed the adoption of a cheaper, better product.
I can’t tell you whether full AGI will result in heaven or hell on earth.
I can say that we’re going to see a lot of systems on the J.A.R.V.I.S. side of the game before we get an answer to that question—and there’s money to be made.