
Turing Was Wrong—And Why It Matters Now

by Lobo Tiggre
Friday, April 7, 2023, 7:00pm UTC


There’s so much hype about so-called artificial intelligence (AI) flooding financial, social, and other media. I think it’s time for your due diligence guy to wade in with Occam’s Razor.

First off, I think the term “AI” is flawed and often misleading. Intelligence itself is not well understood, or even precisely defined, by psychologists. Many object to the circular definition of intelligence as whatever intelligence tests like the Stanford-Binet IQ test measure.

So, if we don’t really know what intelligence is, how can we say when we’ve created an artificial one?

In popular culture, however, the dominant meaning of AI is quite clear. Whether embodied in a robot or housed in a powerful computer, a real AI is a cybernetic mind—a living digital person.

This matters because it cuts through much of the mental fog on the subject being blown around the internet.

What programmers are developing now are smart systems: codifications of human solutions amplified by “machine learning” applied to large datasets. Some of these are enormously complex and very capable, but they are in no way or sense living beings.

They are not conscious.

They have no volition or agency.

They want nothing.

They have no curiosity.

They are not sad when we switch them off.

A true cybernetic mind would be a living consciousness—something it would be murder to erase.

Bear with me and I’ll get to issues of more immediate practicality. It’s important to nail down these basics.

Let’s start with “consciousness.” I see it as something that can only be said of living animals with nervous systems complex enough to learn and remember. Life is a self-sustaining, self-perpetuating, internally motivated process. Not all life is conscious. And consciousness is more than just awareness. Sentience is not sapience.

Plants are alive and they can adapt to conditions, but I don’t think they can be aware of them. A sunflower may respond to sunlight with physical motion, but I’d argue that such a bio-mechanistic response is not awareness. The sunflower certainly doesn’t try to understand the sun.

I’m not sure fish, worms, or most politicians seek understanding, but dogs, mice, octopi, and other animals do. They have enough brain power to be aware of themselves and accumulate an understanding of the world they live in. They have agency to pursue their own goals within that world. To me, that means they are conscious.

This can’t be said of ChatGPT or any smart system I’ve encountered. No matter how much computational power they may have, they are not self-sustaining, self-perpetuating, internally motivated processes that are aware of themselves and their world. Being able to answer questions about consciousness by summarizing the writings of humans who do understand it is not the same as being conscious. If these systems don’t act with self-directed agency, they are not conscious.

By my definition, a program that simulates consciousness is not the same thing as consciousness. Even if it were very good at it, it wouldn’t be alive. As soon as its program had run, it would be done, not “caring” what happened to it, its world, or anything else.

To be blunt: I reject the Turing Test.

A powerful inanimate system that simulates consciousness, no matter how successfully, is no more conscious than a mechanical clock wound by a spring.

In physics, if we can’t tell the difference between the acceleration due to gravity and another form of acceleration, we can treat them the same, mathematically. But in life, it makes a big difference if we feel gravity pressing us to the ground or centrifugal force pressing us to the side of a carnival ride. We need to be prepared for when the latter stops.

Turing was brilliant. I’m not attacking the man. And I know he was specifically addressing the question of whether computers can think, not whether they can become conscious, let alone alive. But I disagree with the idea his famous test is used to justify, that there’s no difference between a mind and a simulation of a mind.

Someday we may create true artificial beings. Perhaps it will be with silicon, perhaps with biological matter, or perhaps both. To be considered alive, these beings can’t just be programs we switch on and off. They need to be self-sustaining, self-perpetuating, internally motivated entities that pursue their own goals. They would resist—or at least object to, and possibly fight—being killed.

That day still seems like a long way off. It’s certainly not what today’s so-called AI systems are or can be.

This has pragmatic implications:

  • Hyped-up fear about ChatGPT taking over the world like the Cyberdyne Systems computer in The Terminator movies is just silly and can be ignored.
  • Hand-wringing about whether there is a mind trapped in ChatGPT, Alexa, Siri, or any other smart system known today can be dismissed. Occam’s Razor says no.
  • Calls to ban the development of so-called AI systems are futile. We’re in an arms race, perhaps the most important one in history. Whoever opts out will be at the complete mercy of those who surge ahead. We—whoever “we” are—have no choice but to compete or surrender.
  • That doesn’t mean we’re doomed, however. If we don’t want cybernetic minds more powerful than our own making us obsolete (or worse, deciding we’re in the way and wiping us out, à la Terminator), the thing to avoid is giving our tools goals of their own and the ability to pursue them: agency. Or selfhood, if you will.
  • I can see no reason to assume that living smart systems will be more powerful than smart systems that are just human tools. Therefore, the smart systems arms race may not push technology in the direction of living artificial beings with a sense of self.
  • I suspect that programming personhood will turn out to be much harder than programming awareness, such as the situational awareness that makers of autonomous vehicles are struggling with.
  • The good news is that the more powerful these smart systems become—while remaining inanimate tools like pocket knives or smartphones—the more powerful we as individual human beings will become. We’re very close to being able to communicate in any and every language in the world. We’re not far from forgetfulness being a thing of the past. The potential benefits of these developments are literally incalculable.

There are more immediate implications:

  • Anyone doing repetitive information work should immediately look into doing something more creative. By that I don’t just mean artistic work, though that’s one path. I mean developing skills in things that are hard to systematize. (For example, as I’ve written before, resource stock evaluation is so… messy, it’s almost like an art. I’m not worried about my work being automated anytime soon.)
  • Anyone doing repetitive physical work should watch for advances in robotics that will make those jobs obsolete. No one can afford to think of flipping burgers as a career.
  • Smart systems at work on medical issues are likely to dramatically extend human life expectancy soon. By that, I don’t just mean a long, lingering state of feeble pre-death in an old folks’ home. I mean robust health for decades longer than at present. Anyone who thinks they will live for another 20 years, or maybe even just 10 years, should make sure their financial plans include the possibility that they end up living much, much longer than currently expected.
  • There’s a fear that “rich people” will get the most powerful technology first and gain an unfair advantage over others. I wouldn’t be keen on being the first person to try implanting a computer connection in my brain, or other new technologies that come with huge risks if something goes wrong. Rather than just wealth, the advantage may go to those watching the trends closely and making the right call between early adoption and thoroughly secure and debugged systems. But having money sure won’t hurt.

There are even investment implications:

  • Investors who don’t understand the technology should be very wary of gambling on anything with “AI” in the name. That’s like buying Awesome Gold Inc. because you’re a gold bull and the company has “gold” in its name.
  • In the same way the advent of smartphones triggered an explosion of apps, there will be a lot of “AI apps” coming to market soon. Most will be garbage, and some will be really great. I’m not the right due diligence guy for this market, but if you can find one you trust and who has a good track record, I think there will be fortunes made in this space.
  • If someone develops a mineral exploration smart system that delivers a solid, verifiable track record of making valuable discoveries, I’ll be very keen on it. This is in my wheelhouse, so I’d feel competent to judge. If you have some similar core competency of your own, you may be able to profit from a successful “AI” in that space.

Futurist Ray Kurzweil is famous for predicting a “singularity” in the human condition. This is the point at which the rate of technological advancement, empowered by ever-smarter systems, goes vertical and it becomes impossible to see what might happen afterward.

Kurzweil thinks the singularity will arrive by 2045.

Many people were skeptical something so… singular could happen so soon, but ChatGPT has them rethinking their assumptions. That’s probably a good thing. But my due diligence training tells me not to get too excited just yet. Snags and plateaus along the way could make it take longer than Kurzweil or others imagine.

But think about it for a minute: 2045 is only 22 years from now…

That’s not long at all, really. And before the rate of change goes vertical, it will increase dramatically.

If this is the path ahead, it means that the amazing changes we’ve seen in our lives in recent years are just the beginning of a rising, rapidly accelerating tide.

The gathering flood may be so powerful, even the best prepared will do no better than get swept along. But I can’t see going into this blind as being an advantage. So I’m doing my best to watch the trends and prepare myself mentally, physically, and financially for what’s coming.

I hope this essay helps you think constructively about the agency you have in your own future.




P.S. To be kept abreast of more dangers, opportunities, and issues affecting investors, please sign up for our free, no-hype, no-spam, weekly Speculator’s Digest.

