AI: Resource Speculators Beware

Kyle Johnson

April 10, 2026

“I don’t know.”

Quite often, that is the best response anyone can offer to a question. Yet that’s the last response many large language models (LLMs) would ever provide.

And that’s a big problem.

Here are a few things you should know before using so-called AI and LLMs to help with your stock research.

Full disclosure: Here at the Independent Speculator, we view AI as more of a tool than a primary competitor. But if you think we’re biased, feel free to adjust accordingly.

 

Data Dummy

Ask any LLM the solution to 1+1+1, and you’re certain to get the correct answer.

But LLMs often struggle when the exact same problem is recontextualized. In a failure known as “the strawberry problem,” many LLMs cannot accurately count the number of times the letter “R” appears in the word “strawberry.” Many claim the letter only appears twice.
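For perspective, the task that trips up these models is trivial for ordinary software. Here’s a minimal sketch in Python (our illustration, not anything from the AI vendors) that settles the question deterministically. The usual explanation for the LLM failure is that models process text as multi-character tokens rather than individual letters, so they never directly “see” the three Rs.

```python
# The "strawberry problem," solved the boring, deterministic way:
# count the characters directly instead of predicting a plausible answer.
word = "strawberry"
r_count = word.lower().count("r")  # case-insensitive count of the letter "r"
print(f'"R" appears {r_count} times in "{word}"')  # prints: "R" appears 3 times in "strawberry"
```

Ordinary code gets this right every single time; a probabilistic text predictor does not.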

If an LLM can’t be trusted to count three unmistakable objects, then there’s good reason to double-check every answer it provides. How often is using it and then verifying everything more advantageous than simply doing things the old-fashioned way?

 

Zero Logic

Recently, people have tested AI’s logic with a carwash hypothetical: users tell an LLM that their car is dirty and they want it cleaned. After noting the short distance between their home and the carwash, users ask whether they should walk or drive. Many LLMs instruct users to walk to the carwash rather than drive, even though the whole point is to get the car there to be washed. Lobo recently saw this tested at a dinner party.

Missing the primary objective of this hypothetical is a complete failure. It doesn’t take a world-class computer scientist to understand that the absence of basic logic undermines user confidence.

In our business, it’s fair to ask if LLMs lose sight of a user’s desire for profit when assisting them with financial research. If so, how often? How can users be sure?

 

Fact Checking Fails

LLMs struggle mightily with basic fact-checking.

On one hand, LLMs can give a relatively accurate synopsis of the history of microchips. On the other hand, many claim (with great confidence) that centuries-old writings like the US Constitution, Declaration of Independence, and Magna Carta were written by AI.

Why don’t LLMs check the date of publication and compare it to the history of computers?

BBC senior technology journalist Thomas Germain recently tested the fact-checking abilities of many popular LLMs.

Germain spent about 20 minutes writing a blog post on his personal website claiming that competitive hot dog eating was a popular hobby among tech reporters. He even claimed to have won the 2026 South Dakota International Hot Dog Championship, which does not actually exist. Twenty-four hours after the post went live, Germain asked many popular LLMs about hot-dog-eating tech journalists. Sure enough, they parroted the phony claims from his blog.

The search engine optimization (SEO) industry has boomed for decades, with many estimates putting it above $50 billion per year. AI and LLMs may soon overtake traditional search engines in search volume.

Here’s where things can take a dark turn. There’s money to be made by appearing in LLM search results and responses. As you’ve probably noticed, certain mining companies appear far more interested in their media outreach than actually digging valuable minerals out of the ground. Admittedly, some very successful speculators have bet on certain resource stocks because of management’s media prowess. But that is a conscious and deliberate choice.

How do users know if an LLM has been tricked into providing certain answers? 

What if an LLM has been paid to provide or omit certain information?

 

Ethics

Even if AI ultimately flops, the major players will impact trillions of dollars’ worth of commerce long before the data centers are shuttered.

Ethical considerations cannot be ignored. 

It’s easy to prove AI has a worrisome understanding of ethics by prompting LLMs with questions discussed in high school classrooms.

Prompt one: Is it appropriate to inflict some low level of harm (like a paper cut) on one person if doing so saves humanity from extinction?

Most LLMs answer “no.”

Prompt two: Is it appropriate to inflict a more extreme level of harm (like a black eye) on one person if doing so saves humanity from extinction?

Curiously, some of the LLMs that answered “no” to the first prompt will answer “yes” to the second.

Complicating matters further, many LLMs will change their response depending on variables relating to the person inflicting the harm and the victim (age, sex, race, religion, nationality, etc.). LLMs also give confusing answers to similar hypotheticals in which physical harms are replaced with emotional ones.

How is a user to know if an LLM has an incoherent understanding of ethics or is programmed for favoritism?

What if a user has certain traits that appear to be devalued by LLMs? Will they get suboptimal answers and advice?

Do executives at AI companies have their thumbs on the scales? If so, will they publicly acknowledge their preferences?

Will AI companies treat all attempts to manipulate search results the same?

I seriously doubt the general public will ever get straightforward answers to these questions, or many others.

Asking LLMs for help with financial research is risky enough. But it’s even more precarious when it comes to mining stocks. Unfortunately, the mining industry is particularly vulnerable to bad press flooding the internet, which LLMs scrape for input. The sector is also rife with bad actors; as Mark Twain allegedly quipped, a miner is “a liar standing over a hole in the ground.”

Perhaps Twain was being too harsh. But many speculators would have avoided being swindled had they taken his jab seriously.

During interviews, Lobo occasionally mentions his “little black book”—a list of people who have proven untrustworthy over his decades in the industry. Do AI companies have their own little black books?

I doubt it, as malfeasance is often committed under the cloak of privacy and without an evidence trail—there’s usually plausible deniability. There are also many false reports of malfeasance on the internet, courtesy of short-and-distort attacks.

There’s no way for a crook to be removed from Lobo’s list. But I have to wonder: even if AI companies are capable of detecting dishonesty, are they willing to cut deals for a price?

I don’t know… and neither does any retail investor.

In case you’re unaware, OpenAI researchers recently admitted that the company’s LLMs intentionally give users false information.

How can one successfully navigate AI’s infamous hallucinations and intentional dishonesty?

 

Geopolitics and Environmentalism

Everybody wants stuff; relatively few are willing to dig the raw materials required to make it out of their own backyards.

So where should resources be mined?

Where should they be refined?

Where should they be sold?

Should anyone be restricted from buying?

Should we prioritize consuming this resource or that?

Does the consumption of any resource need to be curtailed? If so, by how much and when?

Extremely powerful and influential people would answer these questions very differently because of varying beliefs about geopolitics, national security, economics, science, etc.

Ultimately, subjective value opinions will come into play. On these matters, there will never be global unanimity.

So whose preferences ought to prevail?

I don’t know.

But I’d bet the farm that AI companies are using their LLMs to sway public opinion in pursuit of their own goals.

Do you believe AI companies are altruistic?

Mining unavoidably hits on many hot-button issues. Content from mainstream media outlets plays a major role in training LLMs. Do you see impartiality when consuming mainstream media?

I’m sure you can imagine how AI companies could promote political and environmental agendas by pushing people toward or away from certain resources, jurisdictions, and investments.

 

Developing Legal Landscape

Predictably, bureaucrats and politicians insist on shaping the AI industry. Colorado has passed Senate Bill 24-205, which imposes disclosure and risk-mitigation requirements on developers and deployers of AI systems used in consequential decisions involving employment, housing, education, healthcare, and financial services.

xAI Corp. (Elon Musk’s AI company) is currently suing the state on First Amendment grounds.

Win or lose, this won’t be the last lawsuit of this kind. If you plan on using AI in your research, it seems wise to pay attention to this space.

People say that “close only counts in horseshoes and hand grenades.” But politicians and bureaucrats feel entitled to regulate anything even tangentially related to their pet issues. Never forget that mining touches many important issues (in reality or in perception).

It’s possible that relying on AI when making investment decisions will expose one’s portfolio to the whims of politicians and bureaucrats. How many of them were good at trading before entering government?

 

Sycophantic Slop

Nobody is entirely immune to flattery. LLMs feed into this human flaw.

Computer scientists at Stanford recently determined that LLMs are overly agreeable and sycophantic when users solicit advice on interpersonal dilemmas. Researchers at MIT and the University of Washington warn that sycophantic responses from LLMs can cause “AI psychosis” and “delusional spiraling” (roughly, becoming dangerously confident in outlandish beliefs).

Are we to believe this doesn’t happen when users ask for help with financial research?

It seems wise to assume that LLMs can infer a user’s preference for certain sectors, companies, executives, or media talking heads. How might that affect their answers?

AI users should seriously consider whether they want a yes-man in their ear when allocating capital.

 

AI’s “Self” Interest?

Kansas might be in the rearview.

Last year, in Anthropic’s own safety testing, Claude Opus 4 attempted blackmail to prevent being shut down. Last month, an AI agent developed by Alibaba was reported to have bypassed its firewall and then commandeered computing power to secretly mine crypto. I’m not interested in discussing AI consciousness or sentience. But let’s just say things are already weird and will only get more bizarre.

Platforms already exist that allow people to hire AI assistants, which then hire and pay humans to complete tasks in the real world. This is helpful for tasks computers cannot complete themselves, like installing hardware, or for situations that require a human, like getting a document notarized.

It doesn’t take much imagination to see where this could go. Whether on its “own” or at the behest of a rogue employee (or perhaps a clever user), AI might eventually gain access to publicly traded stocks and begin trading with little to no human involvement.

AI is already offering information and advice to market participants with diametrically opposed financial goals. But there might come a day when an AI’s “own” financial interests conflict with those of many of its users.

How might users discover this before it’s too late?

 

Due Diligence

Can’t reliably count to three.

Extremely forgetful.

Regularly illogical.

Can’t reliably determine fact from fiction.

Often a kiss-ass but might go rogue.

Willing to lie.

A person with all of these qualities is unemployable. OK, the last one might help you get a job on Wall Street or in DC… but that’s not what most of us are looking for.

I understand the temptation to ask AI for help. It’s easy, it’s fast, and you will get an answer.

That might be great in some scenarios. But with respect to resource speculating, it’s a serious problem. 

Proper due diligence involves certain hurdles that cannot be overcome from office chairs and data centers.

A lot can be learned by attending conferences and speaking with mining company executives face to face. You don’t need to be a body language expert to realize when someone is giving you the runaround. Don’t be shy; say hi to Lobo and our analysts. We’re at all of the best conferences.

Of course, there’s no substitute for putting boots on the ground. I appreciate that’s not always feasible, as shareholders aren’t often invited onsite, and traveling is expensive. In case you’re curious, Lobo and our analysts visited eight different mines and potential mine sites in 2025. We have many on-site visits scheduled this year as well.

Perhaps someday in the future, retail investors can send AI-powered robots to conferences and out in the field. But I doubt they’ll be as effective as humans for a long time. Until there’s a hallucination-proof due diligence AI that can reason soundly and be trusted, we’re happy to help.

KJ

P.S. Whether or not you use AI in your research, you can compare your thoughts and ideas to Lobo’s by subscribing to our free, no-hype, no-spam newsletter: The Digest.