NCI: Is AI A Bubble?

Guests:
Ram Ahluwalia & Michael Parekh
Date:
02/16/26


Episode Transcript

NCI: Is AI A Bubble?

Speaker1: [00:00:00] Welcome everybody to the next episode of Lumida Non-Consensus Investing. We're streaming live on YouTube, Twitter, and wherever you consume your social media. Really pleased to have a friend and special guest. I think he has the record number of appearances on the Lumida Non-Consensus Investing podcast.

Michael Parekh. Michael started his career in the nineties at Goldman Sachs, where he was a lead internet coverage analyst and took a lot of these companies public. So he's been through a few cycles, and he writes a terrific blog on AI with insights every day. He's tracking what the CEOs of the cloud businesses within the hyperscalers are talking about.

And we wanna talk about a few things: whether CapEx spending is sustainable, Anthropic's trillion-dollar revenue forecast, GDP growth of 10%, the dynamics around OpenClaw, and Mark Zuckerberg in the [00:01:00] background here. There's a lot we want to get through to create some perspective on what's happening.

Welcome back, Michael. How are you? 

Speaker2: Great. Great to be back and good to see your handsome face again. 

Speaker1: I'm trying, I'm not on GLP ones yet to tell that to my wife too. But far there's a film 

Speaker2: version. Now 

Speaker1: I'm gonna try to do it the old fashioned way. It's just called fasting, eat Less Calories in, calories Out.

We'll see if that works. We'll see if that works. I want to get into Anthropic, right? Setting the stage for you here: Sam Altman had this disastrous interview with Brad Gerstner at the end of October, where he talked about a trillion dollars in committed spend obligations.

On Wall Street, we call that debt. And he had a very defensive reaction, which really created a correction. And now you have Microsoft trading at 2022 P/E-multiple lows, and the Mag Seven stocks, the CapEx payers, which we've been [00:02:00] talking about, have been punished.

CapEx receivers have generally done well, like the memory-related players, et cetera. And now Dario, the CEO of Anthropic, was on Dwarkesh's podcast, and I was surprised that he was talking about fantastical numbers, like a trillion dollars in revenue potential in five years. And they're debating whether the trillion in revenues happens in five years or three years.

I'm saying maybe not ever. And so I said, let's get Michael on here. Let's get some perspective. Let's try to answer the question of is AI a bubble? 

Speaker2: If you recall, Ram, a long time ago when we first did one of these, I think I said that for the last two or three years, since this is the fourth year of ChatGPT, we have been in the mainframe stage of AI.

And these people we are talking about, whether it's at the big tech companies or the two companies we mentioned, OpenAI and Anthropic, they essentially have big tech philosophies. They're mainframe companies. This is their [00:03:00] DNA. So in the Dario interview, the two-and-a-half-hour one with Dwarkesh, it just comes out that they're in a mainframe environment.

They think everything is centralized: that their two or three sets of labs are going to basically provide most of the compute in the world. That is a reality on today's technology, but they project it into the timeframes they're talking about, the next three to five years, with this data center filled with geniuses, which is what Dario wrote about in his essay last year.

He has another 40-page essay out this year, which I wrote about, and we'll put the links in the show notes. The point is, they still very much believe in the mainframe ideology: that AI is going to be served up by these three or four companies as foundational LLMs, and they're gonna collect all the revenues.

In that interview with Dwarkesh, Dario echoes what [00:04:00] OpenAI's CFO said at one point: that, hey, we might even be able to get away with value-priced LLMs. In other words, if you, Mr. Drug Company, use our AIs to create the next multi-billion-dollar GLP-1, we get a piece of that, maybe. Okay, now think about that.

Okay. As a historical test case: Microsoft launched Excel in '85. Under that theory, they could have said, hey, KKR, we get a piece of RJR Nabisco in '87, the Barbarians at the Gate deal, or any PE deal, because you used our spreadsheets to build the models for the overnight $30 billion bid to buy XYZ.

That's never happened in technology before. That's how much these guys are stretching to get to whatever revenue numbers they want to shoot for. And this is not to take away from the amazing momentum of these companies, right? Dario rightfully says that he's had a 10x in revenues every year [00:05:00] since founding, which is three years ago.

He's now at $14 billion. The number for next year is supposed to be in the $50 billion range, and OpenAI is on a similar kind of trajectory at a higher level, et cetera. But all of them have crazy costs, OpenAI far more so in their official costs, as opposed to Anthropic's kind of softer costs.

And Dario in this interview literally said that if we are wrong on our assumptions, on the mainframe worldview, we're two years away from bankruptcy, which is what he's justly trying to avoid. So we've never seen this in any other tech wave,

Speaker1: Never seen it.

Speaker2: as excited as we can be.

Speaker1: I agree. I agree. Let me play back what I heard from you. Number one, there's a premise that we're in a mainframe world, in contrast to, say, a client-server world or a grid world. That's one. I'll let you double-click on that; I'll come back to it.

Speaker2: Yes. 

Speaker1: The second thing you pointed out, which I thought was right on the mark, was when you said: look, if they're off by two years, they're bankrupt. We've never [00:06:00] seen this level of burn.

Burn means losing money, in case people forgot; that's what burn rate is.

Speaker2: 700 billion this year, across the four companies.

Speaker1: These burn rates are extraordinary.

Speaker2: And OpenAI's.

Speaker1: They can only be funded from customers or from raising money, and the customer flow is not there. They have to bet on the future with a spreadsheet.

And we all know that the bigger you are, the harder it is to grow; it's called limits to growth. They did add a couple billion in January, so good luck. And they've gotta withstand competition. And the third part is the timing of this whole thing, right? It isn't a characteristic of a good business model if you've gotta nail your CapEx planning with that level of precision, when AI by definition is this kind of singularity event that's hard to make predictions around, and you have intensifying competition.

Speaker2: Totally. And all of this also overlooks, [00:07:00] because I mentioned the mainframe stage, what we're also seeing: anecdotal pieces of evidence, with the OpenClaw and OpenAI thing this week being just one example. I've long been talking about this: we're looking at the mainframe-to-PC cycle of the tech industry.

So over 50 years, from IBM to Microsoft and Intel and Dell today; that took 50, 60 years. We are gonna see that again. Four years ago, when I started writing my daily posts on AI: Reset to Zero, I said that we would probably see it in three to five years. What we saw this week with OpenClaw, and I'm jumping ahead a little bit, is AI agents that one developer, a very accomplished software guy who had built a software company and sold it, created as a solo endeavor over three months. He launched it last November, by the way, ironically on the third anniversary of ChatGPT, and managed to basically get millions of developers around the world in three [00:08:00] months excited about AI agents, mostly running on local Mac minis, on your own computer, not in data centers.

That's one of the key takeaways I want to highlight. 

And as much as OpenAI is excited about agents, so is Claude, which is Anthropic, et cetera. Yes, we're gonna need tons of compute; there's no question.

But a lot of that compute is going to come from our computers on our desktops and our phones.

That's one of the big transitions here. And why is that interesting? Because the underlying assumption that the Darios of the world use is scaling in a mainframe way: throw huge amounts of data, huge amounts of compute, and huge amounts of model parameters at the problem to basically scale the models into these God models.

These AGIs, these geniuses in a data center. On the other hand, this was theoretical until this year, and now we're starting to see it with this OpenClaw thing: AI agents that are local. [00:09:00] Almost a million developers have gone nuts over it in the last two months, and me too. I'm not a developer, but I've been playing around: I bought two Mac minis, and I'm sure you're playing around with this as well. Most of the stuff it's calling is local. The data is all my personal stuff, under very strict safeguards.

Speaker1: Privacy matters.

Speaker2: Yeah, and for enterprise it matters even more. It totally matters. And those problems will be solved very quickly. This OpenClaw accelerates things, and it means that the data everyone is looking for, the huge burner of tokens, of intelligence tokens, is going to run more locally for inference in the next two or three years.

That doesn't mean that that the mainframe need for mainframe compute goes away. It's just that it, it dents the demand curve that the 

Speaker1: right 

Speaker2: hyperscalers are assuming right now. 

Speaker1: And under a 

Speaker2: flawed premise.

Speaker1: Yeah.