Cisco, the warning parable: a recap
Nvidia is actually pretty cheap, when you think about it
Capex spending is just getting started (relatively)
But intelligent? Seems like a stretch
AI is no Cisco
The (by now) conventional warning parable for the AI bubble is Cisco, the poster child for the DotCom crash.
The basic premise then was that networking was the new railroad, and you couldn’t possibly go wrong investing in the infrastructure layer of a transformational technology like “the internet.” So when you see Cisco, smash that “buy” button, over and over and over.
We now know how the story ends. You don’t get labels like “bubble” and “crash” for nothing.
You might argue that Cisco investors weren’t wrong, so much as early, but generally they’re remembered as wrong. The “DotCom bubble” crashed when .com companies didn’t generate the kinds of revenues that would justify all that infrastructure spending (even if the internet turned out to be an awfully big deal, in the longer run).
Back to the present, and the parallel to Nvidia is pretty straightforward. Nvidia is selling picks n’ shovels in a gold rush. If you want to build LLMs, you need Nvidia’s chips to do it, and in an arms race like this, you can’t fool around with a distant second best. From the investor’s standpoint, when you see Nvidia, smash that “buy” button, over and over and over.
So, it’s a bubble, right? All that spending on Nvidia and nary a dollar of revenue to show for it? The hyperscalers can’t possibly keep this up?
Idk. There are reasons to think so, but there are also some pretty big differences (for better and for worse).
Fortunately, the always excellent Michael Cembalest tallies up the score, and Random Walk will excerpt some of the highlights for your pleasure.
Nvidia is pretty cheap
One pretty good reason to think Nvidia is not a bubble is that Nvidia isn’t even that expensive a stock, relative to Cisco.
The thing about Nvidia is its incredible margins.
It doesn’t just sell lots of GPUs, it generates incredible profits, and that simply wasn’t the case with Cisco.
Nvidia’s share price (lhs) has grown with the forward estimate of its earnings . . . Cisco’s (rhs) did not. Cisco was all sales.
That’s a pretty material difference.
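To make that mechanic concrete, here is a minimal sketch of the multiple the chart is implicitly tracking. The numbers are made-up placeholders, not actual Nvidia or Cisco figures; the point is just that when price and forward earnings grow together, the forward P/E stays flat, and when price runs ahead of earnings (the Cisco pattern), the multiple does all the work.

```python
# Illustrative only: made-up numbers, not actual Nvidia or Cisco figures.
# Forward P/E = share price / forward (next-twelve-months) earnings per share.

def forward_pe(price: float, forward_eps: float) -> float:
    """Forward price-to-earnings multiple."""
    return price / forward_eps

# Hypothetical "earnings-backed" run: price 5x's, forward EPS 5x's.
print(forward_pe(100, 4.0))   # 25.0
print(forward_pe(500, 20.0))  # 25.0  -> multiple unchanged; the rally is "paid for"

# Hypothetical "all multiple" run: price 5x's, forward EPS flat.
print(forward_pe(500, 4.0))   # 125.0 -> the multiple did all the work
```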
Now, that raises the question of how sustainable those margins actually are, and that’s a function of how quickly the competition can, y’know, compete. As of now, there is Nvidia . . . and that’s it, really:
It’s not for lack of trying, but when you’re the best game in town by a longshot, you can charge whatever prices you want.
Competition is on the way (and Cembalest has some notes on that, as well), but consensus seems to think it’s at least two years away.
Capex isn’t that high
The other thing that Cembalest points out (which Random Walk has also observed) is that Nvidia’s customers have only just gotten started.
Compared to the 90s, capex actually seems pretty reasonable:
As a share of free cashflow, capex is considerably smaller than it was during peak DotCom frenzy.
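For what it’s worth, the metric itself is just a ratio, and here is a back-of-the-envelope sketch with made-up numbers. The exact cash-flow denominator matters for the level (conventions for “free cash flow” differ, and Cembalest’s chart may define it differently), so treat this as the shape of the calculation, not his methodology.

```python
# Back-of-the-envelope sketch with made-up numbers -- not the chart's actual data.
# The choice of cash-flow denominator changes the level, but the comparison
# works the same way either way.

def capex_share(capex: float, cash_flow: float) -> float:
    """Capex as a fraction of a chosen cash-flow measure."""
    return capex / cash_flow

# Hypothetical hyperscaler today: big spend, but much bigger cash generation.
print(capex_share(capex=30e9, cash_flow=100e9))  # 0.3

# Hypothetical DotCom-era builder: spend running ahead of the cash coming in.
print(capex_share(capex=9e9, cash_flow=10e9))    # 0.9
```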
Cembalest figures it will be at least another two years before the hyperscalers get to Cisco-levels of capex:
Capex spent on Cisco as a share of overall capex peaked at just under 14%, a level Nvidia will match sometime in 2026 (if everything goes as expected).
Also, IBM in 1969 was an incredible company.
AI Bubble or Not?
I’ve got no idea, but I will speculate.
By far the best thing that LLMs do now is generate code, and that’s a pretty valuable thing to do. If you think of how software has “eaten the world,” and then imagine software development becoming 100X cheaper, well that’s a lot of worlds to eat.
I mean, everyone understood that energy was important, but no one could imagine all the things we’d do with it, once it became so cheap and abundant.
What AI does not seem likely to do is become “intelligent.”1
Based on my paltry understanding of how LLMs work (and my limited experience tinkering with them), they are probability machines, or firehoses of statistically commonplace text. They can appear to reason because they are so good at replicating human speech (which may occasionally reflect reason), but LLMs are not in fact reasoning (or doing anything like it).
LLMs are just guessing the next likely word in the sequence based on prior sequences. There is no “understanding,” just math. There is no obvious path by which understanding even comes into scope (again, as best as my smol brain can tell).
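For the curious, here is a toy version of that “guess the next likely word” loop: a bigram counter. It is vastly cruder than a real LLM (no neural network, no attention, no tokens, just word counts), but it captures the basic move of picking a statistically likely continuation from prior sequences, and nothing in it “understands” anything.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` -- pure counting."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- the most common word after 'the' in the corpus
print(predict_next("cat"))  # 'sat' -- ties broken by first occurrence
```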
I kind of like this line of Alexey Guzey’s because it makes me feel less dumb for having the same reaction:
Lots of people working on AGI tell me that they’re trying to “solve reasoning”. I still have no idea what this means.
Same.
And I reiterate my pet theory that AI Safetyism is just a wounded status play by brilliant researchers who are less good at business, and felt badly that OpenAI the business was leaving them behind (in contrast to OpenAI the research team that made them front-and-center). “We’re not bad at business—we just think safety and the research should be the priority, for, umm, humanity’s sake!” Sure, pal.