Vincent D.

AI Bubble? What the Dot-Com Era Can Teach Us—and Why We’re Not There Yet

Part 1. AI Bubble?

The recent meteoric rise of Nvidia has sparked widespread discussion of an AI bubble, drawing parallels to the infamous Dot-Com bubble. In this analogy, Nvidia is the new Cisco, and the market is about to implode exactly as it did in 2000.

 

I recognize the similarities between the transformative impact of the internet around 1995-2000 and how AI is set to change our lives today, with Cisco's technology at the heart of that era just as Nvidia's is now. Still, I believe that assuming we are living through the same scenario is an overly simplistic comparison that could lead to mistakes. I am aware that the mantra 'this time is different' typically doesn't yield much success in investing. However, I also understand that assuming historical patterns will always repeat themselves is a reductive approach that often leads to flawed assumptions.

 

As a tech investor since 2008 (a bad year to start!), but also as someone who has worked with AI since 2004, my thesis is threefold:

 

  1. We are not yet in a real AI bubble, although I agree there is a frenzy around everything that revolves around data centers.

  2. Nvidia is not Cisco, despite the similarities.

  3. Things will unfold differently in the AI bubble than in the Dot-Com bubble, although both could come to similar ends.

 

I very rarely give my opinion outside of the market's big picture. Talking about AI is already closer to discussing individual stocks than what I usually do, but I thought that some of the research I did on this topic, and some of the knowledge I have in the field of AI, could add color to the discussion. I also think that since AI will transform our lives much as the internet did starting in 1995, it is important to understand the Dot-Com bubble well enough to grasp what went wrong and what went right. Because, let's not forget: although that bubble blew up spectacularly, some of the most formidable companies of today are its direct legacy.

 

The Dot-Com Bubble: How It Differs from Today's Market

The release of the Mosaic web browser in 1993, followed by the decommissioning of NSFNet two years later, paved the way for the web to become fully available for commercial traffic by 1995. This period marked a transformative phase in technology that immediately dominated the news, with promises that the internet would bring the world closer together. At the time, the concept of the internet and its potential impact on our lives was still a nebulous idea for many. The only comparable feeling of confusion I've had in recent decades was when a colleague at Stanford suggested I open a Facebook account, back when it was just a small startup on the third or fourth floor of a small building at the beginning of University Avenue. I struggled to understand what I would do with that account, and my friend couldn't quite explain it either. If you weren't there when the internet started but witnessed the beginning of Facebook, that early confusion about Facebook was very much how the internet felt when the media first introduced it.

 

For the internet, besides email, one of the first clear concepts to emerge was that companies would do retail business there. This led to a first wave of dot-com companies built on that paradigm: web platforms for selling stuff (eBay), online stores (Amazon), web payment companies (PayPal), and search engines to find those stores (Yahoo). I know something about this trend, since I launched my first real startup in that era. It was an online platform that graphically recreated a shopping mall, letting you navigate through different stores. Google was not a thing at that time, and finding company websites was not always easy, so this online mall allowed people to go from one shop to another without having to search. The business failed mainly because I was simply too young. As a 19-year-old kid pitching to VPs of sales at big department stores, did I really have a chance?

 

Interestingly, the Dot-Com bubble did not start with companies in these fields. Although the web-for-retail business model didn't take off until around 1997, even before the market settled on that trend, everyone recognized that the internet was fundamentally about networking. Cisco quickly emerged as the key player, providing essential components for the new network that was expected to change the world. Here is a look at Cisco from the dawn of the internet era up to 1997.

From the beginning of the internet era to 1997, a period before the real Dot-Com bubble, Cisco climbed steadily, its price rising 330% from where it opened in 1995. Other companies at the technical center of the web revolution saw similar rides: Microsoft, which made the operating system of the computers people would use to surf this new web, gained around 180% over the same period.

 

Does this remind you of something? Measuring Nvidia's run since ChatGPT kicked off the AI revolution would not make much sense, since the stock was then at the bottom of a very significant bear market, so the price action we have seen since is a mix of recovery and the new AI trend. But if we take the price Nvidia traded at before that huge correction, it has appreciated 304%, while the same Microsoft that benefited from the web trend in 1995 has also appreciated significantly in this AI one (although less so, given its already very large market cap), since it provides the cloud environment of this revolution.

 

There is a similarity in the numbers here, but I don't think that is the main point, since it could simply be coincidence. The real point is that in both cases, as soon as a transformational technology emerged, and even before we figured out exactly how the revolution would change our lives, the companies making the technological core of that revolution were the early winners.

 

To me, this is one of the first things telling me we are not in a real bubble yet. Nvidia has seen incredible growth, but so have its profits and margins. There is not much that is speculative here beyond the question of whether it is sustainable. We could debate whether its valuation is ahead of itself, and I think it all depends on that one question: "Is the growth sustainable?" But other companies inside this trend, like Vertiv Holdings, which makes cooling solutions for these data centers, have also seen considerable rises, and to me this is really aligned with what we saw in the 1995-1997 period of the Dot-Com era, when the early winners were real companies with strong revenue acceleration. Then came the bubble…

 

The Bubble

As the vision of the web as a platform for retail business began to emerge, VC enthusiasm for web startups aligned with that trend surged in 1997. This was the perfect timing as many young engineers, inspired by tales of garage-based startups turning their founders into millionaires within months, were dreaming of launching their own dot-com ventures. Additionally, for the first time in history, creating a technological startup was relatively inexpensive since it primarily involved programming and web design. This significantly lowered the barriers to startup creation.

 

This combination sparked a frenzy of new companies aiming to sell products online, where the predominant business model involved spending heavily on marketing to acquire customers, hoping for future revenue. This approach laid the groundwork for the later crash.

 

In 1998, the Fed lowered its funds rate, increasing available capital and fueling the bubble. Bankers were keen to take these new companies public, benefiting from easy transactions that yielded substantial fees. In that market, making money was not important. In fact, the founder of CheapTickets, which was trying to compete with the hype around Priceline by selling discount airplane tickets, said something that just doesn't make sense today: "We've got a policy here at CheapTickets that we need to make money. But it hurts our valuation."

 

This marked the point where the bubble gained full momentum, characterized by easy money, frivolous investing, and speculation. One story I particularly enjoy, which perfectly captures the sentiment of that year, involves the CEO of theGlobe.com. After breaking the record for the largest first-day trading gain in 1998, he famously stated in an interview, clad in tight plastic pants, "Got the girls, got the money. Now I'm ready to live a disgusting, frivolous life!" That period also saw the start of frequent acquisitions among companies. Although Microsoft purchasing Hotmail for $400 million was one of the most notable deals, Yahoo! became synonymous with acquisitions at the time. In fact, I heard from major VCs in California that during those years, some companies were explicitly created to be acquired by Yahoo! (Scoop: similar strategies were planned around Facebook and VR metaverse technology.)

 

This trend accelerated until it reached its zenith, almost exactly coinciding with the IPO of Pets.com, which subsequently became emblematic of the ensuing crash. With only $619K in yearly revenue and a market cap of $400 million at its IPO, it traded at an unprecedented revenue multiple of roughly 646X ($400M / $619K ≈ 646). Such figures are unlikely to be seen again in the stock market. It was probably when the effects of all the champagne wore off that bankers, brokers, and investors, nursing headaches, realized the absurdity and filth of the market. Quietly but inevitably, the bubble popped a month later.

 

AI Bubble ?

There are many more details and anecdotes that I could have included in the previous section to deepen your understanding of that bubble. I also only briefly summarized the forces that contributed to its inflation. However, I wrote that section with one goal in mind: to illustrate how it was a period of easy money, where everyone—VCs, bankers, brokers, investors—was eager to invest blindly, ignoring nearly every conventional metric. During this time, companies with no earnings and often no sales saw their stock prices soar based solely on narratives. It was also an era of blind, euphoric overspending. For example, the startup Loudcloud, founded by Marc Andreessen, threw a party with a live elephant and a staff of 200 before they even had a product! This environment was very different from that of 1995-1997, and even more so from our current environment.

 

Rates are currently incredibly high by the standards of the last 20 years, the market is risk-averse, and companies that don't generate sustainable profits see their valuations plummet. Most investors are stashing their cash in large caps. In fact, many small caps have taken a beating since 2021 to a degree reminiscent of the bear market that followed the Dot-Com bubble. This is not surprising, because the only trading environment resembling the Dot-Com bubble we've seen since then was the post-Covid stay-at-home trade that culminated in the SPAC mania. If you're here with us, chances are you lived through that environment. Not only could Reddit trading groups pump stocks, but investing was often reduced to the company's narrative as the sole basis of its valuation. IPOA, IPOB, IPOC, Chamath here, Cathie Wood there—claims that a stock represented the future of XYZ, 3D printing, or automated electric cars were buzzwords sufficient to pump it. If you lived through those times, you know that we are now in a vastly different environment, far from a euphoric bubble. In fact, apart from the poor macroeconomic conditions, we're likely not in a bubble at all because we're still paying the price for the exuberance of 2020-2021. Force people to stay at home, give them cash and a free trading account, and you get what we got: a euphoric market. I was guilty too. Two years later, the market is still hungover.

 

Outside of the stock market, the current picture is not much better. VC funding is in hibernation. Companies that don't show strong growth and aren't cash flow positive struggle to raise cash in private markets. Those in a position to raise cash do so at revenue multiples that are very low by historical standards, and even more ridiculous compared to the multiples VCs were offering during Covid [I can't provide a reference, but this comes from a private report I got from a very well-known venture capital advisory firm]. The number of new IPOs is also anemic, which is ironic: the private equity world is currently struggling, yet companies still prefer it to going public.

(Data from Stockanalysis.com, chart from WU)

 

The difference between the effervescence of the 1997-2000 stock market and today's can also be seen objectively through a spectrogram of the market. Spectrograms are an engineering tool for analyzing the power at each frequency of a signal, such as an audio recording. In that field, for example, a strong male singer translates into a lot of power at low frequencies on the spectrogram. It's the same for the stock market: a market moving strongly and very rapidly will show a lot of power at high frequencies over that period. Looking at the 1997-2024 spectrogram of the S&P 500, we can see that the level of power across almost every frequency during the late-1990s uptrend is something we haven't really seen in an uptrend since. This contrasts enormously with the current period, whose moves carry power at a very low level, more reminiscent of the 2004-2007 and 2017 eras.
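For readers who want to reproduce this kind of chart, here is a minimal sketch of the idea, not the exact pipeline behind my figure. The file name sp500_daily.csv and its Date/Close columns are hypothetical stand-ins for whatever price series you have, and the window parameters are illustrative choices.

```python
# Minimal sketch: spectrogram of daily S&P 500 log returns.
# Assumes a hypothetical CSV with "Date" and "Close" columns.
import numpy as np
import pandas as pd
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

prices = pd.read_csv("sp500_daily.csv", parse_dates=["Date"]).set_index("Date")
returns = np.log(prices["Close"]).diff().dropna().to_numpy()

# fs=252 treats one trading year as the unit of time, so frequencies read as
# "cycles per year"; a ~1-year window with 7/8 overlap smooths the picture.
f, t, Sxx = spectrogram(returns, fs=252, nperseg=256, noverlap=224)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time (years into the sample)")
plt.ylabel("Frequency (cycles per year)")
plt.title("Spectrogram of S&P 500 daily log returns")
plt.colorbar(label="Power (dB)")
plt.show()
```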


Also, if we were in an AI bubble, shouldn't AI companies be thriving? This is far from the reality. C3.ai is down 83% from its all-time high; Snowflake is down 68% from its ATH and 42% from its 2024 high. Alteryx, which used to be an AI darling trading at 20X revenue even before the Covid bubble, was acquired this year by private interests at a depressing 4.4X revenue multiple. Even Adobe, which I believe is one of the first companies to truly offer AI as a fully integrated solution in its products—a model I see as how AI will reach us rather than through that poor chatbot interface—saw a 32% retracement this year before rebounding in June. It still trades considerably below its 2021 valuation.


Thus, the current macroeconomic environment makes financial markets and investors very risk-averse, leading to a stock market that is far from exuberant. But I also think one of the reasons we are not in an AI bubble yet is that we still haven’t had the “web as a platform for retail business” moment for AI.

 

How Will AI Really Affect Our Lives?

In November 2022, when OpenAI released their ChatGPT interface to the public, it served as a wake-up call, signaling the onset of a genuine AI revolution. Even some of my colleagues, who are fundamental AI researchers, were astonished by what they witnessed. Most had not anticipated such levels of intelligence for at least another decade. It became immediately apparent to all that jobs involving text generation would be impacted. However, I think we are still grappling with how AI will truly affect us, because let's establish one thing: the chatbot interface sucks, and this is not how AI will be integrated into our lives. Don't get me wrong, I am an avid user of ChatGPT and I appreciate its ability to simplify various aspects of my life. But I also believe that interacting through a chat interface is not the most natural way to utilize AI. Initially, it was built as a kind of demo to showcase the capabilities of the OpenAI API. It went viral and has since become the face of AI. Yet I think its lack of integration with our personal computers and the constant back-and-forth questioning make it an awkward way to interact with AI. AI can be so much more! Using the Dot-Com bubble as an analogy, what I say about the ChatGPT chatbot interface is akin to what I would have said in 1996 about surfing the web by typing full HTTP addresses into a browser: "The web has tremendous potential, but the current method of navigating it sucks." Eventually, Google resolved that issue…

 

Adobe has already shown us how a large company can cleverly integrate AI directly into its products to simplify usage while enhancing its tools' capabilities, but I am convinced this is just the beginning. I expect Apple, which may not often be a pioneer in technology but is a master at creating integrated products, to teach us a lesson later this year on how AI should be subtly yet effectively incorporated. My belief in the human capacity for innovation also leads me to anticipate that soon, young innovative companies will show us intuitive ways to harness the incredible potential of what OpenAI has created across various aspects of our lives. For instance, at a robotics conference I attended in Japan last May, there was an overwhelming number of papers focused on programming robots using the OpenAI API. Over the past decade, Robotiq has led the way in programming robots through direct hand guidance. However, we may soon see the introduction of voice-driven robot programming.
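To make that last idea concrete, here is a hedged sketch of how a voice- or text-driven interface might turn a natural-language instruction into a structured robot command through the OpenAI API. The JSON schema and the send_to_robot() helper are hypothetical, the model name is just an example, and this reflects the general pattern from those papers rather than any specific product mentioned above.

```python
# Hedged sketch: natural language -> structured robot command via an LLM.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

instruction = "Pick up the red part and place it in the left bin."

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        {"role": "system",
         "content": "Translate the user's instruction into JSON with keys "
                    "'action', 'object', and 'destination'. Reply with JSON only."},
        {"role": "user", "content": instruction},
    ],
)

command = json.loads(response.choices[0].message.content)
print(command)            # e.g. {"action": "pick_place", "object": "red part", ...}
# send_to_robot(command)  # hypothetical downstream controller call
```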

 

These are indeed lofty dreams, but we are still entrenched in the 1995-1997 phase: AI will transform our lives, yet the exact nature of this change remains unclear. Therefore, currently, the most viable and logical investments in this trend remain everything that revolves around data centers, including GPUs, as well as cooling solutions and energy. However, it's only a matter of time before a shift occurs. Will Apple next fall provide a vision for AI usage that captivates the world and potentially ignites a bubble? Or will the innovation emerge from a lesser-known company diligently working in a garage? It's uncertain, but one thing is sure: it will happen. When it does, it will likely be the catalyst for a genuine AI bubble. The unfolding will differ from the Dot-Com and SPAC bubbles, hopefully because we have learned lessons from those times. The broader economic conditions might also impose a unique timeline, potentially introducing a recession-induced bear market between the two phases. We don't know yet. But at that moment, probably a year (or two) away from discovering how AI will profoundly change our lives, and with the Fed poised to cut rates, we could very well see the next phase bearing some similarities to the 1997-2000 phase of the Dot-Com bubble.

 

However, I must caution against drawing direct comparisons, as the AI bubble will chart its own course. Using historical years as benchmarks can be misleading, just as equating companies can be. For instance, let me be clear: Nvidia is not Cisco.


Part 2. Nvidia is not Cisco

Well, I am comfortable with the analogy that Nvidia is to AI what Cisco was to the Dot-com bubble if it simply serves to illustrate how central that company is to the AI revolution. However, despite some similarities with Cisco, I strongly believe that Nvidia has a significantly lower risk of losing its competitive edge.


From 1985 to 2003, networking was a rapidly evolving field, with considerable complexity in the late '90s around embedding routing functions directly into silicon. Cisco, loosely a spin-off from Stanford (though its founders were pushed out over IP infringement), quickly became a leading force in networking. Much like Nvidia with AI, by the time the internet became widely accessible, Cisco was already an established market leader. Nevertheless, the theories underpinning networking and routing were not firmly established, which left Cisco vulnerable to competition. This susceptibility was compounded by the fact that networking technology was not rocket science: a dynamic startup could swiftly challenge the technologies of a larger corporation.


Indeed, in some areas, pushing the boundaries of science within university labs or a small startup is nearly impossible due to the sheer scale of funding required. The development of advanced chips is one such area where universities can no longer make significant contributions—a topic I'll return to later. However, networking technology did not fall into this category. By the late 1990s, with the widespread adoption of Internet protocols, many academics and emerging companies were developing innovative methods to process IP and MPLS packets. Juniper Networks, for instance, launched its first product in 1999 and by 2001 had captured about 30% of Cisco’s market share among service providers.


The deflation of the Dot-Com bubble, which led to the bankruptcy of many tech companies and reduced spending by the survivors, significantly affected Cisco's growth. Beyond those unfavorable market conditions, the increased competition stemming from its lack of a definitive moat, along with the industry's shift toward a more software-centric approach, permanently impacted Cisco's market position. While it remains a giant in the industry today, it is now viewed as a more measured force, no longer a contender for the title of the world's most valuable company at the center of a revolution.

 

This is where, for me, the comparison between Cisco and Nvidia diverges. Although there is a parallel to be drawn, as both were central technological players enabling their respective revolutionary industries, Nvidia holds a much more robust fundamental position in the face of potential future competition. My conviction is based on three independent facts that I believe confer a strategic advantage to Nvidia.

 

1. A GPU is perfectly suited for computing what is at the core of AI, and this isn't likely to change soon.

 

Although GPUs were initially a good solution for mining Bitcoin in the early days of blockchain technology, they were quickly outclassed by application-specific integrated circuits (ASICs), silicon chips designed specifically for the SHA-256 hashing algorithm used by Bitcoin. To illustrate how obsolete GPUs have become for this application: according to the NiceHash estimator, it would take a top-of-the-line Nvidia RTX 4090 around 172 years today to mine the equivalent of one Bitcoin.
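For context, the computation miners race through is nothing more than a double SHA-256 over a candidate block header, repeated with different nonces until the hash falls below a target. The sketch below illustrates the idea with a made-up header and an absurdly easy target (real headers are 80-byte binary structures). A fixed, simple operation like this is exactly what an ASIC can bake into silicon.

```python
# Minimal sketch of the Bitcoin mining loop: double SHA-256 nonce search.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

header_base = b"example-block-header"   # hypothetical; real headers are 80 bytes
target = 2 ** 240                        # absurdly easy target, for illustration

nonce = 0
while True:
    digest = double_sha256(header_base + nonce.to_bytes(4, "little"))
    if int.from_bytes(digest, "big") < target:
        break
    nonce += 1

print(f"found nonce {nonce}, hash {digest.hex()}")
```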

 

Will GPUs suffer the same fate with AI? It is very unlikely. The reason GPUs were rapidly outclassed by ASICs in Bitcoin mining is that the SHA-256 algorithm was not well suited to a GPU. GPUs are not meant to compute complex equations but to compute a phenomenal number of simple ones in parallel. In fact, the original purpose of a GPU, as its name suggests, was computing graphics, such as the rapidly changing images in computer games. A computer image, as in gaming, involves a vast number of simple vector equations, essentially linear equations of the form y = mx + b that we all learned in high school. While the more complex Bitcoin algorithm was not well suited to that, the component at the base of all AI networks, the artificial neuron, is essentially just this type of linear equation. This is not likely to change anytime soon: the artificial neuron, originally derived by McCulloch and Pitts in 1943, has been the de facto standard in AI since the original perceptron introduced by Rosenblatt. Although AI may seem complicated, its primary component is not. Let's consider a one-neuron example of an AI trained to recognize a person's sex based on their weight and height. If we took that information for 100 people, the data would probably look like the chart below, with a good decision line represented by the green line in the middle.


I know I mixed metric and imperial measurements, which is the most Canadian thing I could have done, besides not locking my door when going out for the day. But the artificial neuron equation of that line,

$$y = \mathbf{W}\mathbf{x} + b,$$

would be, in this specific case,

$$y = \begin{bmatrix} -0.138 & -0.043 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + 28.968,$$

which, if we skip the vector notation for more conventional math, becomes

$$y = -0.138\,x_1 - 0.043\,x_2 + 28.968.$$

So here is our artificial neuron, which should be able to infer with around 85-90% accuracy, using your weight (x1) and height (x2), whether you are a woman or a man. You can try it for yourself: a negative value should indicate a man and a positive one a woman.
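If you want to try it without a calculator, here is the neuron above as a few lines of Python. I am assuming, consistent with the mixed units mentioned earlier, that x1 is weight in pounds and x2 is height in centimeters; the weights and bias are the ones from the equation.

```python
# The one-neuron classifier above. Assumed units: weight in pounds (x1),
# height in centimetres (x2). Negative output -> man, positive -> woman.
def neuron(weight_lb: float, height_cm: float) -> float:
    w1, w2, b = -0.138, -0.043, 28.968   # weights and bias from the text
    return w1 * weight_lb + w2 * height_cm + b

print(neuron(180, 178))   # e.g. 180 lb, 178 cm -> negative (man)
print(neuron(130, 162))   # e.g. 130 lb, 162 cm -> positive (woman)
```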


So you see, the basis of AI, the artificial neuron, is very simple and similar to the equation of a line displayed on a screen. This simple example uses only one neuron, while complex modern AI networks use millions (even billions) of them. But the good news is that GPUs were literally made for computing and rendering millions of vectors to a screen. The hard part is the training, which is a fancy way of saying finding the 'W' and 'b' coefficients (we call them weights and biases) in that equation that separate the data (in our case, the -0.138, -0.043, and 28.968). Once you have these parameters, using the model (which we call inference) is pretty straightforward, as it is just plugging in numbers (but a lot of numbers!).
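To show what "finding W and b" looks like in practice, here is a minimal training sketch. The dataset is synthetic (I made the distributions up for illustration), and I use scikit-learn's logistic regression, which fits exactly this kind of linear decision boundary; the recovered coefficients play the role of the weights and bias above, though they won't match the text's numbers exactly.

```python
# Minimal sketch of "training": recovering W and b from labelled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
men = np.column_stack([rng.normal(190, 25, 500), rng.normal(178, 7, 500)])
women = np.column_stack([rng.normal(150, 22, 500), rng.normal(164, 6, 500)])

X = np.vstack([men, women])            # columns: weight (lb), height (cm)
y = np.array([0] * 500 + [1] * 500)    # 0 = man, 1 = woman

clf = LogisticRegression().fit(X, y)
print("W =", clf.coef_[0], "b =", clf.intercept_[0])  # learned weights and bias
```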


Andrew Ng, a professor who applied AI to robotics, was one of the first to advocate for using GPUs, around 2010 as far as I remember. At the time, while serving as a professor at Stanford, he also led the Google Brain group. Throughout his tenure (before he transitioned to Baidu), Google achieved very impressive AI advancements by leveraging its substantial computing power. Faced with results that seemed achievable only by large corporations, some academic colleagues began to voice concerns loudly that it was becoming impossible to keep pace with their research. In response, Andrew began to promote how easily accessible GPUs, with an architecture ideally suited to computing AI, could accelerate AI training by a factor of 10. Here is one of his original slides.

Well, that solution addressed academic researchers' problems until big tech companies started building super GPU clusters, pushing capabilities once again beyond the reach of academia, but that's another story.


In summary, unlike with Cisco, where the methods of routing and switching were not set in stone and eventually underwent a paradigm shift, the foundation of AI has been the artificial neuron since 1957. GPUs are perfectly suited to it, and it is unlikely that we will see a change in this compatibility, despite ongoing progress in how we connect these neurons together. Academia has explored alternatives to the neuron, the most successful attempt, in my personal opinion, being the SVM, but we always return to McCulloch and Pitts' neuron. Since it mirrors the functioning of our own brains, I believe we haven't yet reached the full potential of artificial neurons.

 

2. Progress in Advanced Semiconductor Chips is Beyond the Reach of Academia and Startups.

 

Progress in semiconductor technology today is incredibly challenging. The university where I am a professor has one of the most well-equipped 'clean rooms' (the places where we make chips) you can find at any Canadian university. The Canadian government provided substantial funding for its construction with the stipulation that it would be accessible to researchers from other institutions. Despite this, our researchers in the field cannot even begin to approach what companies like Nvidia, AMD, or Intel are producing in terms of transistor density, chip complexity, operating speed, and transistor size. I am not an expert in this area, but I would venture that our facilities are more suited to producing chips akin to those from 1998. Although there is still very interesting research that can be done with this expensive equipment, no one within our walls is attempting to catch up with Nvidia. As I mentioned earlier, there are fields where research has become so difficult and costly that innovation is almost exclusively possible within large companies, and the production of highly complex silicon chips for AI is certainly one of them. This indicates that the scenario in which Cisco lost 30% of its market to a two-year-old startup is unlikely to happen to Nvidia. Just consider how many decades it took for AMD to surpass Intel in CPU performance, and you get a glimpse of the effort required to pass someone in this field. AMD or a big tech giant could theoretically produce a chip that is slightly better, but Nvidia is well shielded from being outclassed by a small startup, in a way Cisco never was.

 

3. Nvidia's Software Ecosystem Around AI Makes It the De Facto Choice

As researchers rapidly embraced GPUs for training AI networks in the early 2010s, Nvidia was incredibly quick to recognize the potential and almost immediately began investing in software to support AI. Although this might seem like the obvious approach now, at the time their sales of GPUs for AI were a niche market that probably wasn't even generating enough revenue to cover the Christmas party. In fact, they sponsored so many labs (including mine) by giving away GPUs that they were certainly cash flow negative in that field for a long time. Despite this, they rapidly developed CUDA, now the software layer that makes Nvidia the easy choice when it's time to build and train an AI on a GPU cluster. But I don't think CUDA is their main strength on the software side! If other GPUs were to significantly outperform Nvidia's, CUDA alone would not be a strong enough reason to prevent a switch, despite their considerable lead with this software. However, Nvidia played another card that added even more value. Seeing the AI trend emerging, they began to fund top experts across a vast array of fields that could benefit from AI, to gather data and train incredibly powerful networks that could unlock various applications. These software applications, I believe, are what most protect Nvidia from competition.
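A small illustration of how sticky this ecosystem is: in mainstream frameworks, Nvidia's CUDA backend is the default target that tutorials, pretrained models, and libraries assume. A minimal PyTorch sketch (the one-neuron model again, purely illustrative):

```python
# The same model code runs on CPU or an Nvidia GPU, but "cuda" is the backend
# the whole ecosystem is written against.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(2, 1).to(device)           # our one-neuron example again
x = torch.tensor([[180.0, 178.0]], device=device)  # weight (lb), height (cm)
print(model(x))                                     # inference runs wherever CUDA is
```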


One thing you need to know is that training an AI is hard. Yes, it is mathematically much more compute-intensive than inference, but the process also sucks because of several other factors, including, for example, overtraining. Beyond the training itself, what makes it complicated is gathering and, in many cases, labeling the data. For example, in my field, anytime we start building a dataset of a robot grasping objects, we usually manage a pace of about 1,000 training examples per month. Sometimes we also realize after two weeks of training that some of our data are no good, because a sensor has worn out or wear in a mechanical component has caused the data to drift, making it unrepresentative of what came before. One thousand is a big number, but as a count of training examples, it is a very small one.


So, many people would love to use AI without going through the training burden. The good news is that Nvidia has our back and has had top researchers in many fields build things for us. Moreover, using them is free most of the time. Here are some examples from my field. We wanted to use 3D vision to find, from a 3D image, the most optimal place to grasp an object for grasp stability. In simple words: where should one grasp an object to ensure it doesn't drop? I could have spent a year collecting data and training an AI network, or I could use Nvidia's GraspNet. We chose the latter, and within a day of work, we were grasping almost any object perfectly. Similarly, Teradyne, a partner of ours, recently updated their robot controllers to be AI-ready. One of the first features they released lets the controller compute and rapidly adapt a robot trajectory to avoid collisions. They were able to release that feature quickly without building much themselves, because they decided to use Nvidia's Isaac Manipulator, an Nvidia robot software package made specifically for that. Also, read here what Peter Soetens, CEO of Intermodalics, says about how Nvidia Isaac Sim and PhysX are currently behind every humanoid robot demo you are seeing. These were examples from robotics, but if you work in AR, VR, and telepresence, you have Nvidia Maxine to help you. If you work in genetics R&D, you can use their Clara suite for genomics.


I hope you can see that while CUDA is amazing and efficient and confers a significant advantage on Nvidia, the ten-year head start the company has in building AI software across almost every field is simply impossible to catch up with. Moreover, almost all the best researchers are already working for them on even more software and features. These specialized software packages make them the de facto choice for building AI solutions, as they often provide a shortcut through the development and training process. I cannot foresee any other company closing this gap, and this, for me, is the ultimate barrier that protects Nvidia and makes competition futile in many cases, even if AMD were to release a slightly better chip next year. Very few AI applications require a network of the size OpenAI is building; in many applications, the network is not that large, so a bit more or less computing power would not significantly change the outcome. But the software does, and it is transformational.

 

Nvidia: Simply a Better Version of Cisco

All of this together tells us that Nvidia is much more immune to competition than Cisco ever was. Like with anything that rises sharply, there will be times when Nvidia's valuation gets ahead of itself, and it will see pullbacks that also lead to great discount opportunities (I started writing this text when Nvidia was near $140, and it went down 25% from that point). Cisco had two such moments in 1997 and 1998 when it pulled back 40% before resuming its uptrend to all-time highs. Following analysts like the IOFund, who know the stock well, will help you form your own opinion of where we are in its cycle and relative valuation. If the AI bubble really goes into full swing, it could continue to rise before seeing a sharp sell-off of a magnitude reminiscent of what we saw in 2022. But one thing is clear to me: Nvidia will not be commoditized as easily as Cisco was. Rarely in my life have I seen a business as well protected from losing its moat. Some of these protections are inherent to the field (GPUs will remain relevant, and few companies can design chips like Nvidia), but Nvidia’s CEO and executive team have worked hard for nearly a decade to create some of that protection (software ecosystem), which is proof of competency and strong vision—qualities that, for me, add to Nvidia’s moat.

 

So, I will come back to where we started: I'm okay with the comparison between Cisco and Nvidia as a simplification that both are companies at the center of a technological revolution. I also think that the fact that both rapidly emerged as leaders, while we were still trying to understand who would benefit from the internet or AI, led their stocks to follow very similar early paths in their respective trends. But Nvidia is simply a better version of what Cisco was and should resist competitors for longer. If it ever crashes in an eventual bubble, we should then compare Nvidia more to the Microsoft of the '90s than to Cisco: a company that bounces back to make new highs as the real AI trend quietly continues beyond a bubble.

 


Part 3: Learning from the Dot-Com Bubble

 

The previous section demonstrates that straight comparisons, while containing elements of truth, can be misleading. Assuming an identical path for a company based on past trends can lead to inaccurate assumptions about its future trajectory. Similarly, we should be cautious when drawing timeline comparisons between the AI trend and the Dot-Com bubble—an approach I often see in various analyses. The adoption rate of the internet, its evolutionary path, the macroeconomic landscape, the banking system, and the private investment ecosystem during the internet boom created a unique timeline that is unlikely to repeat in the same way this time. Therefore, instead of simply mapping timelines, investors should focus on understanding what worked and what failed during the Dot-Com era to avoid the pitfalls that may arise as the AI trend evolves.

 

Much of the narrative surrounding the Dot-Com bubble portrays it as a simplistic story of naive startups attempting to deliver value that people were not yet ready to accept. This narrative, which I referenced earlier to illustrate the euphoric spirit of that era in contrast to where we are now, is epitomized by Pets.com, the poster child of that bubble. In a frenzy, it ballooned to an absurd valuation for a service—shipping dog food to homes—that was a low-margin business with minimal online demand. Moreover, with such a mascot, was this company truly set up for success?


The reality is a lot more nuanced: in fact, 48% of dot-com companies survived through 2004, albeit at lower valuations, and some of today's big tech giants are direct results of this era. There were also companies that died despite being genuinely good; they simply could not survive the bear market, because low valuations didn't allow them to raise further cash.

 

In fact, looking back at this era, we can fit most of these companies into seven classes that we will likely see again when the AI bubble goes full blown.

 

1. Stupid Companies That Were Meant to Fail

  • Example: Pets.com, Kozmo.com

  • Description: These companies were built on unsustainable business models or flawed products, often with little to no revenue, and focused more on hype and marketing than on creating a viable product or service. They were almost destined to fail, as they lacked a clear path to profitability. During the bubble, investors were so eager to back anything internet-related that even the most impractical ideas received funding. Take Kozmo.com, for example—a delivery service that promised to deliver anything (like a single candy bar or a video game) within an hour, without any delivery fees. Despite its impracticality, Kozmo.com managed to raise $280 million in funding.


    We all like to think we won't get caught buying into such companies in the AI trend, but consider some of the SPACs or emerging tech stocks you bought in 2020—be honest with yourself! I remember seeing a well-known (but bad) analyst in 2021 touting an "exoskeleton megatrend." Knowing enough about robotics, I could tell there was no such thing as an "exoskeleton megatrend": exoskeletons are a very niche market with technology that is far from mature. Yet many people bought into RWLK, a stock that eventually changed its ticker and is worth almost nothing today. This made me realize that my expertise in one field protected me from buying a very bad stock, but I may not have the same depth of knowledge in other fields related to the stocks I own. I have to admit, my understanding of genetically engineered animals for human consumption is limited, so it may have been unwise to hold a stock like AquaBounty Technologies (which, by the way, is down 98.51% over the last five years). The key lesson here is that "real" expert analysts' opinions really do matter when you don't fully understand a field. There will undoubtedly be bad products in the AI trend, and I hope to avoid owning the companies that make them. Did you see that AI pin, the worst product Marques Brownlee has ever reviewed? (And keep in mind, he's reviewed the Dyson earphone-air-purifier…) Thankfully, they aren't on the stock market.


2. Risky Companies That Had a Good Trajectory But Couldn't Raise Further Cash

  • Example: Webvan

  • Description: These companies had potential and were on a promising path, but the severe valuation corrections during the bubble burst made it impossible for them to raise the necessary capital to survive. Despite having solid business models or innovative ideas, they were caught in a liquidity crunch. This is a crucial lesson from the Dot-Com bubble that I plan to keep in mind if we enter an AI bubble. Even if I believe a company offers a unique product with strong potential, I will exit if we transition from an easy money environment to a tough money environment that triggers a stock market correction. Companies that burn through cash too quickly may not survive even if they are good. The 2022-2023 environment impacted many companies in this way, both in the private and public sectors.


3. Companies That Became Obsolete in a Fast-Changing Environment

  • Example: AltaVista, Blockbuster, Nokia, Palm

  • Description: Some companies were simply outpaced by rapid technological advancements or new competitors. They couldn't adapt quickly enough to the changing environment, leading to their obsolescence. I believe this represents one of the most challenging pitfalls to avoid in the eventual AI trend. Some of these companies may be well-established, with solid revenue streams, built to solve a problem that could later be addressed at a higher level.


    You might remember my first startup I mentioned earlier, which aimed to be a web-based shopping center. The idea was good at the time, as it addressed the challenge of finding products online in a difficult search environment, and I could have probably made some decent money after successfully selling the concept to companies—until Google launched and rendered it obsolete. I know this well because there was a local company called Copernic that developed software to search across all search engines for better results. They were reportedly offered $100 million by Microsoft at their peak, but their success was short-lived as they were quickly overshadowed by Google.


    I also had a roommate at Stanford in 2008 who noticed that Facebook didn’t have an iPhone app. He created one, and it became one of the top 3 apps in the Apple Store for a while—until Facebook released its own app. These are simple examples, but I’m certain the AI trend will produce companies that emerge and become highly successful by solving a problem that, at some point, will no longer exist. This happens in every field, but in a new technological megatrend, these shifts occur much faster. After all, who remembers Netscape, even though it had the biggest single-day IPO price increase at that time?


4. Companies That Became Commoditized

  • Example: Cisco, Sun Microsystems, many early internet service providers (ISPs)

  • Description: Cisco Systems, as discussed earlier, along with Sun Microsystems, are prime examples of companies that, despite their initial dominance, faced significant challenges as their core products became commoditized. During the Dot-Com era, Sun Microsystems was a key player in the internet infrastructure boom, supplying the hardware and software that powered many of the web’s largest sites. However, as the technology industry evolved, the high-performance hardware that Sun specialized in became increasingly commoditized. Competitors like Dell and HP began offering cheaper servers that were sufficient for most applications, while the rise of open-source software and Linux reduced the demand for Sun's proprietary solutions. Additionally, the shift toward cloud computing and virtualization further diminished the need for Sun’s hardware. As its products lost their competitive edge, Sun Microsystems struggled to maintain profitability. At one point, its stock traded 98% down from its peak. In 2010, Sun was acquired by Oracle Corporation for $7.4 billion, but by then, it was a shadow of its former self, having been overtaken by newer, more agile competitors.


5. Companies That Managed to Survive by Evolving to Solve a Real Need

  • Example: Amazon, eBay

  • Description: These companies successfully navigated the turbulent post-bubble environment by adapting their business models, focusing on profitability, and solving genuine customer needs. They often emerged stronger, having learned valuable lessons from the bubble and its aftermath. Importantly, they also had significant revenue before the bubble burst. While I expect that some companies of this type will be easy to spot in the AI trend, much like eBay was in 1998, I believe finding the "Amazon" of this trend will be more challenging. If you were around in 2000, you'd remember that eBay was a clear winner, but many people doubted Amazon's chances of success. At least the model of shipping books made more sense than shipping dog food. I suspect the differentiating factor here is the strong leadership of the founders, but that is not always easy to assess.


6. Companies That Were Acquired and Thrived Under New Ownership

  • Example: PayPal

  • Description: Some companies were acquired by larger, more established firms during or after the bubble burst. These acquisitions often provided the stability and resources needed for the acquired companies to thrive. For example, PayPal was acquired by eBay in 2002 for $1.5 billion and became a dominant player in the online payments industry.


7. Companies That Pivoted and Found New Success

  • Example: Netflix

  • Description: Some companies managed to pivot their business models in response to changing market conditions and consumer demands. Netflix started as a DVD rental service but pivoted to streaming content in the mid-2000s. This strategic shift allowed them to capitalize on the growing demand for online entertainment and eventually led to their dominance in the industry.

 

Conclusion

I understand that mentioning a potential future AI bubble carries a very negative connotation, but it’s important to remember that the Dot-Com bubble, from a stock market perspective, was one of the greatest uptrends of all time, if not the biggest. Here’s how the market’s ascent on a unitary basis during that period compares to other eras of the Nasdaq.


Even though it was followed by one of the worst bear markets of all time, those who successfully navigated the hype, like Mark Cuban, became incredibly wealthy. The key was clearly to ride the market to benefit from the trend but exit when things started to become too ridiculous—something easier said than done. In an eventual AI supertrend, I expect we will begin to see behavior reminiscent of the 1998-2000 phase, where hype and growth dominate. Being a prudent investor who has learned from past mistakes will be crucial when these signs appear, ensuring I don't get caught in the bear market that will likely follow the hype. I remember someone saying in mid-2021, when Jeff Bezos and Richard Branson were playing at being astronauts: "Seeing the richest men on the planet going into space in their self-funded spaceships is probably the strongest signal that we're at a euphoric market top." That person was right.

 

But from a society perspective, we also need to acknowledge that a bubble isn’t all bad. Some of the capital that investors lost laid the groundwork for the internet’s future growth. Before the bubble burst, in a parallel trend to the internet, telecom companies raised nearly $2 trillion on the market to build a vast digital infrastructure across the United States and Canada. This extensive network provided the necessary foundation for the internet to mature. In the aftermath of the Dot-Com bubble, the excess fiber capacity led to a significant overabundance of bandwidth, enabling the next wave of companies to deliver advanced internet services at a low cost. By 2004, the cost of bandwidth had dropped by more than 90 percent, even as internet usage only doubled every few years. As late as 2005, up to 85 percent of broadband capacity in the United States remained unused. This meant that when new “killer apps” like streaming TV were developed, there was ample, affordable capacity to support their widespread adoption. I bet some of the money investors lost in the 2000-2003 bear market was recouped by investing in the real internet winners of the 2010s.

 

Additionally, for some companies, the bear market that followed the bubble’s burst became an incredible opportunity. Some used it to strengthen their positions at a discount. Solid businesses with strong faith in their futures, like Amazon and Dell, took advantage of the irrationally low prices of their stocks at the bottom of the bear market to buy back shares and increase their ownership in anticipation of what was to come. We also saw some genius acquisitions at very low prices, such as Google buying Applied Semantics in 2003 for approximately $102 million in cash and stock. This acquisition became AdSense, which turned into the real core engine of Google’s business. Even some remnants of the worst companies were bought at a discount by others. Believe it or not, the online mortgage company BarNone bought the very famous but ugly mascot of Pets.com for $125,000 to use in its own advertising campaigns under the theme “Everyone deserves a second chance”. Capitalism at its best!

 

So maybe a bubble isn't entirely bad, and we should even look forward to a real AI bubble, even if it has yet to materialize. This bubble could represent a generational wealth opportunity for investors who successfully navigate the trend by avoiding bad companies and investing in the good ones. It will undoubtedly create some future winners that we have yet to discover, though I have no doubt that Nvidia will remain at the core of this revolution. Knowing when the euphoria becomes unsustainable will also be crucial. But even if the bubble eventually bursts as things get ahead of themselves, the history of the Dot-Com bubble suggests that 20 years from now, we might look back at this period as one of the defining moments that laid the groundwork for the future of human society. I look forward to the positive changes that AI will undoubtedly bring to humanity, and something tells me that some of these changes could come in the next few years.

 

9 comments


Bjoern
Sep 30

Vincent, I love reading your stuff, as always entertaining and very interesting!

Really enjoying this membership.. :-)

Cheers


Nuvix
Sep 27

100%. Unless you’re in the industry, you don’t realize the competitive advantages in the advancement of certain complex semiconductors like GPUs. Combined with the software, it’s almost impossible for anyone else to catch up. There would need to be some new computing technology that displaces GPUs as the best AI training and inference processor.


It also helps that 40% of their top customers is the 4 largest companies in the world that literally print money. Fear of losing that top spot will keep them spending on finding new ways to use this new technology. Many don’t realize it’s already being used heavily on the back end. There’s more to come


One thing not touched upon is the amount governments around…



Alek Mesarovich
Sep 27

Several concerns could be pointed out:


  1. NVidia has about 4 customers - Google, Meta, Amazon, Microsoft. They (a) all have so much money they don't know what to do with it (e.g. Meta's whole VR investment) and (b) have teams of brilliant engineers who love to build really large data centers just because it's fun. Eventually the finance people are going to notice all this money is being spent and nothing is coming in. Until somebody thinks of something to do with all that hardware, when it's running at 97% capacity, the order stream to NVidia could dry up really soon.

  2. The finance guys are also going to ask why all this money is being given to Nvidia, can't we…


Bjoern
Sep 30
Replying to

Developing their own CPUs or GPUs (or whatever :-)) would also still cost them a huge amount of money, even these huge companies cannot develop this stuff for free - so they would not simply "keep their money". That is a bit simplistic. It might happen here and there and I think I read somewhere that these guys are indeed developing their own chips, but this does not lead to no demand for NVDA. The world is not just black and white and so far the demand outpaces the supply by far as it seems... Let's see how it goes.. really exciting



Etienne
Sep 26

Whoa give me the night to digest all this! I somehow guessed you were onto a new post!

Xrayzr
Sep 27
Replying to

Would love to hear your thoughts on Elliott wave theory and correlations to your data-driven work. I remember you mentioning you were wanting to write an article about it at some point, and it seems timely considering EWT ppl screaming for an imminent top. I like to look at it for targets but have lost more money following Avi Gilbert charts than I care to admit.


Michael Korb
Sep 26

Absolutely incredible, thank you. A complete history of the internet.

Vincent D.
Sep 27
Replying to

Thanks, Michael. I hope you grasped the main takeaway from the history of that incredible trend: "Got the girls, got the money. Now I’m ready to live a disgusting, frivolous life!" 😜
