Overwhelming demand for artificial intelligence chips propels Nvidia’s valuation close to $2 trillion

Thu Feb 22 2024
Ramesh Sridharan (932 articles)

Nvidia needed 24 years as a public company to reach a $1 trillion valuation. The second trillion took just eight months, driven by the chip maker's central role in the AI revolution.

Nvidia may have been founded at a Denny's in 1993, but its dominance in graphics processing units (GPUs) has propelled it into the ranks of the three most valuable U.S. firms. With an estimated 80% market share, these chips—worth tens of thousands of dollars each—have become a scarce and coveted commodity on a scale Silicon Valley has rarely seen.

Rival chip companies have responded to the ravenous demand by developing their own products. How fast businesses can build new AI systems depends on their ability to secure GPUs. Companies courting artificial intelligence (AI) talent boast about their access to the chips, which have also been used as collateral for billions of dollars in loans.

Cisco Systems' chief information officer Fletcher Previn said at The Wall Street Journal's CIO Network Summit this month that the chips are so precious they are transported to the company in an armored car.

Following Nvidia’s third consecutive quarter of results that surpassed expectations, the company’s leaders announced on Wednesday that supplies are still tight and that a new generation of artificial intelligence chips, which are slated to be released this year, will face similar constraints.

Generative artificial intelligence bots like OpenAI's ChatGPT rely on these chips to train their large language models. Google, Amazon, and Microsoft have all invested heavily in GPUs for artificial intelligence.

Nvidia co-founder and CEO Jensen Huang said that generative AI will lead to a trillion-dollar investment surge that will quadruple the world's data center capacity in five years and open up new markets for Nvidia.

“A whole new industry is being formed, and that’s driving our growth,” he said on the company’s earnings call. This past Wednesday, Nvidia announced quarterly sales of $22.1 billion and predicted another $24 billion for the current quarter. These figures are more than three times higher than last year’s results and exceed Wall Street’s optimistic predictions.

On the strength of those results, shares of Nvidia closed Thursday at a record $785.38, valuing the business at $1.96 trillion. The stock more than tripled in 2023 and is up 59% this year.

Nvidia got its start making graphics chips for personal computers more than 30 years ago, but it was quick to seize the AI opportunity.

According to FactSet, Huang, one of the longest-tenured CEOs in the computer industry, owns 86.6 million shares in Nvidia, a stake valued at over $68 billion.

Huang set the stage for Nvidia's AI ascent in 2006, when the company opened its chips to applications beyond computer graphics. Engineers rapidly adopted them for artificial intelligence computations because they excelled at the task: the parallel-processing capabilities of GPUs suit the kind of mathematics required to build sophisticated AI systems far better than conventional CPUs do.
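The mathematics in question is dominated by large matrix multiplications, which split naturally across a GPU's thousands of parallel cores. A minimal sketch of one such operation, using NumPy and illustrative layer sizes (all names and dimensions here are assumptions, not taken from the article):

```python
import numpy as np

# One forward pass through a single neural-network layer is a matrix
# multiply: (batch x d_in) @ (d_in x d_out). Training repeats billions
# of these, which is why GPUs, not CPUs, dominate AI workloads.
batch, d_in, d_out = 64, 4096, 4096          # illustrative sizes only
x = np.random.rand(batch, d_in).astype(np.float32)   # input activations
w = np.random.rand(d_in, d_out).astype(np.float32)   # layer weights
y = x @ w  # ~2 * batch * d_in * d_out floating-point operations
print(y.shape)  # (64, 4096)
```

Each output element can be computed independently of the others, which is exactly the structure a GPU's many cores exploit.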

The most complex AI systems often use hundreds, if not thousands, of Nvidia's H100s, its most powerful graphics processing units. They are not cheap, either: analysts estimate a single chip costs about $25,000.

Even though analysts believe Nvidia can produce about 1.2 million of the chips annually, the company is struggling to keep up with demand. The bottleneck is not fabrication but the assembly phase of chip production at Taiwan Semiconductor Manufacturing Co., which Nvidia contracts to manufacture the chips. TSMC plans to quadruple its capacity in those later stages this year.

The high demand has pushed rivals to develop their own AI-focused chips. Advanced Micro Devices has brought competing processors to market to take on Nvidia and expects more than $3.5 billion in sales from them this year. Arm Holdings, a British chip company, has touted how AI may benefit from its chips, and Intel has begun marketing CPUs that can handle AI calculations.

A plethora of new companies are also developing AI processors. Additionally, major cloud providers like Amazon and Google are bolstering their in-house AI chip development teams. In November, Microsoft introduced the Maia 100, its inaugural artificial intelligence processor.

In the meantime, tech companies both small and large have been bragging about the number of Nvidia chips they have acquired. In an Instagram post last month, Mark Zuckerberg, chief executive of Meta Platforms, said his company plans to have 350,000 of Nvidia's H100 chips by year's end. At present prices, such a system would cost several billion dollars.
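A back-of-envelope check on that figure, combining Meta's stated chip count with the rough $25,000-per-H100 analyst estimate cited above (both numbers are approximations):

```python
# Rough cost estimate for Meta's planned H100 deployment.
chips = 350_000          # H100s Meta plans to have by year's end
price_per_chip = 25_000  # analysts' rough per-chip estimate, in dollars
total = chips * price_per_chip
print(f"${total / 1e9:.2f} billion")  # $8.75 billion
```

The result, roughly $8.75 billion, squares with the article's "several billion dollars" characterization.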

In August, CoreWeave, which has Nvidia as an investor, raised $2.3 billion using its Nvidia H100s as collateral. An individual with knowledge of the transaction said that the high effective interest rate reflected the deal’s risk.

Some universities are even boasting about their H100 inventories to attract talent and investors. Last year, while recruiting a software engineer and a research scientist, Sanjeev Arora, director of Princeton University's Language and Intelligence Initiative, touted the institute's "state-of-the-art computational infrastructure with 300 Nvidia H100 GPUs" on its website.

Google has established an executive council to determine how computing resources are allocated between internal and external users. Microsoft rations its chips through a comparable procedure: executives on so-called GPU councils decide how to distribute computing resources among the company's internal projects.

According to several analysts and industry officials, the breadth and complexity of the software Nvidia has spent years developing around its chips make it difficult for competitors to erode the company's advantages.

Andrew Ng, founder of AI Fund and a pioneer in the field, countered that both AMD and Intel have made substantial progress in creating rival software ecosystems to complement their AI hardware.

"I think in a year or so the semiconductor shortage will feel much better," he said at the Journal's CIO conference.

Ramesh Sridharan

Ramesh Sridharan is our Stock Market Correspondent covering events and daily movements of stock markets in Asia. He is based in Mumbai.