Elon Musk: By the end of 2024, Tesla’s AI training capabilities will be equivalent to about 85,000 Nvidia H100 chips

According to Tesla, in the first quarter of 2024, the company’s computing capacity for AI training grew by 130%. However, if Elon Musk’s ambitions come true, this figure could increase by almost 500% by the end of the year.

Previously, analysts noted that, based on Musk’s comments, Tesla could own between 30,000 and 350,000 Nvidia H100 GPUs. Today, as part of its Q1 2024 earnings call, the company confirmed that its AI training capacity has reached nearly 40,000 Nvidia H100 equivalent units, in line with Musk’s stated range.

In January, while confirming a new $500 million investment (roughly the cost of 10,000 H100 GPUs) in the Dojo supercomputer, Musk announced that Tesla would spend even more on Nvidia hardware this year, since staying competitive in AI now requires at least several billion dollars annually.

Now Musk has disclosed the true scale of his AI ambitions: by the end of 2024, Tesla's AI training capacity is set to grow by about 467% year-on-year, reaching 85,000 H100 GPU equivalent units.
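The growth figures quoted in the article can be cross-checked with a little arithmetic. This is a sketch using the rounded numbers reported above (85,000-unit target, ~467% year-on-year growth, ~40,000 units in Q1):

```python
# Cross-check the article's growth figures (inputs rounded from the article).

target_2024 = 85_000   # Musk's end-of-2024 target, in H100-equivalent units
growth_yoy = 4.67      # the stated ~467% year-on-year increase

# Implied end-of-2023 baseline: target = baseline * (1 + growth)
baseline_2023 = target_2024 / (1 + growth_yoy)
print(f"Implied end-of-2023 capacity: ~{baseline_2023:,.0f} H100 equivalents")

# A Q1 2024 capacity of ~40,000 units means the fleet must still more than
# double over the remaining three quarters to hit the target:
q1_2024 = 40_000
remaining_growth = (target_2024 / q1_2024 - 1) * 100
print(f"Growth still needed after Q1: ~{remaining_growth:.0f}%")
```

The implied end-of-2023 baseline of roughly 15,000 H100 equivalents is consistent with the ~130% Q1 growth the company reported (15,000 × 2.3 ≈ 35,000–40,000).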

This aggressive expansion is already eating into Tesla's free cash flow. At the end of the first quarter of 2024, the company recorded negative free cash flow of $2.5 billion, driven by a $2.7 billion increase in inventories and $1 billion in capital expenditures on AI infrastructure.

Elon Musk is also actively expanding AI computing power at xAI, his artificial-intelligence company, which reportedly owns between 26,000 and 30,000 AI-focused Nvidia GPUs.

It is worth noting that this year Nvidia's H100 chips will give way to the new GB200 Grace Blackwell superchip, which combines one Arm-based Grace CPU with two Blackwell B100 GPUs and can handle an AI model with 27 trillion parameters. The superchip is also expected to be up to 30 times faster at tasks such as generating chatbot responses.
