コインチェーン

Cryptocurrency and Web3 News, Investment, and Education

Elon Musk Announces GROK 3 Training at Memphis with NVIDIA H100 GPUs

Jul 26, 2024 #Cryptocurrency

Elon Musk has announced the commencement of GROK 3 training at the Memphis supercomputer facility, equipped with NVIDIA’s H100 GPUs, marking a significant milestone in AI development.

Points

  • GROK 3 training began at the Memphis supercomputer facility with NVIDIA H100 GPUs.
  • The facility is described as the world’s most powerful AI training cluster.
  • The training aims to develop the world’s most advanced AI by December 2024.
  • xAI shifts strategy by canceling a $10 billion Oracle server deal.

Elon Musk has officially announced the commencement of GROK 3 training at the Memphis supercomputer facility, equipped with NVIDIA’s current-generation H100 GPUs. Musk refers to this facility as “the most powerful AI training cluster in the world.” The training began on July 24, 2024, at 4:20 am local time, with the aid of 100,000 liquid-cooled H100 GPUs on a single RDMA fabric.

GROK 3 Training at Memphis Supercluster

Musk stated that the world’s “most advanced AI” could be developed by December of this year. He attributes the facility’s capabilities to the integration of 100,000 liquid-cooled H100 GPUs, which he says makes it the most powerful AI training cluster in the world, and credited the milestone to the efforts of the teams from xAI, X, and NVIDIA.

xAI Shifts Strategy

In light of recent developments, xAI has shifted its strategy, canceling a $10 billion server deal with Oracle. The decision comes as xAI’s Gigafactory of Compute, initially expected to be operational by the fall of 2025, has started operations ahead of schedule. Rather than relying on Oracle’s servers, xAI has chosen to build its own supercomputer using NVIDIA H100 GPUs, which cost around $30,000 each.
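
As a rough sanity check, here is a minimal Python sketch of the figures cited above: 100,000 H100 GPUs at roughly $30,000 each implies on the order of $3 billion in GPU hardware alone, compared with the reported $10 billion Oracle deal (which would also have covered servers, facilities, and operations). The GPU count and unit price are the article’s approximations, not confirmed procurement figures.

```python
# Back-of-envelope GPU hardware cost for the Memphis cluster, using only
# figures cited in the article. Unit price and GPU count are approximations;
# actual procurement pricing is not public.
GPU_COUNT = 100_000          # H100 GPUs reported on the single RDMA fabric
UNIT_PRICE_USD = 30_000      # approximate per-GPU price cited in the article

gpu_hardware_cost = GPU_COUNT * UNIT_PRICE_USD
print(f"GPU hardware alone: ~${gpu_hardware_cost / 1e9:.1f}B")   # ~$3.0B

# For comparison, the cancelled Oracle server deal was reported at $10B.
ORACLE_DEAL_USD = 10_000_000_000
print(f"Reported Oracle deal: ~${ORACLE_DEAL_USD / 1e9:.0f}B")
```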

Power Supply and Facility Operations

An analyst raised questions about the power supply for the Memphis Supercluster. Elon Musk responded that the facility currently draws 8 MW from the grid, with plans to increase to 50 MW once a deal with the Tennessee Valley Authority (TVA) is signed, expected by August 1. The facility is projected to reach 200 MW by the end of the year, enough to power all 100,000 GPUs.

Satellite images reveal that Musk has employed 14 VoltaGrid mobile generators, each yielding 2.5 MW, contributing a total of 35 MW of electricity. Combined with the 8 MW from the grid, the facility currently operates with 43 MW, enough to power approximately 32,000 H100 GPUs.
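
The power figures can be cross-checked with the same kind of back-of-envelope arithmetic. The per-GPU wattage below is inferred from the article’s own numbers and is an assumption, since it bundles GPU, cooling, and networking overhead rather than quoting any official specification.

```python
# Rough power-budget check using the figures in the article.
GRID_MW = 8            # current grid supply cited by Musk
GENERATORS = 14        # VoltaGrid mobile generators visible in satellite images
GENERATOR_MW = 2.5     # rated output per generator

available_mw = GRID_MW + GENERATORS * GENERATOR_MW
print(f"Available now: {available_mw:.0f} MW")            # 8 + 35 = 43 MW

# Implied all-in power per GPU from the article's numbers (an inference):
gpus_supported_now = 32_000
print(f"~{available_mw * 1e6 / gpus_supported_now:.0f} W per GPU now")   # ~1,344 W

# The 200 MW year-end target for 100,000 GPUs implies a larger allowance:
print(f"~{200e6 / 100_000:.0f} W per GPU at full buildout")              # 2,000 W
```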

Conclusion

The commencement of GROK 3 training at the Memphis supercomputer facility marks a significant milestone in AI development. With what Musk calls the most powerful AI training cluster in the world, xAI, X, and NVIDIA aim to deliver groundbreaking AI technologies by the end of 2024. The shift in strategy to build in-house on NVIDIA’s H100 GPUs and the substantial power supply arrangements highlight the ambitious goals and unconventional approach of Elon Musk and his teams.