AMD MI300X Accelerator GPU Makes Its Buyer Debut: Eight-Way Parallel, AMD's Weapon to Take On NVIDIA Head-On!


Fast Technology reported on January 23 that AMD officially launched its new-generation accelerator GPUs, the Instinct MI300X and MI300A, at the end of last year. The first batches shipped to large customers such as supercomputing and data centers, including the US government, Microsoft, and Meta, and now some smaller customers have received their units as well.

LaminiAI, an AI startup that has long had a close relationship with AMD, posted photos of its newly delivered batch of MI300X accelerators, which it will use to train its own large language models (LLMs).

It uses an eight-way parallel configuration: a single system holds eight of these accelerators. All eight cards are visible in the screenshot, though they are idle rather than running at full power.
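As a rough illustration of what such an eight-way node looks like from software, here is a minimal sketch assuming a ROCm build of PyTorch, which exposes AMD GPUs through the familiar torch.cuda namespace; the expected count of eight devices is an assumption based on the configuration described above:

```python
import torch

# Minimal sketch: enumerate the accelerators visible to a ROCm build
# of PyTorch (AMD GPUs appear under the torch.cuda namespace).
if not torch.cuda.is_available():
    raise SystemExit("No ROCm/CUDA-capable devices visible to PyTorch.")

count = torch.cuda.device_count()
print(f"Visible accelerators: {count}")  # expect 8 on an eight-way MI300X node

for i in range(count):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")
```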

As early as last July, the US Department of Energy's Lawrence Livermore National Laboratory (LLNL) announced that its new "El Capitan" supercomputer had begun installing MI300A accelerators. It is expected to come fully online in 2024 and deliver more than 2 exaflops, i.e. over 2×10^18 floating-point operations per second.

Now that smaller customers are receiving MI300 accelerators as well, it is clear that production capacity has ramped up and bulk supply is possible, which is critical for AMD.

AMD has previously predicted that the MI300 accelerators will be the fastest product in the company's history to reach $1 billion in revenue.

The AMD MI300X integrates eight 5nm XCD compute dies with a total of 304 compute units, four 6nm IOD dies, 256MB of Infinity Cache, and eight HBM3 stacks totaling 192GB of memory, for a total of 153 billion transistors.

According to AMD's official figures, the MI300X outperforms NVIDIA's H100 80GB.

The MI300A is the world's first APU accelerator for AI and HPC, with six XCD dies (228 compute units), three CCD dies holding 24 Zen 4 CPU cores, and 128GB of HBM3 memory, for a total of 146 billion transistors.
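As a quick sanity check on the headline numbers for both chips, the per-die building blocks multiply out to the totals above. The per-die constants below are assumptions drawn from publicly reported specifications, not figures stated in this article:

```python
# Minimal sketch: deriving the MI300X/MI300A totals from per-die figures.
# The per-die constants are assumptions based on publicly reported specs.
CUS_PER_XCD = 38          # active compute units per XCD die
HBM3_STACK_GB = 24        # per-stack HBM3 capacity on MI300X
ZEN4_CORES_PER_CCD = 8    # Zen 4 cores per CCD die

# MI300X: eight XCDs and eight HBM3 stacks
assert 8 * CUS_PER_XCD == 304      # compute units
assert 8 * HBM3_STACK_GB == 192    # GB of HBM3 memory

# MI300A: six XCDs plus three CPU CCDs
assert 6 * CUS_PER_XCD == 228      # compute units
assert 3 * ZEN4_CORES_PER_CCD == 24  # Zen 4 CPU cores

print("All headline figures check out.")
```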
