August 14, 2023

Server Supply Chain - AI Server Components

000660 KS, 005930 KS, MU, NVDA, TSM
By Van Tran
High demand for Nvidia’s DGX server GPUs significantly drove up overall shipments of high-bandwidth memory DRAM during 2Q23.
  • 2Q23 HBM2 DRAM shipments increased by 80%–90% qq; 3Q23 shipments forecast up 30%–40% qq
  • SK Hynix (000660 KS) supplies around 60% of HBM2/HBM2e DRAM for NVDA’s A100 and H100 GPUs for AI/ML servers, vs. Samsung’s (005930 KS) 40% share
  • SK Hynix sole source of HBM3 DRAM for H100 SXM5 and NVL interfaces until 1H24

Asian supply chain sources said high-bandwidth memory (HBM) DRAM shipments from SK Hynix Inc. (000660 KS) and Samsung Electronics Co. Ltd. (005930 KS) increased by around 80%–90% qq during 2Q23 because of high demand for artificial intelligence (AI)/machine learning (ML) servers running on Nvidia Corp.’s DGX platforms. Sources expect the high demand for Nvidia’s DGX servers to continue during 3Q23, with HBM DRAM shipments expected up 30%–40% qq.

Sources said SK Hynix supplies around 60% of HBM2/HBM2e DRAM packaged with Nvidia’s A100 and H100 (PCIe Gen 4 and SXM4 interfaces), while Samsung supplies around 40%. SK Hynix is also the sole supplier of HBM3 DRAM for Nvidia’s H100 GPU in the SXM5 and NVL interfaces, which are expected to ship beginning in 3Q23. “Most of the Nvidia DGX servers are still using HBM2 this year, and only a small portion will use HBM3,” a source said. Sources expect SK Hynix to remain the single-source supplier of HBM3 DRAM to Nvidia until 1H24, when Samsung and Micron Technology Inc. are expected to begin competing for HBM3 share.

A typical NVDA DGX server requires at least two A100 or H100 GPUs, and each GPU is bundled with HBM DRAM. Sources said the HBM DRAM is combined with the GPU chip using Taiwan Semiconductor Manufacturing Co. Ltd.’s (2330 TT) CoWoS 2.5D advanced packaging technology, with each Nvidia A100 or H100 requiring around five to six HBM2/HBM2e cubes (16 GB per cube), costing around $950–$1,150 in total. Sources estimated HBM3 at the same densities (when it ships in 3Q23) could cost around 8%–10% more than HBM2.
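The sourced figures above imply a rough per-cube cost and an estimated HBM3 price range. The following sketch works through that arithmetic; the midpoint and per-cube breakdowns are illustrative back-of-the-envelope math, not figures provided by sources.

```python
# Rough per-GPU HBM cost arithmetic from the ranges cited by sources.
# Per-cube and HBM3 figures below are derived estimates, not sourced numbers.

cubes_per_gpu = (5, 6)            # HBM2/HBM2e cubes per A100/H100
cube_density_gb = 16              # 16 GB per cube
hbm2_cost_usd = (950, 1150)       # total HBM2/HBM2e cost per GPU
hbm3_premium = (0.08, 0.10)       # HBM3 estimated at 8%-10% more

# Implied memory capacity per GPU
capacity_gb = tuple(n * cube_density_gb for n in cubes_per_gpu)

# Implied per-cube cost at the extremes of the quoted ranges
per_cube_low = hbm2_cost_usd[0] / cubes_per_gpu[1]   # $950 across 6 cubes
per_cube_high = hbm2_cost_usd[1] / cubes_per_gpu[0]  # $1,150 across 5 cubes

# Estimated HBM3 cost range at the same densities
hbm3_cost = (hbm2_cost_usd[0] * (1 + hbm3_premium[0]),
             hbm2_cost_usd[1] * (1 + hbm3_premium[1]))

print(f"Capacity per GPU: {capacity_gb[0]}-{capacity_gb[1]} GB")
print(f"Implied per-cube cost: ${per_cube_low:.0f}-${per_cube_high:.0f}")
print(f"Estimated HBM3 cost: ${hbm3_cost[0]:.0f}-${hbm3_cost[1]:.0f}")
```

This puts per-GPU HBM capacity at roughly 80–96 GB and the estimated HBM3 bill at roughly $1,026–$1,265 per GPU.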

High Bandwidth Memory (HBM) for Nvidia’s DGX GPUs

                       A100 “Ampere” (7 nm)   H100 “Hopper” (4 nm)   H100 “Hopper” (4 nm)
Form Factor            PCIe Gen4/SXM4         PCIe Gen5              SXM5/NVL
High Bandwidth Memory  HBM2                   HBM2e                  HBM3
Density                40GB, 80GB             80GB, 96GB             80GB, 96GB, 192GB
HBM Vendor             SK Hynix, Samsung      SK Hynix, Samsung      SK Hynix*

* Samsung and Micron are expected to compete for HBM3 share in 1H24