AMD is taking an aggressive approach to the AI industry and has officially confirmed plans to develop its next-generation “MI400” Instinct series of AI accelerators. The confirmation follows an earlier statement from a Lenovo VP that these accelerators were on AMD’s agenda.
During a recent Q2 earnings call, AMD’s CEO, Lisa Su, hinted at the future Instinct MI400 AI Accelerators, although she did not provide specific details, leaving enthusiasts eagerly awaiting more information. Similar to the MI300 series, the MI400 accelerators will be available in a range of configurations, catering to various AI application needs.
> When you look across those workloads and the investments that we’re making, not just today, but going forward with our next generation MI400 series and so on and so forth, we definitely believe that we have a very competitive and capable hardware roadmap. I think the discussion about AMD, frankly, has always been about the software roadmap, and we do see a bit of a change here on the software side.
>
> — Dr. Lisa Su, AMD CEO
While the Instinct lineup boasts top-of-the-line hardware specifications, AMD recognizes the need to bolster its software platform to better support generative AI applications. NVIDIA has been the front-runner in this area with technologies like “NVIDIA ACE” and “DLDSR.” To bridge the gap, AMD is focusing on improving its software stack, potentially introducing significant changes that will strengthen the Instinct platform.

Apart from the next-gen MI400 Instinct lineup, AMD has also revealed plans to develop “cut-down” MI300 variants specifically for the Chinese market to comply with US trade policies. While exact specifications remain uncertain, it is speculated that Team Red will adopt a strategy similar to NVIDIA’s “H800 and A800” GPUs.
NVIDIA has been capitalizing on the AI market, enjoying tremendous sales and demand. However, competitors like Intel and AMD are gearing up to mount stiff competition with products that aim to rival NVIDIA on both performance and value. With the MI400 series in development and software support improving, AMD aims to establish a strong presence in the AI industry and offer viable alternatives to NVIDIA’s products.
AMD Radeon Instinct Accelerators
Accelerator | CPU Architecture | GPU Architecture | GPU Process Node | GPU Chiplets | GPU Cores | GPU Clock Speed | FP16 Compute | FP32 Compute | FP64 Compute | VRAM | Memory Clock | Memory Bus | Memory Bandwidth | Form Factor | Cooling | TDP (Max)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
MI400 | Zen 5 (Exascale APU) | CDNA 4 | 4nm | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD
MI300 | Zen 4 (Exascale APU) | Aqua Vanjaram (CDNA 3) | 5nm+6nm | 8 (MCM) | Up To 19,456 | TBA | TBA | TBA | TBA | 192 GB HBM3 | 5.2 Gbps | 8192-bit | 5.2 TB/s | OAM | Passive Cooling | 750W
MI250X | N/A | Aldebaran (CDNA 2) | 6nm | 2 (MCM) | 14,080 | 1700 MHz | 383 TFLOPs | 95.7 TFLOPs | 47.9 TFLOPs | 128 GB HBM2e | 3.2 Gbps | 8192-bit | 3.2 TB/s | OAM | Passive Cooling | 560W
MI250 | N/A | Aldebaran (CDNA 2) | 6nm | 2 (MCM) | 13,312 | 1700 MHz | 362 TFLOPs | 90.5 TFLOPs | 45.3 TFLOPs | 128 GB HBM2e | 3.2 Gbps | 8192-bit | 3.2 TB/s | OAM | Passive Cooling | 500W
MI210 | N/A | Aldebaran (CDNA 2) | 6nm | 1 (Single Die) | 6656 | 1700 MHz | 181 TFLOPs | 45.3 TFLOPs | 22.6 TFLOPs | 64 GB HBM2e | 3.2 Gbps | 4096-bit | 1.6 TB/s | Dual Slot Card | Passive Cooling | 300W
MI100 | N/A | Arcturus (CDNA 1) | 7nm FinFET | 1 (Monolithic) | 7680 | 1500 MHz | 185 TFLOPs | 23.1 TFLOPs | 11.5 TFLOPs | 32 GB HBM2 | 1200 MHz | 4096-bit | 1.23 TB/s | Dual Slot, Full Length | Passive Cooling | 300W
MI60 | N/A | Vega 20 | 7nm FinFET | 1 (Monolithic) | 4096 | 1800 MHz | 29.5 TFLOPs | 14.7 TFLOPs | 7.4 TFLOPs | 32 GB HBM2 | 1000 MHz | 4096-bit | 1 TB/s | Dual Slot, Full Length | Passive Cooling | 300W
MI50 | N/A | Vega 20 | 7nm FinFET | 1 (Monolithic) | 3840 | 1725 MHz | 26.5 TFLOPs | 13.3 TFLOPs | 6.6 TFLOPs | 16 GB HBM2 | 1000 MHz | 4096-bit | 1 TB/s | Dual Slot, Full Length | Passive Cooling | 300W
MI25 | N/A | Vega 10 | 14nm FinFET | 1 (Monolithic) | 4096 | 1500 MHz | 24.6 TFLOPs | 12.3 TFLOPs | 768 GFLOPs | 16 GB HBM2 | 945 MHz | 2048-bit | 484 GB/s | Dual Slot, Full Length | Passive Cooling | 300W
MI8 | N/A | Fiji XT | 28nm | 1 (Monolithic) | 4096 | 1000 MHz | 8.2 TFLOPs | 8.2 TFLOPs | 512 GFLOPs | 4 GB HBM1 | 500 MHz | 4096-bit | 512 GB/s | Dual Slot, Half Length | Passive Cooling | 175W
MI6 | N/A | Polaris 10 | 14nm FinFET | 1 (Monolithic) | 2304 | 1237 MHz | 5.7 TFLOPs | 5.7 TFLOPs | 384 GFLOPs | 16 GB GDDR5 | 1750 MHz | 256-bit | 224 GB/s | Single Slot, Full Length | Passive Cooling | 150W
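The memory bandwidth column follows directly from the memory clock and bus width: peak bandwidth in GB/s is the per-pin data rate in Gbps times the bus width in bits, divided by 8. The sketch below (not from the source article; figures taken from the table above) shows the arithmetic; note that for the older parts the table lists the HBM I/O clock in MHz, which doubles to an effective Gbps rate because HBM transfers on both clock edges.

```python
# Sanity-check the memory bandwidth column of the spec table:
# peak bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
def peak_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s from pin data rate and bus width."""
    return pin_rate_gbps * bus_width_bits / 8

# MI250X/MI250: 3.2 Gbps HBM2e on an 8192-bit bus -> ~3.2 TB/s
print(peak_bandwidth_gb_s(3.2, 8192))  # ~3276.8 GB/s
# MI210: same memory, half the bus -> ~1.6 TB/s
print(peak_bandwidth_gb_s(3.2, 4096))  # ~1638.4 GB/s
# MI300: 5.2 Gbps HBM3 on an 8192-bit bus -> ~5.3 TB/s (table rounds to 5.2)
print(peak_bandwidth_gb_s(5.2, 8192))  # ~5324.8 GB/s
# MI60/MI50: 1000 MHz HBM2 clock, double data rate -> 2.0 Gbps -> 1 TB/s
print(peak_bandwidth_gb_s(2.0, 4096))  # 1024.0 GB/s
```

The same formula reproduces the MI100 figure (1200 MHz × 2 = 2.4 Gbps over 4096 bits gives 1228.8 GB/s, i.e. the listed 1.23 TB/s).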