# Benchmark Database

GPU performance data and AI model benchmarks
| GPU | Vendor | VRAM (GB) | FP16 TFLOPS | TDP (W) | Efficiency (TFLOPS / 100 W) |
|---|---|---|---|---|---|
| NVIDIA GB200 NVL72 | NVIDIA | 14131.2 | 28262.4 | 86400 | 32.7 |
| GB200 NVL72 | NVIDIA | 13824 | 27648 | - | - |
| NVIDIA DGX H100 | NVIDIA | 640 | 15936 | 10200 | 156.2 |
| AWS EC2 P5 Instance (8x H100) | AWS | 640 | 15936 | 5600 | 284.6 |
| Azure ND H100 v5 (8x H100) | Microsoft | 640 | 15936 | 5600 | 284.6 |
| Google Cloud A3 (8x H100) | Google | 640 | 15936 | 5600 | 284.6 |
| Lambda Labs 1-Click Cluster (8x H100) | Lambda Labs | 640 | 15936 | 5600 | 284.6 |
| NVIDIA B200 | NVIDIA | 192 | 3600 | 850 | 423.5 |
| NVIDIA B100 | NVIDIA | 192 | 3200 | 800 | 400.0 |
| B200 | NVIDIA | 192 | 2500 | 1000 | 250.0 |
| B100 | NVIDIA | 192 | 2500 | 1000 | 250.0 |
| NVIDIA H200 SXM | NVIDIA | 141 | 2260 | 700 | 322.9 |
| H100 NVL | NVIDIA | 188 | 1979 | 700 | 282.7 |
| H200 | NVIDIA | 141 | 1979 | 700 | 282.7 |
| NVIDIA GH200 Grace Hopper Superchip | NVIDIA | 96 | 1979 | 1000 | 197.9 |
| NVIDIA H100 SXM | NVIDIA | 80 | 1979 | 700 | 282.7 |
| CoreWeave H100 Instance | CoreWeave | 80 | 1979 | 700 | 282.7 |
| H100 SXM5 80GB | NVIDIA | 80 | 1979 | 700 | 282.7 |
| H100 PCIe 80GB | NVIDIA | 80 | 1979 | 700 | 282.7 |
| Gaudi 3 | Intel | 128 | 1835 | 600 | 305.8 |
| Intel Gaudi 3 PCIe (HL-338) | Intel | 128 | 1835 | 600 | 305.8 |
| AMD Instinct MI325X | AMD | 256 | 1307 | 750 | 174.3 |
| MI300X | AMD | 192 | 1307 | 750 | 174.3 |
| MI300A | AMD | 128 | 1307 | 750 | 174.3 |
| MI250X | AMD | 128 | 1307 | 750 | 174.3 |
| MI250 | AMD | 128 | 1307 | 750 | 174.3 |
| AMD Instinct MI350X | AMD | 288 | 1300 | 750 | 173.3 |
| AMD Instinct MI355X | AMD | 288 | 1300 | 750 | 173.3 |
| AMD Instinct MI300X | AMD | 192 | 1300 | 750 | 173.3 |
| NVIDIA DGX Station A100 | NVIDIA | 320 | 1248 | 1500 | 83.2 |
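The efficiency figures above appear to be derived directly from the other two columns: FP16 TFLOPS divided by TDP expressed in hundreds of watts (for example, 1979 / 7 = 282.7 for the H100 SXM). A minimal sketch of that computation, spot-checked against a few table rows — the function name is illustrative, not part of any benchmark tooling:

```python
def efficiency(fp16_tflops: float, tdp_watts: float) -> float:
    """Efficiency as tabulated: FP16 TFLOPS per 100 W of TDP, to one decimal."""
    return round(fp16_tflops / (tdp_watts / 100), 1)

# Spot-check against table rows
assert efficiency(1979, 700) == 282.7       # NVIDIA H100 SXM
assert efficiency(3600, 850) == 423.5       # NVIDIA B200
assert efficiency(28262.4, 86400) == 32.7   # NVIDIA GB200 NVL72
assert efficiency(1248, 1500) == 83.2       # NVIDIA DGX Station A100
```

Note that this is a nameplate ratio (peak datasheet TFLOPS over rated TDP), not a measured perf-per-watt figure, so it favors parts with aggressive peak specs.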