Benchmark Database

GPU performance data and AI model benchmarks

| GPU | Vendor | VRAM | FP16 TFLOPS | TDP | Efficiency |
|---|---|---|---|---|---|
| NVIDIA GB200 NVL72 | NVIDIA | 14131.2 GB | 28262.4 | 86400 W | 32.7 |
| GB200 NVL72 | NVIDIA | 13824 GB | 27648 | — | — |
| NVIDIA DGX H100 | NVIDIA | 640 GB | 15936 | 10200 W | 156.2 |
| AWS EC2 P5 Instance (8x H100) | AWS | 640 GB | 15936 | 5600 W | 284.6 |
| Azure ND H100 v5 (8x H100) | Microsoft | 640 GB | 15936 | 5600 W | 284.6 |
| Google Cloud A3 (8x H100) | Google | 640 GB | 15936 | 5600 W | 284.6 |
| Lambda Labs 1-Click Cluster (8x H100) | Lambda Labs | 640 GB | 15936 | 5600 W | 284.6 |
| NVIDIA B200 | NVIDIA | 192 GB | 3600 | 850 W | 423.5 |
| NVIDIA B100 | NVIDIA | 192 GB | 3200 | 800 W | 400.0 |
| B200 | NVIDIA | 192 GB | 2500 | 1000 W | 250.0 |
| B100 | NVIDIA | 192 GB | 2500 | 1000 W | 250.0 |
| NVIDIA H200 SXM | NVIDIA | 141 GB | 2260 | 700 W | 322.9 |
| H100 NVL | NVIDIA | 188 GB | 1979 | 700 W | 282.7 |
| H200 | NVIDIA | 141 GB | 1979 | 700 W | 282.7 |
| NVIDIA GH200 Grace Hopper Superchip | NVIDIA | 96 GB | 1979 | 1000 W | 197.9 |
| NVIDIA H100 SXM | NVIDIA | 80 GB | 1979 | 700 W | 282.7 |
| CoreWeave H100 Instance | CoreWeave | 80 GB | 1979 | 700 W | 282.7 |
| H100 SXM5 80GB | NVIDIA | 80 GB | 1979 | 700 W | 282.7 |
| H100 PCIe 80GB | NVIDIA | 80 GB | 1979 | 700 W | 282.7 |
| Gaudi 3 | Intel | 128 GB | 1835 | 600 W | 305.8 |
| Intel Gaudi 3 PCIe (HL-338) | Intel | 128 GB | 1835 | 600 W | 305.8 |
| AMD Instinct MI325X | AMD | 256 GB | 1307 | 750 W | 174.3 |
| MI300X | AMD | 192 GB | 1307 | 750 W | 174.3 |
| MI300A | AMD | 128 GB | 1307 | 750 W | 174.3 |
| MI250X | AMD | 128 GB | 1307 | 750 W | 174.3 |
| MI250 | AMD | 128 GB | 1307 | 750 W | 174.3 |
| AMD Instinct MI350X | AMD | 288 GB | 1300 | 750 W | 173.3 |
| AMD Instinct MI355X | AMD | 288 GB | 1300 | 750 W | 173.3 |
| AMD Instinct MI300X | AMD | 192 GB | 1300 | 750 W | 173.3 |
| NVIDIA DGX Station A100 | NVIDIA | 320 GB | 1248 | 1500 W | 83.2 |
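The Efficiency figure in every populated row works out to FP16 TFLOPS divided by TDP in watts, scaled by 100 and rounded to one decimal place (e.g. 1979 / 700 × 100 ≈ 282.7). A minimal Python sketch of that derived relationship, with the `efficiency` helper being an illustrative name rather than anything defined by the database itself:

```python
# Sketch (assumption inferred from the table): Efficiency appears to be
# FP16 TFLOPS / TDP (W) * 100, rounded to one decimal place.
def efficiency(fp16_tflops: float, tdp_watts: float) -> float:
    """Scaled performance-per-watt figure matching the table's last column."""
    return round(fp16_tflops / tdp_watts * 100, 1)

# Spot-checks against table rows:
print(efficiency(1979, 700))     # NVIDIA H100 SXM -> 282.7
print(efficiency(3600, 850))     # NVIDIA B200     -> 423.5
print(efficiency(15936, 10200))  # NVIDIA DGX H100 -> 156.2
```

Note this ranks systems by throughput per watt, which is why the rack-scale GB200 NVL72 (86.4 kW) scores far lower than a single B200 despite much higher absolute TFLOPS.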