AS-4125GS-TNHR2-LCC

DP AMD 4U Liquid-Cooled GPU Server, NVIDIA HGX H100 8-GPU

  • Dual-Socket, AMD EPYC™ 9004 Series Processors
  • High density 4U system with NVIDIA® HGX™ H100 8-GPU
  • 8 NVMe bays for NVIDIA GPUDirect Storage (see the read sketch after this list)
  • 8 NICs for NVIDIA GPUDirect RDMA (1:1 GPU ratio)
  • High-bandwidth GPU-to-GPU communication using NVIDIA® NVLink® with NVSwitch™
  • 24 DIMM slots, up to 6TB 4800MT/s ECC DDR5
  • 8 PCIe 5.0 x16 LP slots
  • 2 PCIe 5.0 x16 FHHL slots, plus 2 optional PCIe 5.0 x16 FHHL slots
  • 4x 5250W (2+2) redundant power supplies, Titanium Level
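
The GPUDirect Storage path from the NVMe bays can be exercised from user code roughly as in the minimal sketch below. It assumes the RAPIDS KvikIO wrapper around cuFile plus CuPy; the file path and size are placeholders, not anything taken from this product documentation.

```python
# Minimal sketch, assuming RAPIDS KvikIO and CuPy are installed and the file
# sits on one of the NVMe drives. GPUDirect Storage lets cuFile DMA data from
# NVMe straight into GPU memory, skipping a host bounce buffer.
import cupy
import kvikio

def read_to_gpu(path: str, nbytes: int) -> cupy.ndarray:
    """Read up to `nbytes` from `path` directly into GPU memory."""
    buf = cupy.empty(nbytes, dtype=cupy.uint8)   # destination buffer lives on the GPU
    f = kvikio.CuFile(path, "r")                 # uses cuFile / GDS when the driver supports it
    n = f.read(buf)                              # transfer NVMe -> GPU buffer
    f.close()
    return buf[:n]

if __name__ == "__main__":
    data = read_to_gpu("/mnt/nvme0/sample.bin", 1 << 20)  # hypothetical path and size
    print(data.nbytes, "bytes now resident on", data.device)
```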
Product Specification
Product SKUs: A+ Server AS-4125GS-TNHR2-LCC
Motherboard: Super H13DSG-OM
Processor
CPU: Dual Socket SP5 AMD EPYC™ 9004 Series Processors
Core Count: Up to 128C/256T
Note: Supports up to 400W TDP CPUs (liquid cooled)
GPU
Max GPU Count: Up to 8 onboard GPUs
Supported GPU: NVIDIA SXM: HGX H100 8-GPU (80GB)
CPU-GPU Interconnect: PCIe 5.0 x16 CPU-to-GPU interconnect
GPU-GPU Interconnect: NVIDIA® NVLink® with NVSwitch™ (peer-access sketch below)
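
As a quick way to see the NVLink/NVSwitch fabric from software, the minimal sketch below (assuming PyTorch with CUDA is installed; any CUDA runtime query would do) enumerates the eight GPUs and checks that every pair reports peer-to-peer access, which is what the NVSwitch topology provides on an HGX baseboard.

```python
# Minimal sketch, assuming PyTorch with CUDA support on the system.
# On an HGX H100 8-GPU board, every GPU pair should report peer access
# because all GPUs hang off the same NVLink/NVSwitch fabric.
import torch

def check_peer_access() -> None:
    n = torch.cuda.device_count()
    print(f"visible GPUs: {n}")
    for i in range(n):
        peers = [j for j in range(n)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"GPU {i} ({torch.cuda.get_device_name(i)}) can reach peers: {peers}")

if __name__ == "__main__":
    check_peer_access()
```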
System Memory
Memory Slot Count: 24 DIMM slots
Max Memory (1DPC): Up to 6TB 4800MT/s ECC DDR5 RDIMM/LRDIMM
On-Board Devices
Chipset: AMD SP5
IPMI: Support for Intelligent Platform Management Interface v.2.0
IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
Input / Output
LAN: 1 RJ45 1 GbE dedicated IPMI LAN port
USB: 2 USB 3.0 Type-A ports (rear)
Video: 1 VGA port
System BIOS
BIOS Type: AMI 32MB SPI Flash EEPROM
Management
Software: Redfish API (query sketch after this section)
Supermicro Server Manager (SSM)
Supermicro Update Manager (SUM)
SuperDoctor® 5
Super Diagnostics Offline 
KVM with dedicated LAN 
IPMI 2.0
Power Configurations: Power-on mode for AC power recovery
ACPI Power Management
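
The Redfish API listed above is the standard DMTF interface served by the BMC on the dedicated IPMI LAN port. The minimal sketch below queries /redfish/v1/Systems for basic inventory; the BMC address and credentials are placeholders, and the fields shown are standard Redfish properties rather than anything specific to this board.

```python
# Minimal sketch, assuming the `requests` package and a reachable BMC.
# Walks the standard DMTF Redfish Systems collection and prints basic inventory.
import requests

BMC = "https://10.0.0.100"      # placeholder: BMC address on the dedicated IPMI LAN
AUTH = ("ADMIN", "password")    # placeholder credentials

def get(path: str) -> dict:
    # verify=False only because many BMCs ship with self-signed certificates
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    systems = get("/redfish/v1/Systems")
    for member in systems.get("Members", []):
        node = get(member["@odata.id"])
        print(node.get("Model"),
              node.get("PowerState"),
              node.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
```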
Security
Hardware: Trusted Platform Module (TPM) 2.0
Silicon Root of Trust (RoT) – NIST 800-193 Compliant
Features: Cryptographically Signed Firmware
Secure Boot
Secure Firmware Updates
Automatic Firmware Recovery
Supply Chain Security: Remote Attestation
Runtime BMC Protections
System Lockdown
PC Health Monitoring
CPU: Monitors for CPU cores, chipset voltages, and memory
7+1 phase-switching voltage regulator
Fan: Fans with tachometer monitoring
Status monitor for speed control
Temperature: Monitoring for CPU and chassis environment
Thermal Control for fan connectors
Chassis
Form Factor: 4U Rackmount
Model: CSE-GP401TS-R000NP
Dimensions and Weight
Height: 6.85" (174 mm)
Width: 17.7" (449 mm)
Depth: 33.2" (842 mm)
Package: 13" (H) x 48" (W) x 26.4" (D)
Weight: Gross: 138.89 lbs (63 kg)
Net: 80.03 lbs (53 kg)
Available Color: Silver
Front Panel
Buttons: UID button
Expansion Slots
PCI-Express (PCIe) Configuration:
Default:
8 PCIe 5.0 x16 (in x16) LP slots
2 PCIe 5.0 x16 (in x16) FHHL slots
Option A*:
8 PCIe 5.0 x16 (in x16) LP slots
4 PCIe 5.0 x16 (in x16) FHHL slots
(*Requires additional parts.)
Drive Bays / Storage
Drive Bays Configuration: Default: Total 8 bays; 8 front hot-swap 2.5" NVMe drive bays
M.2: 1 M.2 NVMe slot (M-key)
System Cooling
Fans: 4x 8cm heavy-duty fans with optimal fan speed control
Liquid Cooling: Direct-to-Chip (D2C) cold plate (optional)
Power Supply: 4x 5250W redundant (2+2) Titanium Level power supplies (certification pending)
Dimension (W x H x L): 106.5 x 82.1 x 245.3 mm
+12V: Max: 125A / Min: 0A (200-240Vac)
AC Input: 5250W: 200-240Vac / 50-60Hz
Output Type: Backplanes (gold finger)
Operating Environment
Environmental Spec.: Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity: 8% to 90% (non-condensing)
Non-operating Relative Humidity: 5% to 95% (non-condensing)
Generative AI SuperCluster

The full turn-key data center solution accelerates time-to-delivery for mission-critical enterprise use cases and eliminates the complexity of building a large cluster, which was previously achievable only through intensive design tuning and time-consuming optimization of supercomputing.

Highest Density

With 32 NVIDIA HGX H100/H200 8-GPU, 4U Liquid-cooled Systems (256 GPUs) in 5 Racks

Key Features

  • Doubled compute density through Supermicro's custom liquid-cooling solution, with up to 40% reduction in data center electricity cost
  • 256 NVIDIA H100/H200 GPUs in one scalable unit
  • 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit
  • 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and GPUDirect Storage for training large language models with up to trillions of parameters (see the training-launch sketch after this list)
  • Customizable AI data pipeline storage fabric with industry leading parallel file system options
  • NVIDIA AI Enterprise Software Ready
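
As a hedged illustration of how the 1:1 GPU-to-NIC fabric is consumed in practice, the sketch below sets up an NCCL-backed PyTorch DistributedDataParallel job. Launched with torchrun across the cluster, NCCL uses NVLink within a node and can use GPUDirect RDMA over the per-GPU NICs between nodes. The model, launch parameters, and training loop are illustrative assumptions, not part of the SuperCluster documentation.

```python
# Minimal sketch, assuming PyTorch and a torchrun launch such as:
#   torchrun --nnodes=32 --nproc-per-node=8 \
#            --rdzv-backend=c10d --rdzv-endpoint=<head-node>:29500 train_sketch.py
# NCCL picks the fastest transport it finds: NVLink/NVSwitch inside a node,
# GPUDirect RDMA over the per-GPU NICs across nodes (when supported).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    local_rank = int(os.environ["LOCAL_RANK"])        # set by torchrun
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)   # stand-in for a real LLM
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                               # toy training loop
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).square().mean()
        loss.backward()                               # gradients all-reduced over NCCL
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```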

Compute Node
