DP Intel 8U GPU Server with NVIDIA HGX H100 8-GPU and Rear I/O

  • 5th/4th Gen Intel® Xeon® Scalable processor support
  • 32 DIMM slots, up to 8TB ECC DDR5 (32x 256GB DRAM)
  • 2 PCIe 5.0 x16 FHHL slots, plus 2 optional PCIe 5.0 x16 FHHL slots
  • 8 PCIe 5.0 x16 LP slots
  • Flexible networking options
  • 16x 2.5" hot-swap NVMe drive bays (12 by default, 4 optional)
  • 2x M.2 NVMe (boot drives only)
  • 3x 2.5" hot-swap SATA drive bays
  • Optional: 8x 2.5" hot-swap SATA drive bays
  • 10 heavy-duty fans with optimal fan speed control
  • 6x 3000W (3+3) redundant Titanium Level power supplies
  • Optional: 8x 3000W (4+4) redundant Titanium Level power supplies
Product Specification
Product SKUs: SuperServer SYS-821GE-TNHR (Black front & silver body)
Motherboard: Super X13DEG-OAD
CPU: Dual Socket E (LGA-4677)
5th Gen Intel® Xeon® / 4th Gen Intel® Xeon® Scalable processors
Core Count: Up to 64C/128T; up to 320MB cache per CPU
Note: Supports up to 350W TDP CPUs (air cooled)
Supports up to 385W TDP CPUs (liquid cooled)
Max GPU Count: 8 onboard GPUs
Supported GPU: NVIDIA SXM: HGX H100 8-GPU (80GB)
CPU-GPU Interconnect: PCIe 5.0 x16
GPU-GPU Interconnect: NVIDIA® NVLink® with NVSwitch™
System Memory
Memory: 32 DIMM slots
Max Memory (1DPC): Up to 4TB 5600MT/s ECC DDR5 RDIMM
Max Memory (2DPC): Up to 8TB 4400MT/s ECC DDR5 RDIMM
Memory Voltage: 1.1V
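The 1DPC/2DPC capacities above follow from the usual 4th/5th Gen Xeon topology of 8 memory channels per CPU; a quick arithmetic cross-check (the channel count is an assumption, not stated in the table):

```python
# Back-of-the-envelope check of the memory table, assuming 2 sockets,
# 8 DDR5 channels per CPU, and 256 GB RDIMMs as the largest module.
SOCKETS = 2
CHANNELS_PER_CPU = 8
DIMM_GB = 256

def max_memory_tb(dpc: int) -> float:
    """Capacity in TB for a given DIMMs-per-channel population."""
    dimms = SOCKETS * CHANNELS_PER_CPU * dpc
    return dimms * DIMM_GB / 1024

print(max_memory_tb(1))  # 1DPC: 16 DIMMs -> 4.0 TB (runs at 5600 MT/s)
print(max_memory_tb(2))  # 2DPC: 32 DIMMs -> 8.0 TB (runs at 4400 MT/s)
```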
On-Board Devices
Chipset: Intel® C741
Network Connectivity: 2x 10GbE BaseT with Intel® X550-AT2 (optional)
2x 25GbE SFP28 with Broadcom® BCM57414 (optional)
2x 10GbE BaseT with Intel® X710-AT2 (optional)
IPMI: IPMI 2.0 (Intelligent Platform Management Interface) with virtual media over LAN and KVM-over-LAN support
Input / Output
Video: 1 VGA port
System BIOS
Software: IPMI 2.0
KVM with dedicated LAN 
Super Diagnostics Offline 
SuperDoctor® 5
Supermicro Update Manager (SUM)
Supermicro Power Manager (SPM)
Supermicro Server Manager (SSM)
Redfish API
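Since the BMC exposes a Redfish API, power and sensor data can be pulled programmatically. A minimal sketch of building such a request follows; the host name, credentials, and chassis ID "1" are illustrative assumptions, and the real IDs should be discovered by enumerating /redfish/v1/Chassis on the actual BMC:

```python
import base64

REDFISH_ROOT = "/redfish/v1"

def chassis_power_endpoint(host: str, chassis_id: str = "1") -> str:
    # Standard DMTF Redfish path for chassis power metrics (PSU status,
    # input/output wattage). Chassis IDs vary by BMC firmware; "1" is a
    # placeholder -- GET {root}/Chassis to enumerate the real ones.
    return f"https://{host}{REDFISH_ROOT}/Chassis/{chassis_id}/Power"

def basic_auth_header(user: str, password: str) -> dict:
    # Redfish services accept HTTP Basic auth (or session tokens).
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Example request construction (not executed here; needs a reachable BMC):
#   urllib.request.Request(chassis_power_endpoint("bmc-host"),
#                          headers=basic_auth_header("ADMIN", "password"))
```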
Power Configurations: ACPI Power Management
Power-on mode for AC power recovery
Hardware: Trusted Platform Module (TPM) 2.0
Silicon Root of Trust (RoT) – NIST 800-193 Compliant
Features: Cryptographically Signed Firmware
Secure Boot
Secure Firmware Updates
Automatic Firmware Recovery
Supply Chain Security: Remote Attestation
Runtime BMC Protections
System Lockdown
PC Health Monitoring
CPU: 8+4 phase-switching voltage regulator
Monitors for CPU Cores, Chipset Voltages, Memory
Fan: Fans with tachometer monitoring
Pulse Width Modulated (PWM) fan connectors
Status monitor for speed control
Temperature: Monitoring for CPU and chassis environment
Thermal Control for fan connectors
Form Factor: 8U Rackmount
Dimensions and Weight
Height: 14" (355.6mm)
Width: 17.2" (437mm)
Depth: 33.2" (843.28mm)
Package: 29.5" (H) x 27.5" (W) x 51.2" (D)
Weight: Net Weight: 166 lbs (75.3 kg)
Gross Weight: 225 lbs (102.1 kg)
Available Color: Black front & silver body
Front Panel
Buttons: Power On/Off button
System Reset button
LEDs: Hard drive activity LED
Network activity LEDs
Power status LED
System Overheat & Power Fail LED
Expansion Slots
PCI-Express (PCIe): 8 PCIe 5.0 x16 LP slots
4 PCIe 5.0 x16 FHHL slots
Drive Bays / Storage
Hot-swap: 19x 2.5" hot-swap NVMe/SATA drive bays
(16x 2.5" NVMe dedicated)
M.2: 2x M.2 NVMe
System Cooling
Fans: 10 heavy-duty fans with optimal fan speed control
Liquid Cooling: Direct-to-Chip (D2C) cold plate (optional)
Power Supply: 6x 3000W redundant Titanium Level power supplies
Dimension (W x H x L): 106.5 x 82.1 x 245.5 mm
AC Input: 3000W: 230-240Vdc / 50-60Hz (for CQC only)
2880W: 200-207Vac / 50-60Hz 
3000W: 207-240Vac / 50-60Hz
+12V: Max: 91.66A / Min: 0A (200Vdc-240Vdc)
12V SB: Max: 3A / Min: 0A
Output Type: Backplanes (gold finger)
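As a rough cross-check of the 3+3 configuration, the GPU and CPU TDPs quoted elsewhere in this spec can be summed against the redundant capacity. The assumption that N+N redundancy leaves N supplies carrying the full load is ours, and the remainder must also cover memory, drives, fans, and NICs:

```python
# Rough power budget sketch, using figures from the spec above.
PSU_W = 3000
ACTIVE_PSUS = 3            # 3+3 redundant: 3 supplies carry the load
GPU_W, GPU_COUNT = 700, 8  # HGX H100 SXM5 TDP per GPU
CPU_W, CPU_COUNT = 350, 2  # max air-cooled CPU TDP

capacity_w = PSU_W * ACTIVE_PSUS                   # redundant capacity in W
gpu_cpu_w = GPU_W * GPU_COUNT + CPU_W * CPU_COUNT  # GPUs + CPUs alone
headroom_w = capacity_w - gpu_cpu_w                # left for DIMMs, drives, fans, NICs

print(capacity_w, gpu_cpu_w, headroom_w)  # 9000 6300 2700
```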
Operating Environment
Environmental Spec.: Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F) 
Operating Relative Humidity: 8% to 90% (non-condensing) 
Non-operating Relative Humidity: 5% to 95% (non-condensing)
Generative AI SuperCluster

This full turn-key data center solution accelerates time-to-delivery for mission-critical enterprise use cases and eliminates the complexity of building a large cluster, which previously required the intensive design tuning and time-consuming optimization of supercomputing.

Proven Design


With 32 NVIDIA HGX H100/H200 8-GPU, 8U Air-cooled Systems (256 GPUs) in 9 Racks

Key Features

  • Proven industry leading architecture for large scale AI infrastructure deployments
  • 256 NVIDIA H100/H200 GPUs in one scalable unit
  • 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit
  • 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and Storage for training large language models with up to trillions of parameters
  • Customizable AI data pipeline storage fabric with industry leading parallel file system options
  • NVIDIA AI Enterprise Software Ready
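The per-unit HBM figures above are straightforward products of GPU count and per-GPU capacity; note the 141GB-per-H200 figure used below is our assumption, since the list quotes only the totals:

```python
# Sanity check of the scalable-unit HBM totals quoted above.
GPUS_PER_UNIT = 256

def total_hbm_tb(gb_per_gpu: int) -> float:
    """Aggregate HBM across one scalable unit, in decimal TB."""
    return GPUS_PER_UNIT * gb_per_gpu / 1000

print(total_hbm_tb(80))   # H100, 80 GB each:  20.48 TB HBM3  (quoted "20TB")
print(total_hbm_tb(141))  # H200, 141 GB each: 36.096 TB HBM3e (quoted "36TB")
```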

Compute Node


Liquid Cooling GPU Server

GPU Super Server SYS-821GE-TNHR
Overview: 8U dual socket (4th Gen Intel® Xeon® Scalable processors), up to 8 SXM5 GPUs
CPU: 2x 4th Gen Intel® Xeon® Scalable processors
Memory: 32 DIMM slots; up to 8TB: 32x 256GB DRAM (additional memory available)
GPU: 8x HGX H100 SXM5 GPUs (80GB, 700W TDP)
Storage: 8x 2.5" SATA; 8x 2.5" NVMe U.2 via PCIe switches; additional 8x 2.5" NVMe U.2 via PCIe switches (option); 2x NVMe M.2 (additional storage available)
Power: 6x 3000W Titanium Level power supplies, 3+3 redundant

Accelerate Large Scale AI Training Workloads

Large-scale AI training demands cutting-edge technologies that maximize the parallel computing power of GPUs to handle billions, if not trillions, of AI model parameters trained on massive, exponentially growing datasets.

Leveraging NVIDIA's HGX™ H100 SXM 8-GPU, the fastest NVLink™ and NVSwitch™ GPU-GPU interconnects with up to 900GB/s of bandwidth, and 1:1 networking to each GPU for node clustering, these systems are optimized to train large language models from scratch in the shortest amount of time.

Completing the stack with all-flash NVMe for a faster AI data pipeline, we provide fully integrated racks with liquid cooling options to ensure fast deployment and a smooth AI training experience.