AS-8125GS-TNHR

DP AMD 8U Server with NVIDIA HGX H100 8-GPU

  • 24 DIMM slots; up to 6TB DRAM; 4800MHz ECC DDR5 RDIMM/LRDIMM
  • 8 PCI-E Gen 5.0 x16 LP slots, 2 PCI-E Gen 5.0 x16 FHFL slots
  • Flexible networking options
  • 1 M.2 NVMe for boot drive only
  • 12x 2.5" hot-swap NVMe drive bays
  • 2x 2.5" hot-swap SATA drive bays
  • 10 heavy-duty fans with optimal fan speed control
  • 8x 3000W redundant Titanium level power supplies
Product Specification
Product SKUs: A+ Server AS-8125GS-TNHR
Motherboard: Super H13DSG-O-CPU
Processor
CPU: Dual Socket SP5, AMD EPYC™ 9004 Series Processors
Cores: Up to 128C/256T
Note: Supports up to 400W TDP CPUs (air cooled)
GPU
Max GPU Count: 8 onboard GPUs
Supported GPU: NVIDIA SXM: HGX H100 8-GPU (80GB)
CPU-GPU Interconnect: PCIe 5.0 x16
GPU-GPU Interconnect: NVIDIA® NVLink™ with NVSwitch™
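
On a deployed system, the NVLink fabric can be sanity-checked from Python through NVML. A minimal sketch, assuming the NVIDIA driver and the nvidia-ml-py package are installed; the reported link counts depend on the actual hardware:

```python
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        active = 0
        # Probe each possible NVLink until the driver reports no more links.
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active += 1
            except pynvml.NVMLError:
                break
        print(f"GPU {i}: {active} NVLink links active")
finally:
    pynvml.nvmlShutdown()
```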
System Memory
Memory Capacity: 24 DIMM slots; up to 6TB (24x 256GB DRAM)
Memory Type: 4800MHz ECC DDR5 RDIMM/LRDIMM
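
The capacity figure follows directly from the slot count and DIMM size; a one-line check:

```python
# 24 DIMM slots x 256 GB RDIMMs = 6,144 GB, i.e. the stated 6 TB maximum.
dimm_slots, dimm_gb = 24, 256
print(dimm_slots * dimm_gb, "GB =", dimm_slots * dimm_gb // 1024, "TB")
```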
On-Board Devices
Chipset: AMD SP5
Network Connectivity: 2x 10GbE BaseT with Intel® X550-AT2 (optional)
IPMI: Support for Intelligent Platform Management Interface v2.0; IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
Input / Output
Video: 1 VGA port
System BIOS
BIOS Type: AMI 32MB SPI Flash EEPROM
Management
Software: OOB Management Package (SFT-OOB-LIC), Redfish API, IPMI 2.0, SSM, Intel® Node Manager, SPM, KVM with dedicated LAN, SUM, NMI, Watch Dog, SuperDoctor® 5
Power Configurations: ACPI Power Management; power-on mode for AC power recovery
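
Since the board exposes both IPMI 2.0 and a Redfish API, basic health can be polled out-of-band. A minimal sketch with placeholder BMC address and credentials; the endpoint paths are the standard Redfish resources:

```python
import requests

BMC = "https://192.0.2.10"       # placeholder BMC address
AUTH = ("ADMIN", "password")     # placeholder credentials

# BMCs commonly ship with self-signed certificates, hence verify=False.
systems = requests.get(f"{BMC}/redfish/v1/Systems",
                       auth=AUTH, verify=False).json()
member = systems["Members"][0]["@odata.id"]
system = requests.get(f"{BMC}{member}", auth=AUTH, verify=False).json()
print(system["PowerState"], system["Status"]["Health"])
```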
PC Health Monitoring
CPU: 7+1 phase-switching voltage regulator; monitors for CPU cores, chipset voltages, and memory
Fan: Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitor for speed control
Temperature: Monitoring for CPU and chassis environment; thermal control for fan connectors
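
The fan tachometers and temperature sensors above are readable over the same BMC interface; a sketch using the stock ipmitool CLI (host and credentials are placeholders):

```python
import subprocess

base = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10",
        "-U", "ADMIN", "-P", "password"]
# Dump the fan-speed and temperature sensor records from the BMC.
for sensor_type in ("Fan", "Temperature"):
    out = subprocess.run(base + ["sdr", "type", sensor_type],
                         capture_output=True, text=True, check=True)
    print(out.stdout)
```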
Chassis
Form Factor: 8U Rackmount
Model: CSE-GP801TS
Dimensions and Weight
Height: 14" (355.6mm)
Width: 17.2" (437mm)
Depth: 33.2" (843.28mm)
Package: 29.5" (H) x 27.5" (W) x 51.2" (D)
Weight: Net 166 lbs (75.3 kg); Gross 225 lbs (102.1 kg)
Available Color: Black front & silver body
Front Panel
Buttons: Power on/off button; system reset button
LEDs: Hard drive activity LED; network activity LEDs; power status LED; system overheat & power fail LED
Expansion Slots
PCI-Express (PCI-E): 8 PCIe 5.0 x16 LP slots; 2 PCIe 5.0 x16 FHFL slots
Drive Bays / Storage
Hot-swap: 14x 2.5" hot-swap NVMe/SATA drive bays (6x 2.5" NVMe hybrid; 4x 2.5" NVMe dedicated)
M.2: 1 M.2 NVMe
System Cooling
Fans: 10 heavy-duty fans with optimal fan speed control
Power Supply: 8x 3000W redundant power supplies, Titanium level
AC Input: 3000W
DC Output: 3000W
Output Type: Backplanes (connector)
Operating Environment
Environmental Spec.:
Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F)
Non-operating Temperature: -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity: 8% to 90% (non-condensing)
Non-operating Relative Humidity: 5% to 95% (non-condensing)
Generative AI SuperCluster

This full turn-key data center solution accelerates time-to-delivery for mission-critical enterprise use cases and eliminates the complexity of building a large cluster, which was previously achievable only through intensive design tuning and the time-consuming optimization of supercomputing infrastructure.

Proven Design

 

With 32 NVIDIA HGX H100/H200 8-GPU, 8U Air-cooled Systems (256 GPUs) in 9 Racks

Key Features

  • Proven industry-leading architecture for large-scale AI infrastructure deployments
  • 256 NVIDIA H100/H200 GPUs in one scalable unit
  • 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit (see the arithmetic sketch after this list)
  • 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and Storage for training large language models with up to trillions of parameters
  • Customizable AI data pipeline storage fabric with industry leading parallel file system options
  • NVIDIA AI Enterprise Software Ready
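
The scalable-unit totals follow from the per-GPU figures: 80GB of HBM3 per H100 as specified above, and 141GB of HBM3e, the published per-GPU capacity of the H200:

```python
systems, gpus_per_system = 32, 8
gpus = systems * gpus_per_system     # 256 GPUs per scalable unit
print(gpus * 80)                     # 20,480 GB of HBM3 with H100 (~20 TB)
print(gpus * 141)                    # 36,096 GB of HBM3e with H200 (~36 TB)
```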


Large Scale AI Training Workloads

HGX H100 Systems 
Multi-Architecture Flexibility with Future-Proof Open-Standards-Based Design

 

Benefits & Advantages

  • High-performance GPU interconnect at up to 900GB/s, 7x better performance than PCIe (see the bandwidth arithmetic after this list)
  • Superior thermal design supports maximum power/performance CPUs and GPUs
  • Dedicated networking and storage per GPU with up to double the NVIDIA GPUDirect throughput of the previous generation
  • Modular architecture for storage and I/O configuration flexibility 
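
Where the 7x figure comes from: fourth-generation NVLink provides 900GB/s of GPU-to-GPU bandwidth, versus roughly 128GB/s bidirectional for a PCIe 5.0 x16 link:

```python
nvlink_gb_s = 900                     # NVLink + NVSwitch, per GPU
pcie5_x16_gb_s = 2 * 64               # PCIe 5.0 x16, ~64 GB/s each direction
print(nvlink_gb_s / pcie5_x16_gb_s)   # ~7x
```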

Key Features

  • 8 next-generation H100 SXM GPUs with NVLink, NVSwitch interconnect
  • Supports PCIe 5.0, DDR5 and Compute Express Link (CXL) 1.1+
  • Innovative modular architecture designed for flexibility and futureproofing in 8U
  • Optimized thermal capacity and airflow to support CPUs up to 350W and GPUs up to 700W with air cooling and optional liquid cooling
  • PCIe 5.0 x16 1:1 networking slots for GPUs, up to 400Gbps each, supporting GPUDirect Storage and RDMA; up to 16 U.2 NVMe drive bays
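
In bytes rather than bits, the 1:1 networking spec works out as follows (a quick conversion, not an additional spec):

```python
per_gpu_gbit = 400
print(per_gpu_gbit / 8)      # 50 GB/s of network bandwidth per GPU
print(per_gpu_gbit * 8)      # 3,200 Gb/s aggregate across an 8-GPU node
```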

Liquid Cooling GPU Server

GPU Super Server AS-8125GS-TNHR
Overview: 8U dual socket (4th Gen AMD EPYC™), up to 8 SXM5 GPUs
CPU: 2x 4th Gen AMD EPYC™ processors
Memory (additional memory available): 24 DIMM slots; up to 6TB ECC DDR5-4800 RDIMM
Graphics: 8x HGX H100 SXM5 GPUs (80GB, 700W TDP)
Storage (additional storage available): 8x 2.5" SATA; 8x 2.5" NVMe U.2 via PCIe switches; additional 8x 2.5" NVMe U.2 via PCIe switches (option); 2x NVMe M.2
Power: 3+3 redundant; 6x 3000W Titanium level efficiency power supplies
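
A rough power budget for this configuration, assuming the 3+3 arrangement means three 3000W supplies carry the full load while three provide redundancy; memory, drives, and fans are ignored in this sketch, and the 400W CPU TDP is taken from the processor note above:

```python
usable_w = 3 * 3000               # 9,000 W available with full redundancy
gpu_w = 8 * 700                   # 5,600 W for eight H100 SXM5 GPUs
cpu_w = 2 * 400                   # 800 W for two EPYC CPUs at 400 W TDP
print(usable_w - gpu_w - cpu_w)   # ~2,600 W left for memory, drives, fans
```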

Accelerate Large Scale AI Training Workloads

Large-scale AI training demands cutting-edge technologies that maximize the parallel computing power of GPUs to handle billions, if not trillions, of AI model parameters, trained on massive and exponentially growing datasets.

Leveraging NVIDIA's HGX™ H100 SXM 8-GPU platform with the fastest NVLink™ and NVSwitch™ GPU-GPU interconnects, at up to 900GB/s of bandwidth, plus the fastest 1:1 networking to each GPU for node clustering, these systems are optimized to train large language models from scratch in the shortest amount of time.

Completing the stack with all-flash NVMe for a faster AI data pipeline, we provide fully integrated racks with liquid cooling options to ensure fast deployment and a smooth AI training experience.
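
As an illustration of the workload these systems target, a minimal multi-node data-parallel training sketch in PyTorch: NCCL rides the NVLink/NVSwitch fabric within a node and GPUDirect RDMA across nodes. The model and data below are stand-ins, not part of the product:

```python
# Launch across a 32-node, 8-GPU-per-node cluster with, e.g.:
#   torchrun --nnodes=32 --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")        # NCCL uses NVLink/RDMA paths
local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
torch.cuda.set_device(local_rank)

model = DDP(torch.nn.Linear(4096, 4096).cuda(), device_ids=[local_rank])
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):
    x = torch.randn(32, 4096, device="cuda")   # stand-in batch
    loss = model(x).square().mean()            # stand-in loss
    opt.zero_grad()
    loss.backward()                            # grads all-reduced over NCCL
    opt.step()

dist.destroy_process_group()
```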