I-Mustang-F100-A10

PCIe FPGA high-performance accelerator card with Intel® Arria® 10 GX1150 FPGA, 8 GB DDR4-2400 on-board memory, and PCIe Gen3 x8 interface

Features

  • Half-height, half-length, double-slot form factor
  • Power-efficient, low-latency operation
  • Supports the OpenVINO™ toolkit; ready for AI edge computing
  • The FPGA can be optimized for different deep-learning tasks
  • Intel® FPGAs support multiple floating-point precisions and inference workloads

Specification

Main FPGA: Intel® Arria® 10 GX1150 FPGA
Operating Systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows® 10 support planned for the end of 2018; more operating systems coming soon)
Voltage Regulator and Power Supply: Intel® Enpirion® Power Solutions
Memory: 8 GB on-board DDR4
Dataplane Interface: PCI Express x8, compliant with PCI Express Specification v3.0
Power Consumption: < 60 W
Operating Temperature: 5°C ~ 60°C (ambient temperature)
Cooling: Active fan
Dimensions: Standard half-height, half-length, double-slot
Operating Humidity: 5% ~ 90%
Power Connector: Reserved PCIe 6-pin 12 V external power connector
DIP Switch / LED Indicator: Identifies the card number

The I-Mustang-F100-A10 provides an economical acceleration solution for AI inference, and it works with the OpenVINO™ toolkit to optimize inference workloads for image classification and computer vision. The OpenVINO™ toolkit, developed by Intel, helps fast-track the development of high-performance computer vision and deep-learning vision applications. It includes the Model Optimizer and the Inference Engine: the Model Optimizer converts pre-trained deep-learning models (from frameworks such as Caffe and TensorFlow) into an intermediate representation (IR), and the Inference Engine then executes that IR across heterogeneous Intel hardware (such as CPU, GPU, FPGA, and VPU).
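
To make the two-step workflow concrete, the sketch below runs the Model Optimizer over pre-trained Caffe and TensorFlow models to produce IR files. It is invoked from Python purely for illustration; the script names and flags follow the 2018-era toolkit layout, and the install path, model files, and output directory are placeholder assumptions, not values from this product page.

    # Sketch: converting pre-trained models into OpenVINO IR with the
    # Model Optimizer. Invoked from Python for illustration; the install
    # path and model files below are placeholders.
    import subprocess

    MO_DIR = "/opt/intel/openvino/deployment_tools/model_optimizer"  # assumed install path

    # Caffe models need both the weights (.caffemodel) and the topology (.prototxt).
    subprocess.run([
        "python3", f"{MO_DIR}/mo_caffe.py",
        "--input_model", "model.caffemodel",
        "--input_proto", "deploy.prototxt",
        "--data_type", "FP16",      # FP16 suits FPGA inference bitstreams
        "--output_dir", "ir/",
    ], check=True)

    # TensorFlow models are converted from a frozen graph (.pb).
    subprocess.run([
        "python3", f"{MO_DIR}/mo_tf.py",
        "--input_model", "frozen_model.pb",
        "--data_type", "FP16",
        "--output_dir", "ir/",
    ], check=True)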

The OpenVINO™ toolkit is built around convolutional neural networks (CNNs); it extends workloads across Intel® hardware and maximizes performance. It can optimize pre-trained deep-learning models from frameworks such as Caffe, MXNet, and TensorFlow into IR binary files, and then execute them with the Inference Engine heterogeneously across Intel® hardware such as the CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA.
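
The minimal inference sketch below loads such an IR and targets the FPGA with CPU fallback. It assumes the Python Inference Engine API (IECore) of 2020-era toolkit releases, in which the FPGA plugin was still shipped (it was removed after release 2020.3; earlier toolkits used IENetwork/IEPlugin instead); the IR file names and the random input are placeholders.

    # Minimal OpenVINO Inference Engine sketch (Python API; IECore as in
    # 2020-era releases). The IR pair is assumed to come from the Model
    # Optimizer step shown above.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")

    # Assume a single 4-D image input in NCHW layout (typical for
    # classification models).
    input_blob = next(iter(net.input_info))
    _, c, h, w = net.input_info[input_blob].input_data.shape

    # "HETERO:FPGA,CPU" runs supported layers on the Mustang-F100's FPGA
    # and falls back to the CPU for layers the FPGA plugin cannot execute.
    exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

    image = np.random.rand(1, c, h, w).astype(np.float32)  # placeholder input
    result = exec_net.infer(inputs={input_blob: image})
    print({name: out.shape for name, out in result.items()})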