I-Mustang-V100-MX8

Computing accelerator card with eight Intel® Movidius™ Myriad™ X MA2485 VPUs and a PCIe Gen2 x4 interface


Features

  • Half-Height, Half-Length, Single-slot compact size
  • Low power consumption: approximately 2.5 W per Intel® Movidius™ Myriad™ X VPU
  • Supports the OpenVINO™ toolkit; AI edge computing ready
  • Eight Intel® Movidius™ Myriad™ X VPUs can execute multiple topologies simultaneously

Specification


System
  Chipset                  Eight Intel® Movidius™ Myriad™ X MA2485 VPUs
  Cooling Method / Fan     Active fan
Power
  Power Consumption        Approximately 25 W
Environment
  Operating Temperature    -20°C ~ 60°C
  Humidity                 5% ~ 90%
Card Slot Interface
  PCIe Gen2 x4

The I-Mustang-V100 provides an economical acceleration solution for AI inference, and it works with the OpenVINO toolkit to optimize inference workloads for image classification and computer vision. The OpenVINO toolkit, developed by Intel, fast-tracks the development of high-performance computer vision and deep learning in vision applications.

It includes the Model Optimizer and the Inference Engine: the Model Optimizer converts pre-trained deep learning models (from frameworks such as Caffe and TensorFlow) into an intermediate representation (IR), which the Inference Engine then executes across heterogeneous Intel hardware (CPU, GPU, FPGA, and VPU), as sketched below.
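As a rough illustration of this flow, the following is a minimal sketch using the classic (pre-2022) OpenVINO Inference Engine Python API, which is the generation that shipped with Myriad X support. The file names model.xml/model.bin are placeholders for an IR produced by the Model Optimizer, and the "HDDL" device name assumes the HDDL plugin that drives multi-VPU cards such as the Mustang-V100-MX8 (a single Myriad X device would use "MYRIAD").

```python
# Minimal sketch of the Model Optimizer -> IR -> Inference Engine flow
# described above (classic pre-2022 OpenVINO Python API; file names
# and the "HDDL" device are assumptions, not fixed values).
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read the IR produced by the Model Optimizer (.xml topology + .bin weights).
net = ie.read_network(model="model.xml", weights="model.bin")
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Load the network onto the card's VPUs via the HDDL plugin.
exec_net = ie.load_network(network=net, device_name="HDDL")

# Run inference on a dummy image shaped to the network's expected input.
n, c, h, w = net.input_info[input_blob].input_data.shape
image = np.zeros((n, c, h, w), dtype=np.float32)
result = exec_net.infer(inputs={input_blob: image})
print(result[output_blob].shape)
```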


Intel® Distribution of OpenVINO™ toolkit


The OpenVINO™ toolkit is built around convolutional neural networks (CNNs); it extends workloads across Intel® hardware and maximizes performance. It optimizes pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into IR binary files, then executes the Inference Engine heterogeneously across Intel® hardware such as the CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA.
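To illustrate that heterogeneous execution, the sketch below (same pre-2022 Python API and placeholder IR files as in the earlier example) enumerates the inference devices OpenVINO can see and loads a network onto several of them at once via the MULTI device plugin; the exact device names depend on the plugins and hardware installed.

```python
# Hedged sketch: heterogeneous execution with the MULTI device plugin
# (classic pre-2022 OpenVINO Python API; device names are system-dependent).
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # e.g. ['CPU', 'GPU', 'HDDL'], depending on the system

net = ie.read_network(model="model.xml", weights="model.bin")

# MULTI balances inference requests across the listed devices, e.g. the
# Mustang-V100-MX8's VPUs (via HDDL) plus the host CPU.
exec_net = ie.load_network(network=net, device_name="MULTI:HDDL,CPU")
```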



Key Features of Intel® Movidius™ Myriad™ X VPU:

  • Native FP16 support (see the conversion sketch after this list)
  • Rapidly port and deploy neural networks in Caffe and TensorFlow formats
  • End-to-end acceleration for many common deep neural networks
  • Industry-leading inferences/second/watt performance
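Since the Myriad X natively computes in FP16, models are typically converted to IR with FP16 weights. The sketch below assumes the 2022-era Model Optimizer Python API (openvino.tools.mo.convert_model); "frozen_graph.pb" is a placeholder, and older releases achieved the same thing with the mo command-line tool and its --data_type FP16 flag instead.

```python
# Hedged sketch: converting a frozen TensorFlow graph to OpenVINO IR with
# FP16 weights to match the Myriad X's native FP16 support.
# Assumes the 2022-era Model Optimizer Python API; the input file name
# is a placeholder.
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

ov_model = convert_model("frozen_graph.pb", compress_to_fp16=True)
serialize(ov_model, "model_fp16.xml")  # writes model_fp16.xml + model_fp16.bin
```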

AI Inference System Accelerates Your AI Initiative

AI embedded systems are ideal for deep learning inference computing, helping you get faster, deeper insights into your business. Our AI-based embedded systems support graphics cards, Intel® FPGA acceleration cards, and Intel® Vision Accelerator Cards with Intel® Movidius™ VPUs, providing additional computational power plus an end-to-end solution to run your tasks more efficiently.

With Intel® DevCloud for the Edge and the Intel® Distribution of OpenVINO™ toolkit, you can deploy your solutions faster than ever.