I-Mustang-V100-MX8

Computing Accelerator Card with 8 x Movidius Myriad X MA2485 VPU, PCIe gen2 x4 interface

Features

  • Half-Height, Half-Length, Single-slot compact size
  • Low power consumption: approximately 2.5 W per Intel® Movidius™ Myriad™ X VPU
  • Supports the OpenVINO™ toolkit; ready for AI edge computing
  • Eight Intel® Movidius™ Myriad™ X VPUs can execute multiple topologies simultaneously

Specification

Form factor
  Dataplane interface: PCI Express x4 (compliant with PCI Express Specification V2.0)
System
  Chipset: Eight Intel® Movidius™ Myriad™ X MA2485 VPUs
  Supported OS: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows® 10 64-bit
I/O Interface
  Other on-board devices and interfaces: DIP switch / LED indicator (identifies card number)
Power
  Input: Reserved PCIe 6-pin 12 V external power connector (a standard PCIe slot provides 75 W; the connector is reserved for users with different system configurations)
  Power consumption: approximately 25 W
Environment
  Operating temperature: -20°C ~ 60°C
  Humidity: 5% ~ 90%
Dimensions
  Dimensions: 169.54 mm x 56.16 mm (standard Half-Height, Half-Length, Single-slot PCIe)

The I-Mustang-V100 provides an economical acceleration solution for AI inference, and it works with the OpenVINO™ toolkit to optimize inference workloads for image classification and computer vision. The OpenVINO™ toolkit, developed by Intel, helps fast-track the development of high-performance computer vision and deep learning vision applications. It includes the Model Optimizer and the Inference Engine: the Model Optimizer converts pre-trained deep learning models (such as Caffe and TensorFlow models) into an intermediate representation (IR), and the Inference Engine then executes that IR across heterogeneous Intel hardware (CPU, GPU, FPGA, and VPU).
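
A minimal sketch of this flow, assuming the pre-2022 OpenVINO Python Inference Engine API (openvino.inference_engine) and that the card's VPUs are exposed through the HDDL device plugin; the IR file names and input shape are placeholders:

```python
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO <= 2021.x Python API

ie = IECore()
# Read an IR produced by the Model Optimizer (file names are placeholders).
net = ie.read_network(model="model.xml", weights="model.bin")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# "HDDL" is the plugin typically used for multi-VPU cards such as this one;
# "MYRIAD" targets a single VPU. Check ie.available_devices on your system.
exec_net = ie.load_network(network=net, device_name="HDDL")

# Dummy input matching the model's declared input shape (NCHW layout assumed).
blob = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
result = exec_net.infer(inputs={input_name: blob})
print(result[output_name].shape)
```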

The OpenVINO™ toolkit is built around convolutional neural network (CNN) workloads; it extends those workloads across Intel® hardware and maximizes performance. It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into IR binary files, and then execute the inference engine heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.
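
A short sketch of that heterogeneous targeting, under the same API assumptions as above; the device names and IR file names are illustrative and should be checked against the devices actually visible on the host:

```python
from openvino.inference_engine import IECore  # OpenVINO <= 2021.x Python API

ie = IECore()
# Plugins visible on this machine, e.g. ['CPU', 'GPU', 'HDDL', 'MYRIAD'].
print(ie.available_devices)

net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR

# The same IR can be loaded onto different Intel hardware simply by changing
# the device name; "HDDL" is assumed here for the card's eight VPUs.
for device in ("CPU", "HDDL"):
    if device in ie.available_devices:
        exec_net = ie.load_network(network=net, device_name=device)
        print("Loaded network on", device)
```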


Key Features of Intel® Movidius™ Myriad™ X VPU:

  • Native FP16 support
  • Rapidly port and deploy neural networks in Caffe and TensorFlow formats
  • End-to-end acceleration for many common deep neural networks
  • Industry-leading inferences-per-second-per-watt performance (see the throughput sketch after this list)
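
To keep several of the card's VPUs busy at once, inference requests are typically issued asynchronously. A minimal throughput sketch, under the same assumptions as the earlier examples (pre-2022 Python API, HDDL device name, placeholder IR files):

```python
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO <= 2021.x Python API

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR
input_name = next(iter(net.input_info))

# Create several parallel inference requests so the runtime can keep more
# than one VPU busy at a time; 8 matches the VPU count on this card.
exec_net = ie.load_network(network=net, device_name="HDDL", num_requests=8)

blob = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)

# Issue every request asynchronously, then wait for each one to finish.
for request in exec_net.requests:
    request.async_infer(inputs={input_name: blob})
for request in exec_net.requests:
    request.wait()
```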

Ordering Information

Item                  Description
Mustang-V100-MX8-R11  Computing Accelerator Card with 8 x Movidius Myriad X MA2485 VPU, PCIe gen2 x4 interface, RoHS