I-Mustang-V100-MX8

Computing Accelerator Card with 8 x Movidius Myriad X MA2485 VPU, PCIe Gen2 x4 interface, RoHS


Features

  • Half-Height, Half-Length, Single-slot compact size
  • Low power consumption, approximately 2.5 W per Intel® Movidius™ Myriad™ X VPU
  • Supports the OpenVINO™ toolkit; AI edge computing ready
  • Eight Intel® Movidius™ Myriad™ X VPUs can execute multiple topologies simultaneously

Specification


Main Chip: Eight Intel® Movidius™ Myriad™ X MA2485 VPUs
Operating Systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows 10 support planned for the end of 2018; more operating systems coming soon)
Dataplane Interface: PCI Express x4, compliant with PCI Express Specification V2.0
Power Consumption: <30 W
Operating Temperature: 5°C ~ 55°C (ambient)
Cooling: Active fan
Dimensions: Half-height, half-length, single-width PCIe
Operating Humidity: 5% ~ 90%
Power Connector: Reserved PCIe 6-pin 12 V external power
DIP Switch / LED Indicator: Identifies the card number


The I-Mustang-V100 is a PCIe-based accelerator card that uses Intel® Movidius™ VPUs to drive the demanding workloads of modern computer vision and AI applications. It can be installed in a PC or NAS to boost performance, making it a strong choice for AI deep learning inference workloads.

OpenVINO™ toolkit

The OpenVINO™ toolkit targets convolutional neural network (CNN) workloads; it extends inference across Intel® hardware and maximizes performance. It optimizes pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into Intermediate Representation (IR) binary files, then executes them with the inference engine heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.
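As an illustrative sketch of that flow (the file names, paths, and the HDDL device target below are assumptions for illustration, not taken from this datasheet), a pre-trained model is first converted to IR with the Model Optimizer and then run through an inference sample:

```shell
# Convert a pre-trained (frozen) TensorFlow model to OpenVINO IR (.xml + .bin).
# FP16 matches the Myriad X VPU's native precision.
python3 mo.py --input_model frozen_model.pb --data_type FP16 --output_dir ir/

# Run the generated IR on the accelerator card; "HDDL" is the plugin
# typically used for multi-VPU cards (a single Neural Compute Stick
# would use "MYRIAD" instead).
python3 classification_sample.py -m ir/frozen_model.xml -i image.jpg -d HDDL
```

The same IR files can be re-targeted to a CPU or GPU simply by changing the `-d` device argument, which is the portability benefit the paragraph above describes.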



Key Features of Intel® Movidius™ Myriad™ X VPU:

  • Native FP16 support
  • Rapidly port and deploy neural networks in Caffe and TensorFlow formats
  • End-to-End acceleration for many common deep neural networks
  • Industry-leading inferences/second/watt performance