| Specification | Detail |
|---|---|
| Main Chip | Eight Intel® Movidius™ Myriad™ X MA2485 VPUs |
| Operating Systems | Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows 10 support planned for the end of 2018; more operating systems coming soon) |
| Dataplane Interface | PCI Express x4, compliant with PCI Express Specification V2.0 |
| Form Factor | Half-height, half-length, single-width PCIe card |
| Operating Humidity | 5% ~ 90% |
| Power Connector | *Preserved PCIe 6-pin 12V external power |
| Dip Switch / LED Indicator | Identifies card number |
The Mustang-V100 provides an economical acceleration solution for AI inference, and it can also work with the OpenVINO™ toolkit to optimize inference workloads for image classification and computer vision. The OpenVINO™ toolkit, developed by Intel, helps fast-track the development of high-performance computer vision and deep learning applications. It includes the Model Optimizer and the Inference Engine: the Model Optimizer converts pre-trained deep learning models (such as Caffe and TensorFlow models) into an intermediate representation (IR), and the Inference Engine then executes them across heterogeneous Intel hardware (such as CPU, GPU, FPGA, and VPU).
Built around convolutional neural network (CNN) workloads, the OpenVINO™ toolkit extends those workloads across Intel® hardware to maximize performance. It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into IR binary files, then execute the inference engine heterogeneously across Intel® hardware such as the CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA.
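The model-conversion step described above can be sketched with the Model Optimizer command line. The model file and output directory below are illustrative placeholders, and the exact location of `mo.py` varies by OpenVINO release:

```shell
# Convert a pre-trained TensorFlow frozen graph into OpenVINO IR
# (an .xml topology file plus a .bin weights file).
# FP16 is chosen because the Myriad X VPU supports FP16 natively.
# NOTE: mo.py ships with the OpenVINO toolkit; the input model name
# and output path here are placeholders, not files from this document.
python3 mo.py \
    --input_model frozen_inference_graph.pb \
    --data_type FP16 \
    --output_dir ./ir
```

The resulting IR pair is then loaded by the Inference Engine and dispatched to a target device plugin (for example, the VPU-oriented plugins used with Myriad-based accelerators).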
Key Features of Intel® Movidius™ Myriad™ X VPU:
- Native FP16 support
- Rapidly port and deploy neural networks in Caffe and TensorFlow formats
- End-to-End acceleration for many common deep neural networks
- Industry-leading inferences-per-second-per-watt performance