AI Inference System for AI/Machine Learning Acceleration

The ability to simulate human intelligence, including reasoning, goal setting, understanding and generating language, and perceiving and responding to sensory input, has become a benchmark for evaluating the progress of AI. As learning models derive ever more logical rules, we are constantly building an updated knowledge base for AI. We need devices, called "inference systems", to apply that logic to different tasks.

Many methods focus on AI training, including large-scale machine learning, deep learning, reinforcement learning, and natural language processing. All of these methods advance the possibilities of applying intelligent solutions to the real-world problems around us.

AI Inference System Accelerates Your AI Initiative

AI embedded systems are ideal for deep learning inference computing, helping you gain faster, deeper insights into your business. Our AI-based embedded systems support graphics cards, Intel® FPGA acceleration cards, and the Intel® Vision Accelerator Card with Intel® Movidius™ VPU, providing additional computational power and an end-to-end solution to run your tasks more efficiently.

Intel® DevCloud for the Edge and the Intel® Distribution of OpenVINO™ toolkit help you deploy your solutions faster than ever.

Models: I-TANK-880, I-HTB-200, I-ITG-100AI, AD-MIC-730AI

Accelerator card options:
- Intel® Arria® 10 GX1150 FPGA
- 8 x Intel® Movidius™ VPU
- 4 x Intel® Movidius™ VPU
- 2 x Intel® Movidius™ VPU
- NVIDIA Jetson™ Xavier

Applications: Image Classification, Object Detection, Image Segmentation