Broaden Your Vision with the OpenVINO™ Toolkit

 

The Intel® Distribution of OpenVINO™ toolkit accelerates inference performance for computer vision and deep learning applications, delivering up to 19x speedups with heterogeneous execution across many Intel® platforms and hardware accelerators (CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA).

OpenVINO™ toolkit
 
  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across computer vision accelerators (CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA) using a common API
  • Speeds time to market via a library of functions and pre-optimized kernels
  • Includes optimized calls for OpenCV and OpenVX

Intel Deep Learning Deployment Toolkit


Increase Deep Learning Workload Performance on Public Models using OpenVINO™ Toolkit & Intel Architecture


Intel® optimizations improve scikit-learn efficiency closer to native code speeds on Intel® Core™ processors

Intel® Distribution for Python* 2019 Scikit-learn* Accelerations on Amazon* AWS EC2

Model Optimizer

Model Optimizer is a cross-platform command-line tool that facilitates the transition from the training environment to the deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on endpoint target devices.
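As a sketch, converting a trained TensorFlow* model to the toolkit's Intermediate Representation (IR) might look like the following; the script location, model file name, and input shape are illustrative, and the exact flags can vary between toolkit releases:

```shell
# Run the Model Optimizer (mo.py, shipped under deployment_tools/model_optimizer
# in pre-2020 releases) on a frozen TensorFlow graph.
python mo.py \
    --input_model frozen_inference_graph.pb \
    --input_shape "[1,300,300,3]" \
    --data_type FP16 \
    --output_dir ./ir
```

The output is a pair of files (`.xml` topology and `.bin` weights) that the Inference Engine loads at deployment time.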

Inference Engine

The Inference Engine deployment process assumes you used the Model Optimizer to convert your trained model to an Intermediate Representation. The diagram below illustrates the typical workflow for deploying a trained deep learning model:

Workflow for deploying a trained deep learning model
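A minimal sketch of that workflow in Python, assuming the classic (pre-2022) Inference Engine API and IR file names produced by the Model Optimizer; the attribute names follow the 2020-era `IECore` interface and differ slightly in older and newer releases:

```python
import numpy as np
from openvino.inference_engine import IECore  # classic Inference Engine Python API

ie = IECore()
# Read the IR produced by the Model Optimizer (file names are illustrative).
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
# Load onto a target device: "CPU", "GPU", "MYRIAD" (Neural Compute Stick), etc.
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, H, W]
frame = np.zeros(shape, dtype=np.float32)            # stand-in for a real image

# Run inference and inspect the output blobs.
result = exec_net.infer({input_name: frame})
print({name: blob.shape for name, blob in result.items()})
```

Because the same IR and API are used for every device, retargeting from CPU to GPU or the Neural Compute Stick is a one-line change to `device_name`.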

 

Model Optimizer Developer Guide: https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer

Pre-trained Models

The Intel® Distribution of OpenVINO™ toolkit includes two sets of optimized models that can expedite development and improve image processing pipelines for Intel® processors. Use these models for development and production deployment without the need to search for or to train your own models.
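As a sketch, the toolkit's bundled model downloader can fetch one of these pre-trained models by name; the script location and the model name below are illustrative examples from the Open Model Zoo:

```shell
# Fetch a pre-trained model from the Open Model Zoo by name
# (downloader.py ships under deployment_tools/tools/model_downloader
# in pre-2020 releases).
python downloader.py --name face-detection-adas-0001 -o ./models
```

The downloaded IR files can then be passed straight to the Inference Engine without any training step.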

 
Intel Edge Devices for Inference
 
 
 
Case Studies
 
Intel and GE* brought the power of AI to clinical diagnostic scanning and other healthcare workflows
 
 

https://ai.intel.com/wp-content/uploads/sites/53/2018/03/IntelSWDevTools_OptimizeDLforHealthcare.pdf

 
Download OpenVINO™
 
 