Releases: openvinotoolkit/openvino
2022.3.1
Major Features and Improvements Summary
This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years (one year for bug fixes, and two years for security patches). Read Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for more details.
- This 2022.3.1 LTS release provides functional bug fixes and minor capability changes for the previous 2022.3 Long-Term Support (LTS) release, enabling developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.
- Intel® Movidius™ VPU-based products are supported in this release.
You can find the OpenVINO™ toolkit 2022.3.1 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2022.3.1
- OpenVINO™ Development tools:
pip install openvino-dev==2022.3.1
Release documentation is available here: https://docs.openvino.ai/2022.3/
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-lts/2022-3.html
2023.0.0
Summary of major features and improvements
- More integrations, minimizing code changes
- Now you can load TensorFlow and TensorFlow Lite models directly in OpenVINO Runtime and OpenVINO Model Server; models are converted automatically. For maximum performance, it is still recommended to convert to the OpenVINO Intermediate Representation (IR) format before loading the model. Additionally, similar functionality is available for PyTorch models as a preview feature: you can convert PyTorch models directly, without first converting to ONNX.
- Support for Python 3.11
- NEW: C++ developers can now install OpenVINO Runtime from Conda Forge
- NEW: ARM processors are now supported in CPU plug-in, including dynamic shapes, full processor performance, and broad sample code/notebook coverage. Officially validated for Raspberry Pi 4 and Apple® Mac M1/M2
- Preview: A new Python API has been introduced to allow developers to convert and optimize models directly from Python scripts
- Broader model support and optimizations
- Expanded model support for generative AI: CLIP, BLIP, Stable Diffusion 2.0, text processing models, transformer models (e.g. S-BERT, GPT-J), and others of note: Detectron2, Paddle Slim, RNN-T, Segment Anything Model (SAM), Whisper, and YOLOv8.
- Initial support for dynamic shapes on GPU – you no longer need to switch to static shapes when leveraging the GPU, which is especially important for NLP models.
- Neural Network Compression Framework (NNCF) is now the main quantization solution. You can use it for both post-training optimization and quantization-aware training. Try it out:
pip install nncf
- Portability and performance
- The CPU plugin now offers thread scheduling on 12th Gen Intel® Core™ processors and later. You can choose to run inference on E-cores, P-cores, or both, depending on your application’s configuration, making it possible to optimize for performance or for power savings as needed.
- NEW: Default Inference Precision - no matter which device you use, OpenVINO will default to the format that enables its optimal performance. For example, FP16 for GPU or BF16 for 4th Generation Intel® Xeon®. You no longer need to convert the model beforehand to specific IR precision, and you still have the option of running in accuracy mode if needed.
- Model caching on GPU is now improved with more efficient model loading/compiling.
You can find OpenVINO™ toolkit 2023.0 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2023.0.0
- OpenVINO™ Development tools:
pip install openvino-dev==2023.0.0
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html
2023.0.0.dev20230427
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.0.0.dev20230427 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.0.0.dev20230427
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.0.0.dev20230427
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
2023.0.0.dev20230407
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.0.0.dev20230407 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.0.0.dev20230407
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.0.0.dev20230407
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
2023.0.0.dev20230217
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.0.0.dev20230217 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.0.0.dev20230217
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.0.0.dev20230217
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
2023.0.0.dev20230119
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.0.0.dev20230119 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.0.0.dev20230119
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.0.0.dev20230119
Release documentation is available here: https://docs.openvino.ai/nightly/
2022.3.0
Major Features and Improvements Summary
This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years (one year of bug fixes, and two years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for details.
- The 2022.3 LTS release provides functional bug fixes and capability changes for the previous 2022.2 release. This new release empowers developers with new performance enhancements, more deep learning models, more device portability, and higher inferencing performance with fewer code changes.
- Broader model and hardware support – Optimize & deploy with ease across an expanded range of deep learning models including NLP, and access AI acceleration across an expanded range of hardware.
- Full support for 4th Generation Intel® Xeon® Scalable processor family (code name Sapphire Rapids) for deep learning inferencing workloads from edge to cloud.
- Full support for Intel’s discrete graphics cards, such as Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing in intelligent cloud, edge, and media analytics workloads.
- Improved performance when leveraging throughput hint on CPU plugin for 12th and 13th Generation Intel® Core™ processor family (code named Alder Lake and Raptor Lake).
- Enhanced “Cumulative throughput” and selection of compute modes added to AUTO functionality, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance.
- Expanded model coverage - Optimize & deploy with ease across an expanded range of deep learning models.
- Broader support for NLP models and use cases like text-to-speech and voice recognition.
- Continued performance enhancements for computer vision models, including StyleGAN2, Stable Diffusion, PyTorch RAFT, and YOLOv7.
- Significant quality and model performance improvements on Intel GPUs compared to the previous OpenVINO toolkit release.
- New Jupyter notebook tutorials for Stable Diffusion text-to-image generation, YOLOv7 optimization and 3D Point Cloud Segmentation.
- Improved API and More Integrations – Easier-to-adopt and easier-to-maintain code that requires fewer code changes, aligns better with frameworks, and minimizes conversion overhead.
- Preview of TensorFlow Front End – Load TensorFlow models directly into OpenVINO Runtime and easily export to the OpenVINO IR format without offline conversion. The new "--use_new_frontend" flag enables this preview – see the Model Optimizer section of the release notes below for further details.
- NEW: Hugging Face Optimum Intel – Gain the performance benefits of OpenVINO (including NNCF) when using Hugging Face Transformers. Initial release supports PyTorch models.
- Intel® oneAPI Deep Neural Network Library (oneDNN) has been updated to 2.7 for further refinements and significant improvements in performance for the latest Intel CPU and GPU processors.
- Introducing C API 2.0 to support new features introduced in OpenVINO API 2.0, such as dynamic shapes on CPU, pre- and post-processing APIs, and unified property definition and usage. The new C API 2.0 shares the same library files as the 1.0 API but uses a different header file.
- Note: Intel® Movidius™ VPU-based products are not supported in this release, but will be added back in a future OpenVINO 2022.3.1 LTS update. In the meantime, for support on those products please use OpenVINO 2022.1.
- Note: Macintosh* computers using the M1* processor can now install OpenVINO and use the OpenVINO ARM* Device Plug-in on OpenVINO 2022.3 LTS and later. This plugin is community-supported; no support is provided by Intel, and it does not fall under the two-year LTS support policy. Learn more here: https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html
You can find OpenVINO™ toolkit 2022.3 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2022.3.0
- OpenVINO™ Development tools:
pip install openvino-dev==2022.3.0
Release documentation is available here: https://docs.openvino.ai/2022.3/
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-2022-3-lts-relnotes.html
2022.3.0.dev20221125
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2022.3.0.dev20221125 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2022.3.0.dev20221125
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2022.3.0.dev20221125
Release documentation is available here: https://docs.openvino.ai/nightly/
2022.3.0.dev20221103
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2022.3.0.dev20221103 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2022.3.0.dev20221103
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2022.3.0.dev20221103
Release documentation is available here: https://docs.openvino.ai/nightly/
* - sha256 sums for archives
82b275a2a72daf41b6cbdfdcf9c853bf9fe6507e623b39713e38394dfca4a8df l_openvino_toolkit_debian9_arm_2022.3.0.dev20221103_armhf.tgz
398078a0fd7c30515e1fbc7120838448c6d300fcf7b3f6cc976ea08954db8fdf l_openvino_toolkit_rhel8_2022.3.0.dev20221103_x86_64.tgz
c5a1026cc6d211b48d64c15ad24bbac83d14d74c840e0fbcedb168ec06b1d6ee l_openvino_toolkit_ubuntu18_2022.3.0.dev20221103_x86_64.tgz
2ac96d451222fd07789df9f8bbdae7d6c0a10b83607c3d780800c04ee7cb4c91 l_openvino_toolkit_ubuntu20_2022.3.0.dev20221103_x86_64.tgz
16bbb025f5d145b3ebd0c84859a04a1f0f67d2bab21347998cd1d23cf3f2fd8e m_openvino_toolkit_osx_2022.3.0.dev20221103_x86_64.tgz
96bb611a69a89d74848418cdca1afb719b795143e20e9a039f949c8ea147be9b w_openvino_toolkit_windows_2022.3.0.dev20221103_x86_64.zip
2022.2.0
Major Features and Improvements Summary
In this standard release, we’ve fine-tuned our largest update (2022.1) in 4 years to include support for Intel’s latest CPUs and discrete GPUs for more AI innovation and opportunity.
Note: This release is intended for developers who prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long-Term Support (LTS) releases are released every year and supported for two years (one year of bug fixes, and two years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy for details. For the latest LTS release, visit our selector tool.
- Broader model and hardware support – Optimize & deploy with ease across an expanded range of deep learning models, including NLP, and access AI acceleration across an expanded range of hardware.
- NEW: Support for Intel 13th Gen Core Processor for desktop (code named Raptor Lake).
- NEW: Preview support for Intel’s discrete graphics cards, Intel® Data Center GPU Flex Series and Intel® Arc™ GPU for DL inferencing workloads in intelligent cloud, edge and media analytics workloads. Hundreds of models enabled.
- NEW: Test your model performance with preview support for Intel 4th Generation Xeon® processors (code named Sapphire Rapids).
- Broader support for NLP models and use cases like text-to-speech and voice recognition. Reduced memory consumption when using dynamic input shapes on CPU. Improved efficiency for NLP applications.
- Frameworks Integrations – More options that provide minimal code changes to align with your existing frameworks.
- OpenVINO Execution Provider for ONNX Runtime gives ONNX Runtime developers more choice for performance optimizations by making it easy to add OpenVINO with minimal code changes.
- NEW: Accelerate PyTorch models with ONNX Runtime using OpenVINO™ integration with ONNX Runtime for PyTorch (OpenVINO™ Torch-ORT). Now PyTorch developers can stay within their framework and benefit from OpenVINO performance gains.
- OpenVINO Integration with TensorFlow now supports more deep learning models with improved inferencing performance.
- NOTE: The above framework integrations are not included in the install packages. Please visit the respective GitHub links for more information. These products are intended for those who have not yet installed native OpenVINO.
- More portability and performance – See a performance boost straight away with automatic device discovery, load balancing, and dynamic inference parallelism across CPU, GPU, and more.
- NEW: Introducing a new performance hint ("Cumulative throughput") in the AUTO device, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance.
- NEW: Introducing Intel® FPGA AI Suite support, which enables real-time, low-latency, and low-power deep learning inference in an easy-to-use package.
- NOTE: The Intel® FPGA AI Suite is not included in our distribution packages; please request information here to learn more.
You can find OpenVINO™ toolkit 2022.2 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2022.2.0
- OpenVINO™ Development tools:
pip install openvino-dev==2022.2.0
Release documentation is available here: https://docs.openvino.ai/2022.2/
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes.html