
openvino-intel-npu-plugin-2024.1.0-1.1 RPM for x86_64

From openSUSE Tumbleweed for x86_64

Name: openvino-intel-npu-plugin
Version: 2024.1.0
Release: 1.1
Distribution: openSUSE Tumbleweed
Vendor: openSUSE
Build date: Fri May 10 00:56:53 2024
Build host: reproducible
Group: Unspecified
Size: 781544
Source RPM: openvino-2024.1.0-1.1.src.rpm
Packager: https://bugs.opensuse.org
Url: https://github.com/openvinotoolkit/openvino
Summary: Intel NPU plugin for OpenVINO toolkit
OpenVINO is an open-source toolkit for optimizing and deploying AI inference.

This package provides the Intel NPU plugin for OpenVINO for the x86_64, x86_64_v2, x86_64_v3, x86_64_v4, amd64, and em64t architectures.
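
The plugin is loaded on demand by the OpenVINO runtime rather than linked against directly. A minimal Python sketch of selecting the NPU device, assuming the separately packaged OpenVINO Python bindings and an Intel NPU-equipped host ("model.xml" is a placeholder IR file):

    import openvino as ov

    core = ov.Core()
    # With this plugin installed on supported hardware, "NPU"
    # should appear among the enumerable devices.
    print(core.available_devices)

    # Compile the placeholder model for the NPU device.
    compiled = core.compile_model("model.xml", "NPU")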

Provides

Requires

License

Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND HPND AND JSON AND MIT AND OFL-1.1 AND Zlib

Changelog

* Thu May 09 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Fix sample source path in build script:
    * openvino-fix-build-sample-path.patch
  - Update to 2024.1.0
  - More Generative AI coverage and framework integrations to
    minimize code changes.
    * Mixtral and URLNet models optimized for performance
      improvements on Intel® Xeon® processors.
    * Stable Diffusion 1.5, ChatGLM3-6B, and Qwen-7B models
      optimized for improved inference speed on Intel® Core™
      Ultra processors with integrated GPU.
    * Support for Falcon-7B-Instruct, a GenAI Large Language Model
      (LLM) ready-to-use chat/instruct model with superior
      performance metrics.
    * New Jupyter Notebooks added: YOLO V9, YOLO V8
      Oriented Bounding Boxes Detection (OBB), Stable Diffusion
      in Keras, MobileCLIP, RMBG-v1.4 Background Removal, Magika,
      TripoSR, AnimateAnyone, LLaVA-Next, and RAG system with
      OpenVINO and LangChain.
  - Broader Large Language Model (LLM) support and more model
    compression techniques.
    * LLM compilation time reduced through additional optimizations
      with compressed embedding. Improved 1st token performance of
      LLMs on 4th and 5th generations of Intel® Xeon® processors
      with Intel® Advanced Matrix Extensions (Intel® AMX).
    * Better LLM compression and improved performance with oneDNN,
      INT4, and INT8 support for Intel® Arc™ GPUs.
    * Significant memory reduction for select smaller GenAI
      models on Intel® Core™ Ultra processors with integrated GPU.
  - More portability and performance to run AI at the edge,
    in the cloud, or locally.
    * The preview NPU plugin for Intel® Core™ Ultra processors
      is now available in the OpenVINO open-source GitHub
      repository, in addition to the main OpenVINO package on PyPI.
    * The JavaScript API is now more easily accessible through
      the npm repository, enabling JavaScript developers’ seamless
      access to the OpenVINO API.
    * FP16 inference on ARM processors now enabled for the
      Convolutional Neural Network (CNN) by default.
  - Support Change and Deprecation Notices
    * Using deprecated features and components is not advised. They
      are available to enable a smooth transition to new solutions
      and will be discontinued in the future. To keep using
      discontinued features, you will have to revert to the last
      LTS OpenVINO version supporting them.
    * For more details, refer to the OpenVINO Legacy Features
      and Components page.
    * Discontinued in 2024.0:
      + Runtime components:
        - Intel® Gaussian & Neural Accelerator (Intel® GNA).
          Consider using the Neural Processing Unit (NPU)
          for low-powered systems like Intel® Core™ Ultra or
          14th generation and beyond.
        - OpenVINO C++/C/Python 1.0 APIs (see the 2023.3 API
          transition guide for reference).
        - All ONNX Frontend legacy API (known as
          ONNX_IMPORTER_API).
        - The 'PerformanceMode.UNDEFINED' property, as part of
          the OpenVINO Python API.
      + Tools:
        - Deployment Manager. See the installation and deployment
          guides for current distribution options.
        - Accuracy Checker.
        - Post-Training Optimization Tool (POT). The Neural
          Network Compression Framework (NNCF) should be used
          instead.
        - A Git patch for NNCF integration with
          huggingface/transformers. The recommended approach
          is to use huggingface/optimum-intel for applying
          NNCF optimization on top of models from Hugging
          Face.
        - Support for the Apache MXNet, Caffe, and Kaldi model
          formats. Conversion to ONNX may be used as
          a solution.
    * Deprecated and to be removed in the future:
      + The OpenVINO™ Development Tools package (pip install
        openvino-dev) will be removed from installation options
        and distribution channels beginning with OpenVINO 2025.0.
      + Model Optimizer will be discontinued with OpenVINO 2025.0.
        Consider using the new conversion methods instead. For
        more details, see the model conversion transition guide
        and the sketch after this entry.
      + The OpenVINO property Affinity API will be discontinued
        with OpenVINO 2025.0. It will be replaced with CPU binding
        configurations (ov::hint::enable_cpu_pinning); see the
        sketch after this entry.
      + OpenVINO Model Server components:
        - “auto shape” and “auto batch size” (reshaping a model
          at runtime) will be removed in the future. OpenVINO’s
          dynamic shape models are recommended instead.
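
A hedged sketch of the replacement APIs referenced in this entry, assuming the OpenVINO Python bindings; the model path is a placeholder, with openvino.convert_model standing in for Model Optimizer and the CPU-pinning hint standing in for the Affinity API:

    import openvino as ov
    import openvino.properties.hint as hints

    # Model Optimizer replacement: convert a model in-process
    # instead of running the deprecated `mo` command-line tool.
    model = ov.convert_model("model.onnx")  # placeholder path

    # Affinity API replacement: request CPU thread pinning through
    # the ov::hint::enable_cpu_pinning property.
    core = ov.Core()
    compiled = core.compile_model(model, "CPU",
                                  {hints.enable_cpu_pinning: True})

For the POT-to-NNCF migration mentioned above, the analogous entry point is nncf.quantize (not shown here).
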
* Tue Apr 23 2024 Atri Bhattacharya <badshah400@gmail.com>
  - License update: play it safe and list all third-party
    licenses as part of the License tag.
* Tue Apr 23 2024 Atri Bhattacharya <badshah400@gmail.com>
  - Switch to _service file as tagged Source tarball does not
    include `./thirdparty` submodules.
  - Update openvino-fix-install-paths.patch to fix python module
    install path.
  - Enable python module and split it out into a python subpackage
    (for now default python3 only).
  - Explicitly build python metadata (dist-info) and install it
    (needs simple sed hackery to support "officially" unsupported
    platform ppc64le).
  - Specify ENABLE_JS=OFF to turn off JavaScript bindings, as
    building these requires downloading npm packages from the
    network.
  - Build with system pybind11.
  - Bump _constraints for updated disk space requirements.
  - Drop empty %check section, rpmlint was misleading when it
    recommended adding this.
* Fri Apr 19 2024 Atri Bhattacharya <badshah400@gmail.com>
  - Numerous specfile cleanups:
    * Drop redundant `mv` commands and use `install` where
      appropriate.
    * Build with system protobuf.
    * Fix Summary tags.
    * Trim package descriptions.
    * Drop forcing CMAKE_BUILD_TYPE=Release, let macro default
      RelWithDebInfo be used instead.
    * Correct naming of shared library packages.
    * Separate out libopenvino_c.so.* into own shared lib package.
    * Drop rpmlintrc rule used to hide shlib naming mistakes.
    * Rename Source tarball to %{name}-%{version}.EXT pattern.
    * Use ldconfig_scriptlet macro for post(un).
  - Add openvino-onnx-ml-defines.patch -- Define ONNX_ML at compile
    time when using system onnx to allow using 'onnx-ml.pb.h'
    instead of 'onnx.pb.h', the latter not being shipped with
    openSUSE's onnx-devel package (gh#onnx/onnx#3074).
  - Add openvino-fix-install-paths.patch: Change hard-coded install
    paths in upstream cmake macro to standard Linux dirs.
  - Add openvino-ComputeLibrary-include-string.patch: Include header
    for std::string.
  - Add external devel packages as Requires for openvino-devel.
  - Pass -Wl,-z,noexecstack to %build_ldflags to avoid an exec stack
    issue with intel CPU plugin.
  - Use ninja for build.
  - Adapt the _constraints file for correct disk space and memory
    requirements.
  - Add empty %check section.
* Mon Apr 15 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Initial package
  - Version 2024.0.0
  - Add openvino-rpmlintrc.

Files

/usr/lib64/OpenVINO
/usr/lib64/OpenVINO/libopenvino_intel_npu_plugin.so

