
libtorch-openmpi4-2.7.1-1.1 RPM for x86_64

From openSUSE Tumbleweed for x86_64

Name: libtorch-openmpi4 Distribution: openSUSE Tumbleweed
Version: 2.7.1 Vendor: openSUSE
Release: 1.1 Build date: Fri Jul 4 14:27:50 2025
Group: Development/Libraries/Python Build host: reproducible
Size: 181196592 Source RPM: python-torch-openmpi4-2.7.1-1.1.src.rpm
Packager: https://bugs.opensuse.org
Url: https://pytorch.org
Summary: Library which is used by python-torch-openmpi4
Library which is used by python-torch-openmpi4

Provides

Requires

License

Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND MIT AND Zlib AND BSL-1.0

Changelog

* Fri Jul 04 2025 Christian Goll <cgoll@suse.com>
  - Updated to 2.7.1 with the following fixes:
    * Fix assertion error due to inductor permuting inputs to flex attention (#151959)
    * Fix performance regression on nanogpt speedrun (#152641)
    * Fix PyTorch wheel size increase due to the addition of 128 bit vectorization (#148320) (#152396)
    * Fix fmsub function definition (#152075)
    * Fix Floating point exception in torch.mkldnn_max_pool2d (#151848)
    * Fix ONNX decomposition does not preserve custom CompositeImplicitAutograd ops (#151826)
    * Fix error with dynamic linking of libgomp library (#150084)
    * Fix segfault in profiler with Python 3.13 (#153848)
  - Changes from 2.7.0:
    * torch.onnx.dynamo_export now uses the ExportedProgram logic path (#137296)
      Users using the torch.onnx.dynamo_export API may see some ExportOptions
      become unsupported due to an internal switch to use torch.onnx.export(...,
      dynamo=True): diagnostic_options, fake_context and onnx_registry are
      removed/ignored by ExportOptions. Only dynamic_shapes is retained.
    * Finish deprecation of LRScheduler.print_lr() along with the verbose kwarg
      to the LRScheduler constructor. (#147301)
    * libtorch_python.so symbols are now invisible by default on all platforms (#142214)
    * Please use torch.export.export instead of capture_pre_autograd_graph to
      export the model for pytorch 2 export quantization (#139505)
    * New interface for
      torch.fx.passes.graph_transform_observer.GraphTransformObserver to enable
      Node Level provenance tracking (#144277)
    * torch.ao.quantization.pt2e.graph_utils.get_control_flow_submodules is no
      longer public (#141612)
    * torch.onnx.dynamo_export is deprecated (#146425, #146639, #146923)
    * XNNPACKQuantizer is deprecated in PyTorch and moved to ExecuTorch, please
      use it from executorch.backends.xnnpack.quantizer.xnnpack_quantizer instead
      of torch.ao.quantization.quantizer.xnnpack_quantizer. (#144940)
  - Changes from 2.6.0
    * [Beta] torch.compiler.set_stance
    * [Beta] torch.library.triton_op
    * [Beta] torch.compile support for Python 3.13
    * [Beta] New packaging APIs for AOTInductor
    * [Beta] AOTInductor: minifier
    * [Beta] AOTInductor: ABI-compatible mode code generation
    * [Beta] FP16 support for X86 CPUs (both eager and Inductor modes)
    * FlexAttention support on X86 CPU for LLMs
  - Added patches for gcc15 compatibility:
    * add-cstdint.patch
    * gloo-gcc15-fix.patch
  - Updated vendored sources:
    * kineto-d975313.tar.gz -> kineto-a054a4b.tar.gz
    * onnx-3bf92c0.tar.gz -> onnx-b8baa84.tar.gz
    * pybind11-7c33cdc.tar.gz -> pybind11-a2e59f0.tar.gz
    * sleef-60e76d2.tar.gz -> sleef-56e1f79.tar.gz
    * XNNPACK-fcbf55a.tar.gz -> XNNPACK-51a0103.tar.gz
    * cpuinfo-fa1c679.tar.gz -> cpuinfo-1e83a2f.tar.gz
    * fmt-e69e5f9.tar.gz -> fmt-1239137.tar.gz
* Tue Dec 17 2024 Andreas Schwab <schwab@suse.de>
  - Use oneDNN only on x86_64, aarch64 and ppc64le
* Fri Oct 18 2024 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 2.5.0:
    * https://github.com/pytorch/pytorch/releases/tag/v2.5.0
* Fri Oct 04 2024 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Add patch to fix build with oneDNN:
    * pytorch-patch-onednn.patch
* Tue Oct 01 2024 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 2.4.1:
    * https://github.com/pytorch/pytorch/releases/tag/v2.4.1
  - Skip update to 2.4.0:
    * https://github.com/pytorch/pytorch/releases/tag/v2.4.0
  - Remove _service since 'osc mr download_files' is easier to use
    and maintain
  - Drop config vars not used anymore: BUILD_CAFFE2, USE_LEVELDB, USE_LMDB,
    USE_OPENCV, USE_TBB
  - Remove examples package since code has been removed upstream
  - Refresh patch:
    * skip-third-party-check.patch
* Thu Aug 29 2024 Guang Yee <gyee@suse.com>
  - Enable sle15_python_module_pythons.
  - GCC 9.3 or newer is required, regardless of whether CUDA is enabled.
    See https://github.com/pytorch/pytorch/blob/v2.3.1/CMakeLists.txt#L48
    Therefore, for SLE15 we went with GCC 11 as it seems to be the most
    common one.
  - Use %gcc_version macro for Tumbleweed.
* Thu Jul 11 2024 Christian Goll <cgoll@suse.com>
  - Updated to 2.3.1 with the following summarized highlights:
    * from 2.0.x:
    - torch.compile is the main API for PyTorch 2.0, which wraps your model and
      returns a compiled model. It is a fully additive (and optional) feature
      and hence 2.0 is 100% backward compatible by definition
    - Accelerated Transformers introduce high-performance support for training
      and inference using a custom kernel architecture for scaled dot product
      attention (SDPA). The API is integrated with torch.compile() and model
      developers may also use the scaled dot product attention kernels directly
      by calling the new scaled_dot_product_attention() operator.
    * from 2.1.x:
    - automatic dynamic shape support in torch.compile,
      torch.distributed.checkpoint for saving/loading distributed training jobs
      on multiple ranks in parallel, and torch.compile support for the NumPy
      API.
    - In addition, this release offers numerous performance improvements (e.g.
      CPU inductor improvements, AVX512 support, scaled-dot-product-attention
      support) as well as a prototype release of torch.export, a sound
      full-graph capture mechanism, and torch.export-based quantization.
    * from 2.2.x:
    - 2x performance improvements to scaled_dot_product_attention via
      FlashAttention-v2 integration, as well as AOTInductor, a new
      ahead-of-time compilation and deployment tool built for non-python
      server-side deployments.
    * from 2.3.x:
    - support for user-defined Triton kernels in torch.compile, allowing for
      users to migrate their own Triton kernels from eager without
      experiencing performance complications or graph breaks. As well, Tensor
      Parallelism improves the experience for training Large Language Models
      using native PyTorch functions, which has been validated on training
      runs for 100B parameter models.
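    The two APIs from the 2.0 highlights above can be sketched together
    (a minimal example; the mlp() function is illustrative, and the
    "eager" backend is chosen so no C compiler is needed):

```python
# Minimal sketch of the torch.compile wrapper and a direct call to the
# scaled_dot_product_attention() operator mentioned in the highlights.
import torch
import torch.nn.functional as F

# torch.compile wraps a model or function and returns a compiled
# version; existing code keeps working unchanged (fully additive).
@torch.compile(backend="eager")
def mlp(x):
    return torch.relu(x @ x.T)

y = mlp(torch.randn(4, 4))

# Calling the SDPA kernel directly; tensor layout is
# (batch, heads, sequence, head_dim). Dispatch picks a fused kernel
# (FlashAttention, memory-efficient, or the math fallback) based on
# hardware and dtypes.
q = torch.randn(1, 4, 16, 8)
k = torch.randn(1, 4, 16, 8)
v = torch.randn(1, 4, 16, 8)
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 4, 16, 8])
```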
  - added separate openmpi4 build
  - added separate Vulkan build, although this functionality isn't exposed to
    the Python ABI
  - For the OBS build, all vendored sources follow the pattern
    NAME-7digitcommit.tar.gz rather than NAME-COMMIT.tar.gz
  - added following patches:
    * skip-third-party-check.patch
    * fix-setup.patch
  - removed patches:
    * pytorch-rm-some-gitmodules.patch
    * fix-call-of-onnxInitGraph.patch
* Thu Jul 22 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Fix build on x86_64 by using GCC10 instead of GCC11
    https://github.com/google/XNNPACK/issues/1550
* Thu Jul 22 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 1.9.0
  - Release notes: https://github.com/pytorch/pytorch/releases/tag/v1.9.0
  - Drop upstreamed patch:
    * fix-mov-operand-for-gcc.patch
  - Drop unneeded patches:
    * removed-peachpy-depedency.patch
  - Refresh patches:
    * skip-third-party-check.patch
    * fix-call-of-onnxInitGraph.patch
  - Add new patch:
    * pytorch-rm-some-gitmodules.patch
* Thu Jul 22 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Add _service file to ease future update of deps

Files

/usr/lib64/libc10.so
/usr/lib64/libshm.so
/usr/lib64/libtorch.so
/usr/lib64/libtorch_cpu.so
/usr/lib64/libtorch_global_deps.so
/usr/lib64/libtorch_python.so


Generated by rpm2html 1.8.1

Fabrice Bellet, Thu Jul 10 23:39:45 2025