openmpi4-libs-4.1.6-6.1 RPM for x86_64

From openSUSE Tumbleweed for x86_64

Name: openmpi4-libs
Version: 4.1.6
Release: 6.1
Distribution: openSUSE Tumbleweed
Vendor: openSUSE
Build date: Thu Sep 5 08:58:41 2024
Build host: reproducible
Group: System/Libraries
Size: 11336227
Source RPM: openmpi4-4.1.6-6.1.src.rpm
Packager: https://bugs.opensuse.org
Url: https://www.open-mpi.org/
Summary: OpenMPI runtime libraries for OpenMPI version 4.1.6
OpenMPI is an implementation of the Message Passing Interface, a
standardized API typically used for parallel and/or distributed
computing. OpenMPI is the merged result of four prior implementations
that the team found to excel in one or more areas, such as latency
or throughput.
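
As a brief illustration (not part of the package description), a minimal
MPI program written against the API these libraries implement might look
as follows; it assumes the matching openmpi4 compiler wrapper (e.g. mpicc
from the corresponding devel package) is used to build it:

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI sketch: each process reports its rank. */
    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime   */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process    */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks   */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut the runtime down   */
        return 0;
    }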

OpenMPI also includes an implementation of the OpenSHMEM parallel
programming API, which is a Partitioned Global Address Space (PGAS)
abstraction layer providing inter-process communication using
one-sided communication techniques.
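
For illustration only (a hedged sketch, not taken from the package text),
a one-sided OpenSHMEM put between two processing elements could look like
this:

    #include <shmem.h>
    #include <stdio.h>

    /* Sketch of one-sided communication: PE 0 writes directly into
     * PE 1's copy of a symmetric variable, without PE 1 posting a
     * matching receive. */
    int main(void)
    {
        static long dest = 0;        /* symmetric (remotely accessible) */
        shmem_init();
        int me = shmem_my_pe();
        long value = 42;             /* payload to write remotely       */
        if (me == 0 && shmem_n_pes() > 1)
            shmem_long_put(&dest, &value, 1, 1);  /* write to PE 1      */
        shmem_barrier_all();         /* make the put globally visible   */
        printf("PE %d sees dest = %ld\n", me, dest);
        shmem_finalize();
        return 0;
    }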

This package provides the Open MPI/OpenSHMEM version 4
shared libraries.

Provides

Requires

License

BSD-3-Clause

Changelog

* Thu Sep 05 2024 Nicolas Morey <nicolas.morey@suse.com>
  - Add test-datatype-partial.c-fix-compiler-warnings.patch to fix
    testsuite compilation with GCC >= 14
* Mon Jul 29 2024 Martin Jambor <mjambor@suse.com>
  - Add openmpi4-C99.diff to fix the most egregious type violations,
    which not only prevent building the standard flavor with GCC 14 on
    i586 but are also simply bugs.
* Tue Jun 25 2024 Nicolas Morey <nicolas.morey@suse.com>
  - Disable 32b builds of hpc flavours
* Mon Feb 26 2024 Dominique Leuenberger <dimstar@opensuse.org>
  - Use the %autosetup macro, which eliminates the use of the
    deprecated PatchN syntax.
* Tue Oct 10 2023 Nicolas Morey <nicolas.morey@suse.com>
  - Drop %vers macro so that the Version tag can be parsed more easily
* Mon Oct 02 2023 Nicolas Morey <nicolas.morey@suse.com>
  - Update to 4.1.6:
    - Update embedded PMIx to 3.2.5.
    - Fix issue with buffered sends and MTL-based interfaces (Libfabric,
      PSM, Portals).
    - Add missing MPI_F_STATUS_SIZE to mpi.h.
    - Update Fortran mpi module configure check to be more correct.
    - Update to properly handle PMIx v>=4.2.3.
    - Fix minor issues and add some minor performance optimizations with
      OFI support.
    - Support the "striping_factor" and "striping_unit" MPI_Info names
      recomended by the MPI standard for parallel IO.
    - Fixed some minor issues with UCX support.
    - Minor optimization for 0-byte MPI_Alltoallw (i.e., make it a no-op).
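
  As a hedged illustration of how such MPI_Info hints are typically
  supplied (the file name and hint values below are placeholders, not
  taken from this changelog):

    #include <mpi.h>

    /* Sketch: passing the "striping_factor"/"striping_unit" hints
     * mentioned above when opening a file with MPI-IO. */
    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;
        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "4");     /* number of stripes    */
        MPI_Info_set(info, "striping_unit", "1048576"); /* stripe size in bytes */
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }
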
* Mon Aug 07 2023 Nicolas Morey <nicolas.morey@suse.com>
  - Drop support for TrueScale (bsc#1212146)
* Tue Jul 25 2023 Nicolas Morey <nicolas.morey@suse.com>
  - Update to 4.1.5:
    - Fix crash in one-sided applications for certain process layouts.
    - Update embedded OpenPMIx to version 3.2.4
    - Backport patches to Libevent for CVE-2016-10195, CVE-2016-10196, and
      CVE-2016-10197.  Note that Open MPI's internal libevent does not
      use the impacted portions of the Libevent code base.
    - SHMEM improvements:
      - Fix initializer bugs in SHMEM interface.
      - Fix unsigned type comparisons generating warnings.
      - Fix use after clear issue in shmem_ds_reset.
    - UCX improvements:
      - Fix memory registration bug that could occur when UCX was built
        but not selected.
      - Reduce overhead of add_procs with intercommunicators.
      - Enable multi_send_nb by default.
      - Call opal_progress while waiting for a UCX fence to complete.
    - Fix data corruption bug in osc/rdma component.
    - Fix overflow bug in alltoall collective
    - Fix crash when displaying topology.
    - Add some MPI_F_XXX constants that were missing from mpi.h.
    - coll/ucc bug fixes.
* Fri Sep 23 2022 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Replace btl-openib-Add-VF-support-for-ConnectX-5-and-6.patch
    by btl-openib-Add-VF-support-for-ConnectX-4-5-and-6.patch to add ConnectX4 VF support
* Thu Sep 08 2022 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Enable libfabric on all arch
  - Switch to external libevent for all flavors
  - Switch to external hwloc and PMIx for HPC builds
  - Update rpmlintrc file to ignore missing libname suffix in libopenmpi packages
  - Add patch btl-openib-Add-VF-support-for-ConnectX-5-and-6.patch to support
    ConnectX 5 and 6 VF
* Wed Aug 03 2022 Dirk Müller <dmueller@suse.com>
  - update to 4.1.4:
    * Fix possible length integer overflow in numerous non-blocking collective
    operations.
    * Fix segmentation fault in UCX if MPI Tool interface is finalized before
    MPI_Init is called.
    * Remove /usr/bin/python dependency in configure.
    * Fix OMPIO issue with long double etypes.
    * Update treematch topology component to fix numerous correctness issues.
    * Fix memory leak in UCX MCA parameter registration.
    * Fix long operation closing file descriptors on non-Linux systems that
    can appear as a hang to users.
    * Fix for attribute handling on GCC 11 due to pointer aliasing.
    * Fix multithreaded race in UCX PML's datatype handling.
    * Fix a correctness issue in CUDA Reduce algorithm.
    * Fix compilation issue with CUDA GPUDirect RDMA support.
    * Fix to make shmem_calloc(..., 0) conform to the OpenSHMEM specification.
    * Add UCC collectives component.
    * Fix divide by zero issue in OMPI IO component.
    * Fix compile issue with libnl when not in standard search locations.
    * Fixed a seg fault in the smcuda BTL.  Thanks to Moritz Kreutzer and
    @Stadik for reporting the issue.
    * Added support for ELEMENTAL to the MPI handle comparison functions
    in the mpi_f08 module.  Thanks to Salvatore Filippone for raising
    the issue.
    * Minor datatype performance improvements in the CUDA-based code paths.
    * Fix MPI_ALLTOALLV when used with MPI_IN_PLACE.
    * Fix MPI_BOTTOM handling for non-blocking collectives.  Thanks to
    Lisandro Dalcin for reporting the problem.
    * Enable OPAL memory hooks by default for UCX.
    * Many compiler warnings fixes, particularly for newer versions of
    GCC.
    * Fix intercommunicator overflow with large payload collectives.  Also
    fixed MPI_REDUCE_SCATTER_BLOCK for similar issues with large payload
    collectives.
    * Back-port ROMIO 3.3 fix to use stat64() instead of stat() on GPFS.
    * Fixed several non-blocking MPI collectives to not round fractions
    based on float precision.
    * Fix compile failure for --enable-heterogeneous.  Also updated the
    README to clarify that --enable-heterogeneous is functional, but
    still not recommended for most environments.
    * Minor fixes to OMPIO, including:
    - Fixing the open behavior of shared memory shared file pointers.
      Thanks to Axel Huebl for reporting the issue
    - Fixes to clean up lockfiles when closing files.  Thanks to Eric
      Chamberland for reporting the issue.
    * Update LSF configure failure output to be more clear (e.g., on RHEL
    platforms).
    * Update if_[in|ex]clude behavior in btl_tcp and oob_tcp to select
    *all* interfaces that fall within the specified subnet range.
    * ROMIO portability fix for OpenBSD
    * Fix handling of MPI_IN_PLACE with MPI_ALLTOALLW and improve performance
    of MPI_ALLTOALL and MPI_ALLTOALLV for MPI_IN_PLACE.
    * Fix one-sided issue with empty groups in Post-Start-Wait-Complete
    synchronization mode.
    * Fix Fortran status returns in certain use cases involving
    Generalized Requests
    * Romio datatype bug fixes.
    * Fix oshmem_shmem_finalize() when main() returns non-zero value.
    * Fix wrong affinity under LSF with the membind option.
    * Fix count==0 cases in MPI_REDUCE and MPI_IREDUCE.
    * Fix ssh launching on Bourne-flavored shells when the user has
    "set -u" set in their shell startup files.
    * Correctly process 0 slots with the mpirun --host option.
    * Ensure to unlink and rebind socket when the Open MPI session
    directory already exists.
    * Fix a segv in mpirun --disable-dissable-map.
    * Fix a potential hang in the memory hook handling.
    * Slight performance improvement in MPI_WAITALL when running in
    MPI_THREAD_MULTIPLE.
    * Fix hcoll datatype mapping and rooted operation behavior.
    * Correct some operations modifying MPI_Status.MPI_ERROR when it is
    disallowed by the MPI standard.
    * UCX updates:
    - Fix datatype reference count issues.
    - Detach dynamic window memory when freeing a window.
    - Fix memory leak in datatype handling.
    * Fix various atomic operations issues.
    * mpirun: try to set the curses winsize to the pty of the spawned
    task.  Thanks to Stack Overflow user @Seriously for reporting the
    issue.
    * PMIx updates:
    - Fix compatibility with external PMIx v4.x installations.
    - Fix handling of PMIx v3.x compiler/linker flags.  Thanks to Erik
      Schnetter for reporting the issue.
    - Skip SLURM-provided PMIx detection when appropriate.  Thanks to
      Alexander Grund for reporting the issue.
    * Fix handling by C++ compilers when they #include the STL "<version>"
    header file, which ends up including Open MPI's text VERSION file
    (which is not C code).  Thanks to @srpgilles for reporting the
    issue.
    * Fix MPI_Op support for MPI_LONG.
    * Make the MPI C++ bindings library (libmpi_cxx) explicitly depend on
    the OPAL internal library (libopen-pal).  Thanks to Ye Luo for
    reporting the issue.
    * Fix configure handling of "--with-libevent=/usr".
    * Fix memory leak when opening Lustre files.  Thanks to Bert Wesarg
    for submitting the fix.
    * Fix MPI_SENDRECV_REPLACE to correctly process datatype errors.
    Thanks to Lisandro Dalcin for reporting the issue.
    * Fix MPI_SENDRECV_REPLACE to correctly handle large data.  Thanks
    Jakub Benda for reporting this issue and suggesting a fix.
    * Add workaround for TCP "dropped connection" errors to drastically
    reduce the possibility of this happening.
    * OMPIO updates:
    - Fix handling when AMODE is not set.  Thanks to Rainer Keller for
      reporting the issue and supplying the fix.
    - Fix FBTL "posix" component linking issue.  Thanks for Honggang Li
      for reporting the issue.
    - Fixed segv with MPI_FILE_GET_BYTE_OFFSET on 0-sized file view.
      Thanks to GitHub user @shanedsnyder for submitting the issue.
    * OFI updates:
    - Multi-plane / Multi-Nic nic selection cleanups
    - Add support for exporting Open MPI memory monitors into
      Libfabric.
    - Ensure that Cisco usNIC devices are never selected by the OFI
      MTL.
    - Fix buffer overflow in OFI networking setup.  Thanks to Alexander
      Grund for reporting the issue and supplying the fix.
    * Fix SSEND on tag matching networks.
    * Fix error handling in several MPI collectives.
    * Fix the ordering of MPI_COMM_SPLIT_TYPE.  Thanks to Wolfgang
    Bangerth for raising the issue.
    * No longer install the orted-mpir library (it's an internal / Libtool
    convenience library).  Thanks to Andrew Hesford for the fix.
    * PSM2 updates:
    - Allow advanced users to disable PSM2 version checking.
    - Fix to allow non-default installation locations of psm2.h.
* Wed Apr 28 2021 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - openmpi4 is now the default openmpi for releases > 15.3
  - Add orted-mpir-add-version-to-shared-library.patch to fix unversioned library
  - Change RPM macros install path to %{_rpmmacrodir}
* Wed Apr 28 2021 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Update to version 4.1.1
    - Fix a number of datatype issues, including an issue with
      improper handling of partial datatypes that could lead to
      an unexpected application failure.
    - Change UCX PML to not warn about MPI_Request leaks during
      MPI_FINALIZE by default.  The old behavior can be restored with
      the mca_pml_ucx_request_leak_check MCA parameter.
    - Reverted temporary solution that worked around launch issues in
      SLURM v20.11.{0,1,2}. SchedMD encourages users to avoid these
      versions and to upgrade to v20.11.3 or newer.
    - Updated PMIx to v3.2.2.
    - Disabled gcc built-in atomics by default on aarch64 platforms.
    - Disabled UCX PML when UCX v1.8.0 is detected. UCX version 1.8.0 has a bug that
      may cause data corruption when its TCP transport is used in conjunction with
      the shared memory transport. UCX versions prior to v1.8.0 are not affected by
      this issue. Thanks to @ksiazekm for reporting the issue.
    - Fixed detection of available UCX transports/devices to better inform PML
      prioritization.
    - Fixed SLURM support to mark ORTE daemons as non-MPI tasks.
    - Improved AVX detection to more accurately detect supported
      platforms.  Also improved the generated AVX code, and switched to
      using word-based MCA params for the op/avx component (vs. numeric
      big flags).
    - Improved OFI compatibility support and fixed memory leaks in error
      handling paths.
    - Improved HAN collectives with support for Barrier and Scatter. Thanks
      to @EmmanuelBRELLE for these changes and the relevant bug fixes.
    - Fixed MPI debugger support (i.e., the MPIR_Breakpoint() symbol).
      Thanks to @louisespellacy-arm for reporting the issue.
    - Fixed ORTE bug that prevented debuggers from reading MPIR_Proctable.
    - Removed PML uniformity check from the UCX PML to address performance
      regression.
    - Fixed MPI_Init_thread(3) statement about C++ binding and update
      references about MPI_THREAD_MULTIPLE.  Thanks to Andreas Lösel for
      bringing the outdated docs to our attention.
    - Added fence_nb to Flux PMIx support to address segmentation faults.
    - Ensured progress of AIO requests in the POSIX FBTL component to
      prevent exceeding maximum number of pending requests on MacOS.
    - Used OPAL's multi-thread support in the orted to leverage atomic
      operations for object refcounting.
    - Fixed segv when launching with static TCP ports.
    - Fixed --debug-daemons mpirun CLI option.
    - Fixed bug where mpirun did not honor --host in a managed job
      allocation.
    - Made a managed allocation filter a hostfile/hostlist.
    - Fixed bug to mark a generalized request as pending once initiated.
    - Fixed external PMIx v4.x check.
    - Fixed OSHMEM build with `--enable-mem-debug`.
    - Fixed a performance regression observed with older versions of GCC when
      __ATOMIC_SEQ_CST is used. Thanks to @BiplabRaut for reporting the issue.
    - Fixed buffer allocation bug in the binomial tree scatter algorithm when
      non-contiguous datatypes are used. Thanks to @sadcat11 for reporting the issue.
    - Fixed bugs related to the accumulate and atomics functionality in the
      osc/rdma component.
    - Fixed race condition in MPI group operations observed with
      MPI_THREAD_MULTIPLE threading level.
    - Fixed a deadlock in the TCP BTL's connection matching logic.
    - Fixed pml/ob1 compilation error when CUDA support is enabled.
    - Fixed a build issue with Lustre caused by unnecessary header includes.
    - Fixed a build issue with IMB LSF workload manager.
    - Fixed linker error with UCX SPML.
* Wed Mar 24 2021 Egbert Eich <eich@suse.com>
  - Update to version 4.1.0
    * collectives: Add HAN and ADAPT adaptive collectives components.
      Both components are off by default and can be enabled by specifying
      "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...".
      We intend to enable both by default in Open MPI 5.0.
    * OMPIO is now the default for MPI-IO on all filesystems, including
      Lustre (prior to this, ROMIO was the default for Lustre).  Many
      thanks to Mark Dixon for identifying MPI I/O issues and providing
      access to Lustre systems for testing.
    * Minor MPI one-sided RDMA performance improvements.
    * Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
    * Add AVX support for MPI collectives.
    * Updates to mpirun(1) about "slots" and PE=x values.
    * Fix buffer allocation for large environment variables.  Thanks to
      @zrss for reporting the issue.
    * Upgrade the embedded OpenPMIx to v3.2.2.
    * Fix issue with extra-long values in MCA files.  Thanks to GitHub
      user @zrss for bringing the issue to our attention.
    * UCX: Fix zero-sized datatype transfers.
    * Fix --cpu-list for non-uniform modes.
    * Fix issue in PMIx callback caused by missing memory barrier on Arm platforms.
    * OFI MTL: Various bug fixes.
    * Fixed issue where MPI_TYPE_CREATE_RESIZED would create a datatype
      with unexpected extent on oddly-aligned datatypes.
    * collectives: Adjust default tuning thresholds for many collective
      algorithms
    * runtime: fix situation where rank-by argument does not work
    * Portals4: Clean up error handling corner cases
    * runtime: Remove --enable-install-libpmix option, which has not
      worked since it was added
    * UCX: Allow UCX 1.8 to be used with the btl uct
    * UCX: Replace usage of the deprecated NB API of UCX with NBX
    * OMPIO: Add support for the IME file system
    * OFI/libfabric: Added support for multiple NICs
    * OFI/libfabric: Added support for Scalable Endpoints
    * OFI/libfabric: Added btl for one-sided support
    * OFI/libfabric: Multiple small bugfixes
    * libnbc: Adding numerous performance-improving algorithms
  - Removed: reproducible.patch - replaced by spec file settings.

Files

/usr/lib64/mpi/gcc/openmpi4
/usr/lib64/mpi/gcc/openmpi4/lib64
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_dstore.so.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_dstore.so.1.0.2
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_monitoring.so.50
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_monitoring.so.50.20.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ofi.so.10
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ofi.so.10.0.2
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ompio.so.41
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ompio.so.41.29.4
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_sm.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_sm.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ucx.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ucx.so.40.30.2
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_verbs.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_verbs.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi.so.40.30.6
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_mpifh.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_mpifh.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempi_ignore_tkr.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempi_ignore_tkr.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempif08.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempif08.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libompitrace.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libompitrace.so.40.30.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-pal.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-pal.so.40.30.3
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-rte.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-rte.so.40.30.3
/usr/lib64/mpi/gcc/openmpi4/lib64/liboshmem.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/liboshmem.so.40.30.3
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/libompi_dbg_msgq.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_allocator_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_allocator_bucket.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_atomic_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_atomic_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_bml_r2.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_ofi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_openib.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_self.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_tcp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_usnic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_vader.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_adapt.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_han.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_inter.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_libnbc.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_monitoring.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_self.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_sync.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_tuned.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_compress_bzip.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_compress_gzip.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_crs_none.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_app.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_orted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_env.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_pmi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_singleton.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fbtl_posix.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_dynamic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_dynamic_gen2.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_individual.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_two_phase.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_vulcan.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_filem_raw.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fs_ufs.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_grpcomm_direct.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_io_ompio.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_io_romio321.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_iof_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_iof_orted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_iof_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_memheap_buddy.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_memheap_ptmalloc.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_mpool_hugepage.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_mtl_ofi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_mtl_psm2.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_odls_default.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_odls_pspawn.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_oob_tcp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_op_avx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_monitoring.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_pt2pt.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_rdma.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_patcher_overwrite.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_plm_isolated.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_plm_rsh.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_plm_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pmix_flux.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pmix_isolated.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pmix_pmix3x.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_cm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_monitoring.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_ob1.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pstat_linux.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ras_simulator.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ras_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rcache_grdma.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_reachable_weighted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_regx_fwd.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_regx_naive.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_regx_reverse.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_mindist.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_ppr.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_rank_file.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_resilient.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_round_robin.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_seq.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rml_oob.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_routed_binomial.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_routed_direct.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_routed_radix.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rtc_hwloc.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_flux.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_jsm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_ompi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_orte.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_scoll_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_scoll_mpi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sharedfp_individual.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sharedfp_lockedfile.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sharedfp_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_shmem_mmap.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_shmem_posix.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_shmem_sysv.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_spml_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sshmem_mmap.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sshmem_sysv.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sshmem_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_app.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_novm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_orted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_topo_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_topo_treematch.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_vprotocol_pessimist.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v12.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v20.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v21.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v3.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_gds_ds12.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_gds_ds21.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_gds_hash.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_plog_default.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_plog_stdfd.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_plog_syslog.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_preg_compress.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_preg_native.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psec_native.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psec_none.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psensor_file.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psensor_heartbeat.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_pshmem_mmap.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psquash_flex128.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psquash_native.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_ptl_tcp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_ptl_usock.so

