
python311-beautifulsoup4-4.13.4-1.1 RPM for noarch

From OpenSuSE Ports Tumbleweed for noarch

Name: python311-beautifulsoup4
Version: 4.13.4
Release: 1.1
Group: Unspecified
Size: 2057014
Distribution: openSUSE:Factory:zSystems
Vendor: openSUSE
Build date: Sun Jul 13 16:04:39 2025
Build host: reproducible
Source RPM: python-beautifulsoup4-4.13.4-1.1.src.rpm
Packager: https://bugs.opensuse.org
Url: https://www.crummy.com/software/BeautifulSoup/
Summary: HTML/XML Parser for Quick-Turnaround Applications Like Screen-Scraping
Beautiful Soup is a Python HTML/XML parser designed for quick turnaround
projects like screen-scraping. Three features make it powerful:

* Beautiful Soup won't choke if you give it bad markup. It yields a parse tree
  that makes approximately as much sense as your original document. This is
  usually good enough to collect the data you need and run away.

* Beautiful Soup provides a few simple methods and Pythonic idioms for
  navigating, searching, and modifying a parse tree: a toolkit for dissecting a
  document and extracting what you need. You don't have to create a custom
  parser for each application.

* Beautiful Soup automatically converts incoming documents to Unicode and
  outgoing documents to UTF-8. You don't have to think about encodings, unless
  the document doesn't specify an encoding and Beautiful Soup can't autodetect
  one. Then you just have to specify the original encoding.

Beautiful Soup parses anything you give it and does the tree traversal stuff
for you. You can tell it "Find all the links", or "Find all the links of class
externalLink", or "Find all the links whose URLs match 'foo.com'", or "Find the
table heading that's got bold text, then give me that text."

Valuable data that was once locked up in poorly-designed websites is now within
your reach. Projects that would have taken hours take only minutes with
Beautiful Soup.
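
For illustration, a minimal usage sketch along those lines (the markup, class
name, and URL below are only examples; the built-in html.parser backend is
assumed):

  from bs4 import BeautifulSoup

  html = '<a class="externalLink" href="http://foo.com/x">x</a><a href="/local">y</a>'
  soup = BeautifulSoup(html, "html.parser")

  soup.find_all("a")                                        # all the links
  soup.find_all("a", class_="externalLink")                 # links of class externalLink
  soup.find_all("a", href=lambda h: h and "foo.com" in h)   # links whose URLs match "foo.com"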

Provides

Requires

License

MIT

Changelog

* Sun Jul 13 2025 Ben Greiner <code@bnavigator.de>
  - Update to 4.13.4
    * If you pass a function as the first argument to a find* method,
      the function will only ever be called once per tag, with the
      Tag object as the argument. Starting in 4.13.0, there were
      cases where the function would be called with a Tag object and
      then called again with the name of the tag. [bug=2106435]
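      For example, a minimal sketch of that call pattern (the predicate
      below is hypothetical):
        from bs4 import BeautifulSoup
        soup = BeautifulSoup("<p id='a'>x</p><p>y</p>", "html.parser")
        def has_id(tag):        # called once per Tag object, never with a bare tag name
            return tag.has_attr("id")
        soup.find_all(has_id)   # [<p id="a">x</p>]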
    * Added a passthrough implementation for
      NavigableString.__getitem__ which gives a more helpful
      exception if the user tries to treat it as a Tag and access its
      HTML attributes.
    * Fixed a bug that caused an exception when unpickling the result
      of parsing certain invalid markup with lxml as the tree
      builder. [bug=2103126]
    * Converted the AUTHORS file to UTF-8 for PEP8 compliance.
      [bug=2107405]
  - Release 4.13.3 (20250204)
    * Modified the 4.13.2 change slightly to restore backwards
      compatibility. Specifically, calling a find_* method with no
      arguments should return the first Tag out of the iterator, not
      the first PageElement. [bug=2097333]
  - Release 4.13.2 (20250204)
    * Gave ElementFilter the ability to explicitly say that it
      excludes every item in the parse tree. This is used internally
      in situations where the provided filters are logically
      inconsistent or match a value against the null set.
      Without this, it's not always possible to distinguish between a
      SoupStrainer that excludes everything and one that excludes
      nothing.
      This fixes a bug where calls to find_* methods with no
      arguments returned None, instead of the first item out of the
      iterator. [bug=2097333]
      Things added to the API to support this:
    - The ElementFilter.includes_everything property
    - The MatchRule.exclude_everything member
    - The _known_rules argument to ElementFilter.match. This is an
      optional argument used internally to indicate that an
      optimization is safe.
  - Release 4.13.1 (20250203)
    * Updated pyproject.toml to require Python 3.7 or above.
      [bug=2097263]
    * Pinned the typing-extensions dependency to a minimum version of
      4.0.0. [bug=2097262]
    * Restored the English documentation to the source distribution.
      [bug=2097237]
    * Fixed a regression where HTMLFormatter and XMLFormatter were
      not propagating the indent parameter to the superconstructor.
      [bug=2097272]
  - Release 4.13.0 (20250202)
    * This release introduces Python type hints to all public classes
      and methods in Beautiful Soup. The addition of these type hints
      exposed a large number of very small inconsistencies in the
      code, which I've fixed, but the result is a larger-than-usual
      number of deprecations and changes that may break backwards
      compatibility.
      Chris Papademetrious deserves a special thanks for his work on
      this release through its long beta process.
    [#]# Deprecation notices
    * These things now give DeprecationWarnings when you try to use
      them, and are scheduled to be removed in Beautiful Soup 4.15.0.
    * Every deprecated method, attribute and class from the 3.0 and
      2.0 major versions of Beautiful Soup. These have been
      deprecated for a very long time, but they didn't issue
      DeprecationWarning when you tried to use them. Now they do, and
      they're all going away soon.
      This mainly refers to methods and attributes with camelCase
      names, for example: renderContents, replaceWith,
      replaceWithChildren, findAll, findAllNext, findAllPrevious,
      findNext, findNextSibling, findNextSiblings, findParent,
      findParents, findPrevious, findPreviousSibling,
      findPreviousSiblings, getText, nextSibling, previousSibling,
      isSelfClosing, fetchNextSiblings, fetchPreviousSiblings,
      fetchPrevious, fetchPreviousSiblings, fetchParents, findChild,
      findChildren, childGenerator, nextGenerator,
      nextSiblingGenerator, previousGenerator,
      previousSiblingGenerator, recursiveChildGenerator, and
      parentGenerator.
      This also includes the BeautifulStoneSoup class.
    * The SAXTreeBuilder class, which was never officially supported
      or tested.
    * The private class method BeautifulSoup._decode_markup(), which
      has not been used inside Beautiful Soup for many years.
    * The first argument to BeautifulSoup.decode has been changed
      from pretty_print:bool to indent_level:int, to match the
      signature of Tag.decode. Using a bool will still work but will
      give you a DeprecationWarning.
    * SoupStrainer.text and SoupStrainer.string are both deprecated,
      since a single item can't capture all the possibilities of a
      SoupStrainer designed to match strings.
    * SoupStrainer.search_tag(). It was never a documented method,
      but if you use it, you should start using
      SoupStrainer.allow_tag_creation() instead.
    * The soup:BeautifulSoup argument to the TreeBuilderForHtml5lib
      constructor is now required, not optional. It's unclear why it
      was optional in the first place, so if you discover you need
      this, contact me for possible un-deprecation.
    [#]# Compatibility notices
    * This version drops support for Python 3.6. The minimum
      supported major Python version for Beautiful Soup is now Python
      3.7.
    * Deprecation warnings have been added for all deprecated methods
      and attributes (see above). Going forward, deprecated names
      will be removed two feature releases or one major release after
      the deprecation warning is added.
    * The storage for a tag's attribute values now modifies incoming
      values to be consistent with the HTML or XML spec. This means
      that if you set an attribute value to a number, it will be
      converted to a string immediately, rather than being converted
      when you output the document. [bug=2065525]
      More importantly for backwards compatibility, setting an HTML
      attribute value to True will set the attribute's value to the
      appropriate string per the HTML spec. Setting an attribute
      value to False or None will remove the attribute value from the
      tag altogether, rather than (effectively, as before) setting
      the value to the string "False" or the string "None".
      This means that some programs that modify documents will
      generate different output than they would in earlier versions
      of Beautiful Soup, but the new documents are more likely to
      represent the intent behind the modifications.
      To give a specific example, if you have code that looks
      something like this:
    checkbox1['checked'] = True
    checkbox2['checked'] = False
      Then a document that used to look like this (with most browsers
      treating both boxes as checked):
    <input type="checkbox" checked="True"/> <input type="checkbox"
      checked="False"/>
      Will now look like this (with browsers treating only the first
      box as checked):
    <input type="checkbox" checked="checked"/> <input
    type="checkbox"/>
      You can get the old behavior back by instantiating a
      TreeBuilder with `attribute_dict_class=dict`, or you can
      customize how Beautiful Soup treats attribute values by
      passing in a custom subclass of dict.
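      A minimal sketch of the new behavior (attribute ordering in the
      printed output may vary):
        from bs4 import BeautifulSoup
        soup = BeautifulSoup('<input type="checkbox"/><input type="checkbox"/>', "html.parser")
        box1, box2 = soup.find_all("input")
        box1["checked"] = True    # stored as the spec-appropriate string, e.g. checked="checked"
        box2["checked"] = False   # removes the attribute from the tag entirely
        print(soup)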
    * If Tag.get_attribute_list() is used to access an attribute
      that's not set, the return value is now an empty list rather
      than [None].
    * If you pass an empty list as the attribute value when searching
      the tree, you will now find all tags which have that attribute
      set to a value in the empty list--that is, you will find
      nothing. This is consistent with other situations where a list
      of acceptable values is provided. Previously, an empty list was
      treated the same as None and False, and you would have found
      the tags which did not have that attribute set at all.
      [bug=2045469]
    * For similar reasons, if you pass in limit=0 to a find() method,
      you will now get zero results. Previously, you would get all
      matching results.
    * When using one of the find() methods or creating a
      SoupStrainer, if you specify the same attribute value in
      ``attrs`` and the keyword arguments, you'll end up with two
      different ways to match that attribute. Previously the value in
      keyword arguments would override the value in ``attrs``.
    * All exceptions were moved to the bs4.exceptions module, and all
      warnings to the bs4._warnings module (named so as not to shadow
      Python's built-in warnings module). All warnings and exceptions
      are exported from the bs4 module, which is probably the safest
      place to import them from in your own code.
    * As a side effect of this, the string constant
      BeautifulSoup.NO_PARSER_SPECIFIED_WARNING was moved to
      GuessedAtParserWarning.MESSAGE.
    * The 'html5' formatter is now much less aggressive about
      escaping ampersands, escaping only the ampersands considered
      "ambiguous" by the HTML5 spec (which is almost none of them).
      This is the sort of change that might break your unit test
      suite, but the resulting markup will be much more readable and
      more HTML5-ish.
      To quickly get the old behavior back, change code like this:
      tag.encode(formatter='html5')
      to this:
      tag.encode(formatter='html5-4.12')
      In the future, the 'html5' formatter may become the default
      HTML formatter, which will change Beautiful Soup's default
      output. This will break a lot of test suites so it's not going
      to happen for a while. [bug=1902431]
    * Tag.sourceline and Tag.sourcepos now always have a consistent
      data type: Optional[int]. Previously these values were
      sometimes an Optional[int], and sometimes they were
      Optional[Tag], the result of searching for a child tag called
      <sourceline> or <sourcepos>. [bug=2065904]
      If your code does search for a tag called <sourceline> or
      <sourcepos>, it may stop finding that tag when you upgrade to
      Beautiful Soup 4.13. If this happens, you'll need to replace
      code that treats "sourceline" or "sourcepos" as tag names:
      tag.sourceline
      with code that explicitly calls the find() method:
      tag.find("sourceline").name
      Making the behavior of sourceline and sourcepos consistent has
      the side effect of fixing a major performance problem when a
      Tag is copied.
      With this change, the store_line_numbers argument to the
      BeautifulSoup constructor becomes much less useful, and its use
      is now discouraged, though I'm not deprecating it yet. Please
      contact me if you have a performance or security rationale for
      setting store_line_numbers=False.
    * append(), extend(), insert(), and unwrap() were moved from
      PageElement to Tag. Those methods manipulate the 'contents'
      collection, so they would only have ever worked on Tag objects.
    * The BeautifulSoupHTMLParser constructor now requires a
      BeautifulSoup object as its first argument. This almost
      certainly does not affect you, since you probably use
      HTMLParserTreeBuilder, not BeautifulSoupHTMLParser directly.
    * The TreeBuilderForHtml5lib methods fragmentClass(),
      getFragment(), and testSerializer() now raise
      NotImplementedError. These methods are called only by
      html5lib's test suite, and Beautiful Soup isn't integrated into
      that test suite, so this code was long since unused and
      untested.
      These methods are _not_ deprecated, since they are methods
      defined by html5lib. They may one day have real
      implementations, as part of a future effort to integrate
      Beautiful Soup into html5lib's test suite.
    * AttributeValueWithCharsetSubstitution.encode() is renamed to
      substitute_encoding, to avoid confusion with the much different
      str.encode().
    * Using PageElement.replace_with() to replace an element with
      itself returns the element instead of None.
    * All TreeBuilder constructors now take the empty_element_tags
      argument. The sets of tags found in
      HTMLTreeBuilder.empty_element_tags and
      HTMLTreeBuilder.block_elements are now in
      HTMLTreeBuilder.DEFAULT_EMPTY_ELEMENT_TAGS and
      HTMLTreeBuilder.DEFAULT_BLOCK_ELEMENTS, to avoid confusing them
      with instance variables.
    * The unused constant LXMLTreeBuilderForXML.DEFAULT_PARSER_CLASS
      has been removed.
    * Some of the arguments in the methods of LXMLTreeBuilderForXML
      have been renamed for consistency with the names lxml uses for
      those arguments in the superclass. This won't affect you unless
      you were calling methods like LXMLTreeBuilderForXML.start()
      directly.
    * In particular, the arguments to
      LXMLTreeBuilderForXML.prepare_markup have been changed to match
      the arguments to the superclass, TreeBuilder.prepare_markup.
      Specifically, document_declared_encoding now appears before
      exclude_encodings, not after. If you were calling this method
      yourself, I recommend switching to using keyword arguments
      instead.
    [#]# New features
    * The new ElementFilter class encapsulates Beautiful Soup's rules
      about matching elements and deciding which parts of a document
      to parse. It's easy to override those rules with subclassing or
      function composition. The SoupStrainer class, which contains
      all the matching logic you're familiar with from the find_*
      methods, is now a subclass of ElementFilter.
    * The new PageElement.filter() method provides a fully general
      way of finding elements in a Beautiful Soup parse tree. You can
      specify a function to iterate over the tree and an
      ElementFilter to determine what matches.
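      A hedged sketch of how the pieces fit together, assuming
      ElementFilter accepts a match function and can be passed to the
      find_* methods (check the 4.13 documentation for the exact
      signatures; the predicate below is hypothetical):
        from bs4 import BeautifulSoup
        from bs4.filter import ElementFilter

        def is_link_with_href(el):    # hypothetical predicate over parse tree elements
            return getattr(el, "name", None) == "a" and el.has_attr("href")

        soup = BeautifulSoup('<a href="/x">x</a><a>y</a>', "html.parser")
        soup.find_all(ElementFilter(is_link_with_href))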
    * The new_tag() method now takes a 'string' argument. This allows
      you to set the string contents of a Tag when creating it. Patch
      by Chris Papademetrious. [bug=2044599]
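      For example (a minimal sketch):
        from bs4 import BeautifulSoup
        soup = BeautifulSoup("<body></body>", "html.parser")
        p = soup.new_tag("p", string="Hello")   # string contents set at creation time
        soup.body.append(p)                     # <body><p>Hello</p></body>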
    * Defined a number of new iterators which are the same as
      existing iterators, but which yield the element itself before
      beginning to traverse the tree. [bug=2052936] [bug=2067634]
    - PageElement.self_and_parents
    - PageElement.self_and_descendants
    - PageElement.self_and_next_elements
    - PageElement.self_and_next_siblings
    - PageElement.self_and_previous_elements
    - PageElement.self_and_previous_siblings
      self_and_parents yields the element you call it on and then all
      of its parents. self_and_next_elements yields the element you
      call it on and then every element parsed afterwards; and so on.
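      A brief sketch of the intended usage, assuming these behave like
      the existing .parents and .descendants properties:
        from bs4 import BeautifulSoup
        soup = BeautifulSoup("<div><p><b>x</b></p></div>", "html.parser")
        [el.name for el in soup.b.self_and_parents]   # roughly: ['b', 'p', 'div', '[document]']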
    * The NavigableString class now has a .string property which
      returns the string itself. This makes it easier to iterate over
      a mixed list of Tag and NavigableString objects. [bug=2044794]
    * Defined a new method, Tag.copy_self(), which creates a copy of
      a Tag with the same attributes but no contents. [bug=2065120]
      Note that this method used to be a private method named
      _clone(). The _clone() method has been removed, so if you were
      using it, change your code to call copy_self() instead.
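      For example (a minimal sketch):
        from bs4 import BeautifulSoup
        soup = BeautifulSoup('<div class="box"><p>content</p></div>', "html.parser")
        shallow = soup.div.copy_self()   # <div class="box"></div>: same attributes, no contents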
    * The PageElement.append() method now returns the element that
      was appended; it used to have no return value. [bug=2093025]
    * The methods PageElement.insert(), PageElement.extend(),
      PageElement.insert_before(), and PageElement.insert_after() now
      return a list of the items inserted. These methods used to have
      no return value. [bug=2093025]
    * The PageElement.insert() method now takes a variable number of
      arguments and returns a list of all elements inserted, to match
      insert_before() and insert_after(). (Even if I hadn't made the
      variable-argument change, an edge case around inserting one
      Beautiful Soup object into another means that insert()'s return
      value needs to be a list.) [bug=2093025]
    * Defined a new warning class, UnusualUsageWarning, which is a
      superclass for all of the warnings issued when Beautiful Soup
      notices something unusual but not guaranteed to be wrong, like
      markup that looks like a URL (MarkupResemblesLocatorWarning) or
      XML being run through an HTML parser (XMLParsedAsHTMLWarning).
      The text of these warnings has been revamped to explain in more
      detail what is going on, how to check if you've made a mistake,
      and how to make the warning go away if you are acting
      deliberately.
      If these warnings are interfering with your workflow, or simply
      annoying you, you can filter all of them by filtering
      UnusualUsageWarning, without worrying about losing the warnings
      Beautiful Soup issues when there *definitely* is a problem you
      need to correct.
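      A minimal sketch of such a filter, assuming UnusualUsageWarning is
      importable from the top-level bs4 module as noted above:
        import warnings
        from bs4 import BeautifulSoup, UnusualUsageWarning

        warnings.filterwarnings("ignore", category=UnusualUsageWarning)
        # Markup that looks like a URL would normally trigger one of the subclass warnings
        BeautifulSoup("https://example.com/page.html", "html.parser")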
    * It's now possible to modify the behavior of the list used to
      store the values of multi-valued attributes such as HTML
      'class', by passing in whatever class you want instantiated
      (instead of a normal Python list) to the TreeBuilder
      constructor as attribute_value_list_class. [bug=2052943]
    [#]# Improvements
    * decompose() was moved from Tag to its superclass PageElement,
      since there's no reason it won't also work on NavigableString
      objects.
    * Emit an UnusualUsageWarning if the user tries to search for an
      attribute called _class; they probably mean "class_".
      [bug=2025089]
    * The MarkupResemblesLocatorWarning issued when the markup
      resembles a filename is now issued less often, due to
      improvements in detecting markup that's unlikely to be a
      filename. [bug=2052988]
    * Emit a warning if a document is parsed using a SoupStrainer
      that's set up to filter everything. In these cases, filtering
      everything is the most consistent thing to do, but there was no
      indication that this was happening, so the behavior may have
      seemed mysterious.
    * When using one of the find() methods or creating a
      SoupStrainer, you can pass a list of any accepted object
      (strings, regular expressions, etc.) for any of the objects.
      Previously you could only pass in a list of strings.
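      For instance (a rough sketch; the markup is only an example):
        import re
        from bs4 import BeautifulSoup
        soup = BeautifulSoup('<a class="extLink">x</a><a class="internal">y</a>', "html.parser")
        # The list may now mix strings, regular expressions, and other accepted objects
        soup.find_all("a", class_=[re.compile("^ext"), "internal"])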
    * A SoupStrainer can now filter tag creation based on a tag's
      namespaced name. Previously only the unqualified name could be
      used.
    * Added the correct stacklevel to another instance of the
      XMLParsedAsHTMLWarning. [bug=2034451]
    * Improved the wording of the TypeError raised when you pass
      something other than markup into the BeautifulSoup constructor.
      [bug=2071530]
    * Optimized the case where you use Tag.insert() to "insert" a
      PageElement into its current location. [bug=2077020]
    * Changes to make tests work whether tests are run under
      soupsieve 2.6 or an earlier version. Based on a patch by
      Stefano Rivera.
    * Removed the strip_cdata argument to lxml's HTMLParser
      constructor, which never did anything and is deprecated as of
      lxml 5.3.0. Patch by Stefano Rivera. [bug=2076897]
    [#]# Bug fixes
    * Copying a tag with a multi-valued attribute now makes a copy of
      the list of values, eliminating a bug where both the old and
      new copy shared the same list. [bug=2067412]
    * The lxml TreeBuilder, like the other TreeBuilders, now filters
      a document's initial DOCTYPE if you've set up a SoupStrainer
      that eliminates it. [bug=2062000]
    * A lot of things can go wrong if you modify the parse tree while
      iterating over it, especially if you are removing or replacing
      elements. Most of those things fall under the category of
      unexpected behavior (which is why I don't recommend doing
      this), but there are a few ways that caused unhandled
      exceptions. The generators used by Beautiful Soup (e.g.
      .descendants, which powers the find* methods) should now
      work correctly in those cases, or at least not raise
      exceptions.
      As part of this work, I changed when the generator determines
      the next element. Previously it was done after the yield
      statement; now it's done before the yield statement. This
      lets you remove the yielded element in calling code, or modify
      it in a way that would break this calculation, without causing
      an exception.
      So if your code relies on modifying the tree in a way that
      'steers' a generator, rather than using the generator to decide
      which bits of the tree to modify, it will
      probably stop working at this point. [bug=2091118]
    * Fixed an error in the lookup table used when converting
      ISO-Latin-1 to ASCII, which no one should do anyway.
    * Corrected the markup that's output in the unlikely event that
      you encode a document to a Python internal encoding (like
      "palmos") that's not recognized by the HTML or XML standard.
    * UnicodeDammit.markup is now always a bytestring representing
      the *original* markup (sans BOM), and
      UnicodeDammit.unicode_markup is always the converted Unicode
      equivalent of the original markup. Previously,
      UnicodeDammit.markup was treated inconsistently and would often
      end up containing Unicode. UnicodeDammit.markup was not a
      documented attribute, but if you were using it, you probably
      want to switch to using .unicode_markup instead.
  - Drop soupsieve26-compat.patch
* Wed Jun 18 2025 Matej Cepl <mcepl@cepl.eu>
  - Skip the failing test test_rejected_input; it is known to be flaky
    and dependent on various changes in Python (more of which are
    coming in a few days).
* Fri Nov 01 2024 Matej Cepl <mcepl@cepl.eu>
  - Add soupsieve26-compat.patch to make tests more tolerant with
    various versions of soupsieve (better solution for lp#2086199).
* Thu Oct 31 2024 Matej Cepl <mcepl@cepl.eu>
  - Skip the test test_unsupported_pseudoclass (lp#2086199).
* Sat Jan 20 2024 Dirk Müller <dmueller@suse.com>
  - update to 4.12.3:
    * Fixed a regression such that if you set .hidden on a tag, the
      tag becomes invisible but its contents are still visible. User
      manipulation of .hidden is not a documented or supported
      feature, so don't do this, but it wasn't too difficult to keep
      the old behavior working.
    * Fixed a case found by Mengyuhan where html.parser giving up
      on markup would result in an AssertionError instead of a
      ParserRejectedMarkup exception.
    * Added the correct stacklevel to instances of the
      XMLParsedAsHTMLWarning.
    * Corrected the syntax of the license definition in
      pyproject.toml.
    * Corrected a typo in a test that was causing test failures
      when run against libxml2 2.12.1.
* Thu Nov 23 2023 Steve Kowalik <steven.kowalik@suse.com>
  - Require cchardet explicitly to avoid charset-normalizer braindamage.
* Mon May 08 2023 Daniel Garcia <daniel.garcia@suse.com>
  - Update to 4.12.2:
    * Fixed an unhandled exception in BeautifulSoup.decode_contents
      and methods that call it. [bug=2015545]
  - 4.12.1:
    * This version of Beautiful Soup replaces setup.py and setup.cfg
      with pyproject.toml. Beautiful Soup now uses tox as its test backend
      and hatch to do builds.
    * The main functional improvement in this version is a nonrecursive technique
      for regenerating a tree. This technique is used to avoid situations where,
      in previous versions, doing something to a very deeply nested tree
      would overflow the Python interpreter stack:
      1. Outputting a tree as a string, e.g. with
      BeautifulSoup.encode() [bug=1471755]
      2. Making copies of trees (copy.copy() and
      copy.deepcopy() from the Python standard library). [bug=1709837]
      3. Pickling a BeautifulSoup object. (Note that pickling a Tag
      object can still cause an overflow.)
    * Making a copy of a BeautifulSoup object no longer parses the
      document again, which should improve performance significantly.
    * When a BeautifulSoup object is unpickled, Beautiful Soup now
      tries to associate an appropriate TreeBuilder object with it.
    * Tag.prettify() will now consistently end prettified markup with
      a newline.
    * Added unit tests for fuzz test cases created by third
      parties. Some of these tests are skipped since they point
      to problems outside of Beautiful Soup, but this change
      puts them all in one convenient place.
    * PageElement now implements the known_xml attribute. (This was technically
      a bug, but it shouldn't be an issue in normal use.) [bug=2007895]
    * The demonstrate_parser_differences.py script was still written in
      Python 2. I've converted it to Python 3, but since no one has
      mentioned this over the years, it's a sign that no one uses this
      script and it's not serving its purpose.
  - 4.12.0:
    * Introduced the .css property, which centralizes all access to
      the Soup Sieve API. This allows Beautiful Soup to give direct
      access to as much of Soup Sieve as makes sense, without cluttering
      the BeautifulSoup and Tag classes with a lot of new methods.
      This does mean one addition to the BeautifulSoup and Tag classes
      (the .css property itself), so this might be a breaking change if you
      happen to use Beautiful Soup to parse XML that includes a tag called
      <css>. In particular, code like this will stop working in 4.12.0:
      soup.css['id']
      Code like this will work just as before:
      soup.find('css')['id']
      The Soup Sieve methods supported through the .css property are
      select(), select_one(), iselect(), closest(), match(), filter(),
      escape(), and compile(). The BeautifulSoup and Tag classes still
      support the select() and select_one() methods; they have not been
      deprecated, but they have been demoted to convenience methods.
      [bug=2003677]
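      For example (a minimal sketch; requires the soupsieve package):
        from bs4 import BeautifulSoup
        soup = BeautifulSoup('<p class="note">hi</p><p>bye</p>', "html.parser")
        soup.css.select("p.note")       # same results as soup.select("p.note")
        soup.css.select_one("p.note")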
    * When the html.parser parser decides it can't parse a document, Beautiful
      Soup now consistently propagates this fact by raising a
      ParserRejectedMarkup error. [bug=2007343]
    * Removed some error checking code from diagnose(), which is redundant with
      similar (but more Pythonic) code in the BeautifulSoup constructor.
      [bug=2007344]
    * Added intersphinx references to the documentation so that other
      projects have a target to point to when they reference Beautiful
      Soup classes. [bug=1453370]
  - 4.11.2:
    * Fixed test failures caused by nondeterministic behavior of
      UnicodeDammit's character detection, depending on the platform setup.
      [bug=1973072]
    * Fixed another crash when overriding multi_valued_attributes and using the
      html5lib parser. [bug=1948488]
    * The HTMLFormatter and XMLFormatter constructors no longer return a
      value. [bug=1992693]
    * Tag.interesting_string_types is now propagated when a tag is
      copied. [bug=1990400]
    * Warnings now do their best to provide an appropriate stacklevel,
      improving the usefulness of the message. [bug=1978744]
    * Passing a Tag's .contents into PageElement.extend() now works the
      same way as passing the Tag itself.
    * Soup Sieve tests will be skipped if the library is not installed.
  - 4.11.1:
    This release was done to ensure that the unit tests are packaged along
    with the released source. There are no functionality changes in this
    release, but there are a few other packaging changes:
    * The Japanese and Korean translations of the documentation are included.
    * The changelog is now packaged as CHANGELOG, and the license file is
      packaged as LICENSE. NEWS.txt and COPYING.txt are still present,
      but may be removed in the future.
    * TODO.txt is no longer packaged, since a TODO is not relevant for released
      code.
  - 4.11.0:
    * Ported unit tests to use pytest.
    * Added special string classes, RubyParenthesisString and RubyTextString,
      to make it possible to treat ruby text specially in get_text() calls.
      [bug=1941980]
    * It's now possible to customize the way output is indented by
      providing a value for the 'indent' argument to the Formatter
      constructor. The 'indent' argument works very similarly to the
      argument of the same name in the Python standard library's
      json.dump() function. [bug=1955497]
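      A minimal sketch of the option:
        from bs4 import BeautifulSoup
        from bs4.formatter import HTMLFormatter
        soup = BeautifulSoup("<div><p>hi</p></div>", "html.parser")
        print(soup.prettify(formatter=HTMLFormatter(indent=4)))   # indent by 4 spaces per level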
    * If the charset-normalizer Python module
      (https://pypi.org/project/charset-normalizer/) is installed, Beautiful
      Soup will use it to detect the character sets of incoming documents.
      This is also the module used by newer versions of the Requests library.
      For the sake of backwards compatibility, chardet and cchardet both take
      precedence if installed. [bug=1955346]
    * Added a workaround for an lxml bug
      (https://bugs.launchpad.net/lxml/+bug/1948551) that causes
      problems when parsing a Unicode string beginning with BYTE ORDER MARK.
      [bug=1947768]
    * Issue a warning when an HTML parser is used to parse a document that
      looks like XML but not XHTML. [bug=1939121]
    * Do a better job of keeping track of namespaces as an XML document is
      parsed, so that CSS selectors that use namespaces will do the right
      thing more often. [bug=1946243]
    * Some time ago, the misleadingly named "text" argument to find-type
      methods was renamed to the more accurate "string." But this supposed
      "renaming" didn't make it into important places like the method
      signatures or the docstrings. That's corrected in this
      version. "text" still works, but will give a DeprecationWarning.
      [bug=1947038]
    * Fixed a crash when pickling a BeautifulSoup object that has no
      tree builder. [bug=1934003]
    * Fixed a crash when overriding multi_valued_attributes and using the
      html5lib parser. [bug=1948488]
    * Standardized the wording of the MarkupResemblesLocatorWarning
      warnings to omit untrusted input and make the warnings less
      judgmental about what you ought to be doing. [bug=1955450]
    * Removed support for the iconv_codec library, which doesn't seem
      to exist anymore and was never put up on PyPI. (The closest
      replacement on PyPI, iconv_codecs, is GPL-licensed, so we can't use
      it--it's also quite old.)
* Sun Apr 23 2023 Matej Cepl <mcepl@suse.com>
  - Switch documentation to be within the main package.
* Fri Apr 21 2023 Dirk Müller <dmueller@suse.com>
  - add sle15_python_module_pythons (jsc#PED-68)
* Thu Apr 13 2023 Matej Cepl <mcepl@suse.com>
  - Make calling of %{sle15modernpython} optional.
* Wed Feb 09 2022 Steve Kowalik <steven.kowalik@suse.com>
  - Update to 4.10.0:
    * This is the first release of Beautiful Soup to only support Python 3.
    * The behavior of methods like .get_text() and .strings now differs
      depending on the type of tag.
    * NavigableString and its subclasses now implement the get_text()
      method, as well as the properties .strings and
      .stripped_strings.
    * The 'html5' formatter now treats attributes whose values are the
      empty string as HTML boolean attributes.
    * The 'replace_with()' method now takes a variable number of arguments,
      and can be used to replace a single element with a sequence of elements.
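      For example (a minimal sketch):
        from bs4 import BeautifulSoup
        soup = BeautifulSoup("<p>one <b>two</b> three</p>", "html.parser")
        soup.b.replace_with("2", soup.new_string(" and a half"))
        # <p>one 2 and a half three</p>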
    * Corrected output when the namespace prefix associated with a
      namespaced attribute is the empty string, as opposed to
      None.
    * Performance improvement when processing tags that speeds up overall
      tree construction by 2%. Patch by Morotti. [bug=1899358]
    * Corrected the use of special string container classes in cases when a
      single tag may contain strings with different containers, such as
      the <template> tag, which may contain both TemplateString objects
      and Comment objects.
    * The html.parser tree builder can now handle named entities
      found in the HTML5 spec in much the same way that the html5lib
      tree builder does.
    * Added a second way to specify encodings to UnicodeDammit and
      EncodingDetector, based on the order of precedence defined in the
      HTML5 spec.
    * Improved the warning issued when a directory name (as opposed to
      the name of a regular file) is passed as markup into the BeautifulSoup
      constructor.
  - Do not pass the directory to pytest.

Files

/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/INSTALLER
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/METADATA
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/RECORD
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/REQUESTED
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/WHEEL
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/licenses
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/licenses/AUTHORS
/usr/lib/python3.11/site-packages/beautifulsoup4-4.13.4.dist-info/licenses/LICENSE
/usr/lib/python3.11/site-packages/bs4
/usr/lib/python3.11/site-packages/bs4/__init__.py
/usr/lib/python3.11/site-packages/bs4/__pycache__
/usr/lib/python3.11/site-packages/bs4/__pycache__/__init__.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/_deprecation.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/_deprecation.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/_typing.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/_typing.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/_warnings.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/_warnings.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/css.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/css.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/dammit.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/dammit.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/diagnose.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/diagnose.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/element.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/element.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/exceptions.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/exceptions.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/filter.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/filter.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/formatter.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/__pycache__/formatter.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/_deprecation.py
/usr/lib/python3.11/site-packages/bs4/_typing.py
/usr/lib/python3.11/site-packages/bs4/_warnings.py
/usr/lib/python3.11/site-packages/bs4/builder
/usr/lib/python3.11/site-packages/bs4/builder/__init__.py
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/__init__.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/_html5lib.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/_html5lib.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/_htmlparser.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/_htmlparser.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/_lxml.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/builder/__pycache__/_lxml.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/builder/_html5lib.py
/usr/lib/python3.11/site-packages/bs4/builder/_htmlparser.py
/usr/lib/python3.11/site-packages/bs4/builder/_lxml.py
/usr/lib/python3.11/site-packages/bs4/css.py
/usr/lib/python3.11/site-packages/bs4/dammit.py
/usr/lib/python3.11/site-packages/bs4/diagnose.py
/usr/lib/python3.11/site-packages/bs4/element.py
/usr/lib/python3.11/site-packages/bs4/exceptions.py
/usr/lib/python3.11/site-packages/bs4/filter.py
/usr/lib/python3.11/site-packages/bs4/formatter.py
/usr/lib/python3.11/site-packages/bs4/py.typed
/usr/lib/python3.11/site-packages/bs4/tests
/usr/lib/python3.11/site-packages/bs4/tests/__init__.py
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/__init__.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_builder.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_builder.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_builder_registry.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_builder_registry.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_css.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_css.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_dammit.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_dammit.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_element.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_element.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_filter.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_filter.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_formatter.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_formatter.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_fuzz.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_fuzz.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_html5lib.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_html5lib.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_htmlparser.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_htmlparser.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_lxml.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_lxml.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_navigablestring.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_navigablestring.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_pageelement.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_pageelement.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_soup.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_soup.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_tag.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_tag.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_tree.cpython-311.opt-1.pyc
/usr/lib/python3.11/site-packages/bs4/tests/__pycache__/test_tree.cpython-311.pyc
/usr/lib/python3.11/site-packages/bs4/tests/fuzz
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4670634698080256.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5000587759190016.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5270998950477824.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5375146639360000.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5492400320282624.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6306874195312640.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase
/usr/lib/python3.11/site-packages/bs4/tests/fuzz/crash-ffbdfa8a2b26f13537b68d3794b0478a4090ee4a.testcase
/usr/lib/python3.11/site-packages/bs4/tests/test_builder.py
/usr/lib/python3.11/site-packages/bs4/tests/test_builder_registry.py
/usr/lib/python3.11/site-packages/bs4/tests/test_css.py
/usr/lib/python3.11/site-packages/bs4/tests/test_dammit.py
/usr/lib/python3.11/site-packages/bs4/tests/test_element.py
/usr/lib/python3.11/site-packages/bs4/tests/test_filter.py
/usr/lib/python3.11/site-packages/bs4/tests/test_formatter.py
/usr/lib/python3.11/site-packages/bs4/tests/test_fuzz.py
/usr/lib/python3.11/site-packages/bs4/tests/test_html5lib.py
/usr/lib/python3.11/site-packages/bs4/tests/test_htmlparser.py
/usr/lib/python3.11/site-packages/bs4/tests/test_lxml.py
/usr/lib/python3.11/site-packages/bs4/tests/test_navigablestring.py
/usr/lib/python3.11/site-packages/bs4/tests/test_pageelement.py
/usr/lib/python3.11/site-packages/bs4/tests/test_soup.py
/usr/lib/python3.11/site-packages/bs4/tests/test_tag.py
/usr/lib/python3.11/site-packages/bs4/tests/test_tree.py
/usr/share/licenses/python311-beautifulsoup4
/usr/share/licenses/python311-beautifulsoup4/LICENSE

