+Version 5.2.4 (2024-03-04)
+
+ Fix(Robots/bkmk_rrequests): No need to re-check error 404 via proxy.
+
+Version 5.2.3 (2024-03-03)
+
+ Feat(Robots/bkmk_rrequests): Report 40x and 50x errors.
+
+ Fix HTML parser based on Bs4: Find "shortcut icon".
+
+Version 5.2.2 (2024-03-03)
+
+ Robots/bkmk_rrequests: Add request headers.
+
+ Robots/bkmk_robot_base: Process "data:image/" icons.
+
+Version 5.2.1 (2024-03-02)
+
+ Speed up second access through a proxy.
+
+Version 5.2.0 (2024-03-02)
+
+ Allow the robot based on requests to use a proxy.
+
+Version 5.1.0 (2024-03-01)
+
+ Robot based on requests.
+
+Version 5.0.0 (2023-11-22)
+
+ Python 3.
+
+ Report redirects and set URLs.
+
+ Delete URLs.
+
+ Remove BeautifulSoup.py (use globally installed).
+
+Version 4.6.0 (2014-07-06)
+
+ Split simple robot: separate network operations and
+ URL handling/HTML parsing.
+
+ Change parse_html to parse strings, not files.
+
+ Split parse_html/__init__.py into __main__.py.
+
+ Adapt JSON storage to recent Mozilla export format.
+
+ Add ChangeLog.
+
+ Allow parameters in BKMK_* environment variables; for example,
+ BKMK_ROBOT=forking:subproc=urllib or
+ BKMK_STORAGE=json:filename=bookmarks_db.json.
+
+ Pass subproc parameter to the subprocess to allow different robots.
+
+ Add a new robot based on urllib2.
+
+Version 4.5.6 (2014-01-14)
+
+ Remove absolute directory ~/lib to make it portable.
+
+Version 4.5.5 (2013-12-05)
+
+ Parse <meta charset="...">.
+
+Version 4.5.4 (2013-11-23)
+
+ Published through git/gitweb.
+
+Version 4.5.3 (2013-07-26)
+
+ Minor tweak in Makefile.
+
+ Switched to git.
+
+Version 4.5.2 (2012-09-24)
+
+ Removed svn:keywords.
+
+ Handle redirects with codes 303 and 307.
+
+ Fixed a bug in handling place: URIs (do not append '//').
+
+Version 4.5.1 (2011-12-28)
+
+ Read/write mozilla-specific date/time format in json storage.
+
+Version 4.5.0 (2011-12-18)
+
+ Encode international domain names with IDNA encoding.
+
+ Adapted to different Mozilla 'place' URIs.
+
+Version 4.4.0 (2011-01-07)
+
+ Moved BeautifulSoup.py and subproc.py from Robots/ to the top-level
+ directory.
+
+ Moved parse_html.py and its submodules to a separate parse_html package.
+
+ Added statistics code to parse_html, gathered statistics on parser
+ success/failure rate, reordered parsers.
+
+ Removed old cruft.
+
+Version 4.3.1 (2011-01-03)
+
+ Get favicon before HTML redirect (refresh).
+
+Version 4.3.0 (2011-01-01)
+
+ Robots no longer have one global temporary file - there are at least two
+ (html and favicon), and in the future there will be more for asynchronous
+ robot(s) that would test many URLs in parallel.
+
+Version 4.2.2
+
+ Added HTML Parser based on lxml.
+
+Version 4.2.1 (2010-08-12)
+
+ Added HTML Parser based on the html5lib library.
+
+Version 4.2.0 (2010-08-11)
+
+ New storage: json; it allows loading and storing Mozilla (Firefox) backup
+ files.
+
+Version 4.1.2
+
+ Process HTTP error 307 as a temporary redirect.
+
+Version 4.1.1 (2008-03-10)
+
+ Catch and report all errors.
+
+ Consider application/xhtml+xml as HTML.
+
+ Better handling of exceptions while looking up the icon.
+
+ Recode HTML entities.
+
+ Always use utf-8 as the default encoding.
+
+Version 4.1.0 (2008-01-14)
+
+ Parser for HTML based on BeautifulSoup.
+
+ Changed User-agent header: I saw a number of sites that forbid
+ "Mozilla compatible" browsers. Added a number of fake headers to pretend
+ this is a real web browser - there are still stupid sites
+ that try to protect themselves from robots by analyzing headers.
+
+ Handle redirects while looking for the icon.
+
+ Handle float timeouts in HTML redirects.
+
+ The minimal required version of Python is now 2.5.
+
+Version 4.0.0 (2007-10-20)
+
+ Extended support for Mozilla: charset and icon in bookmarks.
+ Use the charset to add Accept-Charset header.
+ Retrieve favicon.ico (or whatever <link> points to) and store it.
+
+ The project celebrates 10th anniversary!
+
+Version 3.4.1 (2005-01-29)
+
+ Updated to Python 2.4. Switched from CVS to Subversion.
+
+Version 3.4.0 (2004-09-23)
+
+ Extended support for Mozilla: keywords in bookmarks.
+ Updated to m_lib version 1.2.
+
+Version 3.3.2
+
+ parse_html.py can now recode unicode entities in titles.
+
+Version 3.3.0
+
+ Required Python 2.2.
+
+ HTML parser. If the protocol is HTTP, and there is a Content-Type header,
+ and the content type is text/html, the object is parsed to extract its
+ title; if the Content-Type header has a charset, or if the HTML has a
+ <META> tag with a charset, the title is converted from the given charset
+ to the default charset. The <HEAD> is also parsed to extract a <META> tag
+ with a redirect, if any.
+
+Version 3.0
+
+ Complete rewrite from scratch. Created a mechanism for pluggable storage
+ managers, writers (DB dumpers/exporters) and robots.