Version 5.3.1 (2024-03-??)

Renamed check_urls.py to check_urls_db.py.

Renamed check_url.py to check_urls.py.

Stop splitting and un-splitting URLs. Pass bookmark.href as is.
Version 5.3.0 (2024-03-06)

Added get_url.py: a script to get one file from a URL.

Renamed set-URLs -> set-urls.
Version 5.2.5 (2024-03-05)

Feat(Robots/bkmk_rrequests): Ignore all problems with certificates.

Fix(Robots/bkmk_robot_base): Pass the query part of the URL.
Version 5.2.4 (2024-03-04)

Fix(Robots/bkmk_rrequests): No need to re-check error 404 via proxy.
Version 5.2.3 (2024-03-03)

Feat(Robots/bkmk_rrequests): Report 40x and 50x errors.

Fix the HTML parser based on bs4 (BeautifulSoup): find "shortcut icon".
Version 5.2.2 (2024-03-03)

Robots/bkmk_rrequests: Add request headers.

Robots/bkmk_robot_base: Process "data:image/" icons.
Version 5.2.1 (2024-03-02)

Speed up the second access through the proxy.
Version 5.2.0 (2024-03-02)

Allow the robot based on requests to use a proxy.
Version 5.1.0 (2024-03-01)

Robot based on requests.
Version 5.0.0 (2023-11-22)

Report redirects and set URLs.

Remove BeautifulSoup.py (use the globally installed one).
Version 4.6.0 (2014-07-06)

Split the simple robot: separate network operations and
URL handling/HTML parsing.

Change parse_html to parse strings, not files.

Split parse_html/__init__.py into __main__.py.

Adapt JSON storage to the recent Mozilla export format.
Allow parameters in BKMK_* environment variables; for example,
BKMK_ROBOT=forking:subproc=urllib or
BKMK_STORAGE=json:filename=bookmarks_db.json.
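The name:param=value syntax of these variables can be illustrated with a small parsing sketch. The helper below is hypothetical, not the project's actual code; only the forking:subproc=urllib and json:filename=bookmarks_db.json values come from the entry above.

```python
def parse_bkmk_var(value):
    """Split a 'name:param1=v1:param2=v2' value into a name and a dict
    of parameters, mirroring the BKMK_* syntax described above."""
    name, _, rest = value.partition(":")
    # Each remaining colon-separated piece is a 'param=value' pair.
    params = dict(part.split("=", 1) for part in rest.split(":") if part)
    return name, params

print(parse_bkmk_var("forking:subproc=urllib"))
print(parse_bkmk_var("json:filename=bookmarks_db.json"))
```

A value with no colon (e.g. just a robot name) yields an empty parameter dict, so unparameterized settings keep working.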
Pass the subproc parameter to the subprocess to allow different robots.

Add a new robot based on urllib2.
Version 4.5.6 (2014-01-14)

Remove the absolute directory ~/lib to make the code portable.

Version 4.5.5 (2013-12-05)

Parse <meta charset="...">.

Version 4.5.4 (2013-11-23)

Published through git/gitweb.

Version 4.5.3 (2013-07-26)

Minor tweak in Makefile.
Version 4.5.2 (2012-09-24)

Removed svn:keywords.

Handle redirects with codes 303 and 307.

Fixed a bug in handling place: URIs (do not append '//').
Version 4.5.1 (2011-12-28)

Read/write the Mozilla-specific date/time format in the json storage.
Version 4.5.0 (2011-12-18)

Encode international domain names with IDNA encoding.

Adapted to different Mozilla 'place' URIs.
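The IDNA entry above can be sketched with Python's built-in idna codec; the hostname below is an arbitrary example, not one from the project.

```python
# Convert an internationalized domain name to its ASCII ("xn--...") form
# before using it on the network, as the entry above describes.
hostname = "пример.испытание"
ascii_host = hostname.encode("idna").decode("ascii")
print(ascii_host)
```

Each label is encoded separately, so the result is a dot-separated sequence of ASCII "xn--" labels suitable for DNS lookups.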
Version 4.4.0 (2011-01-07)

Moved BeautifulSoup.py and subproc.py from Robots/ to the top-level
directory.

Moved parse_html.py and its submodules to a separate parse_html package.

Added statistics code to parse_html, gathered statistics on parser
success/failure rates, and reordered the parsers accordingly.
Version 4.3.1 (2011-01-03)

Get the favicon before the HTML redirect (refresh).
Version 4.3.0 (2011-01-01)

Robots no longer have one global temporary file - there are at least two
(HTML and favicon), and in the future there will be more for asynchronous
robot(s) that would test many URLs in parallel.

Added an HTML parser based on lxml.
Version 4.2.1 (2010-08-12)

Added an HTML parser based on the html5 library.
Version 4.2.0 (2010-08-11)

New storage: json; it allows loading and storing Mozilla (Firefox) backup
files.

Process HTTP error 307 as a temporary redirect.
Version 4.1.1 (2008-03-10)

Catch and report all errors.

Consider application/xhtml+xml as HTML.

Better handling of exceptions while looking up the icon.

Recode HTML entities.

Always use utf-8 as the default encoding.
Version 4.1.0 (2008-01-14)

Parser for HTML based on BeautifulSoup.

Changed the User-Agent header: I saw a number of sites that forbid
"Mozilla compatible" browsers. Added a number of fake headers to pretend
this is a real web browser - there are still stupid sites
that try to protect themselves from robots by analyzing headers.

Handle redirects while looking for the icon.

Handle float timeouts in HTML redirects.

The minimum required Python version is now 2.5.
Version 4.0.0 (2007-10-20)

Extended support for Mozilla: charset and icon in bookmarks.
Use the charset to add an Accept-Charset header.
Retrieve favicon.ico (or whatever <link> points to) and store it.

The project celebrates its 10th anniversary!
Version 3.4.1 (2005-01-29)

Updated to Python 2.4. Switched from CVS to Subversion.
Version 3.4.0 (2004-09-23)

Extended support for Mozilla: keywords in bookmarks.
Updated to m_lib version 1.2.

parse_html.py can now recode Unicode entities in titles.
HTML parser: if the protocol is HTTP, and there is a Content-Type header, and
the content type is text/html, the object is parsed to extract its title; if
the Content-Type header has a charset, or if the HTML has a <META> with a
charset, the title is converted from the given charset to the default
charset. The <HEAD> is also parsed to extract a <META> tag with a redirect.
Complete rewrite from scratch. Created a mechanism for pluggable storage
managers, writers (DB dumpers/exporters), and robots.