Version 5.2.4 (2024-03-04)

Fix(Robots/bkmk_rrequests): No need to re-check error 404 via proxy.

Version 5.2.3 (2024-03-03)

Feat(Robots/bkmk_rrequests): Report 40x and 50x errors.

Fix the HTML parser based on Bs4: find "shortcut icon".

Version 5.2.2 (2024-03-03)

Robots/bkmk_rrequests: Add request headers.

Robots/bkmk_robot_base: Process "data:image/" icons.

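For reference, a "data:image/" icon embeds the image bytes inline in the URL itself, so no network fetch is needed. A minimal sketch of decoding one (the URI below is an illustrative example holding only the 8-byte PNG signature, not the project's actual code):

```python
import base64

# A "data:" URI has the form data:<mime-type>[;base64],<payload>.
data_uri = "data:image/png;base64,iVBORw0KGgo="
header, b64data = data_uri.split(",", 1)
mime = header[len("data:"):].split(";")[0]  # "image/png"
icon_bytes = base64.b64decode(b64data)      # raw image bytes
```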
Version 5.2.1 (2024-03-02)

Speed up the second access through a proxy.

Version 5.2.0 (2024-03-02)

Allow the robot based on requests to use a proxy.

Version 5.1.0 (2024-03-01)

Add a robot based on requests.

Version 5.0.0 (2023-11-22)

Report redirects and set URLs.

Remove BeautifulSoup.py (use the globally installed one).

Version 4.6.0 (2014-07-06)

Split the simple robot: separate network operations from
URL handling/HTML parsing.

Change parse_html to parse strings, not files.

Split parse_html/__init__.py into __main__.py.

Adapt the JSON storage to the recent Mozilla export format.

Allow parameters in BKMK_* environment variables; for example,
BKMK_ROBOT=forking:subproc=urllib or
BKMK_STORAGE=json:filename=bookmarks_db.json.

Pass the subproc parameter to the subprocess to allow different robots.

Add a new robot based on urllib2.

Version 4.5.6 (2014-01-14)

Remove the absolute directory ~/lib to make the code portable.

Version 4.5.5 (2013-12-05)

Parse <meta charset="...">.

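A minimal sketch of what parsing this HTML5-style charset declaration involves (the regex and the sample markup are illustrative assumptions, not the project's actual parser):

```python
import re

# HTML5 allows <meta charset="..."> in place of the older
# <meta http-equiv="Content-Type" content="text/html; charset=...">.
html = '<head><meta charset="koi8-r"><title>Bookmarks</title></head>'
match = re.search(r'<meta\s+charset=["\']?([-\w]+)', html, re.IGNORECASE)
charset = match.group(1) if match else "utf-8"  # fall back to the default
```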
Version 4.5.4 (2013-11-23)

Published through git/gitweb.

Version 4.5.3 (2013-07-26)

Minor tweak in Makefile.

Version 4.5.2 (2012-09-24)

Handle redirects with codes 303 and 307.

Fixed a bug in handling place: URIs (do not append '//').

Version 4.5.1 (2011-12-28)

Read/write the Mozilla-specific date/time format in the JSON storage.

Version 4.5.0 (2011-12-18)

Encode international domain names with IDNA encoding.

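Python's standard "idna" codec illustrates the encoding; the domain below is the well-known Russian IDN test name, used here only as an example:

```python
# Non-ASCII labels are converted to their ASCII ("xn--...") Punycode
# form, which is what DNS and the socket library actually resolve.
hostname = "пример.испытание"
ascii_host = hostname.encode("idna").decode("ascii")
```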
Adapted to different Mozilla 'place:' URIs.

Version 4.4.0 (2011-01-07)

Moved BeautifulSoup.py and subproc.py from Robots/ to the top-level
directory.

Moved parse_html.py and its submodules to a separate parse_html package.

Added statistics code to parse_html, gathered statistics on parser
success/failure rates, and reordered the parsers.

Version 4.3.1 (2011-01-03)

Get favicon before HTML redirect (refresh).

Version 4.3.0 (2011-01-01)

Robots no longer have one global temporary file - there are at least two
(html and favicon), and in the future there will be more for asynchronous
robot(s) that would test many URLs in parallel.

Added HTML parser based on lxml.

Version 4.2.1 (2010-08-12)

Added HTML parser based on the html5 library.

Version 4.2.0 (2010-08-11)

New storage: json; it allows loading and storing Mozilla (Firefox) backup

Process HTTP error 307 as a temporary redirect.

Version 4.1.1 (2008-03-10)

Catch and report all errors.

Consider application/xhtml+xml as HTML.

Better handling of exceptions while looking up the icon.

Recode HTML entities.

Always use utf-8 as the default encoding.

Version 4.1.0 (2008-01-14)

Parser for HTML based on BeautifulSoup.

Changed the User-Agent header: I saw a number of sites that forbid
"Mozilla compatible" browsers. Added a number of fake headers to pretend
this is a real web browser - there are still stupid sites
that try to protect themselves from robots by analyzing headers.

Handle redirects while looking for the icon.

Handle float timeouts in HTML redirects.
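For illustration, the delay in an HTML refresh redirect may be fractional, so parsing it as an integer would fail; a minimal sketch (the markup value below is a made-up example, not the project's actual code):

```python
# <meta http-equiv="refresh" content="2.5; url=http://example.com/">
content = "2.5; url=http://example.com/"
delay_str, _, url_part = content.partition(";")
delay = float(delay_str.strip())  # int("2.5") would raise ValueError
url = url_part.strip()[len("url="):]
```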
The minimal required version of Python is now 2.5.

Version 4.0.0 (2007-10-20)

Extended support for Mozilla: charset and icon in bookmarks.
Use the charset to add an Accept-Charset header.
Retrieve favicon.ico (or whatever <link> points to) and store it.

The project celebrates its 10th anniversary!

Version 3.4.1 (2005-01-29)

Updated to Python 2.4. Switched from CVS to Subversion.

Version 3.4.0 (2004-09-23)

Extended support for Mozilla: keywords in bookmarks.
Updated to m_lib version 1.2.

parse_html.py can now recode unicode entities in titles.

HTML parser. If the protocol is HTTP, there is a Content-Type header, and
the content type is text/html, the object is parsed to extract its title; if
the Content-Type header has a charset, or if the HTML has a <META> with a
charset, the title is converted from the given charset to the default
charset. The <HEAD> is also parsed to extract a <META> tag with a redirect,

Complete rewrite from scratch. Created a mechanism for pluggable storage
managers, writers (DB dumpers/exporters), and robots.