Parse the downloaded file and extract additional information from the headers
and parsed data - the title, for example, or redirects declared with
<META HTTP-EQUIV>. (Partially done - the title is now extracted.)
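As a sketch, extracting the title and a <META HTTP-EQUIV="Refresh"> redirect
could be done with the stdlib html.parser module; the class name and the
attributes `title`/`redirect` below are illustrative, not the project's actual
interface:

```python
from html.parser import HTMLParser

class TitleRedirectParser(HTMLParser):
    """Extract <title> text and a <META HTTP-EQUIV="Refresh"> redirect URL."""
    def __init__(self):
        super().__init__()
        self.title = None      # text of the first <title> element, if any
        self.redirect = None   # target of a meta-refresh redirect, if any
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        # html.parser lowercases tag and attribute names for us
        if tag == "title" and self.title is None:
            self._in_title = True
            self.title = ""
        elif tag == "meta":
            d = dict(attrs)
            if d.get("http-equiv", "").lower() == "refresh":
                # content typically looks like "0; url=http://example.com/new"
                for part in d.get("content", "").split(";"):
                    part = part.strip()
                    if part.lower().startswith("url="):
                        self.redirect = part[4:]

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

parser = TitleRedirectParser()
parser.feed('<html><head><title>Example</title>'
            '<meta http-equiv="Refresh" content="0; url=http://example.com/new">'
            '</head></html>')
```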
Merge "writers" into storage managers.
New storage managers: shelve, SQL, ZODB, MetaKit.
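The first of those backends (shelve) is in the stdlib, so a storage manager
over it might look like this minimal sketch; the class and method names are
assumptions for illustration, not the project's real interface:

```python
import os
import shelve
import tempfile

class ShelveStorage:
    """Minimal storage-manager sketch backed by the stdlib shelve module."""
    def __init__(self, filename):
        self.filename = filename

    def store(self, bookmarks):
        # bookmarks: mapping of URL -> record (title, last status, ...)
        with shelve.open(self.filename) as db:
            for url, record in bookmarks.items():
                db[url] = record

    def load(self):
        # Return a plain dict snapshot of everything stored.
        with shelve.open(self.filename) as db:
            return dict(db)

_tmpdir = tempfile.mkdtemp()
store = ShelveStorage(os.path.join(_tmpdir, "bookmarks"))
store.store({"http://example.com/": {"title": "Example"}})
loaded = store.load()
```

A SQL, ZODB, or MetaKit backend would expose the same store/load surface and
differ only in how records are persisted.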
Robots (URL checkers): threading-based and asyncore-based.
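A threading-based checker could be sketched like this with a work queue and a
small worker pool; the function name and result shape are hypothetical:

```python
import queue
import threading
import urllib.request

def check_urls(urls, num_workers=4, timeout=10):
    """Check each URL in parallel threads; return {url: HTTP status or error text}."""
    tasks = queue.Queue()
    for url in urls:
        tasks.put(url)
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                url = tasks.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    status = resp.status
            except Exception as exc:
                status = str(exc)  # record the failure instead of crashing
            with lock:
                results[url] = status

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

demo = check_urls(["not-a-url"], num_workers=2)
```

An asyncore-based robot would instead multiplex many sockets in one thread,
trading simplicity for lower per-connection overhead.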
Aliases in bookmarks.html.
Configuration file for defaults - global defaults for the whole system and
local defaults for individual subsystems.
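The stdlib configparser already supports exactly this layering: keys in the
[DEFAULT] section act as global defaults, and each subsystem section can
override them locally. The section and option names here are illustrative:

```python
import configparser

cfg_text = """
[DEFAULT]
timeout = 30
log_level = info

[robot]
timeout = 10
workers = 4

[storage]
backend = shelve
"""

config = configparser.ConfigParser()
config.read_string(cfg_text)

robot_timeout = config.getint("robot", "timeout")      # local override
storage_timeout = config.getint("storage", "timeout")  # falls back to [DEFAULT]
```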
Ruleset-based mechanisms to decide which URLs to check: filtering based on
URL scheme, host, port, path, filename, extension, etc.
Detailed reports on the robot run - what's old, what's new, what was moved, etc.
WWW-interface to the report.
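The old/new/moved breakdown is essentially set arithmetic over two successive
run snapshots; a report could start from something like this sketch, where the
{url: status} snapshot shape is an assumption (a changed status such as a 301
would suggest a moved bookmark):

```python
def diff_runs(previous, current):
    """Compare two {url: status} snapshots from successive robot runs."""
    prev_urls, cur_urls = set(previous), set(current)
    return {
        "new": sorted(cur_urls - prev_urls),
        "gone": sorted(prev_urls - cur_urls),
        "changed": sorted(u for u in prev_urls & cur_urls
                          if previous[u] != current[u]),
    }

report = diff_runs(
    {"http://a.example/": 200, "http://b.example/": 200},
    {"http://b.example/": 301, "http://c.example/": 200},
)
```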
Bigger database. Multiuser database. The robot should operate on a part of
the database.
WWW-interface to the database. Users will import/export/edit bookmarks,
schedule robot runs, etc.