X-Git-Url: https://git.phdru.name/?a=blobdiff_plain;f=doc%2FTODO;h=5705e2139cc941f8b1d3a53ae3d0d15fd1c75120;hb=a3bdba992715f6282e112e06a2beee15d20b69ca;hp=887e5a9b0afc9a4790418482df7ecd86ab553491;hpb=fb5c3b2b91aeeb615d6d6d890491af3fdff69556;p=bookmarks_db.git

diff --git a/doc/TODO b/doc/TODO
index 887e5a9..5705e21 100644
--- a/doc/TODO
+++ b/doc/TODO
@@ -1,25 +1,34 @@
- Parse downloaded file and get some additional information out of headers
- and parsed data - title, for example. Or redirects using <META HTTP-EQUIV="Refresh">.
- (Partially done - now extracting title.)
+Switch simple robot to urllib2.
 
- Documentation.
+A new robot based on PycURL.
 
- Merge "writers" to storage managers.
- New storage managers: shelve, SQL, ZODB, MetaKit.
- Robots (URL checkers): threading, asyncore-based.
- Aliases in bookmarks.html.
+HTML parser based on BeautifulSoup4.
 
- Configuration file for configuring defaults - global defaults for the system
- and local defaults for subsystems.
+A program to publish bookmarks with icons.
 
- Ruleset-based mechanisms to filter out what types of URLs to check: checking
- based on URL schema, host, port, path, filename, extension, etc.
+Fetch description from <META NAME="description"> and store it in
+bookmark.description if the description is empty. (How to update old
+descriptions without replacing my own comments?)
 
- Detailed reports on robot run - what's old, what's new, what was moved,
- errors, etc.
- WWW-interface to the report.
+Parse (or interpret) downloaded file and get JavaScript redirects.
 
- Bigger database. Multiuser database. Robot should operate on a part of
- the DB.
- WWW-interface to the database. User will import/export/edit bookmarks,
- schedule robot run, etc.
+More and better documentation.
+
+Merge "writers" to storage managers.
+New storage managers: shelve, SQL, ZODB, MetaKit.
+More robots (URL checkers): threading, asyncore-based;
+robots that test many URLs in parallel.
+Configuration file to configure defaults - global defaults for the system
+and local defaults for subsystems.
+
+Ruleset-based mechanisms to filter out what types of URLs to check: checking
+based on URL schema, host, port, path, filename, extension, etc.
+
+Detailed reports on robot run - what's old, what's new, what has been moved,
+errors, etc.
+WWW-interface to the report.
+
+Bigger database. Multiuser database. Robot should operate on a part of
+the DB.
+WWW-interface to the database. User should import/export/edit bookmarks,
+schedule robot run, etc.
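
Below are illustrative sketches for some of the items above. They are not
part of the patch, and every function name in them is hypothetical.

For "Switch simple robot to urllib2" - a minimal fetch with urllib2
(Python 2; the timeout value is an arbitrary choice):

    import urllib2

    def fetch(url, timeout=30):
        """Fetch a URL; return (final URL after redirects, page data)."""
        # urlopen raises urllib2.URLError on network or HTTP failure
        response = urllib2.urlopen(url, timeout=timeout)
        try:
            # geturl() reflects any HTTP redirects urllib2 followed
            return response.geturl(), response.read()
        finally:
            response.close()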
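For "A new robot based on PycURL" - a rough sketch of a libcurl-backed
fetch; the option values (timeouts, redirect limit) are arbitrary:

    import pycurl
    from io import BytesIO

    def fetch(url):
        """Fetch a URL with libcurl; return (HTTP status code, body bytes)."""
        buf = BytesIO()
        curl = pycurl.Curl()
        curl.setopt(pycurl.URL, url)
        curl.setopt(pycurl.FOLLOWLOCATION, 1)  # follow HTTP redirects
        curl.setopt(pycurl.MAXREDIRS, 5)
        curl.setopt(pycurl.CONNECTTIMEOUT, 30)
        curl.setopt(pycurl.TIMEOUT, 60)
        curl.setopt(pycurl.WRITEFUNCTION, buf.write)  # collect the body
        try:
            curl.perform()
            return curl.getinfo(pycurl.HTTP_CODE), buf.getvalue()
        finally:
            curl.close()

PycURL also provides CurlMulti, which could cover the "robots that test
many URLs in parallel" item without threads.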
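For "HTML parser based on BeautifulSoup4" and the <META NAME="description">
item - a sketch of pulling the title and description out of a page
(bookmark.description is the field named in the TODO; the rest is assumed):

    from bs4 import BeautifulSoup

    def parse_html(html):
        """Return (title, description) extracted from an HTML page."""
        soup = BeautifulSoup(html, "html.parser")
        title = None
        if soup.title and soup.title.string:
            title = soup.title.string.strip()
        # match name="description" case-insensitively - pages vary
        meta = soup.find(
            "meta",
            attrs={"name": lambda v: v and v.lower() == "description"})
        description = meta.get("content") if meta else None
        return title, description

Per the TODO, the extracted description would only be stored when
bookmark.description is empty, so hand-written comments are not overwritten.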