X-Git-Url: https://git.phdru.name/?a=blobdiff_plain;f=doc%2FANNOUNCE;h=d43b78563a6e6f83b89fc723320f6695e7aaea13;hb=ed37b71ee49220545b4bad810a0cc6b29e78669b;hp=5160ef7106d0c55093bb581ceadf0e15125d979b;hpb=387f77d110986aa12967c9cd788ab0e4f41f2be2;p=bookmarks_db.git

diff --git a/doc/ANNOUNCE b/doc/ANNOUNCE
index 5160ef7..d43b785 100644
--- a/doc/ANNOUNCE
+++ b/doc/ANNOUNCE
@@ -2,79 +2,35 @@ Bookmarks Database and Internet Robot

 WHAT IS IT
-   There is a set of classes, libraries, programs and plugins I use to
-manipulate my bookmarks.html. I like Netscape Navigator, but I need more
-features, so I write and maintain these programs for my needs. I need to
-extend Navigator's "What's new" feature (Navigator 4 calls it "Update
-bookmarks").
+   A set of classes, libraries, programs and plugins I use to manipulate my
+bookmarks.html.

+WHAT'S NEW in version 4.3.1 (2011-01-03).
-WHAT'S NEW in version 3.4.0
-   Updated to m_lib version 1.2. Extended support for Mozilla.
+Get favicon before HTML redirect (refresh).
+Get favicon even if it's of a wrong type; many sites return favicon as
+text/plain or application/*; the only exception is text/html which is usually
+an error page instead of error 404.

-WHAT'S NEW in version 3.3.2
-   parse_html.py can now recode unicode entities in titles.
+WHAT'S NEW in version 4.3.0 (2011-01-01).

-WHAT'S NEW in version 3.3.0
-   Required Python 2.2.
-   HTML parser. If the protocol is HTTP, and there is Content-Type header, and
-content type is text/html, the object is parsed to extract its title; if the
-Content-Type header has charset, or if the HTML has <meta> with charset, the
-title is converted from the given charset to the default charset. The object is
-also parsed to extract <meta> tag with redirect.
-
-
-WHAT'S NEW in version 3.0
-   Complete rewrite from scratch. Created mechanism for pluggable storage
-managers, writers (DB dumpers/exporters) and robots.
+Robots no longer have one global temporary file - there are at least two
+(html and favicon), and in the future there will be more for
+asynchronous robot(s) that will test many URLs in parallel.

 WHERE TO GET
-   Master site: http://phd.pp.ru/Software/Python/#bookmarks_db
-
-   Faster mirrors: http://phd.by.ru/Software/Python/#bookmarks_db
-                   http://phd2.chat.ru/Software/Python/#bookmarks_db
-
+   Master site: http://phdru.name/Software/Python/#bookmarks_db
+   Mirrors: http://phd.webhost.ru/Software/Python/#bookmarks_db
+            http://phd.by.ru/Software/Python/#bookmarks_db

 AUTHOR
-   Oleg Broytmann
+   Oleg Broytman

 COPYRIGHT
-   Copyright (C) 1997-2002 PhiloSoft Design
+   Copyright (C) 1997-2011 PhiloSoft Design

 LICENSE
    GPL
-
-STATUS
-   Storage managers: pickle, FLAD (Flat ASCII Database).
-   Writers: HTML, text, FLAD (full database or only errors).
-   Robots (URL checker): simple, simple+timeoutscoket, forking.
-
-TODO
-   Parse downloaded file and get some additional information out of headers
-   and parsed data - title, for example. Or redirects using <meta>.
-   (Partially done - now extracting title).
-
-   Documentation.
-
-   Merge "writers" to storage managers.
-   New storage managers: shelve, SQL, ZODB, MetaKit.
-   Robots (URL checkers): threading, asyncore-based.
-   Aliases in bookmarks.html.
-
-   Configuration file for configuring defaults - global defaults for the system
-   and local defaults for subsystems.
-
-   Ruleset-based mechanisms to filter out what types of URLs to check: checking
-   based on URL schema, host, port, path, filename, extension, etc.
-
-   Detailed reports on robot run - what's old, what's new, what was moved,
-   errors, etc.
-   WWW-interface to the report.
-
-   Bigger database. Multiuser database. Robot should operate on a part of
-   the DB.
-   WWW-interface to the database. User will import/export/edit bookmarks,
-   schedule robot run, etc.
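
The favicon handling described in the 4.3.1 notes comes down to a simple
content-type heuristic: keep whatever the server returns unless it claims to
be text/html, which is almost always an error page rather than a real icon.
Below is a minimal Python sketch of that idea; the function and variable
names are assumptions for illustration, not the actual bookmarks_db robot
code (which targeted Python 2 at the time).

    # Sketch only: illustrates the content-type heuristic from the 4.3.1
    # notes; names and the use of urllib are assumptions, not the real
    # bookmarks_db robot code.
    import urllib.request
    from urllib.parse import urljoin

    def fetch_favicon(page_url, icon_path="/favicon.ico"):
        """Accept the favicon unless the server answers with text/html,
        which is almost always an error page, not an icon."""
        icon_url = urljoin(page_url, icon_path)
        with urllib.request.urlopen(icon_url) as response:
            content_type = response.headers.get("Content-Type", "")
            # Many sites mislabel icons as text/plain or application/*;
            # only text/html is treated as a failure.
            if content_type.split(";")[0].strip().lower() == "text/html":
                return None
            return response.read()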
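
The 4.3.0 change replaces the single global temporary file with per-robot
files, so that several robots, and eventually parallel fetches, cannot
clobber each other's downloads. A hedged sketch of that pattern using the
standard tempfile module follows; the class and attribute names are made up
for illustration and do not come from the project.

    # Sketch of per-robot temporary files (one for the downloaded HTML,
    # one for the favicon) instead of a single global temporary file;
    # class and attribute names are illustrative, not the project's API.
    import os
    import tempfile

    class Robot:
        def __init__(self):
            # Each robot instance owns its own files, so several robots
            # (or, later, parallel fetches) never overwrite each other.
            fd, self.html_tmp = tempfile.mkstemp(suffix=".html")
            os.close(fd)
            fd, self.icon_tmp = tempfile.mkstemp(suffix=".ico")
            os.close(fd)

        def cleanup(self):
            for path in (self.html_tmp, self.icon_tmp):
                if os.path.exists(path):
                    os.remove(path)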