TODO list for s3cmd project
===========================

- For 0.9.9
  - Recursive processing / multiple sources with most commands.
    - Incl. recursive cp/mv on remote "folders".
  - Sanitize put/get behaviour.
    For instance 'get s3://../blah/x.jpg' should save it to
    x.jpg and not attempt blah/x.jpg.
    Similarly, 'put /foo/bar/xyz.jpg s3://bucket/dome/path/'
    should save it to s3://bucket/dome/path/xyz.jpg.
    (See the path-handling sketch after this list.)
  - Sync should work for a single file, for example:
    s3cmd sync /etc/passwd s3://bucket/passwd
  - Document --recursive and --force for buckets
  - Allow changing /tmp to somewhere else.
  - With --guess-mime, use the 'magic' module if available
    (see the MIME sketch after this list).
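
  Path-handling sketch: a minimal illustration of the basename
  handling described above, assuming a plain "s3://bucket/key" URI
  layout; the helper names are hypothetical, not s3cmd internals.

    import os
    import posixpath

    def local_name_for_get(s3_uri):
        # 'get s3://bucket/blah/x.jpg' should save to ./x.jpg,
        # i.e. use only the basename of the remote key.
        key = s3_uri[len("s3://"):].split("/", 1)[1]
        return posixpath.basename(key)

    def remote_key_for_put(local_path, s3_uri):
        # 'put /foo/bar/xyz.jpg s3://bucket/dome/path/' should
        # upload to s3://bucket/dome/path/xyz.jpg whenever the
        # destination ends with '/'.
        if s3_uri.endswith("/"):
            return s3_uri + os.path.basename(local_path)
        return s3_uri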
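
  MIME sketch: one way --guess-mime could prefer 'magic' and fall
  back to the stdlib.  This assumes the python-magic API
  (magic.from_file); the libmagic bindings shipped with file(1)
  expose a different interface, hence the AttributeError catch.

    import mimetypes

    def guess_mime_type(filename):
        try:
            import magic
            return magic.from_file(filename, mime=True)
        except (ImportError, AttributeError):
            guessed = mimetypes.guess_type(filename)[0]
            return guessed or "application/octet-stream"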

- For 1.0.0
  - Add --include/--include-from/--rinclude* for sync
  - Add 'setacl' command.
  - Add commands for CloudFront.

- After 1.0.0
  - Speed up upload / download with multiple threads
    (see the thread-pool sketch after this list).
  - Sync should be able to update metadata (UID, timestamps, etc.)
    if only these change (i.e. same content, different metadata).
  - Sync must back up non-files as well: at least directories,
    symlinks and device nodes.
  - Keep backup files remotely on put/sync-to if requested
    (move the old 'object' to e.g. 'object~' and only then upload
    the new one).  Could be extended to keep, say, the last 5
    copies.  (See the backup sketch after this list.)
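
  Thread-pool sketch: parallel uploads with a worker pool, using
  concurrent.futures.  upload_file() is a hypothetical stand-in
  for the real per-file upload routine.

    from concurrent.futures import ThreadPoolExecutor

    def upload_file(local_path, remote_uri):
        ...  # hypothetical: PUT one file to its S3 destination

    def upload_all(transfers, workers=4):
        # transfers: a list of (local_path, remote_uri) pairs
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(upload_file, src, dst)
                       for src, dst in transfers]
            for future in futures:
                future.result()  # re-raise any upload error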
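
  Backup sketch: the move-aside-then-upload idea from the last
  item.  copy_object() and upload_file() are hypothetical helpers
  wrapping S3's server-side COPY (PUT with x-amz-copy-source) and
  an ordinary PUT.

    def copy_object(src_uri, dst_uri):
        ...  # hypothetical: server-side COPY

    def upload_file(local_path, remote_uri):
        ...  # hypothetical: ordinary PUT of the local file

    def put_with_backup(local_path, remote_uri):
        # Move the old object aside first (S3 has no rename, so
        # this is a COPY; the source is then overwritten) ...
        copy_object(remote_uri, remote_uri + "~")
        # ... and only then upload the new content.  Keeping the
        # last N copies would rotate object~1..object~N similarly.
        upload_file(local_path, remote_uri)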

- Implement GPG for sync.
  (It's not that easy, since we can't compare the size of an
   encrypted remote object with that of the local file.
   Either we store the metadata in a dedicated file, where we
   face a risk of inconsistencies, or we store the metadata
   encrypted in each object's headers, where we'd have to make
   a large number of object HEAD requests.  Tough call.)
  Alternatively, we could only compare local timestamps with
  remote object timestamps: if the local one is older, we
  *assume* the file hasn't changed.  (See the timestamp sketch
  below.)
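
  Timestamp sketch: the assume-unchanged fallback described above.
  remote_mtime() is a hypothetical helper returning the recorded
  upload time of the encrypted remote object.

    import os

    def remote_mtime(remote_uri):
        ...  # hypothetical: fetch the stored upload timestamp

    def needs_upload(local_path, remote_uri):
        # If the local file is not newer than the remote copy,
        # *assume* its content is unchanged and skip the upload.
        return os.path.getmtime(local_path) > remote_mtime(remote_uri)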

- Keep the man page up to date and write some more documentation
  - Yeah, right ;-)