
* README: Updated for 0.9.9
* s3cmd, S3/PkgInfo.py, s3cmd.1: Replaced project URLs with http://s3tools.org
* NEWS: Improved message.

git-svn-id: https://s3tools.svn.sourceforge.net/svnroot/s3tools/s3cmd/trunk@372 830e0280-6d2a-0410-9c65-932aecc39d9d

Michal Ludvig authored on 2009/02/14 12:16:42
Showing 6 changed files
... ...
@@ -1,3 +1,10 @@
+2009-02-14  Michal Ludvig  <michal@logix.cz>
+
+	* README: Updated for 0.9.9
+	* s3cmd, S3/PkgInfo.py, s3cmd.1: Replaced project 
+	  URLs with http://s3tools.org
+	* NEWS: Improved message.
+
 2009-02-12  Michal Ludvig  <michal@logix.cz>
 
 	* s3cmd: Added --list-md5 for 'ls' command.
... ...
@@ -5,8 +5,8 @@ s3cmd 0.9.9
 
 s3cmd 0.9.9-rc3  - 2009-02-02
 ===============
-* Fixed crash in S3Error().__str__() (typically Amazon's Internal
-  errors, etc).
+* Fixed crash: AttributeError: 'S3Error' object has no attribute '_message'
+  (bug  #2547322)
 
 s3cmd 0.9.9-rc2  - 2009-01-30
 ===============
... ...
@@ -5,10 +5,17 @@ Author:
     Michal Ludvig <michal@logix.cz>
 
 S3tools / S3cmd project homepage:
-    http://s3tools.sourceforge.net
+    http://s3tools.org
 
-S3tools / S3cmd mailing list:
-    s3tools-general@lists.sourceforge.net
+S3tools / S3cmd mailing lists:
+    * Announcements of new releases:
+        s3tools-announce@lists.sourceforge.net
+
+    * General questions and discussion about usage
+        s3tools-general@lists.sourceforge.net
+
+    * Bug reports
+        s3tools-bugs@lists.sourceforge.net
 
 Amazon S3 homepage:
     http://aws.amazon.com/s3
... ...
@@ -79,41 +86,84 @@ to be like US$0.03 or even nil.
 
 That's the pricing model of Amazon S3 in a nutshell. Check
 Amazon S3 homepage at http://aws.amazon.com/s3 for more 
-details. 
+details.
 
 Needless to say that all these money are charged by Amazon 
 itself, there is obviously no payment for using S3cmd :-)
 
 Amazon S3 basics
 ----------------
-Files stored in S3 are called "objects" and their names are 
-officially called "keys". Each object belongs to exactly one
-"bucket". Buckets are kind of directories or folders with 
-some restrictions: 1) each user can only have 100 buckets at 
-the most, 2) bucket names must be unique amongst all users 
-of S3, 3) buckets can not be nested into a deeper
-hierarchy and 4) a name of a bucket can only consist of basic 
-alphanumeric characters plus dot (.) and dash (-). No spaces,
-no accented or UTF-8 letters, etc.
-
-On the other hand there are almost no restrictions on object 
-names ("keys"). These can be any UTF-8 strings of up to 1024 
-bytes long. Interestingly enough the object name can contain
-forward slash character (/) thus a "my/funny/picture.jpg" is
-a valid object name. Note that there are not directories nor
-buckets called "my" and "funny" - it is really a single object 
-name called "my/funny/picture.jpg" and S3 does not care at 
-all that it _looks_ like a directory structure.
+Files stored in S3 are called "objects" and their names are
+officially called "keys". Since this is sometimes confusing
+for the users we often refer to the objects as "files" or
+"remote files". Each object belongs to exactly one "bucket".
 
 To describe objects in S3 storage we invented a URI-like
 schema in the following form:
 
+    s3://BUCKET
+or
     s3://BUCKET/OBJECT
 
-See the HowTo later in this document for example usages of 
-this S3-URI schema.
+Buckets
+-------
+Buckets are sort of like directories or folders with some 
+restrictions:
+1) each user can only have 100 buckets at the most, 
+2) bucket names must be unique amongst all users of S3, 
+3) buckets can not be nested into a deeper hierarchy and 
+4) a name of a bucket can only consist of basic alphanumeric 
+   characters plus dot (.) and dash (-). No spaces, no accented
+   or UTF-8 letters, etc. 
+
+It is a good idea to use DNS-compatible bucket names. That
+for instance means you should not use upper case characters.
+While DNS compliance is not strictly required some features
+described below are not available for DNS-incompatible named
+buckets. One more step further is using a fully qualified
+domain name (FQDN) for a bucket - that has even more benefits.
+
+* For example "s3://--My-Bucket--" is not DNS compatible.
+* On the other hand "s3://my-bucket" is DNS compatible but 
+  is not FQDN.
+* Finally "s3://my-bucket.s3tools.org" is DNS compatible 
+  and FQDN provided you own the s3tools.org domain and can
+  create the domain record for "my-bucket.s3tools.org".
+
+Look for "Virtual Hosts" later in this text for more details 
+regarding FQDN named buckets.
+
+Objects (files stored in Amazon S3)
+-----------------------------------
+Unlike for buckets there are almost no restrictions on object 
+names. These can be any UTF-8 strings of up to 1024 bytes long. 
+Interestingly enough the object name can contain forward
+slash character (/) thus a "my/funny/picture.jpg" is a valid
+object name. Note that there are no directories nor
+buckets called "my" and "funny" - it is really a single object 
+name called "my/funny/picture.jpg" and S3 does not care at 
+all that it _looks_ like a directory structure.
+
+The full URI of such an image could be, for example:
 
-Simple S3cmd HowTo
+    s3://my-bucket/my/funny/picture.jpg
+
+Public vs Private files
+-----------------------
+The files stored in S3 can be either Private or Public. The 
+Private ones are readable only by the user who uploaded them
+while the Public ones can be read by anyone. Additionally the
+Public files can be accessed using HTTP protocol, not only
+using s3cmd or a similar tool.
+
+The ACL (Access Control List) of a file can be set at the 
+time of upload using --acl-public or --acl-private options 
+with 's3cmd put' or 's3cmd sync' commands (see below).
+
+Alternatively the ACL can be altered for existing remote files
+with 's3cmd setacl --acl-public' (or --acl-private) command.
+
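As an aside, a minimal sketch of these two approaches, reusing the
example bucket from the HowTo below; the file name is made up and no
command output is shown, since these lines are illustrative rather
than captured from a real session:

   ~$ s3cmd put --acl-public some-report.html s3://public.s3tools.org/report.html
   ~$ s3cmd setacl --acl-private s3://public.s3tools.org/report.html
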
+Simple s3cmd HowTo
 ------------------
 1) Register for Amazon AWS / S3
    Go to http://aws.amazon.com/s3, click the "Sign up
... ...
@@ -121,7 +171,7 @@ Simple S3cmd HowTo
    through the registration. You will have to supply 
    your Credit Card details in order to allow Amazon 
    charge you for S3 usage. 
-   At the end you should posses your Access and Secret Keys
+   At the end you should have your Access and Secret Keys
 
 2) Run "s3cmd --configure"
    You will be asked for the two keys - copy and paste 
... ...
@@ -135,66 +185,137 @@ Simple S3cmd HowTo
    you as of now. So the output will be empty.
 
 4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
-   As mentioned above bucket names must be unique amongst 
+   As mentioned above the bucket names must be unique amongst 
    _all_ users of S3. That means the simple names like "test" 
    or "asdf" are already taken and you must make up something 
-   more original. I sometimes prefix my bucket names with
-   my e-mail domain name (logix.cz) leading to a bucket name,
-   for instance, 'logix.cz-test':
+   more original. To demonstrate as many features as possible
+   let's create a FQDN-named bucket s3://public.s3tools.org:
 
-   ~$ s3cmd mb s3://logix.cz-test
-   Bucket 'logix.cz-test' created
+   ~$ s3cmd mb s3://public.s3tools.org
+   Bucket 's3://public.s3tools.org' created
 
 5) List your buckets again with "s3cmd ls"
    Now you should see your freshly created bucket
 
    ~$ s3cmd ls
-   2007-01-19 01:41  s3://logix.cz-test
+   2009-01-28 12:34  s3://public.s3tools.org
 
 6) List the contents of the bucket
 
-   ~$ s3cmd ls s3://logix.cz-test
-   Bucket 'logix.cz-test':
+   ~$ s3cmd ls s3://public.s3tools.org
    ~$ 
 
    It's empty, indeed.
 
-7) Upload a file into the bucket
+7) Upload a single file into the bucket:
+
+   ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
+   some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
+    123456 of 123456   100% in    2s    51.75 kB/s  done
+
+   Upload two directory trees into the bucket's virtual 'directory':
+
+   ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
+   File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
+   File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
+   File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
+   File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
+   File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
+
+   As you can see we didn't have to create the /somewhere
+   'directory'. In fact it's only a filename prefix, not 
+   a real directory and it doesn't have to be created in
+   any way beforehand.
+
+8) Now list the bucket contents again:
 
-   ~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
-   File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)
+   ~$ s3cmd ls s3://public.s3tools.org
+                          DIR   s3://public.s3tools.org/somewhere/
+   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
 
-8) Now we can list the bucket contents again
+   Use --recursive (or -r) to list all the remote files:
 
-   ~$ s3cmd ls s3://logix.cz-test
-   Bucket 'logix.cz-test':
-   2007-01-19 01:46       120k  s3://logix.cz-test/addrbook.xml
+   ~$ s3cmd ls s3://public.s3tools.org
+   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
+   2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
+   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
+   2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
+   2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
+   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt
 
-9) Retrieve the file back and verify that its hasn't been 
-   corrupted
+9) Retrieve one of the files back and verify that it hasn't been 
+   corrupted:
 
-   ~$ s3cmd get s3://logix.cz-test/addrbook.xml addressbook-2.xml
-   Object s3://logix.cz-test/addrbook.xml saved as 'addressbook-2.xml' (123456 bytes)
+   ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
+   s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
+    123456 of 123456   100% in    3s    35.75 kB/s  done
 
-   ~$ md5sum addressbook.xml addressbook-2.xml
-   39bcb6992e461b269b95b3bda303addf  addressbook.xml
-   39bcb6992e461b269b95b3bda303addf  addressbook-2.xml
+   ~$ md5sum some-file.xml some-file-2.xml
+   39bcb6992e461b269b95b3bda303addf  some-file.xml
+   39bcb6992e461b269b95b3bda303addf  some-file-2.xml
 
    Checksums of the original file matches the one of the 
    retrieved one. Looks like it worked :-)
 
-10) Clean up: delete the object and remove the bucket
+   To retrieve a whole 'directory tree' from S3 use recursive get:
 
-   ~$ s3cmd rb s3://logix.cz-test
-   ERROR: S3 error: 409 (Conflict): BucketNotEmpty
+   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere
+   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
+   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
+   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
+   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
+   File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
 
-   Ouch, we can only remove empty buckets!
+   Since the destination directory wasn't specified s3cmd 
+   saved the directory structure in the current working 
+   directory ('.'). 
 
-   ~$ s3cmd del s3://logix.cz-test/addrbook.xml
-   Object s3://logix.cz-test/addrbook.xml deleted
+   There is an important difference between:
+      get s3://public.s3tools.org/somewhere
+   and
+      get s3://public.s3tools.org/somewhere/
+   (note the trailing slash)
+   S3cmd always uses the last path part, ie the word
+   after the last slash, for naming files.
+ 
+   In the case of s3://.../somewhere the last path part 
+   is 'somewhere' and therefore the recursive get names
+   the local files as somewhere/dir1, somewhere/dir2, etc.
 
-   ~$ s3cmd rb s3://logix.cz-test
-   Bucket 'logix.cz-test' removed
+   On the other hand in s3://.../somewhere/ the last path
+   part is empty and s3cmd will only create 'dir1' and 'dir2' 
+   without the 'somewhere/' prefix:
+
+   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ /tmp
+   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
+   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
+   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
+   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
+
+   See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it 
+   was in the previous example.
+
+10) Clean up - delete the remote files and remove the bucket:
+
+   Remove everything under s3://public.s3tools.org/somewhere/
+
+   ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
+   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
+   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
+   ...
+
+   Now try to remove the bucket:
+
+   ~$ s3cmd rb s3://public.s3tools.org
+   ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
+
+   Ouch, we forgot about s3://public.s3tools.org/somefile.xml
+   We can force the bucket removal anyway:
+
+   ~$ s3cmd rb --force s3://public.s3tools.org/
+   WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
+   File s3://public.s3tools.org/somefile.xml deleted
+   Bucket 's3://public.s3tools.org/' removed
 
 Hints
 -----
... ...
@@ -207,44 +328,10 @@ its bonet run it with -d to see all 'debugging' output.
 
 After configuring it with --configure all available options
 are spitted into your ~/.s3cfg file. It's a text file ready
-to be modified in your favourite text editor. 
-
-Multiple local files may be specified for "s3cmd put" 
-operation. In that case the S3 URI should only include
-the bucket name, not the object part:
-
-~$ s3cmd put file-* s3://logix.cz-test/
-File 'file-one.txt' stored as s3://logix.cz-test/file-one.txt (4 bytes)
-File 'file-two.txt' stored as s3://logix.cz-test/file-two.txt (4 bytes)
-
-Alternatively if you specify the object part as well it 
-will be treated as a prefix and all filenames given on the
-command line will be appended to the prefix making up 
-the object name. However --force option is required in this
-case:
-
-~$ s3cmd put --force file-* s3://logix.cz-test/prefixed:
-File 'file-one.txt' stored as s3://logix.cz-test/prefixed:file-one.txt (4 bytes)
-File 'file-two.txt' stored as s3://logix.cz-test/prefixed:file-two.txt (4 bytes)
-
-This prefixing mode works with "s3cmd ls" as well:
-
-~$ s3cmd ls s3://logix.cz-test
-Bucket 'logix.cz-test':
-2007-01-19 02:12         4   s3://logix.cz-test/file-one.txt
-2007-01-19 02:12         4   s3://logix.cz-test/file-two.txt
-2007-01-19 02:12         4   s3://logix.cz-test/prefixed:file-one.txt
-2007-01-19 02:12         4   s3://logix.cz-test/prefixed:file-two.txt
-
-Now with a prefix to list only names beginning with "file-":
-
-~$ s3cmd ls s3://logix.cz-test/file-*
-Bucket 'logix.cz-test':
-2007-01-19 02:12         4   s3://logix.cz-test/file-one.txt
-2007-01-19 02:12         4   s3://logix.cz-test/file-two.txt
+to be modified in your favourite text editor.
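To give a rough idea of its format, here is a short, illustrative
fragment of such a file; only the two access keys asked for by
--configure are shown (with made-up placeholder values), and the
real file contains many more options:

   ~$ cat ~/.s3cfg
   [default]
   access_key = <your Access Key>
   secret_key = <your Secret Key>
   ...
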
 
 For more information refer to:
-* S3cmd / S3tools homepage at http://s3tools.sourceforge.net
+* S3cmd / S3tools homepage at http://s3tools.org
 * Amazon S3 homepage at http://aws.amazon.com/s3
 
 Enjoy!
... ...
@@ -1,6 +1,6 @@
 package = "s3cmd"
 version = "0.9.9-rc3"
-url = "http://s3tools.logix.cz"
+url = "http://s3tools.org"
 license = "GPL version 2"
 short_description = "Command line tool for managing Amazon S3 and CloudFront services"
 long_description = """
... ...
@@ -485,7 +485,7 @@ def cmd_object_del(args):
 			if Config().recursive and not Config().force:
 				raise ParameterError("Please use --force to delete ALL contents of %s" % uri)
 			elif not Config().recursive:
-				raise ParameterError("Object name required, not only the bucket name")
+				raise ParameterError("File name required, not only the bucket name")
 		subcmd_object_del_uri(uri)
 
 def subcmd_object_del_uri(uri, recursive = None):
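A side note on the check above: deleting everything in a bucket with
the 'del' command therefore needs both flags. A rough, illustrative
invocation (bucket name reused from the README examples) would be:

   ~$ s3cmd del --recursive --force s3://public.s3tools.org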
... ...
@@ -504,7 +504,7 @@ def subcmd_object_del_uri(uri, recursive = None):
 		uri_list.append(uri)
 	for _uri in uri_list:
 		response = s3.object_delete(_uri)
-		output(u"Object %s deleted" % _uri)
+		output(u"File %s deleted" % _uri)
 
 def subcmd_cp_mv(args, process_fce, message):
 	src_uri = S3Uri(args.pop(0))
... ...
@@ -526,11 +526,11 @@ def subcmd_cp_mv(args, process_fce, message):
 
 def cmd_cp(args):
 	s3 = S3(Config())
-	subcmd_cp_mv(args, s3.object_copy, "Object %(src)s copied to %(dst)s")
+	subcmd_cp_mv(args, s3.object_copy, "File %(src)s copied to %(dst)s")
 
 def cmd_mv(args):
 	s3 = S3(Config())
-	subcmd_cp_mv(args, s3.object_move, "Object %(src)s moved to %(dst)s")
+	subcmd_cp_mv(args, s3.object_move, "File %(src)s moved to %(dst)s")
 
 def cmd_info(args):
 	s3 = S3(Config())
... ...
@@ -1277,10 +1277,10 @@ def get_commands_list():
 	#{"cmd":"mkdir", "label":"Make a virtual S3 directory", "param":"s3://BUCKET/path/to/dir", "func":cmd_mkdir, "argc":1},
 	{"cmd":"sync", "label":"Synchronize a directory tree to S3", "param":"LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR", "func":cmd_sync, "argc":2},
 	{"cmd":"du", "label":"Disk usage by buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_du, "argc":0},
-	{"cmd":"info", "label":"Get various information about Buckets or Objects", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
+	{"cmd":"info", "label":"Get various information about Buckets or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
 	{"cmd":"cp", "label":"Copy object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_cp, "argc":2},
 	{"cmd":"mv", "label":"Move object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_mv, "argc":2},
-	{"cmd":"setacl", "label":"Modify Access control list for Bucket or Object", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
+	{"cmd":"setacl", "label":"Modify Access control list for Bucket or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
 	## CloudFront commands
 	{"cmd":"cflist", "label":"List CloudFront distribution points", "param":"", "func":CfCmd.info, "argc":0},
 	{"cmd":"cfinfo", "label":"Display CloudFront distribution point parameters", "param":"[cf://DIST_ID]", "func":CfCmd.info, "argc":0},
... ...
@@ -235,5 +235,5 @@ For the most up to date list of options run
 .br
 For more info about usage, examples and other related info visit project homepage at
 .br
-.B http://s3tools.logix.cz
+.B http://s3tools.org