
rewrite of README into markdown

Eric Mill authored on 2014/09/02 07:50:22
Showing 2 changed files

deleted file mode 100644

S3cmd tool for Amazon Simple Storage Service (S3)
=================================================

Author:
    Michal Ludvig <michal@logix.cz>
    Copyright (c) TGRMN Software - http://www.tgrmn.com - and contributors

S3tools / S3cmd project homepage:
    http://s3tools.org

S3tools / S3cmd mailing lists:

    * Announcements of new releases:
        s3tools-announce@lists.sourceforge.net

    * General questions and discussion about usage
        s3tools-general@lists.sourceforge.net

    * Bug reports
        s3tools-bugs@lists.sourceforge.net

!!!
!!! Please consult INSTALL file for installation instructions!
!!!

What is S3cmd
S3cmd is a free command line tool and client for uploading,
retrieving and managing data in Amazon S3 and other cloud
storage service providers that use the S3 protocol, such as
Google Cloud Storage or DreamHost DreamObjects. It is best
suited for power users who are familiar with command line
programs. It is also ideal for batch scripts and automated
backup to S3, triggered from cron, etc.

S3cmd is written in Python. It's an open source project
available under GNU Public License v2 (GPLv2) and is free
for both commercial and private use. You will only have
to pay Amazon for using their storage.

Lots of features and options have been added to S3cmd,
since its very first release in 2008.... we recently counted
more than 60 command line options, including multipart
uploads, encryption, incremental backup, s3 sync, ACL and
Metadata management, S3 bucket size, bucket policies, and
more!

What is Amazon S3
Amazon S3 provides a managed internet-accessible storage
service where anyone can store any amount of data and
retrieve it later again.

S3 is a paid service operated by Amazon. Before storing
anything into S3 you must sign up for an "AWS" account
(where AWS = Amazon Web Services) to obtain a pair of
identifiers: Access Key and Secret Key. You will need to
give these keys to S3cmd.
Think of them as if they were a username and password for
your S3 account.

Amazon S3 pricing explained
At the time of this writing the costs of using S3 are (in USD):

$0.03 per GB per month of storage space used

plus

$0.00 per GB - all data uploaded

plus

$0.00 per GB - first 1GB / month data downloaded
$0.12 per GB - up to 10 TB / month data downloaded
$0.09 per GB - next 40 TB / month data downloaded
$0.07 per GB - data downloaded / month over 50 TB

plus

$0.005 per 1,000 PUT or LIST requests
$0.004 per 10,000 GET and all other requests

If for instance on 1st of January you upload 2GB of
photos in JPEG from your holiday in New Zealand, at the
end of January you will be charged $0.06 for using 2GB of
storage space for a month, $0.0 for uploading 2GB
of data, and a few cents for requests.
That comes to slightly over $0.06 for a complete backup
of your precious holiday pictures.

In February you don't touch it. Your data are still on S3
servers so you pay $0.06 for those two gigabytes, but not
a single cent will be charged for any transfer. That comes
to $0.06 as an ongoing cost of your backup. Not too bad.

In March you allow anonymous read access to some of your
pictures and your friends download, say, 500MB of them.
As the files are owned by you, you are responsible for the
costs incurred. That means at the end of March you'll be
charged $0.06 for storage plus $0.06 for the download traffic
generated by your friends.

There is no minimum monthly contract or a setup fee. What
you use is what you pay for. At the beginning my bill used
to be like US$0.03 or even nil.

That's the pricing model of Amazon S3 in a nutshell. Check
Amazon S3 homepage at http://aws.amazon.com/s3/pricing/ for more
details.

Needless to say that all these money are charged by Amazon
itself, there is obviously no payment for using S3cmd :-)

Amazon S3 basics
Files stored in S3 are called "objects" and their names are
officially called "keys". Since this is sometimes confusing
for the users we often refer to the objects as "files" or
"remote files". Each object belongs to exactly one "bucket".

To describe objects in S3 storage we invented a URI-like
schema in the following form:

    s3://BUCKET
or
    s3://BUCKET/OBJECT

Buckets
Buckets are sort of like directories or folders with some
restrictions:
1) each user can only have 100 buckets at the most,
2) bucket names must be unique amongst all users of S3,
3) buckets can not be nested into a deeper hierarchy and
4) a name of a bucket can only consist of basic alphanumeric
   characters plus dot (.) and dash (-). No spaces, no accented
   or UTF-8 letters, etc.

It is a good idea to use DNS-compatible bucket names. That
for instance means you should not use upper case characters.
While DNS compliance is not strictly required some features
described below are not available for DNS-incompatible named
buckets. One more step further is using a fully qualified
domain name (FQDN) for a bucket - that has even more benefits.

* For example "s3://--My-Bucket--" is not DNS compatible.
* On the other hand "s3://my-bucket" is DNS compatible but
  is not FQDN.
* Finally "s3://my-bucket.s3tools.org" is DNS compatible
  and FQDN provided you own the s3tools.org domain and can
  create the domain record for "my-bucket.s3tools.org".

Look for "Virtual Hosts" later in this text for more details
regarding FQDN named buckets.

Objects (files stored in Amazon S3)
Unlike for buckets there are almost no restrictions on object
names. These can be any UTF-8 strings of up to 1024 bytes long.
Interestingly enough the object name can contain forward
slash character (/) thus a "my/funny/picture.jpg" is a valid
object name. Note that there are not directories nor
buckets called "my" and "funny" - it is really a single object
name called "my/funny/picture.jpg" and S3 does not care at
all that it _looks_ like a directory structure.

The full URI of such an image could be, for example:

    s3://my-bucket/my/funny/picture.jpg

Public vs Private files
The files stored in S3 can be either Private or Public. The
Private ones are readable only by the user who uploaded them
while the Public ones can be read by anyone. Additionally the
Public files can be accessed using HTTP protocol, not only
using s3cmd or a similar tool.

The ACL (Access Control List) of a file can be set at the
time of upload using --acl-public or --acl-private options
with 's3cmd put' or 's3cmd sync' commands (see below).

Alternatively the ACL can be altered for existing remote files
with 's3cmd setacl --acl-public' (or --acl-private) command.

Simple s3cmd HowTo
1) Register for Amazon AWS / S3
   Go to http://aws.amazon.com/s3, click the "Sign up
   for web service" button in the right column and work
   through the registration. You will have to supply
   your Credit Card details in order to allow Amazon
   charge you for S3 usage.
   At the end you should have your Access and Secret Keys

2) Run "s3cmd --configure"
   You will be asked for the two keys - copy and paste
   them from your confirmation email or from your Amazon
   account page. Be careful when copying them! They are
   case sensitive and must be entered accurately or you'll
   keep getting errors about invalid signatures or similar.

   Remember to add ListAllMyBuckets permissions to the keys
   or you will get an AccessDenied error while testing access.

3) Run "s3cmd ls" to list all your buckets.
   As you just started using S3 there are no buckets owned by
   you as of now. So the output will be empty.

4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
   As mentioned above the bucket names must be unique amongst
   _all_ users of S3. That means the simple names like "test"
   or "asdf" are already taken and you must make up something
   more original. To demonstrate as many features as possible
   let's create a FQDN-named bucket s3://public.s3tools.org:

   ~$ s3cmd mb s3://public.s3tools.org
   Bucket 's3://public.s3tools.org' created

5) List your buckets again with "s3cmd ls"
   Now you should see your freshly created bucket

   ~$ s3cmd ls
   2009-01-28 12:34  s3://public.s3tools.org

6) List the contents of the bucket

   ~$ s3cmd ls s3://public.s3tools.org
   ~$

   It's empty, indeed.

7) Upload a single file into the bucket:

   ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
   some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
    123456 of 123456   100% in    2s    51.75 kB/s  done

   Upload a two directory tree into the bucket's virtual 'directory':

   ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
   File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
   File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
   File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
   File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
   File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]

   As you can see we didn't have to create the /somewhere
   'directory'. In fact it's only a filename prefix, not
   a real directory and it doesn't have to be created in
   any way beforehand.

8) Now list the bucket contents again:

   ~$ s3cmd ls s3://public.s3tools.org
                          DIR   s3://public.s3tools.org/somewhere/
   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml

   Use --recursive (or -r) to list all the remote files:

   ~$ s3cmd ls --recursive s3://public.s3tools.org
   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
   2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
   2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
   2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt

9) Retrieve one of the files back and verify that it hasn't been
   corrupted:

   ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
   s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
    123456 of 123456   100% in    3s    35.75 kB/s  done

   ~$ md5sum some-file.xml some-file-2.xml
   39bcb6992e461b269b95b3bda303addf  some-file.xml
   39bcb6992e461b269b95b3bda303addf  some-file-2.xml

   Checksums of the original file matches the one of the
   retrieved one. Looks like it worked :-)

   To retrieve a whole 'directory tree' from S3 use recursive get:

   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere
   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
   File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'

   Since the destination directory wasn't specified s3cmd
   saved the directory structure in a current working
   directory ('.').

   There is an important difference between:
      get s3://public.s3tools.org/somewhere
   and
      get s3://public.s3tools.org/somewhere/
   (note the trailing slash)
   S3cmd always uses the last path part, ie the word
   after the last slash, for naming files.

   In the case of s3://.../somewhere the last path part
   is 'somewhere' and therefore the recursive get names
   the local files as somewhere/dir1, somewhere/dir2, etc.

   On the other hand in s3://.../somewhere/ the last path
   part is empty and s3cmd will only create 'dir1' and 'dir2'
   without the 'somewhere/' prefix:

   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere /tmp
   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'

   See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it
   was in the previous example.

10) Clean up - delete the remote files and remove the bucket:

   Remove everything under s3://public.s3tools.org/somewhere/

   ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
   ...

   Now try to remove the bucket:

   ~$ s3cmd rb s3://public.s3tools.org
   ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty

   Ouch, we forgot about s3://public.s3tools.org/somefile.xml
   We can force the bucket removal anyway:

   ~$ s3cmd rb --force s3://public.s3tools.org/
   WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
   File s3://public.s3tools.org/somefile.xml deleted
   Bucket 's3://public.s3tools.org/' removed

Hints
The basic usage is as simple as described in the previous
section.

You can increase the level of verbosity with -v option and
if you're really keen to know what the program does under
its bonet run it with -d to see all 'debugging' output.

After configuring it with --configure all available options
are spitted into your ~/.s3cfg file. It's a text file ready
to be modified in your favourite text editor.

For more information refer to:
* S3cmd / S3tools homepage at http://s3tools.org

===========================================================================
Copyright (C) 2014 TGRMN Software - http://www.tgrmn.com - and contributors

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

new file mode 100644

## S3cmd tool for Amazon Simple Storage Service (S3)

* Author: Michal Ludvig, michal@logix.cz
* [Project homepage](http://s3tools.org)
* (c) [TGRMN Software](http://www.tgrmn.com) and contributors

S3tools / S3cmd mailing lists:

* Announcements of new releases: s3tools-announce@lists.sourceforge.net
* General questions and discussion: s3tools-general@lists.sourceforge.net
* Bug reports: s3tools-bugs@lists.sourceforge.net

### What is S3cmd

S3cmd (`s3cmd`) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.
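
Because it is a plain command line tool, `s3cmd` is easy to drive from cron or any other scheduler. The following is a minimal illustrative sketch (not part of s3cmd itself) of a backup script that shells out to `s3cmd sync`; the local directory and bucket name are placeholders you would replace with your own.

```python
#!/usr/bin/env python
# Illustrative only: a tiny wrapper around "s3cmd sync" suitable for a cron job.
# The local directory and bucket below are placeholders, not real defaults.
import subprocess
import sys

LOCAL_DIR = "/home/user/photos/"               # directory to back up (placeholder)
REMOTE_URI = "s3://my-backup-bucket/photos/"   # destination bucket/prefix (placeholder)

def backup():
    # "s3cmd sync" is the incremental-backup command mentioned above.
    return subprocess.call(["s3cmd", "sync", LOCAL_DIR, REMOTE_URI])

if __name__ == "__main__":
    sys.exit(backup())
```

A crontab entry such as `0 3 * * * /usr/local/bin/s3-backup.py` would then run the backup nightly.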

S3cmd is written in Python. It's an open source project available under the GNU General Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage.

Lots of features and options have been added to S3cmd since its very first release in 2008. We recently counted more than 60 command line options, including multipart uploads, encryption, incremental backup, s3 sync, ACL and metadata management, S3 bucket size, bucket policies, and more!

### What is Amazon S3

Amazon S3 provides a managed internet-accessible storage service where anyone can store any amount of data and retrieve it again later.

S3 is a paid service operated by Amazon. Before storing anything into S3 you must sign up for an "AWS" account (where AWS = Amazon Web Services) to obtain a pair of identifiers: Access Key and Secret Key. You will need to give these keys to S3cmd. Think of them as if they were a username and password for your S3 account.

### Amazon S3 pricing explained

At the time of this writing the costs of using S3 are (in USD):

* $0.15 per GB per month of storage space used

plus

* $0.10 per GB - all data uploaded

plus

* $0.18 per GB - first 10 TB / month data downloaded
* $0.16 per GB - next 40 TB / month data downloaded
* $0.13 per GB - data downloaded / month over 50 TB

plus

* $0.01 per 1,000 PUT or LIST requests
* $0.01 per 10,000 GET and all other requests

If, for instance, on the 1st of January you upload 2GB of photos in JPEG from your holiday in New Zealand, then at the end of January you will be charged $0.30 for using 2GB of storage space for a month, $0.20 for uploading 2GB of data, and a few cents for requests. That comes to slightly over $0.50 for a complete backup of your precious holiday pictures.

In February you don't touch it. Your data is still on S3 servers, so you pay $0.30 for those two gigabytes, but not a single cent will be charged for any transfer. That comes to $0.30 as the ongoing cost of your backup. Not too bad.

In March you allow anonymous read access to some of your pictures and your friends download, say, 500MB of them. As the files are owned by you, you are responsible for the costs incurred. That means at the end of March you'll be charged $0.30 for storage plus $0.09 for the download traffic generated by your friends.
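
To make the arithmetic in the example above explicit, here is a small illustrative calculation using the per-GB rates quoted in this section (simplified: it ignores request charges and assumes all downloads fall in the first pricing tier).

```python
# Illustrative cost arithmetic for the example above, using the quoted per-GB rates.
STORAGE_PER_GB = 0.15    # USD per GB-month stored
UPLOAD_PER_GB = 0.10     # USD per GB uploaded
DOWNLOAD_PER_GB = 0.18   # USD per GB downloaded (first 10 TB tier)

def monthly_cost(stored_gb, uploaded_gb=0.0, downloaded_gb=0.0):
    return (stored_gb * STORAGE_PER_GB
            + uploaded_gb * UPLOAD_PER_GB
            + downloaded_gb * DOWNLOAD_PER_GB)

print(round(monthly_cost(2, uploaded_gb=2), 2))       # January:  0.5
print(round(monthly_cost(2), 2))                      # February: 0.3
print(round(monthly_cost(2, downloaded_gb=0.5), 2))   # March:    0.39 (0.30 storage + 0.09 download)
```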

There is no minimum monthly contract or setup fee. What you use is what you pay for. At the beginning my bill used to be like US$0.03 or even nil.

That's the pricing model of Amazon S3 in a nutshell. Check the [Amazon S3 homepage](http://aws.amazon.com/s3) for more details.

Needless to say, all this money is charged by Amazon itself; there is obviously no payment for using S3cmd :-)

### Amazon S3 basics

Files stored in S3 are called "objects" and their names are officially called "keys". Since this is sometimes confusing for users, we often refer to the objects as "files" or "remote files". Each object belongs to exactly one "bucket".

To describe objects in S3 storage we invented a URI-like schema in the following form:

```
s3://BUCKET
```

or

```
s3://BUCKET/OBJECT
```
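
As an illustration of the schema (plain Python, not part of s3cmd), an `s3://` URI splits into a bucket name and an optional object key at the first slash after the bucket:

```python
# Illustrative parser for the s3://BUCKET[/OBJECT] schema described above.
def parse_s3_uri(uri):
    if not uri.startswith("s3://"):
        raise ValueError("not an s3:// URI: %r" % uri)
    rest = uri[len("s3://"):]
    bucket, _, key = rest.partition("/")   # key is "" when only the bucket is given
    return bucket, key

print(parse_s3_uri("s3://my-bucket"))                       # ('my-bucket', '')
print(parse_s3_uri("s3://my-bucket/my/funny/picture.jpg"))  # ('my-bucket', 'my/funny/picture.jpg')
```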

### Buckets

Buckets are sort of like directories or folders, with some restrictions:

1. each user can only have 100 buckets at the most,
2. bucket names must be unique amongst all users of S3,
3. buckets can not be nested into a deeper hierarchy and
4. a name of a bucket can only consist of basic alphanumeric characters plus dot (.) and dash (-). No spaces, no accented or UTF-8 letters, etc.

It is a good idea to use DNS-compatible bucket names. That means, for instance, that you should not use upper case characters. While DNS compliance is not strictly required, some features described below are not available for buckets with DNS-incompatible names. One step further is using a fully qualified domain name (FQDN) for a bucket - that has even more benefits.

* For example "s3://--My-Bucket--" is not DNS compatible.
* On the other hand "s3://my-bucket" is DNS compatible but is not FQDN.
* Finally "s3://my-bucket.s3tools.org" is DNS compatible and FQDN, provided you own the s3tools.org domain and can create the domain record for "my-bucket.s3tools.org".

Look for "Virtual Hosts" later in this text for more details regarding FQDN-named buckets.
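
A quick way to sanity-check a bucket name against the rules listed above (an illustrative sketch only - it encodes just the restrictions described here, not Amazon's full validation):

```python
import re

# Rule 4 above: basic alphanumerics plus dot (.) and dash (-).
BASIC_NAME = re.compile(r"^[A-Za-z0-9.-]+$")

def check_bucket_name(name):
    if not BASIC_NAME.match(name):
        return "invalid: only letters, digits, dot (.) and dash (-) are allowed"
    if name != name.lower():
        return "allowed characters, but not DNS-compatible (contains upper case letters)"
    return "allowed characters and lower case (good candidate for a DNS-compatible name)"

print(check_bucket_name("--My-Bucket--"))   # allowed characters, but upper case
print(check_bucket_name("my-bucket"))       # fine
print(check_bucket_name("my bucket"))       # rejected: contains a space
```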

### Objects (files stored in Amazon S3)

Unlike for buckets, there are almost no restrictions on object names: they can be any UTF-8 string up to 1024 bytes long. Interestingly enough, an object name can contain the forward slash character (/), so `my/funny/picture.jpg` is a valid object name. Note that there are no directories or buckets called `my` and `funny` - it is really a single object named `my/funny/picture.jpg`, and S3 does not care at all that it _looks_ like a directory structure.

The full URI of such an image could be, for example:

```
s3://my-bucket/my/funny/picture.jpg
```
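
Note that the 1024-byte limit applies to the UTF-8 encoding of the key rather than to its character count, so non-ASCII names use up the budget faster; a small illustrative check:

```python
# The limit quoted above is 1024 bytes of UTF-8, not 1024 characters.
def key_fits(key):
    return len(key.encode("utf-8")) <= 1024

print(key_fits("my/funny/picture.jpg"))    # True - 20 bytes
print(len("žluťoučký".encode("utf-8")))    # 13 bytes for a 9-character word
```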

### Public vs Private files

The files stored in S3 can be either Private or Public. The Private ones are readable only by the user who uploaded them, while the Public ones can be read by anyone. Additionally, the Public files can be accessed using the HTTP protocol, not only using `s3cmd` or a similar tool.

The ACL (Access Control List) of a file can be set at the time of upload using the `--acl-public` or `--acl-private` options with the `s3cmd put` or `s3cmd sync` commands (see below).

Alternatively the ACL can be altered for existing remote files with the `s3cmd setacl --acl-public` (or `--acl-private`) command.
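
For example, a script might upload a file publicly and later flip it back to private. Here is an illustrative sketch using the two commands named above; the bucket and file names are placeholders:

```python
import subprocess

# Upload a file and make it publicly readable (placeholder names).
subprocess.call(["s3cmd", "put", "--acl-public", "report.html",
                 "s3://my-bucket/report.html"])

# Later, restrict the same object to private access again.
subprocess.call(["s3cmd", "setacl", "--acl-private",
                 "s3://my-bucket/report.html"])
```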

### Simple s3cmd HowTo

1) Register for Amazon AWS / S3

Go to http://aws.amazon.com/s3, click the "Sign up for web service" button in the right column and work through the registration. You will have to supply your Credit Card details in order to allow Amazon to charge you for S3 usage. At the end you should have your Access and Secret Keys.

2) Run `s3cmd --configure`

You will be asked for the two keys - copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar.

Remember to add the ListAllMyBuckets permission to the keys or you will get an AccessDenied error while testing access.

3) Run `s3cmd ls` to list all your buckets.

As you have just started using S3, there are no buckets owned by you yet, so the output will be empty.

4) Make a bucket with `s3cmd mb s3://my-new-bucket-name`

As mentioned above, bucket names must be unique amongst _all_ users of S3. That means simple names like "test" or "asdf" are already taken and you must make up something more original. To demonstrate as many features as possible, let's create a FQDN-named bucket `s3://public.s3tools.org`:

```
$ s3cmd mb s3://public.s3tools.org

Bucket 's3://public.s3tools.org' created
```

5) List your buckets again with `s3cmd ls`

Now you should see your freshly created bucket:

```
$ s3cmd ls

2009-01-28 12:34  s3://public.s3tools.org
```

6) List the contents of the bucket:

```
$ s3cmd ls s3://public.s3tools.org
$
```

It's empty, indeed.

7) Upload a single file into the bucket:

```
$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml

some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
 123456 of 123456   100% in    2s    51.75 kB/s  done
```

Upload a tree of two directories into the bucket's virtual 'directory':

```
$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/

File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
```

As you can see, we didn't have to create the `/somewhere` 'directory'. In fact it's only a filename prefix, not a real directory, and it doesn't have to be created in any way beforehand.

8) Now list the bucket's contents again:

```
$ s3cmd ls s3://public.s3tools.org

                       DIR   s3://public.s3tools.org/somewhere/
2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
```

Use `--recursive` (or `-r`) to list all the remote files:

```
$ s3cmd ls --recursive s3://public.s3tools.org

2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt
```

9) Retrieve one of the files back and verify that it hasn't been corrupted:

```
$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml

s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
 123456 of 123456   100% in    3s    35.75 kB/s  done
```

```
$ md5sum some-file.xml some-file-2.xml

39bcb6992e461b269b95b3bda303addf  some-file.xml
39bcb6992e461b269b95b3bda303addf  some-file-2.xml
```

The checksum of the original file matches that of the retrieved one. Looks like it worked :-)

To retrieve a whole 'directory tree' from S3 use recursive get:

```
$ s3cmd get --recursive s3://public.s3tools.org/somewhere

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
```

Since the destination directory wasn't specified, `s3cmd` saved the directory structure in the current working directory ('.').

There is an important difference between:

```
get s3://public.s3tools.org/somewhere
```

and

```
get s3://public.s3tools.org/somewhere/
```

(note the trailing slash).

`s3cmd` always uses the last path part, i.e. the word after the last slash, for naming files.

In the case of `s3://.../somewhere` the last path part is 'somewhere' and therefore the recursive get names the local files as somewhere/dir1, somewhere/dir2, etc.

On the other hand, in `s3://.../somewhere/` the last path part is empty and s3cmd will only create 'dir1' and 'dir2' without the 'somewhere/' prefix:

```
$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ /tmp

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
```

See? It's `/tmp/dir1` and not `/tmp/somewhere/dir1` as it was in the previous example.
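
The naming rule is essentially a split on the final slash. A quick plain-Python illustration of the two cases (this mimics the rule described above; it is not s3cmd's actual code):

```python
# The word after the last slash decides the local name prefix (per the rule above).
for uri in ("s3://public.s3tools.org/somewhere",
            "s3://public.s3tools.org/somewhere/"):
    last_part = uri.rsplit("/", 1)[-1]
    print(repr(last_part))   # 'somewhere' for the first URI, '' for the second
```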

10) Clean up - delete the remote files and remove the bucket:

Remove everything under s3://public.s3tools.org/somewhere/:

```
$ s3cmd del --recursive s3://public.s3tools.org/somewhere/

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
...
```

Now try to remove the bucket:

```
$ s3cmd rb s3://public.s3tools.org

ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
```

Ouch, we forgot about `s3://public.s3tools.org/somefile.xml`. We can force the bucket removal anyway:

```
$ s3cmd rb --force s3://public.s3tools.org/

WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
File s3://public.s3tools.org/somefile.xml deleted
Bucket 's3://public.s3tools.org/' removed
```

### Hints

The basic usage is as simple as described in the previous section.

You can increase the level of verbosity with the `-v` option, and if you're really keen to know what the program does under its bonnet, run it with `-d` to see all 'debugging' output.

After configuring it with `--configure`, all available options are written into your `~/.s3cfg` file. It's a text file ready to be modified in your favourite text editor.
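
The file is INI-style, so scripts can read settings back out of it. A minimal sketch, assuming the default `~/.s3cfg` location and the standard `[default]` section (treat the section and option names as an assumption if your configuration differs):

```python
# Illustrative: read the Access Key back from ~/.s3cfg (Python 3).
import os
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read(os.path.expanduser("~/.s3cfg"))
print(cfg.get("default", "access_key"))
print("secret_key present:", bool(cfg.get("default", "secret_key")))
```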

For more information refer to the [S3cmd / S3tools homepage](http://s3tools.org).

### License

Copyright (C) 2014 TGRMN Software - http://www.tgrmn.com - and contributors

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.