
finish merge of signature-v4 into master

Matt Domsch authored on 2014/12/15 09:05:11
Showing 17 changed files
1 1
deleted file mode 100644
... ...
@@ -1,370 +0,0 @@
1
-S3cmd tool for Amazon Simple Storage Service (S3)
2
-=================================================
3
-
4
-Author:
5
-    Michal Ludvig <michal@logix.cz>
6
-    Copyright (c) TGRMN Software - http://www.tgrmn.com - and contributors
7
-
8
-S3tools / S3cmd project homepage:
9
-    http://s3tools.org
10
-
11
-S3tools / S3cmd mailing lists:
12
-
13
-    * Announcements of new releases:
14
-        s3tools-announce@lists.sourceforge.net
15
-
16
-    * General questions and discussion about usage
17
-        s3tools-general@lists.sourceforge.net
18
-
19
-    * Bug reports
20
-        s3tools-bugs@lists.sourceforge.net
21
-
22
-!!!
23
-!!! Please consult INSTALL file for installation instructions!
24
-!!!
25
-
26
-What is S3cmd
27
-S3cmd is a free command line tool and client for uploading, 
28
-retrieving and managing data in Amazon S3 and other cloud 
29
-storage service providers that use the S3 protocol, such as 
30
-Google Cloud Storage or DreamHost DreamObjects. It is best 
31
-suited for power users who are familiar with command line 
32
-programs. It is also ideal for batch scripts and automated 
33
-backup to S3, triggered from cron, etc.
34
-
35
-S3cmd is written in Python. It's an open source project 
36
-available under GNU Public License v2 (GPLv2) and is free 
37
-for both commercial and private use. You will only have 
38
-to pay Amazon for using their storage.
39
-
40
-Lots of features and options have been added to S3cmd, 
41
-since its very first release in 2008.... we recently counted 
42
-more than 60 command line options, including multipart 
43
-uploads, encryption, incremental backup, s3 sync, ACL and 
44
-Metadata management, S3 bucket size, bucket policies, and 
45
-more!
46
-
47
-What is Amazon S3
48
-Amazon S3 provides a managed internet-accessible storage 
49
-service where anyone can store any amount of data and 
50
-retrieve it later again.
51
-
52
-S3 is a paid service operated by Amazon. Before storing 
53
-anything into S3 you must sign up for an "AWS" account 
54
-(where AWS = Amazon Web Services) to obtain a pair of 
55
-identifiers: Access Key and Secret Key. You will need to 
56
-give these keys to S3cmd. 
57
-Think of them as if they were a username and password for
58
-your S3 account.
59
-
60
-Amazon S3 pricing explained
61
-At the time of this writing the costs of using S3 are (in USD):
62
-
63
-$0.15 per GB per month of storage space used
64
-
65
-plus
66
-
67
-$0.10 per GB - all data uploaded
68
-
69
-plus
70
-
71
-$0.18 per GB - first 10 TB / month data downloaded
72
-$0.16 per GB - next 40 TB / month data downloaded
73
-$0.13 per GB - data downloaded / month over 50 TB
74
-
75
-plus
76
-
77
-$0.01 per 1,000 PUT or LIST requests
78
-$0.01 per 10,000 GET and all other requests
79
-
80
-If for instance on 1st of January you upload 2GB of 
81
-photos in JPEG from your holiday in New Zealand, at the 
82
-end of January you will be charged $0.30 for using 2GB of
83
-storage space for a month, $0.20 for uploading 2GB
84
-of data, and a few cents for requests. 
85
-That comes to slightly over $0.50 for a complete backup 
86
-of your precious holiday pictures.
87
-
88
-In February you don't touch it. Your data are still on S3 
89
-servers so you pay $0.30 for those two gigabytes, but not
90
-a single cent will be charged for any transfer. That comes 
91
-to $0.30 as an ongoing cost of your backup. Not too bad.
92
-
93
-In March you allow anonymous read access to some of your
94
-pictures and your friends download, say, 500MB of them. 
95
-As the files are owned by you, you are responsible for the 
96
-costs incurred. That means at the end of March you'll be 
97
-charged $0.30 for storage plus $0.09 for the download traffic 
98
-generated by your friends.
99
-
100
-There is no minimum monthly contract or a setup fee. What 
101
-you use is what you pay for. At the beginning my bill used
102
-to be like US$0.03 or even nil.
103
-
104
-That's the pricing model of Amazon S3 in a nutshell. Check
105
-Amazon S3 homepage at http://aws.amazon.com/s3 for more 
106
-details.
107
-
108
-Needless to say that all these money are charged by Amazon 
109
-itself, there is obviously no payment for using S3cmd :-)
110
-
111
-Amazon S3 basics
112
-Files stored in S3 are called "objects" and their names are
113
-officially called "keys". Since this is sometimes confusing
114
-for the users we often refer to the objects as "files" or
115
-"remote files". Each object belongs to exactly one "bucket".
116
-
117
-To describe objects in S3 storage we invented a URI-like
118
-schema in the following form:
119
-
120
-    s3://BUCKET
121
-or
122
-    s3://BUCKET/OBJECT
123
-
124
-Buckets
125
-Buckets are sort of like directories or folders with some 
126
-restrictions:
127
-1) each user can only have 100 buckets at the most, 
128
-2) bucket names must be unique amongst all users of S3, 
129
-3) buckets can not be nested into a deeper hierarchy and 
130
-4) a name of a bucket can only consist of basic alphanumeric 
131
-   characters plus dot (.) and dash (-). No spaces, no accented
132
-   or UTF-8 letters, etc. 
133
-
134
-It is a good idea to use DNS-compatible bucket names. That
135
-for instance means you should not use upper case characters.
136
-While DNS compliance is not strictly required some features
137
-described below are not available for DNS-incompatible named
138
-buckets. One more step further is using a fully qualified
139
-domain name (FQDN) for a bucket - that has even more benefits.
140
-
141
-* For example "s3://--My-Bucket--" is not DNS compatible.
142
-* On the other hand "s3://my-bucket" is DNS compatible but 
143
-  is not FQDN.
144
-* Finally "s3://my-bucket.s3tools.org" is DNS compatible 
145
-  and FQDN provided you own the s3tools.org domain and can
146
-  create the domain record for "my-bucket.s3tools.org".
147
-
148
-Look for "Virtual Hosts" later in this text for more details 
149
-regarding FQDN named buckets.
150
-
151
-Objects (files stored in Amazon S3)
152
-Unlike for buckets there are almost no restrictions on object 
153
-names. These can be any UTF-8 strings of up to 1024 bytes long. 
154
-Interestingly enough the object name can contain forward
155
-slash character (/) thus a "my/funny/picture.jpg" is a valid
156
-object name. Note that there are not directories nor
157
-buckets called "my" and "funny" - it is really a single object 
158
-name called "my/funny/picture.jpg" and S3 does not care at 
159
-all that it _looks_ like a directory structure.
160
-
161
-The full URI of such an image could be, for example:
162
-
163
-    s3://my-bucket/my/funny/picture.jpg
164
-
165
-Public vs Private files
166
-The files stored in S3 can be either Private or Public. The 
167
-Private ones are readable only by the user who uploaded them
168
-while the Public ones can be read by anyone. Additionally the
169
-Public files can be accessed using HTTP protocol, not only
170
-using s3cmd or a similar tool.
171
-
172
-The ACL (Access Control List) of a file can be set at the 
173
-time of upload using --acl-public or --acl-private options 
174
-with 's3cmd put' or 's3cmd sync' commands (see below).
175
-
176
-Alternatively the ACL can be altered for existing remote files
177
-with 's3cmd setacl --acl-public' (or --acl-private) command.
178
-
179
-Simple s3cmd HowTo
180
-1) Register for Amazon AWS / S3
181
-   Go to http://aws.amazon.com/s3, click the "Sign up
182
-   for web service" button in the right column and work 
183
-   through the registration. You will have to supply 
184
-   your Credit Card details in order to allow Amazon 
185
-   charge you for S3 usage. 
186
-   At the end you should have your Access and Secret Keys
187
-
188
-2) Run "s3cmd --configure"
189
-   You will be asked for the two keys - copy and paste 
190
-   them from your confirmation email or from your Amazon 
191
-   account page. Be careful when copying them! They are 
192
-   case sensitive and must be entered accurately or you'll 
193
-   keep getting errors about invalid signatures or similar.
194
-
195
-   Remember to add ListAllMyBuckets permissions to the keys
196
-   or you will get an AccessDenied error while testing access.
197
-
198
-3) Run "s3cmd ls" to list all your buckets.
199
-   As you just started using S3 there are no buckets owned by 
200
-   you as of now. So the output will be empty.
201
-
202
-4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
203
-   As mentioned above the bucket names must be unique amongst 
204
-   _all_ users of S3. That means the simple names like "test" 
205
-   or "asdf" are already taken and you must make up something 
206
-   more original. To demonstrate as many features as possible
207
-   let's create a FQDN-named bucket s3://public.s3tools.org:
208
-
209
-   ~$ s3cmd mb s3://public.s3tools.org
210
-   Bucket 's3://public.s3tools.org' created
211
-
212
-5) List your buckets again with "s3cmd ls"
213
-   Now you should see your freshly created bucket
214
-
215
-   ~$ s3cmd ls
216
-   2009-01-28 12:34  s3://public.s3tools.org
217
-
218
-6) List the contents of the bucket
219
-
220
-   ~$ s3cmd ls s3://public.s3tools.org
221
-   ~$ 
222
-
223
-   It's empty, indeed.
224
-
225
-7) Upload a single file into the bucket:
226
-
227
-   ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
228
-   some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
229
-    123456 of 123456   100% in    2s    51.75 kB/s  done
230
-
231
-   Upload a two directory tree into the bucket's virtual 'directory':
232
-
233
-   ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
234
-   File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
235
-   File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
236
-   File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
237
-   File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
238
-   File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
239
-
240
-   As you can see we didn't have to create the /somewhere
241
-   'directory'. In fact it's only a filename prefix, not 
242
-   a real directory and it doesn't have to be created in
243
-   any way beforehand.
244
-
245
-8) Now list the bucket contents again:
246
-
247
-   ~$ s3cmd ls s3://public.s3tools.org
248
-                          DIR   s3://public.s3tools.org/somewhere/
249
-   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
250
-
251
-   Use --recursive (or -r) to list all the remote files:
252
-
253
-   ~$ s3cmd ls --recursive s3://public.s3tools.org
254
-   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
255
-   2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
256
-   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
257
-   2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
258
-   2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
259
-   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt
260
-
261
-9) Retrieve one of the files back and verify that it hasn't been 
262
-   corrupted:
263
-
264
-   ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
265
-   s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
266
-    123456 of 123456   100% in    3s    35.75 kB/s  done
267
-
268
-   ~$ md5sum some-file.xml some-file-2.xml
269
-   39bcb6992e461b269b95b3bda303addf  some-file.xml
270
-   39bcb6992e461b269b95b3bda303addf  some-file-2.xml
271
-
272
-   Checksums of the original file matches the one of the 
273
-   retrieved one. Looks like it worked :-)
274
-
275
-   To retrieve a whole 'directory tree' from S3 use recursive get:
276
-
277
-   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere
278
-   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
279
-   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
280
-   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
281
-   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
282
-   File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
283
-
284
-   Since the destination directory wasn't specified s3cmd 
285
-   saved the directory structure in a current working 
286
-   directory ('.'). 
287
-
288
-   There is an important difference between:
289
-      get s3://public.s3tools.org/somewhere
290
-   and
291
-      get s3://public.s3tools.org/somewhere/
292
-   (note the trailing slash)
293
-   S3cmd always uses the last path part, ie the word
294
-   after the last slash, for naming files.
295
- 
296
-   In the case of s3://.../somewhere the last path part 
297
-   is 'somewhere' and therefore the recursive get names
298
-   the local files as somewhere/dir1, somewhere/dir2, etc.
299
-
300
-   On the other hand in s3://.../somewhere/ the last path
301
-   part is empty and s3cmd will only create 'dir1' and 'dir2' 
302
-   without the 'somewhere/' prefix:
303
-
304
-   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere /tmp
305
-   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
306
-   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
307
-   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
308
-   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
309
-
310
-   See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it 
311
-   was in the previous example.
312
-
313
-10) Clean up - delete the remote files and remove the bucket:
314
-
315
-   Remove everything under s3://public.s3tools.org/somewhere/
316
-
317
-   ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
318
-   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
319
-   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
320
-   ...
321
-
322
-   Now try to remove the bucket:
323
-
324
-   ~$ s3cmd rb s3://public.s3tools.org
325
-   ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
326
-
327
-   Ouch, we forgot about s3://public.s3tools.org/somefile.xml
328
-   We can force the bucket removal anyway:
329
-
330
-   ~$ s3cmd rb --force s3://public.s3tools.org/
331
-   WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
332
-   File s3://public.s3tools.org/somefile.xml deleted
333
-   Bucket 's3://public.s3tools.org/' removed
334
-
335
-Hints
336
-The basic usage is as simple as described in the previous 
337
-section.
338
-
339
-You can increase the level of verbosity with -v option and 
340
-if you're really keen to know what the program does under 
341
-its bonet run it with -d to see all 'debugging' output.
342
-
343
-After configuring it with --configure all available options
344
-are spitted into your ~/.s3cfg file. It's a text file ready
345
-to be modified in your favourite text editor.
346
-
347
-For more information refer to:
348
-* S3cmd / S3tools homepage at http://s3tools.org
349
-
350
-===========================================================================
351
-Copyright (C) 2014 TGRMN Software - http://www.tgrmn.com - and contributors
352
-
353
-This program is free software; you can redistribute it and/or modify
354
-it under the terms of the GNU General Public License as published by
355
-the Free Software Foundation; either version 2 of the License, or
356
-(at your option) any later version.
357
-
358
-This program is distributed in the hope that it will be useful,
359
-but WITHOUT ANY WARRANTY; without even the implied warranty of
360
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
361
-GNU General Public License for more details.
362 1
\ No newline at end of file
363 2
new file mode 100644
... ...
@@ -0,0 +1,328 @@
0
+## S3cmd tool for Amazon Simple Storage Service (S3)
1
+
2
+
3
+* Author: Michal Ludvig, michal@logix.cz
4
+* [Project homepage](http://s3tools.org)
5
+* (c) [TGRMN Software](http://www.tgrmn.com) and contributors
6
+
7
+
8
+S3tools / S3cmd mailing lists:
9
+
10
+* Announcements of new releases: s3tools-announce@lists.sourceforge.net
11
+* General questions and discussion: s3tools-general@lists.sourceforge.net
12
+* Bug reports: s3tools-bugs@lists.sourceforge.net
13
+
14
+### What is S3cmd
15
+
16
+S3cmd (`s3cmd`) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.
17
+
18
+S3cmd is written in Python. It's an open source project available under GNU Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage.
19
+
20
+Lots of features and options have been added to S3cmd since its very first release in 2008. We recently counted more than 60 command line options, including multipart uploads, encryption, incremental backup, s3 sync, ACL and metadata management, S3 bucket size, bucket policies, and more!
21
+
22
+### What is Amazon S3
23
+
24
+Amazon S3 provides a managed, internet-accessible storage service where anyone can store any amount of data and retrieve it again later.
25
+
26
+S3 is a paid service operated by Amazon. Before storing anything into S3 you must sign up for an "AWS" account (where AWS = Amazon Web Services) to obtain a pair of identifiers: Access Key and Secret Key. You will need to
27
+give these keys to S3cmd. Think of them as if they were a username and password for your S3 account.
28
+
29
+### Amazon S3 pricing explained
30
+
31
+At the time of this writing the costs of using S3 are (in USD):
32
+
33
+$0.03 per GB per month of storage space used
34
+
35
+plus
36
+
37
+$0.00 per GB - all data uploaded
38
+
39
+plus
40
+
41
+$0.000 per GB - first 1GB / month data downloaded
42
+$0.090 per GB - up to 10 TB / month data downloaded
43
+$0.085 per GB - next 40 TB / month data downloaded
44
+$0.070 per GB - data downloaded / month over 50 TB
45
+
46
+plus
47
+
48
+$0.005 per 1,000 PUT or COPY or LIST requests
49
+$0.004 per 10,000 GET and all other requests
50
+
51
+If, for instance, on the 1st of January you upload 2GB of JPEG photos from your holiday in New Zealand, at the end of January you will be charged $0.06 for using 2GB of storage space for a month, $0.00 for uploading 2GB of data, and a few cents for requests. That comes to slightly over $0.06 for a complete backup of your precious holiday pictures.
52
+
53
+In February you don't touch it. Your data are still on S3 servers so you pay $0.06 for those two gigabytes, but not a single cent will be charged for any transfer. That comes to $0.06 as an ongoing cost of your backup. Not too bad.
54
+
55
+In March you allow anonymous read access to some of your pictures and your friends download, say, 1500MB of them. As the files are owned by you, you are responsible for the costs incurred. That means at the end of March you'll be charged $0.06 for storage plus $0.045 for the download traffic generated by your friends.
56
+
57
+There is no minimum monthly contract or a setup fee. What you use is what you pay for. At the beginning my bill used to be like US$0.03 or even nil.
58
+
59
+That's the pricing model of Amazon S3 in a nutshell. Check the [Amazon S3 homepage](http://aws.amazon.com/s3/pricing/) for more details.
60
+
61
+Needless to say, all this money is charged by Amazon itself; there is obviously no charge for using S3cmd :-)
62
+
63
+### Amazon S3 basics
64
+
65
+Files stored in S3 are called "objects" and their names are officially called "keys". Since this is sometimes confusing for users, we often refer to the objects as "files" or "remote files". Each object belongs to exactly one "bucket".
66
+
67
+To describe objects in S3 storage we invented a URI-like schema in the following form:
68
+
69
+```
70
+s3://BUCKET
71
+```
72
+or
73
+
74
+```
75
+s3://BUCKET/OBJECT
76
+```
77
+
78
+### Buckets
79
+
80
+Buckets are sort of like directories or folders with some restrictions:
81
+
82
+1. each user can only have 100 buckets at the most,
83
+2. bucket names must be unique amongst all users of S3,
84
+3. buckets can not be nested into a deeper hierarchy and
85
+4. a name of a bucket can only consist of basic alphanumeric
86
+   characters plus dot (.) and dash (-). No spaces, no accented
87
+   or UTF-8 letters, etc.
88
+
89
+It is a good idea to use DNS-compatible bucket names. That means, for instance, that you should not use upper case characters. While DNS compliance is not strictly required, some features described below are not available for buckets with DNS-incompatible names. A step further is using a fully qualified domain name (FQDN) for a bucket - that has even more benefits.
90
+
91
+* For example "s3://--My-Bucket--" is not DNS compatible.
92
+* On the other hand "s3://my-bucket" is DNS compatible but
93
+  is not FQDN.
94
+* Finally "s3://my-bucket.s3tools.org" is DNS compatible
95
+  and FQDN provided you own the s3tools.org domain and can
96
+  create the domain record for "my-bucket.s3tools.org".
97
+
98
+Look for "Virtual Hosts" later in this text for more details regarding FQDN named buckets.
99
+
100
+### Objects (files stored in Amazon S3)
101
+
102
+Unlike for buckets, there are almost no restrictions on object names. These can be any UTF-8 strings up to 1024 bytes long. Interestingly enough, the object name can contain the forward slash character (/), thus `my/funny/picture.jpg` is a valid object name. Note that there are no directories or buckets called `my` and `funny` - it is really a single object named `my/funny/picture.jpg`, and S3 does not care at all that it _looks_ like a directory structure.
103
+
104
+The full URI of such an image could be, for example:
105
+
106
+```
107
+s3://my-bucket/my/funny/picture.jpg
108
+```
109
+
110
+### Public vs Private files
111
+
112
+The files stored in S3 can be either Private or Public. The Private ones are readable only by the user who uploaded them while the Public ones can be read by anyone. Additionally the Public files can be accessed using HTTP protocol, not only using `s3cmd` or a similar tool.
113
+
114
+The ACL (Access Control List) of a file can be set at the time of upload using `--acl-public` or `--acl-private` options with `s3cmd put` or `s3cmd sync` commands (see below).
115
+
116
+Alternatively the ACL can be altered for existing remote files with `s3cmd setacl --acl-public` (or `--acl-private`) command.
117
+
118
+### Simple s3cmd HowTo
119
+
120
+1) Register for Amazon AWS / S3
121
+
122
+Go to http://aws.amazon.com/s3, click the "Sign up for web service" button in the right column and work through the registration. You will have to supply your credit card details in order to allow Amazon to charge you for S3 usage. At the end you should have your Access and Secret Keys.
123
+
124
+2) Run `s3cmd --configure`
125
+
126
+You will be asked for the two keys - copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar.
127
+
128
+Remember to add ListAllMyBuckets permissions to the keys or you will get an AccessDenied error while testing access.
129
+
130
+3) Run `s3cmd ls` to list all your buckets.
131
+
132
+As you just started using S3 there are no buckets owned by you as of now. So the output will be empty.
133
+
134
+4) Make a bucket with `s3cmd mb s3://my-new-bucket-name`
135
+
136
+As mentioned above the bucket names must be unique amongst _all_ users of S3. That means the simple names like "test" or "asdf" are already taken and you must make up something more original. To demonstrate as many features as possible let's create a FQDN-named bucket `s3://public.s3tools.org`:
137
+
138
+```
139
+$ s3cmd mb s3://public.s3tools.org
140
+
141
+Bucket 's3://public.s3tools.org' created
142
+```
143
+
144
+5) List your buckets again with `s3cmd ls`
145
+
146
+Now you should see your freshly created bucket:
147
+
148
+```
149
+$ s3cmd ls
150
+
151
+2009-01-28 12:34  s3://public.s3tools.org
152
+```
153
+
154
+6) List the contents of the bucket:
155
+
156
+```
157
+$ s3cmd ls s3://public.s3tools.org
158
+$
159
+```
160
+
161
+It's empty, indeed.
162
+
163
+7) Upload a single file into the bucket:
164
+
165
+```
166
+$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
167
+
168
+some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
169
+ 123456 of 123456   100% in    2s    51.75 kB/s  done
170
+```
171
+
172
+Upload two directory trees into the bucket's virtual 'directory':
173
+
174
+```
175
+$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
176
+
177
+File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
178
+File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
179
+File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
180
+File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
181
+File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
182
+```
183
+
184
+As you can see we didn't have to create the `/somewhere` 'directory'. In fact it's only a filename prefix, not a real directory and it doesn't have to be created in any way beforehand.
185
+
186
+8) Now list the bucket's contents again:
187
+
188
+```
189
+$ s3cmd ls s3://public.s3tools.org
190
+
191
+                       DIR   s3://public.s3tools.org/somewhere/
192
+2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
193
+```
194
+
195
+Use --recursive (or -r) to list all the remote files:
196
+
197
+```
198
+$ s3cmd ls --recursive s3://public.s3tools.org
199
+
200
+2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
201
+2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
202
+2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
203
+2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
204
+2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
205
+2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt
206
+```
207
+
208
+9) Retrieve one of the files back and verify that it hasn't been
209
+   corrupted:
210
+
211
+```
212
+$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
213
+
214
+s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
215
+ 123456 of 123456   100% in    3s    35.75 kB/s  done
216
+```
217
+
218
+```
219
+$ md5sum some-file.xml some-file-2.xml
220
+
221
+39bcb6992e461b269b95b3bda303addf  some-file.xml
222
+39bcb6992e461b269b95b3bda303addf  some-file-2.xml
223
+```
224
+
225
+The checksum of the original file matches that of the retrieved one. Looks like it worked :-)
226
+
227
+To retrieve a whole 'directory tree' from S3 use recursive get:
228
+
229
+```
230
+$ s3cmd get --recursive s3://public.s3tools.org/somewhere
231
+
232
+File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
233
+File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
234
+File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
235
+File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
236
+File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
237
+```
238
+
239
+Since the destination directory wasn't specified, `s3cmd` saved the directory structure in the current working directory ('.').
240
+
241
+There is an important difference between:
242
+
243
+```
244
+get s3://public.s3tools.org/somewhere
245
+```
246
+
247
+and
248
+
249
+```
250
+get s3://public.s3tools.org/somewhere/
251
+```
252
+
253
+(note the trailing slash)
254
+
255
+`s3cmd` always uses the last path part, i.e. the word after the last slash, for naming files.
256
+
257
+In the case of `s3://.../somewhere` the last path part is 'somewhere' and therefore the recursive get names the local files as somewhere/dir1, somewhere/dir2, etc.
258
+
259
+On the other hand in `s3://.../somewhere/` the last path
260
+part is empty and s3cmd will only create 'dir1' and 'dir2'
261
+without the 'somewhere/' prefix:
262
+
263
+```
264
+$ s3cmd get --recursive s3://public.s3tools.org/somewhere /tmp
265
+
266
+File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
267
+File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
268
+File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
269
+File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
270
+```
271
+
272
+See? It's `/tmp/dir1` and not `/tmp/somewhere/dir1` as it was in the previous example.
273
+
274
+10) Clean up - delete the remote files and remove the bucket:
275
+
276
+Remove everything under s3://public.s3tools.org/somewhere/
277
+
278
+```
279
+$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
280
+
281
+File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
282
+File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
283
+...
284
+```
285
+
286
+Now try to remove the bucket:
287
+
288
+```
289
+$ s3cmd rb s3://public.s3tools.org
290
+
291
+ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
292
+```
293
+
294
+Ouch, we forgot about `s3://public.s3tools.org/somefile.xml`. We can force the bucket removal anyway:
295
+
296
+```
297
+$ s3cmd rb --force s3://public.s3tools.org/
298
+
299
+WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
300
+File s3://public.s3tools.org/somefile.xml deleted
301
+Bucket 's3://public.s3tools.org/' removed
302
+```
303
+
304
+### Hints
305
+
306
+The basic usage is as simple as described in the previous section.
307
+
308
+You can increase the level of verbosity with the `-v` option and, if you're really keen to know what the program does under its bonnet, run it with `-d` to see all 'debugging' output.
309
+
310
+After configuring it with `--configure`, all available options are written into your `~/.s3cfg` file. It's a text file ready to be modified in your favourite text editor.
311
+
312
+For more information refer to the [S3cmd / S3tools homepage](http://s3tools.org).
313
+
314
+### License
315
+
316
+Copyright (C) 2014 TGRMN Software - http://www.tgrmn.com - and contributors
317
+
318
+This program is free software; you can redistribute it and/or modify
319
+it under the terms of the GNU General Public License as published by
320
+the Free Software Foundation; either version 2 of the License, or
321
+(at your option) any later version.
322
+
323
+This program is distributed in the hope that it will be useful,
324
+but WITHOUT ANY WARRANTY; without even the implied warranty of
325
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
326
+GNU General Public License for more details.
327
+
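The month-by-month example in the pricing section of the README above can be sanity-checked with a few lines of arithmetic. A minimal sketch in Python, using the rates quoted in the README and ignoring the per-request charges (illustrative only):

```python
# Rates as quoted in the README above (USD); per-request charges are ignored.
storage_per_gb_month = 0.03
egress_per_gb        = 0.09        # first paid tier, after the free 1 GB / month

january  = 2 * storage_per_gb_month                      # 2 GB stored, uploads are free
february = 2 * storage_per_gb_month                      # still stored, nothing transferred
billable_march_gb = max(0.0, 1.5 - 1.0)                  # 1500 MB downloaded, first 1 GB free
march    = 2 * storage_per_gb_month + billable_march_gb * egress_per_gb

print("Jan $%.3f, Feb $%.3f, Mar $%.3f" % (january, february, march))
# Jan $0.060, Feb $0.060, Mar $0.105
```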
... ...
@@ -159,11 +159,11 @@ class ACL(object):
159 159
         grantee.name = name
160 160
         grantee.permission = permission
161 161
 
162
-        if  name.find('@') > -1:
162
+        if  '@' in name:
163 163
             grantee.name = grantee.name.lower()
164 164
             grantee.xsi_type = "AmazonCustomerByEmail"
165 165
             grantee.tag = "EmailAddress"
166
-        elif name.find('http://acs.amazonaws.com/groups/') > -1:
166
+        elif 'http://acs.amazonaws.com/groups/' in name:
167 167
             grantee.xsi_type = "Group"
168 168
             grantee.tag = "URI"
169 169
         else:
... ...
@@ -23,6 +23,7 @@ from Utils import getTreeFromXml, appendXmlTextNode, getDictFromTree, dateS3toPy
23 23
 from Crypto import sign_string_v2
24 24
 from S3Uri import S3Uri, S3UriS3
25 25
 from FileLists import fetch_remote_list
26
+from ConnMan import ConnMan
26 27
 
27 28
 cloudfront_api_version = "2010-11-01"
28 29
 cloudfront_resource = "/%(api_ver)s/distribution" % { 'api_ver' : cloudfront_api_version }
... ...
@@ -496,14 +497,14 @@ class CloudFront(object):
496 496
         request = self.create_request(operation, dist_id, request_id, headers)
497 497
         conn = self.get_connection()
498 498
         debug("send_request(): %s %s" % (request['method'], request['resource']))
499
-        conn.request(request['method'], request['resource'], body, request['headers'])
500
-        http_response = conn.getresponse()
499
+        conn.c.request(request['method'], request['resource'], body, request['headers'])
500
+        http_response = conn.c.getresponse()
501 501
         response = {}
502 502
         response["status"] = http_response.status
503 503
         response["reason"] = http_response.reason
504 504
         response["headers"] = dict(http_response.getheaders())
505 505
         response["data"] =  http_response.read()
506
-        conn.close()
506
+        conn.put()
507 507
 
508 508
         debug("CloudFront: response: %r" % response)
509 509
 
... ...
@@ -561,7 +562,8 @@ class CloudFront(object):
561 561
     def get_connection(self):
562 562
         if self.config.proxy_host != "":
563 563
             raise ParameterError("CloudFront commands don't work from behind a HTTP proxy")
564
-        return httplib.HTTPSConnection(self.config.cloudfront_host)
564
+        conn = ConnMan.get(self.config.cloudfront_host)
565
+        return conn
565 566
 
566 567
     def _fail_wait(self, retries):
567 568
         # Wait a few seconds. The more it fails the more we wait.
... ...
@@ -77,9 +77,10 @@ class Config(object):
77 77
     gpg_encrypt = "%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s"
78 78
     gpg_decrypt = "%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s"
79 79
     use_https = False
80
+    ca_certs_file = ""
81
+    check_ssl_certificate = True
80 82
     bucket_location = "US"
81 83
     default_mime_type = "binary/octet-stream"
82
-    default_region = 'us-east-1'
83 84
     guess_mime_type = True
84 85
     use_mime_magic = True
85 86
     mime_type = ""
... ...
@@ -110,6 +111,7 @@ class Config(object):
110 110
     files_from = []
111 111
     cache_file = ""
112 112
     add_headers = ""
113
+    remove_headers = []
113 114
     ignore_failed_copy = False
114 115
     expiry_days = ""
115 116
     expiry_date = ""
... ...
@@ -224,7 +226,10 @@ class Config(object):
224 224
     def read_config_file(self, configfile):
225 225
         cp = ConfigParser(configfile)
226 226
         for option in self.option_list():
227
-            self.update_option(option, cp.get(option))
227
+            _option = cp.get(option)
228
+            if _option is not None:
229
+                _option = _option.strip()
230
+            self.update_option(option, _option)
228 231
 
229 232
         if cp.get('add_headers'):
230 233
             for option in cp.get('add_headers').split(","):
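The `read_config_file()` change above strips stray whitespace from every value read from `~/.s3cfg`; a trailing space pasted after an access key would otherwise break request signing. A minimal standalone sketch of the same normalisation, with a plain dict standing in for s3cmd's own `ConfigParser` wrapper:

```python
def clean_config_values(raw):
    """Strip surrounding whitespace from config values; missing (None) values pass through."""
    cleaned = {}
    for option, value in raw.items():
        if value is not None:
            value = value.strip()
        cleaned[option] = value
    return cleaned

print(clean_config_values({'access_key': ' AKIAEXAMPLE ', 'proxy_host': None}))
# {'access_key': 'AKIAEXAMPLE', 'proxy_host': None}
```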
... ...
@@ -4,7 +4,9 @@
4 4
 ## License: GPL Version 2
5 5
 ## Copyright: TGRMN Software and contributors
6 6
 
7
+import sys
7 8
 import httplib
9
+import ssl
8 10
 from urlparse import urlparse
9 11
 from threading import Semaphore
10 12
 from logging import debug, info, warning, error
... ...
@@ -15,17 +17,75 @@ from Exceptions import ParameterError
15 15
 __all__ = [ "ConnMan" ]
16 16
 
17 17
 class http_connection(object):
18
+    context = None
19
+    context_set = False
20
+
21
+    @staticmethod
22
+    def _ssl_verified_context(cafile):
23
+        context = None
24
+        try:
25
+            context = ssl.create_default_context(cafile=cafile)
26
+        except AttributeError: # no ssl.create_default_context
27
+            pass
28
+        return context
29
+
30
+    @staticmethod
31
+    def _ssl_context():
32
+        if http_connection.context_set:
33
+            return http_connection.context
34
+
35
+        cfg = Config()
36
+        cafile = cfg.ca_certs_file
37
+        if cafile == "":
38
+            cafile = None
39
+        debug(u"Using ca_certs_file %s" % cafile)
40
+
41
+        context = http_connection._ssl_verified_context(cafile)
42
+
43
+        if context and not cfg.check_ssl_certificate:
44
+            context.check_hostname = False
45
+            debug(u'Disabling hostname checking')
46
+
47
+        http_connection.context = context
48
+        http_connection.context_set = True
49
+        return context
50
+
51
+    @staticmethod
52
+    def _https_connection(hostname, port=None):
53
+        try:
54
+            context = http_connection._ssl_context()
55
+            conn = httplib.HTTPSConnection(hostname, port, context=context)
56
+        except TypeError:
57
+            conn = httplib.HTTPSConnection(hostname, port)
58
+        return conn
59
+
18 60
     def __init__(self, id, hostname, ssl, cfg):
19 61
         self.hostname = hostname
20 62
         self.ssl = ssl
21 63
         self.id = id
22 64
         self.counter = 0
23
-        if cfg.proxy_host != "":
24
-            self.c = httplib.HTTPConnection(cfg.proxy_host, cfg.proxy_port)
25
-        elif not ssl:
26
-            self.c = httplib.HTTPConnection(hostname)
65
+
66
+        if not ssl:
67
+            if cfg.proxy_host != "":
68
+                self.c = httplib.HTTPConnection(cfg.proxy_host, cfg.proxy_port)
69
+                debug(u'proxied HTTPConnection(%s, %s)' % (cfg.proxy_host, cfg.proxy_port))
70
+            else:
71
+                self.c = httplib.HTTPConnection(hostname)
72
+                debug(u'non-proxied HTTPConnection(%s)' % hostname)
27 73
         else:
28
-            self.c = httplib.HTTPSConnection(hostname)
74
+            if cfg.proxy_host != "":
75
+                self.c = http_connection._https_connection(cfg.proxy_host, cfg.proxy_port)
76
+                self.c.set_tunnel(hostname)
77
+                debug(u'proxied HTTPSConnection(%s, %s)' % (cfg.proxy_host, cfg.proxy_port))
78
+                debug(u'tunnel to %s' % hostname)
79
+            else:
80
+                self.c = http_connection._https_connection(hostname)
81
+                debug(u'non-proxied HTTPSConnection(%s)' % hostname)
82
+
83
+            # S3's wildcard certificate doesn't work with DNS-style named buckets.
84
+            if 's3.amazonaws.com' in hostname and http_connection.context:
85
+                http_connection.context.check_hostname = False
86
+                debug(u'Disabling SSL certificate hostname verification for S3 wildcard cert')
29 87
 
30 88
 class ConnMan(object):
31 89
     conn_pool_sem = Semaphore()
... ...
@@ -39,8 +99,8 @@ class ConnMan(object):
39 39
             ssl = cfg.use_https
40 40
         conn = None
41 41
         if cfg.proxy_host != "":
42
-            if ssl:
43
-                raise ParameterError("use_https=True can't be used with proxy")
42
+            if ssl and sys.hexversion < 0x02070000:
43
+                raise ParameterError("use_https=True can't be used with proxy on Python <2.7")
44 44
             conn_id = "proxy://%s:%s" % (cfg.proxy_host, cfg.proxy_port)
45 45
         else:
46 46
             conn_id = "http%s://%s" % (ssl and "s" or "", hostname)
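The new `http_connection._ssl_context()` above builds a single `ssl.SSLContext` per process, honouring `ca_certs_file` and `check_ssl_certificate`, and caches it on the class so later connections reuse it. A condensed standalone sketch of that caching pattern, assuming a Python where `ssl.create_default_context` exists (2.7.9+ / 3.4+):

```python
import ssl

class SSLContextCache(object):
    _context = None
    _context_set = False

    @classmethod
    def get(cls, cafile=None, check_hostname=True):
        if cls._context_set:                    # build the context only once per process
            return cls._context
        try:
            ctx = ssl.create_default_context(cafile=cafile)
        except AttributeError:                  # very old Python: no create_default_context
            ctx = None
        if ctx is not None and not check_hostname:
            ctx.check_hostname = False          # keep certificate verification, skip hostname match
        cls._context = ctx
        cls._context_set = True
        return ctx

# The resulting context would then be passed to httplib.HTTPSConnection(host, port, context=ctx).
```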
... ...
@@ -6,6 +6,7 @@
6 6
 
7 7
 from Utils import getTreeFromXml, unicodise, deunicodise
8 8
 from logging import debug, info, warning, error
9
+import ExitCodes
9 10
 
10 11
 try:
11 12
     import xml.etree.ElementTree as ET
... ...
@@ -64,6 +65,26 @@ class S3Error (S3Exception):
64 64
             retval += (u": %s" % self.info["Message"])
65 65
         return retval
66 66
 
67
+    def get_error_code(self):
68
+        if self.status in [301, 307]:
69
+            return ExitCodes.EX_SERVERMOVED
70
+        elif self.status in [400, 405, 411, 416, 501]:
71
+            return ExitCodes.EX_SERVERERROR
72
+        elif self.status == 403:
73
+            return ExitCodes.EX_ACCESSDENIED
74
+        elif self.status == 404:
75
+            return ExitCodes.EX_NOTFOUND
76
+        elif self.status == 409:
77
+            return ExitCodes.EX_CONFLICT
78
+        elif self.status == 412:
79
+            return ExitCodes.EX_PRECONDITION
80
+        elif self.status == 500:
81
+            return ExitCodes.EX_SOFTWARE
82
+        elif self.status == 503:
83
+            return ExitCodes.EX_SERVICE
84
+        else:
85
+            return ExitCodes.EX_SOFTWARE
86
+
67 87
     @staticmethod
68 88
     def parse_error_xml(tree):
69 89
         info = {}
... ...
@@ -1,16 +1,22 @@
1 1
 # patterned on /usr/include/sysexits.h
2 2
 
3
-EX_OK         = 0
4
-EX_GENERAL    = 1
5
-EX_SOMEFAILED = 2    # some parts of the command succeeded, while others failed
6
-EX_USAGE      = 64   # The command was used incorrectly (e.g. bad command line syntax)
7
-EX_SOFTWARE   = 70   # internal software error (e.g. S3 error of unknown specificity)
8
-EX_OSERR      = 71   # system error (e.g. out of memory)
9
-EX_OSFILE     = 72   # OS error (e.g. invalid Python version)
10
-EX_IOERR      = 74   # An error occurred while doing I/O on some file.
11
-EX_TEMPFAIL   = 75   # temporary failure (S3DownloadError or similar, retry later)
12
-EX_NOPERM     = 77   # Insufficient permissions to perform the operation on S3  
13
-EX_CONFIG     = 78   # Configuration file error
14
-_EX_SIGNAL    = 128
15
-_EX_SIGINT    = 2
16
-EX_BREAK      = _EX_SIGNAL + _EX_SIGINT # Control-C (KeyboardInterrupt raised)
3
+EX_OK               = 0
4
+EX_GENERAL          = 1
5
+EX_PARTIAL          = 2    # some parts of the command succeeded, while others failed
6
+EX_SERVERMOVED      = 10   # 301: Moved permanently & 307: Moved temporarily
7
+EX_SERVERERROR      = 11   # 400, 405, 411, 416, 501: Bad request
8
+EX_NOTFOUND         = 12   # 404: Not found
9
+EX_CONFLICT         = 13   # 409: Conflict (ex: bucket error)
10
+EX_PRECONDITION     = 14   # 412: Precondition failed
11
+EX_SERVICE          = 15   # 503: Service not available or slow down
12
+EX_USAGE            = 64   # The command was used incorrectly (e.g. bad command line syntax)
13
+EX_SOFTWARE         = 70   # internal software error (e.g. S3 error of unknown specificity)
14
+EX_OSERR            = 71   # system error (e.g. out of memory)
15
+EX_OSFILE           = 72   # OS error (e.g. invalid Python version)
16
+EX_IOERR            = 74   # An error occurred while doing I/O on some file.
17
+EX_TEMPFAIL         = 75   # temporary failure (S3DownloadError or similar, retry later)
18
+EX_ACCESSDENIED     = 77   # Insufficient permissions to perform the operation on S3
19
+EX_CONFIG           = 78   # Configuration file error
20
+_EX_SIGNAL          = 128
21
+_EX_SIGINT          = 2
22
+EX_BREAK            = _EX_SIGNAL + _EX_SIGINT # Control-C (KeyboardInterrupt raised)
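These status-specific exit codes pair with the `get_error_code()` method added to `S3Error` above. A reduced illustration of the same mapping and how a caller might turn a failed request into a process exit status:

```python
# mirrors a few of the values defined above
EX_OK, EX_NOTFOUND, EX_ACCESSDENIED, EX_SOFTWARE = 0, 12, 77, 70

def exit_code_for_status(status):
    """Reduced version of S3Error.get_error_code() covering two common HTTP statuses."""
    if status == 403:
        return EX_ACCESSDENIED
    if status == 404:
        return EX_NOTFOUND
    return EX_SOFTWARE

print(exit_code_for_status(404))   # 12
print(exit_code_for_status(500))   # 70
```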
... ...
@@ -381,14 +381,14 @@ def fetch_remote_list(args, require_attribs = False, recursive = None, uri_param
381 381
             rem_list[key] = {
382 382
                 'size' : int(object['Size']),
383 383
                 'timestamp' : dateS3toUnix(object['LastModified']), ## Sadly it's upload time, not our lastmod time :-(
384
-                'md5' : object['ETag'][1:-1],
384
+                'md5' : object['ETag'].strip('"\''),
385 385
                 'object_key' : object['Key'],
386 386
                 'object_uri_str' : object_uri_str,
387 387
                 'base_uri' : remote_uri,
388 388
                 'dev' : None,
389 389
                 'inode' : None,
390 390
             }
391
-            if rem_list[key]['md5'].find("-") > 0: # always get it for multipart uploads
391
+            if '-' in rem_list[key]['md5']: # always get it for multipart uploads
392 392
                 _get_remote_attribs(S3Uri(object_uri_str), rem_list[key])
393 393
             md5 = rem_list[key]['md5']
394 394
             rem_list.record_md5(key, md5)
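The two changes above normalise ETags (stripping both double and single quotes) and detect multipart uploads, whose ETags are not plain MD5 sums and therefore cannot be compared directly. A small standalone illustration:

```python
def normalised_etag(etag):
    """Strip surrounding double or single quotes, as fetch_remote_list() now does."""
    return etag.strip('"\'')

def is_multipart_etag(etag):
    """Multipart ETags look like '<md5-of-part-md5s>-<part count>' rather than a plain MD5."""
    return '-' in normalised_etag(etag)

print(is_multipart_etag('"d41d8cd98f00b204e9800998ecf8427e"'))    # False: single-part MD5
print(is_multipart_etag('"9b2cf535f27731c974343645a3985328-5"'))  # True: 5-part upload
```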
... ...
@@ -478,7 +478,7 @@ def compare_filelists(src_list, dst_list, src_remote, dst_remote, delay_updates
478 478
         compare_md5 = 'md5' in cfg.sync_checks
479 479
         # Multipart-uploaded files don't have a valid md5 sum - it ends with "...-nn"
480 480
         if compare_md5:
481
-            if (src_remote == True and src_list[file]['md5'].find("-") >= 0) or (dst_remote == True and dst_list[file]['md5'].find("-") >= 0):
481
+            if (src_remote == True and '-' in src_list[file]['md5']) or (dst_remote == True and '-' in dst_list[file]['md5']):
482 482
                 compare_md5 = False
483 483
                 info(u"disabled md5 check for %s" % file)
484 484
         if attribs_match and compare_md5:
... ...
@@ -147,7 +147,7 @@ class MultiPartUpload(object):
147 147
         if remote_status is not None:
148 148
             if int(remote_status['size']) == chunk_size:
149 149
                 checksum = calculateChecksum(buffer, self.file, offset, chunk_size, self.s3.config.send_chunk)
150
-                remote_checksum = remote_status['checksum'].strip('"')
150
+                remote_checksum = remote_status['checksum'].strip('"\'')
151 151
                 if remote_checksum == checksum:
152 152
                     warning("MultiPart: size and md5sum match for %s part %d, skipping." % (self.uri, seq))
153 153
                     self.parts[seq] = remote_status['checksum']
... ...
@@ -62,7 +62,7 @@ try:
62 62
             return magic_.file(file)
63 63
 
64 64
 except ImportError, e:
65
-    if str(e).find("magic") >= 0:
65
+    if 'magic' in str(e):
66 66
         magic_message = "Module python-magic is not available."
67 67
     else:
68 68
         magic_message = "Module python-magic can't be used (%s)." % e.message
... ...
@@ -166,7 +166,7 @@ class S3Request(object):
166 166
                                           self.s3.get_hostname(self.resource['bucket']),
167 167
                                           self.resource['uri'],
168 168
                                           self.params,
169
-                                          S3Request.region_map.get(self.resource['bucket'], Config().default_region),
169
+                                          S3Request.region_map.get(self.resource['bucket'], Config().bucket_location),
170 170
                                           self.headers,
171 171
                                           self.body)
172 172
 
... ...
@@ -225,7 +225,7 @@ class S3(object):
225 225
         self.config = config
226 226
 
227 227
     def get_hostname(self, bucket):
228
-        if bucket and check_bucket_name_dns_conformity(bucket):
228
+        if bucket and check_bucket_name_dns_support(self.config.host_bucket, bucket):
229 229
             if self.redir_map.has_key(bucket):
230 230
                 host = self.redir_map[bucket]
231 231
             else:
... ...
@@ -239,7 +239,7 @@ class S3(object):
239 239
         self.redir_map[bucket] = redir_hostname
240 240
 
241 241
     def format_uri(self, resource):
242
-        if resource['bucket'] and not check_bucket_name_dns_conformity(resource['bucket']):
242
+        if resource['bucket'] and not check_bucket_name_dns_support(self.config.host_bucket, resource['bucket']):
243 243
             uri = "/%s%s" % (resource['bucket'], resource['uri'])
244 244
         else:
245 245
             uri = resource['uri']
... ...
@@ -450,8 +450,40 @@ class S3(object):
450 450
         request =  self.create_request("BUCKET_CREATE", bucket = bucket, headers = headers, extra="?lifecycle", body = body)
451 451
         return (request)
452 452
 
453
+    def _guess_content_type(self, filename):
454
+        content_type = self.config.default_mime_type
455
+        content_charset = None
456
+
457
+        if filename == "-" and not self.config.default_mime_type:
458
+            raise ParameterError("You must specify --mime-type or --default-mime-type for files uploaded from stdin.")
459
+
460
+        if self.config.guess_mime_type:
461
+            if self.config.use_mime_magic:
462
+                (content_type, content_charset) = mime_magic(filename)
463
+            else:
464
+                (content_type, content_charset) = mimetypes.guess_type(filename)
465
+        if not content_type:
466
+            content_type = self.config.default_mime_type
467
+        return (content_type, content_charset)
468
+
469
+    def content_type(self, filename=None):
470
+        # explicit command line argument always wins
471
+        content_type = self.config.mime_type
472
+        content_charset = None
473
+
474
+        if not content_type:
475
+            (content_type, content_charset) = self._guess_content_type(filename)
476
+
477
+        ## add charset to content type
478
+        if not content_charset:
479
+            content_charset = self.config.encoding.upper()
480
+        if self.add_encoding(filename, content_type) and content_charset is not None:
481
+            content_type = content_type + "; charset=" + content_charset
482
+
483
+        return content_type
484
+
453 485
     def add_encoding(self, filename, content_type):
454
-        if content_type.find("charset=") != -1:
486
+        if 'charset=' in content_type:
455 487
            return False
456 488
         exts = self.config.add_encoding_exts.split(',')
457 489
         if exts[0]=='':
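The refactor above moves MIME-type detection out of `object_put()` into `content_type()` / `_guess_content_type()`. A rough standalone sketch of the guessing path when `python-magic` is not in use; unlike s3cmd, which only appends a charset for extensions listed in `add_encoding_exts`, this sketch appends it unconditionally:

```python
import mimetypes

def guess_content_type(filename, default_mime='binary/octet-stream', encoding='UTF-8'):
    """Guess a Content-Type header value for an upload, falling back to a default."""
    content_type, charset = mimetypes.guess_type(filename)
    if not content_type:
        content_type = default_mime
    if not charset:
        charset = encoding.upper()
    return "%s; charset=%s" % (content_type, charset)

print(guess_content_type("notes.txt"))        # text/plain; charset=UTF-8
print(guess_content_type("blob.customext"))   # binary/octet-stream; charset=UTF-8
```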
... ...
@@ -492,23 +524,7 @@ class S3(object):
492 492
             headers["x-amz-server-side-encryption"] = "AES256"
493 493
 
494 494
         ## MIME-type handling
495
-        content_type = self.config.mime_type
496
-        content_charset = None
497
-        if filename != "-" and not content_type and self.config.guess_mime_type:
498
-            if self.config.use_mime_magic:
499
-                (content_type, content_charset) = mime_magic(filename)
500
-            else:
501
-                (content_type, content_charset) = mimetypes.guess_type(filename)
502
-        if not content_type:
503
-            content_type = self.config.default_mime_type
504
-        if not content_charset:
505
-            content_charset = self.config.encoding.upper()
506
-
507
-        ## add charset to content type
508
-        if self.add_encoding(filename, content_type) and content_charset is not None:
509
-            content_type = content_type + "; charset=" + content_charset
510
-
511
-        headers["content-type"] = content_type
495
+        headers["content-type"] = self.content_type(filename=filename)
512 496
 
513 497
         ## Other Amazon S3 attributes
514 498
         if self.config.acl_public:
... ...
@@ -540,7 +556,7 @@ class S3(object):
540 540
 
541 541
             if info is not None:
542 542
                 remote_size = int(info['headers']['content-length'])
543
-                remote_checksum = info['headers']['etag'].strip('"')
543
+                remote_checksum = info['headers']['etag'].strip('"\'')
544 544
                 if size == remote_size:
545 545
                     checksum = calculateChecksum('', file, 0, size, self.config.send_chunk)
546 546
                     if remote_checksum == checksum:
... ...
@@ -591,7 +607,8 @@ class S3(object):
591 591
         request_body = compose_batch_del_xml(bucket, batch)
592 592
         md5_hash = md5()
593 593
         md5_hash.update(request_body)
594
-        headers = {'content-md5': base64.b64encode(md5_hash.digest())}
594
+        headers = {'content-md5': base64.b64encode(md5_hash.digest()),
595
+                   'content-type': 'application/xml'}
595 596
         request = self.create_request("BATCH_DELETE", bucket = bucket, extra = '?delete', headers = headers, body = request_body)
596 597
         response = self.send_request(request)
597 598
         return response
... ...
@@ -614,6 +631,28 @@ class S3(object):
614 614
         debug("Received response '%s'" % (response))
615 615
         return response
616 616
 
617
+    def _sanitize_headers(self, headers):
618
+        to_remove = [
619
+            # from http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
620
+            'date',
621
+            'content-length',
622
+            'last-modified',
623
+            'content-md5',
624
+            'x-amz-version-id',
625
+            'x-amz-delete-marker',
626
+            # other headers returned from object_info() we don't want to send
627
+            'accept-ranges',
628
+            'etag',
629
+            'server',
630
+            'x-amz-id-2',
631
+            'x-amz-request-id',
632
+        ]
633
+
634
+        for h in to_remove + self.config.remove_headers:
635
+            if h.lower() in headers:
636
+                del headers[h.lower()]
637
+        return headers
638
+
617 639
     def object_copy(self, src_uri, dst_uri, extra_headers = None):
618 640
         if src_uri.type != "s3":
619 641
             raise ValueError("Expected URI type 's3', got '%s'" % src_uri.type)
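`_sanitize_headers()` above drops response-only headers before `object_modify()` replays an object's metadata in a copy-with-REPLACE request. A standalone sketch of the same filtering over a plain dict (the sample header values are made up):

```python
RESPONSE_ONLY_HEADERS = [
    'date', 'content-length', 'last-modified', 'content-md5',
    'x-amz-version-id', 'x-amz-delete-marker',
    'accept-ranges', 'etag', 'server', 'x-amz-id-2', 'x-amz-request-id',
]

def sanitize_headers(headers, extra_remove=()):
    """Return a copy of headers without response-only fields; keys compared case-insensitively."""
    drop = set(RESPONSE_ONLY_HEADERS) | set(h.lower() for h in extra_remove)
    return dict((k, v) for k, v in headers.items() if k.lower() not in drop)

info_headers = {'content-type': 'image/jpeg', 'etag': '"abc123"', 'content-length': '1024'}
print(sanitize_headers(info_headers))   # only 'content-type' survives
```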
... ...
@@ -621,23 +660,57 @@ class S3(object):
621 621
             raise ValueError("Expected URI type 's3', got '%s'" % dst_uri.type)
622 622
         headers = SortedDict(ignore_case = True)
623 623
         headers['x-amz-copy-source'] = "/%s/%s" % (src_uri.bucket(), self.urlencode_string(src_uri.object()))
624
-        ## TODO: For now COPY, later maybe add a switch?
625 624
         headers['x-amz-metadata-directive'] = "COPY"
626 625
         if self.config.acl_public:
627 626
             headers["x-amz-acl"] = "public-read"
628 627
         if self.config.reduced_redundancy:
629 628
             headers["x-amz-storage-class"] = "REDUCED_REDUNDANCY"
629
+        else:
630
+            headers["x-amz-storage-class"] = "STANDARD"
630 631
 
631 632
         ## Set server side encryption
632 633
         if self.config.server_side_encryption:
633 634
             headers["x-amz-server-side-encryption"] = "AES256"
634 635
 
635 636
         if extra_headers:
636
-            headers['x-amz-metadata-directive'] = "REPLACE"
637 637
             headers.update(extra_headers)
638
+
638 639
         request = self.create_request("OBJECT_PUT", uri = dst_uri, headers = headers)
639 640
         response = self.send_request(request)
640 641
         return response
642
+        
643
+    def object_modify(self, src_uri, dst_uri, extra_headers = None):
644
+        if src_uri.type != "s3":
645
+            raise ValueError("Expected URI type 's3', got '%s'" % src_uri.type)
646
+        if dst_uri.type != "s3":
647
+            raise ValueError("Expected URI type 's3', got '%s'" % dst_uri.type)
648
+
649
+        info_response = self.object_info(src_uri)
650
+        headers = info_response['headers']
651
+        headers = self._sanitize_headers(headers)
652
+        acl = self.get_acl(src_uri)
653
+
654
+        headers['x-amz-copy-source'] = "/%s/%s" % (src_uri.bucket(), self.urlencode_string(src_uri.object()))
655
+        headers['x-amz-metadata-directive'] = "REPLACE"
656
+
657
+        # cannot change between standard and reduced redundancy with a REPLACE.
658
+
659
+        ## Set server side encryption
660
+        if self.config.server_side_encryption:
661
+            headers["x-amz-server-side-encryption"] = "AES256"
662
+
663
+        if extra_headers:
664
+            headers.update(extra_headers)
665
+
666
+        if self.config.mime_type:
667
+            headers["content-type"] = self.config.mime_type
668
+
669
+        request = self.create_request("OBJECT_PUT", uri = src_uri, headers = headers)
670
+        response = self.send_request(request)
671
+
672
+        acl_response = self.set_acl(src_uri, acl)
673
+
674
+        return response
641 675
 
642 676
     def object_move(self, src_uri, dst_uri, extra_headers = None):
643 677
         response_copy = self.object_copy(src_uri, dst_uri, extra_headers)
... ...
@@ -663,6 +736,7 @@ class S3(object):
663 663
         return acl
664 664
 
665 665
     def set_acl(self, uri, acl):
666 667
         body = str(acl)
667 668
         debug(u"set_acl(%s): acl-xml: %s" % (uri, body))
668 669
 
... ...
@@ -670,6 +744,19 @@ class S3(object):
670
-            request = self.create_request("OBJECT_PUT", uri = uri, extra = "?acl", body = body)
671
-        else:
672
-            request = self.create_request("BUCKET_CREATE", bucket = uri.bucket(), extra = "?acl", body = body)
673
+        # dreamhost doesn't support set_acl properly
674
+        if 'objects.dreamhost.com' in self.config.host_base:
675
+            return { 'status' : 501 } # not implemented
676
+
677
+        headers = {'content-type': 'application/xml'}
678
+        if uri.has_object():
679
+            request = self.create_request("OBJECT_PUT", uri = uri, extra = "?acl",
680
+                                          headers = headers, body = body)
681
+        else:
682
+            request = self.create_request("BUCKET_CREATE", bucket = uri.bucket(), extra = "?acl",
683
+                                          headers = headers, body = body)
673 686
 
674 687
         response = self.send_request(request)
675 688
         return response
... ...
@@ -1223,7 +1310,7 @@ class S3(object):
1223 1223
             except KeyError:
1224 1224
                 pass
1225 1225
 
1226
-        response["md5match"] = md5_hash.find(response["md5"]) >= 0
1226
+        response["md5match"] = response["md5"] in md5_hash
1227 1227
         response["elapsed"] = timestamp_end - timestamp_start
1228 1228
         response["size"] = current_position
1229 1229
         response["speed"] = response["elapsed"] and float(response["size"]) / response["elapsed"] or float(-1)
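The md5match change above (and the similar ones later in this patch) just swaps str.find() for the equivalent, more idiomatic substring operator; the two forms behave identically:

    # "x in s" is the idiomatic spelling of "s.find(x) >= 0" for substring tests.
    etags = '"9e107d9d372bb6826bd81d3542a419d6"'
    assert etags.find("9e107d9d372bb6826bd81d3542a419d6") >= 0
    assert "9e107d9d372bb6826bd81d3542a419d6" in etags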
... ...
@@ -10,7 +10,7 @@ import sys
10 10
 from BidirMap import BidirMap
11 11
 from logging import debug
12 12
 import S3
13
-from Utils import unicodise, check_bucket_name_dns_conformity
13
+from Utils import unicodise, check_bucket_name_dns_conformity, check_bucket_name_dns_support
14 14
 import Config
15 15
 
16 16
 class S3Uri(object):
... ...
@@ -78,7 +78,7 @@ class S3UriS3(S3Uri):
78 78
         return u"/".join([u"s3:/", self._bucket, self._object])
79 79
 
80 80
     def is_dns_compatible(self):
81
-        return check_bucket_name_dns_conformity(self._bucket)
81
+        return check_bucket_name_dns_support(Config.Config().host_bucket, self._bucket)
82 82
 
83 83
     def public_url(self):
84 84
         if self.is_dns_compatible():
... ...
@@ -72,7 +72,7 @@ def stripNameSpace(xml):
72 72
     """
73 73
     removeNameSpace(xml) -- remove top-level AWS namespace
74 74
     """
75
-    r = re.compile('^(<?[^>]+?>\s?)(<\w+) xmlns=[\'"](http://[^\'"]+)[\'"](.*)', re.MULTILINE)
75
+    r = re.compile('^(<?[^>]+?>\s*)(<\w+) xmlns=[\'"](http://[^\'"]+)[\'"](.*)', re.MULTILINE)
76 76
     if r.match(xml):
77 77
         xmlns = r.match(xml).groups()[2]
78 78
         xml = r.sub("\\1\\2\\4", xml)
... ...
@@ -407,6 +407,20 @@ def check_bucket_name_dns_conformity(bucket):
407 407
         return False
408 408
 __all__.append("check_bucket_name_dns_conformity")
409 409
 
410
+def check_bucket_name_dns_support(bucket_host, bucket_name):
411
+    """
412
+    Check whether bucket_host is a template that supports dns-style bucket
413
+    addressing and whether bucket_name itself is dns compatible
414
+    """
415
+    if "%(bucket)s" not in bucket_host:
416
+        return False
417
+
418
+    try:
419
+        return check_bucket_name(bucket_name, dns_strict = True)
420
+    except Exceptions.ParameterError:
421
+        return False
422
+__all__.append("check_bucket_name_dns_support")
423
+
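The intent is that virtual-host (dns-style) addressing is only used when both conditions hold: the configured host_bucket template actually embeds the bucket name, and the name passes the dns-strict rules. Illustrative calls (the hostnames are examples, not values taken from this patch):

    check_bucket_name_dns_support("%(bucket)s.s3.amazonaws.com", "my-bucket")  # True
    check_bucket_name_dns_support("%(bucket)s.s3.amazonaws.com", "My_Bucket")  # False - name is not dns-strict
    check_bucket_name_dns_support("s3.example.com", "my-bucket")               # False - no %(bucket)s in template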
410 424
 def getBucketFromHostname(hostname):
411 425
     """
412 426
     bucket, success = getBucketFromHostname(hostname)
... ...
@@ -1,4 +1,4 @@
1
-#!/usr/bin/env python
1
+#!/usr/bin/env python2
2 2
 # -*- coding=utf-8 -*-
3 3
 
4 4
 ## Amazon S3cmd - testsuite
... ...
@@ -90,7 +90,7 @@ if not os.path.isdir('testsuite/crappy-file-name'):
90 90
 def test(label, cmd_args = [], retcode = 0, must_find = [], must_not_find = [], must_find_re = [], must_not_find_re = []):
91 91
     def command_output():
92 92
         print "----"
93
-        print " ".join([arg.find(" ")>=0 and "'%s'" % arg or arg for arg in cmd_args])
93
+        print " ".join([" " in arg and "'%s'" % arg or arg for arg in cmd_args])
94 94
         print "----"
95 95
         print stdout
96 96
         print "----"
... ...
@@ -175,7 +175,7 @@ def test(label, cmd_args = [], retcode = 0, must_find = [], must_not_find = [],
175 175
 
176 176
 def test_s3cmd(label, cmd_args = [], **kwargs):
177 177
     if not cmd_args[0].endswith("s3cmd"):
178
-        cmd_args.insert(0, "python")
178
+        cmd_args.insert(0, "python2")
179 179
         cmd_args.insert(1, "s3cmd")
180 180
 
181 181
     return test(label, cmd_args, **kwargs)
... ...
@@ -219,6 +219,11 @@ def test_copy(label, src_file, dst_file):
219 219
     cmd.append(dst_file)
220 220
     return test(label, cmd)
221 221
 
222
+def test_wget_HEAD(label, src_file, **kwargs):
223
+    cmd = ['wget', '-q', '-S', '--method=HEAD']
224
+    cmd.append(src_file)
225
+    return test(label, cmd, **kwargs)
226
+
222 227
 bucket_prefix = u"%s-" % getpass.getuser()
223 228
 print "Using bucket prefix: '%s'" % bucket_prefix
224 229
 
... ...
@@ -245,7 +250,7 @@ while argv:
245 245
             print "Bucket prefix option must explicitly supply a bucket name prefix"
246 246
             sys.exit(0)
247 247
         continue
248
-    if arg.find("..") >= 0:
248
+    if ".." in arg:
249 249
         range_idx = arg.find("..")
250 250
         range_start = arg[:range_idx] or 0
251 251
         range_end = arg[range_idx+2:] or 999
... ...
@@ -422,7 +427,7 @@ test_s3cmd("Rename within S3", ['mv', '%s/xyz/etc/logo.png' % pbucket(1), '%s/xy
422 422
 
423 423
 ## ====== Rename (NoSuchKey)
424 424
 test_s3cmd("Rename (NoSuchKey)", ['mv', '%s/xyz/etc/logo.png' % pbucket(1), '%s/xyz/etc2/Logo.PNG' % pbucket(1)],
425
-    retcode = EX_SOFTWARE,
425
+    retcode = EX_NOTFOUND,
426 426
     must_find_re = [ 'ERROR:.*NoSuchKey' ],
427 427
     must_not_find = [ 'File %s/xyz/etc/logo.png moved to %s/xyz/etc2/Logo.PNG' % (pbucket(1), pbucket(1)) ])
428 428
 
... ...
@@ -488,6 +493,36 @@ test_s3cmd("Verify ACL and MIME type", ['info', '%s/copy/etc2/Logo.PNG' % pbucke
488 488
                      "ACL:.*\*anon\*: READ",
489 489
                      "URL:.*http://%s.%s/copy/etc2/Logo.PNG" % (bucket(2), cfg.host_base) ])
490 490
 
491
+## ====== Modify MIME type
492
+test_s3cmd("Modify MIME type", ['modify', '--mime-type=binary/octet-stream', '%s/copy/etc2/Logo.PNG' % pbucket(2) ])
493
+
494
+test_s3cmd("Verify ACL and MIME type", ['info', '%s/copy/etc2/Logo.PNG' % pbucket(2) ],
495
+    must_find_re = [ "MIME type:.*binary/octet-stream",
496
+                     "ACL:.*\*anon\*: READ",
497
+                     "URL:.*http://%s.%s/copy/etc2/Logo.PNG" % (bucket(2), cfg.host_base) ])
498
+
499
+test_s3cmd("Modify MIME type back", ['modify', '--mime-type=image/png', '%s/copy/etc2/Logo.PNG' % pbucket(2) ])
500
+
501
+test_s3cmd("Verify ACL and MIME type", ['info', '%s/copy/etc2/Logo.PNG' % pbucket(2) ],
502
+    must_find_re = [ "MIME type:.*image/png",
503
+                     "ACL:.*\*anon\*: READ",
504
+                     "URL:.*http://%s.%s/copy/etc2/Logo.PNG" % (bucket(2), cfg.host_base) ])
505
+
506
+test_s3cmd("Add cache-control header", ['modify', '--add-header=cache-control: max-age=3600', '%s/copy/etc2/Logo.PNG' % pbucket(2) ],
507
+    must_find_re = [ "File .* modified" ])
508
+
509
+if have_wget:
510
+    test_wget_HEAD("HEAD check Cache-Control present", 'http://%s.%s/copy/etc2/Logo.PNG' % (bucket(2), cfg.host_base),
511
+                   must_find_re = [ "Cache-Control: max-age=3600" ])
512
+
513
+test_s3cmd("Remove cache-control header", ['modify', '--remove-header=cache-control', '%s/copy/etc2/Logo.PNG' % pbucket(2) ],
514
+    must_find_re = [ "File .* modified" ])
515
+
516
+if have_wget:
517
+    test_wget_HEAD("HEAD check Cache-Control not present", 'http://%s.%s/copy/etc2/Logo.PNG' % (bucket(2), cfg.host_base),
518
+                   must_not_find_re = [ "Cache-Control: max-age=3600" ])
519
+
520
+
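For reference, the wget HEAD checks above can also be reproduced directly in Python (purely illustrative; the test suite itself shells out to wget, and the bucket hostname here is an example):

    # Illustrative only - equivalent of the wget -S --method=HEAD check.
    import httplib
    conn = httplib.HTTPConnection("example-bucket.s3.amazonaws.com")
    conn.request("HEAD", "/copy/etc2/Logo.PNG")
    print conn.getresponse().getheader("cache-control")   # "max-age=3600" while the header is set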
491 521
 ## ====== Rename within S3
492 522
 test_s3cmd("Rename within S3", ['mv', '%s/copy/etc2/Logo.PNG' % pbucket(2), '%s/copy/etc/logo.png' % pbucket(2)],
493 523
     must_find = [ 'File %s/copy/etc2/Logo.PNG moved to %s/copy/etc/logo.png' % (pbucket(2), pbucket(2))])
... ...
@@ -1,4 +1,4 @@
1
-#!/usr/bin/env python
1
+#!/usr/bin/env python2
2 2
 
3 3
 ## --------------------------------------------------------------------
4 4
 ## s3cmd - S3 client
... ...
@@ -173,9 +173,9 @@ def subcmd_bucket_list(s3, uri):
173 173
             "uri": uri.compose_uri(bucket, prefix["Prefix"])})
174 174
 
175 175
     for object in response["list"]:
176
-        md5 = object['ETag'].strip('"')
176
+        md5 = object['ETag'].strip('"\'')
177 177
         if cfg.list_md5:
178
-            if md5.find('-') >= 0: # need to get md5 from the object
178
+            if '-' in md5: # need to get md5 from the object
179 179
                 object_uri = uri.compose_uri(bucket, object["Key"])
180 180
                 info_response = s3.object_info(S3Uri(object_uri))
181 181
                 try:
... ...
@@ -753,7 +753,7 @@ def cmd_cp(args):
753 753
 
754 754
 def cmd_modify(args):
755 755
     s3 = S3(Config())
756
-    return subcmd_cp_mv(args, s3.object_copy, "modify", u"File %(src)s modified")
756
+    return subcmd_cp_mv(args, s3.object_modify, "modify", u"File %(src)s modified")
757 757
 
758 758
 def cmd_mv(args):
759 759
     s3 = S3(Config())
... ...
@@ -775,7 +775,7 @@ def cmd_info(args):
775 775
                 output(u"   File size: %s" % info['headers']['content-length'])
776 776
                 output(u"   Last mod:  %s" % info['headers']['last-modified'])
777 777
                 output(u"   MIME type: %s" % info['headers']['content-type'])
778
-                md5 = info['headers']['etag'].strip('"')
778
+                md5 = info['headers']['etag'].strip('"\'')
779 779
                 try:
780 780
                     md5 = info['s3cmd-attrs']['md5']
781 781
                 except KeyError:
... ...
@@ -1403,7 +1403,7 @@ def cmd_sync_local2remote(args):
1403 1403
 
1404 1404
         # Only print out the result if any work has been done or
1405 1405
         # if the user asked for verbose output
1406
-        outstr = "Done. Uploaded %d bytes in %0.1f seconds, %0.2f %sB/s.  Copied %d files saving %d bytes transfer." % (total_size, total_elapsed, speed_fmt[0], speed_fmt[1], n_copies, saved_bytes)
1406
+        outstr = "Done. Uploaded %d bytes in %0.1f seconds, %0.2f %sB/s. Copied %d files saving %d bytes transfer." % (total_size, total_elapsed, speed_fmt[0], speed_fmt[1], n_copies, saved_bytes)
1407 1407
         if total_size + saved_bytes > 0:
1408 1408
             output(outstr)
1409 1409
         else:
... ...
@@ -1777,10 +1777,10 @@ def run_configure(config_file, args):
1777 1777
     options = [
1778 1778
         ("access_key", "Access Key", "Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables."),
1779 1779
         ("secret_key", "Secret Key"),
1780
-        ("default_region", "Default Region"),
1780
+        ("bucket_location", "Default Region"),
1781 1781
         ("gpg_passphrase", "Encryption password", "Encryption password is used to protect your files from reading\nby unauthorized persons while in transfer to S3"),
1782 1782
         ("gpg_command", "Path to GPG program"),
1783
-        ("use_https", "Use HTTPS protocol", "When using secure HTTPS protocol all communication with Amazon S3\nservers is protected from 3rd party eavesdropping. This method is\nslower than plain HTTP and can't be used if you're behind a proxy"),
1783
+        ("use_https", "Use HTTPS protocol", "When using secure HTTPS protocol all communication with Amazon S3\nservers is protected from 3rd party eavesdropping. This method is\nslower than plain HTTP, and can only be proxied with Python 2.7 or newer"),
1784 1784
         ("proxy_host", "HTTP Proxy server name", "On some networks all internet access must go through a HTTP proxy.\nTry setting it here if you can't connect to S3 directly"),
1785 1785
         ("proxy_port", "HTTP Proxy server port"),
1786 1786
         ]
... ...
@@ -1801,7 +1801,7 @@ def run_configure(config_file, args):
1801 1801
             for option in options:
1802 1802
                 prompt = option[1]
1803 1803
                 ## Option-specific handling
1804
-                if option[0] == 'proxy_host' and getattr(cfg, 'use_https') == True:
1804
+                if option[0] == 'proxy_host' and getattr(cfg, 'use_https') == True and sys.hexversion < 0x02070000:
1805 1805
                     setattr(cfg, option[0], "")
1806 1806
                     continue
1807 1807
                 if option[0] == 'proxy_port' and getattr(cfg, 'proxy_host') == "":
... ...
@@ -2134,6 +2134,8 @@ def main():
2134 2134
 
2135 2135
     optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though (only for file transfer commands)")
2136 2136
 
2137
+    optparser.add_option("-s", "--ssl", dest="use_https", action="store_true", help="Use HTTPS connection when communicating with S3.")
2138
+    optparser.add_option(      "--no-ssl", dest="use_https", action="store_false", help="Don't use HTTPS. (default)")
2137 2139
     optparser.add_option("-e", "--encrypt", dest="encrypt", action="store_true", help="Encrypt files before uploading to S3.")
2138 2140
     optparser.add_option(      "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.")
2139 2141
     optparser.add_option("-f", "--force", dest="force", action="store_true", help="Force overwrite and other dangerous operations.")
... ...
@@ -2171,8 +2173,9 @@ def main():
2171 2171
     optparser.add_option(      "--ignore-failed-copy", dest="ignore_failed_copy", action="store_true", help="Don't exit unsuccessfully because of missing keys")
2172 2172
 
2173 2173
     optparser.add_option(      "--files-from", dest="files_from", action="append", metavar="FILE", help="Read list of source-file names from FILE. Use - to read from stdin.")
2174
-    optparser.add_option(      "--bucket-location", dest="bucket_location", help="Datacentre to create bucket in. As of now the datacenters are: US (default), EU, ap-northeast-1, ap-southeast-1, sa-east-1, us-west-1 and us-west-2")
2174
+    optparser.add_option(      "--region", "--bucket-location", metavar="REGION", dest="bucket_location", help="Region to create bucket in. As of now the regions are: us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1, ap-northeast-1, ap-southeast-1, ap-southeast-2, sa-east-1")
2175 2175
     optparser.add_option(      "--reduced-redundancy", "--rr", dest="reduced_redundancy", action="store_true", help="Store object with 'Reduced redundancy'. Lower per-GB price. [put, cp, mv]")
2176
+    optparser.add_option(      "--no-reduced-redundancy", "--no-rr", dest="reduced_redundancy", action="store_false", help="Store object without 'Reduced redundancy'. Higher per-GB price. [put, cp, mv]")
2176 2177
 
2177 2178
     optparser.add_option(      "--access-logging-target-prefix", dest="log_target_prefix", help="Target prefix for access logs (S3 URI) (for [cfmodify] and [accesslog] commands)")
2178 2179
     optparser.add_option(      "--no-access-logging", dest="log_target_prefix", action="store_false", help="Disable access logging (for [cfmodify] and [accesslog] commands)")
... ...
@@ -2184,8 +2187,9 @@ def main():
2184 2184
     optparser.add_option("-m", "--mime-type", dest="mime_type", type="mimetype", metavar="MIME/TYPE", help="Force MIME-type. Override both --default-mime-type and --guess-mime-type.")
2185 2185
 
2186 2186
     optparser.add_option(      "--add-header", dest="add_header", action="append", metavar="NAME:VALUE", help="Add a given HTTP header to the upload request. Can be used multiple times. For instance set 'Expires' or 'Cache-Control' headers (or both) using this option.")
2187
+    optparser.add_option(      "--remove-header", dest="remove_headers", action="append", metavar="NAME", help="Remove a given HTTP header.  Can be used multiple times.  For instance, remove 'Expires' or 'Cache-Control' headers (or both) using this option. [modify]")
2187 2188
 
2188
-    optparser.add_option(      "--server-side-encryption", dest="server_side_encryption", action="store_true", help="Specifies that server-side encryption will be used when putting objects.")
2189
+    optparser.add_option(      "--server-side-encryption", dest="server_side_encryption", action="store_true", help="Specifies that server-side encryption will be used when putting objects. [put, sync, cp, modify]")
2189 2190
 
2190 2191
     optparser.add_option(      "--encoding", dest="encoding", metavar="ENCODING", help="Override autodetected terminal and filesystem encoding (character set). Autodetected: %s" % preferred_encoding)
2191 2192
     optparser.add_option(      "--add-encoding-exts", dest="add_encoding_exts", metavar="EXTENSIONs", help="Add encoding to these comma delimited extensions i.e. (css,js,html) when uploading to S3 )")
... ...
@@ -2223,7 +2227,9 @@ def main():
2223 2223
     optparser.add_option("-F", "--follow-symlinks", dest="follow_symlinks", action="store_true", default=False, help="Follow symbolic links as if they are regular files")
2224 2224
     optparser.add_option(      "--cache-file", dest="cache_file", action="store", default="",  metavar="FILE", help="Cache FILE containing local source MD5 values")
2225 2225
     optparser.add_option("-q", "--quiet", dest="quiet", action="store_true", default=False, help="Silence output on stdout")
2226
-    optparser.add_option(      "--region", dest="default_region", action="store", help="Override the default region")
2226
+    optparser.add_option("--ca-certs", dest="ca_certs_file", action="store", default=None, help="Path to SSL CA certificate FILE (instead of system default)")
2227
+    optparser.add_option("--check-certificate", dest="check_ssl_certificate", action="store_true", help="Check SSL certificate validity")
2228
+    optparser.add_option("--no-check-certificate", dest="check_ssl_certificate", action="store_false", help="Check SSL certificate validity")
2227 2229
 
2228 2230
     optparser.set_usage(optparser.usage + " COMMAND [parameters]")
2229 2231
     optparser.set_description('S3cmd is a tool for managing objects in '+
... ...
@@ -2303,6 +2309,10 @@ def main():
2303 2303
             debug(u"Updating Config.Config extra_headers[%s] -> %s" % (key.strip(), val.strip()))
2304 2304
             cfg.extra_headers[key.strip()] = val.strip()
2305 2305
 
2306
+    # Process --remove-header
2307
+    if options.remove_headers:
2308
+        cfg.remove_headers = options.remove_headers
2309
+
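This closes the loop on --remove-header: optparse collects the names into options.remove_headers, they are copied onto cfg.remove_headers here, and _sanitize_headers() later deletes the matching lower-cased keys before the REPLACE copy. A small illustration of why both sides are lower-cased:

    # Illustrative only: header names are matched case-insensitively.
    remove_headers = ['Cache-Control']                   # as typed on the command line
    headers = {'cache-control': 'max-age=3600',          # as returned by object_info()
               'content-type': 'image/png'}
    for h in remove_headers:
        if h.lower() in headers:
            del headers[h.lower()]
    assert headers == {'content-type': 'image/png'}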
2306 2310
     ## --acl-grant/--acl-revoke arguments are pre-parsed by OptionS3ACL()
2307 2311
     if options.acl_grants:
2308 2312
         for grant in options.acl_grants:
... ...
@@ -2437,14 +2447,10 @@ def main():
2437 2437
         error(u"Not enough parameters for command '%s'" % command)
2438 2438
         sys.exit(EX_USAGE)
2439 2439
 
2440
-    try:
2441
-        rc = cmd_func(args)
2442
-        if rc is None: # if we missed any cmd_*() returns
2443
-            rc = EX_GENERAL
2444
-        return rc
2445
-    except S3Error, e:
2446
-        error(u"S3 error: %s" % e)
2447
-        sys.exit(EX_SOFTWARE)
2440
+    rc = cmd_func(args)
2441
+    if rc is None: # if we missed any cmd_*() returns
2442
+        rc = EX_GENERAL
2443
+    return rc
2448 2444
 
2449 2445
 def report_exception(e, msg=''):
2450 2446
         sys.stderr.write(u"""
... ...
@@ -2462,7 +2468,10 @@ def report_exception(e, msg=''):
2462 2462
 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
2463 2463
 
2464 2464
 """ % msg)
2465
-        s = u' '.join([unicodise(a) for a in sys.argv])
2465
+        if isinstance(e, ImportError):
2466
+            s = u' '.join([a for a in sys.argv])
2467
+        else:
2468
+            s = u' '.join([unicodise(a) for a in sys.argv])
2466 2469
         sys.stderr.write(u"Invoked as: %s\n" % s)
2467 2470
 
2468 2471
         tb = traceback.format_exc(sys.exc_info())
... ...
@@ -2525,7 +2534,7 @@ if __name__ == '__main__':
2525 2525
 
2526 2526
     except ImportError, e:
2527 2527
         report_exception(e)
2528
-        sys.exit(EX_GENERAL)
2528
+        sys.exit(1)
2529 2529
 
2530 2530
     except (ParameterError, InvalidFileError), e:
2531 2531
         error(u"Parameter problem: %s" % e)
... ...
@@ -2535,7 +2544,11 @@ if __name__ == '__main__':
2535 2535
         error(u"S3 Temporary Error: %s.  Please try again later." % e)
2536 2536
         sys.exit(EX_TEMPFAIL)
2537 2537
 
2538
-    except (S3Error, S3Exception, S3ResponseError, CloudFrontError), e:
2538
+    except S3Error, e:
2539
+        error(u"S3 error: %s" % e)
2540
+        sys.exit(e.get_error_code())
2541
+
2542
+    except (S3Exception, S3ResponseError, CloudFrontError), e:
2539 2543
         report_exception(e)
2540 2544
         sys.exit(EX_SOFTWARE)
2541 2545
 
... ...
@@ -50,7 +50,8 @@ s3cmd \fBrestore\fR \fIs3://BUCKET/OBJECT\fR
50 50
 Restore file from Glacier storage
51 51
 .TP
52 52
 s3cmd \fBsync\fR \fILOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR\fR
53
-Synchronize a directory tree to S3
53
+Synchronize a directory tree to S3 (checks file freshness using size and md5
54
+checksum, unless overridden by options; see below)
54 55
 .TP
55 56
 s3cmd \fBdu\fR \fI[s3://BUCKET[/PREFIX]]\fR
56 57
 Disk usage by buckets
... ...
@@ -47,7 +47,7 @@ if not os.getenv("S3CMD_PACKAGING"):
47 47
     man_path = os.getenv("S3CMD_INSTPATH_MAN") or "share/man"
48 48
     doc_path = os.getenv("S3CMD_INSTPATH_DOC") or "share/doc/packages"
49 49
     data_files = [
50
-        (doc_path+"/s3cmd", [ "README", "INSTALL", "NEWS" ]),
50
+        (doc_path+"/s3cmd", [ "README.md", "INSTALL", "NEWS" ]),
51 51
         (man_path+"/man1", [ "s3cmd.1" ] ),
52 52
     ]
53 53
 else: