* additional
* ambiguous
* anonymous
* anything
* application
* because
* before
* building
* capabilities
* circumstances
* commit
* committer
* compresses
* concatenated
* config
* container
* container's
* current
* definition
* delimiter
* disassociates
* discovery
* distributed
* doesnotexist
* downloads
* duplicates
* either
* enhancing
* enumerate
* escapable
* exactly
* expect
* expectations
* expected
* explicitly
* false
* filesystem
* following
* forbidden
* git with
* healthcheck
* ignore
* independent
* inheritance
* investigating
* irrelevant
* it
* logging
* looking
* membership
* mimic
* minimum
* modify
* mountpoint
* multiline
* notifier
* outputting
* outside
* overridden
* override
* parsable
* plugins
* precedence
* propagation
* provided
* provides
* registries
* repositories
* returning
* settings
* should
* signals
* someone
* something
* specifically
* successfully
* synchronize
* they've
* thinking
* uninitialized
* unintentionally
* unmarshaling
* unnamed
* unreferenced
* verify
Signed-off-by: Josh Soref <jsoref@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
...
@@ -190,7 +190,7 @@ be found.
 * Update runc to 54296cf40ad8143b62dbcaa1d90e520a2136ddfe [#31666](https://github.com/docker/docker/pull/31666)
 * Ignore cgroup2 mountpoints [opencontainers/runc#1266](https://github.com/opencontainers/runc/pull/1266)
 * Update containerd to 4ab9917febca54791c5f071a9d1f404867857fcc [#31662](https://github.com/docker/docker/pull/31662) [#31852](https://github.com/docker/docker/pull/31852)
- * Register healtcheck service before calling restore() [docker/containerd#609](https://github.com/docker/containerd/pull/609)
+ * Register healthcheck service before calling restore() [docker/containerd#609](https://github.com/docker/containerd/pull/609)
 * Fix `docker exec` not working after unattended upgrades that reload apparmor profiles [#31773](https://github.com/docker/docker/pull/31773)
 * Fix unmounting layer without merge dir with Overlay2 [#31069](https://github.com/docker/docker/pull/31069)
 * Do not ignore "volume in use" errors when force-delete [#31450](https://github.com/docker/docker/pull/31450)
...
@@ -1087,12 +1087,12 @@ installing docker, please make sure to update them accordingly.
 + Add security options to `docker info` output [#21172](https://github.com/docker/docker/pull/21172) [#23520](https://github.com/docker/docker/pull/23520)
 + Add insecure registries to `docker info` output [#20410](https://github.com/docker/docker/pull/20410)
 + Extend Docker authorization with TLS user information [#21556](https://github.com/docker/docker/pull/21556)
-+ devicemapper: expose Mininum Thin Pool Free Space through `docker info` [#21945](https://github.com/docker/docker/pull/21945)
++ devicemapper: expose Minimum Thin Pool Free Space through `docker info` [#21945](https://github.com/docker/docker/pull/21945)
 * API now returns a JSON object when an error occurs making it more consistent [#22880](https://github.com/docker/docker/pull/22880)
 - Prevent `docker run -i --restart` from hanging on exit [#22777](https://github.com/docker/docker/pull/22777)
 - Fix API/CLI discrepancy on hostname validation [#21641](https://github.com/docker/docker/pull/21641)
 - Fix discrepancy in the format of sizes in `stats` from HumanSize to BytesSize [#21773](https://github.com/docker/docker/pull/21773)
-- authz: when request is denied return forbbiden exit code (403) [#22448](https://github.com/docker/docker/pull/22448)
+- authz: when request is denied return forbidden exit code (403) [#22448](https://github.com/docker/docker/pull/22448)
 - Windows: fix tty-related displaying issues [#23878](https://github.com/docker/docker/pull/23878)

 ### Runtime
...
@@ -1887,7 +1887,7 @@ by another client (#15489)

 #### Remote API

-- Fix unmarshalling of Command and Entrypoint
+- Fix unmarshaling of Command and Entrypoint
 - Set limit for minimum client version supported
 - Validate port specification
 - Return proper errors when attach/reattach fail
...
@@ -2572,7 +2572,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
 - Fix ADD caching issue with . prefixed path
 - Fix docker build on devicemapper by reverting sparse file tar option
 - Fix issue with file caching and prevent wrong cache hit
-* Use same error handling while unmarshalling CMD and ENTRYPOINT
+* Use same error handling while unmarshaling CMD and ENTRYPOINT

 #### Documentation

...
@@ -93,7 +93,7 @@ RUN set -x \
 && rm -rf "$SECCOMP_PATH"

 # Install Go
-# We don't have official binary golang 1.7.5 tarballs for ARM64, eigher for Go or
+# We don't have official binary golang 1.7.5 tarballs for ARM64, either for Go or
 # bootstrap, so we use golang-go (1.6) as bootstrap to build Go from source code.
 # We don't use the official ARMv6 released binaries as a GOROOT_BOOTSTRAP, because
 # not all ARM64 platforms support 32-bit mode. 32-bit mode is optional for ARMv8.
...
@@ -102,7 +102,7 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
 }

 // doesn't matter what version the client is on, we're using this internally only
- // also do we need size? i'm thinkin no we don't
+ // also do we need size? i'm thinking no we don't
 raw, err := s.backend.ContainerInspect(containerName, false, api.DefaultVersion)
 if err != nil {
 return err
...
@@ -1637,7 +1637,7 @@ definitions:
 may not be applied if the version number has changed from the last read. In other words,
 if two update requests specify the same base version, only one of the requests can succeed.
 As a result, two separate update requests that happen at the same time will not
- unintentially overwrite each other.
+ unintentionally overwrite each other.
 type: "object"
 properties:
 Index:
...
@@ -2,7 +2,7 @@ package swarm

 import "time"

-// ClusterInfo represents info about the cluster for outputing in "info"
+// ClusterInfo represents info about the cluster for outputting in "info"
 // it contains the same information as "Swarm", but without the JoinTokens
 type ClusterInfo struct {
 ID string
...
@@ -20,7 +20,7 @@ func TestGetAllAllowed(t *testing.T) {
 })

 buildArgs.AddMetaArg("ArgFromMeta", strPtr("frommeta1"))
- buildArgs.AddMetaArg("ArgFromMetaOverriden", strPtr("frommeta2"))
+ buildArgs.AddMetaArg("ArgFromMetaOverridden", strPtr("frommeta2"))
 buildArgs.AddMetaArg("ArgFromMetaNotUsed", strPtr("frommeta3"))

 buildArgs.AddArg("ArgOverriddenByOptions", strPtr("fromdockerfile2"))
...
@@ -28,7 +28,7 @@ func TestGetAllAllowed(t *testing.T) {
 buildArgs.AddArg("ArgNoDefaultInDockerfile", nil)
 buildArgs.AddArg("ArgNoDefaultInDockerfileFromOptions", nil)
 buildArgs.AddArg("ArgFromMeta", nil)
- buildArgs.AddArg("ArgFromMetaOverriden", strPtr("fromdockerfile3"))
+ buildArgs.AddArg("ArgFromMetaOverridden", strPtr("fromdockerfile3"))

 all := buildArgs.GetAllAllowed()
 expected := map[string]string{
...
@@ -37,7 +37,7 @@ func TestGetAllAllowed(t *testing.T) {
 "ArgWithDefaultInDockerfile": "fromdockerfile1",
 "ArgNoDefaultInDockerfileFromOptions": "fromopt3",
 "ArgFromMeta": "frommeta1",
- "ArgFromMetaOverriden": "fromdockerfile3",
+ "ArgFromMetaOverridden": "fromdockerfile3",
 }
 assert.Equal(t, expected, all)
 }
...
@@ -91,7 +91,7 @@ type Client struct {
 // CheckRedirect specifies the policy for dealing with redirect responses:
 // If the request is non-GET return `ErrRedirect`. Otherwise use the last response.
 //
-// Go 1.8 changes behavior for HTTP redirects (specificlaly 301, 307, and 308) in the client .
+// Go 1.8 changes behavior for HTTP redirects (specifically 301, 307, and 308) in the client .
 // The Docker client (and by extension docker API client) can be made to to send a request
 // like POST /containers//start where what would normally be in the name section of the URL is empty.
 // This triggers an HTTP 301 from the daemon.
...
@@ -14,7 +14,7 @@ import (
 // indicated by the given condition, either "not-running" (default),
 // "next-exit", or "removed".
 //
-// If this client's API version is beforer 1.30, condition is ignored and
+// If this client's API version is before 1.30, condition is ignored and
 // ContainerWait will return immediately with the two channels, as the server
 // will wait as if the condition were "not-running".
 //
...
@@ -23,7 +23,7 @@ import (
 // then returns two channels on which the caller can wait for the exit status
 // of the container or an error if there was a problem either beginning the
 // wait request or in getting the response. This allows the caller to
-// sychronize ContainerWait with other calls, such as specifying a
+// synchronize ContainerWait with other calls, such as specifying a
 // "next-exit" condition before issuing a ContainerStart request.
 func (cli *Client) ContainerWait(ctx context.Context, containerID string, condition container.WaitCondition) (<-chan container.ContainerWaitOKBody, <-chan error) {
 if versions.LessThan(cli.ClientVersion(), "1.30") {
...
@@ -269,7 +269,7 @@ func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfi
 cResources := &container.HostConfig.Resources

 // validate NanoCPUs, CPUPeriod, and CPUQuota
- // Becuase NanoCPU effectively updates CPUPeriod/CPUQuota,
+ // Because NanoCPU effectively updates CPUPeriod/CPUQuota,
 // once NanoCPU is already set, updating CPUPeriod/CPUQuota will be blocked, and vice versa.
 // In the following we make sure the intended update (resources) does not conflict with the existing (cResource).
 if resources.NanoCPUs > 0 && cResources.CPUPeriod > 0 {
...
@@ -185,7 +185,7 @@ const (
 // timeouts, and avoiding goroutine leaks. Wait must be called without holding
 // the state lock. Returns a channel from which the caller will receive the
 // result. If the container exited on its own, the result's Err() method will
-// be nil and its ExitCode() method will return the conatiners exit code,
+// be nil and its ExitCode() method will return the container's exit code,
 // otherwise, the results Err() method will return an error indicating why the
 // wait operation failed.
 func (s *State) Wait(ctx context.Context, condition WaitCondition) <-chan StateStatus {
...
@@ -343,7 +343,7 @@ func (r *controller) Shutdown(ctx context.Context) error {
 }

 // add a delay for gossip converge
- // TODO(dongluochen): this delay shoud be configurable to fit different cluster size and network delay.
+ // TODO(dongluochen): this delay should be configurable to fit different cluster size and network delay.
 time.Sleep(defaultGossipConvergeDelay)
 }

...
@@ -87,8 +87,8 @@ func TestDiscoveryOpts(t *testing.T) {
 t.Fatalf("Heartbeat - Expected : %v, Actual : %v", expected, heartbeat)
 }

- discaveryTTL := fmt.Sprintf("%d", defaultDiscoveryTTLFactor-1)
- clusterOpts = map[string]string{"discovery.ttl": discaveryTTL}
+ discoveryTTL := fmt.Sprintf("%d", defaultDiscoveryTTLFactor-1)
+ clusterOpts = map[string]string{"discovery.ttl": discoveryTTL}
 heartbeat, ttl, err = discoveryOpts(clusterOpts)
 if err == nil && heartbeat == 0 {
 t.Fatal("discovery.heartbeat must be positive")
...
@@ -247,7 +247,7 @@ func TestLoadBufferedEventsOnlyFromPast(t *testing.T) {
 }

 // #13753
-func TestIngoreBufferedWhenNoTimes(t *testing.T) {
+func TestIgnoreBufferedWhenNoTimes(t *testing.T) {
 m1, err := eventstestutils.Scan("2016-03-07T17:28:03.022433271+02:00 container die 0b863f2a26c18557fc6cdadda007c459f9ec81b874780808138aea78a3595079 (image=ubuntu, name=small_hoover)")
 if err != nil {
 t.Fatal(err)
...
@@ -174,27 +174,27 @@ func writeLVMConfig(root string, cfg directLVMConfig) error {
 func setupDirectLVM(cfg directLVMConfig) error {
 pvCreate, err := exec.LookPath("pvcreate")
 if err != nil {
- return errors.Wrap(err, "error lookuping up command `pvcreate` while setting up direct lvm")
+ return errors.Wrap(err, "error looking up command `pvcreate` while setting up direct lvm")
 }

 vgCreate, err := exec.LookPath("vgcreate")
 if err != nil {
- return errors.Wrap(err, "error lookuping up command `vgcreate` while setting up direct lvm")
+ return errors.Wrap(err, "error looking up command `vgcreate` while setting up direct lvm")
 }

 lvCreate, err := exec.LookPath("lvcreate")
 if err != nil {
- return errors.Wrap(err, "error lookuping up command `lvcreate` while setting up direct lvm")
+ return errors.Wrap(err, "error looking up command `lvcreate` while setting up direct lvm")
 }

 lvConvert, err := exec.LookPath("lvconvert")
 if err != nil {
- return errors.Wrap(err, "error lookuping up command `lvconvert` while setting up direct lvm")
+ return errors.Wrap(err, "error looking up command `lvconvert` while setting up direct lvm")
 }

 lvChange, err := exec.LookPath("lvchange")
 if err != nil {
- return errors.Wrap(err, "error lookuping up command `lvchange` while setting up direct lvm")
+ return errors.Wrap(err, "error looking up command `lvchange` while setting up direct lvm")
 }

 if cfg.AutoExtendPercent == 0 {
...
@@ -95,7 +95,7 @@ func GetFSMagic(rootpath string) (FsMagic, error) {
 return FsMagic(buf.Type), nil
 }

-// NewFsChecker returns a checker configured for the provied FsMagic
+// NewFsChecker returns a checker configured for the provided FsMagic
 func NewFsChecker(t FsMagic) Checker {
 return &fsChecker{
 t: t,
...
@@ -54,7 +54,7 @@ func (c *fsChecker) IsMounted(path string) bool {
 return m
 }

-// NewFsChecker returns a checker configured for the provied FsMagic
+// NewFsChecker returns a checker configured for the provided FsMagic
 func NewFsChecker(t FsMagic) Checker {
 return &fsChecker{
 t: t,
...
@@ -328,7 +328,7 @@ func makeBackingFsDev(home string) (string, error) {
 }

 backingFsBlockDev := path.Join(home, "backingFsBlockDev")
- // Re-create just in case comeone copied the home directory over to a new device
+ // Re-create just in case someone copied the home directory over to a new device
 syscall.Unlink(backingFsBlockDev)
 stat := fileinfo.Sys().(*syscall.Stat_t)
 if err := syscall.Mknod(backingFsBlockDev, syscall.S_IFBLK|0600, int(stat.Dev)); err != nil {
...
@@ -300,7 +300,7 @@ func (d *Driver) Remove(id string) error {
 //
 // TODO @jhowardmsft - For RS3, we can remove the retries. Also consider
 // using platform APIs (if available) to get this more succinctly. Also
- // consider enlighting the Remove() interface to have context of why
+ // consider enhancing the Remove() interface to have context of why
 // the remove is being called - that could improve efficiency by not
 // enumerating compute systems during a remove of a container as it's
 // not required.
...
@@ -363,7 +363,7 @@ var newTicker = func(freq time.Duration) *time.Ticker {
 // awslogs-datetime-format options have been configured, multiline processing
 // is enabled, where log messages are stored in an event buffer until a multiline
 // pattern match is found, at which point the messages in the event buffer are
-// pushed to CloudWatch logs as a single log event. Multline messages are processed
+// pushed to CloudWatch logs as a single log event. Multiline messages are processed
 // according to the maximumBytesPerPut constraint, and the implementation only
 // allows for messages to be buffered for a maximum of 2*batchPublishFrequency
 // seconds. When events are ready to be processed for submission to CloudWatch
...
@@ -121,7 +121,7 @@ func (r *RingLogger) run() {

 type messageRing struct {
 mu sync.Mutex
- // singals callers of `Dequeue` to wake up either on `Close` or when a new `Message` is added
+ // signals callers of `Dequeue` to wake up either on `Close` or when a new `Message` is added
 wait *sync.Cond

 sizeBytes int64 // current buffer size
...
@@ -55,7 +55,7 @@ func (daemon *Daemon) createSpec(c *container.Container) (*specs.Spec, error) {
 }

 // If the container has not been started, and has configs or secrets
- // secrets, create symlinks to each confing and secret. If it has been
+ // secrets, create symlinks to each config and secret. If it has been
 // started before, the symlinks should have already been created. Also, it
 // is important to not mount a Hyper-V container that has been started
 // before, to protect the host from the container; for example, from
...
@@ -39,7 +39,7 @@ func (daemon *Daemon) Reload(conf *config.Config) (err error) {

 daemon.reloadPlatform(conf, attributes)
 daemon.reloadDebug(conf, attributes)
- daemon.reloadMaxConcurrentDowloadsAndUploads(conf, attributes)
+ daemon.reloadMaxConcurrentDownloadsAndUploads(conf, attributes)
 daemon.reloadShutdownTimeout(conf, attributes)

 if err := daemon.reloadClusterDiscovery(conf, attributes); err != nil {
...
@@ -74,9 +74,9 @@ func (daemon *Daemon) reloadDebug(conf *config.Config, attributes map[string]str
 attributes["debug"] = fmt.Sprintf("%t", daemon.configStore.Debug)
 }

-// reloadMaxConcurrentDowloadsAndUploads updates configuration with max concurrent
+// reloadMaxConcurrentDownloadsAndUploads updates configuration with max concurrent
 // download and upload options and updates the passed attributes
-func (daemon *Daemon) reloadMaxConcurrentDowloadsAndUploads(conf *config.Config, attributes map[string]string) {
+func (daemon *Daemon) reloadMaxConcurrentDownloadsAndUploads(conf *config.Config, attributes map[string]string) {
 // If no value is set for max-concurrent-downloads we assume it is the default value
 // We always "reset" as the cost is lightweight and easy to maintain.
 if conf.IsValueSet("max-concurrent-downloads") && conf.MaxConcurrentDownloads != nil {
...
@@ -206,7 +206,7 @@ func TestBackportMountSpec(t *testing.T) {
 BindOptions: &mounttypes.BindOptions{Propagation: "shared"},
 },
 },
- comment: "bind mount with read/write + shared propgation",
+ comment: "bind mount with read/write + shared propagation",
 },
 {
 mp: &volume.MountPoint{
...
@@ -203,7 +203,7 @@ func (serv *v2MetadataService) TagAndAdd(diffID layer.DiffID, hmacKey []byte, me
 return serv.Add(diffID, meta)
 }

-// Remove unassociates a metadata entry from a layer DiffID.
+// Remove disassociates a metadata entry from a layer DiffID.
 func (serv *v2MetadataService) Remove(metadata V2Metadata) error {
 if serv.store == nil {
 // Support a service which has no backend storage, in this case
...
@@ -185,7 +185,7 @@ func TestLayerAlreadyExists(t *testing.T) {
 expectedRequests: []string{"apple"},
 },
 {
- name: "not matching reposies",
+ name: "not matching repositories",
 targetRepo: "busybox",
 maxExistenceChecks: 3,
 metadata: []metadata.V2Metadata{
...
@@ -52,8 +52,8 @@ func escapeStr(s string, charsToEscape string) string {
 var ret string
 for _, currRune := range s {
 appended := false
- for _, escapeableRune := range charsToEscape {
- if currRune == escapeableRune {
+ for _, escapableRune := range charsToEscape {
+ if currRune == escapableRune {
 ret += `\` + string(currRune)
 appended = true
 break
...
@@ -826,7 +826,7 @@ Get `stdout` and `stderr` logs from the container ``id``

 **Query parameters**:

-- **details** - 1/True/true or 0/False/flase, Show extra details provided to logs. Default `false`.
+- **details** - 1/True/true or 0/False/false, Show extra details provided to logs. Default `false`.
 - **follow** – 1/True/true or 0/False/false, return stream. Default `false`.
 - **stdout** – 1/True/true or 0/False/false, show `stdout` log. Default `false`.
 - **stderr** – 1/True/true or 0/False/false, show `stderr` log. Default `false`.
...
@@ -13,7 +13,7 @@ SCRIPT_VER="Wed Apr 20 18:30:19 UTC 2016"
 # - Error if running 32-bit posix tools. Probably can take from bash --version and check contains "x86_64"
 # - Warn if the CI directory cannot be deleted afterwards. Otherwise turdlets are left behind
 # - Use %systemdrive% ($SYSTEMDRIVE) rather than hard code to c: for TEMP
-# - Consider cross builing the Windows binary and copy across. That's a bit of a heavy lift. Only reason
+# - Consider cross building the Windows binary and copy across. That's a bit of a heavy lift. Only reason
 # for doing that is that it mirrors the actual release process for docker.exe which is cross-built.
 # However, should absolutely not be a problem if built natively, so nit-picking.
 # - Tidy up of images and containers. Either here, or in the teardown script.
...
@@ -116,7 +116,7 @@ fi
 # Get the commit has and verify we have something
 if [ $ec -eq 0 ]; then
 export COMMITHASH=$(git rev-parse --short HEAD)
- echo INFO: Commmit hash is $COMMITHASH
+ echo INFO: Commit hash is $COMMITHASH
 if [ -z $COMMITHASH ]; then
 echo "ERROR: Failed to get commit hash. Are you sure this is a docker repository?"
 ec=1
...
@@ -24,7 +24,7 @@ func enumerateTestsForBytes(b []byte) ([]string, error) {
 return tests, nil
 }

-// enumareteTests enumerates valid `-check.f` strings for all the test functions.
+// enumerateTests enumerates valid `-check.f` strings for all the test functions.
 // Note that we use regexp rather than parsing Go files for performance reason.
 // (Try `TESTFLAGS=-check.list make test-integration-cli` to see the slowness of parsing)
 // The files needs to be `gofmt`-ed
...
@@ -36,10 +36,10 @@ func xmain() (int, error) {
 // Should we use cobra maybe?
 replicas := flag.Int("replicas", 1, "Number of worker service replica")
 chunks := flag.Int("chunks", 0, "Number of test chunks executed in batch (0 == replicas)")
- pushWorkerImage := flag.String("push-worker-image", "", "Push the worker image to the registry. Required for distribuetd execution. (empty == not to push)")
+ pushWorkerImage := flag.String("push-worker-image", "", "Push the worker image to the registry. Required for distributed execution. (empty == not to push)")
 shuffle := flag.Bool("shuffle", false, "Shuffle the input so as to mitigate makespan nonuniformity")
 // flags below are rarely used
- randSeed := flag.Int64("rand-seed", int64(0), "Random seed used for shuffling (0 == curent time)")
+ randSeed := flag.Int64("rand-seed", int64(0), "Random seed used for shuffling (0 == current time)")
 filtersFile := flag.String("filters-file", "", "Path to optional file composed of `-check.f` filter strings")
 dryRun := flag.Bool("dry-run", false, "Dry run")
 keepExecutor := flag.Bool("keep-executor", false, "Do not auto-remove executor containers, which is used for running privileged programs on Swarm")
...
@@ -175,7 +175,7 @@ Function Execute-Build($type, $additionalBuildTags, $directory) {
 if ($Race) { Write-Warning "Using race detector"; $raceParm=" -race"}
 if ($ForceBuildAll) { $allParm=" -a" }
 if ($NoOpt) { $optParm=" -gcflags "+""""+"-N -l"+"""" }
- if ($addtionalBuildTags -ne "") { $buildTags += $(" " + $additionalBuildTags) }
+ if ($additionalBuildTags -ne "") { $buildTags += $(" " + $additionalBuildTags) }

 # Do the go build in the appropriate directory
 # Note -linkmode=internal is required to be able to debug on Windows.
...
@@ -40,7 +40,7 @@ create_index() {
 # change IFS locally within subshell so the for loop saves line correctly to L var
 IFS=$'\n';

- # pretty sweet, will mimick the normal apache output. skipping "index" and hidden files
+ # pretty sweet, will mimic the normal apache output. skipping "index" and hidden files
 for L in $(find -L . -mount -depth -maxdepth 1 -type f ! -name 'index' ! -name '.*' -prune -printf "<a href=\"%f\">%f|@_@%Td-%Tb-%TY %Tk:%TM @%f@\n"|sort|column -t -s '|' | sed 's,\([\ ]\+\)@_@,</a>\1,g');
 do
 # file
...
@@ -985,7 +985,7 @@ func (s *DockerSwarmSuite) TestSwarmRepeatedRootRotation(c *check.C) {
 if cert != nil {
 c.Assert(clusterTLSInfo.TrustRoot, checker.Equals, expectedCert)
 }
- // could take another second or two for the nodes to trust the new roots after the've all gotten
+ // could take another second or two for the nodes to trust the new roots after they've all gotten
 // new TLS certificates
 for j := 0; j < 18; j++ {
 mInfo := m.GetNode(c, m.NodeID).Description.TLSInfo
...
@@ -1712,7 +1712,7 @@ func (s *DockerSuite) TestBuildEntrypoint(c *check.C) {
 }

 // #6445 ensure ONBUILD triggers aren't committed to grandchildren
-func (s *DockerSuite) TestBuildOnBuildLimitedInheritence(c *check.C) {
+func (s *DockerSuite) TestBuildOnBuildLimitedInheritance(c *check.C) {
 buildImageSuccessfully(c, "testonbuildtrigger1", build.WithDockerfile(`
 FROM busybox
 RUN echo "GRANDPARENT"
...
@@ -3063,7 +3063,7 @@ func (s *DockerSuite) TestBuildFromGitWithContext(c *check.C) {
 }
 }

-func (s *DockerSuite) TestBuildFromGitwithF(c *check.C) {
+func (s *DockerSuite) TestBuildFromGitWithF(c *check.C) {
 name := "testbuildfromgitwithf"
 git := fakegit.New(c, "repo", map[string]string{
 "myApp/myDockerfile": `FROM busybox
...
@@ -3225,7 +3225,7 @@ func (s *DockerSuite) TestBuildCmdJSONNoShDashC(c *check.C) {
 }
 }

-func (s *DockerSuite) TestBuildEntrypointCanBeOverridenByChild(c *check.C) {
+func (s *DockerSuite) TestBuildEntrypointCanBeOverriddenByChild(c *check.C) {
 buildImageSuccessfully(c, "parent", build.WithDockerfile(`
 FROM busybox
 ENTRYPOINT exit 130
...
@@ -3245,7 +3245,7 @@ func (s *DockerSuite) TestBuildEntrypointCanBeOverridenByChild(c *check.C) {
 })
 }

-func (s *DockerSuite) TestBuildEntrypointCanBeOverridenByChildInspect(c *check.C) {
+func (s *DockerSuite) TestBuildEntrypointCanBeOverriddenByChildInspect(c *check.C) {
 var (
 name = "testbuildepinherit"
 name2 = "testbuildepinherit2"
...
@@ -4472,26 +4472,26 @@ func (s *DockerSuite) TestBuildBuildTimeArgOverrideArgDefinedBeforeEnv(c *check.
 imgName := "bldargtest"
 envKey := "foo"
 envVal := "bar"
- envValOveride := "barOverride"
+ envValOverride := "barOverride"
 dockerfile := fmt.Sprintf(`FROM busybox
 ARG %s
 ENV %s %s
 RUN echo $%s
 CMD echo $%s
- `, envKey, envKey, envValOveride, envKey, envKey)
+ `, envKey, envKey, envValOverride, envKey, envKey)

 result := buildImage(imgName,
 cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)),
 build.WithDockerfile(dockerfile),
 )
 result.Assert(c, icmd.Success)
- if strings.Count(result.Combined(), envValOveride) != 2 {
- c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride)
+ if strings.Count(result.Combined(), envValOverride) != 2 {
+ c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride)
 }

 containerName := "bldargCont"
- if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) {
- c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride)
+ if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) {
+ c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride)
 }
 }

... | ... |
@@ -4501,25 +4501,25 @@ func (s *DockerSuite) TestBuildBuildTimeArgOverrideEnvDefinedBeforeArg(c *check. |
4501 | 4501 |
imgName := "bldargtest" |
4502 | 4502 |
envKey := "foo" |
4503 | 4503 |
envVal := "bar" |
4504 |
- envValOveride := "barOverride" |
|
4504 |
+ envValOverride := "barOverride" |
|
4505 | 4505 |
dockerfile := fmt.Sprintf(`FROM busybox |
4506 | 4506 |
ENV %s %s |
4507 | 4507 |
ARG %s |
4508 | 4508 |
RUN echo $%s |
4509 | 4509 |
CMD echo $%s |
4510 |
- `, envKey, envValOveride, envKey, envKey, envKey) |
|
4510 |
+ `, envKey, envValOverride, envKey, envKey, envKey) |
|
4511 | 4511 |
result := buildImage(imgName, |
4512 | 4512 |
cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)), |
4513 | 4513 |
build.WithDockerfile(dockerfile), |
4514 | 4514 |
) |
4515 | 4515 |
result.Assert(c, icmd.Success) |
4516 |
- if strings.Count(result.Combined(), envValOveride) != 2 { |
|
4517 |
- c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride) |
|
4516 |
+ if strings.Count(result.Combined(), envValOverride) != 2 { |
|
4517 |
+ c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride) |
|
4518 | 4518 |
} |
4519 | 4519 |
|
4520 | 4520 |
containerName := "bldargCont" |
4521 |
- if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) { |
|
4522 |
- c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride) |
|
4521 |
+ if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) { |
|
4522 |
+ c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride) |
|
4523 | 4523 |
} |
4524 | 4524 |
} |
4525 | 4525 |
|
... | ... |
@@ -4616,25 +4616,25 @@ func (s *DockerSuite) TestBuildBuildTimeArgExpansionOverride(c *check.C) { |
4616 | 4616 |
envKey := "foo" |
4617 | 4617 |
envVal := "bar" |
4618 | 4618 |
envKey1 := "foo1" |
4619 |
- envValOveride := "barOverride" |
|
4619 |
+ envValOverride := "barOverride" |
|
4620 | 4620 |
dockerfile := fmt.Sprintf(`FROM busybox |
4621 | 4621 |
ARG %s |
4622 | 4622 |
ENV %s %s |
4623 | 4623 |
ENV %s ${%s} |
4624 | 4624 |
RUN echo $%s |
4625 |
- CMD echo $%s`, envKey, envKey, envValOveride, envKey1, envKey, envKey1, envKey1) |
|
4625 |
+ CMD echo $%s`, envKey, envKey, envValOverride, envKey1, envKey, envKey1, envKey1) |
|
4626 | 4626 |
result := buildImage(imgName, |
4627 | 4627 |
cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)), |
4628 | 4628 |
build.WithDockerfile(dockerfile), |
4629 | 4629 |
) |
4630 | 4630 |
result.Assert(c, icmd.Success) |
4631 |
- if strings.Count(result.Combined(), envValOveride) != 2 { |
|
4632 |
- c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride) |
|
4631 |
+ if strings.Count(result.Combined(), envValOverride) != 2 { |
|
4632 |
+ c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride) |
|
4633 | 4633 |
} |
4634 | 4634 |
|
4635 | 4635 |
containerName := "bldargCont" |
4636 |
- if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) { |
|
4637 |
- c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride) |
|
4636 |
+ if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) { |
|
4637 |
+ c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride) |
|
4638 | 4638 |
} |
4639 | 4639 |
} |
4640 | 4640 |
|
... | ... |
@@ -4690,24 +4690,24 @@ func (s *DockerSuite) TestBuildBuildTimeArgDefaultOverride(c *check.C) { |
4690 | 4690 |
imgName := "bldargtest" |
4691 | 4691 |
envKey := "foo" |
4692 | 4692 |
envVal := "bar" |
4693 |
- envValOveride := "barOverride" |
|
4693 |
+ envValOverride := "barOverride" |
|
4694 | 4694 |
dockerfile := fmt.Sprintf(`FROM busybox |
4695 | 4695 |
ARG %s=%s |
4696 | 4696 |
ENV %s $%s |
4697 | 4697 |
RUN echo $%s |
4698 | 4698 |
CMD echo $%s`, envKey, envVal, envKey, envKey, envKey, envKey) |
4699 | 4699 |
result := buildImage(imgName, |
4700 |
- cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envValOveride)), |
|
4700 |
+ cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envValOverride)), |
|
4701 | 4701 |
build.WithDockerfile(dockerfile), |
4702 | 4702 |
) |
4703 | 4703 |
result.Assert(c, icmd.Success) |
4704 |
- if strings.Count(result.Combined(), envValOveride) != 1 { |
|
4705 |
- c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride) |
|
4704 |
+ if strings.Count(result.Combined(), envValOverride) != 1 { |
|
4705 |
+ c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride) |
|
4706 | 4706 |
} |
4707 | 4707 |
|
4708 | 4708 |
containerName := "bldargCont" |
4709 |
- if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) { |
|
4710 |
- c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride) |
|
4709 |
+ if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) { |
|
4710 |
+ c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride) |
|
4711 | 4711 |
} |
4712 | 4712 |
} |
4713 | 4713 |
|
... | ... |
@@ -4824,7 +4824,7 @@ func (s *DockerSuite) TestBuildBuildTimeArgEmptyValVariants(c *check.C) { |
4824 | 4824 |
buildImageSuccessfully(c, imgName, build.WithDockerfile(dockerfile)) |
4825 | 4825 |
} |
4826 | 4826 |
|
4827 |
-func (s *DockerSuite) TestBuildBuildTimeArgDefintionWithNoEnvInjection(c *check.C) { |
|
4827 |
+func (s *DockerSuite) TestBuildBuildTimeArgDefinitionWithNoEnvInjection(c *check.C) { |
|
4828 | 4828 |
imgName := "bldargtest" |
4829 | 4829 |
envKey := "foo" |
4830 | 4830 |
dockerfile := fmt.Sprintf(`FROM busybox |
... | ... |
@@ -5785,7 +5785,7 @@ func (s *DockerSuite) TestBuildWithExtraHostInvalidFormat(c *check.C) { |
5785 | 5785 |
buildFlag string |
5786 | 5786 |
}{ |
5787 | 5787 |
{"extra_host_missing_ip", dockerfile, "--add-host=foo"}, |
5788 |
- {"extra_host_missing_ip_with_delimeter", dockerfile, "--add-host=foo:"}, |
|
5788 |
+ {"extra_host_missing_ip_with_delimiter", dockerfile, "--add-host=foo:"}, |
|
5789 | 5789 |
{"extra_host_missing_hostname", dockerfile, "--add-host=:127.0.0.1"}, |
5790 | 5790 |
{"extra_host_invalid_ipv4", dockerfile, "--add-host=foo:101.10.2"}, |
5791 | 5791 |
{"extra_host_invalid_ipv6", dockerfile, "--add-host=foo:2001::1::3F"}, |
... | ... |
@@ -54,9 +54,9 @@ func (s *DockerSuite) TestCommitPausedContainer(c *check.C) { |
54 | 54 |
} |
55 | 55 |
|
56 | 56 |
func (s *DockerSuite) TestCommitNewFile(c *check.C) { |
57 |
- dockerCmd(c, "run", "--name", "commiter", "busybox", "/bin/sh", "-c", "echo koye > /foo") |
|
57 |
+ dockerCmd(c, "run", "--name", "committer", "busybox", "/bin/sh", "-c", "echo koye > /foo") |
|
58 | 58 |
|
59 |
- imageID, _ := dockerCmd(c, "commit", "commiter") |
|
59 |
+ imageID, _ := dockerCmd(c, "commit", "committer") |
|
60 | 60 |
imageID = strings.TrimSpace(imageID) |
61 | 61 |
|
62 | 62 |
out, _ := dockerCmd(c, "run", imageID, "cat", "/foo") |
... | ... |
@@ -965,7 +965,7 @@ func (s *DockerDaemonSuite) TestDaemonUlimitDefaults(c *check.C) { |
965 | 965 |
c.Fatalf("expected `ulimit -n` to be `42`, got: %s", nofile) |
966 | 966 |
} |
967 | 967 |
if nproc != "2048" { |
968 |
- c.Fatalf("exepcted `ulimit -p` to be 2048, got: %s", nproc) |
|
968 |
+ c.Fatalf("expected `ulimit -p` to be 2048, got: %s", nproc) |
|
969 | 969 |
} |
970 | 970 |
|
971 | 971 |
// Now restart daemon with a new default |
... | ... |
@@ -987,7 +987,7 @@ func (s *DockerDaemonSuite) TestDaemonUlimitDefaults(c *check.C) { |
987 | 987 |
c.Fatalf("expected `ulimit -n` to be `43`, got: %s", nofile) |
988 | 988 |
} |
989 | 989 |
if nproc != "2048" { |
990 |
- c.Fatalf("exepcted `ulimit -p` to be 2048, got: %s", nproc) |
|
990 |
+ c.Fatalf("expected `ulimit -p` to be 2048, got: %s", nproc) |
|
991 | 991 |
} |
992 | 992 |
} |
993 | 993 |
|
... | ... |
@@ -1408,7 +1408,7 @@ func (s *DockerDaemonSuite) TestDaemonRestartWithSocketAsVolume(c *check.C) { |
1408 | 1408 |
} |
1409 | 1409 |
|
1410 | 1410 |
// os.Kill should kill daemon ungracefully, leaving behind container mounts. |
1411 |
-// A subsequent daemon restart shoud clean up said mounts. |
|
1411 |
+// A subsequent daemon restart should clean up said mounts. |
|
1412 | 1412 |
func (s *DockerDaemonSuite) TestCleanupMountsAfterDaemonAndContainerKill(c *check.C) { |
1413 | 1413 |
d := daemon.New(c, dockerBinary, dockerdBinary, daemon.Config{ |
1414 | 1414 |
Experimental: testEnv.ExperimentalDaemon(), |
... | ... |
@@ -111,7 +111,7 @@ func (s *DockerExternalGraphdriverSuite) setUpPlugin(c *check.C, name string, ex |
111 | 111 |
} |
112 | 112 |
|
113 | 113 |
respond := func(w http.ResponseWriter, data interface{}) { |
114 |
- w.Header().Set("Content-Type", "appplication/vnd.docker.plugins.v1+json") |
|
114 |
+ w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json") |
|
115 | 115 |
switch t := data.(type) { |
116 | 116 |
case error: |
117 | 117 |
fmt.Fprintln(w, fmt.Sprintf(`{"Err": %q}`, t.Error())) |
... | ... |
@@ -16,7 +16,7 @@ func (s *DockerSuite) TestLoginWithoutTTY(c *check.C) { |
16 | 16 |
|
17 | 17 |
// run the command and block until it's done |
18 | 18 |
err := cmd.Run() |
19 |
- c.Assert(err, checker.NotNil) //"Expected non nil err when loginning in & TTY not available" |
|
19 |
+ c.Assert(err, checker.NotNil) //"Expected non nil err when logging in & TTY not available" |
|
20 | 20 |
} |
21 | 21 |
|
22 | 22 |
func (s *DockerRegistryAuthHtpasswdSuite) TestLoginToPrivateRegistry(c *check.C) { |
... | ... |
@@ -1151,7 +1151,7 @@ func (s *DockerNetworkSuite) TestDockerNetworkHostModeUngracefulDaemonRestart(c |
1151 | 1151 |
out, err := s.d.Cmd("run", "-d", "--name", cName, "--net=host", "--restart=always", "busybox", "top") |
1152 | 1152 |
c.Assert(err, checker.IsNil, check.Commentf(out)) |
1153 | 1153 |
|
1154 |
- // verfiy container has finished starting before killing daemon |
|
1154 |
+ // verify container has finished starting before killing daemon |
|
1155 | 1155 |
err = s.d.WaitRun(cName) |
1156 | 1156 |
c.Assert(err, checker.IsNil) |
1157 | 1157 |
} |
... | ... |
@@ -475,6 +475,6 @@ func (s *DockerSuite) TestPluginMetricsCollector(c *check.C) { |
475 | 475 |
|
476 | 476 |
b, err := ioutil.ReadAll(resp.Body) |
477 | 477 |
c.Assert(err, checker.IsNil) |
478 |
- // check that a known metric is there... don't epect this metric to change over time.. probably safe |
|
478 |
+ // check that a known metric is there... don't expect this metric to change over time.. probably safe |
|
479 | 479 |
c.Assert(string(b), checker.Contains, "container_actions") |
480 | 480 |
} |
... | ... |
@@ -746,7 +746,7 @@ func (s *DockerSuite) TestPsShowMounts(c *check.C) { |
746 | 746 |
fields = strings.Fields(lines[1]) |
747 | 747 |
c.Assert(fields, checker.HasLen, 2) |
748 | 748 |
|
749 |
- annonymounsVolumeID := fields[1] |
|
749 |
+ anonymousVolumeID := fields[1] |
|
750 | 750 |
|
751 | 751 |
fields = strings.Fields(lines[2]) |
752 | 752 |
c.Assert(fields[1], checker.Equals, "ps-volume-test") |
... | ... |
@@ -771,7 +771,7 @@ func (s *DockerSuite) TestPsShowMounts(c *check.C) { |
771 | 771 |
c.Assert(lines, checker.HasLen, 2) |
772 | 772 |
|
773 | 773 |
fields = strings.Fields(lines[0]) |
774 |
- c.Assert(fields[1], checker.Equals, annonymounsVolumeID) |
|
774 |
+ c.Assert(fields[1], checker.Equals, anonymousVolumeID) |
|
775 | 775 |
fields = strings.Fields(lines[1]) |
776 | 776 |
c.Assert(fields[1], checker.Equals, "ps-volume-test") |
777 | 777 |
|
... | ... |
@@ -212,7 +212,7 @@ func (s *DockerSwarmSuite) TestServiceLogsTaskLogs(c *check.C) { |
212 | 212 |
fmt.Sprintf("--replicas=%v", replicas), |
213 | 213 |
// which has the task id as an environment variable templated in
214 | 214 |
"--env", "TASK={{.Task.ID}}", |
215 |
- // and runs this command to print exaclty 6 logs lines |
|
215 |
+ // and runs this command to print exactly 6 log lines
|
216 | 216 |
"busybox", "sh", "-c", "for line in $(seq 0 5); do echo $TASK log test $line; done; sleep 100000", |
217 | 217 |
)) |
218 | 218 |
result.Assert(c, icmd.Expected{}) |
... | ... |
@@ -1887,7 +1887,7 @@ func (s *DockerSwarmSuite) TestNetworkInspectWithDuplicateNames(c *check.C) { |
1887 | 1887 |
out, err = d.Cmd("network", "rm", n2.ID) |
1888 | 1888 |
c.Assert(err, checker.IsNil, check.Commentf(out)) |
1889 | 1889 |
|
1890 |
- // Dupliates with name but with different driver |
|
1890 |
+ // Duplicates with name but with different driver |
|
1891 | 1891 |
networkCreateRequest.NetworkCreate.Driver = "overlay" |
1892 | 1892 |
|
1893 | 1893 |
status, body, err = d.SockRequest("POST", "/networks/create", networkCreateRequest) |
... | ... |
@@ -34,7 +34,7 @@ func (s *DockerSuite) TestVolumeCLICreate(c *check.C) { |
34 | 34 |
|
35 | 35 |
func (s *DockerSuite) TestVolumeCLIInspect(c *check.C) { |
36 | 36 |
c.Assert( |
37 |
- exec.Command(dockerBinary, "volume", "inspect", "doesntexist").Run(), |
|
37 |
+ exec.Command(dockerBinary, "volume", "inspect", "doesnotexist").Run(), |
|
38 | 38 |
check.Not(check.IsNil), |
39 | 39 |
check.Commentf("volume inspect should error on non-existent volume"), |
40 | 40 |
) |
... | ... |
@@ -54,10 +54,10 @@ func (s *DockerSuite) TestVolumeCLIInspectMulti(c *check.C) { |
54 | 54 |
dockerCmd(c, "volume", "create", "test2") |
55 | 55 |
dockerCmd(c, "volume", "create", "test3") |
56 | 56 |
|
57 |
- result := dockerCmdWithResult("volume", "inspect", "--format={{ .Name }}", "test1", "test2", "doesntexist", "test3") |
|
57 |
+ result := dockerCmdWithResult("volume", "inspect", "--format={{ .Name }}", "test1", "test2", "doesnotexist", "test3") |
|
58 | 58 |
c.Assert(result, icmd.Matches, icmd.Expected{ |
59 | 59 |
ExitCode: 1, |
60 |
- Err: "No such volume: doesntexist", |
|
60 |
+ Err: "No such volume: doesnotexist", |
|
61 | 61 |
}) |
62 | 62 |
|
63 | 63 |
out := result.Stdout() |
... | ... |
@@ -185,7 +185,7 @@ func (s *DockerSuite) TestVolumeCLILsFilterDangling(c *check.C) { |
185 | 185 |
|
186 | 186 |
out, _ = dockerCmd(c, "volume", "ls", "--filter", "name=testisin") |
187 | 187 |
c.Assert(out, check.Not(checker.Contains), "testnotinuse1\n", check.Commentf("expected volume 'testnotinuse1' in output")) |
188 |
- c.Assert(out, checker.Contains, "testisinuse1\n", check.Commentf("execpeted volume 'testisinuse1' in output")) |
|
188 |
+ c.Assert(out, checker.Contains, "testisinuse1\n", check.Commentf("expected volume 'testisinuse1' in output")) |
|
189 | 189 |
c.Assert(out, checker.Contains, "testisinuse2\n", check.Commentf("expected volume 'testisinuse2' in output")) |
190 | 190 |
} |
191 | 191 |
|
... | ... |
@@ -234,7 +234,7 @@ func (s *DockerSuite) TestVolumeCLIRm(c *check.C) { |
234 | 234 |
|
235 | 235 |
dockerCmd(c, "volume", "rm", volumeID) |
236 | 236 |
c.Assert( |
237 |
- exec.Command("volume", "rm", "doesntexist").Run(), |
|
237 |
+ exec.Command("volume", "rm", "doesnotexist").Run(), |
|
238 | 238 |
check.Not(check.IsNil), |
239 | 239 |
check.Commentf("volume rm should fail with non-existent volume"), |
240 | 240 |
) |
... | ... |
@@ -155,7 +155,7 @@ func (s *DockerNetworkSuite) TestDockerNetworkMacvlanMultiSubnet(c *check.C) { |
155 | 155 |
_, _, err := dockerCmdWithError("exec", "second", "ping", "-c", "1", strings.TrimSpace(ip)) |
156 | 156 |
c.Assert(err, check.IsNil) |
157 | 157 |
// verify ipv6 connectivity to the explicit --ipv6 address second to first |
158 |
- c.Skip("Temporarily skipping while invesitigating sporadic v6 CI issues") |
|
158 |
+ c.Skip("Temporarily skipping while investigating sporadic v6 CI issues") |
|
159 | 159 |
_, _, err = dockerCmdWithError("exec", "second", "ping6", "-c", "1", strings.TrimSpace(ip6)) |
160 | 160 |
c.Assert(err, check.IsNil) |
161 | 161 |
|
... | ... |
@@ -22,7 +22,7 @@ func NewIPOpt(ref *net.IP, defaultVal string) *IPOpt { |
22 | 22 |
} |
23 | 23 |
|
24 | 24 |
// Set sets an IPv4 or IPv6 address from a given string. If the given |
25 |
-// string is not parseable as an IP address it returns an error. |
|
25 |
+// string is not parsable as an IP address it returns an error. |
|
26 | 26 |
func (o *IPOpt) Set(val string) error { |
27 | 27 |
ip := net.ParseIP(val) |
28 | 28 |
if ip == nil { |
... | ... |
@@ -157,7 +157,7 @@ func TestValidateDNSSearch(t *testing.T) { |
157 | 157 |
`foo.bar-.baz`, |
158 | 158 |
`foo.-bar`, |
159 | 159 |
`foo.-bar.baz`, |
160 |
- `foo.bar.baz.this.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbe`, |
|
160 |
+ `foo.bar.baz.this.should.fail.on.long.name.because.it.is.longer.thanitshouldbethis.should.fail.on.long.name.because.it.is.longer.thanitshouldbethis.should.fail.on.long.name.because.it.is.longer.thanitshouldbethis.should.fail.on.long.name.because.it.is.longer.thanitshouldbe`, |
|
161 | 161 |
} |
162 | 162 |
|
163 | 163 |
for _, domain := range valid { |
... | ... |
@@ -180,7 +180,7 @@ func DecompressStream(archive io.Reader) (io.ReadCloser, error) { |
180 | 180 |
} |
181 | 181 |
} |
182 | 182 |
|
183 |
-// CompressStream compresseses the dest with specified compression algorithm. |
|
183 |
+// CompressStream compresses the dest with specified compression algorithm. |
|
184 | 184 |
func CompressStream(dest io.Writer, compression Compression) (io.WriteCloser, error) { |
185 | 185 |
p := pools.BufioWriter32KPool |
186 | 186 |
buf := p.Get(dest) |
... | ... |
@@ -102,10 +102,10 @@ func createSampleDir(t *testing.T, root string) { |
102 | 102 |
} |
103 | 103 |
|
104 | 104 |
func TestChangeString(t *testing.T) { |
105 |
- modifiyChange := Change{"change", ChangeModify} |
|
106 |
- toString := modifiyChange.String() |
|
105 |
+ modifyChange := Change{"change", ChangeModify} |
|
106 |
+ toString := modifyChange.String() |
|
107 | 107 |
if toString != "C change" { |
108 |
- t.Fatalf("String() of a change with ChangeModifiy Kind should have been %s but was %s", "C change", toString) |
|
108 |
+ t.Fatalf("String() of a change with ChangeModify Kind should have been %s but was %s", "C change", toString) |
|
109 | 109 |
} |
110 | 110 |
addChange := Change{"change", ChangeAdd} |
111 | 111 |
toString = addChange.String() |
... | ... |
@@ -99,7 +99,7 @@ func TestAuthZResponsePlugin(t *testing.T) { |
99 | 99 |
|
100 | 100 |
request := Request{ |
101 | 101 |
User: "user", |
102 |
- RequestURI: "someting.com/auth", |
|
102 |
+ RequestURI: "something.com/auth", |
|
103 | 103 |
RequestBody: []byte("sample body"), |
104 | 104 |
} |
105 | 105 |
server.replayResponse = Response{ |
... | ... |
@@ -373,7 +373,7 @@ func RemoveDeviceDeferred(name string) error { |
373 | 373 |
// semaphores created in `task.setCookie` will be cleaned up in `UdevWait`. |
374 | 374 |
// So these two function call must come in pairs, otherwise semaphores will |
375 | 375 |
// be leaked, and the limit of number of semaphores defined in `/proc/sys/kernel/sem` |
376 |
- // will be reached, which will eventually make all follwing calls to 'task.SetCookie' |
|
376 |
+ // will be reached, which will eventually make all following calls to 'task.SetCookie' |
|
377 | 377 |
// fail. |
378 | 378 |
// this call will not wait for the deferred removal's final executing, since no |
379 | 379 |
// udev event will be generated, and the semaphore's value will not be incremented |
... | ... |
@@ -2,7 +2,7 @@ package filenotify |
2 | 2 |
|
3 | 3 |
import "github.com/fsnotify/fsnotify" |
4 | 4 |
|
5 |
-// fsNotifyWatcher wraps the fsnotify package to satisfy the FileNotifer interface |
|
5 |
+// fsNotifyWatcher wraps the fsnotify package to satisfy the FileNotifier interface |
|
6 | 6 |
type fsNotifyWatcher struct { |
7 | 7 |
*fsnotify.Watcher |
8 | 8 |
} |
... | ... |
@@ -136,7 +136,7 @@ func TestParseWithMultipleFuncs(t *testing.T) { |
136 | 136 |
} |
137 | 137 |
} |
138 | 138 |
|
139 |
-func TestParseWithUnamedReturn(t *testing.T) { |
|
139 |
+func TestParseWithUnnamedReturn(t *testing.T) { |
|
140 | 140 |
_, err := Parse(testFixture, "Fooer4") |
141 | 141 |
if !strings.HasSuffix(err.Error(), errBadReturn.Error()) { |
142 | 142 |
t.Fatalf("expected ErrBadReturn, got %v", err) |
... | ... |
@@ -40,7 +40,7 @@ func FollowSymlinkInScope(path, root string) (string, error) { |
40 | 40 |
// |
41 | 41 |
// Example: |
42 | 42 |
// If /foo/bar -> /outside, |
43 |
-// FollowSymlinkInScope("/foo/bar", "/foo") == "/foo/outside" instead of "/oustide" |
|
43 |
+// FollowSymlinkInScope("/foo/bar", "/foo") == "/foo/outside" instead of "/outside" |
|
44 | 44 |
// |
45 | 45 |
// IMPORTANT: it is the caller's responsibility to call evalSymlinksInScope *after* relevant symlinks |
46 | 46 |
// are created and not to create subsequently, additional symlinks that could potentially make a |
... | ... |
@@ -12,7 +12,7 @@ import ( |
12 | 12 |
// This is used, for example, when validating a user provided path in docker cp. |
13 | 13 |
// If a drive letter is supplied, it must be the system drive. The drive letter |
14 | 14 |
// is always removed. Also, it translates it to OS semantics (IOW / to \). We |
15 |
-// need the path in this syntax so that it can ultimately be contatenated with |
|
15 |
+// need the path in this syntax so that it can ultimately be concatenated with |
|
16 | 16 |
// a Windows long-path which doesn't support drive-letters. Examples: |
17 | 17 |
// C: --> Fail |
18 | 18 |
// C:\ --> \ |
... | ... |
@@ -20,7 +20,7 @@ import ( |
20 | 20 |
// These types of errors do not need to be returned since it's ok for the dir to |
21 | 21 |
// be gone we can just retry the remove operation. |
22 | 22 |
// |
23 |
-// This should not return a `os.ErrNotExist` kind of error under any cirucmstances |
|
23 |
+ // This should not return an `os.ErrNotExist` kind of error under any circumstances
|
24 | 24 |
func EnsureRemoveAll(dir string) error { |
25 | 25 |
notExistErr := make(map[string]bool) |
26 | 26 |
|
... | ... |
@@ -30,7 +30,7 @@ var basicFunctions = template.FuncMap{ |
30 | 30 |
// HeaderFunctions are used to created headers of a table. |
31 | 31 |
// This is a replacement of basicFunctions for header generation |
32 | 32 |
// because we want the header to remain intact. |
33 |
-// Some functions like `split` are irrevelant so not added. |
|
33 |
+// Some functions like `split` are irrelevant so not added. |
|
34 | 34 |
var HeaderFunctions = template.FuncMap{ |
35 | 35 |
"json": func(v string) string { |
36 | 36 |
return v |
... | ... |
@@ -53,7 +53,7 @@ type Result struct { |
53 | 53 |
} |
54 | 54 |
|
55 | 55 |
// Assert compares the Result against the Expected struct, and fails the test if |
56 |
-// any of the expcetations are not met. |
|
56 |
+// any of the expectations are not met. |
|
57 | 57 |
func (r *Result) Assert(t testingT, exp Expected) *Result { |
58 | 58 |
err := r.Compare(exp) |
59 | 59 |
if err == nil { |
... | ... |
@@ -271,7 +271,7 @@ func (pm *Manager) save(p *v2.Plugin) error { |
271 | 271 |
return nil |
272 | 272 |
} |
273 | 273 |
|
274 |
-// GC cleans up unrefrenced blobs. This is recommended to run in a goroutine |
|
274 |
+// GC cleans up unreferenced blobs. This is recommended to run in a goroutine |
|
275 | 275 |
func (pm *Manager) GC() { |
276 | 276 |
pm.muGC.Lock() |
277 | 277 |
defer pm.muGC.Unlock() |
... | ... |
@@ -221,7 +221,7 @@ func (store *store) Delete(ref reference.Named) (bool, error) { |
221 | 221 |
func (store *store) Get(ref reference.Named) (digest.Digest, error) { |
222 | 222 |
if canonical, ok := ref.(reference.Canonical); ok { |
223 | 223 |
// If reference contains both tag and digest, only |
224 |
- // lookup by digest as it takes precendent over |
|
224 |
+ // lookup by digest as it takes precedence over |
|
225 | 225 |
// tag, until tag/digest combos are stored. |
226 | 226 |
if _, ok := ref.(reference.Tagged); ok { |
227 | 227 |
var err error |
... | ... |
@@ -252,7 +252,7 @@ skip: |
252 | 252 |
return nil |
253 | 253 |
} |
254 | 254 |
|
255 |
-// allowNondistributableArtifacts returns true if the provided hostname is part of the list of regsitries |
|
255 |
+// allowNondistributableArtifacts returns true if the provided hostname is part of the list of registries |
|
256 | 256 |
// that allow push of nondistributable artifacts. |
257 | 257 |
// |
258 | 258 |
// The list can contain elements with CIDR notation to specify a whole subnet. If the subnet contains an IP |
... | ... |
@@ -175,7 +175,7 @@ func (e *V1Endpoint) Ping() (PingResult, error) { |
175 | 175 |
Standalone: true, |
176 | 176 |
} |
177 | 177 |
if err := json.Unmarshal(jsonString, &info); err != nil { |
178 |
- logrus.Debugf("Error unmarshalling the _ping PingResult: %s", err) |
|
178 |
+ logrus.Debugf("Error unmarshaling the _ping PingResult: %s", err) |
|
179 | 179 |
// don't stop here. Just assume sane defaults |
180 | 180 |
} |
181 | 181 |
if hdr := resp.Header.Get("X-Docker-Registry-Version"); hdr != "" { |
... | ... |
@@ -9,7 +9,7 @@ During this meeting, we are talking about the [tasks](https://github.com/moby/mo |
9 | 9 |
|
10 | 10 |
### The CLI split |
11 | 11 |
|
12 |
-The Docker CLI was succesfully moved to [https://github.com/docker/cli](https://github.com/docker/cli) last week thanks to @tiborvass |
|
12 |
+The Docker CLI was successfully moved to [https://github.com/docker/cli](https://github.com/docker/cli) last week thanks to @tiborvass |
|
13 | 13 |
The Docker CLI is now compiled from the [Dockerfile](https://github.com/moby/moby/blob/a762ceace4e8c1c7ce4fb582789af9d8074be3e1/Dockerfile#L248) |
14 | 14 |
|
15 | 15 |
### Mailing list |
... | ... |
@@ -27,7 +27,7 @@ breaking up / removing existing packages that likely are not good candidates to |
27 | 27 |
|
28 | 28 |
With the removal of the CLI from the moby repository, new pull requests will have to be tested using API tests instead |
29 | 29 |
of using the CLI. Discussion took place whether or not these tests should use the API `client` package, or be completely |
30 |
-independend, and make raw HTTP calls. |
|
30 |
+independent, and make raw HTTP calls. |
|
31 | 31 |
|
32 | 32 |
A topic was created on the forum to discuss options: [evolution of testing](https://forums.mobyproject.org/t/evolution-of-testing-moby/38) |
33 | 33 |
|
... | ... |
@@ -102,7 +102,7 @@ func (a *volumeDriverAdapter) getCapabilities() volume.Capability { |
102 | 102 |
if err != nil { |
103 | 103 |
// `GetCapabilities` is a not a required endpoint. |
104 | 104 |
// On error assume it's a local-only driver |
105 |
- logrus.Warnf("Volume driver %s returned an error while trying to query its capabilities, using default capabilties: %v", a.name, err) |
|
105 |
+ logrus.Warnf("Volume driver %s returned an error while trying to query its capabilities, using default capabilities: %v", a.name, err) |
|
106 | 106 |
return volume.Capability{Scope: volume.LocalScope} |
107 | 107 |
} |
108 | 108 |
|
... | ... |
@@ -25,7 +25,7 @@ func (NoopVolume) Mount(_ string) (string, error) { return "noop", nil } |
25 | 25 |
// Unmount unmounts the volume from the container |
26 | 26 |
func (NoopVolume) Unmount(_ string) error { return nil } |
27 | 27 |
|
28 |
-// Status proivdes low-level details about the volume |
|
28 |
+// Status provides low-level details about the volume |
|
29 | 29 |
func (NoopVolume) Status() map[string]interface{} { return nil } |
30 | 30 |
|
31 | 31 |
// CreatedAt provides the time the volume (directory) was created at |
... | ... |
@@ -57,7 +57,7 @@ func (FakeVolume) Mount(_ string) (string, error) { return "fake", nil } |
57 | 57 |
// Unmount unmounts the volume from the container |
58 | 58 |
func (FakeVolume) Unmount(_ string) error { return nil } |
59 | 59 |
|
60 |
-// Status proivdes low-level details about the volume |
|
60 |
+// Status provides low-level details about the volume |
|
61 | 61 |
func (FakeVolume) Status() map[string]interface{} { return nil } |
62 | 62 |
|
63 | 63 |
// CreatedAt provides the time the volume (directory) was created at |
... | ... |
@@ -125,7 +125,7 @@ type MountPoint struct { |
125 | 125 |
Spec mounttypes.Mount |
126 | 126 |
|
127 | 127 |
// Track usage of this mountpoint |
128 |
- // Specicially needed for containers which are running and calls to `docker cp` |
|
128 |
+ // Specifically needed for containers which are running and calls to `docker cp` |
|
129 | 129 |
// because both these actions require mounting the volumes. |
130 | 130 |
active int |
131 | 131 |
} |
... | ... |
@@ -26,7 +26,7 @@ func ConvertTmpfsOptions(opt *mounttypes.TmpfsOptions, readOnly bool) (string, e |
26 | 26 |
// okay, since API is that way anyways. |
27 | 27 |
|
28 | 28 |
// we do this by finding the suffix that divides evenly into the |
29 |
- // value, returing the value itself, with no suffix, if it fails. |
|
29 |
+ // value, returning the value itself, with no suffix, if it fails. |
|
30 | 30 |
// |
31 | 31 |
// For the most part, we don't enforce any semantic to this values. |
32 | 32 |
// The operating system will usually align this and enforce minimum |