
Updating networking docs with technical information

- the /etc/hosts read caveat due to dynamic update
- information about docker_gwbridge
- Carries and closes #17654
- Updating with last change by Madhu
- Updating with the IPAM API for 1.22

Signed-off-by: Mary Anthony <mary@docker.com>

Madhu Venugopal authored on 2015/11/03 23:15:56
Showing 5 changed files
... ...
@@ -2715,6 +2715,13 @@ Content-Type: application/json
 {
   "Name":"isolated_nw",
-  "Driver":"bridge"
+  "Driver":"bridge",
+  "IPAM":{
+    "Config":[{
+      "Subnet":"172.20.0.0/16",
+      "IPRange":"172.20.10.0/24",
+      "Gateway":"172.20.10.11"
+    }]
+  }
 }
 ```
 
... ...
@@ -2740,6 +2746,7 @@ JSON Parameters:
 
 - **Name** - The new network's name. this is a mandatory field
 - **Driver** - Name of the network driver to use. Defaults to `bridge` driver
+- **IPAM** - Optional custom IP scheme for the network
 - **Options** - Network specific options to be used by the drivers
 - **CheckDuplicate** - Requests daemon to check for networks with same name
 
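For illustration only (not part of the patch): a quick way to exercise the new `IPAM` field against the documented `POST /networks/create` endpoint is with `curl`. This sketch assumes the daemon is reachable over TCP at `localhost:2375`; adjust the address, or point `curl` at your daemon's socket, to match your setup.

```
$ curl -H "Content-Type: application/json" \
    -X POST http://localhost:2375/networks/create \
    -d '{
          "Name": "isolated_nw",
          "Driver": "bridge",
          "IPAM": {
            "Config": [{
              "Subnet": "172.20.0.0/16",
              "IPRange": "172.20.10.0/24",
              "Gateway": "172.20.10.11"
            }]
          }
        }'
```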
... ...
@@ -2710,12 +2710,19 @@ Create a network
 **Example request**:
 
 ```
 POST /networks/create HTTP/1.1
 Content-Type: application/json
 
 {
   "Name":"isolated_nw",
-  "Driver":"bridge"
+  "Driver":"bridge",
+  "IPAM":{
+    "Config":[{
+      "Subnet":"172.20.0.0/16",
+      "IPRange":"172.20.10.0/24",
+      "Gateway":"172.20.10.11"
+    }]
+  }
 }
 ```
 
... ...
@@ -2741,6 +2752,7 @@ JSON Parameters:
 
 - **Name** - The new network's name. this is a mandatory field
 - **Driver** - Name of the network driver to use. Defaults to `bridge` driver
+- **IPAM** - Optional custom IP scheme for the network
 - **Options** - Network specific options to be used by the drivers
 - **CheckDuplicate** - Requests daemon to check for networks with same name
 
... ...
@@ -404,6 +404,19 @@ container itself as well as `localhost` and a few other common things.  The
     ::1	            localhost ip6-localhost ip6-loopback
     86.75.30.9      db-static
 
+If a container is connected to the default bridge network and `linked`
+with other containers, then the container's `/etc/hosts` file is updated
+with the linked container's name.
+
+If the container is connected to a user-defined network, the container's
+`/etc/hosts` file is updated with the names of all other containers in that
+user-defined network.
+
+> **Note**: Because Docker may live-update the container's `/etc/hosts` file, there
+may be situations when processes inside the container end up reading an
+empty or incomplete `/etc/hosts` file. In most cases, retrying the read
+should fix the problem.
+
 ## Restart policies (--restart)
 
 Using the `--restart` flag on Docker run you can specify a restart policy for
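To make the note about `/etc/hosts` concrete, here is a minimal sketch of how a process inside a container could tolerate a momentarily incomplete file; the `db-static` entry is taken from the example above, and the retry count and delay are arbitrary.

```
# Retry briefly if the expected /etc/hosts entry is not visible yet,
# because Docker may be rewriting the file at the moment of the read.
for attempt in 1 2 3 4 5; do
    if grep -q 'db-static' /etc/hosts; then
        break
    fi
    sleep 0.2
done
```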
... ...
@@ -11,7 +11,7 @@ weight=-3
 
 # Get started with multi-host networking
 
-This article uses an example to explain the basics of creating a mult-host
+This article uses an example to explain the basics of creating a multi-host
 network. Docker Engine supports multi-host-networking out-of-the-box through the
 `overlay` network driver.  Unlike `bridge` networks overlay networks require
 some pre-existing conditions before you can create one. These conditions are:
... ...
@@ -21,8 +21,10 @@ some pre-existing conditions before you can create one. These conditions are:
 * A cluster of hosts with connectivity to the key-value store.
 * A properly configured Engine `daemon` on each host in the cluster.
 
-You'll use Docker Machine to create both the the key-value store server and the
-host cluster. This example creates a Swarm cluster.
+Though Docker Machine and Docker Swarm are not mandatory to experience Docker
+multi-host-networking, this example uses them to illustrate how they are
+integrated. You'll use Machine to create both the key-value store
+server and the host cluster. This example creates a Swarm cluster.
 
 ## Prerequisites
 
... ...
@@ -46,7 +48,7 @@ store) key-value stores. This example uses Consul.
 
 2. Provision a VirtualBox machine called `mh-keystore`.
 
-				$ docker-machine create -d virtualbox mh-keystore
+		$ docker-machine create -d virtualbox mh-keystore
 
 	When you provision a new machine, the process adds Docker Engine to the
 	host. This means rather than installing Consul manually, you can create an
... ...
@@ -55,10 +57,10 @@ store) key-value stores. This example uses Consul.
 
 3. Start a `progrium/consul` container running on the `mh-keystore` machine.
 
-			$  docker $(docker-machine config mh-keystore) run -d \
-				-p "8500:8500" \
-				-h "consul" \
-				progrium/consul -server -bootstrap
+		$  docker $(docker-machine config mh-keystore) run -d \
+			-p "8500:8500" \
+			-h "consul" \
+			progrium/consul -server -bootstrap
 
 	 You passed the `docker run` command the connection configuration using a bash
 	 expansion `$(docker-machine config mh-keystore)`.  The client started a
... ...
@@ -66,13 +68,13 @@ store) key-value stores. This example uses Consul.
 
 4. Set your local environment to the `mh-keystore` machine.
 
-			$  eval "$(docker-machine env mh-keystore)"
+		$  eval "$(docker-machine env mh-keystore)"
 
 5. Run the `docker ps` command to see the `consul` container.
 
-			$ docker ps
-			CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
-			4d51392253b3        progrium/consul     "/bin/start -server -"   25 minutes ago      Up 25 minutes       53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini
+		$ docker ps
+		CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
+		4d51392253b3        progrium/consul     "/bin/start -server -"   25 minutes ago      Up 25 minutes       53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini
 
 Keep your terminal open and move onto the next step.
 
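As an optional sanity check that is not part of the original walkthrough, you can confirm the key-value store is answering before moving on. This assumes Consul's HTTP API is reachable on port 8500, as published by the `-p "8500:8500"` mapping above.

```
$ curl http://$(docker-machine ip mh-keystore):8500/v1/catalog/nodes
```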
... ...
@@ -87,13 +89,13 @@ that machine options that are needed by the `overlay` network driver.
 
 1. Create a Swarm master.
 
-			$ docker-machine create \
-			-d virtualbox \
-			--swarm --swarm-image="swarm" --swarm-master \
-			--swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
-			--engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
-			--engine-opt="cluster-advertise=eth1:2376" \
-			mhs-demo0
+		$ docker-machine create \
+		-d virtualbox \
+		--swarm --swarm-image="swarm" --swarm-master \
+		--swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
+		--engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
+		--engine-opt="cluster-advertise=eth1:2376" \
+		mhs-demo0
 
 	At creation time, you supply the Engine `daemon` with the ` --cluster-store` option. This option tells the Engine the location of the key-value store for the `overlay` network. The bash expansion `$(docker-machine ip mh-keystore)` resolves to the IP address of the Consul server you created in "STEP 1". The `--cluster-advertise` option advertises the machine on the network.
 
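For readers configuring the Engine without Docker Machine, a rough equivalent (an assumption for illustration, not something this patch documents) is to pass the same cluster options to the daemon directly:

```
# <consul-host> stands in for the address of your key-value store
$ docker daemon \
    --cluster-store=consul://<consul-host>:8500 \
    --cluster-advertise=eth1:2376
```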
... ...
@@ -126,74 +128,71 @@ To create an overlay network
 
 1. Set your docker environment to the Swarm master.
 
-			$ eval $(docker-machine env --swarm mhs-demo0)
+		$ eval $(docker-machine env --swarm mhs-demo0)
 
-		Using the `--swarm` flag with `docker-machine` restricts the `docker` commands to Swarm information alone.
+	Using the `--swarm` flag with `docker-machine` restricts the `docker` commands to Swarm information alone.
 
 2. Use the `docker info` command to view the Swarm.
 
-			$ docker info
-			Containers: 3
-			Images: 2
-			Role: primary
-			Strategy: spread
-			Filters: affinity, health, constraint, port, dependency
-			Nodes: 2
-			mhs-demo0: 192.168.99.104:2376
-			└ Containers: 2
-			└ Reserved CPUs: 0 / 1
-			└ Reserved Memory: 0 B / 1.021 GiB
-			└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
-			mhs-demo1: 192.168.99.105:2376
-			└ Containers: 1
-			└ Reserved CPUs: 0 / 1
-			└ Reserved Memory: 0 B / 1.021 GiB
-			└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
-			CPUs: 2
-			Total Memory: 2.043 GiB
-			Name: 30438ece0915
+		$ docker info
+		Containers: 3
+		Images: 2
+		Role: primary
+		Strategy: spread
+		Filters: affinity, health, constraint, port, dependency
+		Nodes: 2
+		mhs-demo0: 192.168.99.104:2376
+		└ Containers: 2
+		└ Reserved CPUs: 0 / 1
+		└ Reserved Memory: 0 B / 1.021 GiB
+		└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
+		mhs-demo1: 192.168.99.105:2376
+		└ Containers: 1
+		└ Reserved CPUs: 0 / 1
+		└ Reserved Memory: 0 B / 1.021 GiB
+		└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
+		CPUs: 2
+		Total Memory: 2.043 GiB
+		Name: 30438ece0915
 
 	From this information, you can see that you are running three containers and 2 images on the Master.
 
 3. Create your `overlay` network.
 
-			$ docker network create --driver overlay my-net
+		$ docker network create --driver overlay my-net
 
-		You only need to create the network on a single host in the cluster. In this case, you used the Swarm master but you could easily have run it on any host in the cluster.
+	You only need to create the network on a single host in the cluster. In this case, you used the Swarm master but you could easily have run it on any host in the cluster.
 
 4. Check that the network is running:
 
-			$ docker network ls
-			NETWORK ID          NAME                DRIVER
-			412c2496d0eb        mhs-demo1/host      host
-			dd51763e6dd2        mhs-demo0/bridge    bridge
-			6b07d0be843f        my-net              overlay
-			b4234109bd9b        mhs-demo0/none      null
-			1aeead6dd890        mhs-demo0/host      host
-			d0bb78cbe7bd        mhs-demo1/bridge    bridge
-			1c0eb8f69ebb        mhs-demo1/none      null
+		$ docker network ls
+		NETWORK ID          NAME                DRIVER
+		412c2496d0eb        mhs-demo1/host      host
+		dd51763e6dd2        mhs-demo0/bridge    bridge
+		6b07d0be843f        my-net              overlay
+		b4234109bd9b        mhs-demo0/none      null
+		1aeead6dd890        mhs-demo0/host      host
+		d0bb78cbe7bd        mhs-demo1/bridge    bridge
+		1c0eb8f69ebb        mhs-demo1/none      null
 
 	Because you are in the Swarm master environment, you see all the networks on all Swarm agents. Notice that each `NETWORK ID` is unique.  The default networks on each engine and the single overlay network.  
 
 5. Switch to each Swarm agent in turn and list the network.
 
-			$ eval $(docker-machine env mhs-demo0)
-
-			$ docker network ls
-			NETWORK ID          NAME                DRIVER
-			6b07d0be843f        my-net              overlay
-			dd51763e6dd2        bridge              bridge
-			b4234109bd9b        none                null
-			1aeead6dd890        host                host
-
-			$ eval $(docker-machine env mhs-demo1)
-
-			$ docker network ls
-			NETWORK ID          NAME                DRIVER
-			d0bb78cbe7bd        bridge              bridge
-			1c0eb8f69ebb        none                null
-			412c2496d0eb        host                host
-			6b07d0be843f        my-net              overlay
+		$ eval $(docker-machine env mhs-demo0)
+		$ docker network ls
+		NETWORK ID          NAME                DRIVER
+		6b07d0be843f        my-net              overlay
+		dd51763e6dd2        bridge              bridge
+		b4234109bd9b        none                null
+		1aeead6dd890        host                host
+		$ eval $(docker-machine env mhs-demo1)
+		$ docker network ls
+		NETWORK ID          NAME                DRIVER
+		d0bb78cbe7bd        bridge              bridge
+		1c0eb8f69ebb        none                null
+		412c2496d0eb        host                host
+		6b07d0be843f        my-net              overlay
 
  Both agents reports it has the `my-net `network with the `6b07d0be843f` id.  You have a multi-host container network running!
 
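If you want more detail than `docker network ls` shows, `docker network inspect` prints the selected network's configuration as JSON; the exact output is omitted here rather than guessed at.

```
$ docker network inspect my-net
```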
... ...
@@ -203,7 +202,7 @@ Once your network is created, you can start a container on any of the hosts and
 
 1. Point your environment to your `mhs-demo0` instance.
 
-			$ eval $(docker-machine env mhs-demo0)
+		$ eval $(docker-machine env mhs-demo0)
 
 2. Start an Nginx server on `mhs-demo0`.
 
... ...
@@ -215,7 +214,7 @@ Once your network is created, you can start a container on any of the hosts and
 
 		$ eval $(docker-machine env mhs-demo1)
 
-2. Run a Busybox instance and get the contents of the Ngnix server's home page.
+4. Run a Busybox instance and get the contents of the Nginx server's home page.
 
 		$ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
 		Unable to find image 'busybox:latest' locally
... ...
@@ -252,9 +251,68 @@ Once your network is created, you can start a container on any of the hosts and
 		</html>
 		-                    100% |*******************************|   612   0:00:00 ETA
 
-## Step 5: Extra Credit with Docker Compose
+## Step 5: Check external connectivity
+
+As you've seen, Docker's built-in overlay network driver provides out-of-the-box
+connectivity between the containers on multiple hosts within the same network.
+Additionally, containers connected to the multi-host network are automatically
+connected to the `docker_gwbridge` network. This network allows the containers
+to have external connectivity outside of their cluster.
+
+1. Change your environment to the Swarm agent.
 
-You can try starting a second network on your existing Swarm cluser using Docker Compose.
+		$ eval $(docker-machine env mhs-demo1)
+
+2. View the `docker_gwbridge` network by listing the networks.
+
+		$ docker network ls
+		NETWORK ID          NAME                DRIVER
+		6b07d0be843f        my-net              overlay
+		dd51763e6dd2        bridge              bridge
+		b4234109bd9b        none                null
+		1aeead6dd890        host                host
+		e1dbd5dff8be        docker_gwbridge     bridge
+
+3. Repeat steps 1 and 2 on the Swarm master.
+
+		$ eval $(docker-machine env mhs-demo0)
+		$ docker network ls
+		NETWORK ID          NAME                DRIVER
+		6b07d0be843f        my-net              overlay
+		d0bb78cbe7bd        bridge              bridge
+		1c0eb8f69ebb        none                null
+		412c2496d0eb        host                host
+		97102a22e8d2        docker_gwbridge     bridge
+
+4. Check the Nginx container's network interfaces.
+
+		$ docker exec web ip addr
+		1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
+		link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+		inet 127.0.0.1/8 scope host lo
+		    valid_lft forever preferred_lft forever
+		inet6 ::1/128 scope host
+		    valid_lft forever preferred_lft forever
+		22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
+		link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
+		inet 10.0.9.3/24 scope global eth0
+		    valid_lft forever preferred_lft forever
+		inet6 fe80::42:aff:fe00:903/64 scope link
+		    valid_lft forever preferred_lft forever
+		24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+		link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
+		inet 172.18.0.2/16 scope global eth1
+		    valid_lft forever preferred_lft forever
+		inet6 fe80::42:acff:fe12:2/64 scope link
+		    valid_lft forever preferred_lft forever
+
+	The `eth0` interface represents the container interface that is connected to
+	the `my-net` overlay network, while the `eth1` interface represents the
+	container interface that is connected to the `docker_gwbridge` network.
+
+## Step 6: Extra Credit with Docker Compose
+
+You can try starting a second network on your existing Swarm cluster using Docker Compose.
 
 1. Log into the Swarm master.
 
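As an optional extra check, not part of the original steps, you can verify the external connectivity that `docker_gwbridge` provides by reaching an address outside the cluster from a container attached to the overlay network; the target address here is only an example.

```
$ docker run -it --rm --net=my-net busybox ping -c 3 8.8.8.8
```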
... ...
@@ -271,7 +329,6 @@ You can try starting a second network on your existing Swarm cluser using Docker
 				- "constraint:node==swl-demo0"
 			ports:
 				- "80:5000"
-
 		mongo:
 			image: mongo
 
... ...
@@ -283,5 +340,7 @@ You can try starting a second network on your existing Swarm cluser using Docker
 
 ## Related information
 
+* [Understand Docker container networks](dockernetworks.md)
+* [Work with network commands](work-with-networks.md)
 * [Docker Swarm overview](https://docs.docker.com/swarm)
 * [Docker Machine overview](https://docs.docker.com/machine)
... ...
@@ -355,9 +355,9 @@ ports and the exposed ports, use `docker port`.
    Publish a container's port, or range of ports, to the host.
 
    Format: `ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort`
-Both hostPort and containerPort can be specified as a range of ports. 
+Both hostPort and containerPort can be specified as a range of ports.
 When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range.
-(e.g., `docker run -p 1234-1236:1222-1224 --name thisWorks -t busybox` 
+(e.g., `docker run -p 1234-1236:1222-1224 --name thisWorks -t busybox`
 but not `docker run -p 1230-1236:1230-1240 --name RangeContainerPortsBiggerThanRangeHostPorts -t busybox`)
 With ip: `docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage`
 Use `docker port` to see the actual mapping: `docker port CONTAINER $CONTAINERPORT`
... ...
@@ -437,17 +437,17 @@ standard input.
 **--ulimit**=[]
     Ulimit options
 
-**-v**, **--volume**=[] Create a bind mount 
+**-v**, **--volume**=[] Create a bind mount
    (format: `[host-dir:]container-dir[:<suffix options>]`, where suffix options
 are comma delimited and selected from [rw|ro] and [z|Z].)
-   
+
    (e.g., using -v /host-dir:/container-dir, bind mounts /host-dir in the
 host to /container-dir in the Docker container)
-   
+
    If 'host-dir' is missing, then docker automatically creates the new volume
 on the host. **This auto-creation of the host path has been deprecated in
 Release: v1.9.**
-   
+
    The **-v** option can be used one or
 more times to add one or more mounts to a container. These mounts can then be
 used in other containers using the **--volumes-from** option.
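To make the bind-mount versus named-volume behaviour described above concrete, here is a small illustrative sequence; the names `demo-vol` and `writer` are arbitrary choices for this sketch, not part of the man page.

```
# "demo-vol" has no leading slash, so Docker creates (or reuses) a named volume
$ docker run -d --name writer -v demo-vol:/src/docs busybox top

# a second container mounts the same volume via --volumes-from
$ docker run --rm --volumes-from writer busybox ls /src/docs
```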
... ...
@@ -469,31 +469,31 @@ content label. Shared volume labels allow all containers to read/write content.
 The `Z` option tells Docker to label the content with a private unshared label.
 Only the current container can use a private volume.
 
-The `container-dir` must always be an absolute path such as `/src/docs`. 
-The `host-dir` can either be an absolute path or a `name` value. If you 
-supply an absolute path for the `host-dir`, Docker bind-mounts to the path 
+The `container-dir` must always be an absolute path such as `/src/docs`.
+The `host-dir` can either be an absolute path or a `name` value. If you
+supply an absolute path for the `host-dir`, Docker bind-mounts to the path
 you specify. If you supply a `name`, Docker creates a named volume by that `name`.
 
-A `name` value must start with start with an alphanumeric character, 
-followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen). 
+A `name` value must start with an alphanumeric character,
+followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen).
 An absolute path starts with a `/` (forward slash).
 
-For example, you can specify either `/foo` or `foo` for a `host-dir` value. 
-If you supply the `/foo` value, Docker creates a bind-mount. If you supply 
+For example, you can specify either `/foo` or `foo` for a `host-dir` value.
+If you supply the `/foo` value, Docker creates a bind-mount. If you supply
 the `foo` specification, Docker creates a named volume.
 
 **--volumes-from**=[]
    Mount volumes from the specified container(s)
 
    Mounts already mounted volumes from a source container onto another
-   container. You must supply the source's container-id. To share 
+   container. You must supply the source's container-id. To share
    a volume, use the **--volumes-from** option when running
-   the target container. You can share volumes even if the source container 
+   the target container. You can share volumes even if the source container
    is not running.
 
-   By default, Docker mounts the volumes in the same mode (read-write or 
-   read-only) as it is mounted in the source container. Optionally, you 
-   can change this by suffixing the container-id with either the `:ro` or 
+   By default, Docker mounts the volumes in the same mode (read-write or
+   read-only) as it is mounted in the source container. Optionally, you
+   can change this by suffixing the container-id with either the `:ro` or
    `:rw ` keyword.
 
    If the location of the volume from the source container overlaps with
... ...
@@ -558,7 +558,7 @@ Now run a regular container, and it correctly does NOT see the shared memory seg
 ```
  $ docker run -it shm ipcs -m
 
- ------ Shared Memory Segments --------	
+ ------ Shared Memory Segments --------
  key        shmid      owner      perms      bytes      nattch     status      
 ```
 
... ...
@@ -637,6 +637,15 @@ Running the **env** command in the linker container shows environment variables
 When linking two containers Docker will use the exposed ports of the container
 to create a secure tunnel for the parent to access.
 
+If a container is connected to the default bridge network and `linked`
+with other containers, then the container's `/etc/hosts` file is updated
+with the linked container's name.
+
+> **Note**: Because Docker may live-update the container's `/etc/hosts` file, there
+may be situations when processes inside the container end up reading an
+empty or incomplete `/etc/hosts` file. In most cases, retrying the read
+should fix the problem.
+
 
 ## Mapping Ports for External Usage