
add overlay networking security model node

Signed-off-by: Charles Smith <charles.smith@docker.com>

Charles Smith authored on 2016/07/27 12:40:17
Showing 19 changed files
... ...
@@ -93,5 +93,5 @@ You can connect a container to one or more networks. The networks need not be th
93 93
 * [network disconnect](network_disconnect.md)
94 94
 * [network ls](network_ls.md)
95 95
 * [network rm](network_rm.md)
96
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
96
+* [Understand Docker container networks](../../userguide/networking/index.md)
97 97
 * [Work with networks](../../userguide/networking/work-with-networks.md)
... ...
@@ -192,4 +192,4 @@ to create an externally isolated `overlay` network, you can specify the
192 192
 * [network disconnect](network_disconnect.md)
193 193
 * [network ls](network_ls.md)
194 194
 * [network rm](network_rm.md)
195
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
195
+* [Understand Docker container networks](../../userguide/networking/index.md)
... ...
@@ -34,4 +34,4 @@ Disconnects a container from a network. The container must be running to disconn
34 34
 * [network create](network_create.md)
35 35
 * [network ls](network_ls.md)
36 36
 * [network rm](network_rm.md)
37
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
37
+* [Understand Docker container networks](../../userguide/networking/index.md)
... ...
@@ -119,4 +119,4 @@ $ docker network inspect simple-network
119 119
 * [network create](network_create.md)
120 120
 * [network ls](network_ls.md)
121 121
 * [network rm](network_rm.md)
122
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
122
+* [Understand Docker container networks](../../userguide/networking/index.md)
... ...
@@ -209,4 +209,4 @@ d1584f8dc718: host
209 209
 * [network create](network_create.md)
210 210
 * [network inspect](network_inspect.md)
211 211
 * [network rm](network_rm.md)
212
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
212
+* [Understand Docker container networks](../../userguide/networking/index.md)
... ...
@@ -50,4 +50,4 @@ deletion.
50 50
 * [network create](network_create.md)
51 51
 * [network ls](network_ls.md)
52 52
 * [network inspect](network_inspect.md)
53
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
53
+* [Understand Docker container networks](../../userguide/networking/index.md)
... ...
@@ -120,10 +120,10 @@ certificates](https.md).
120 120
 
121 121
 The daemon is also potentially vulnerable to other inputs, such as image
122 122
 loading from either disk with 'docker load', or from the network with
123
-'docker pull'. As of Docker 1.3.2, images are now extracted in a chrooted 
124
-subprocess on Linux/Unix platforms, being the first-step in a wider effort 
125
-toward privilege separation. As of Docker 1.10.0, all images are stored and 
126
-accessed by the cryptographic checksums of their contents, limiting the 
123
+'docker pull'. As of Docker 1.3.2, images are now extracted in a chrooted
124
+subprocess on Linux/Unix platforms, being the first step in a wider effort
125
+toward privilege separation. As of Docker 1.10.0, all images are stored and
126
+accessed by the cryptographic checksums of their contents, limiting the
127 127
 possibility of an attacker causing a collision with an existing image.
128 128
 
129 129
 Eventually, it is expected that the Docker daemon will run restricted
... ...
@@ -272,3 +272,4 @@ pull requests, and communicate via the mailing list.
272 272
 * [Seccomp security profiles for Docker](../security/seccomp.md)
273 273
 * [AppArmor security profiles for Docker](../security/apparmor.md)
274 274
 * [On the Security of Containers (2014)](https://medium.com/@ewindisch/on-the-security-of-containers-2c60ffe25a9e)
275
+* [Docker swarm mode overlay network security model](../userguide/networking/overlay-security-model.md)
... ...
@@ -43,7 +43,7 @@ This guide helps users learn how to use Docker Engine.
43 43
 
44 44
 ## Configure networks
45 45
 
46
-- [Understand Docker container networks](networking/dockernetworks.md)
46
+- [Understand Docker container networks](networking/index.md)
47 47
 - [Embedded DNS server in user-defined networks](networking/configure-dns.md)
48 48
 - [Get started with multi-host networking](networking/get-started-overlay.md)
49 49
 - [Work with network commands](networking/work-with-networks.md)
... ...
@@ -55,8 +55,8 @@ This guide helps users learn how to use Docker Engine.
55 55
 - [Binding container ports to the host](networking/default_network/binding.md)
56 56
 - [Build your own bridge](networking/default_network/build-bridges.md)
57 57
 - [Configure container DNS](networking/default_network/configure-dns.md)
58
-- [Customize the docker0 bridge](networking/default_network/custom-docker0.md)  
59
-- [IPv6 with Docker](networking/default_network/ipv6.md)  
58
+- [Customize the docker0 bridge](networking/default_network/custom-docker0.md)
59
+- [IPv6 with Docker](networking/default_network/ipv6.md)
60 60
 
61 61
 ## Misc
62 62
 
... ...
@@ -12,7 +12,7 @@ parent = "smn_networking_def"
12 12
 
13 13
 The information in this section explains binding container ports within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
14 14
 
15
-> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to
15
+> **Note**: The [Docker networks feature](../index.md) allows you to
16 16
 create user-defined networks in addition to the default bridge network.
17 17
 
18 18
 By default Docker containers can make connections to the outside world, but the
... ...
@@ -100,6 +100,6 @@ address: this alternative is preferred for performance reasons.
100 100
 
101 101
 ## Related information
102 102
 
103
-- [Understand Docker container networks](../dockernetworks.md)
103
+- [Understand Docker container networks](../index.md)
104 104
 - [Work with network commands](../work-with-networks.md)
105 105
 - [Legacy container links](dockerlinks.md)
... ...
@@ -14,7 +14,7 @@ This section explains how to build your own bridge to replace the Docker default
14 14
 bridge. This is a `bridge` network named `bridge` created automatically when you
15 15
 install Docker.
16 16
 
17
-> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to
17
+> **Note**: The [Docker networks feature](../index.md) allows you to
18 18
 create user-defined networks in addition to the default bridge network.
19 19
 
20 20
 You can set up your own bridge before starting Docker and use `-b BRIDGE` or
... ...
@@ -14,7 +14,7 @@ The information in this section explains configuring container DNS within
14 14
 the Docker default bridge. This is a `bridge` network named `bridge` created
15 15
 automatically when you install Docker.  
16 16
 
17
-> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network. Please refer to the [Docker Embedded DNS](../configure-dns.md) section for more information on DNS configurations in user-defined networks.
17
+> **Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network. Please refer to the [Docker Embedded DNS](../configure-dns.md) section for more information on DNS configurations in user-defined networks.
18 18
 
19 19
 How can Docker supply each container with a hostname and DNS configuration, without having to build a custom image with the hostname written inside?  Its trick is to overlay three crucial `/etc` files inside the container with virtual files where it can write fresh information.  You can see this by running `mount` inside a container:
20 20
 
... ...
@@ -14,7 +14,7 @@ The information in this section explains container communication within the
14 14
 Docker default bridge. This is a `bridge` network named `bridge` created
15 15
 automatically when you install Docker.  
16 16
 
17
-**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
17
+**Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network.
18 18
 
19 19
 ## Communicating to the outside world
20 20
 
... ...
@@ -12,7 +12,7 @@ parent = "smn_networking_def"
12 12
 
13 13
 The information in this section explains how to customize the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.  
14 14
 
15
-**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
15
+**Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network.
16 16
 
17 17
 By default, the Docker server creates and configures the host system's `docker0` interface as an _Ethernet bridge_ inside the Linux kernel that can pass packets back and forth between other physical or virtual network interfaces so that they behave as a single Ethernet network.
18 18
 
... ...
@@ -13,7 +13,7 @@ weight=-2
13 13
 
14 14
 The information in this section explains legacy container links within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
15 15
 
16
-Before the [Docker networks feature](../dockernetworks.md), you could use the
16
+Before the [Docker networks feature](../index.md), you could use the
17 17
 Docker link feature to allow containers to discover each other and securely
18 18
 transfer information about one container to another container. With the
19 19
 introduction of the Docker networks feature, you can still create links but they
... ...
@@ -14,19 +14,69 @@ weight=-3
14 14
 This article uses an example to explain the basics of creating a multi-host
15 15
 network. Docker Engine supports multi-host networking out-of-the-box through the
16 16
 `overlay` network driver.  Unlike `bridge` networks, overlay networks require
17
-some pre-existing conditions before you can create one. These conditions are:
17
+some pre-existing conditions before you can create one:
18 18
 
19
-* Access to a key-value store. Docker supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores.
19
+* [Docker Engine running in swarm mode](#overlay-networking-and-swarm-mode)
20
+
21
+OR
22
+
23
+* [A cluster of hosts using a key value store](#overlay-networking-with-an-external-key-value-store)
24
+
25
+## Overlay networking and swarm mode
26
+
27
+Using Docker Engine running in [swarm mode](../../swarm/swarm-mode.md), you can create an overlay network on a manager node.
28
+
29
+The swarm makes the overlay network available only to nodes in the swarm that
30
+require it for a service. When you create a service that uses an overlay
31
+network, the manager node automatically extends the overlay network to nodes
32
+that run service tasks.
33
+
34
+To learn more about running Docker Engine in swarm mode, refer to the
35
+[Swarm mode overview](../../swarm/index.md).
36
+
37
+The example below shows how to create a network and use it for a service from a manager node in the swarm:
38
+
39
+```bash
40
+# Create an overlay network `my-multi-host-network`.
41
+$ docker network create \
42
+  --driver overlay \
43
+  --subnet 10.0.9.0/24 \
44
+  my-multi-host-network
45
+
46
+400g6bwzd68jizzdx5pgyoe95
47
+
48
+# Create an nginx service and extend the my-multi-host-network to nodes where
49
+# the service's tasks run.
50
+$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
51
+
52
+716thylsndqma81j6kkkb5aus
53
+```
54
+
55
+Overlay networks for a swarm are not available to unmanaged containers. For more information, refer to [Docker swarm mode overlay network security model](overlay-security-model.md).
56
+
57
+
58
+## Overlay networking with an external key-value store
59
+
60
+To use Docker Engine with an external key-value store, you need the
61
+following:
62
+
63
+* Access to the key-value store. Docker supports Consul, Etcd, and ZooKeeper
64
+(Distributed store) key-value stores.
20 65
 * A cluster of hosts with connectivity to the key-value store.
21 66
 * A properly configured Engine `daemon` on each host in the cluster.
22
-* Hosts within the cluster must have unique hostnames because the key-value store uses the hostnames to identify cluster members.
67
+* Hosts within the cluster must have unique hostnames because the key-value
68
+store uses the hostnames to identify cluster members.
23 69
 
24 70
 Though Docker Machine and Docker Swarm are not mandatory to experience Docker
25
-multi-host networking, this example uses them to illustrate how they are
26
-integrated. You'll use Machine to create both the key-value store
27
-server and the host cluster. This example creates a Swarm cluster.
71
+multi-host networking with a key-value store, this example uses them to
72
+illustrate how they are integrated. You'll use Machine to create both the
73
+key-value store server and the host cluster. This example creates a Swarm
74
+cluster.
75
+
76
+>**Note:** Docker Engine running in swarm mode is not compatible with overlay
77
+networking that uses an external key-value store.
28 78
 
29
-## Prerequisites
79
+### Prerequisites
30 80
 
31 81
 Before you begin, make sure you have a system on your network with the latest
32 82
 version of Docker Engine and Docker Machine installed. The example also relies
... ...
@@ -37,7 +87,7 @@ If you have not already done so, make sure you upgrade Docker Engine and Docker
37 37
 Machine to the latest versions.
38 38
 
39 39
 
40
-## Step 1: Set up a key-value store
40
+### Set up a key-value store
41 41
 
42 42
 An overlay network requires a key-value store. The key-value store holds
43 43
 information about the network state which includes discovery, networks,
... ...
@@ -80,7 +130,7 @@ key-value stores. This example uses Consul.
80 80
 Keep your terminal open and move onto the next step.
81 81
 
82 82
 
83
-## Step 2: Create a Swarm cluster
83
+### Create a Swarm cluster
84 84
 
85 85
 In this step, you use `docker-machine` to provision the hosts for your network.
86 86
 At this point, you won't actually create the network. You'll create several
... ...
@@ -123,7 +173,7 @@ At this point you have a set of hosts running on your network. You are ready to
123 123
 
124 124
 Leave your terminal open and go onto the next step.
125 125
 
126
-## Step 3: Create the overlay Network
126
+### Create the overlay Network
127 127
 
128 128
 To create an overlay network
129 129
 
... ...
@@ -213,7 +263,7 @@ To create an overlay network
213 213
   Both agents report they have the `my-net` network with the `6b07d0be843f` ID.
214 214
 	You now have a multi-host container network running!
215 215
 
216
-##  Step 4: Run an application on your Network
216
+### Run an application on your Network
217 217
 
218 218
 Once your network is created, you can start a container on any of the hosts and it automatically is part of the network.
219 219
 
... ...
@@ -263,7 +313,7 @@ Once your network is created, you can start a container on any of the hosts and
263 263
 		</html>
264 264
 		-                    100% |*******************************|   612   0:00:00 ETA
265 265
 
266
-## Step 5: Check external connectivity
266
+### Check external connectivity
267 267
 
268 268
 As you've seen, Docker's built-in overlay network driver provides out-of-the-box
269 269
 connectivity between the containers on multiple hosts within the same network.
... ...
@@ -326,7 +376,7 @@ to have external connectivity outside of their cluster.
326 326
 	the `my-net` overlay network. While the `eth1` interface represents the
327 327
 	container interface that is connected to the `docker_gwbridge` network.
328 328
 
329
-## Step 6: Extra Credit with Docker Compose
329
+### Extra Credit with Docker Compose
330 330
 
331 331
 Please refer to the Networking feature introduced in [Compose V2 format]
332 332
 (https://docs.docker.com/compose/networking/) and execute the
... ...
@@ -334,7 +384,7 @@ multi-host networking scenario in the Swarm cluster used above.
334 334
 
335 335
 ## Related information
336 336
 
337
-* [Understand Docker container networks](dockernetworks.md)
337
+* [Understand Docker container networks](index.md)
338 338
 * [Work with network commands](work-with-networks.md)
339 339
 * [Docker Swarm overview](https://docs.docker.com/swarm)
340 340
 * [Docker Machine overview](https://docs.docker.com/machine)
... ...
@@ -1,21 +1,571 @@
1 1
 <!--[metadata]>
2 2
 +++
3
-title = "Network configuration"
4
-description = "Docker networking feature is introduced"
5
-keywords = ["network, networking, bridge, docker,  documentation"]
3
+aliases=[
4
+"/engine/userguide/networking/dockernetworks/"
5
+]
6
+title = "Docker container networking"
7
+description = "How do we connect docker containers within and across hosts ?"
8
+keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
6 9
 [menu.main]
7
-identifier="smn_networking"
8
-parent= "engine_guide"
9
-weight=7
10
+identifier="networking_index"
11
+parent = "smn_networking"
12
+weight = -5
10 13
 +++
11 14
 <![end-metadata]-->
12 15
 
13
-# Docker networks feature overview
16
+# Understand Docker container networks
14 17
 
15
-This sections explains how to use the Docker networks feature. This feature allows users to define their own networks and connect containers to them. Using this feature you can create a network on a single host or a network that spans across multiple hosts.
18
+This section provides an overview of the default networking behavior that Docker
19
+Engine delivers natively. It describes the type of networks created by default
20
+and how to create your own, user-defined networks. It also describes the
21
+resources required to create networks on a single host or across a cluster of
22
+hosts.
23
+
24
+## Default Networks
25
+
26
+When you install Docker, it creates three networks automatically. You can list
27
+these networks using the `docker network ls` command:
28
+
29
+```
30
+$ docker network ls
31
+
32
+NETWORK ID          NAME                DRIVER
33
+7fca4eb8c647        bridge              bridge
34
+9f904ee27bf5        none                null
35
+cf03ee007fb4        host                host
36
+```
37
+
38
+Historically, these three networks are part of Docker's implementation. When
39
+you run a container you can use the `--network` flag to specify which network you
40
+want to run a container on. These three networks are still available to you.
41
+
42
+The `bridge` network represents the `docker0` network present in all Docker
43
+installations. Unless you specify otherwise with the `docker run
44
+--network=<NETWORK>` option, the Docker daemon connects containers to this network
45
+by default. You can see this bridge as part of a host's network stack by using
46
+the `ifconfig` command on the host.
47
+
48
+```
49
+$ ifconfig
50
+
51
+docker0   Link encap:Ethernet  HWaddr 02:42:47:bc:3a:eb
52
+          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
53
+          inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
54
+          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
55
+          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
56
+          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
57
+          collisions:0 txqueuelen:0
58
+          RX bytes:1100 (1.1 KB)  TX bytes:648 (648.0 B)
59
+```
60
+
61
+The `none` network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack, you see this:
62
+
63
+```
64
+$ docker attach nonenetcontainer
65
+
66
+root@0cb243cd1293:/# cat /etc/hosts
67
+127.0.0.1	localhost
68
+::1	localhost ip6-localhost ip6-loopback
69
+fe00::0	ip6-localnet
70
+ff00::0	ip6-mcastprefix
71
+ff02::1	ip6-allnodes
72
+ff02::2	ip6-allrouters
73
+root@0cb243cd1293:/# ifconfig
74
+lo        Link encap:Local Loopback
75
+          inet addr:127.0.0.1  Mask:255.0.0.0
76
+          inet6 addr: ::1/128 Scope:Host
77
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
78
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
79
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
80
+          collisions:0 txqueuelen:0
81
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
82
+
83
+root@0cb243cd1293:/#
84
+```
85
+>**Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.
86
+
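For reference, a container like `nonenetcontainer` above can be started by attaching it to the `none` network when you run it; the container name and image here are illustrative:

```bash
# Start a detached busybox container with no network interface.
$ docker run -itd --network=none --name=nonenetcontainer busybox
```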
87
+The `host` network adds a container to the host's network stack. You'll find that
88
+the network configuration inside the container is identical to the host's.
89
+
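As a quick illustration (a sketch only; `busybox` is simply a convenient image), a container on the `host` network sees the host's interfaces, including `docker0`:

```bash
# The container shares the host's network stack, so ifconfig prints the host's
# interfaces (docker0, eth0, lo, ...) rather than a container-specific eth0.
$ docker run --rm --network=host busybox ifconfig
```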
90
+With the exception of the `bridge` network, you really don't need to
91
+interact with these default networks. While you can list and inspect them, you
92
+cannot remove them. They are required by your Docker installation. However, you
93
+can add your own user-defined networks and these you can remove when you no
94
+longer need them. Before you learn more about creating your own networks, it is
95
+worth looking at the default `bridge` network a bit.
96
+
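For instance (commands only, output omitted):

```bash
$ docker network ls               # the three default networks are listed
$ docker network inspect none     # inspecting a default network works
$ docker network rm host          # fails: default networks cannot be removed
```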
97
+
98
+### The default bridge network in detail
99
+The default `bridge` network is present on all Docker hosts. The `docker network inspect`
100
+command returns information about a network:
101
+
102
+```
103
+$ docker network inspect bridge
104
+
105
+[
106
+   {
107
+       "Name": "bridge",
108
+       "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
109
+       "Scope": "local",
110
+       "Driver": "bridge",
111
+       "IPAM": {
112
+           "Driver": "default",
113
+           "Config": [
114
+               {
115
+                   "Subnet": "172.17.0.1/16",
116
+                   "Gateway": "172.17.0.1"
117
+               }
118
+           ]
119
+       },
120
+       "Containers": {},
121
+       "Options": {
122
+           "com.docker.network.bridge.default_bridge": "true",
123
+           "com.docker.network.bridge.enable_icc": "true",
124
+           "com.docker.network.bridge.enable_ip_masquerade": "true",
125
+           "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
126
+           "com.docker.network.bridge.name": "docker0",
127
+           "com.docker.network.driver.mtu": "9001"
128
+       }
129
+   }
130
+]
131
+```
132
+The Engine automatically creates a `Subnet` and `Gateway` for the network.
133
+The `docker run` command automatically adds new containers to this network.
134
+
135
+```
136
+$ docker run -itd --name=container1 busybox
137
+
138
+3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c
139
+
140
+$ docker run -itd --name=container2 busybox
141
+
142
+94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
143
+```
144
+
145
+Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their IDs appear in the "Containers" section of `docker network inspect`:
146
+
147
+```
148
+$ docker network inspect bridge
149
+
150
+[
151
+    {
152
+        "Name": "bridge",
153
+        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
154
+        "Scope": "local",
155
+        "Driver": "bridge",
156
+        "IPAM": {
157
+            "Driver": "default",
158
+            "Config": [
159
+                {
160
+                    "Subnet": "172.17.0.1/16",
161
+                    "Gateway": "172.17.0.1"
162
+                }
163
+            ]
164
+        },
165
+        "Containers": {
166
+            "3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
167
+                "EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
168
+                "MacAddress": "02:42:ac:11:00:02",
169
+                "IPv4Address": "172.17.0.2/16",
170
+                "IPv6Address": ""
171
+            },
172
+            "94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
173
+                "EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
174
+                "MacAddress": "02:42:ac:11:00:03",
175
+                "IPv4Address": "172.17.0.3/16",
176
+                "IPv6Address": ""
177
+            }
178
+        },
179
+        "Options": {
180
+            "com.docker.network.bridge.default_bridge": "true",
181
+            "com.docker.network.bridge.enable_icc": "true",
182
+            "com.docker.network.bridge.enable_ip_masquerade": "true",
183
+            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
184
+            "com.docker.network.bridge.name": "docker0",
185
+            "com.docker.network.driver.mtu": "9001"
186
+        }
187
+    }
188
+]
189
+```
190
+
191
+The `docker network inspect` command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy `docker run --link` option.
192
+
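For example, a sketch of the legacy linking approach on the default bridge (the container name `container4` and alias `c1` are illustrative):

```bash
# Link container4 to container1 under the alias c1. Inside container4, the alias
# resolves through /etc/hosts, so commands such as `ping -w3 c1` reach container1.
$ docker run -itd --name=container4 --link container1:c1 busybox
```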
193
+You can `attach` to a running container and investigate its configuration:
194
+
195
+```
196
+$ docker attach container1
197
+
198
+root@0cb243cd1293:/# ifconfig
199
+ifconfig
200
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
201
+          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
202
+          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
203
+          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
204
+          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
205
+          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
206
+          collisions:0 txqueuelen:0
207
+          RX bytes:1296 (1.2 KiB)  TX bytes:648 (648.0 B)
208
+
209
+lo        Link encap:Local Loopback
210
+          inet addr:127.0.0.1  Mask:255.0.0.0
211
+          inet6 addr: ::1/128 Scope:Host
212
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
213
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
214
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
215
+          collisions:0 txqueuelen:0
216
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
217
+```
218
+
219
+Then use `ping` to send three ICMP requests and test the connectivity of the
220
+containers on this `bridge` network.
221
+
222
+```
223
+root@0cb243cd1293:/# ping -w3 172.17.0.3
224
+
225
+PING 172.17.0.3 (172.17.0.3): 56 data bytes
226
+64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
227
+64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
228
+64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms
229
+
230
+--- 172.17.0.3 ping statistics ---
231
+3 packets transmitted, 3 packets received, 0% packet loss
232
+round-trip min/avg/max = 0.074/0.083/0.096 ms
233
+```
234
+
235
+Finally, use the `cat` command to check the `container1` network configuration:
236
+
237
+```
238
+root@0cb243cd1293:/# cat /etc/hosts
239
+
240
+172.17.0.2	3386a527aa08
241
+127.0.0.1	localhost
242
+::1	localhost ip6-localhost ip6-loopback
243
+fe00::0	ip6-localnet
244
+ff00::0	ip6-mcastprefix
245
+ff02::1	ip6-allnodes
246
+ff02::2	ip6-allrouters
247
+```
248
+To detach from `container1` and leave it running, use `CTRL-p CTRL-q`. Then, attach to `container2` and repeat these three commands.
249
+
250
+```
251
+$ docker attach container2
252
+
253
+root@0cb243cd1293:/# ifconfig
254
+
255
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
256
+          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
257
+          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
258
+          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
259
+          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
260
+          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
261
+          collisions:0 txqueuelen:0
262
+          RX bytes:1166 (1.1 KiB)  TX bytes:1026 (1.0 KiB)
263
+
264
+lo        Link encap:Local Loopback
265
+          inet addr:127.0.0.1  Mask:255.0.0.0
266
+          inet6 addr: ::1/128 Scope:Host
267
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
268
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
269
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
270
+          collisions:0 txqueuelen:0
271
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
272
+
273
+root@0cb243cd1293:/# ping -w3 172.17.0.2
274
+
275
+PING 172.17.0.2 (172.17.0.2): 56 data bytes
276
+64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
277
+64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
278
+64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
279
+
280
+--- 172.17.0.2 ping statistics ---
281
+3 packets transmitted, 3 packets received, 0% packet loss
282
+round-trip min/avg/max = 0.067/0.071/0.075 ms
283
+/ # cat /etc/hosts
284
+172.17.0.3	94447ca47985
285
+127.0.0.1	localhost
286
+::1	localhost ip6-localhost ip6-loopback
287
+fe00::0	ip6-localnet
288
+ff00::0	ip6-mcastprefix
289
+ff02::1	ip6-allnodes
290
+ff02::2	ip6-allrouters
291
+```
292
+
293
+The default `docker0` bridge network supports the use of port mapping and `docker run --link` to allow communication between containers in the `docker0` network. These techniques are cumbersome to set up and prone to error. While they are still available, it is better to avoid them and define your own bridge networks instead.
294
+
295
+## User-defined networks
296
+
297
+You can create your own user-defined networks that better isolate containers.
298
+Docker provides some default **network drivers** for creating these networks.
299
+You can create a new **bridge network**, **overlay network** or **MACVLAN
300
+network**. You can also create a **network plugin** or **remote network**
301
+written to your own specifications.
302
+
303
+You can create multiple networks. You can add containers to more than one
304
+network. Containers can communicate within a network, but not across
305
+networks. A container attached to two networks can communicate with member
306
+containers in either network. When a container is connected to multiple
307
+networks, its external connectivity is provided via the first non-internal
308
+network, in lexical order.
309
+
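As a sketch of attaching a container to more than one network (the `backend` network name is illustrative):

```bash
# Create a second user-defined bridge network and connect an existing container to it.
$ docker network create --driver bridge backend
$ docker network connect backend container1

# container1 is now attached to both the default bridge and the backend network.
```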
310
+The next few sections describe each of Docker's built-in network drivers in
311
+greater detail.
312
+
313
+### A bridge network
314
+
315
+The easiest user-defined network to create is a `bridge` network. This network
316
+is similar to the historical, default `docker0` network. There are some added
317
+features and some old features that aren't available.
318
+
319
+```
320
+$ docker network create --driver bridge isolated_nw
321
+1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b
322
+
323
+$ docker network inspect isolated_nw
324
+
325
+[
326
+    {
327
+        "Name": "isolated_nw",
328
+        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
329
+        "Scope": "local",
330
+        "Driver": "bridge",
331
+        "IPAM": {
332
+            "Driver": "default",
333
+            "Config": [
334
+                {
335
+                    "Subnet": "172.21.0.0/16",
336
+                    "Gateway": "172.21.0.1/16"
337
+                }
338
+            ]
339
+        },
340
+        "Containers": {},
341
+        "Options": {}
342
+    }
343
+]
344
+
345
+$ docker network ls
346
+
347
+NETWORK ID          NAME                DRIVER
348
+9f904ee27bf5        none                null
349
+cf03ee007fb4        host                host
350
+7fca4eb8c647        bridge              bridge
351
+c5ee82f76de3        isolated_nw         bridge
352
+
353
+```
354
+
355
+After you create the network, you can launch containers on it using the `docker run --network=<NETWORK>` option.
356
+
357
+```
358
+$ docker run --network=isolated_nw -itd --name=container3 busybox
359
+
360
+8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c
361
+
362
+$ docker network inspect isolated_nw
363
+[
364
+    {
365
+        "Name": "isolated_nw",
366
+        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
367
+        "Scope": "local",
368
+        "Driver": "bridge",
369
+        "IPAM": {
370
+            "Driver": "default",
371
+            "Config": [
372
+                {}
373
+            ]
374
+        },
375
+        "Containers": {
376
+            "8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c": {
377
+                "EndpointID": "93b2db4a9b9a997beb912d28bcfc117f7b0eb924ff91d48cfa251d473e6a9b08",
378
+                "MacAddress": "02:42:ac:15:00:02",
379
+                "IPv4Address": "172.21.0.2/16",
380
+                "IPv6Address": ""
381
+            }
382
+        },
383
+        "Options": {}
384
+    }
385
+]
386
+```
387
+
388
+The containers you launch into this network must reside on the same Docker host.
389
+Each container in the network can immediately communicate with other containers
390
+in the network. However, the network itself isolates the containers from external
391
+networks.
392
+
393
+![An isolated network](images/bridge_network.png)
394
+
395
+Within a user-defined bridge network, linking is not supported. You can
396
+expose and publish container ports on containers in this network. This is useful
397
+if you want to make a portion of the `bridge` network available to an outside
398
+network.
399
+
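For example, a sketch of publishing a port from a container on the `isolated_nw` network (the container name and host port are illustrative):

```bash
# Publish container port 80 on host port 8080 while keeping the container on isolated_nw.
$ docker run --network=isolated_nw -itd --name=web -p 8080:80 nginx
```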
400
+![Bridge network](images/network_access.png)
401
+
402
+A bridge network is useful in cases where you want to run a relatively small
403
+network on a single host. You can, however, create significantly larger networks
404
+by creating an `overlay` network.
405
+
406
+
407
+### An overlay network with Docker Engine swarm mode
408
+
409
+You can create an overlay network on a manager node running in swarm mode
410
+without an external key-value store. The swarm makes the overlay network
411
+available only to nodes in the swarm that require it for a service. When you
412
+create a service that uses the overlay network, the manager node automatically
413
+extends the overlay network to nodes that run service tasks.
414
+
415
+To learn more about running Docker Engine in swarm mode, refer to the
416
+[Swarm mode overview](../../swarm/index.md).
417
+
418
+The example below shows how to create a network and use it for a service from a manager node in the swarm:
419
+
420
+```bash
421
+# Create an overlay network `my-multi-host-network`.
422
+$ docker network create \
423
+  --driver overlay \
424
+  --subnet 10.0.9.0/24 \
425
+  my-multi-host-network
426
+
427
+400g6bwzd68jizzdx5pgyoe95
428
+
429
+# Create an nginx service and extend the my-multi-host-network to nodes where
430
+# the service's tasks run.
431
+$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
432
+
433
+716thylsndqma81j6kkkb5aus
434
+```
435
+
436
+Overlay networks for a swarm are not available to containers started with
437
+`docker run` that don't run as part of a swarm mode service. For more
438
+information refer to [Docker swarm mode overlay network security model](overlay-security-model.md).
439
+
440
+### An overlay network with an external key-value store
441
+
442
+If you are not using Docker Engine in swarm mode, the `overlay` network requires
443
+a valid key-value store service. Supported key-value stores include Consul,
444
+Etcd, and ZooKeeper (Distributed store). Before creating a network on this
445
+version of the Engine, you must install and configure your chosen key-value
446
+store service. The Docker hosts that you intend to network and the service must
447
+be able to communicate.
448
+
449
+>**Note:** Docker Engine running in swarm mode is not compatible with overlay
450
+networking that uses an external key-value store.
451
+
452
+![Key-value store](images/key_value.png)
453
+
454
+Each host in the network must run a Docker Engine instance. The easiest way to
455
+provision the hosts is with Docker Machine.
456
+
457
+![Engine on each host](images/engine_on_net.png)
458
+
459
+You should open the following ports between each of your hosts.
460
+
461
+| Protocol | Port | Description           |
462
+|----------|------|-----------------------|
463
+| udp      | 4789 | Data plane (VXLAN)    |
464
+| tcp/udp  | 7946 | Control plane         |
465
+
466
+Your key-value store service may require additional ports.
467
+Check your vendor's documentation and open any required ports.
468
+
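For example, on hosts that use `ufw` the rules might look like the following (a sketch only; use your own firewall tooling and restrict source addresses as appropriate):

```bash
$ sudo ufw allow 4789/udp    # VXLAN data plane
$ sudo ufw allow 7946/tcp    # control plane
$ sudo ufw allow 7946/udp    # control plane
```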
469
+Once you have several machines provisioned, you can use Docker Swarm to quickly
470
+form them into a swarm which includes a discovery service as well.
471
+
472
+To create an overlay network, you configure options on the `daemon` on each
473
+Docker Engine for use with the `overlay` network. There are three options to set:
474
+
475
+<table>
476
+    <thead>
477
+    <tr>
478
+        <th>Option</th>
479
+        <th>Description</th>
480
+    </tr>
481
+    </thead>
482
+    <tbody>
483
+    <tr>
484
+        <td><pre>--cluster-store=PROVIDER://URL</pre></td>
485
+        <td>Describes the location of the KV service.</td>
486
+    </tr>
487
+    <tr>
488
+        <td><pre>--cluster-advertise=HOST_IP|HOST_IFACE:PORT</pre></td>
489
+        <td>The IP address or interface of the HOST used for clustering.</td>
490
+    </tr>
491
+    <tr>
492
+        <td><pre>--cluster-store-opt=KEY-VALUE OPTIONS</pre></td>
493
+        <td>Options such as a TLS certificate or tuning of discovery timers.</td>
494
+    </tr>
495
+    </tbody>
496
+</table>
497
+
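For example, a daemon configured against a Consul store might be started with flags like these (a sketch only; the store address and interface name are placeholders for your environment):

```bash
# Placeholder Consul address and interface; substitute the values for your cluster.
$ dockerd \
  --cluster-store=consul://192.168.99.100:8500 \
  --cluster-advertise=eth1:2376
```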
498
+Create an `overlay` network on one of the machines in the Swarm.
499
+
500
+    $ docker network create --driver overlay my-multi-host-network
501
+
502
+This results in a single network spanning multiple hosts. An `overlay` network
503
+provides complete isolation for the containers.
504
+
505
+![An overlay network](images/overlay_network.png)
506
+
507
+Then, on each host, launch containers making sure to specify the network name.
508
+
509
+    $ docker run -itd --network=my-multi-host-network busybox
510
+
511
+Once connected, each container has access to all the containers in the network
512
+regardless of which Docker host the container was launched on.
513
+
514
+![Published port](images/overlay-network-final.png)
515
+
516
+If you would like to try this for yourself, see the [Getting started for
517
+overlay](get-started-overlay.md).
518
+
519
+### Custom network plugin
520
+
521
+If you like, you can write your own network driver plugin. A network
522
+driver plugin makes use of Docker's plugin infrastructure. In this
523
+infrastructure, a plugin is a process running on the same Docker host as the
524
+Docker `daemon`.
525
+
526
+Network plugins follow the same restrictions and installation rules as other
527
+plugins. All plugins make use of the plugin API. They have a lifecycle that
528
+encompasses installation, starting, stopping and activation.
529
+
530
+Once you have created and installed a custom network driver, you use it like the
531
+built-in network drivers. For example:
532
+
533
+    $ docker network create --driver weave mynet
534
+
535
+You can inspect it, add and remove containers from it, and so forth. Of course,
536
+different plugins may make use of different technologies or frameworks. Custom
537
+networks can include features not present in Docker's default networks. For more
538
+information on writing plugins, see [Extending Docker](../../extend/index.md) and
539
+[Writing a network driver plugin](../../extend/plugins_network.md).
540
+
541
+### Docker embedded DNS server
542
+
543
+The Docker daemon runs an embedded DNS server to provide automatic service discovery
544
+for containers connected to user-defined networks. Name resolution requests from
545
+the containers are handled first by the embedded DNS server. If the embedded DNS
546
+server is unable to resolve the request, it is forwarded to any external DNS
547
+servers configured for the container. To facilitate this, when the container is
548
+created, only the embedded DNS server reachable at `127.0.0.11` is listed in
549
+the container's `resolv.conf` file. More information on the embedded DNS server in
550
+user-defined networks can be found in
551
+[Embedded DNS server in user-defined networks](configure-dns.md).
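As a quick check (a sketch; `isolated_nw` is the user-defined network created earlier, and the output is trimmed):

```bash
$ docker run --rm --network=isolated_nw busybox cat /etc/resolv.conf

nameserver 127.0.0.11
```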
552
+
553
+## Links
554
+
555
+Before the Docker network feature, you could use the Docker link feature to
556
+allow containers to discover each other. With the introduction of Docker networks,
557
+containers can be discovered by name automatically. You can still create
558
+links, but they behave differently when used in the default `docker0` bridge network
559
+compared to user-defined networks. For more information, refer to
560
+[Legacy Links](default_network/dockerlinks.md) for the link feature in the default `bridge` network
561
+and [Linking containers in user-defined networks](work-with-networks.md#linking-containers-in-user-defined-networks) for link
562
+functionality in user-defined networks.
563
+
564
+## Related information
16 565
 
17
-- [Understand Docker container networks](dockernetworks.md)
18 566
 - [Work with network commands](work-with-networks.md)
19 567
 - [Get started with multi-host networking](get-started-overlay.md)
20
-
21
-If you are already familiar with Docker's default bridge network, `docker0` that network continues to be supported. It is created automatically in every installation. The default bridge network is also named `bridge`. To see a list of topics related to that network, read the articles listed in the [Docker default bridge network](default_network/index.md).
568
+- [Managing Data in Containers](../../tutorials/dockervolumes.md)
569
+- [Docker Machine overview](https://docs.docker.com/machine)
570
+- [Docker Swarm overview](https://docs.docker.com/swarm)
571
+- [Investigate the LibNetwork project](https://github.com/docker/libnetwork)
22 572
new file mode 100644
... ...
@@ -0,0 +1,22 @@
0
+<!--[metadata]>
1
+title = "Network configuration"
2
+description = "Docker networking feature is introduced"
3
+keywords = ["network, networking, bridge, docker,  documentation"]
4
+type="menu"
5
+[menu.main]
6
+identifier="smn_networking"
7
+parent= "engine_guide"
8
+weight=7
9
+<![end-metadata]-->
10
+
11
+# Docker networks feature overview
12
+
13
+This section explains how to use the Docker networks feature. This feature allows users to define their own networks and connect containers to them. Using this feature, you can create a network on a single host or a network that spans multiple hosts.
14
+
15
+- [Understand Docker container networks](index.md)
16
+- [Work with network commands](work-with-networks.md)
17
+- [Get started with multi-host networking](get-started-overlay.md)
18
+
19
+If you are already familiar with Docker's default bridge network, `docker0`, note that this network continues to be supported. It is created automatically in every installation. The default bridge network is also named `bridge`. To see a list of topics related to that network, read the articles listed in the [Docker default bridge network](default_network/index.md).
0 20
new file mode 100644
... ...
@@ -0,0 +1,66 @@
0
+<!--[metadata]>
1
+title = "Swarm mode overlay network security model"
2
+description = "Docker swarm mode overlay network security model"
3
+keywords = ["network, docker, documentation, user guide, multihost, swarm mode", "overlay"]
4
+[menu.main]
5
+parent = "smn_networking"
6
+weight=-2
7
+<![end-metadata]-->
8
+
9
+# Docker swarm mode overlay network security model
10
+
11
+Overlay networking for Docker Engine swarm mode comes secure out of the box. The
12
+swarm nodes exchange overlay network information using a gossip protocol. By
13
+default, the nodes encrypt and authenticate information they exchange via gossip
14
+using the [AES algorithm](https://en.wikipedia.org/wiki/Galois/Counter_Mode) in
15
+GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data
16
+every 12 hours.
17
+
18
+You can also encrypt data exchanged between containers on different nodes on the
19
+overlay network. To enable encryption, when you create an overlay network, pass
20
+the `--opt encrypted` flag:
21
+
22
+```bash
23
+$ docker network create --opt encrypted --driver overlay my-multi-host-network
24
+
25
+dt0zvqn0saezzinc8a5g4worx
26
+```
27
+
28
+When you enable overlay encryption, Docker creates IPsec tunnels between all the
29
+nodes where tasks are scheduled for services attached to the overlay network.
30
+These tunnels also use the AES algorithm in GCM mode and manager nodes
31
+automatically rotate the keys every 12 hours.
32
+
33
+## Swarm mode overlay networks and unmanaged containers
34
+
35
+Because the overlay networks for swarm mode use encryption keys from the manager
36
+nodes to encrypt the gossip communications, only containers running as tasks in
37
+the swarm have access to the keys. Consequently, containers started outside of
38
+swarm mode using `docker run` (unmanaged containers) cannot attach to the
39
+overlay network.
40
+
41
+For example:
42
+
43
+```bash
44
+$ docker run --network my-multi-host-network nginx
45
+
46
+docker: Error response from daemon: swarm-scoped network
47
+(my-multi-host-network) is not compatible with `docker create` or `docker
48
+run`. This network can only be used by a docker service.
49
+```
50
+
51
+To work around this situation, migrate the unmanaged containers to managed
52
+services. For instance:
53
+
54
+```bash
55
+$ docker service create --network my-multi-host-network my-image
56
+```
57
+
58
+Because [swarm mode](../../swarm/index.md) is an optional feature, the Docker
59
+Engine preserves backward compatibility. You can continue to rely on a
60
+third-party key-value store to support overlay networking if you wish.
61
+However, switching to swarm mode is strongly encouraged. In addition to the
62
+security benefits described in this article, swarm mode enables you to leverage
63
+the substantially greater scalability provided by the new services API.
... ...
@@ -23,7 +23,7 @@ available through the Docker Engine CLI. These commands are:
23 23
 * `docker network inspect`
24 24
 
25 25
 While not required, it is a good idea to read [Understanding Docker
26
-network](dockernetworks.md) before trying the examples in this section. The
26
+network](index.md) before trying the examples in this section. The
27 27
 examples here rely on a `bridge` network so that you can try them
28 28
 immediately.  If you would prefer to experiment with an `overlay` network see
29 29
 the [Getting started with multi-host networks](get-started-overlay.md) instead.