Networking API and UX documentation

More doc updates will follow

Signed-off-by: Madhu Venugopal <madhu@docker.com>

Madhu Venugopal authored on 2015/09/29 10:57:03
Showing 22 changed files
... ...
@@ -2,7 +2,7 @@
2 2
 +++
3 3
 title = "Network configuration"
4 4
 description = "Docker networking"
5
-keywords = ["network, networking, bridge, docker,  documentation"]
5
+keywords = ["network, networking, bridge, overlay, cluster, multihost, docker, documentation"]
6 6
 [menu.main]
7 7
 parent= "smn_administrate"
8 8
 +++
... ...
@@ -10,6 +10,9 @@ parent= "smn_administrate"
10 10
 
11 11
 # Network configuration
12 12
 
13
+> **Note:**
14
+> This document is outdated and needs a major overhaul.
15
+
13 16
 ## Summary
14 17
 
15 18
 When Docker starts, it creates a virtual interface named `docker0` on
... ...
@@ -15,6 +15,7 @@ weight = 6
15 15
 
16 16
 Currently, you can extend Docker by adding a plugin. This section contains the following topics:
17 17
 
18
-* [Understand Docker plugins](/extend/plugins)
19
-* [Write a volume plugin](/extend/plugins_volume)
20
-* [Docker plugin API](/extend/plugin_api)
18
+* [Understand Docker plugins](/extend/plugins.md)
19
+* [Write a volume plugin](/extend/plugins_volume.md)
20
+* [Write a network plugin](/extend/plugins_network.md)
21
+* [Docker plugin API](/extend/plugin_api.md)
... ...
@@ -17,8 +17,10 @@ plugins.
17 17
 ## Types of plugins
18 18
 
19 19
 Plugins extend Docker's functionality.  They come in specific types.  For
20
-example, a [volume plugin](/extend/plugins_volume) might enable Docker
21
-volumes to persist across multiple Docker hosts.
20
+example, a [volume plugin](/extend/plugins_volume.md) might enable Docker
21
+volumes to persist across multiple Docker hosts, and a 
22
+[network plugin](/extend/plugins_network.md) might provide network plumbing
23
+using your favorite networking technology, such as VXLAN overlay, ipvlan, or EVPN.
22 24
 
23 25
 Currently Docker supports volume and network driver plugins. In the future it
24 26
 will support additional plugin types.
25 27
new file mode 100644
... ...
@@ -0,0 +1,40 @@
0
+# Docker network driver plugins
1
+
2
+Docker supports network driver plugins via 
3
+[LibNetwork](https://github.com/docker/libnetwork). Network driver plugins are 
4
+implemented as "remote drivers" for LibNetwork, which shares plugin 
5
+infrastructure with Docker. In effect this means that network driver plugins 
6
+are activated in the same way as other plugins, and use the same kind of 
7
+protocol.
8
+
9
+## Using network driver plugins
10
+
11
+The means of installing and running a network driver plugin will depend on the
12
+particular plugin.
13
+
14
+Once running, however, network driver plugins are used just like the built-in
15
+network drivers: by being mentioned as a driver in network-oriented Docker
16
+commands. For example,
17
+
18
+    docker network create -d weave mynet
19
+
20
+Some network driver plugins are listed in [plugins.md](/docs/extend/plugins.md).
21
+
22
+The network thus created is owned by the plugin, so subsequent commands
23
+referring to that network will also be run through the plugin, such as:
24
+
25
+    docker run --net=mynet busybox top
26
+
27
+## Network driver plugin protocol
28
+
29
+The network driver protocol, additional to the plugin activation call, is
30
+documented as part of LibNetwork:
31
+[https://github.com/docker/libnetwork/blob/master/docs/remote.md](https://github.com/docker/libnetwork/blob/master/docs/remote.md).
32
+
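+As a rough sketch of the activation step only (the socket path and the plugin
+name `myplugin` below are assumptions for illustration; the authoritative
+request and response payloads are defined in the remote driver specification
+linked above): the daemon posts to the plugin's `/Plugin.Activate` endpoint, and
+a network driver plugin answers that it implements the `NetworkDriver` interface.
+
+    # illustrative only -- consult remote.md for the authoritative protocol
+    $ curl -s --unix-socket /run/docker/plugins/myplugin.sock \
+        -X POST http://localhost/Plugin.Activate
+    {"Implements": ["NetworkDriver"]}
+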
33
+# Related GitHub PRs and issues
34
+
35
+Please record your feedback in the following issue, on the usual
36
+Google Groups, or in the IRC channel #docker-network.
37
+
38
+ - [#14083](https://github.com/docker/docker/issues/14083) Feedback on
39
+   experimental networking features
... ...
@@ -2484,6 +2484,218 @@ Status Codes
2484 2484
 -   **409** - volume is in use and cannot be removed
2485 2485
 -   **500** - server error
2486 2486
 
2487
+## 2.5 Networks
2488
+
2489
+### List networks
2490
+
2491
+`GET /networks`
2492
+
2493
+**Example request**:
2494
+
2495
+  GET /networks HTTP/1.1
2496
+
2497
+**Example response**:
2498
+
2499
+  HTTP/1.1 200 OK
2500
+  Content-Type: application/json
2501
+
2502
+```
2503
+  [
2504
+    {
2505
+      "name": "bridge",
2506
+      "id": "f995e41e471c833266786a64df584fbe4dc654ac99f63a4ee7495842aa093fc4",
2507
+      "driver": "bridge"
2508
+    },
2509
+    {
2510
+      "name": "none",
2511
+      "id": "21e34df9b29c74ae45ba312f8e9f83c02433c9a877cfebebcf57be78f69b77c8",
2512
+      "driver": "null"
2513
+    },
2514
+    {
2515
+      "name": "host",
2516
+      "id": "3f43a0873f00310a71cd6a71e2e60c113cf17d1812be2ec22fd519fbac68ec91",
2517
+      "driver": "host"
2518
+    }
2519
+  ]
2520
+```
2521
+
2522
+
2523
+
2524
+Query Parameters:
2525
+
2526
+- **filter** - JSON encoded value of the filters (a `map[string][]string`) to process on the networks list. Available filters: `name=[network-names]`, `id=[network-ids]`. See the sketch below.
2527
+
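+As an illustrative sketch, the filter can be supplied from the shell like this
+(assuming the daemon listens on the default Unix socket):
+
+```
+  $ curl -sG --unix-socket /var/run/docker.sock \
+      --data-urlencode 'filter={"name":["bridge"]}' \
+      http://localhost/networks
+```
+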
2528
+Status Codes:
2529
+
2530
+-   **200** - no error
2531
+-   **500** - server error
2532
+
2533
+### Inspect network
2534
+
2535
+`GET /networks/<network-id>`
2536
+
2537
+**Example request**:
2538
+
2539
+  GET /networks/f995e41e471c833266786a64df584fbe4dc654ac99f63a4ee7495842aa093fc4 HTTP/1.1
2540
+
2541
+**Example response**:
2542
+
2543
+  HTTP/1.1 200 OK
2544
+  Content-Type: application/json
2545
+
2546
+```
2547
+  {
2548
+    "name": "bridge",
2549
+    "id": "f995e41e471c833266786a64df584fbe4dc654ac99f63a4ee7495842aa093fc4",
2550
+    "driver": "bridge",
2551
+    "containers": {
2552
+      "931d29e96e63022a3691f55ca18b28600239acf53878451975f77054b05ba559": {
2553
+        "endpoint": "aa79321e2899e6d72fcd46e6a4ad7f81ab9a19c3b06e384ef4ce51fea35827f9",
2554
+        "mac_address": "02:42:ac:11:00:04",
2555
+        "ipv4_address": "172.17.0.4/16"
2556
+      },
2557
+      "961249b4ae6c764b11eed923e8463c102689111fffd933627b2e7e359c7d0f7c": {
2558
+        "endpoint": "4f62c5aea6b9a70512210be7db976bd4ec2cdba47125e4fe514d18c81b1624b1",
2559
+        "mac_address": "02:42:ac:11:00:02",
2560
+        "ipv4_address": "172.17.0.2/16"
2561
+      },
2562
+      "9f6e0fec4449f42a173ed85be96dc2253b6719edd850d8169bc31bdc45db675c": {
2563
+        "endpoint": "352b512a5bccdfc77d16c2c04d04408e718f879a16f9ce3913a4733139e4f98d",
2564
+        "mac_address": "02:42:ac:11:00:03",
2565
+        "ipv4_address": "172.17.0.3/16"
2566
+      }
2567
+    }
2568
+  }
2569
+```
2570
+
2571
+Status Codes:
2572
+
2573
+-   **200** - no error
2574
+-   **404** - network not found
2575
+
2576
+### Create a network
2577
+
2578
+`POST /networks/create`
2579
+
2580
+Create a network
2581
+
2582
+**Example request**:
2583
+
2584
+  POST /networks/create HTTP/1.1
2585
+  Content-Type: application/json
2586
+
2587
+```
2588
+  {
2589
+    "name":"isolated_nw",
2590
+    "driver":"bridge"
2591
+  }
2592
+```
2593
+
2594
+**Example response**:
2595
+
2596
+  HTTP/1.1 201 Created
2597
+  Content-Type: application/json
2598
+
2599
+```
2600
+  {
2601
+    "id": "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30",
2602
+    "warning": ""
2603
+  }
2604
+```
2605
+
2606
+Status Codes:
2607
+
2608
+- **201** - no error
2609
+- **404** - driver not found
2610
+- **500** - server error
2611
+
2612
+JSON Parameters:
2613
+
2614
+- **name** - The new network's name. This is a mandatory field.
2615
+- **driver** - Name of the network driver to use. Defaults to the `bridge` driver.
2616
+- **options** - Network-specific options to be used by the drivers.
2617
+- **check_duplicate** - Requests the daemon to check for networks with the same name.
2618
+
2619
+### Connect a container to a network
2620
+
2621
+`POST /networks/(id)/connect`
2622
+
2623
+Connects a container to a network
2624
+
2625
+**Example request**:
2626
+
2627
+  POST /networks/22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30/connect HTTP/1.1
2628
+  Content-Type: application/json
2629
+
2630
+```
2631
+  {
2632
+    "container":"3613f73ba0e4"
2633
+  }
2634
+```
2635
+
2636
+**Example response**:
2637
+
2638
+  HTTP/1.1 200 OK
2639
+
2640
+Status Codes:
2641
+
2642
+- **200** - no error
2643
+- **404** - network or container not found
2644
+
2645
+JSON Parameters:
2646
+
2647
+- **container** - container-id/name to be connected to the network
2648
+
2649
+### Disconnect a container from a network
2650
+
2651
+`POST /networks/(id)/disconnect`
2652
+
2653
+Disconnects a container from a network
2654
+
2655
+**Example request**:
2656
+
2657
+  POST /networks/22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30/disconnect HTTP/1.1
2658
+  Content-Type: application/json
2659
+
2660
+```
2661
+  {
2662
+    "container":"3613f73ba0e4"
2663
+  }
2664
+```
2665
+
2666
+**Example response**:
2667
+
2668
+  HTTP/1.1 200 OK
2669
+
2670
+Status Codes:
2671
+
2672
+- **200** - no error
2673
+- **404** - network or container not found
2674
+
2675
+JSON Parameters:
2676
+
2677
+- **container** - container-id/name to be disconnected from a network
2678
+
2679
+### Remove a network
2680
+
2681
+`DELETE /networks/(id)`
2682
+
2683
+Instruct the driver to remove the network (`id`).
2684
+
2685
+**Example request**:
2686
+
2687
+  DELETE /networks/22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30 HTTP/1.1
2688
+
2689
+**Example response**:
2690
+
2691
+  HTTP/1.1 204 No Content
2692
+
2693
+Status Codes
2694
+
2695
+-   **204** - no error
2696
+-   **404** - no such network
2697
+-   **500** - server error
2698
+
2487 2699
 # 3. Going further
2488 2700
 
2489 2701
 ## 3.1 Inside `docker run`
2490 2702
new file mode 100644
... ...
@@ -0,0 +1,30 @@
0
+<!--[metadata]>
1
+title = "network connect"
2
+description = "The network connect command description and usage"
3
+keywords = ["network, connect"]
4
+[menu.main]
5
+parent = "smn_cli"
6
+<![end-metadata]-->
7
+
8
+# network connect
9
+
10
+    Usage:  docker network connect [OPTIONS] NETWORK CONTAINER
11
+
12
+    Connects a container to a network
13
+
14
+      --help=false       Print usage
15
+
16
+Connects a running container to a network. This enables instant communication with other containers belonging to the same network. 
17
+
18
+```
19
+  $ docker network create -d overlay multi-host-network
20
+  $ docker run -d --name=container1 busybox top
21
+  $ docker network connect multi-host-network container1
22
+```
23
+
24
+The container will be connected to the network that is created and managed by the driver (the multi-host overlay driver in the above example) or by an external network plugin.
25
+
26
+Multiple containers can be connected to the same network, and containers on the same network can communicate with each other. If the driver or plugin supports multi-host connectivity, containers connected to the same multi-host network will be able to communicate seamlessly.
27
+
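+One way to verify the connection is to inspect the container's networks,
+borrowing the `--format` query used elsewhere in these docs. The output below is
+illustrative and assumes `container1` was started on the default bridge network,
+as in the example above:
+
+```
+  $ docker inspect --format='{{.NetworkSettings.Networks}}' container1
+  [bridge multi-host-network]
+```
+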
0 28
new file mode 100644
... ...
@@ -0,0 +1,32 @@
0
+<!--[metadata]>
1
+title = "network create"
2
+description = "The network create command description and usage"
3
+keywords = ["network, create"]
4
+[menu.main]
5
+parent = "smn_cli"
6
+<![end-metadata]-->
7
+
8
+# network create
9
+
10
+    Usage:  docker network create [OPTIONS] NETWORK-NAME
11
+
12
+    Creates a new network with a name specified by the user
13
+
14
+      -d, --driver=      Driver to manage the Network
15
+      --help=false       Print usage
16
+
17
+Creates a new network that containers can connect to. If the driver supports multi-host networking, the created network will be made available across all the hosts in the cluster. The daemon will do its best to identify network name conflicts, but it is the user's responsibility to make sure the network name is unique across the cluster. You create a network and then configure the container to use it, for example:
18
+
19
+```
20
+  $ docker network create -d overlay multi-host-network
21
+  $ docker run -itd --net=multi-host-network busybox
22
+```
23
+
24
+The container will be connected to the network that is created and managed by the driver (the multi-host overlay driver in the above example) or by an external network plugin.
25
+
26
+Multiple containers can be connected to the same network, and containers on the same network can communicate with each other. If the driver or plugin supports multi-host connectivity, containers connected to the same multi-host network will be able to communicate seamlessly.
27
+
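+To confirm that the network was created, list the networks. The output below is
+illustrative; the network IDs on your host will differ:
+
+```
+  $ docker network ls
+  NETWORK ID          NAME                 DRIVER
+  7fca4eb8c647        bridge               bridge
+  9f904ee27bf5        none                 null
+  cf03ee007fb4        host                 host
+  8b05faa32aeb        multi-host-network   overlay
+```
+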
28
+*Note*: The UX needs enhancement to accept network options to be passed to the drivers.
29
+
0 30
new file mode 100644
... ...
@@ -0,0 +1,27 @@
0
+<!--[metadata]>
1
+title = "network disconnect"
2
+description = "The network disconnect command description and usage"
3
+keywords = ["network, disconnect"]
4
+[menu.main]
5
+parent = "smn_cli"
6
+<![end-metadata]-->
7
+
8
+# network disconnect
9
+
10
+    Usage:  docker network disconnect [OPTIONS] NETWORK CONTAINER
11
+
12
+    Disconnects a container from a network
13
+
14
+      --help=false       Print usage
15
+
16
+Disconnects a running container from a network.
17
+
18
+```
19
+  $ docker network create -d overlay multi-host-network
20
+  $ docker run -d --net=multi-host-network --name=container1 busybox top
21
+  $ docker network disconnect multi-host-network container1
22
+```
23
+
24
+The container will be disconnected from the network.
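+
+To verify the disconnection, inspect the container's networks again using the
+`--format` query shown elsewhere in these docs; `multi-host-network` should no
+longer appear in the output:
+
+```
+  $ docker inspect --format='{{.NetworkSettings.Networks}}' container1
+```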
0 25
new file mode 100644
... ...
@@ -0,0 +1,49 @@
0
+<!--[metadata]>
1
+title = "network inspect"
2
+description = "The network inspect command description and usage"
3
+keywords = ["network, inspect"]
4
+[menu.main]
5
+parent = "smn_cli"
6
+<![end-metadata]-->
7
+
8
+# network inspect
9
+
10
+    Usage:  docker network inspect [OPTIONS] NETWORK
11
+
12
+    Displays detailed information on a network
13
+
14
+      --help=false       Print usage
15
+
16
+Returns information about a network. By default, this command renders all results
17
+in a JSON object. 
18
+
19
+Example output:
20
+
21
+```
22
+$ sudo docker run -itd --name=container1 busybox
23
+f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27
24
+
25
+$ sudo docker run -itd --name=container2 busybox
26
+bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727
27
+
28
+$ sudo docker network inspect bridge
29
+{
30
+    "name": "bridge",
31
+    "id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
32
+    "driver": "bridge",
33
+    "containers": {
34
+        "bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
35
+            "endpoint": "e0ac95934f803d7e36384a2029b8d1eeb56cb88727aa2e8b7edfeebaa6dfd758",
36
+            "mac_address": "02:42:ac:11:00:03",
37
+            "ipv4_address": "172.17.0.3/16"
38
+        },
39
+        "f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27": {
40
+            "endpoint": "31de280881d2a774345bbfb1594159ade4ae4024ebfb1320cb74a30225f6a8ae",
41
+            "mac_address": "02:42:ac:11:00:02",
42
+            "ipv4_address": "172.17.0.2/16"
43
+        }
44
+    }
45
+}
46
+```
0 47
new file mode 100644
... ...
@@ -0,0 +1,32 @@
0
+<!--[metadata]>
1
+title = "network ls"
2
+description = "The network ls command description and usage"
3
+keywords = ["network, list"]
4
+[menu.main]
5
+parent = "smn_cli"
6
+<![end-metadata]-->
7
+
8
+# docker network ls
9
+
10
+    Usage:  docker network ls [OPTIONS]
11
+
12
+    Lists all the networks created by the user
13
+      --help=false          Print usage
14
+      -l, --latest=false    Show the latest network created
15
+      -n=-1                 Show n last created networks
16
+      --no-trunc=false      Do not truncate the output
17
+      -q, --quiet=false     Only display numeric IDs
18
+
19
+Lists all the networks Docker knows about. This includes the networks that span across multiple hosts in a cluster.
20
+
21
+Example output:
22
+
23
+```
24
+    $ sudo docker network ls
25
+    NETWORK ID          NAME                DRIVER
26
+    7fca4eb8c647        bridge              bridge
27
+    9f904ee27bf5        none                null
28
+    cf03ee007fb4        host                host
29
+```
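+
+The flags can be combined as needed. For example, `-q` prints only the network
+IDs, which is handy for scripting (the output reuses the illustrative IDs shown
+above):
+
+```
+    $ sudo docker network ls -q
+    7fca4eb8c647
+    9f904ee27bf5
+    cf03ee007fb4
+```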
0 30
new file mode 100644
... ...
@@ -0,0 +1,23 @@
0
+<!--[metadata]>
1
+title = "network rm"
2
+description = "the network rm command description and usage"
3
+keywords = ["network, rm"]
4
+[menu.main]
5
+parent = "smn_cli"
6
+<![end-metadata]-->
7
+
8
+# network rm
9
+
10
+    Usage:  docker network rm [OPTIONS] NETWORK
11
+
12
+    Deletes a network
13
+
14
+      --help=false       Print usage
15
+
16
+Removes a network. You cannot remove a network that is in use by one or more containers.
17
+
18
+```
19
+  $ docker network rm my-network
20
+```
... ...
@@ -132,6 +132,12 @@ namespaces, cgroups, capabilities, and filesystem access controls. It allows
132 132
 you to manage the lifecycle of the container performing additional operations
133 133
 after the container is created.
134 134
 
135
+## libnetwork
136
+
137
+libnetwork provides a native Go implementation for creating and managing container
138
+network namespaces and other network resources. It manages the networking lifecycle 
139
+of the container, performing additional operations after the container is created.
140
+
135 141
 ## link
136 142
 
137 143
 links provide an interface to connect Docker containers running on the same host
... ...
@@ -149,7 +155,12 @@ installs Docker on them, then configures the Docker client to talk to them.
149 149
 
150 150
 *Also known as : docker-machine*
151 151
 
152
-## overlay
152
+## overlay network driver
153
+
154
+The overlay network driver provides out-of-the-box multi-host network connectivity
155
+for Docker containers in a cluster.
156
+
157
+## overlay storage driver
153 158
 
154 159
 OverlayFS is a [filesystem](#filesystem) service for Linux which implements a
155 160
 [union mount](http://en.wikipedia.org/wiki/Union_mount) for other file systems.
... ...
@@ -245,11 +245,12 @@ of the containers.
245 245
 ## Network settings
246 246
 
247 247
     --dns=[]         : Set custom dns servers for the container
248
-    --net="bridge"   : Set the Network mode for the container
248
+    --net="bridge"   : Connects a container to a network
249 249
                         'bridge': creates a new network stack for the container on the docker bridge
250 250
                         'none': no networking for this container
251 251
                         'container:<name|id>': reuses another container network stack
252 252
                         'host': use the host network stack inside the container
253
+                        'NETWORK': connects the container to a user-created network (created using the `docker network create` command)
253 254
     --add-host=""    : Add a line to /etc/hosts (host:IP)
254 255
     --mac-address="" : Sets the container's Ethernet device's MAC address
255 256
 
... ...
@@ -269,12 +270,12 @@ By default, the MAC address is generated using the IP address allocated to the
269 269
 container. You can set the container's MAC address explicitly by providing a
270 270
 MAC address via the `--mac-address` parameter (format:`12:34:56:78:9a:bc`).
271 271
 
272
-Supported networking modes are:
272
+Supported networks:
273 273
 
274 274
 <table>
275 275
   <thead>
276 276
     <tr>
277
-      <th class="no-wrap">Mode</th>
277
+      <th class="no-wrap">Network</th>
278 278
       <th>Description</th>
279 279
     </tr>
280 280
   </thead>
... ...
@@ -304,19 +305,25 @@ Supported networking modes are:
304 304
         its *name* or *id*.
305 305
       </td>
306 306
     </tr>
307
+    <tr>
308
+      <td class="no-wrap"><strong>NETWORK</strong></td>
309
+      <td>
310
+        Connects the container to a user-created network (created using the `docker network create` command)
311
+      </td>
312
+    </tr>
307 313
   </tbody>
308 314
 </table>
309 315
 
310
-#### Mode: none
316
+#### Network: none
311 317
 
312
-With the networking mode set to `none` a container will not have a
318
+With the network set to `none`, a container will not have
313 319
 access to any external routes.  The container will still have a
314 320
 `loopback` interface enabled in the container but it does not have any
315 321
 routes to external traffic.
316 322
 
317
-#### Mode: bridge
323
+#### Network: bridge
318 324
 
319
-With the networking mode set to `bridge` a container will use docker's
325
+With the network set to `bridge`, a container will use Docker's
320 326
 default networking setup.  A bridge is setup on the host, commonly named
321 327
 `docker0`, and a pair of `veth` interfaces will be created for the
322 328
 container.  One side of the `veth` pair will remain on the host attached
... ...
@@ -325,9 +332,9 @@ container's namespaces in addition to the `loopback` interface.  An IP
325 325
 address will be allocated for containers on the bridge's network and
326 326
 traffic will be routed though this bridge to the container.
327 327
 
328
-#### Mode: host
328
+#### Network: host
329 329
 
330
-With the networking mode set to `host` a container will share the host's
330
+With the network set to `host`, a container will share the host's
331 331
 network stack and all interfaces from the host will be available to the
332 332
 container.  The container's hostname will match the hostname on the host
333 333
 system.  Note that `--add-host` `--hostname`  `--dns` `--dns-search`
... ...
@@ -343,9 +350,9 @@ or a High Performance Web Server.
343 343
 > **Note**: `--net="host"` gives the container full access to local system
344 344
 > services such as D-bus and is therefore considered insecure.
345 345
 
346
-#### Mode: container
346
+#### Network: container
347 347
 
348
-With the networking mode set to `container` a container will share the
348
+With the network set to `container`, a container will share the
349 349
 network stack of another container.  The other container's name must be
350 350
 provided in the format of `--net container:<name|id>`. Note that `--add-host`
351 351
 `--hostname` `--dns` `--dns-search` `--dns-opt` and `--mac-address` are
... ...
@@ -360,6 +367,21 @@ running the `redis-cli` command and connecting to the Redis server over the
360 360
     $ # use the redis container's network stack to access localhost
361 361
     $ docker run --rm -it --net container:redis example/redis-cli -h 127.0.0.1
362 362
 
363
+#### Network: User-Created NETWORK
364
+
365
+In addition to all the above special networks, users can create a network using
366
+their favorite network driver or external plugin. The driver used to create the
367
+network takes care of all the network plumbing requirements for the container
368
+connected to that network.
369
+
370
+For example, create a network using the built-in overlay network driver and run 
371
+a container on the created network:
372
+
373
+```
374
+$ docker network create -d overlay multi-host-network
375
+$ docker run --net=multi-host-network -itd --name=container3 busybox
376
+```
377
+
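+Containers attached to the same user-created network can reach each other by
+name. As a quick, illustrative check (assuming the container names above and an
+image that provides `ping`):
+
+```
+$ docker run --net=multi-host-network -itd --name=container4 busybox
+$ docker exec container4 ping -c 3 container3
+```
+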
363 378
 ### Managing /etc/hosts
364 379
 
365 380
 Your container will have lines in `/etc/hosts` which define the hostname of the
... ...
@@ -347,7 +347,7 @@ allowing linked communication to continue.
347 347
 # Next step
348 348
 
349 349
 Now that you know how to link Docker containers together, the next step is
350
-learning how to manage data, volumes and mounts inside your containers.
350
+learning how to take complete control over Docker networking.
351 351
 
352
-Go to [Managing Data in Containers](/userguide/dockervolumes).
352
+Go to [Docker Networking](/userguide/dockernetworks.md).
353 353
 
354 354
new file mode 100644
... ...
@@ -0,0 +1,510 @@
0
+<!--[metadata]>
1
+title = "Docker container networking"
2
+description = "How do we connect docker containers within and across hosts ?"
3
+keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
4
+[menu.main]
5
+parent = "smn_containers"
6
+weight = 3
7
+<![end-metadata]-->
8
+
9
+# Docker container networking
10
+
11
+So far we've been introduced to some [basic Docker
12
+concepts](/userguide/usingdocker/), seen how to work with [Docker
13
+images](/userguide/dockerimages/) as well as learned about basic [networking
14
+and links between containers](/userguide/dockerlinks/). In this section
15
+we're going to discuss how you can take control over more advanced 
16
+container networking.
17
+
18
+This section makes use of `docker network` commands and outputs to explain the
19
+advanced networking functionality supported by Docker.
20
+
21
+# Default Networks
22
+
23
+By default, Docker creates three networks using three different network drivers:
24
+
25
+```
26
+$ sudo docker network ls
27
+NETWORK ID          NAME                DRIVER
28
+7fca4eb8c647        bridge              bridge
29
+9f904ee27bf5        none                null
30
+cf03ee007fb4        host                host
31
+```
32
+
33
+`docker network inspect` gives more information about a network:
34
+
35
+```
36
+$ sudo docker network inspect bridge
37
+{
38
+    "name": "bridge",
39
+    "id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
40
+    "driver": "bridge"
41
+}
42
+```
43
+
44
+By default, containers are launched on the `bridge` network:
45
+
46
+```
47
+$ sudo docker run -itd --name=container1 busybox
48
+f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27
49
+
50
+$ sudo docker run -itd --name=container2 busybox
51
+bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727
52
+```
53
+
54
+```
55
+$ sudo docker network inspect bridge
56
+{
57
+    "name": "bridge",
58
+    "id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
59
+    "driver": "bridge",
60
+    "containers": {
61
+        "bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
62
+            "endpoint": "e0ac95934f803d7e36384a2029b8d1eeb56cb88727aa2e8b7edfeebaa6dfd758",
63
+            "mac_address": "02:42:ac:11:00:03",
64
+            "ipv4_address": "172.17.0.3/16"
65
+        },
66
+        "f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27": {
67
+            "endpoint": "31de280881d2a774345bbfb1594159ade4ae4024ebfb1320cb74a30225f6a8ae",
68
+            "mac_address": "02:42:ac:11:00:02",
69
+            "ipv4_address": "172.17.0.2/16"
70
+        }
71
+    }
72
+}
73
+```
74
+The `docker network inspect` command above shows all the connected containers and their network resources on a given network.
75
+
76
+Containers in a network should be able to communicate with each other using container names:
77
+
78
+```
79
+$ sudo docker attach container1
80
+
81
+/ # ifconfig
82
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
83
+          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
84
+          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
85
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
86
+          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
87
+          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
88
+          collisions:0 txqueuelen:0
89
+          RX bytes:1382 (1.3 KiB)  TX bytes:258 (258.0 B)
90
+
91
+lo        Link encap:Local Loopback
92
+          inet addr:127.0.0.1  Mask:255.0.0.0
93
+          inet6 addr: ::1/128 Scope:Host
94
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
95
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
96
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
97
+          collisions:0 txqueuelen:0
98
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
99
+
100
+/ # ping container2
101
+PING container2 (172.17.0.3): 56 data bytes
102
+64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.125 ms
103
+64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.130 ms
104
+64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.172 ms
105
+^C
106
+--- container2 ping statistics ---
107
+3 packets transmitted, 3 packets received, 0% packet loss
108
+round-trip min/avg/max = 0.125/0.142/0.172 ms
109
+
110
+/ # cat /etc/hosts
111
+172.17.0.2      f2870c98fd50
112
+127.0.0.1       localhost
113
+::1     localhost ip6-localhost ip6-loopback
114
+fe00::0 ip6-localnet
115
+ff00::0 ip6-mcastprefix
116
+ff02::1 ip6-allnodes
117
+ff02::2 ip6-allrouters
118
+172.17.0.2      container1
119
+172.17.0.2      container1.bridge
120
+172.17.0.3      container2
121
+172.17.0.3      container2.bridge
122
+```
123
+
124
+
125
+```
126
+$ sudo docker attach container2
127
+
128
+/ # ifconfig
129
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
130
+          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
131
+          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
132
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
133
+          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
134
+          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
135
+          collisions:0 txqueuelen:0
136
+          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)
137
+
138
+lo        Link encap:Local Loopback
139
+          inet addr:127.0.0.1  Mask:255.0.0.0
140
+          inet6 addr: ::1/128 Scope:Host
141
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
142
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
143
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
144
+          collisions:0 txqueuelen:0
145
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
146
+
147
+/ # ping container1
148
+PING container1 (172.17.0.2): 56 data bytes
149
+64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.277 ms
150
+64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.179 ms
151
+64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.130 ms
152
+64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.113 ms
153
+^C
154
+--- container1 ping statistics ---
155
+4 packets transmitted, 4 packets received, 0% packet loss
156
+round-trip min/avg/max = 0.113/0.174/0.277 ms
157
+/ # cat /etc/hosts
158
+172.17.0.3      bda12f892278
159
+127.0.0.1       localhost
160
+::1     localhost ip6-localhost ip6-loopback
161
+fe00::0 ip6-localnet
162
+ff00::0 ip6-mcastprefix
163
+ff02::1 ip6-allnodes
164
+ff02::2 ip6-allrouters
165
+172.17.0.2      container1
166
+172.17.0.2      container1.bridge
167
+172.17.0.3      container2
168
+172.17.0.3      container2.bridge
169
+/ #
170
+
171
+```
172
+
173
+# User defined Networks
174
+
175
+In addition to the built-in networks, users can create networks using built-in drivers
176
+(such as the bridge or overlay driver) or external plugins supplied by the community.
177
+Networks, by definition, provide complete isolation for the containers.
178
+
179
+```
180
+$ docker network create -d bridge isolated_nw
181
+8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7
182
+
183
+$ docker network inspect isolated_nw
184
+{
185
+    "name": "isolated_nw",
186
+    "id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
187
+    "driver": "bridge"
188
+}
189
+
190
+$ docker network ls
191
+NETWORK ID          NAME                DRIVER
192
+9f904ee27bf5        none                null
193
+cf03ee007fb4        host                host
194
+7fca4eb8c647        bridge              bridge
195
+8b05faa32aeb        isolated_nw         bridge
196
+
197
+```
198
+
199
+A container can be launched on a user-defined network using the `--net=<NETWORK>` option 
200
+of the `docker run` command:
201
+
202
+```
203
+$ docker run --net=isolated_nw -itd --name=container3 busybox
204
+777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843
205
+
206
+$ docker network inspect isolated_nw
207
+{
208
+    "name": "isolated_nw",
209
+    "id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
210
+    "driver": "bridge",
211
+    "containers": {
212
+        "777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843": {
213
+            "endpoint": "c7f22f8da07fb8ecc687d08377cfcdb80b4dd8624c2a8208b1a4268985e38683",
214
+            "mac_address": "02:42:ac:14:00:01",
215
+            "ipv4_address": "172.20.0.1/16"
216
+        }
217
+    }
218
+}
219
+```
220
+
221
+
222
+# Connecting to Multiple networks
223
+
224
+Docker containers can dynamically connect to one or more networks, with each network backed
225
+by the same or a different network driver or plugin.
226
+
227
+```
228
+$ docker network connect isolated_nw container2
229
+$ docker network inspect isolated_nw
230
+{
231
+    "name": "isolated_nw",
232
+    "id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
233
+    "driver": "bridge",
234
+    "containers": {
235
+        "777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843": {
236
+            "endpoint": "c7f22f8da07fb8ecc687d08377cfcdb80b4dd8624c2a8208b1a4268985e38683",
237
+            "mac_address": "02:42:ac:14:00:01",
238
+            "ipv4_address": "172.20.0.1/16"
239
+        },
240
+        "bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
241
+            "endpoint": "2ac11345af68b0750341beeda47cc4cce93bb818d8eb25e61638df7a4997cb1b",
242
+            "mac_address": "02:42:ac:14:00:02",
243
+            "ipv4_address": "172.20.0.2/16"
244
+        }
245
+    }
246
+}
247
+```
248
+
249
+Let's check the network resources used by container2:
250
+
251
+```
252
+$ docker inspect --format='{{.NetworkSettings.Networks}}' container2
253
+[bridge isolated_nw]
254
+
255
+$ sudo docker attach container2
256
+
257
+/ # ifconfig
258
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
259
+          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
260
+          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
261
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
262
+          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
263
+          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
264
+          collisions:0 txqueuelen:0
265
+          RX bytes:1586 (1.5 KiB)  TX bytes:1460 (1.4 KiB)
266
+
267
+eth1      Link encap:Ethernet  HWaddr 02:42:AC:14:00:02
268
+          inet addr:172.20.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
269
+          inet6 addr: fe80::42:acff:fe14:2/64 Scope:Link
270
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
271
+          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
272
+          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
273
+          collisions:0 txqueuelen:0
274
+          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)
275
+
276
+lo        Link encap:Local Loopback
277
+          inet addr:127.0.0.1  Mask:255.0.0.0
278
+          inet6 addr: ::1/128 Scope:Host
279
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
280
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
281
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
282
+          collisions:0 txqueuelen:0
283
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
284
+```
285
+
286
+
287
+In the example discussed in this section  thus far, container3 and container2 are 
288
+connected to isolated_nw and can talk to each other. 
289
+But container3 and container1 are not on the same network, and hence they cannot communicate:
290
+
291
+```
292
+$ docker attach container3
293
+
294
+/ # ifconfig
295
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:14:00:01
296
+          inet addr:172.20.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
297
+          inet6 addr: fe80::42:acff:fe14:1/64 Scope:Link
298
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
299
+          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
300
+          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
301
+          collisions:0 txqueuelen:0
302
+          RX bytes:1944 (1.8 KiB)  TX bytes:648 (648.0 B)
303
+
304
+lo        Link encap:Local Loopback
305
+          inet addr:127.0.0.1  Mask:255.0.0.0
306
+          inet6 addr: ::1/128 Scope:Host
307
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
308
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
309
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
310
+          collisions:0 txqueuelen:0
311
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
312
+
313
+/ # ping container2.isolated_nw
314
+PING container2.isolated_nw (172.20.0.2): 56 data bytes
315
+64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.217 ms
316
+64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.150 ms
317
+64 bytes from 172.20.0.2: seq=2 ttl=64 time=0.188 ms
318
+64 bytes from 172.20.0.2: seq=3 ttl=64 time=0.176 ms
319
+^C
320
+--- container2.isolated_nw ping statistics ---
321
+4 packets transmitted, 4 packets received, 0% packet loss
322
+round-trip min/avg/max = 0.150/0.182/0.217 ms
323
+/ # ping container2
324
+PING container2 (172.20.0.2): 56 data bytes
325
+64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.120 ms
326
+64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.109 ms
327
+^C
328
+--- container2 ping statistics ---
329
+2 packets transmitted, 2 packets received, 0% packet loss
330
+round-trip min/avg/max = 0.109/0.114/0.120 ms
331
+
332
+/ # ping container1
333
+ping: bad address 'container1'
334
+
335
+/ # ping 172.17.0.2
336
+PING 172.17.0.2 (172.17.0.2): 56 data bytes
337
+^C
338
+--- 172.17.0.2 ping statistics ---
339
+4 packets transmitted, 0 packets received, 100% packet loss
340
+
341
+/ # ping 172.17.0.3
342
+PING 172.17.0.3 (172.17.0.3): 56 data bytes
343
+^C
344
+--- 172.17.0.3 ping statistics ---
345
+4 packets transmitted, 0 packets received, 100% packet loss
346
+
347
+```
348
+
349
+Container2, on the other hand, is attached to both networks (bridge and isolated_nw), and hence 
350
+can talk to both container1 and container3:
351
+
352
+```
353
+$ docker attach container2
354
+
355
+/ # cat /etc/hosts
356
+172.17.0.3      bda12f892278
357
+127.0.0.1       localhost
358
+::1     localhost ip6-localhost ip6-loopback
359
+fe00::0 ip6-localnet
360
+ff00::0 ip6-mcastprefix
361
+ff02::1 ip6-allnodes
362
+ff02::2 ip6-allrouters
363
+172.17.0.2      container1
364
+172.17.0.2      container1.bridge
365
+172.17.0.3      container2
366
+172.17.0.3      container2.bridge
367
+172.20.0.1      container3
368
+172.20.0.1      container3.isolated_nw
369
+172.20.0.2      container2
370
+172.20.0.2      container2.isolated_nw
371
+
372
+/ # ping container3
373
+PING container3 (172.20.0.1): 56 data bytes
374
+64 bytes from 172.20.0.1: seq=0 ttl=64 time=0.138 ms
375
+64 bytes from 172.20.0.1: seq=1 ttl=64 time=0.133 ms
376
+64 bytes from 172.20.0.1: seq=2 ttl=64 time=0.133 ms
377
+^C
378
+--- container3 ping statistics ---
379
+3 packets transmitted, 3 packets received, 0% packet loss
380
+round-trip min/avg/max = 0.133/0.134/0.138 ms
381
+
382
+/ # ping container1
383
+PING container1 (172.17.0.2): 56 data bytes
384
+64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.121 ms
385
+64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.250 ms
386
+64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.133 ms
387
+^C
388
+--- container1 ping statistics ---
389
+3 packets transmitted, 3 packets received, 0% packet loss
390
+round-trip min/avg/max = 0.121/0.168/0.250 ms
391
+/ #
392
+```
393
+
394
+
395
+Just as it is easy to connect a container to multiple networks, one can 
396
+disconnect a container from a network using the `docker network disconnect` command.
397
+
398
+```
399
+$ docker network disconnect isolated_nw container2
400
+
401
+$ docker inspect --format='{{.NetworkSettings.Networks}}' container2
402
+[bridge]
403
+
404
+$ docker network inspect isolated_nw
405
+{
406
+    "name": "isolated_nw",
407
+    "id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
408
+    "driver": "bridge",
409
+    "containers": {
410
+        "777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843": {
411
+            "endpoint": "c7f22f8da07fb8ecc687d08377cfcdb80b4dd8624c2a8208b1a4268985e38683",
412
+            "mac_address": "02:42:ac:14:00:01",
413
+            "ipv4_address": "172.20.0.1/16"
414
+        }
415
+    }
416
+}
417
+```
418
+
419
+Once a container is disconnected from a network, it cannot communicate with other containers
420
+connected to that network. In this example, container2 can no longer talk to container3 
421
+on isolated_nw:
422
+
423
+```
424
+$ sudo docker attach container2
425
+
426
+/ # ifconfig
427
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
428
+          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
429
+          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
430
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
431
+          RX packets:26 errors:0 dropped:0 overruns:0 frame:0
432
+          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
433
+          collisions:0 txqueuelen:0
434
+          RX bytes:1964 (1.9 KiB)  TX bytes:1838 (1.7 KiB)
435
+
436
+lo        Link encap:Local Loopback
437
+          inet addr:127.0.0.1  Mask:255.0.0.0
438
+          inet6 addr: ::1/128 Scope:Host
439
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
440
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
441
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
442
+          collisions:0 txqueuelen:0
443
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
444
+
445
+/ # ping container3
446
+PING container3 (172.20.0.1): 56 data bytes
447
+^C
448
+--- container3 ping statistics ---
449
+2 packets transmitted, 0 packets received, 100% packet loss
450
+
451
+
452
+But container2 still has full connectivity to the bridge network
453
+
454
+/ # ping container1
455
+PING container1 (172.17.0.2): 56 data bytes
456
+64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
457
+64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
458
+^C
459
+--- container1 ping statistics ---
460
+2 packets transmitted, 2 packets received, 0% packet loss
461
+round-trip min/avg/max = 0.119/0.146/0.174 ms
462
+/ #
463
+
464
+```
465
+
466
+When all the containers in a network are stopped or disconnected, the network can be removed:
467
+
468
+```
469
+$ docker network inspect isolated_nw
470
+{
471
+    "name": "isolated_nw",
472
+    "id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
473
+    "driver": "bridge"
474
+}
475
+
476
+$ docker network rm isolated_nw
477
+
478
+$ docker network ls
479
+NETWORK ID          NAME                DRIVER
480
+9f904ee27bf5        none                null
481
+cf03ee007fb4        host                host
482
+7fca4eb8c647        bridge              bridge
483
+```
484
+
485
+# Native Multi-host networking
486
+
487
+With the help of libnetwork and the built-in VXLAN-based overlay network driver, Docker supports multi-host networking natively out of the box. Technical details are documented at https://github.com/docker/libnetwork/blob/master/docs/overlay.md.
488
+Using exactly the same `docker network` UI described above, the user can exercise the power of multi-host networking.
489
+
490
+To create a network using the built-in overlay driver:
491
+
492
+```
493
+$ docker network create -d overlay multi-host-network
494
+```
495
+
496
+Since the `network` object is globally significant, this feature requires distributed state provided by `libkv`. Using `libkv`, the user can plug in any of the supported key-value stores (such as consul, etcd, or zookeeper).
497
+The user can specify the key-value store of choice using the `--cluster-store` daemon flag, which takes a configuration value of the format `PROVIDER://URL`, where
498
+`PROVIDER` is the name of the key-value store (such as consul, etcd, or zookeeper) and
499
+`URL` is the URL used to reach the key-value store.
500
+Example: `docker daemon --cluster-store=consul://localhost:8500`
501
+
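+A rough end-to-end sketch (the host names and the consul address are assumptions
+for illustration; each daemon in the cluster is started with the
+`--cluster-store` flag described above):
+
+```
+# on every host in the cluster, point the daemon at the same key-value store
+$ docker daemon --cluster-store=consul://consul-host:8500
+
+# on host1: create the overlay network and start a container on it
+$ docker network create -d overlay multi-host-network
+$ docker run -itd --net=multi-host-network --name=container-a busybox
+
+# on host2: the same network is visible and newly connected containers can
+# communicate with container-a on host1
+$ docker network ls
+$ docker run -itd --net=multi-host-network --name=container-b busybox
+```
+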
502
+# Next step
503
+
504
+Now that you know how to network Docker containers together, the next step is
505
+learning how to manage data, volumes and mounts inside your containers.
506
+
507
+Go to [Managing Data in Containers](/userguide/dockervolumes.md).
... ...
@@ -42,7 +42,7 @@ Go to [Using Docker Hub](/docker-hub).
42 42
 Docker offers a *container-based* virtualization platform to power your
43 43
 applications. To learn how to Dockerize applications and run them:
44 44
 
45
-Go to [Dockerizing Applications](/userguide/dockerizing).
45
+Go to [Dockerizing Applications](/docs/userguide/dockerizing.md).
46 46
 
47 47
 ## Working with containers
48 48
 
... ...
@@ -52,7 +52,7 @@ Once you get a grip on running your applications in Docker containers
52 52
 we're going to show you how to manage those containers. To find out
53 53
 about how to inspect, monitor and manage containers:
54 54
 
55
-Go to [Working With Containers](/userguide/usingdocker).
55
+Go to [Working With Containers](/docs/userguide/usingdocker.md).
56 56
 
57 57
 ## Working with Docker images
58 58
 
... ...
@@ -61,7 +61,7 @@ Go to [Working With Containers](/userguide/usingdocker).
61 61
 Once you've learnt how to use Docker it's time to take the next step and
62 62
 learn how to build your own application images with Docker.
63 63
 
64
-Go to [Working with Docker Images](/userguide/dockerimages).
64
+Go to [Working with Docker Images](/docs/userguide/dockerimages.md).
65 65
 
66 66
 ## Linking containers together
67 67
 
... ...
@@ -69,14 +69,24 @@ Until now we've seen how to build individual applications inside Docker
69 69
 containers. Now learn how to build whole application stacks with Docker
70 70
 by linking together multiple Docker containers.
71 71
 
72
-Go to [Linking Containers Together](/userguide/dockerlinks).
72
+Go to [Linking Containers Together](/docs/userguide/dockerlinks.md).
73
+
74
+## Docker container networking
75
+
76
+Links provide an easy and convenient way to connect containers.
77
+But they are opinionated and don't provide a lot of flexibility or
78
+choice to end users. Now, let's learn about a flexible way to connect 
79
+containers together within a host or across multiple hosts in a cluster
80
+using various networking technologies, with the help of extensible plugins.
81
+
82
+Go to [Docker Networking](/docs/userguide/dockernetworks.md).
73 83
 
74 84
 ## Managing data in containers
75 85
 
76 86
 Now we know how to link Docker containers together the next step is
77 87
 learning how to manage data, volumes and mounts inside our containers.
78 88
 
79
-Go to [Managing Data in Containers](/userguide/dockervolumes).
89
+Go to [Managing Data in Containers](/docs/userguide/dockervolumes.md).
80 90
 
81 91
 ## Working with Docker Hub
82 92
 
... ...
@@ -84,7 +94,7 @@ Now we've learned a bit more about how to use Docker we're going to see
84 84
 how to combine Docker with the services available on Docker Hub including
85 85
 Trusted Builds and private repositories.
86 86
 
87
-Go to [Working with Docker Hub](/userguide/dockerrepos).
87
+Go to [Working with Docker Hub](/docs/userguide/dockerrepos.md).
88 88
 
89 89
 ## Docker Compose
90 90
 
... ...
@@ -71,11 +71,6 @@ to build a Docker binary with the experimental features enabled:
71 71
 
72 72
 ## Current experimental features
73 73
 
74
-* [Network plugins](plugins_network.md)
75
-* [Networking and Services UI](networking.md)
76
-* [Native multi-host networking](network_overlay.md)
77
-* [Compose, Swarm and networking integration](compose_swarm_networking.md)
78
-
79 74
 ## How to comment on an experimental feature
80 75
 
81 76
 Each feature's documentation includes a list of proposal pull requests or PRs associated with the feature. If you want to comment on or suggest a change to a feature, please add it to the existing feature PR.  
82 77
deleted file mode 100644
... ...
@@ -1,238 +0,0 @@
1
-# Experimental: Compose, Swarm and Multi-Host Networking
2
-
3
-The [experimental build of Docker](https://github.com/docker/docker/tree/master/experimental) has an entirely new networking system, which enables secure communication between containers on multiple hosts. In combination with Docker Swarm and Docker Compose, you can now run multi-container apps on multi-host clusters with the same tooling and configuration format you use to develop them locally.
4
-
5
-> Note: This functionality is in the experimental stage, and contains some hacks and workarounds which will be removed as it matures.
6
-
7
-## Prerequisites
8
-
9
-Before you start, you’ll need to install the experimental build of Docker, and the latest versions of Machine and Compose.
10
-
11
--   To install the experimental Docker build on a Linux machine, follow the instructions [here](https://github.com/docker/docker/tree/master/experimental#install-docker-experimental).
12
-
13
--   To install the experimental Docker build on a Mac, run these commands:
14
-
15
-        $ curl -L https://experimental.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker
16
-        $ chmod +x /usr/local/bin/docker
17
-
18
--   To install Machine, follow the instructions [here](http://docs.docker.com/machine/).
19
-
20
--   To install Compose, follow the instructions [here](http://docs.docker.com/compose/install/).
21
-
22
-You’ll also need a [Docker Hub](https://hub.docker.com/account/signup/) account and a [Digital Ocean](https://www.digitalocean.com/) account.  
23
-It works with the amazonec2 driver as well (by adapting the commands accordingly), except you'll need to manually open the ports 8500 (consul) and 7946 (serf) by editing the inbound rules of the corresponding security group.
24
-
25
-## Set up a swarm with multi-host networking
26
-
27
-Set the `DIGITALOCEAN_ACCESS_TOKEN` environment variable to a valid Digital Ocean API token, which you can generate in the [API panel](https://cloud.digitalocean.com/settings/applications).
28
-
29
-    export DIGITALOCEAN_ACCESS_TOKEN=abc12345
30
-
31
-Start a consul server:
32
-
33
-    docker-machine --debug create \
34
-        -d digitalocean \
35
-        --engine-install-url="https://experimental.docker.com" \
36
-        consul
37
-
38
-    docker $(docker-machine config consul) run -d \
39
-        -p "8500:8500" \
40
-        -h "consul" \
41
-        progrium/consul -server -bootstrap
42
-
43
-(In a real world setting you’d set up a distributed consul, but that’s beyond the scope of this guide!)
44
-
45
-Create a Swarm token:
46
-
47
-    export SWARM_TOKEN=$(docker run swarm create)
48
-
49
-Next, you create a Swarm master with Machine: 
50
-
51
-    docker-machine --debug create \
52
-        -d digitalocean \
53
-        --digitalocean-image="ubuntu-14-04-x64" \
54
-        --engine-install-url="https://experimental.docker.com" \
55
-        --engine-opt="default-network=overlay:multihost" \
56
-        --engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
57
-        --engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
58
-        swarm-0
59
-
60
-Usually Machine can create Swarms for you, but it doesn't yet fully support multi-host networks yet, so you'll have to start up the Swarm manually:
61
-
62
-    docker $(docker-machine config swarm-0) run -d \
63
-        --restart="always" \
64
-        --net="bridge" \
65
-        swarm:latest join \
66
-            --addr "$(docker-machine ip swarm-0):2376" \
67
-            "token://$SWARM_TOKEN"
68
-
69
-    docker $(docker-machine config swarm-0) run -d \
70
-        --restart="always" \
71
-        --net="bridge" \
72
-        -p "3376:3376" \
73
-        -v "/etc/docker:/etc/docker" \
74
-        swarm:latest manage \
75
-            --tlsverify \
76
-            --tlscacert="/etc/docker/ca.pem" \
77
-            --tlscert="/etc/docker/server.pem" \
78
-            --tlskey="/etc/docker/server-key.pem" \
79
-            -H "tcp://0.0.0.0:3376" \
80
-            --strategy spread \
81
-            "token://$SWARM_TOKEN"
82
-
83
-Create a Swarm node:
84
-
85
-    docker-machine --debug create \
86
-        -d digitalocean \
87
-        --digitalocean-image="ubuntu-14-10-x64" \
88
-        --engine-install-url="https://experimental.docker.com" \
89
-        --engine-opt="default-network=overlay:multihost" \
90
-        --engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
91
-        --engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
92
-        --engine-label="com.docker.network.driver.overlay.neighbor_ip=$(docker-machine ip swarm-0)" \
93
-        swarm-1
94
-
95
-    docker $(docker-machine config swarm-1) run -d \
96
-        --restart="always" \
97
-        --net="bridge" \
98
-        swarm:latest join \
99
-            --addr "$(docker-machine ip swarm-1):2376" \
100
-            "token://$SWARM_TOKEN"
101
-
102
-You can create more Swarm nodes if you want - it’s best to give them sensible names (swarm-2, swarm-3, etc).
103
-
104
-Finally, point Docker at your swarm:
105
-
106
-    export DOCKER_HOST=tcp://"$(docker-machine ip swarm-0):3376"
107
-    export DOCKER_TLS_VERIFY=1
108
-    export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/swarm-0"
109
-
110
-## Run containers and get them communicating
111
-
112
-Now that you’ve got a swarm up and running, you can create containers on it just like a single Docker instance:
113
-
114
-    $ docker run busybox echo hello world
115
-    hello world
116
-
117
-If you run `docker ps -a`, you can see what node that container was started on by looking at its name (here it’s swarm-3):
118
-
119
-    $ docker ps -a
120
-    CONTAINER ID        IMAGE                      COMMAND                CREATED              STATUS                      PORTS                                   NAMES
121
-    41f59749737b        busybox                    "echo hello world"     15 seconds ago       Exited (0) 13 seconds ago                                           swarm-3/trusting_leakey
122
-
123
-As you start more containers, they’ll be placed on different nodes across the cluster, thanks to Swarm’s default “spread” scheduling strategy.
124
-
125
-Every container started on this swarm will use the “overlay:multihost” network by default, meaning they can all intercommunicate. Each container gets an IP address on that network, and an `/etc/hosts` file which will be updated on-the-fly with every other container’s IP address and name. That means that if you have a running container named ‘foo’, other containers can access it at the hostname ‘foo’.
126
-
127
-Let’s verify that multi-host networking is functioning. Start a long-running container:
128
-
129
-    $ docker run -d --name long-running busybox top
130
-    <container id>
131
-
132
-If you start a new container and inspect its /etc/hosts file, you’ll see the long-running container in there:
133
-
134
-    $ docker run busybox cat /etc/hosts
135
-    ...
136
-    172.21.0.6  long-running
137
-
138
-Verify that connectivity works between containers:
139
-
140
-    $ docker run busybox ping long-running
141
-    PING long-running (172.21.0.6): 56 data bytes
142
-    64 bytes from 172.21.0.6: seq=0 ttl=64 time=7.975 ms
143
-    64 bytes from 172.21.0.6: seq=1 ttl=64 time=1.378 ms
144
-    64 bytes from 172.21.0.6: seq=2 ttl=64 time=1.348 ms
145
-    ^C
146
-    --- long-running ping statistics ---
147
-    3 packets transmitted, 3 packets received, 0% packet loss
148
-    round-trip min/avg/max = 1.140/2.099/7.975 ms
149
-
150
-## Run a Compose application
151
-
152
-Here’s an example of a simple Python + Redis app using multi-host networking on a swarm.
153
-
154
-Create a directory for the app:
155
-
156
-    $ mkdir composetest
157
-    $ cd composetest
158
-
159
-Inside this directory, create 2 files.
160
-
161
-First, create `app.py` - a simple web app that uses the Flask framework and increments a value in Redis:
162
-
163
-    from flask import Flask
164
-    from redis import Redis
165
-    import os
166
-    app = Flask(__name__)
167
-    redis = Redis(host='composetest_redis_1', port=6379)
168
-
169
-    @app.route('/')
170
-    def hello():
171
-        redis.incr('hits')
172
-        return 'Hello World! I have been seen %s times.' % redis.get('hits')
173
-
174
-    if __name__ == "__main__":
175
-        app.run(host="0.0.0.0", debug=True)
176
-
177
-Note that we’re connecting to a host called `composetest_redis_1` - this is the name of the Redis container that Compose will start.
178
-
179
-Second, create a Dockerfile for the app container:
180
-
181
-    FROM python:2.7
182
-    RUN pip install flask redis
183
-    ADD . /code
184
-    WORKDIR /code
185
-    CMD ["python", "app.py"]
186
-
187
-Build the Docker image and push it to the Hub (you’ll need a Hub account). Replace `<username>` with your Docker Hub username:
188
-
189
-    $ docker build -t <username>/counter .
190
-    $ docker push <username>/counter
191
-
192
-Next, create a `docker-compose.yml`, which defines the configuration for the web and redis containers. Once again, replace `<username>` with your Hub username:
193
-
194
-    web:
195
-      image: <username>/counter
196
-      ports:
197
-       - "80:5000"
198
-    redis:
199
-      image: redis
200
-
201
-Now start the app:
202
-
203
-    $ docker-compose up -d
204
-    Pulling web (username/counter:latest)...
205
-    swarm-0: Pulling username/counter:latest... : downloaded
206
-    swarm-2: Pulling username/counter:latest... : downloaded
207
-    swarm-1: Pulling username/counter:latest... : downloaded
208
-    swarm-3: Pulling username/counter:latest... : downloaded
209
-    swarm-4: Pulling username/counter:latest... : downloaded
210
-    Creating composetest_web_1...
211
-    Pulling redis (redis:latest)...
212
-    swarm-2: Pulling redis:latest... : downloaded
213
-    swarm-1: Pulling redis:latest... : downloaded
214
-    swarm-3: Pulling redis:latest... : downloaded
215
-    swarm-4: Pulling redis:latest... : downloaded
216
-    swarm-0: Pulling redis:latest... : downloaded
217
-    Creating composetest_redis_1...
218
-
219
-Swarm has created containers for both web and redis, and placed them on different nodes, which you can check with `docker ps`:
220
-
221
-    $ docker ps
222
-    CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                                  NAMES
223
-    92faad2135c9        redis                      "/entrypoint.sh redi   43 seconds ago      Up 42 seconds                                              swarm-2/composetest_redis_1
224
-    adb809e5cdac        username/counter           "/bin/sh -c 'python    55 seconds ago      Up 54 seconds       45.67.8.9:80->5000/tcp                 swarm-1/composetest_web_1
225
-
226
-You can also see that the web container has exposed port 80 on its swarm node. If you curl that IP, you’ll get a response from the container:
227
-
228
-    $ curl http://45.67.8.9
229
-    Hello World! I have been seen 1 times.
230
-
231
-If you hit it repeatedly, the counter will increment, demonstrating that the web and redis containers are communicating:
232
-
233
-    $ curl http://45.67.8.9
234
-    Hello World! I have been seen 2 times.
235
-    $ curl http://45.67.8.9
236
-    Hello World! I have been seen 3 times.
237
-    $ curl http://45.67.8.9
238
-    Hello World! I have been seen 4 times.
239 1
deleted file mode 100644
... ...
@@ -1,14 +0,0 @@
1
-# Native Multi-host networking
2
-
3
-There is a lot to say about native multi-host networking and the `overlay` driver that makes it happen. The technical details are documented at https://github.com/docker/libnetwork/blob/master/docs/overlay.md.
4
-Using the experimental `docker network` and `docker service` commands and the `--publish-service` flag described above, you can exercise the power of multi-host networking.
5
-
6
-Since `network` and `service` objects are globally significant, this feature requires a distributed state store, which is provided by the `libkv` project.
7
-Using `libkv`, you can plug in any of the supported key-value stores (such as consul, etcd or zookeeper).
8
-You can specify the key-value store of choice using the `--cluster-store` daemon flag, which takes a configuration value of the format `PROVIDER:URL`, where
9
-`PROVIDER` is the name of the key-value store (such as consul, etcd or zookeeper) and
10
-`URL` is the URL at which the key-value store can be reached.
11
-Example: `docker daemon --cluster-store=consul://localhost:8500`
12
-
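-As a minimal sketch of how these pieces fit together (assuming a Consul instance is already reachable at `localhost:8500`; the overlay network name `prod` is arbitrary):
-
-    # daemon configured against a reachable Key-Value store (Consul in this sketch)
-    $ docker daemon --cluster-store=consul://localhost:8500
-
-    # networks created with the overlay driver are then visible to every daemon
-    # that points at the same store
-    $ docker network create -d overlay prod
-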
13
-Send us feedback and comments on [#14083](https://github.com/docker/docker/issues/14083)
14
-or on the usual Google Groups (docker-user, docker-dev) and IRC channels.
15 1
deleted file mode 100644
... ...
@@ -1,120 +0,0 @@
1
-# Experimental: Networking and Services
2
-
3
-In this feature:
4
-
5
-- `network` and `service` become first class objects in the Docker UI
6
-  - one can now create networks, publish services on that network and attach containers to the services
7
-- Native multi-host networking
8
-  - `network` and `service` objects are globally significant and provide multi-host container connectivity natively
9
-- Inbuilt simple Service Discovery
10
-  - With multi-host networking and the top-level `service` object, Docker now provides simple, out-of-the-box Service Discovery for containers running in a network
11
-- Batteries included but removable
12
-  - Docker provides built-in native multi-host networking by default, and it can be swapped out for any remote driver provided by external plugins.
13
-
14
-This is an experimental feature. For information on installing and using experimental features, see [the experimental feature overview](README.md).
15
-
16
-## Using Networks
17
-
18
-        Usage: docker network [OPTIONS] COMMAND [OPTIONS] [arg...]
19
-
20
-        Commands:
21
-            create                   Create a network
22
-            rm                       Remove a network
23
-            ls                       List all networks
24
-            info                     Display information of a network
25
-
26
-        Run 'docker network COMMAND --help' for more information on a command.
27
-
28
-          --help=false       Print usage
29
-
30
-The `docker network` command is used to manage Networks.
31
-
32
-To create a network, use `docker network create foo`. You can also specify a driver
33
-if you have loaded a networking plugin, e.g. `docker network create -d <plugin_name> foo`:
34
-
35
-        $ docker network create foo
36
-        aae601f43744bc1f57c515a16c8c7c4989a2cad577978a32e6910b799a6bccf6
37
-        $ docker network create -d overlay bar
38
-        d9989793e2f5fe400a58ef77f706d03f668219688ee989ea68ea78b990fa2406
39
-
40
-`docker network ls` is used to display the currently configured networks
41
-
42
-        $ docker network ls
43
-        NETWORK ID          NAME                TYPE
44
-        d367e613ff7f        none                null
45
-        bd61375b6993        host                host
46
-        cc455abccfeb        bridge              bridge
47
-        aae601f43744        foo                 bridge
48
-        d9989793e2f5        bar                 overlay
49
-
50
-To get detailed information on a network, you can use the `docker network info`
51
-command.
52
-
53
-        $ docker network info foo
54
-        Network Id: aae601f43744bc1f57c515a16c8c7c4989a2cad577978a32e6910b799a6bccf6
55
-        Name: foo
56
-        Type: bridge
57
-
58
-If you no longer need a network, you can delete it with `docker network rm`:
59
-
60
-        $ docker network rm bar
61
-        bar
62
-        $ docker network ls
63
-        NETWORK ID          NAME                TYPE
64
-        aae601f43744        foo                 bridge
65
-        d367e613ff7f        none                null
66
-        bd61375b6993        host                host
67
-        cc455abccfeb        bridge              bridge
68
-
69
-## User-Defined default network
70
-
71
-The Docker daemon supports a configuration flag `--default-network`, which takes a configuration value of the format `DRIVER:NETWORK`, where
72
-`DRIVER` is either one of the built-in drivers (such as bridge, overlay, container, host and none) or a remote driver provided via a network plugin, and
73
-`NETWORK` is the name of a network created using the `docker network create` command.
74
-When a container is created and the network mode (`--net`) is not specified, this default network is used to connect
75
-the container. If `--default-network` is not specified, the default network is the one provided by the `bridge` driver.
76
-Example: `docker daemon --default-network=overlay:multihost`
77
-
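-A short sketch of the effect (the image is arbitrary; assumes the example daemon flag above):
-
-        # with --default-network=overlay:multihost in effect, a container started
-        # without --net is connected to the "multihost" overlay network rather
-        # than the default "bridge" network
-        $ docker run -itd busybox top
-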
78
-## Using Services
79
-
80
-        Usage: docker service COMMAND [OPTIONS] [arg...]
81
-
82
-        Commands:
83
-            publish   Publish a service
84
-            unpublish Remove a service
85
-            attach    Attach a backend (container) to the service
86
-            detach    Detach the backend from the service
87
-            ls        Lists all services
88
-            info      Display information about a service
89
-
90
-        Run 'docker service COMMAND --help' for more information on a command.
91
-
92
-          --help=false       Print usage
93
-
94
-Assuming we want to publish a service from container `a0ebc12d3e48` on network `foo` as `my-service`, we would use the following command:
95
-
96
-        $ docker service publish my-service.foo
97
-        ec56fd74717d00f968c26675c9a77707e49ae64b8e54832ebf78888eb116e428
98
-        $ docker service attach a0ebc12d3e48 my-service.foo
99
-
100
-This would make the container `a0ebc12d3e48` accessible as `my-service` on network `foo`. Any other container in network `foo` can use DNS to resolve the address of `my-service`.
101
-
102
-This can also be achieved by using the `--publish-service` flag for `docker run`:
103
-
104
-        docker run -itd --publish-service db.foo postgres
105
-
106
-`db.foo` in this instance means "place the container on network `foo`, and allow other containers on `foo` to discover it under the name `db`".
107
-
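-For example, a sketch of the resulting name resolution (assuming the `db.foo` container above is running; the client service name and image are arbitrary):
-
-        # any container placed on network "foo" can resolve the service by name
-        $ docker run --publish-service client.foo busybox ping -c 1 db
-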
108
-We can see the current services using the `docker service ls` command:
109
-
110
-        $ docker service ls
111
-        SERVICE ID          NAME                NETWORK             PROVIDER
112
-        ec56fd74717d        my-service          foo                 a0ebc12d3e48
113
-
114
-To remove a service:
115
-
116
-        $ docker service detach a0ebc12d3e48 my-service.foo
117
-        $ docker service unpublish my-service.foo
118
-
119
-Send us feedback and comments on [#14083](https://github.com/docker/docker/issues/14083)
120
-or on the usual Google Groups (docker-user, docker-dev) and IRC channels.
121 1
deleted file mode 100644
... ...
@@ -1,489 +0,0 @@
1
-# Networking API
2
-
3
-### List networks
4
-
5
-`GET /networks`
6
-
7
-List networks
8
-
9
-**Example request**:
10
-
11
-        GET /networks HTTP/1.1
12
-
13
-**Example response**:
14
-
15
-        HTTP/1.1 200 OK
16
-        Content-Type: application/json
17
-
18
-        [
19
-          {
20
-            "name": "none",
21
-            "id": "8e4e55c6863ef4241c548c1c6fc77289045e9e5d5b5e4875401a675326981898",
22
-            "type": "null",
23
-            "endpoints": []
24
-          },
25
-          {
26
-            "name": "host",
27
-            "id": "062b6d9ea7913fde549e2d186ff0402770658f8c4e769958e1b943ff4e675011",
28
-            "type": "host",
29
-            "endpoints": []
30
-          },
31
-          {
32
-            "name": "bridge",
33
-            "id": "a87dd9a9d58f030962df1c15fb3fa142fbd9261339de458bc89be1895cef2c70",
34
-            "type": "bridge",
35
-            "endpoints": []
36
-          }
37
-        ]
38
-
39
-Query Parameters:
40
-
41
--   **name** – Filter results with the given name
42
--   **partial-id** – Filter results using the partial network ID
43
-
44
-Status Codes:
45
-
46
--   **200** – no error
47
--   **400** – bad parameter
48
--   **500** – server error
49
-
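-For example, a sketch of calling this endpoint through the local daemon socket (assuming a curl build with `--unix-socket` support and the default socket path):
-
-        curl --unix-socket /var/run/docker.sock "http://localhost/networks?name=foo"
-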
50
-### Create a Network
51
-
52
-`POST /networks`
53
-
54
-**Example request**
55
-
56
-        POST /networks HTTP/1.1
57
-        Content-Type: application/json
58
-
59
-        {
60
-          "name": "foo",
61
-          "network_type": "",
62
-          "options": {}
63
-        }
64
-
65
-**Example Response**
66
-
67
-        HTTP/1.1 200 OK
68
-        "32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653",
69
-
70
-Status Codes:
71
-
72
--   **200** – no error
73
--   **400** – bad request
74
--   **500** – server error
75
-
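-A corresponding sketch with curl, reusing the request body shown above (same assumptions about the daemon socket as before):
-
-        curl --unix-socket /var/run/docker.sock \
-             -H "Content-Type: application/json" \
-             -d '{"name": "foo", "network_type": "", "options": {}}' \
-             http://localhost/networks
-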
76
-### Get a network
77
-
78
-`GET /networks/<network_id>`
79
-
80
-Get a network
81
-
82
-**Example request**:
83
-
84
-        GET /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653 HTTP/1.1
85
-
86
-**Example response**:
87
-
88
-        HTTP/1.1 200 OK
89
-        Content-Type: application/json
90
-
91
-        {
92
-          "name": "foo",
93
-          "id": "32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653",
94
-          "type": "bridge",
95
-          "endpoints": []
96
-        }
97
-
98
-Status Codes:
99
-
100
--   **200** – no error
101
--   **404** – not found
102
--   **500** – server error
103
-
104
-### List a network's endpoints
105
-
106
-`GET /networks/<network_id>/endpoints`
107
-
108
-**Example request**
109
-
110
-        GET /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints HTTP/1.1
111
-
112
-**Example Response**
113
-
114
-        HTTP/1.1 200 OK
115
-        Content-Type: application/json
116
-
117
-        [
118
-            {
119
-                "id": "7e0c116b882ee489a8a5345a2638c0129099aa47f4ba114edde34e75c1e4ae0d",
120
-                "name": "/lonely_pasteur",
121
-                "network": "foo"
122
-            }
123
-        ]
124
-
125
-Query Parameters:
126
-
127
--   **name** – Filter results with the given name
128
--   **partial-id** – Filter results using the partial network ID
129
-
130
-Status Codes:
131
-
132
--   **200** – no error
133
--   **400** – bad parameter
134
--   **500** – server error
135
-
136
-### Create an endpoint on a network
137
-
138
-`POST /networks/<network_id>/endpoints`
139
-
140
-**Example request**
141
-
142
-        POST /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints HTTP/1.1
143
-        Content-Type: application/json
144
-
145
-        {
146
-          "name": "baz",
147
-          "exposed_ports": [
148
-            {
149
-              "proto": 6,
150
-              "port": 8080
151
-            }
152
-          ],
153
-          "port_mapping": null
154
-        }
155
-
156
-**Example Response**
157
-
158
-        HTTP/1.1 200 OK
159
-        Content-Type: application/json
160
-
161
-        "b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a"
162
-
163
-Status Codes:
164
-
165
--   **200** – no error
166
--   **400** – bad parameter
167
--   **500** – server error
168
-
169
-### Get an endpoint
170
-
171
-`GET /networks/<network_id>/endpoints/<endpoint_id>`
172
-
173
-**Example request**
174
-
175
-        GET /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a HTTP/1.1
176
-
177
-**Example Response**
178
-
179
-        HTTP/1.1 200 OK
180
-        Content-Type: application/json
181
-
182
-        {
183
-            "id": "b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a",
184
-            "name": "baz",
185
-            "network": "foo"
186
-        }
187
-
188
-Status Codes:
189
-
190
--   **200** – no error
191
--   **404** - not found
192
--   **500** – server error
193
-
194
-### Join an endpoint to a container
195
-
196
-`POST /networks/<network_id>/endpoints/<endpoint_id>/containers`
197
-
198
-**Example request**
199
-
200
-        POST /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a/containers HTTP/1.1
201
-        Content-Type: application/json
202
-
203
-        {
204
-            "container_id": "e76f406417031bd24c17aeb9bb2f5968b628b9fb6067da264b234544754bf857",
205
-            "host_name": null,
206
-            "domain_name": null,
207
-            "hosts_path": null,
208
-            "resolv_conf_path": null,
209
-            "dns": null,
210
-            "extra_hosts": null,
211
-            "parent_updates": null,
212
-            "use_default_sandbox": true
213
-        }
214
-
215
-**Example response**
216
-
217
-        HTTP/1.1 200 OK
218
-        Content-Type: application/json
219
-
220
-        "/var/run/docker/netns/e76f40641703"
221
-
222
-
223
-Status Codes:
224
-
225
--   **200** – no error
226
--   **400** – bad parameter
227
--   **404** - not found
228
--   **500** – server error
229
-
230
-### Detach an endpoint from a container
231
-
232
-`DELETE /networks/<network_id>/endpoints/<endpoint_id>/containers/<container_id>`
233
-
234
-**Example request**
235
-
236
-        DELETE /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a/containers/e76f406417031bd24c17aeb9bb2f5968b628b9fb6067da264b234544754bf857 HTTP/1.1
237
-        Content-Type: application/json
238
-
239
-**Example response**
240
-
241
-        HTTP/1.1 200 OK
242
-
243
-Status Codes:
244
-
245
--   **200** – no error
246
--   **400** – bad parameter
247
--   **404** - not found
248
--   **500** – server error
249
-
250
-
251
-### Delete an endpoint
252
-
253
-`DELETE /networks/<network_id>/endpoints/<endpoint_id>`
254
-
255
-**Example request**
256
-
257
-        DELETE /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a HTTP/1.1
258
-
259
-**Example Response**
260
-
261
-        HTTP/1.1 200 OK
262
-
263
-Status Codes:
264
-
265
--   **200** – no error
266
--   **404** - not found
267
--   **500** – server error
268
-
269
-### Delete a network
270
-
271
-`DELETE /networks/<network_id>`
272
-
273
-Delete a network
274
-
275
-**Example request**:
276
-
277
-        DELETE /networks/0984d158bd8ae108e4d6bc8fcabedf51da9a174b32cc777026d4a29045654951 HTTP/1.1
278
-
279
-**Example response**:
280
-
281
-        HTTP/1.1 200 OK
282
-
283
-Status Codes:
284
-
285
--   **200** – no error
286
--   **404** – not found
287
--   **500** – server error
288
-
289
-# Services API
290
-
291
-### Publish a Service
292
-
293
-`POST /services`
294
-
295
-Publish a service
296
-
297
-**Example Request**
298
-
299
-        POST /services HTTP/1.1
300
-        Content-Type: application/json
301
-
302
-        {
303
-          "name": "bar",
304
-          "network_name": "foo",
305
-          "exposed_ports": null,
306
-          "port_mapping": null
307
-        }
308
-
309
-**Example Response**
310
-
311
-        HTTP/1.1 200 OK
312
-        Content-Type: application/json
313
-
314
-        "0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff"
315
-
316
-Status Codes:
317
-
318
--   **200** – no error
319
--   **400** – bad parameter
320
--   **500** – server error
321
-
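-As with the network calls, a sketch of publishing a service with curl (assuming the daemon socket and the request body shown above):
-
-        curl --unix-socket /var/run/docker.sock \
-             -H "Content-Type: application/json" \
-             -d '{"name": "bar", "network_name": "foo", "exposed_ports": null, "port_mapping": null}' \
-             http://localhost/services
-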
322
-### Get a Service
323
-
324
-`GET /services/<service_id>`
325
-
326
-Get a service
327
-
328
-**Example Request**:
329
-
330
-        GET /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff HTTP/1.1
331
-
332
-**Example Response**:
333
-
334
-        HTTP/1.1 200 OK
335
-        Content-Type: application/json
336
-
337
-        {
338
-          "name": "bar",
339
-          "id": "0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff",
340
-          "network": "foo"
341
-        }
342
-
343
-Status Codes:
344
-
345
--   **200** – no error
346
--   **400** – bad parameter
347
--   **404** - not found
348
--   **500** – server error
349
-
350
-### Attach a backend to a service
351
-
352
-`POST /services/<service_id>/backend`
353
-
354
-Attach a backend to a service
355
-
356
-**Example Request**:
357
-
358
-        POST /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend HTTP/1.1
359
-        Content-Type: application/json
360
-
361
-        {
362
-          "container_id": "98c5241f9475e9efc17e7198e931fb48166010b80f96d48df204e251378ca547",
363
-          "host_name": "",
364
-          "domain_name": "",
365
-          "hosts_path": "",
366
-          "resolv_conf_path": "",
367
-          "dns": null,
368
-          "extra_hosts": null,
369
-          "parent_updates": null,
370
-          "use_default_sandbox": false
371
-        }
372
-
373
-**Example Response**:
374
-
375
-        HTTP/1.1 200 OK
376
-        Content-Type: application/json
377
-
378
-        "/var/run/docker/netns/98c5241f9475"
379
-
380
-Status Codes:
381
-
382
--   **200** – no error
383
--   **400** – bad parameter
384
--   **500** – server error
385
-
386
-### Get Backends for a Service
387
-
388
-`GET /services/<service_id>/backend`
-
-Get all backends for a given service
389
-
390
-**Example Request**
391
-
392
-        GET /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend HTTP/1.1
393
-
394
-**Example Response**
395
-
396
-        HTTP/1.1 200 OK
397
-        Content-Type: application/json
398
-
399
-        [
400
-          {
401
-            "id": "98c5241f9475e9efc17e7198e931fb48166010b80f96d48df204e251378ca547"
402
-          }
403
-        ]
404
-
405
-Status Codes:
406
-
407
--   **200** – no error
408
--   **400** – bad parameter
409
--   **500** – server error
410
-
411
-### List Services
412
-
413
-`GET /services`
414
-
415
-List services
416
-
417
-**Example request**:
418
-
419
-        GET /services HTTP/1.1
420
-
421
-**Example response**:
422
-
423
-        HTTP/1.1 200 OK
424
-        Content-Type: application/json
425
-
426
-        [
427
-          {
428
-            "name": "/stupefied_stallman",
429
-            "id": "c826b26bf736fb4a77db33f83562e59f9a770724e259ab9c3d50d948f8233ae4",
430
-            "network": "bridge"
431
-          },
432
-          {
433
-            "name": "bar",
434
-            "id": "0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff",
435
-            "network": "foo"
436
-          }
437
-        ]
438
-
439
-Query Parameters:
440
-
441
--   **name** – Filter results with the given name
442
--   **partial-id** – Filter results using the partial service ID
443
--   **network** - Filter results by the given network
444
-
445
-Status Codes:
446
-
447
--   **200** – no error
448
--   **400** – bad parameter
449
--   **500** – server error
450
-
451
-### Detach a Backend from a Service
452
-
453
-`DELETE /services/<service_id>/backend/<container_id>`
454
-
455
-Detach a backend from a service
456
-
457
-**Example Request**
458
-
459
-        DELETE /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend/98c5241f9475e9efc17e7198e931fb48166010b80f96d48df204e251378ca547 HTTP/1.1
460
-
461
-**Example Response**
462
-
463
-        HTTP/1.1 200 OK
464
-
465
-Status Codes:
466
-
467
--   **200** – no error
468
--   **400** – bad parameter
469
--   **500** – server error
470
-
471
-### Un-Publish a Service
472
-
473
-`DELETE /services/<service_id>`
474
-
475
-Unpublish a service
476
-
477
-**Example Request**
478
-
479
-        DELETE /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff HTTP/1.1
480
-
481
-**Example Response**
482
-
483
-        HTTP/1.1 200 OK
484
-
485
-Status Codes:
486
-
487
--   **200** – no error
488
--   **400** – bad parameter
489
--   **500** – server error
490 1
deleted file mode 100644
... ...
@@ -1,45 +0,0 @@
1
-# Experimental: Docker network driver plugins
2
-
3
-Docker supports network driver plugins via 
4
-[LibNetwork](https://github.com/docker/libnetwork). Network driver plugins are 
5
-implemented as "remote drivers" for LibNetwork, which shares plugin 
6
-infrastructure with Docker. In effect this means that network driver plugins 
7
-are activated in the same way as other plugins, and use the same kind of 
8
-protocol.
9
-
10
-## Using network driver plugins
11
-
12
-The means of installing and running a network driver plugin will depend on the
13
-particular plugin.
14
-
15
-Once running however, network driver plugins are used just like the built-in
16
-network drivers: by being mentioned as a driver in network-oriented Docker
17
-commands. For example,
18
-
19
-    docker network create -d weave mynet
20
-
21
-Some network driver plugins are listed in [plugins.md](/docs/extend/plugins.md)
22
-
23
-The network thus created is owned by the plugin, so subsequent commands
24
-referring to that network will also be run through the plugin.
25
-
26
-## Network driver plugin protocol
27
-
28
-The network driver protocol, additional to the plugin activation call, is
29
-documented as part of LibNetwork:
30
-[https://github.com/docker/libnetwork/blob/master/docs/remote.md](https://github.com/docker/libnetwork/blob/master/docs/remote.md).
31
-
32
-# Related GitHub PRs and issues
33
-
34
-Please record your feedback in the following issue, on the usual
35
-Google Groups, or the IRC channel #docker-network.
36
-
37
- - [#14083](https://github.com/docker/docker/issues/14083) Feedback on
38
-   experimental networking features
39
-
40
-Other pertinent issues:
41
-
42
- - [#13977](https://github.com/docker/docker/issues/13977) UI for using networks
43
- - [#14023](https://github.com/docker/docker/pull/14023) --default-network option
44
- - [#14051](https://github.com/docker/docker/pull/14051) --publish-service option
45
- - [#13441](https://github.com/docker/docker/pull/13441) (Deprecated) Networks API & UI