
Adding User Guide

* Added User Guide section outlines.
* Added User Guide to menu.
* Moved HTTPS example to articles.
* Replaced Hello World example with User Guide.
* Moved use cases out of examples.
* Updated Introduction to add User Guide.
* Redirected migrated /use and /articles links.
* Added Docker.io section.
* Added Dockerized section.
* Added Using Docker section.
* Added Docker Images section.
* Added Docker Links section.
* Added Docker Volumes section.

Docker-DCO-1.1-Signed-off-by: James Turnbull <james@lovedthanlost.net> (github: jamtur01)

James Turnbull authored on 2014/05/22 06:05:19
Showing 82 changed files
... ...
@@ -28,7 +28,6 @@ pages:
 - ['index.md', 'About', 'Docker']
 - ['introduction/index.md', '**HIDDEN**']
 - ['introduction/understanding-docker.md', 'About', 'Understanding Docker']
-- ['introduction/working-with-docker.md', 'About', 'Working with Docker']
 
 # Installation:
 - ['installation/index.md', '**HIDDEN**']
... ...
@@ -50,44 +49,47 @@ pages:
 - ['installation/windows.md', 'Installation', 'Microsoft Windows']
 - ['installation/binaries.md', 'Installation', 'Binaries']
 
-# Examples:
-- ['use/index.md', '**HIDDEN**']
-- ['use/basics.md', 'Examples', 'First steps with Docker']
-- ['examples/index.md', '**HIDDEN**']
-- ['examples/hello_world.md', 'Examples', 'Hello World']
-- ['examples/nodejs_web_app.md', 'Examples', 'Node.js web application']
-- ['examples/python_web_app.md', 'Examples', 'Python web application']
-- ['examples/mongodb.md', 'Examples', 'Dockerizing MongoDB']
-- ['examples/running_redis_service.md', 'Examples', 'Redis service']
-- ['examples/postgresql_service.md', 'Examples', 'PostgreSQL service']
-- ['examples/running_riak_service.md', 'Examples', 'Running a Riak service']
-- ['examples/running_ssh_service.md', 'Examples', 'Running an SSH service']
-- ['examples/couchdb_data_volumes.md', 'Examples', 'CouchDB service']
-- ['examples/apt-cacher-ng.md', 'Examples', 'Apt-Cacher-ng service']
-- ['examples/https.md', 'Examples', 'Running Docker with HTTPS']
-- ['examples/using_supervisord.md', 'Examples', 'Using Supervisor']
-- ['examples/cfengine_process_management.md', 'Examples', 'Process management with CFEngine']
-- ['use/working_with_links_names.md', 'Examples', 'Linking containers together']
-- ['use/working_with_volumes.md', 'Examples', 'Sharing Directories using volumes']
-- ['use/puppet.md', 'Examples', 'Using Puppet']
-- ['use/chef.md', 'Examples', 'Using Chef']
-- ['use/workingwithrepository.md', 'Examples', 'Working with a Docker Repository']
-- ['use/port_redirection.md', 'Examples', 'Redirect ports']
-- ['use/ambassador_pattern_linking.md', 'Examples', 'Cross-Host linking using Ambassador Containers']
-- ['use/host_integration.md', 'Examples', 'Automatically starting Containers']
-
-#- ['user-guide/index.md', '**HIDDEN**']
-# - ['user-guide/writing-your-docs.md', 'User Guide', 'Writing your docs']
-# - ['user-guide/styling-your-docs.md', 'User Guide', 'Styling your docs']
-# - ['user-guide/configuration.md', 'User Guide', 'Configuration']
-# ./faq.md
+# User Guide:
+- ['userguide/index.md', 'User Guide', 'The Docker User Guide' ]
+- ['userguide/dockerio.md', 'User Guide', 'Getting Started with Docker.io' ]
+- ['userguide/dockerizing.md', 'User Guide', 'Dockerizing Applications' ]
+- ['userguide/usingdocker.md', 'User Guide', 'Working with Containers' ]
+- ['userguide/dockerimages.md', 'User Guide', 'Working with Docker Images' ]
+- ['userguide/dockerlinks.md', 'User Guide', 'Linking containers together' ]
+- ['userguide/dockervolumes.md', 'User Guide', 'Managing data in containers' ]
+- ['userguide/dockerrepos.md', 'User Guide', 'Working with Docker.io' ]
 
 # Docker.io docs:
-- ['docker-io/index.md', '**HIDDEN**']
-# - ['index/home.md', 'Docker Index', 'Help']
+- ['docker-io/index.md', 'Docker.io', 'Docker.io' ]
 - ['docker-io/accounts.md', 'Docker.io', 'Accounts']
 - ['docker-io/repos.md', 'Docker.io', 'Repositories']
-- ['docker-io/builds.md', 'Docker.io', 'Trusted Builds']
+- ['docker-io/builds.md', 'Docker.io', 'Automated Builds']
+
+# Examples:
+- ['examples/index.md', '**HIDDEN**']
+- ['examples/nodejs_web_app.md', 'Examples', 'Dockerizing a Node.js web application']
+- ['examples/mongodb.md', 'Examples', 'Dockerizing MongoDB']
+- ['examples/running_redis_service.md', 'Examples', 'Dockerizing a Redis service']
+- ['examples/postgresql_service.md', 'Examples', 'Dockerizing a PostgreSQL service']
+- ['examples/running_riak_service.md', 'Examples', 'Dockerizing a Riak service']
+- ['examples/running_ssh_service.md', 'Examples', 'Dockerizing an SSH service']
+- ['examples/couchdb_data_volumes.md', 'Examples', 'Dockerizing a CouchDB service']
+- ['examples/apt-cacher-ng.md', 'Examples', 'Dockerizing an Apt-Cacher-ng service']
+
+# Articles
+- ['articles/index.md', '**HIDDEN**']
+- ['articles/basics.md', 'Articles', 'Docker basics']
+- ['articles/networking.md', 'Articles', 'Advanced networking']
+- ['articles/security.md', 'Articles', 'Security']
+- ['articles/https.md', 'Articles', 'Running Docker with HTTPS']
+- ['articles/host_integration.md', 'Articles', 'Automatically starting Containers']
+- ['articles/using_supervisord.md', 'Articles', 'Using Supervisor']
+- ['articles/cfengine_process_management.md', 'Articles', 'Process management with CFEngine']
+- ['articles/puppet.md', 'Articles', 'Using Puppet']
+- ['articles/chef.md', 'Articles', 'Using Chef']
+- ['articles/ambassador_pattern_linking.md', 'Articles', 'Cross-Host linking using Ambassador Containers']
+- ['articles/runmetrics.md', 'Articles', 'Runtime metrics']
+- ['articles/baseimages.md', 'Articles', 'Creating a Base Image']
 
 # Reference
 - ['reference/index.md', '**HIDDEN**']
... ...
@@ -96,11 +98,6 @@ pages:
 - ['reference/builder.md', 'Reference', 'Dockerfile']
 - ['faq.md', 'Reference', 'FAQ']
 - ['reference/run.md', 'Reference', 'Run Reference']
-- ['articles/index.md', '**HIDDEN**']
-- ['articles/runmetrics.md', 'Reference', 'Runtime metrics']
-- ['articles/security.md', 'Reference', 'Security']
-- ['articles/baseimages.md', 'Reference', 'Creating a Base Image']
-- ['use/networking.md', 'Reference', 'Advanced networking']
 - ['reference/api/index.md', '**HIDDEN**']
 - ['reference/api/docker-io_api.md', 'Reference', 'Docker.io API']
 - ['reference/api/registry_api.md', 'Reference', 'Docker Registry API']
... ...
@@ -134,9 +131,6 @@ pages:
 - ['terms/filesystem.md', '**HIDDEN**']
 - ['terms/image.md', '**HIDDEN**']
 
-# TODO: our theme adds a dropdown even for sections that have no subsections.
-  #- ['faq.md', 'FAQ']
-
 # Contribute:
 - ['contributing/index.md', '**HIDDEN**']
 - ['contributing/contributing.md', 'Contribute', 'Contributing']
... ...
@@ -11,7 +11,14 @@
     { "Condition": { "KeyPrefixEquals": "en/v0.6.3/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "" } },
     { "Condition": { "KeyPrefixEquals": "jsearch/index.html" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "jsearch/" } },
     { "Condition": { "KeyPrefixEquals": "index/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "docker-io/" } },
-    { "Condition": { "KeyPrefixEquals": "reference/api/index_api/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "reference/api/docker-io_api/" } }
+    { "Condition": { "KeyPrefixEquals": "reference/api/index_api/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "reference/api/docker-io_api/" } },
+    { "Condition": { "KeyPrefixEquals": "examples/hello_world/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerizing/" } },
+    { "Condition": { "KeyPrefixEquals": "examples/python_web_app/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerizing/" } },
+    { "Condition": { "KeyPrefixEquals": "use/working_with_volumes/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockervolumes/" } },
+    { "Condition": { "KeyPrefixEquals": "use/working_with_links_names/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerlinks/" } },
+    { "Condition": { "KeyPrefixEquals": "use/workingwithrepository/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerrepos/" } },
+    { "Condition": { "KeyPrefixEquals": "use/port_redirection" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerlinks/" } },
+    { "Condition": { "KeyPrefixEquals": "use/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "examples/" } }
   ]
 }
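The order of these rules matters: S3 applies the first rule whose `KeyPrefixEquals` matches, so the specific `use/...` redirects must appear before the catch-all `use/` rule. The same first-match logic can be sanity-checked with a standalone shell sketch (a hypothetical helper covering a few of the rules above, not part of the commit):

```shell
#!/bin/sh
# First-match prefix routing, as S3 evaluates RoutingRules top to bottom.
# Only a subset of the rules is modeled; ordering is the point.
redirect() {
    key="$1"
    case "$key" in
        use/working_with_volumes/*) echo "userguide/dockervolumes/" ;;
        use/port_redirection*)      echo "userguide/dockerlinks/" ;;
        use/*)                      echo "examples/${key#use/}" ;;  # catch-all last
        *)                          echo "$key" ;;
    esac
}

redirect "use/port_redirection/"   # -> userguide/dockerlinks/
redirect "use/puppet/"             # -> examples/puppet/
```

If the catch-all `use/` case came first, every migrated `use/...` URL would land in `examples/` instead of its new `userguide/` home.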
 
... ...
@@ -1,8 +1,13 @@
 # Articles
 
-## Contents:
-
+ - [Docker Basics](basics/)
  - [Docker Security](security/)
+ - [Running the Docker daemon with HTTPS](https/)
+ - [Configure Networking](networking/)
+ - [Using Supervisor with Docker](using_supervisord/)
+ - [Process Management with CFEngine](cfengine_process_management/)
+ - [Using Puppet](puppet/)
  - [Create a Base Image](baseimages/)
  - [Runtime Metrics](runmetrics/)
-
+ - [Automatically Start Containers](host_integration/)
+ - [Link via an Ambassador Container](ambassador_pattern_linking/)
new file mode 100644
... ...
@@ -0,0 +1,150 @@
+page_title: Link via an Ambassador Container
+page_description: Using the Ambassador pattern to abstract (network) services
+page_keywords: Examples, Usage, links, docker, documentation, examples, names, name, container naming
+
+# Link via an Ambassador Container
+
+## Introduction
+
+Rather than hardcoding network links between a service consumer and a
+provider, Docker encourages service portability. For example, instead of:
+
+    (consumer) --> (redis)
+
+which requires you to restart the `consumer` to attach it to a different
+`redis` service, you can add ambassadors:
+
+    (consumer) --> (redis-ambassador) --> (redis)
+
+Or
+
+    (consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)
+
+When you need to rewire your consumer to talk to a different Redis
+server, you can just restart the `redis-ambassador` container that the
+consumer is connected to.
+
+This pattern also allows you to transparently move the Redis server to a
+different Docker host from the consumer.
+
+Using the `svendowideit/ambassador` container, the link wiring is
+controlled entirely from the `docker run` parameters.
+
+## Two-host Example
+
+Start the actual Redis server on one Docker host:
+
+    big-server $ docker run -d --name redis crosbymichael/redis
+
+Then add an ambassador linked to the Redis server, mapping a port to the
+outside world:
+
+    big-server $ docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador
+
+On the other host, you can set up another ambassador, setting environment
+variables for each remote port you want to proxy to the `big-server`:
+
+    client-server $ docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador
+
+Then, on the `client-server` host, you can use a Redis client container
+to talk to the remote Redis server, just by linking to the local Redis
+ambassador:
+
+    client-server $ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
+    redis 172.17.0.160:6379> ping
+    PONG
+
+## How it works
+
+The following example shows what the `svendowideit/ambassador` container
+does automatically (with a tiny amount of `sed`).
+
+On the Docker host (192.168.1.52) that Redis will run on:
+
+    # start the actual redis server
+    $ docker run -d --name redis crosbymichael/redis
+
+    # get a redis-cli container for connection testing
+    $ docker pull relateiq/redis-cli
+
+    # test the redis server by talking to it directly
+    $ docker run -t -i --rm --link redis:redis relateiq/redis-cli
+    redis 172.17.0.136:6379> ping
+    PONG
+    ^D
+
+    # add the redis ambassador
+    $ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh
+
+In the `redis_ambassador` container, you can see the linked Redis
+container's environment via `env`:
+
+    $ env
+    REDIS_PORT=tcp://172.17.0.136:6379
+    REDIS_PORT_6379_TCP_ADDR=172.17.0.136
+    REDIS_NAME=/redis_ambassador/redis
+    HOSTNAME=19d7adf4705e
+    REDIS_PORT_6379_TCP_PORT=6379
+    HOME=/
+    REDIS_PORT_6379_TCP_PROTO=tcp
+    container=lxc
+    REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
+    TERM=xterm
+    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+    PWD=/
+
+This environment is used by the ambassador `socat` script to expose Redis
+to the world (via the `-p 6379:6379` port mapping):
+
+    $ docker rm redis_ambassador
+    $ sudo ./contrib/mkimage-unittest.sh
+    $ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh
+
+    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
+
+You can now ping the Redis server via the ambassador. Next, go to a
+different server:
+
+    $ sudo ./contrib/mkimage-unittest.sh
+    $ docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh
+
+    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
+
+And get the `redis-cli` image so you can talk over the ambassador bridge:
+
+    $ docker pull relateiq/redis-cli
+    $ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
+    redis 172.17.0.160:6379> ping
+    PONG
+
+## The svendowideit/ambassador Dockerfile
+
+The `svendowideit/ambassador` image is a small `busybox` image with
+`socat` built in. When you start the container, it uses a small `sed`
+script to parse out the (possibly multiple) link environment variables
+and set up the port forwarding. On the remote host, you need to set the
+variable using the `-e` command line option:
+
+    --expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379
+
+This will forward the local port `1234` to the remote IP and port, in
+this case `192.168.1.52:6379`.
+
+    #
+    #
+    # first you need to build the docker-ut image
+    # using ./contrib/mkimage-unittest.sh
+    # then
+    #   docker build -t SvenDowideit/ambassador .
+    #   docker tag SvenDowideit/ambassador ambassador
+    # then to run it (on the host that has the real backend on it)
+    #   docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador
+    # on the remote host, you can set up another ambassador
+    #   docker run -t -i --name redis_ambassador --expose 6379 sh
+
+    FROM    docker-ut
+    MAINTAINER      SvenDowideit@home.org.au
+
+
+    CMD     env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'  | sh && top
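That `CMD` can be sanity-checked outside of Docker by feeding the same `grep | sed` pipeline a fake linked-container environment (a standalone sketch, not part of the commit; the sample values mirror the `env` output shown above):

```shell
#!/bin/sh
# Simulate a linked-container environment and run the same sed as the
# ambassador's CMD, to see the socat command it would generate.
fake_env() {
    echo "REDIS_NAME=/redis_ambassador/redis"
    echo "REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379"
}

fake_env | grep '_TCP=' \
    | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'
# prints: socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379 &
```

Only the `*_TCP=` variable survives the `grep`, and the three capture groups pull out the port, address, and remote port; the real image pipes each generated line into `sh` and keeps the container alive with `top`.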
new file mode 100644
... ...
@@ -0,0 +1,179 @@
+page_title: First steps with Docker
+page_description: Common usage and commands
+page_keywords: Examples, Usage, basic commands, docker, documentation, examples
+
+# First steps with Docker
+
+## Check your Docker install
+
+This guide assumes you have a working installation of Docker. To check
+your Docker install, run the following command:
+
+    # Check that you have a working install
+    $ docker info
+
+If you get `docker: command not found` or something like
+`/var/lib/docker/repositories: permission denied`, you may have an
+incomplete Docker installation or insufficient privileges to access
+Docker on your machine.
+
+Please refer to [*Installation*](/installation/#installation-list)
+for installation instructions.
+
+## Download a pre-built image
+
+    # Download an ubuntu image
+    $ sudo docker pull ubuntu
+
+This will find the `ubuntu` image by name on
+[*Docker.io*](/userguide/dockerrepos/#find-public-images-on-dockerio)
+and download it from [Docker.io](https://index.docker.io) to a local
+image cache.
+
+> **Note**:
+> When the image has successfully downloaded, you will see a 12-character
+> hash, `539c0211cd76: Download complete`, which is the short form of the
+> image ID. These short image IDs are the first 12 characters of the full
+> image ID, which can be found using `docker inspect` or
+> `docker images --no-trunc=true`.
+
+**If you're using OS X** then you shouldn't use `sudo`.
+
+## Running an interactive shell
+
+    # Run an interactive shell in the ubuntu image,
+    # allocate a tty, attach stdin and stdout
+    # To detach the tty without exiting the shell,
+    # use the escape sequence Ctrl-p + Ctrl-q
+    # note: This will continue to exist in a stopped state once exited (see "docker ps -a")
+    $ sudo docker run -i -t ubuntu /bin/bash
+
+## Bind Docker to another host/port or a Unix socket
+
+> **Warning**:
+> Changing the default `docker` daemon binding to a
+> TCP port or Unix *docker* user group will increase your security risks
+> by allowing non-root users to gain *root* access on the host. Make sure
+> you control access to `docker`. If you are binding
+> to a TCP port, anyone with access to that port has full Docker access;
+> so it is not advisable on an open network.
+
+With `-H` it is possible to make the Docker daemon listen on a
+specific IP and port. By default, it will listen on
+`unix:///var/run/docker.sock` to allow only local connections by the
+*root* user. You *could* set it to `0.0.0.0:4243` or a specific host IP
+to give access to everybody, but that is **not recommended** because
+then it is trivial for someone to gain root access to the host where the
+daemon is running.
+
+Similarly, the Docker client can use `-H` to connect to a custom port.
+
+`-H` accepts host and port assignment in the following format:
+
+    tcp://[host][:port] or unix://path
+
+For example:
+
+-   `tcp://host:4243` -> TCP connection on host:4243
+-   `unix://path/to/socket` -> Unix socket located at `path/to/socket`
+
+`-H`, when empty, will default to the same value as
+when no `-H` was passed in.
+
+`-H` also accepts a short form for TCP bindings:
+
+    host[:port] or :port
+
+Run Docker in daemon mode:
+
+    $ sudo <path to>/docker -H 0.0.0.0:5555 -d &
+
+Download an `ubuntu` image:
+
+    $ sudo docker -H :5555 pull ubuntu
+
+You can use multiple `-H` options, for example, if you want to listen on
+both TCP and a Unix socket:
+
+    # Run docker in daemon mode
+    $ sudo <path to>/docker -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -d &
+    # Download an ubuntu image, use default Unix socket
+    $ sudo docker pull ubuntu
+    # OR use the TCP port
+    $ sudo docker -H tcp://127.0.0.1:4243 pull ubuntu
+
+## Starting a long-running worker process
+
+    # Start a very useful long-running process
+    $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
+
+    # Collect the output of the job so far
+    $ sudo docker logs $JOB
+
+    # Kill the job
+    $ sudo docker kill $JOB
+
+## Listing containers
+
+    $ sudo docker ps # Lists only running containers
+    $ sudo docker ps -a # Lists all containers
+
+## Controlling containers
+
+    # Start a new container
+    $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
+
+    # Stop the container
+    $ docker stop $JOB
+
+    # Start the container
+    $ docker start $JOB
+
+    # Restart the container
+    $ docker restart $JOB
+
+    # SIGKILL a container
+    $ docker kill $JOB
+
+    # Remove a container
+    $ docker stop $JOB # Container must be stopped to remove it
+    $ docker rm $JOB
+
+## Bind a service on a TCP port
+
+    # Bind port 4444 of this container, and tell netcat to listen on it
+    $ JOB=$(sudo docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444)
+
+    # Which public port is NATed to my container?
+    $ PORT=$(sudo docker port $JOB 4444 | awk -F: '{ print $2 }')
+
+    # Connect to the public port
+    $ echo hello world | nc 127.0.0.1 $PORT
+
+    # Verify that the network connection worked
+    $ echo "Daemon received: $(sudo docker logs $JOB)"
+
+## Committing (saving) a container state
+
+Save your container's state to an image, so the state can be re-used.
+
+When you commit your container, only the differences between the image
+the container was created from and the current state of the container
+will be stored (as a diff). See which images you already have using the
+`docker images` command.
+
+    # Commit your container to a new named image
+    $ sudo docker commit <container_id> <some_name>
+
+    # List your images
+    $ sudo docker images
+
+You now have an image state from which you can create new instances.
+
+Read more about [*Share Images via
+Repositories*](/userguide/dockerrepos/#working-with-the-repository) or
+continue to the complete [*Command
+Line*](/reference/commandline/cli/#cli).
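The `docker port | awk` step in the TCP-port example can be checked without a running daemon by feeding `awk` the kind of `host:port` string `docker port` prints (a standalone sketch; the address and port are made up):

```shell
#!/bin/sh
# `docker port $JOB 4444` prints something like "0.0.0.0:49153";
# the awk below splits on ":" and keeps only the public port number.
sample="0.0.0.0:49153"   # hypothetical `docker port` output
PORT=$(echo "$sample" | awk -F: '{ print $2 }')
echo "$PORT"   # prints: 49153
```

Note this simple split assumes an IPv4 `host:port` form; an IPv6 address would contain extra colons and need a different field index.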
new file mode 100644
... ...
@@ -0,0 +1,144 @@
+page_title: Process Management with CFEngine
+page_description: Managing containerized processes with CFEngine
+page_keywords: cfengine, process, management, usage, docker, documentation
+
+# Process Management with CFEngine
+
+Create Docker containers with managed processes.
+
+Docker monitors one process in each running container and the container
+lives or dies with that process. By introducing CFEngine inside Docker
+containers, we can alleviate a few of the issues that may arise:
+
+ - It is possible to easily start multiple processes within a
+   container, all of which will be managed automatically, with the
+   normal `docker run` command.
+ - If a managed process dies or crashes, CFEngine will start it again
+   within 1 minute.
+ - The container itself will live as long as the CFEngine scheduling
+   daemon (`cf-execd`) lives. With CFEngine, we are able to decouple the
+   life of the container from the uptime of the service it provides.
+
+## How it works
+
+CFEngine, together with the cfe-docker integration policies, is
+installed as part of the Dockerfile. This builds CFEngine into our
+Docker image.
+
+The Dockerfile's `ENTRYPOINT` takes an arbitrary
+number of commands (with any desired arguments) as parameters. When we
+run the Docker container these parameters get written to CFEngine
+policies and CFEngine takes over to ensure that the desired processes
+are running in the container.
+
+CFEngine scans the process table for the `basename` of the commands given
+to the `ENTRYPOINT` and runs the command to start the process if the `basename`
+is not found. For example, if we start the container with
+`docker run "/path/to/my/application parameters"`, CFEngine will look for a
+process named `application` and run the command. If an entry for `application`
+is not found in the process table at any point in time, CFEngine will execute
+`/path/to/my/application parameters` to start the application once again. The
+check on the process table happens every minute.
+
+Note that it is therefore important that the command to start your
+application leaves a process with the basename of the command. This can
+be made more flexible by making some minor adjustments to the CFEngine
+policies, if desired.
+
+## Usage
+
+This example assumes you have Docker installed and working. We will
+install and manage `apache2` and `sshd` in a single container.
+
+There are three steps:
+
+1. Install CFEngine into the container.
+2. Copy the CFEngine Docker process management policy into the
+   containerized CFEngine installation.
+3. Start your application processes as part of the `docker run` command.
+
+### Building the image
+
+The first two steps can be done as part of a Dockerfile, as follows:
+
+    FROM ubuntu
+    MAINTAINER Eystein Måløy Stenberg <eytein.stenberg@gmail.com>
+
+    RUN apt-get -y install wget lsb-release unzip ca-certificates
+
+    # install latest CFEngine
+    RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
+    RUN echo "deb http://cfengine.com/pub/apt $(lsb_release -cs) main" > /etc/apt/sources.list.d/cfengine-community.list
+    RUN apt-get update
+    RUN apt-get -y install cfengine-community
+
+    # install cfe-docker process management policy
+    RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
+    RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
+    RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
+    RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip
+
+    # apache2 and openssh are just for testing purposes, install your own apps here
+    RUN apt-get -y install openssh-server apache2
+    RUN mkdir -p /var/run/sshd
+    RUN echo "root:password" | chpasswd  # need a password for ssh
+
+    ENTRYPOINT ["/var/cfengine/bin/docker_processes_run.sh"]
+
+By saving this file as `Dockerfile` in a working directory, you can then
+build your image with the `docker build` command, e.g.
+`docker build -t managed_image .`.
+
+### Testing the container
+
+Start the container with `apache2` and `sshd` running and managed, forwarding
+a port to our SSH instance:
+
+    $ docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start"
+
+We now clearly see one of the benefits of the cfe-docker integration: it
+allows us to start several processes as part of a normal `docker run` command.
+
+We can now log in to our new container and see that both `apache2` and `sshd`
+are running. We have set the root password to "password" in the Dockerfile
+above and can use that to log in with ssh:
+
+    ssh -p222 root@127.0.0.1
+
+    ps -ef
+    UID        PID  PPID  C STIME TTY          TIME CMD
+    root         1     0  0 07:48 ?        00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start
+    root        18     1  0 07:48 ?        00:00:00 /var/cfengine/bin/cf-execd -F
+    root        20     1  0 07:48 ?        00:00:00 /usr/sbin/sshd
+    root        32     1  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    www-data    34    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    www-data    35    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    www-data    36    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    root        93    20  0 07:48 ?        00:00:00 sshd: root@pts/0
+    root       105    93  0 07:48 pts/0    00:00:00 -bash
+    root       112   105  0 07:49 pts/0    00:00:00 ps -ef
+
+If we stop apache2, it will be started again within a minute by
+CFEngine:
+
+    service apache2 status
+     Apache2 is running (pid 32).
+    service apache2 stop
+             * Stopping web server apache2 ... waiting    [ OK ]
+    service apache2 status
+     Apache2 is NOT running.
+    # ... wait up to 1 minute...
+    service apache2 status
+     Apache2 is running (pid 173).
+
+## Adapting to your applications
+
+To make sure your applications get managed in the same manner, there are
+just two things you need to adjust from the above example:
+
+ - In the Dockerfile used above, install your applications instead of
+   `apache2` and `sshd`.
+ - When you start the container with `docker run`,
+   specify the command line arguments to your applications rather than
+   `apache2` and `sshd`.
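The basename matching described above (strip the arguments, then strip the path, then look for that name in the process table) can be sketched in plain shell (a standalone illustration, not the actual CFEngine policy; the path is made up):

```shell
#!/bin/sh
# CFEngine keys its process-table check on the basename of each command
# given to the ENTRYPOINT; this mimics that extraction step.
cmd="/path/to/my/application parameters"   # hypothetical docker run argument
proc=$(basename "${cmd%% *}")              # drop arguments, then the path
echo "$proc"   # prints: application

# The policy's check then reduces to something like:
#   if ! pgrep -x "$proc" > /dev/null; then eval "$cmd"; fi
# run once a minute by cf-execd.
```

This also makes the caveat above concrete: if starting `application` leaves no process actually named `application` (say, it exec's a differently named worker), the check would keep re-running the command.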
new file mode 100644
... ...
@@ -0,0 +1,74 @@
+page_title: Chef Usage
+page_description: Installation and using Docker via Chef
+page_keywords: chef, installation, usage, docker, documentation
+
+# Using Chef
+
+> **Note**:
+> Please note this is a community contributed installation path. The only
+> "official" installation is using the
+> [*Ubuntu*](/installation/ubuntulinux/#ubuntu-linux) installation
+> path. This version may sometimes be out of date.
+
+## Requirements
+
+To use this guide you'll need a working installation of
+[Chef](http://www.getchef.com/). This cookbook supports a variety of
+operating systems.
+
+## Installation
+
+The cookbook is available on the [Chef Community
+Site](http://community.opscode.com/cookbooks/docker) and can be
+installed using your favorite cookbook dependency manager.
+
+The source can be found on
+[GitHub](https://github.com/bflad/chef-docker).
+
+## Usage
+
+The cookbook provides recipes for installing Docker, configuring init
+for Docker, and resources for managing images and containers. It
+supports almost all Docker functionality.
+
+### Installation
+
+    include_recipe 'docker'
+
+### Images
+
+The next step is to pull a Docker image. For this, we have a resource:
+
+    docker_image 'samalba/docker-registry'
+
+This is equivalent to running:
+
+    $ docker pull samalba/docker-registry
+
+There are attributes available to control how long the cookbook will
+allow for downloading (5-minute default).
+
+To remove images you no longer need:
+
+    docker_image 'samalba/docker-registry' do
+      action :remove
+    end
+
+### Containers
+
+Now that you have an image, you can run commands within a container
+managed by Docker:
+
+    docker_container 'samalba/docker-registry' do
+      detach true
+      port '5000:5000'
+      env 'SETTINGS_FLAVOR=local'
+      volume '/mnt/docker:/docker-storage'
+    end
+
+This is equivalent to running the following command, but under upstart:
+
+    $ docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry
+
+The resources will accept a single string or an array of values for any
+Docker flags that allow multiple values.
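The resource-to-flag translation above is mechanical; a rough shell sketch of how those attributes become `docker run` flags (a simplified illustration only, not the cookbook's actual code, which also handles array-valued attributes):

```shell
#!/bin/sh
# Assemble the docker run invocation that the docker_container resource
# above is equivalent to: each attribute maps to one long-form flag.
image="samalba/docker-registry"
flags="--detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage'"
cmd="docker run $flags $image"
echo "$cmd"
```

Attributes that accept arrays (such as multiple `env` or `volume` values) would simply repeat their flag once per element.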
new file mode 100644
... ...
@@ -0,0 +1,60 @@
0
page_title: Automatically Start Containers
page_description: How to generate scripts for upstart, systemd, etc.
page_keywords: systemd, upstart, supervisor, docker, documentation, host integration

# Automatically Start Containers

You can use your Docker containers with process managers like
`upstart`, `systemd` and `supervisor`.

## Introduction

If you want a process manager to manage your containers you will need to
run the Docker daemon with the `-r=false` option so that Docker will not
automatically restart your containers when the host is restarted.

When you have finished setting up your image and are happy with your
running container, you can then attach a process manager to manage it.
When you run `docker start -a`, Docker will automatically attach to the
running container, or start it if needed, and forward all signals so that
the process manager can detect when a container stops and correctly
restart it.

Here are a few sample scripts for systemd and upstart to integrate with
Docker.

## Sample Upstart Script

In this example we've already created a container to run Redis with
`--name redis_server`. To create an upstart script for our container, we
create a file named `/etc/init/redis.conf` and place the following into
it:

    description "Redis container"
    author "Me"
    start on filesystem and started docker
    stop on runlevel [!2345]
    respawn
    script
      /usr/bin/docker start -a redis_server
    end script

Next, we have to configure Docker so that it's run with the option
`-r=false`. Run the following command:

    $ sudo sh -c "echo 'DOCKER_OPTS=\"-r=false\"' > /etc/default/docker"

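If you manage several containers this way, the job file is pure boilerplate. Here is a small illustrative Python sketch (not part of Docker or upstart) that renders the job shown above for any container name:

```python
# Illustrative helper: render the upstart job shown above for any
# container, so each managed container gets identical boilerplate.

UPSTART_TEMPLATE = """\
description "{description}"
author "Me"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a {container}
end script
"""

def render_upstart_job(container, description):
    return UPSTART_TEMPLATE.format(container=container,
                                   description=description)

# Write the result to a file such as /etc/init/redis.conf.
print(render_upstart_job("redis_server", "Redis container"))
```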

## Sample systemd Script

Put the following into a unit file, for example
`/etc/systemd/system/redis.service`:

    [Unit]
    Description=Redis container
    Author=Me
    After=docker.service

    [Service]
    Restart=always
    ExecStart=/usr/bin/docker start -a redis_server
    ExecStop=/usr/bin/docker stop -t 2 redis_server

    [Install]
    WantedBy=multi-user.target
page_title: Docker HTTPS Setup
page_description: How to setup docker with https
page_keywords: docker, example, https, daemon

# Running Docker with https

By default, Docker runs via a non-networked Unix socket. It can also
optionally communicate using an HTTP socket.

If you need Docker reachable via the network in a safe manner, you can
enable TLS by specifying the `--tlsverify` flag and pointing Docker's
`--tlscacert` flag to a trusted CA certificate.

In daemon mode, it will only allow connections from clients
authenticated by a certificate signed by that CA. In client mode, it
will only connect to servers with a certificate signed by that CA.

> **Warning**: 
> Using TLS and managing a CA is an advanced topic. Please familiarize
> yourself with OpenSSL, x509 and TLS before using it in production.

## Create a CA, server and client keys with OpenSSL

First, initialize the CA serial file and generate CA private and public
keys:

    $ echo 01 > ca.srl
    $ openssl genrsa -des3 -out ca-key.pem
    $ openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem

Now that we have a CA, you can create a server key and certificate
signing request. Make sure that "Common Name (e.g. server FQDN or YOUR
name)" matches the hostname you will use to connect to Docker, or just
use `*` for a certificate valid for any hostname:

    $ openssl genrsa -des3 -out server-key.pem
    $ openssl req -new -key server-key.pem -out server.csr

Next we're going to sign the key with our CA:

    $ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -out server-cert.pem

For client authentication, create a client key and certificate signing
request:

    $ openssl genrsa -des3 -out client-key.pem
    $ openssl req -new -key client-key.pem -out client.csr

To make the key suitable for client authentication, create an extensions
config file:

    $ echo extendedKeyUsage = clientAuth > extfile.cnf

Now sign the key:

    $ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
      -out client-cert.pem -extfile extfile.cnf

Finally, you need to remove the passphrase from the client and server
keys:

    $ openssl rsa -in server-key.pem -out server-key.pem
    $ openssl rsa -in client-key.pem -out client-key.pem

Now you can make the Docker daemon only accept connections from clients
providing a certificate trusted by our CA:

    $ sudo docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
      -H=0.0.0.0:4243

To be able to connect to Docker and validate its certificate, you now
need to provide your client keys, certificates and trusted CA:

    $ docker --tlsverify --tlscacert=ca.pem --tlscert=client-cert.pem --tlskey=client-key.pem \
      -H=dns-name-of-docker-host:4243

> **Warning**: 
> As shown in the example above, you don't have to run the
> `docker` client with `sudo` or
> the `docker` group when you use certificate
> authentication. That means anyone with the keys can give any
> instructions to your Docker daemon, giving them root access to the
> machine hosting the daemon. Guard these keys as you would a root
> password!

## Other modes

If you don't want to have complete two-way authentication, you can run
Docker in various other modes by mixing the flags.

### Daemon modes

 - `tlsverify`, `tlscacert`, `tlscert`, `tlskey` set: Authenticate clients
 - `tls`, `tlscert`, `tlskey`: Do not authenticate clients

### Client modes

 - `tls`: Authenticate server based on public/default CA pool
 - `tlsverify`, `tlscacert`: Authenticate server based on given CA
 - `tls`, `tlscert`, `tlskey`: Authenticate with client certificate, do not
   authenticate server based on given CA
 - `tlsverify`, `tlscacert`, `tlscert`, `tlskey`: Authenticate with client
   certificate, authenticate server based on given CA

The client will send its client certificate if found, so you just need
to drop your keys into `~/.docker/<ca, cert or key>.pem`.
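The mode tables above can be summarized as a small lookup table. This is an illustrative sketch, not Docker source code:

```python
# Illustrative summary of the daemon and client mode tables above:
# which side gets authenticated for each combination of TLS flags.

DAEMON_MODES = {
    frozenset(["tlsverify", "tlscacert", "tlscert", "tlskey"]):
        "authenticate clients",
    frozenset(["tls", "tlscert", "tlskey"]):
        "do not authenticate clients",
}

CLIENT_MODES = {
    frozenset(["tls"]):
        "authenticate server via public/default CA pool",
    frozenset(["tlsverify", "tlscacert"]):
        "authenticate server via given CA",
    frozenset(["tls", "tlscert", "tlskey"]):
        "send client certificate, do not authenticate server",
    frozenset(["tlsverify", "tlscacert", "tlscert", "tlskey"]):
        "send client certificate, authenticate server via given CA",
}

print(CLIENT_MODES[frozenset(["tlsverify", "tlscacert"])])
```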
page_title: Network Configuration
page_description: Docker networking
page_keywords: network, networking, bridge, docker, documentation

# Network Configuration

## TL;DR

When Docker starts, it creates a virtual interface named `docker0` on
the host machine.  It randomly chooses an address and subnet from the
private range defined by [RFC 1918](http://tools.ietf.org/html/rfc1918)
that are not in use on the host machine, and assigns it to `docker0`.
Docker made the choice `172.17.42.1/16` when I started it a few minutes
ago, for example — a 16-bit netmask providing 65,534 addresses for the
host machine and its containers.
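You can verify both of those numbers with the Python standard library; this check is purely illustrative:

```python
# A /16 netmask leaves 16 host bits: 2**16 addresses, minus the network
# and broadcast addresses, leaves 65,534 usable ones.  The address
# Docker picked also sits inside the RFC 1918 private space.
import ipaddress

subnet = ipaddress.ip_network("172.17.0.0/16")  # the network behind 172.17.42.1/16
usable = subnet.num_addresses - 2               # minus network and broadcast
print(usable)                                   # 65534

print(ipaddress.ip_address("172.17.42.1").is_private)  # True
```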

> **Note:** 
> This document discusses advanced networking configuration
> and options for Docker. In most cases you won't need this information.
> If you're looking to get started with a simpler explanation of Docker
> networking and an introduction to the concept of container linking see
> the [Docker User Guide](/userguide/dockerlinks/).

But `docker0` is no ordinary interface.  It is a virtual *Ethernet
bridge* that automatically forwards packets between any other network
interfaces that are attached to it.  This lets containers communicate
both with the host machine and with each other.  Every time Docker
creates a container, it creates a pair of “peer” interfaces that are
like opposite ends of a pipe — a packet sent on one will be received on
the other.  It gives one of the peers to the container to become its
`eth0` interface and keeps the other peer, with a unique name like
`vethAQI2QT`, out in the namespace of the host machine.  By binding
every `veth*` interface to the `docker0` bridge, Docker creates a
virtual subnet shared between the host machine and every Docker
container.

The remaining sections of this document explain all of the ways that you
can use Docker options and — in advanced cases — raw Linux networking
commands to tweak, supplement, or entirely replace Docker's default
networking configuration.
+networking configuration.
40
+
41
+## Quick Guide to the Options
42
+
43
+Here is a quick list of the networking-related Docker command-line
44
+options, in case it helps you find the section below that you are
45
+looking for.
46
+
47
+Some networking command-line options can only be supplied to the Docker
48
+server when it starts up, and cannot be changed once it is running:
49
+
50
+ *  `-b BRIDGE` or `--bridge=BRIDGE` — see
51
+    [Building your own bridge](#bridge-building)
52
+
53
+ *  `--bip=CIDR` — see
54
+    [Customizing docker0](#docker0)
55
+
56
+ *  `-H SOCKET...` or `--host=SOCKET...` —
57
+    This might sound like it would affect container networking,
58
+    but it actually faces in the other direction:
59
+    it tells the Docker server over what channels
60
+    it should be willing to receive commands
61
+    like “run container” and “stop container.”
62
+
63
+ *  `--icc=true|false` — see
64
+    [Communication between containers](#between-containers)
65
+
66
+ *  `--ip=IP_ADDRESS` — see
67
+    [Binding container ports](#binding-ports)
68
+
69
+ *  `--ip-forward=true|false` — see
70
+    [Communication between containers](#between-containers)
71
+
72
+ *  `--iptables=true|false` — see
73
+    [Communication between containers](#between-containers)
74
+
75
+ *  `--mtu=BYTES` — see
76
+    [Customizing docker0](#docker0)
77
+
78
+There are two networking options that can be supplied either at startup
79
+or when `docker run` is invoked.  When provided at startup, set the
80
+default value that `docker run` will later use if the options are not
81
+specified:
82
+
83
+ *  `--dns=IP_ADDRESS...` — see
84
+    [Configuring DNS](#dns)
85
+
86
+ *  `--dns-search=DOMAIN...` — see
87
+    [Configuring DNS](#dns)
88
+
89
+Finally, several networking options can only be provided when calling
90
+`docker run` because they specify something specific to one container:
91
+
92
+ *  `-h HOSTNAME` or `--hostname=HOSTNAME` — see
93
+    [Configuring DNS](#dns) and
94
+    [How Docker networks a container](#container-networking)
95
+
96
+ *  `--link=CONTAINER_NAME:ALIAS` — see
97
+    [Configuring DNS](#dns) and
98
+    [Communication between containers](#between-containers)
99
+
100
+ *  `--net=bridge|none|container:NAME_or_ID|host` — see
101
+    [How Docker networks a container](#container-networking)
102
+
103
+ *  `-p SPEC` or `--publish=SPEC` — see
104
+    [Binding container ports](#binding-ports)
105
+
106
+ *  `-P` or `--publish-all=true|false` — see
107
+    [Binding container ports](#binding-ports)
108
+
109
+The following sections tackle all of the above topics in an order that
110
+moves roughly from simplest to most complex.

## <a name="dns"></a>Configuring DNS

How can Docker supply each container with a hostname and DNS
configuration, without having to build a custom image with the hostname
written inside?  Its trick is to overlay three crucial `/etc` files
inside the container with virtual files where it can write fresh
information.  You can see this by running `mount` inside a container:

    $$ mount
    ...
    /dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
    /dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
    tmpfs on /etc/resolv.conf type tmpfs ...
    ...

This arrangement allows Docker to do clever things like keep
`resolv.conf` up to date across all containers when the host machine
receives new configuration over DHCP later.  The exact details of how
Docker maintains these files inside the container can change from one
Docker version to the next, so you should leave the files themselves
alone and use the following Docker options instead.

Four different options affect container domain name services.

 *  `-h HOSTNAME` or `--hostname=HOSTNAME` — sets the hostname by which
    the container knows itself.  This is written into `/etc/hostname`,
    into `/etc/hosts` as the name of the container’s host-facing IP
    address, and is the name that `/bin/bash` inside the container will
    display inside its prompt.  But the hostname is not easy to see from
    outside the container.  It will not appear in `docker ps` nor in the
    `/etc/hosts` file of any other container.

 *  `--link=CONTAINER_NAME:ALIAS` — using this option as you `run` a
    container gives the new container’s `/etc/hosts` an extra entry
    named `ALIAS` that points to the IP address of the container named
    `CONTAINER_NAME`.  This lets processes inside the new container
    connect to the hostname `ALIAS` without having to know its IP.  The
    `--link=` option is discussed in more detail below, in the section
    [Communication between containers](#between-containers).

 *  `--dns=IP_ADDRESS...` — sets the IP addresses added as `nameserver`
    lines to the container's `/etc/resolv.conf` file.  Processes in the
    container, when confronted with a hostname not in `/etc/hosts`, will
    connect to these IP addresses on port 53 looking for name resolution
    services.

 *  `--dns-search=DOMAIN...` — sets the domain names that are searched
    when a bare unqualified hostname is used inside of the container, by
    writing `search` lines into the container’s `/etc/resolv.conf`.
    When a container process attempts to access `host` and the search
    domain `example.com` is set, for instance, the DNS logic will not
    only look up `host` but also `host.example.com`.

Note that Docker, in the absence of either of the last two options
above, will make `/etc/resolv.conf` inside of each container look like
the `/etc/resolv.conf` of the host machine where the `docker` daemon is
running.  The options then modify this default configuration.
+running.  The options then modify this default configuration.
169
+
170
+## <a name="between-containers"></a>Communication between containers
171
+
172
+Whether two containers can communicate is governed, at the operating
173
+system level, by three factors.
174
+
175
+1.  Does the network topology even connect the containers’ network
176
+    interfaces?  By default Docker will attach all containers to a
177
+    single `docker0` bridge, providing a path for packets to travel
178
+    between them.  See the later sections of this document for other
179
+    possible topologies.
180
+
181
+2.  Is the host machine willing to forward IP packets?  This is governed
182
+    by the `ip_forward` system parameter.  Packets can only pass between
183
+    containers if this parameter is `1`.  Usually you will simply leave
184
+    the Docker server at its default setting `--ip-forward=true` and
185
+    Docker will go set `ip_forward` to `1` for you when the server
186
+    starts up.  To check the setting or turn it on manually:
187
+
188
+        # Usually not necessary: turning on forwarding,
189
+        # on the host where your Docker server is running
190
+
191
+        $ cat /proc/sys/net/ipv4/ip_forward
192
+        0
193
+        $ sudo echo 1 > /proc/sys/net/ipv4/ip_forward
194
+        $ cat /proc/sys/net/ipv4/ip_forward
195
+        1
196
+
197
+3.  Do your `iptables` allow this particular connection to be made?
198
+    Docker will never make changes to your system `iptables` rules if
199
+    you set `--iptables=false` when the daemon starts.  Otherwise the
200
+    Docker server will add a default rule to the `FORWARD` chain with a
201
+    blanket `ACCEPT` policy if you retain the default `--icc=true`, or
202
+    else will set the policy to `DROP` if `--icc=false`.
203
+
204
+Nearly everyone using Docker will want `ip_forward` to be on, to at
205
+least make communication *possible* between containers.  But it is a
206
+strategic question whether to leave `--icc=true` or change it to
207
+`--icc=false` (on Ubuntu, by editing the `DOCKER_OPTS` variable in
208
+`/etc/default/docker` and restarting the Docker server) so that
209
+`iptables` will protect other containers — and the main host — from
210
+having arbitrary ports probed or accessed by a container that gets
211
+compromised.
212
+
213
+If you choose the most secure setting of `--icc=false`, then how can
214
+containers communicate in those cases where you *want* them to provide
215
+each other services?
216
+
217
+The answer is the `--link=CONTAINER_NAME:ALIAS` option, which was
218
+mentioned in the previous section because of its effect upon name
219
+services.  If the Docker daemon is running with both `--icc=false` and
220
+`--iptables=true` then, when it sees `docker run` invoked with the
221
+`--link=` option, the Docker server will insert a pair of `iptables`
222
+`ACCEPT` rules so that the new container can connect to the ports
223
+exposed by the other container — the ports that it mentioned in the
224
+`EXPOSE` lines of its `Dockerfile`.  Docker has more documentation on
225
+this subject — see the [linking Docker containers](/userguide/dockerlinks)
226
+page for further details.
227
+
228
+> **Note**:
229
+> The value `CONTAINER_NAME` in `--link=` must either be an
230
+> auto-assigned Docker name like `stupefied_pare` or else the name you
231
+> assigned with `--name=` when you ran `docker run`.  It cannot be a
232
+> hostname, which Docker will not recognize in the context of the
233
+> `--link=` option.
234
+
235
+You can run the `iptables` command on your Docker host to see whether
236
+the `FORWARD` chain has a default policy of `ACCEPT` or `DROP`:
237
+
238
+    # When --icc=false, you should see a DROP rule:
239
+
240
+    $ sudo iptables -L -n
241
+    ...
242
+    Chain FORWARD (policy ACCEPT)
243
+    target     prot opt source               destination
244
+    DROP       all  --  0.0.0.0/0            0.0.0.0/0
245
+    ...
246
+
247
+    # When a --link= has been created under --icc=false,
248
+    # you should see port-specific ACCEPT rules overriding
249
+    # the subsequent DROP policy for all other packets:
250
+
251
+    $ sudo iptables -L -n
252
+    ...
253
+    Chain FORWARD (policy ACCEPT)
254
+    target     prot opt source               destination
255
+    ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
256
+    ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80
257
+    DROP       all  --  0.0.0.0/0            0.0.0.0/0
258
+
259
+> **Note**:
260
+> Docker is careful that its host-wide `iptables` rules fully expose
261
+> containers to each other’s raw IP addresses, so connections from one
262
+> container to another should always appear to be originating from the
263
+> first container’s own IP address.
264
+
265
+## <a name="binding-ports"></a>Binding container ports to the host
266
+
267
+By default Docker containers can make connections to the outside world,
268
+but the outside world cannot connect to containers.  Each outgoing
269
+connection will appear to originate from one of the host machine’s own
270
+IP addresses thanks to an `iptables` masquerading rule on the host
271
+machine that the Docker server creates when it starts:
272
+
273
+    # You can see that the Docker server creates a
274
+    # masquerade rule that let containers connect
275
+    # to IP addresses in the outside world:
276
+
277
+    $ sudo iptables -t nat -L -n
278
+    ...
279
+    Chain POSTROUTING (policy ACCEPT)
280
+    target     prot opt source               destination
281
+    MASQUERADE  all  --  172.17.0.0/16       !172.17.0.0/16
282
+    ...
283
+
284
+But if you want containers to accept incoming connections, you will need
285
+to provide special options when invoking `docker run`.  These options
286
+are covered in more detail in the [Docker User Guide](/userguide/dockerlinks)
287
+page.  There are two approaches.
288
+
289
+First, you can supply `-P` or `--publish-all=true|false` to `docker run`
290
+which is a blanket operation that identifies every port with an `EXPOSE`
291
+line in the image’s `Dockerfile` and maps it to a host port somewhere in
292
+the range 49000–49900.  This tends to be a bit inconvenient, since you
293
+then have to run other `docker` sub-commands to learn which external
294
+port a given service was mapped to.
295
+
296
+More convenient is the `-p SPEC` or `--publish=SPEC` option which lets
297
+you be explicit about exactly which external port on the Docker server —
298
+which can be any port at all, not just those in the 49000–49900 block —
299
+you want mapped to which port in the container.
300
+
301
+Either way, you should be able to peek at what Docker has accomplished
302
+in your network stack by examining your NAT tables.
303
+
304
+    # What your NAT rules might look like when Docker
305
+    # is finished setting up a -P forward:
306
+
307
+    $ iptables -t nat -L -n
308
+    ...
309
+    Chain DOCKER (2 references)
310
+    target     prot opt source               destination
311
+    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:49153 to:172.17.0.2:80
312
+
313
+    # What your NAT rules might look like when Docker
314
+    # is finished setting up a -p 80:80 forward:
315
+
316
+    Chain DOCKER (2 references)
317
+    target     prot opt source               destination
318
+    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80
319
+
320
+You can see that Docker has exposed these container ports on `0.0.0.0`,
321
+the wildcard IP address that will match any possible incoming port on
322
+the host machine.  If you want to be more restrictive and only allow
323
+container services to be contacted through a specific external interface
324
+on the host machine, you have two choices.  When you invoke `docker run`
325
+you can use either `-p IP:host_port:container_port` or `-p IP::port` to
326
+specify the external interface for one particular binding.
327
+
328
+Or if you always want Docker port forwards to bind to one specific IP
329
+address, you can edit your system-wide Docker server settings (on
330
+Ubuntu, by editing `DOCKER_OPTS` in `/etc/default/docker`) and add the
331
+option `--ip=IP_ADDRESS`.  Remember to restart your Docker server after
332
+editing this setting.
333
+
334
+Again, this topic is covered without all of these low-level networking
335
+details in the [Docker User Guide](/userguide/dockerlinks/) document if you
336
+would like to use that as your port redirection reference instead.
337
+
338
+## <a name="docker0"></a>Customizing docker0
339
+
340
+By default, the Docker server creates and configures the host system’s
341
+`docker0` interface as an *Ethernet bridge* inside the Linux kernel that
342
+can pass packets back and forth between other physical or virtual
343
+network interfaces so that they behave as a single Ethernet network.
344
+
345
+Docker configures `docker0` with an IP address and netmask so the host
346
+machine can both receive and send packets to containers connected to the
347
+bridge, and gives it an MTU — the *maximum transmission unit* or largest
348
+packet length that the interface will allow — of either 1,500 bytes or
349
+else a more specific value copied from the Docker host’s interface that
350
+supports its default route.  Both are configurable at server startup:
351
+
352
+ *  `--bip=CIDR` — supply a specific IP address and netmask for the
353
+    `docker0` bridge, using standard CIDR notation like
354
+    `192.168.1.5/24`.
355
+
356
+ *  `--mtu=BYTES` — override the maximum packet length on `docker0`.
357
+
358
+On Ubuntu you would add these to the `DOCKER_OPTS` setting in
359
+`/etc/default/docker` on your Docker host and restarting the Docker
360
+service.
361
+
362
+Once you have one or more containers up and running, you can confirm
363
+that Docker has properly connected them to the `docker0` bridge by
364
+running the `brctl` command on the host machine and looking at the
365
+`interfaces` column of the output.  Here is a host with two different
366
+containers connected:
367
+
368
+    # Display bridge info
369
+
370
+    $ sudo brctl show
371
+    bridge name     bridge id               STP enabled     interfaces
372
+    docker0         8000.3a1d7362b4ee       no              veth65f9
373
+                                                            vethdda6
374
+
375
+If the `brctl` command is not installed on your Docker host, then on
376
+Ubuntu you should be able to run `sudo apt-get install bridge-utils` to
377
+install it.
378
+
379
+Finally, the `docker0` Ethernet bridge settings are used every time you
380
+create a new container.  Docker selects a free IP address from the range
381
+available on the bridge each time you `docker run` a new container, and
382
+configures the container’s `eth0` interface with that IP address and the
383
+bridge’s netmask.  The Docker host’s own IP address on the bridge is
384
+used as the default gateway by which each container reaches the rest of
385
+the Internet.
386
+
387
+    # The network, as seen from a container
388
+
389
+    $ sudo docker run -i -t --rm base /bin/bash
390
+
391
+    $$ ip addr show eth0
392
+    24: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
393
+        link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
394
+        inet 172.17.0.3/16 scope global eth0
395
+           valid_lft forever preferred_lft forever
396
+        inet6 fe80::306f:e0ff:fe35:5791/64 scope link
397
+           valid_lft forever preferred_lft forever
398
+
399
+    $$ ip route
400
+    default via 172.17.42.1 dev eth0
401
+    172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.3
402
+
403
+    $$ exit
404
+
405
+Remember that the Docker host will not be willing to forward container
406
+packets out on to the Internet unless its `ip_forward` system setting is
407
+`1` — see the section above on [Communication between
408
+containers](#between-containers) for details.
409
+
410
+## <a name="bridge-building"></a>Building your own bridge
411
+
412
+If you want to take Docker out of the business of creating its own
413
+Ethernet bridge entirely, you can set up your own bridge before starting
414
+Docker and use `-b BRIDGE` or `--bridge=BRIDGE` to tell Docker to use
415
+your bridge instead.  If you already have Docker up and running with its
416
+old `bridge0` still configured, you will probably want to begin by
417
+stopping the service and removing the interface:
418
+
419
+    # Stopping Docker and removing docker0
420
+
421
+    $ sudo service docker stop
422
+    $ sudo ip link set dev docker0 down
423
+    $ sudo brctl delbr docker0
424
+
425
+Then, before starting the Docker service, create your own bridge and
426
+give it whatever configuration you want.  Here we will create a simple
427
+enough bridge that we really could just have used the options in the
428
+previous section to customize `docker0`, but it will be enough to
429
+illustrate the technique.
430
+
431
+    # Create our own bridge
432
+
433
+    $ sudo brctl addbr bridge0
434
+    $ sudo ip addr add 192.168.5.1/24 dev bridge0
435
+    $ sudo ip link set dev bridge0 up
436
+
437
+    # Confirming that our bridge is up and running
438
+
439
+    $ ip addr show bridge0
440
+    4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
441
+        link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
442
+        inet 192.168.5.1/24 scope global bridge0
443
+           valid_lft forever preferred_lft forever
444
+
445
+    # Tell Docker about it and restart (on Ubuntu)
446
+
447
+    $ echo 'DOCKER_OPTS="-b=bridge0"' >> /etc/default/docker
448
+    $ sudo service docker start
449
+
450
+The result should be that the Docker server starts successfully and is
451
+now prepared to bind containers to the new bridge.  After pausing to
452
+verify the bridge’s configuration, try creating a container — you will
453
+see that its IP address is in your new IP address range, which Docker
454
+will have auto-detected.
455
+
456
+Just as we learned in the previous section, you can use the `brctl show`
457
+command to see Docker add and remove interfaces from the bridge as you
458
+start and stop containers, and can run `ip addr` and `ip route` inside a
459
+container to see that it has been given an address in the bridge’s IP
460
+address range and has been told to use the Docker host’s IP address on
461
+the bridge as its default gateway to the rest of the Internet.
462
+
463
+## <a name="container-networking"></a>How Docker networks a container
464
+
465
+While Docker is under active development and continues to tweak and
466
+improve its network configuration logic, the shell commands in this
467
+section are rough equivalents to the steps that Docker takes when
468
+configuring networking for each new container.
469
+
470
+Let’s review a few basics.
471
+
472
+To communicate using the Internet Protocol (IP), a machine needs access
473
+to at least one network interface at which packets can be sent and
474
+received, and a routing table that defines the range of IP addresses
475
+reachable through that interface.  Network interfaces do not have to be
476
+physical devices.  In fact, the `lo` loopback interface available on
477
+every Linux machine (and inside each Docker container) is entirely
478
+virtual — the Linux kernel simply copies loopback packets directly from
479
+the sender’s memory into the receiver’s memory.
480
+
481
+Docker uses special virtual interfaces to let containers communicate
482
+with the host machine — pairs of virtual interfaces called “peers” that
483
+are linked inside of the host machine’s kernel so that packets can
484
+travel between them.  They are simple to create, as we will see in a
485
+moment.
486
+
487
+The steps with which Docker configures a container are:
488
+
489
+1.  Create a pair of peer virtual interfaces.
490
+
491
+2.  Give one of them a unique name like `veth65f9`, keep it inside of
492
+    the main Docker host, and bind it to `docker0` or whatever bridge
493
+    Docker is supposed to be using.
494
+
495
+3.  Toss the other interface over the wall into the new container (which
496
+    will already have been provided with an `lo` interface) and rename
497
+    it to the much prettier name `eth0` since, inside of the container’s
498
+    separate and unique network interface namespace, there are no
499
+    physical interfaces with which this name could collide.
500
+
501
+4.  Give the container’s `eth0` a new IP address from within the
502
+    bridge’s range of network addresses, and set its default route to
503
+    the IP address that the Docker host owns on the bridge.
504
+
505
+With these steps complete, the container now possesses an `eth0`
506
+(virtual) network card and will find itself able to communicate with
507
+other containers and the rest of the Internet.
508
+
509
+You can opt out of the above process for a particular container by
510
+giving the `--net=` option to `docker run`, which takes four possible
511
+values.
512
+
513
+ *  `--net=bridge` — The default action, that connects the container to
514
+    the Docker bridge as described above.
515
+
516
+ *  `--net=host` — Tells Docker to skip placing the container inside of
517
+    a separate network stack.  In essence, this choice tells Docker to
518
+    **not containerize the container’s networking**!  While container
519
+    processes will still be confined to their own filesystem and process
520
+    list and resource limits, a quick `ip addr` command will show you
521
+    that, network-wise, they live “outside” in the main Docker host and
522
+    have full access to its network interfaces.  Note that this does
523
+    **not** let the container reconfigure the host network stack — that
524
+    would require `--privileged=true` — but it does let container
525
+    processes open low-numbered ports like any other root process.
526
+
527
+ *  `--net=container:NAME_or_ID` — Tells Docker to put this container’s
528
+    processes inside of the network stack that has already been created
529
+    inside of another container.  The new container’s processes will be
530
+    confined to their own filesystem and process list and resource
531
+    limits, but will share the same IP address and port numbers as the
532
+    first container, and processes on the two containers will be able to
533
+    connect to each other over the loopback interface.
534
+
535
+ *  `--net=none` — Tells Docker to put the container inside of its own
536
+    network stack but not to take any steps to configure its network,
537
+    leaving you free to build any of the custom configurations explored
538
+    in the last few sections of this document.
539
+
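As a quick sketch, the four choices look like this on the command line (the image name `base` and container name `web` here are just placeholders):

```shell
$ sudo docker run --net=bridge base ip addr          # default: veth pair bound to docker0
$ sudo docker run --net=host base ip addr            # shows the host's own interfaces
$ sudo docker run --name=web base sleep 600          # a first container to share with...
$ sudo docker run --net=container:web base ip addr   # ...same IP and ports as "web"
$ sudo docker run --net=none base ip addr            # only "lo"; configure it yourself
```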
540
+To get an idea of the steps that are necessary if you use `--net=none`
541
+as described in that last bullet point, here are the commands that you
542
+would run to reach roughly the same configuration as if you had let
543
+Docker do all of the configuration:
544
+
545
+    # At one shell, start a container and
546
+    # leave its shell idle and running
547
+
548
+    $ sudo docker run -i -t --rm --net=none base /bin/bash
549
+    root@63f36fc01b5f:/#
550
+
551
+    # At another shell, learn the container process ID
552
+    # and create its namespace entry in /var/run/netns/
553
+    # for the "ip netns" command we will be using below
554
+
555
+    $ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
556
+    2778
557
+    $ pid=2778
558
+    $ sudo mkdir -p /var/run/netns
559
+    $ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
560
+
561
+    # Check the bridge’s IP address and netmask
562
+
563
+    $ ip addr show docker0
564
+    21: docker0: ...
565
+    inet 172.17.42.1/16 scope global docker0
566
+    ...
567
+
568
+    # Create a pair of "peer" interfaces A and B,
569
+    # bind the A end to the bridge, and bring it up
570
+
571
+    $ sudo ip link add A type veth peer name B
572
+    $ sudo brctl addif docker0 A
573
+    $ sudo ip link set A up
574
+
575
+    # Place B inside the container's network namespace,
576
+    # rename to eth0, and activate it with a free IP
577
+
578
+    $ sudo ip link set B netns $pid
579
+    $ sudo ip netns exec $pid ip link set dev B name eth0
580
+    $ sudo ip netns exec $pid ip link set eth0 up
581
+    $ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
582
+    $ sudo ip netns exec $pid ip route add default via 172.17.42.1
583
+
584
+At this point your container should be able to perform networking
585
+operations as usual.
586
+
587
+When you finally exit the shell and Docker cleans up the container, the
588
+network namespace is destroyed along with our virtual `eth0` — whose
589
+destruction in turn destroys interface `A` out in the Docker host and
590
+automatically un-registers it from the `docker0` bridge.  So everything
591
+gets cleaned up without our having to run any extra commands!  Well,
592
+almost everything:
593
+
594
+    # Clean up dangling symlinks in /var/run/netns
595
+
596
+    find -L /var/run/netns -type l -delete
597
+
598
+Also note that while the commands above use the modern `ip` command
599
+rather than older, deprecated wrappers like `ifconfig` and `route`, those
600
+older commands would also have worked inside of our container.  The `ip addr`
601
+command can be typed as `ip a` if you are in a hurry.
602
+
603
+Finally, note the importance of the `ip netns exec` command, which let
604
+us reach inside and configure a network namespace as root.  The same
605
+commands would not have worked if run inside of the container, because
606
+part of safe containerization is that Docker strips container processes
607
+of the right to configure their own networks.  Using `ip netns exec` is
608
+what let us finish up the configuration without having to take the
609
+dangerous step of running the container itself with `--privileged=true`.
610
+
611
+## Tools and Examples
612
+
613
+Before diving into the following sections on custom network topologies,
614
+you might be interested in glancing at a few external tools or examples
615
+of the same kinds of configuration.  Here are two:
616
+
617
+ *  Jérôme Petazzoni has created a `pipework` shell script to help you
618
+    connect together containers in arbitrarily complex scenarios:
619
+    <https://github.com/jpetazzo/pipework>
620
+
621
+ *  Brandon Rhodes has created a whole network topology of Docker
622
+    containers for the next edition of Foundations of Python Network
623
+    Programming that includes routing, NAT’d firewalls, and servers that
624
+    offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP:
625
+    <https://github.com/brandon-rhodes/fopnp/tree/m/playground>
626
+
627
+Both tools use networking commands very much like the ones you saw in
628
+the previous section, and like those you will see in the following sections.
629
+
630
+## <a name="point-to-point"></a>Building a point-to-point connection
631
+
632
+By default, Docker attaches all containers to the virtual subnet
633
+implemented by `docker0`.  You can create containers that are each
634
+connected to some different virtual subnet by creating your own bridge
635
+as shown in [Building your own bridge](#bridge-building), starting each
636
+container with `docker run --net=none`, and then attaching the
637
+containers to your bridge with the shell commands shown in [How Docker
638
+networks a container](#container-networking).
639
+
640
+But sometimes you want two particular containers to be able to
641
+communicate directly without the added complexity of both being bound to
642
+a host-wide Ethernet bridge.
643
+
644
+The solution is simple: when you create your pair of peer interfaces,
645
+simply throw *both* of them into containers, and configure them as
646
+classic point-to-point links.  The two containers will then be able to
647
+communicate directly (provided you manage to tell each container the
648
+other’s IP address, of course).  You might adjust the instructions of
649
+the previous section to go something like this:
650
+
651
+    # Start up two containers in two terminal windows
652
+
653
+    $ sudo docker run -i -t --rm --net=none base /bin/bash
654
+    root@1f1f4c1f931a:/#
655
+
656
+    $ sudo docker run -i -t --rm --net=none base /bin/bash
657
+    root@12e343489d2f:/#
658
+
659
+    # Learn the container process IDs
660
+    # and create their namespace entries
661
+
662
+    $ sudo docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a
663
+    2989
664
+    $ sudo docker inspect -f '{{.State.Pid}}' 12e343489d2f
665
+    3004
666
+    $ sudo mkdir -p /var/run/netns
667
+    $ sudo ln -s /proc/2989/ns/net /var/run/netns/2989
668
+    $ sudo ln -s /proc/3004/ns/net /var/run/netns/3004
669
+
670
+    # Create the "peer" interfaces and hand them out
671
+
672
+    $ sudo ip link add A type veth peer name B
673
+
674
+    $ sudo ip link set A netns 2989
675
+    $ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A
676
+    $ sudo ip netns exec 2989 ip link set A up
677
+    $ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A
678
+
679
+    $ sudo ip link set B netns 3004
680
+    $ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B
681
+    $ sudo ip netns exec 3004 ip link set B up
682
+    $ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B
683
+
684
+The two containers should now be able to ping each other and make
685
+connections successfully.  Point-to-point links like this do not depend
686
+on a subnet or a netmask, but on the bare assertion made by `ip route`
687
+that some other single IP address is connected to a particular network
688
+interface.
689
+
690
+Note that point-to-point links can be safely combined with other kinds
691
+of network connectivity — there is no need to start the containers with
692
+`--net=none` if you want point-to-point links to be an addition to the
693
+container’s normal networking instead of a replacement.
694
+
695
+A final permutation of this pattern is to create the point-to-point link
696
+between the Docker host and one container, which would allow the host to
697
+communicate with that one container on some single IP address and thus
698
+communicate “out-of-band” of the bridge that connects the other, more
699
+usual containers.  But unless you have very specific networking needs
700
+that drive you to such a solution, it is probably far preferable to use
701
+`--icc=false` to lock down inter-container communication, as we explored
702
+earlier.
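If you do decide you need a host-to-container point-to-point link, the recipe is the same as the container-to-container case above, except that one end of the peer pair simply stays in the host's own namespace. A sketch, where `$pid` and the `10.1.2.x` addresses are placeholders chosen for illustration:

```shell
# Create the peer pair; A stays on the host, B goes into the container
$ sudo ip link add A type veth peer name B
$ sudo ip link set A up
$ sudo ip addr add 10.1.2.1/32 dev A
$ sudo ip route add 10.1.2.2/32 dev A

# Hand B to the container's network namespace and configure it
$ sudo ip link set B netns $pid
$ sudo ip netns exec $pid ip link set B up
$ sudo ip netns exec $pid ip addr add 10.1.2.2/32 dev B
$ sudo ip netns exec $pid ip route add 10.1.2.1/32 dev B
```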
0 703
new file mode 100644
... ...
@@ -0,0 +1,93 @@
0
+page_title: Puppet Usage
1
+page_description: Installing and using Puppet
2
+page_keywords: puppet, installation, usage, docker, documentation
3
+
4
+# Using Puppet
5
+
6
+> *Note:* Please note this is a community-contributed installation path. The
7
+> only official installation path is the
8
+> [*Ubuntu*](/installation/ubuntulinux/#ubuntu-linux) installation
9
+> path. This version may sometimes be out of date.
10
+
11
+## Requirements
12
+
13
+To use this guide you'll need a working installation of Puppet from
14
+[Puppet Labs](https://puppetlabs.com).
15
+
16
+The module also currently uses the official PPA, so it only works with
17
+Ubuntu.
18
+
19
+## Installation
20
+
21
+The module is available on the [Puppet
22
+Forge](https://forge.puppetlabs.com/garethr/docker/) and can be
23
+installed using the built-in module tool.
24
+
25
+    $ puppet module install garethr/docker
26
+
27
+It can also be found on
28
+[GitHub](https://github.com/garethr/garethr-docker) if you would rather
29
+download the source.
30
+
31
+## Usage
32
+
33
+The module provides a puppet class for installing Docker and two defined
34
+types for managing images and containers.
35
+
36
+### Installation
37
+
38
+    include 'docker'
39
+
40
+### Images
41
+
42
+The next step is probably to install a Docker image. For this, we have a
43
+defined type which can be used like so:
44
+
45
+    docker::image { 'ubuntu': }
46
+
47
+This is equivalent to running:
48
+
49
+    $ docker pull ubuntu
50
+
51
+Note that the image will only be downloaded if an image of that name does
52
+not already exist. Because this downloads a large binary, the first run can
53
+take a while. For that reason this define turns off the default 5-minute
54
+timeout for the `exec` type. You can also remove images you no
55
+longer need with:
56
+
57
+    docker::image { 'ubuntu':
58
+      ensure => 'absent',
59
+    }
60
+
61
+### Containers
62
+
63
+Now that you have an image, you can run commands within a container
64
+managed by Docker:
65
+
66
+    docker::run { 'helloworld':
67
+      image   => 'ubuntu',
68
+      command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
69
+    }
70
+
71
+This is equivalent to running the following command, but under upstart:
72
+
73
+    $ docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
74
+
75
+`docker::run` also supports a number of optional parameters:
76
+
77
+    docker::run { 'helloworld':
78
+      image        => 'ubuntu',
79
+      command      => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
80
+      ports        => ['4444', '4555'],
81
+      volumes      => ['/var/lib/couchdb', '/var/log'],
82
+      volumes_from => '6446ea52fbc9',
83
+      memory_limit => 10485760, # bytes
84
+      username     => 'example',
85
+      hostname     => 'example.com',
86
+      env          => ['FOO=BAR', 'FOO2=BAR2'],
87
+      dns          => ['8.8.8.8', '8.8.4.4'],
88
+    }
89
+
90
+> *Note:*
91
+> The `ports`, `env`, `dns` and `volumes` attributes can be set with either a single
92
+> string or as above with an array of values.
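Putting the pieces above together, a complete node manifest might look like the following (the resource titles and parameter values are illustrative, not required):

```puppet
# site.pp — install Docker, pull an image, and run a managed container
include 'docker'

docker::image { 'ubuntu': }

docker::run { 'helloworld':
  image   => 'ubuntu',
  command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
  ports   => ['4444'],
  env     => ['FOO=BAR'],
}
```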
... ...
@@ -38,7 +38,7 @@ of another container. Of course, if the host system is setup
38 38
 accordingly, containers can interact with each other through their
39 39
 respective network interfaces — just like they can interact with
40 40
 external hosts. When you specify public ports for your containers or use
41
-[*links*](/use/working_with_links_names/#working-with-links-names)
41
+[*links*](/userguide/dockerlinks/#working-with-links-names)
42 42
 then IP traffic is allowed between containers. They can ping each other,
43 43
 send/receive UDP packets, and establish TCP connections, but that can be
44 44
 restricted if necessary. From a network architecture point of view, all
45 45
new file mode 100644
... ...
@@ -0,0 +1,116 @@
0
+page_title: Using Supervisor with Docker
1
+page_description: How to use Supervisor process management with Docker
2
+page_keywords: docker, supervisor, process management
3
+
4
+# Using Supervisor with Docker
5
+
6
+> **Note**:
7
+> - **If you don't like sudo** then see [*Giving non-root
8
+>   access*](/installation/binaries/#dockergroup)
9
+
10
+Traditionally a Docker container runs a single process when it is
11
+launched, for example an Apache daemon or an SSH server daemon. Often
12
+though you want to run more than one process in a container. There are a
13
+number of ways you can achieve this ranging from using a simple Bash
14
+script as the value of your container's `CMD` instruction to installing
15
+a process management tool.
16
+
17
+In this example we're going to make use of the process management tool,
18
+[Supervisor](http://supervisord.org/), to manage multiple processes in
19
+our container. Using Supervisor allows us to better control, manage, and
20
+restart the processes we want to run. To demonstrate this we're going to
21
+install and manage both an SSH daemon and an Apache daemon.
22
+
23
+## Creating a Dockerfile
24
+
25
+Let's start by creating a basic `Dockerfile` for our
26
+new image.
27
+
28
+    FROM ubuntu:13.04
29
+    MAINTAINER examples@docker.io
30
+    RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
31
+    RUN apt-get update
32
+    RUN apt-get upgrade -y
33
+
34
+## Installing Supervisor
35
+
36
+We can now install our SSH and Apache daemons as well as Supervisor in
37
+our container.
38
+
39
+    RUN apt-get install -y openssh-server apache2 supervisor
40
+    RUN mkdir -p /var/run/sshd
41
+    RUN mkdir -p /var/log/supervisor
42
+
43
+Here we're installing the `openssh-server`,
44
+`apache2` and `supervisor`
45
+(which provides the Supervisor daemon) packages. We're also creating two
46
+new directories that are needed to run our SSH daemon and Supervisor.
47
+
48
+## Adding Supervisor's configuration file
49
+
50
+Now let's add a configuration file for Supervisor. The default file is
51
+called `supervisord.conf` and is located in
52
+`/etc/supervisor/conf.d/`.
53
+
54
+    ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
55
+
56
+Let's see what is inside our `supervisord.conf`
57
+file.
58
+
59
+    [supervisord]
60
+    nodaemon=true
61
+
62
+    [program:sshd]
63
+    command=/usr/sbin/sshd -D
64
+
65
+    [program:apache2]
66
+    command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
67
+
68
+The `supervisord.conf` configuration file contains
69
+directives that configure Supervisor and the processes it manages. The
70
+first block `[supervisord]` provides configuration
71
+for Supervisor itself. We're using one directive, `nodaemon`,
72
+which tells Supervisor to run interactively rather than
73
+daemonize.
74
+
75
+The next two blocks manage the services we wish to control. Each block
76
+controls a separate process. The blocks contain a single directive,
77
+`command`, which specifies what command to run to
78
+start each process.
79
+
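Supervisor's `[program:x]` blocks accept many more directives than `command`; for instance, the following additions are a sketch of commonly used options (they are illustrative, not required by this example):

```ini
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
autorestart=true                          ; restart the process if it exits
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s_err.log
```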
80
+## Exposing ports and running Supervisor
81
+
82
+Now let's finish our `Dockerfile` by exposing some
83
+required ports and specifying the `CMD` instruction
84
+to start Supervisor when our container launches.
85
+
86
+    EXPOSE 22 80
87
+    CMD ["/usr/bin/supervisord"]
88
+
89
+Here we've exposed ports 22 and 80 on the container and we're running
90
+the `/usr/bin/supervisord` binary when the container
91
+launches.
92
+
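For reference, the complete `Dockerfile` assembled from the snippets above is:

```
FROM ubuntu:13.04
MAINTAINER examples@docker.io
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/run/sshd
RUN mkdir -p /var/log/supervisor
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
```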
93
+## Building our image
94
+
95
+We can now build our new image.
96
+
97
+    $ sudo docker build -t <yourname>/supervisord .
98
+
99
+## Running our Supervisor container
100
+
101
+Once we've built the image, we can launch a container from it.
102
+
103
+    $ sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
104
+    2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
105
+    2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
106
+    2013-11-25 18:53:22,342 INFO supervisord started with pid 1
107
+    2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6
108
+    2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
109
+    . . .
110
+
111
+We've launched a new container interactively using the `docker run` command.
112
+That container has run Supervisor and launched the SSH and Apache daemons with
113
+it. We've specified the `-p` flag to expose ports 22 and 80. From here we can
114
+now identify the exposed ports and connect to one or both of the SSH and Apache
115
+daemons.
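To find the host ports that Docker mapped for us, you can use the `docker port` command (the container ID here is a placeholder for your own container's ID):

```shell
# Look up the host address and port mapped to the container's port 22
$ sudo docker port <container_id> 22

# Do the same for the Apache daemon on port 80
$ sudo docker port <container_id> 80
```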
... ...
@@ -1,33 +1,32 @@
1
-page_title: Trusted Builds on Docker.io
2
-page_description: Docker.io Trusted Builds
3
-page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker.io, docs, documentation, trusted, builds, trusted builds
1
+page_title: Automated Builds on Docker.io
2
+page_description: Docker.io Automated Builds
3
+page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker.io, docs, documentation, trusted, builds, trusted builds, automated builds
4
+# Automated Builds on Docker.io
4 5
 
5
-# Trusted Builds on Docker.io
6
+## Automated Builds
6 7
 
7
-## Trusted Builds
8
-
9
-*Trusted Builds* is a special feature allowing you to specify a source
8
+*Automated Builds* is a special feature allowing you to specify a source
10 9
 repository with a `Dockerfile` to be built by the
11 10
 [Docker.io](https://index.docker.io) build clusters. The system will
12 11
 clone your repository and build the `Dockerfile` using the repository as
13 12
 the context. The resulting image will then be uploaded to the registry
14
-and marked as a *Trusted Build*.
13
+and marked as an *Automated Build*.
15 14
 
16
-Trusted Builds have a number of advantages. For example, users of *your* Trusted
17
-Build can be certain that the resulting image was built exactly how it claims
18
-to be.
15
+Automated Builds have a number of advantages. For example, users of
16
+*your* Automated Build can be certain that the resulting image was built
17
+exactly how it claims to be.
19 18
 
20 19
 Furthermore, the `Dockerfile` will be available to anyone browsing your repository
21
-on the registry. Another advantage of the Trusted Builds feature is the automated
20
+on the registry. Another advantage of the Automated Builds feature is the automated
22 21
 builds. This makes sure that your repository is always up to date.
23 22
 
24
-Trusted builds are supported for both public and private repositories on
25
-both [GitHub](http://github.com) and
23
+Automated Builds are supported for both public and private repositories
24
+on both [GitHub](http://github.com) and
26 25
 [BitBucket](https://bitbucket.org/).
27 26
 
28
-### Setting up Trusted Builds with GitHub
27
+### Setting up Automated Builds with GitHub
29 28
 
30
-In order to setup a Trusted Build, you need to first link your [Docker.io](
29
+In order to setup an Automated Build, you need to first link your [Docker.io](
31 30
 https://index.docker.io) account with a GitHub one. This will allow the registry
32 31
 to see your repositories.
33 32
 
... ...
@@ -35,7 +34,7 @@ to see your repositories.
35 35
 > https://index.docker.io) needs to setup a GitHub service hook. Although nothing
36 36
 > else is done with your account, this is how GitHub manages permissions, sorry!
37 37
 
38
-Click on the [Trusted Builds tab](https://index.docker.io/builds/) to
38
+Click on the [Automated Builds tab](https://index.docker.io/builds/) to
39 39
 get started and then select [+ Add
40 40
 New](https://index.docker.io/builds/add/).
41 41
 
... ...
@@ -45,9 +44,9 @@ service](https://index.docker.io/associate/github/).
45 45
 Then follow the instructions to authorize and link your GitHub account
46 46
 to Docker.io.
47 47
 
48
-#### Creating a Trusted Build
48
+#### Creating an Automated Build
49 49
 
50
-You can [create a Trusted Build](https://index.docker.io/builds/github/select/)
50
+You can [create an Automated Build](https://index.docker.io/builds/github/select/)
51 51
 from any of your public or private GitHub repositories with a `Dockerfile`.
52 52
 
53 53
 #### GitHub organizations
... ...
@@ -59,7 +58,7 @@ organization on GitHub.
59 59
 #### GitHub service hooks
60 60
 
61 61
 You can follow the below steps to configure the GitHub service hooks for your
62
-Trusted Build:
62
+Automated Build:
63 63
 
64 64
 <table class="table table-bordered">
65 65
   <thead>
... ...
@@ -84,13 +83,13 @@ Trusted Build:
84 84
   </tbody>
85 85
 </table>
86 86
 
87
-### Setting up Trusted Builds with BitBucket
87
+### Setting up Automated Builds with BitBucket
88 88
 
89
-In order to setup a Trusted Build, you need to first link your
89
+In order to setup an Automated Build, you need to first link your
90 90
 [Docker.io]( https://index.docker.io) account with a BitBucket one. This
91 91
 will allow the registry to see your repositories.
92 92
 
93
-Click on the [Trusted Builds tab](https://index.docker.io/builds/) to
93
+Click on the [Automated Builds tab](https://index.docker.io/builds/) to
94 94
 get started and then select [+ Add
95 95
 New](https://index.docker.io/builds/add/).
96 96
 
... ...
@@ -100,14 +99,14 @@ service](https://index.docker.io/associate/bitbucket/).
100 100
 Then follow the instructions to authorize and link your BitBucket account
101 101
 to Docker.io.
102 102
 
103
-#### Creating a Trusted Build
103
+#### Creating an Automated Build
104 104
 
105 105
 You can [create an Automated
106 106
 Build](https://index.docker.io/builds/bitbucket/select/)
107 107
 from any of your public or private BitBucket repositories with a
108 108
 `Dockerfile`.
109 109
 
110
-### The Dockerfile and Trusted Builds
110
+### The Dockerfile and Automated Builds
111 111
 
112 112
 During the build process, we copy the contents of your `Dockerfile`. We also
113 113
 add it to the [Docker.io](https://index.docker.io) for the Docker community
... ...
@@ -120,20 +119,19 @@ repository's full description.
120 120
 
121 121
 > **Warning:**
122 122
 > If you change the full description after a build, it will be
123
-> rewritten the next time the Trusted Build has been built. To make changes,
123
+> rewritten the next time the Automated Build has been built. To make changes,
124 124
 > modify the README.md from the Git repository. We will look for a README.md
125 125
 > in the same directory as your `Dockerfile`.
126 126
 
127 127
 ### Build triggers
128 128
 
129
-If you need another way to trigger your Trusted Builds outside of GitHub
129
+If you need another way to trigger your Automated Builds outside of GitHub
130 130
 or BitBucket, you can setup a build trigger. When you turn on the build
131
-trigger for a Trusted Build, it will give you a URL to which you can
132
-send POST requests. This will trigger the Trusted Build process, which
133
-is similar to GitHub web hooks.
131
+trigger for an Automated Build, it will give you a URL to which you can
132
+send POST requests. This will trigger the Automated Build process, which
133
+is similar to GitHub webhooks.
134 134
 
135
-Build Triggers are available under the Settings tab of each Trusted
136
-Build.
135
+Build Triggers are available under the Settings tab of each Automated Build.
137 136
 
138 137
 > **Note:** 
139 138
 > You can only trigger one build at a time and no more than one
... ...
@@ -144,10 +142,10 @@ Build.
144 144
 
145 145
 ### Webhooks
146 146
 
147
-Also available for Trusted Builds are Webhooks. Webhooks can be called
147
+Also available for Automated Builds are Webhooks. Webhooks can be called
148 148
 after a successful repository push is made.
149 149
 
150
-The web hook call will generate a HTTP POST with the following JSON
150
+The webhook call will generate an HTTP POST with the following JSON
151 151
 payload:
152 152
 
153 153
 ```
... ...
@@ -181,7 +179,7 @@ payload:
181 181
 }
182 182
 ```
183 183
 
184
-Webhooks are available under the Settings tab of each Trusted
184
+Webhooks are available under the Settings tab of each Automated
185 185
 Build.
186 186
 
187 187
 > **Note:** If you want to test your webhook out then we recommend using
... ...
@@ -190,15 +188,15 @@ Build.
190 190
 
191 191
 ### Repository links
192 192
 
193
+Repository links are a way to associate one Automated Build with another. If one
194
+gets updated, the linking system also triggers a build for the other Automated Build.
195
+This makes it easy to keep your Automated Builds up to date.
193
+Repository links are a way to associate one Automated Build with another. If one
194
+gets updated, linking system also triggers a build for the other Automated Build.
195
+This makes it easy to keep your Automated Builds up to date.
196 196
 
197
-To add a link, go to the settings page of a Trusted Build and click on
197
+To add a link, go to the settings page of an Automated Build and click on
198 198
 *Repository Links*. Then enter the name of the repository that you want to have
199 199
 linked.
200 200
 
201 201
 > **Warning:**
202 202
 > You can add more than one repository link, however, you should
203
-> be very careful. Creating a two way relationship between Trusted Builds will
203
+> be very careful. Creating a two way relationship between Automated Builds will
204 204
 > cause a never-ending build loop.
205 205
new file mode 100644
... ...
@@ -0,0 +1,8 @@
0
+# Docker.io
1
+
2
+## Contents:
3
+
4
+- [Accounts](accounts/)
5
+- [Repositories](repos/)
6
+- [Automated Builds](builds/)
7
+
... ...
@@ -1,25 +1,9 @@
1
-
2 1
 # Examples
3 2
 
4
-## Introduction:
5
-
6
-Here are some examples of how to use Docker to create running processes,
7
-starting from a very simple *Hello World* and progressing to more
8
-substantial services like those which you might find in production.
9
-
10
-## Contents:
11
-
12
- - [Check your Docker install](hello_world/)
13
- - [Hello World](hello_world/#hello-world)
14
- - [Hello World Daemon](hello_world/#hello-world-daemon)
15
- - [Node.js Web App](nodejs_web_app/)
16
- - [Redis Service](running_redis_service/)
17
- - [SSH Daemon Service](running_ssh_service/)
18
- - [CouchDB Service](couchdb_data_volumes/)
19
- - [PostgreSQL Service](postgresql_service/)
20
- - [Building an Image with MongoDB](mongodb/)
21
- - [Riak Service](running_riak_service/)
22
- - [Using Supervisor with Docker](using_supervisord/)
23
- - [Process Management with CFEngine](cfengine_process_management/)
24
- - [Python Web App](python_web_app/)
25
-
3
+ - [Dockerizing a Node.js Web App](nodejs_web_app/)
4
+ - [Dockerizing a Redis Service](running_redis_service/)
5
+ - [Dockerizing an SSH Daemon Service](running_ssh_service/)
6
+ - [Dockerizing a CouchDB Service](couchdb_data_volumes/)
7
+ - [Dockerizing a PostgreSQL Service](postgresql_service/)
8
+ - [Dockerizing MongoDB](mongodb/)
9
+ - [Dockerizing a Riak Service](running_riak_service/)
... ...
@@ -1,14 +1,10 @@
1
-page_title: Running an apt-cacher-ng service
1
+page_title: Dockerizing an apt-cacher-ng service
2 2
 page_description: Installing and running an apt-cacher-ng service
3 3
 page_keywords: docker, example, package installation, networking, debian, ubuntu
4 4
 
5
-# Apt-Cacher-ng Service
5
+# Dockerizing an Apt-Cacher-ng Service
6 6
 
7 7
 > **Note**: 
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12 8
 > - **If you don't like sudo** then see [*Giving non-root
13 9
 >   access*](/installation/binaries/#dockergroup).
14 10
 > - **If you're using OS X or docker via TCP** then you shouldn't use
15 11
deleted file mode 100644
... ...
@@ -1,144 +0,0 @@
1
-page_title: Process Management with CFEngine
2
-page_description: Managing containerized processes with CFEngine
3
-page_keywords: cfengine, process, management, usage, docker, documentation
4
-
5
-# Process Management with CFEngine
6
-
7
-Create Docker containers with managed processes.
8
-
9
-Docker monitors one process in each running container and the container
10
-lives or dies with that process. By introducing CFEngine inside Docker
11
-containers, we can alleviate a few of the issues that may arise:
12
-
13
- - It is possible to easily start multiple processes within a
14
-   container, all of which will be managed automatically, with the
15
-   normal `docker run` command.
16
- - If a managed process dies or crashes, CFEngine will start it again
17
-   within 1 minute.
18
- - The container itself will live as long as the CFEngine scheduling
19
-   daemon (cf-execd) lives. With CFEngine, we are able to decouple the
20
-   life of the container from the uptime of the service it provides.
21
-
22
-## How it works
23
-
24
-CFEngine, together with the cfe-docker integration policies, are
25
-installed as part of the Dockerfile. This builds CFEngine into our
26
-Docker image.
27
-
28
-The Dockerfile's `ENTRYPOINT` takes an arbitrary
29
-amount of commands (with any desired arguments) as parameters. When we
30
-run the Docker container these parameters get written to CFEngine
31
-policies and CFEngine takes over to ensure that the desired processes
32
-are running in the container.
33
-
34
-CFEngine scans the process table for the `basename` of the commands given
35
-to the `ENTRYPOINT` and runs the command to start the process if the `basename`
36
-is not found. For example, if we start the container with
37
-`docker run "/path/to/my/application parameters"`, CFEngine will look for a
38
-process named `application` and run the command. If an entry for `application`
39
-is not found in the process table at any point in time, CFEngine will execute
40
-`/path/to/my/application parameters` to start the application once again. The
41
-check on the process table happens every minute.
42
-
43
-Note that it is therefore important that the command to start your
44
-application leaves a process with the basename of the command. This can
45
-be made more flexible by making some minor adjustments to the CFEngine
46
-policies, if desired.
47
-
48
-## Usage
49
-
50
-This example assumes you have Docker installed and working. We will
51
-install and manage `apache2` and `sshd`
52
-in a single container.
53
-
54
-There are three steps:
55
-
56
-1. Install CFEngine into the container.
57
-2. Copy the CFEngine Docker process management policy into the
58
-   containerized CFEngine installation.
59
-3. Start your application processes as part of the `docker run` command.
60
-
61
-### Building the image
62
-
63
-The first two steps can be done as part of a Dockerfile, as follows.
64
-
65
-    FROM ubuntu
66
-    MAINTAINER Eystein Måløy Stenberg <eytein.stenberg@gmail.com>
67
-
68
-    RUN apt-get -y install wget lsb-release unzip ca-certificates
69
-
70
-    # install latest CFEngine
71
-    RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
72
-    RUN echo "deb http://cfengine.com/pub/apt $(lsb_release -cs) main" > /etc/apt/sources.list.d/cfengine-community.list
73
-    RUN apt-get update
74
-    RUN apt-get install cfengine-community
75
-
76
-    # install cfe-docker process management policy
77
-    RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
78
-    RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
79
-    RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
80
-    RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip
81
-
82
-    # apache2 and openssh are just for testing purposes, install your own apps here
83
-    RUN apt-get -y install openssh-server apache2
84
-    RUN mkdir -p /var/run/sshd
85
-    RUN echo "root:password" | chpasswd  # need a password for ssh
86
-
87
-    ENTRYPOINT ["/var/cfengine/bin/docker_processes_run.sh"]
88
-
89
-Save this file as `Dockerfile` in a working directory, and you can then build
-your image with the `docker build` command, e.g.
-`docker build -t managed_image .`.
92
-
93
-### Testing the container
94
-
95
-Start the container with `apache2` and `sshd` running and managed, forwarding
96
-a port to our SSH instance:
97
-
98
-    $ docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start"
99
-
100
-We now clearly see one of the benefits of the cfe-docker integration: it
-allows you to start several processes as part of a normal `docker run` command.
102
-
103
-We can now log in to our new container and see that both `apache2` and `sshd`
104
-are running. We have set the root password to "password" in the Dockerfile
105
-above and can use that to log in with ssh:
106
-
107
-    ssh -p222 root@127.0.0.1
108
-
109
-    ps -ef
110
-    UID        PID  PPID  C STIME TTY          TIME CMD
111
-    root         1     0  0 07:48 ?        00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start
112
-    root        18     1  0 07:48 ?        00:00:00 /var/cfengine/bin/cf-execd -F
113
-    root        20     1  0 07:48 ?        00:00:00 /usr/sbin/sshd
114
-    root        32     1  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
115
-    www-data    34    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
116
-    www-data    35    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
117
-    www-data    36    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
118
-    root        93    20  0 07:48 ?        00:00:00 sshd: root@pts/0
119
-    root       105    93  0 07:48 pts/0    00:00:00 -bash
120
-    root       112   105  0 07:49 pts/0    00:00:00 ps -ef
121
-
122
-If we stop apache2, it will be started again within a minute by
123
-CFEngine.
124
-
125
-    service apache2 status
126
-     Apache2 is running (pid 32).
127
-    service apache2 stop
128
-             * Stopping web server apache2 ... waiting    [ OK ]
129
-    service apache2 status
130
-     Apache2 is NOT running.
131
-    # ... wait up to 1 minute...
132
-    service apache2 status
133
-     Apache2 is running (pid 173).
134
-
135
-## Adapting to your applications
136
-
137
-To make sure your applications get managed in the same manner, there are
138
-just two things you need to adjust from the above example:
139
-
140
- - In the Dockerfile used above, install your applications instead of
141
-   `apache2` and `sshd`.
142
- - When you start the container with `docker run`,
143
-   specify the command line arguments to your applications rather than
144
-   `apache2` and `sshd`.
... ...
@@ -1,14 +1,10 @@
1
-page_title: Sharing data between 2 couchdb databases
1
+page_title: Dockerizing a CouchDB Service
2 2
 page_description: Sharing data between 2 couchdb databases
3 3
 page_keywords: docker, example, package installation, networking, couchdb, data volumes
4 4
 
5
-# CouchDB Service
5
+# Dockerizing a CouchDB Service
6 6
 
7 7
 > **Note**: 
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12 8
 > - **If you don't like sudo** then see [*Giving non-root
13 9
 >   access*](/installation/binaries/#dockergroup)
14 10
 
15 11
deleted file mode 100644
... ...
@@ -1,8 +0,0 @@
1
-
2
-.. note::
3
-    
4
-    * This example assumes you have Docker running in daemon mode. For
5
-      more information please see :ref:`running_examples`.
6
-    * **If you don't like sudo** then see :ref:`dockergroup`
7
-    * **If you're using OS X or docker via TCP** then you shouldn't use `sudo`
8
-
9 1
deleted file mode 100644
... ...
@@ -1,162 +0,0 @@
1
-page_title: Hello world example
2
-page_description: A simple hello world example with Docker
3
-page_keywords: docker, example, hello world
4
-
5
-# Check your Docker installation
6
-
7
-This guide assumes you have a working installation of Docker. To check
8
-your Docker install, run the following command:
9
-
10
-    # Check that you have a working install
11
-    $ sudo docker info
12
-
13
-If you get `docker: command not found` or something
14
-like `/var/lib/docker/repositories: permission denied`
15
-you may have an incomplete Docker installation or insufficient
16
-privileges to access docker on your machine.
17
-
18
-Please refer to [*Installation*](/installation/)
19
-for installation instructions.
20
-
21
-## Hello World
22
-
23
-> **Note**: 
24
-> 
25
-> - This example assumes you have Docker running in daemon mode. For
26
->   more information please see [*Check your Docker
27
->   install*](#check-your-docker-installation).
28
-> - **If you don't like sudo** then see [*Giving non-root
29
->   access*](/installation/binaries/#dockergroup)
30
-
31
-This is the most basic example available for using Docker.
32
-
33
-Download the small base image named `busybox`:
34
-
35
-    # Download a busybox image
36
-    $ sudo docker pull busybox
37
-
38
-The `busybox` image is a minimal Linux system. You can do the same with
39
-any number of other images, such as `debian`, `ubuntu` or `centos`. The
40
-images can be found and retrieved using the
41
-[Docker.io](http://index.docker.io) registry.
42
-
43
-    $ sudo docker run busybox /bin/echo hello world
44
-
45
-This command runs a simple `echo` command that
-echoes `hello world` back to the console over
-standard out.
48
-
49
-**Explanation:**
50
-
51
--   **"sudo"** executes the command as the *root* user.
--   **"docker run"** runs a command in a new container.
--   **"busybox"** is the image we are running the command in.
--   **"/bin/echo"** is the command we want to run in the container.
--   **"hello world"** is the input to the echo command.
56
-
57
-**Video:**
58
-
59
-See the example in action
60
-
61
-<iframe width="640" height="480" frameborder="0" sandbox="allow-same-origin allow-scripts" srcdoc="<body><script type=&quot;text/javascript&quot;src=&quot;https://asciinema.org/a/7658.js&quot;id=&quot;asciicast-7658&quot; async></script></body>"></iframe>
64
-
65
-## Hello World Daemon
66
-
67
-> **Note**: 
68
-> 
69
-> - This example assumes you have Docker running in daemon mode. For
70
->   more information please see [*Check your Docker
71
->   install*](#check-your-docker-installation).
72
-> - **If you don't like sudo** then see [*Giving non-root
73
->   access*](/installation/binaries/#dockergroup)
74
-
75
-And now for the most boring daemon ever written!
76
-
77
-We will use the Ubuntu image to run a simple hello world daemon that
78
-will just print hello world to standard out every second. It will
79
-continue to do this until we stop it.
80
-
81
-**Steps:**
82
-
83
-    $ container_id=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done")
84
-
85
-We are going to run a simple hello world daemon in a new container made
86
-from the `ubuntu` image.
87
-
88
- - **"sudo docker run -d "** run a command in a new container. We pass
89
-   "-d" so it runs as a daemon.
90
- - **"ubuntu"** is the image we want to run the command inside of.
91
- - **"/bin/sh -c"** is the command we want to run in the container
92
- - **"while true; do echo hello world; sleep 1; done"** is the mini
93
-   script we want to run, that will just print hello world once a
94
-   second until we stop it.
95
- - **$container_id** the output of the run command is a
-   container ID, which we can use in future commands to see what is going on
-   with this process.
98
-
99
-<!-- -->
100
-
101
-    $ sudo docker logs $container_id
102
-
103
-Check the logs to make sure it is working correctly.
104
-
105
- - **"docker logs"** This will return the logs for a container.
106
- - **$container_id** The Id of the container we want the logs for.
107
-
108
-<!-- -->
109
-
110
-    $ sudo docker attach --sig-proxy=false $container_id
111
-
112
-Attach to the container to see the results in real-time.
113
-
114
- - **"docker attach"** This will allow us to attach to a background
-   process to see what is going on.
- - **"--sig-proxy=false"** Do not forward signals to the container;
117
-   allows us to exit the attachment using Control-C without stopping
118
-   the container.
119
- - **$container_id** The Id of the container we want to attach to.
120
-
121
-Exit from the container attachment by pressing Control-C.
122
-
123
-    $ sudo docker ps
124
-
125
-Check the process list to make sure it is running.
126
-
127
- - **"docker ps"** This shows all running processes managed by Docker.
128
-
129
-<!-- -->
130
-
131
-    $ sudo docker stop $container_id
132
-
133
-Stop the container, since we don't need it anymore.
134
-
135
- - **"docker stop"** This stops a container
136
- - **$container_id** The Id of the container we want to stop.
137
-
138
-<!-- -->
139
-
140
-    $ sudo docker ps
141
-
142
-Make sure it is really stopped.
143
-
144
-**Video:**
145
-
146
-See the example in action
147
-
148
-<iframe width="640" height="480" frameborder="0" sandbox="allow-same-origin allow-scripts" srcdoc="<body><script type=&quot;text/javascript&quot;src=&quot;https://asciinema.org/a/2562.js&quot;id=&quot;asciicast-2562&quot; async></script></body>"></iframe>
151
-
152
-The next example in the series is a [*Node.js Web App*](
153
-../nodejs_web_app/#nodejs-web-app) example, or you could skip to any of the
154
-other examples:
155
-
156
- - [*Node.js Web App*](../nodejs_web_app/#nodejs-web-app)
157
- - [*Redis Service*](../running_redis_service/#running-redis-service)
158
- - [*SSH Daemon Service*](../running_ssh_service/#running-ssh-service)
159
- - [*CouchDB Service*](../couchdb_data_volumes/#running-couchdb-service)
160
- - [*PostgreSQL Service*](../postgresql_service/#postgresql-service)
161
- - [*Building an Image with MongoDB*](../mongodb/#mongodb-image)
162
- - [*Python Web App*](../python_web_app/#python-web-app)
163 1
deleted file mode 100644
... ...
@@ -1,107 +0,0 @@
1
-page_title: Docker HTTPS Setup
2
-page_description: How to setup docker with https
3
-page_keywords: docker, example, https, daemon
4
-
5
-# Running Docker with https
6
-
7
-By default, Docker runs via a non-networked Unix socket. It can also
8
-optionally communicate using an HTTP socket.
9
-
10
-If you need Docker reachable via the network in a safe manner, you can
11
-enable TLS by specifying the tlsverify flag and pointing Docker's
12
-tlscacert flag to a trusted CA certificate.
13
-
14
-In daemon mode, it will only allow connections from clients
15
-authenticated by a certificate signed by that CA. In client mode, it
16
-will only connect to servers with a certificate signed by that CA.
17
-
18
-> **Warning**: 
19
-> Using TLS and managing a CA is an advanced topic. Please familiarize
-> yourself with OpenSSL, x509, and TLS before using them in production.
21
-
22
-## Create a CA, server and client keys with OpenSSL
23
-
24
-First, initialize the CA serial file and generate CA private and public
25
-keys:
26
-
27
-    $ echo 01 > ca.srl
28
-    $ openssl genrsa -des3 -out ca-key.pem
29
-    $ openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem
30
-
31
-Now that we have a CA, you can create a server key and certificate
32
-signing request. Make sure that "Common Name (e.g. server FQDN or YOUR
33
-name)" matches the hostname you will use to connect to Docker or just
34
-use `\*` for a certificate valid for any hostname:
35
-
36
-    $ openssl genrsa -des3 -out server-key.pem
37
-    $ openssl req -new -key server-key.pem -out server.csr
38
-
39
-Next we're going to sign the key with our CA:
40
-
41
-    $ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
42
-      -out server-cert.pem
43
-
44
-For client authentication, create a client key and certificate signing
45
-request:
46
-
47
-    $ openssl genrsa -des3 -out client-key.pem
48
-    $ openssl req -new -key client-key.pem -out client.csr
49
-
50
-To make the key suitable for client authentication, create an extensions
51
-config file:
52
-
53
-    $ echo extendedKeyUsage = clientAuth > extfile.cnf
54
-
55
-Now sign the key:
56
-
57
-    $ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
58
-      -out client-cert.pem -extfile extfile.cnf
59
-
60
-Finally you need to remove the passphrase from the client and server
61
-key:
62
-
63
-    $ openssl rsa -in server-key.pem -out server-key.pem
64
-    $ openssl rsa -in client-key.pem -out client-key.pem
65
-
66
-Now you can make the Docker daemon only accept connections from clients
67
-providing a certificate trusted by our CA:
68
-
69
-    $ sudo docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
70
-      -H=0.0.0.0:4243
71
-
72
-To be able to connect to Docker and validate its certificate, you now
73
-need to provide your client keys, certificates and trusted CA:
74
-
75
-    $ docker --tlsverify --tlscacert=ca.pem --tlscert=client-cert.pem --tlskey=client-key.pem \
76
-      -H=dns-name-of-docker-host:4243
77
-
78
-> **Warning**: 
79
-> As shown in the example above, you don't have to run the
80
-> `docker` client with `sudo` or as a member of
-> the `docker` group when you use certificate
82
-> authentication. That means anyone with the keys can give any
83
-> instructions to your Docker daemon, giving them root access to the
84
-> machine hosting the daemon. Guard these keys as you would a root
85
-> password!
86
-
87
-## Other modes
88
-
89
-If you don't want to have complete two-way authentication, you can run
90
-Docker in various other modes by mixing the flags.
91
-
92
-### Daemon modes
93
-
94
- - tlsverify, tlscacert, tlscert, tlskey set: Authenticate clients
95
- - tls, tlscert, tlskey: Do not authenticate clients
96
-
97
-### Client modes
98
-
99
- - tls: Authenticate server based on public/default CA pool
100
- - tlsverify, tlscacert: Authenticate server based on given CA
101
- - tls, tlscert, tlskey: Authenticate with client certificate, do not
102
-   authenticate server based on given CA
103
- - tlsverify, tlscacert, tlscert, tlskey: Authenticate with client
104
-   certificate, authenticate server based on given CA
105
-
106
-The client will send its client certificate if found, so you just need
107
-to drop your keys into ~/.docker/<ca, cert or key>.pem
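For illustration, two of the mode combinations above might look like the following on the command line, reusing the certificate file names from this example (a sketch of daemon/client configuration, not an exhaustive list; `info` is just an arbitrary client command):

```shell
# Daemon: encrypt traffic but do not authenticate clients (tls, tlscert, tlskey)
$ sudo docker -d --tls --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:4243

# Client: authenticate the server against our CA only (tlsverify, tlscacert)
$ docker --tlsverify --tlscacert=ca.pem -H=dns-name-of-docker-host:4243 info
```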
... ...
@@ -2,7 +2,7 @@ page_title: Dockerizing MongoDB
2 2
 page_description: Creating a Docker image with MongoDB pre-installed using a Dockerfile and sharing the image on Docker.io
3 3
 page_keywords: docker, dockerize, dockerizing, article, example, docker.io, platform, package, installation, networking, mongodb, containers, images, image, sharing, dockerfile, build, auto-building, virtualization, framework
4 4
 
5
-# Dockerizing MongoDB 
5
+# Dockerizing MongoDB
6 6
 
7 7
 ## Introduction
8 8
 
... ...
@@ -18,17 +18,10 @@ instances will bring several benefits, such as:
18 18
  - Ready to run and start working within milliseconds;
19 19
  - Based on globally accessible and shareable images.
20 20
 
21
-> **Note:** 
22
-> 
23
-> This example assumes you have Docker running in daemon mode. To verify,
24
-> try running `sudo docker info`.
25
-> For more information, please see: [*Check your Docker installation*](
26
-> /examples/hello_world/#running-examples).
27
-
28 21
 > **Note:**
29 22
 > 
30 23
 > If you do **_not_** like `sudo`, you might want to check out: 
31
-> [*Giving non-root access*](installation/binaries/#giving-non-root-access).
24
+> [*Giving non-root access*](/installation/binaries/#giving-non-root-access).
32 25
 
33 26
 ## Creating a Dockerfile for MongoDB
34 27
 
... ...
@@ -101,8 +94,7 @@ Now save the file and let's build our image.
101 101
 
102 102
 > **Note:**
103 103
 > 
104
-> The full version of this `Dockerfile` can be found [here](/
105
-> /examples/mongodb/Dockerfile).
104
+> The full version of this `Dockerfile` can be found [here](/examples/mongodb/Dockerfile).
106 105
 
107 106
 ## Building the MongoDB Docker image
108 107
 
... ...
@@ -157,8 +149,6 @@ as daemon process(es).
157 157
     # Usage: mongo --port <port you get from `docker ps`> 
158 158
     $ mongo --port 12345
159 159
 
160
-## Learn more
161
-
162
- - [Linking containers](/use/working_with_links_names/)
163
- - [Cross-host linking containers](/use/ambassador_pattern_linking/)
160
+ - [Linking containers](/userguide/dockerlinks)
161
+ - [Cross-host linking containers](/articles/ambassador_pattern_linking/)
164 162
  - [Creating a Trusted Build](/docker-io/builds/#trusted-builds)
... ...
@@ -1,14 +1,10 @@
1
-page_title: Running a Node.js app on CentOS
2
-page_description: Installing and running a Node.js app on CentOS
1
+page_title: Dockerizing a Node.js Web App
2
+page_description: Installing and running a Node.js app with Docker
3 3
 page_keywords: docker, example, package installation, node, centos
4 4
 
5
-# Node.js Web App
5
+# Dockerizing a Node.js Web App
6 6
 
7 7
 > **Note**: 
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12 8
 > - **If you don't like sudo** then see [*Giving non-root
13 9
 >   access*](/installation/binaries/#dockergroup)
14 10
 
... ...
@@ -187,11 +183,10 @@ Now you can call your app using `curl` (install if needed via:
187 187
     Content-Length: 12
188 188
     Date: Sun, 02 Jun 2013 03:53:22 GMT
189 189
     Connection: keep-alive
190
-    
190
+
191 191
     Hello World
192 192
 
193 193
 We hope this tutorial helped you get up and running with Node.js and
194 194
 CentOS on Docker. You can get the full source code at
195 195
 [https://github.com/gasi/docker-node-hello](https://github.com/gasi/docker-node-hello).
196 196
 
197
-Continue to [*Redis Service*](../running_redis_service/#running-redis-service).
... ...
@@ -1,26 +1,22 @@
1
-page_title: PostgreSQL service How-To
1
+page_title: Dockerizing PostgreSQL
2 2
 page_description: Running and installing a PostgreSQL service
3 3
 page_keywords: docker, example, package installation, postgresql
4 4
 
5
-# PostgreSQL Service
5
+# Dockerizing PostgreSQL
6 6
 
7 7
 > **Note**: 
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12 8
 > - **If you don't like sudo** then see [*Giving non-root
13 9
 >   access*](/installation/binaries/#dockergroup)
14 10
 
15 11
 ## Installing PostgreSQL on Docker
16 12
 
17
-Assuming there is no Docker image that suits your needs in [the index](
18
-http://index.docker.io), you can create one yourself.
13
+Assuming there is no Docker image that suits your needs on the [Docker
14
+Hub](http://index.docker.io), you can create one yourself.
19 15
 
20
-Start by creating a new Dockerfile:
16
+Start by creating a new `Dockerfile`:
21 17
 
22 18
 > **Note**: 
23
-> This PostgreSQL setup is for development only purposes. Refer to the
19
+> This PostgreSQL setup is for development-only purposes. Refer to the
24 20
 > PostgreSQL documentation to fine-tune these settings so that it is
25 21
 > suitably secure.
26 22
 
... ...
@@ -32,7 +28,7 @@ Start by creating a new Dockerfile:
32 32
     MAINTAINER SvenDowideit@docker.com
33 33
 
34 34
     # Add the PostgreSQL PGP key to verify their Debian packages.
35
-    # It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc 
35
+    # It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
36 36
     RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
37 37
 
38 38
     # Add PostgreSQL's repository. It contains the most recent stable release
... ...
@@ -87,11 +83,11 @@ And run the PostgreSQL server container (in the foreground):
87 87
     $ sudo docker run --rm -P --name pg_test eg_postgresql
88 88
 
89 89
 There are 2 ways to connect to the PostgreSQL server. We can use [*Link
90
-Containers*](/use/working_with_links_names/#working-with-links-names),
91
-or we can access it from our host (or the network).
90
+Containers*](/userguide/dockerlinks), or we can access it from our host
91
+(or the network).
92 92
 
93 93
 > **Note**: 
94
-> The `-rm` removes the container and its image when
94
+> The `--rm` flag removes the container when
 > the container exits successfully.
96 96
 
97 97
 ### Using container linking
98 98
deleted file mode 100644
... ...
@@ -1,127 +0,0 @@
1
-page_title: Python Web app example
2
-page_description: Building your own python web app using docker
3
-page_keywords: docker, example, python, web app
4
-
5
-# Python Web App
6
-
7
-> **Note**: 
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12
-> - **If you don't like sudo** then see [*Giving non-root
13
->   access*](/installation/binaries/#dockergroup)
14
-
15
-While using Dockerfiles is the preferred way to create maintainable and
16
-repeatable images, it's useful to know how you can try things out and
17
-then commit your live changes to an image.
18
-
19
-The goal of this example is to show you how you can modify your own
20
-Docker images by making changes to a running container, and then saving
21
-the results as a new image. We will do that by making a simple `hello
22
-world` Flask web application image.
23
-
24
-## Download the initial image
25
-
26
-Download the `shykes/pybuilder` Docker image from the `http://index.docker.io`
27
-registry.
28
-
29
-This image contains a `buildapp` script to download
30
-the web app and then `pip install` any required
31
-modules, and a `runapp` script that finds the
32
-`app.py` and runs it.
33
-
34
-    $ sudo docker pull shykes/pybuilder
35
-
36
-> **Note**: 
37
-> This container was built with a very old version of docker (May 2013 -
38
-> see [shykes/pybuilder](https://github.com/shykes/pybuilder) ), when the
39
-> Dockerfile format was different, but the image can
40
-> still be used now.
41
-
42
-## Interactively make some modifications
43
-
44
-We then start a new container running interactively using the image.
45
-First, we set a `URL` variable that points to a
46
-tarball of a simple helloflask web app, and then we run a command
47
-contained in the image called `buildapp`, passing it
48
-the `$URL` variable. The container is given a name
49
-`pybuilder_run` which we will use in the next steps.
50
-
51
-While this example is simple, you could run any number of interactive
52
-commands, try things out, and then exit when you're done.
53
-
54
-    $ sudo docker run -i -t --name pybuilder_run shykes/pybuilder bash
55
-
56
-    $$ URL=http://github.com/shykes/helloflask/archive/master.tar.gz
57
-    $$ /usr/local/bin/buildapp $URL
58
-    [...]
59
-    $$ exit
60
-
61
-## Commit the container to create a new image
62
-
63
-Save the changes we just made in the container to a new image called
64
-`/builds/github.com/shykes/helloflask/master`. You
65
-now have 3 different ways to refer to the container: name
66
-`pybuilder_run`, short-id `c8b2e8228f11`, or long-id
67
-`c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9`.
68
-
69
-    $ sudo docker commit pybuilder_run /builds/github.com/shykes/helloflask/master
70
-    c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9
71
-
72
-## Run the new image to start the web worker
73
-
74
-Use the new image to create a new container with network port 5000
75
-mapped to a local port
76
-
77
-    $ sudo docker run -d -p 5000 --name web_worker /builds/github.com/shykes/helloflask/master /usr/local/bin/runapp
78
-
79
- - **"docker run -d "** run a command in a new container. We pass "-d"
80
-   so it runs as a daemon.
81
- - **"-p 5000"** the web app is going to listen on this port, so it
82
-   must be mapped from the container to the host system.
83
- - **/usr/local/bin/runapp** is the command which starts the web app.
84
-
85
-## View the container logs
86
-
87
-View the logs for the new `web_worker` container. If
-everything worked as planned, you should see the line
-`Running on http://0.0.0.0:5000/` in the log output.
90
-
91
-To exit the view without stopping the container, hit Ctrl-C, or open
92
-another terminal and continue with the example while watching the result
93
-in the logs.
94
-
95
-    $ sudo docker logs -f web_worker
96
-    * Running on http://0.0.0.0:5000/
97
-
98
-## See the webapp output
99
-
100
-Look up the public-facing port that is NAT-ed to the container's
-private port 5000, and store it in the `WEB_PORT`
-variable.
103
-
104
-Access the web app using the `curl` binary. If
105
-everything worked as planned you should see the line
106
-`Hello world!` inside of your console.
107
-
108
-    $ WEB_PORT=$(sudo docker port web_worker 5000 | awk -F: '{ print $2 }')
109
-
110
-    # install curl if necessary, then ...
111
-    $ curl http://127.0.0.1:$WEB_PORT
112
-    Hello world!
113
-
114
-## Clean up example containers and images
115
-
116
-    $ sudo docker ps --all
117
-
118
-List all the Docker containers (`--all`). If this
-container had already finished running, it will still be listed here
-with a status of `Exit 0`.
121
-
122
-    $ sudo docker stop web_worker
123
-    $ sudo docker rm web_worker pybuilder_run
124
-    $ sudo docker rmi /builds/github.com/shykes/helloflask/master shykes/pybuilder:latest
125
-
126
-And now stop the running web worker, and delete the containers, so that
127
-we can then delete the images that we used.
... ...
@@ -1,16 +1,8 @@
1
-page_title: Running a Redis service
1
+page_title: Dockerizing a Redis service
2 2
 page_description: Installing and running a Redis service
3 3
 page_keywords: docker, example, package installation, networking, redis
4 4
 
5
-# Redis Service
6
-
7
-> **Note**:
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12
-> - **If you don't like sudo** then see [*Giving non-root
13
->   access*](/installation/binaries/#dockergroup)
5
+# Dockerizing a Redis Service
14 6
 
15 7
 Very simple, no frills, Redis service attached to a web application
16 8
 using a link.
... ...
@@ -1,30 +1,21 @@
1
-page_title: Running a Riak service
1
+page_title: Dockerizing a Riak service
2 2
 page_description: Build a Docker image with Riak pre-installed
3 3
 page_keywords: docker, example, package installation, networking, riak
4 4
 
5
-# Riak Service
6
-
7
-> **Note**:
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12
-> - **If you don't like sudo** then see [*Giving non-root
13
->   access*](/installation/binaries/#dockergroup)
5
+# Dockerizing a Riak Service
14 6
 
15 7
 The goal of this example is to show you how to build a Docker image with
16 8
 Riak pre-installed.
17 9
 
18 10
 ## Creating a Dockerfile
19 11
 
20
-Create an empty file called Dockerfile:
12
+Create an empty file called `Dockerfile`:
21 13
 
22 14
     $ touch Dockerfile
23 15
 
24 16
 Next, define the parent image you want to use to build your image on top
25 17
 of. We'll use [Ubuntu](https://index.docker.io/_/ubuntu/) (tag:
26
-`latest`), which is available on the [docker
27
-index](http://index.docker.io):
18
+`latest`), which is available on [Docker Hub](http://index.docker.io):
28 19
 
29 20
     # Riak
30 21
     #
... ...
@@ -101,7 +92,7 @@ are started:
101 101
 ## Create a supervisord configuration file
102 102
 
103 103
 Create an empty file called `supervisord.conf`. Make
104
-sure it's at the same directory level as your Dockerfile:
104
+sure it's at the same directory level as your `Dockerfile`:
105 105
 
106 106
     touch supervisord.conf
107 107
 
... ...
@@ -1,17 +1,10 @@
1
-page_title: Running an SSH service
2
-page_description: Installing and running an sshd service
1
+page_title: Dockerizing an SSH service
2
+page_description: Installing and running an SSHd service on Docker
3 3
 page_keywords: docker, example, package installation, networking
4 4
 
5
-# SSH Daemon Service
5
+# Dockerizing an SSH Daemon Service
6 6
 
7
-> **Note:** 
8
-> - This example assumes you have Docker running in daemon mode. For
9
->   more information please see [*Check your Docker
10
->   install*](../hello_world/#running-examples).
11
-> - **If you don't like sudo** then see [*Giving non-root
12
->   access*](/installation/binaries/#dockergroup)
13
-
14
-The following Dockerfile sets up an sshd service in a container that you
7
+The following `Dockerfile` sets up an SSHd service in a container that you
15 8
 can use to connect to and inspect other containers' volumes, or to get
16 9
 quick access to a test container.
17 10
 
... ...
@@ -27,7 +20,7 @@ quick access to a test container.
27 27
     RUN apt-get update
28 28
 
29 29
     RUN apt-get install -y openssh-server
30
-    RUN mkdir /var/run/sshd 
30
+    RUN mkdir /var/run/sshd
31 31
     RUN echo 'root:screencast' |chpasswd
32 32
 
33 33
     EXPOSE 22
... ...
@@ -37,16 +30,15 @@ Build the image using:
37 37
 
38 38
     $ sudo docker build --rm -t eg_sshd .
39 39
 
40
-Then run it. You can then use `docker port` to find
41
-out what host port the container's port 22 is mapped to:
40
+Then run it. You can then use `docker port` to find out what host port
41
+the container's port 22 is mapped to:
42 42
 
43 43
     $ sudo docker run -d -P --name test_sshd eg_sshd
44 44
     $ sudo docker port test_sshd 22
45 45
     0.0.0.0:49154
46 46
 
47
-And now you can ssh to port `49154` on the Docker
48
-daemon's host IP address (`ip address` or
49
-`ifconfig` can tell you that):
47
+And now you can ssh to port `49154` on the Docker daemon's host IP
48
+address (`ip address` or `ifconfig` can tell you that):
50 49
 
51 50
     $ ssh root@192.168.1.2 -p 49154
52 51
     # The password is ``screencast``.
... ...
@@ -58,3 +50,4 @@ container, and then removing the image.
58 58
     $ sudo docker stop test_sshd
59 59
     $ sudo docker rm test_sshd
60 60
     $ sudo docker rmi eg_sshd
61
+
61 62
deleted file mode 100644
... ...
@@ -1,120 +0,0 @@
1
-page_title: Using Supervisor with Docker
2
-page_description: How to use Supervisor process management with Docker
3
-page_keywords: docker, supervisor, process management
4
-
5
-# Using Supervisor with Docker
6
-
7
-> **Note**:
8
-> 
9
-> - This example assumes you have Docker running in daemon mode. For
10
->   more information please see [*Check your Docker
11
->   install*](../hello_world/#running-examples).
12
-> - **If you don't like sudo** then see [*Giving non-root
13
->   access*](/installation/binaries/#dockergroup)
14
-
15
-Traditionally a Docker container runs a single process when it is
16
-launched, for example an Apache daemon or a SSH server daemon. Often
17
-though you want to run more than one process in a container. There are a
18
-number of ways you can achieve this ranging from using a simple Bash
19
-script as the value of your container's `CMD`
20
-instruction to installing a process management tool.
21
-
22
-In this example we're going to make use of the process management tool,
23
-[Supervisor](http://supervisord.org/), to manage multiple processes in
24
-our container. Using Supervisor allows us to better control, manage, and
25
-restart the processes we want to run. To demonstrate this we're going to
26
-install and manage both an SSH daemon and an Apache daemon.
27
-
28
-## Creating a Dockerfile
29
-
30
-Let's start by creating a basic `Dockerfile` for our
31
-new image.
32
-
33
-    FROM ubuntu:13.04
34
-    MAINTAINER examples@docker.io
35
-    RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
36
-    RUN apt-get update
37
-    RUN apt-get upgrade -y
38
-
39
-## Installing Supervisor
40
-
41
-We can now install our SSH and Apache daemons as well as Supervisor in
42
-our container.
43
-
44
-    RUN apt-get install -y openssh-server apache2 supervisor
45
-    RUN mkdir -p /var/run/sshd
46
-    RUN mkdir -p /var/log/supervisor
47
-
48
-Here we're installing the `openssh-server`,
49
-`apache2` and `supervisor`
50
-(which provides the Supervisor daemon) packages. We're also creating two
51
-new directories that are needed to run our SSH daemon and Supervisor.
52
-
53
-## Adding Supervisor's configuration file
54
-
55
-Now let's add a configuration file for Supervisor. The default file is
56
-called `supervisord.conf` and is located in
57
-`/etc/supervisor/conf.d/`.
58
-
59
-    ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
60
-
61
-Let's see what is inside our `supervisord.conf`
62
-file.
63
-
64
-    [supervisord]
65
-    nodaemon=true
66
-
67
-    [program:sshd]
68
-    command=/usr/sbin/sshd -D
69
-
70
-    [program:apache2]
71
-    command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
72
-
73
-The `supervisord.conf` configuration file contains
74
-directives that configure Supervisor and the processes it manages. The
75
-first block `[supervisord]` provides configuration
76
-for Supervisor itself. We're using one directive, `nodaemon`
77
-which tells Supervisor to run interactively rather than
78
-daemonize.
79
-
80
-The next two blocks manage the services we wish to control. Each block
81
-controls a separate process. The blocks contain a single directive,
82
-`command`, which specifies what command to run to
83
-start each process.
84
-
85
-## Exposing ports and running Supervisor
86
-
87
-Now let's finish our `Dockerfile` by exposing some
88
-required ports and specifying the `CMD` instruction
89
-to start Supervisor when our container launches.
90
-
91
-    EXPOSE 22 80
92
-    CMD ["/usr/bin/supervisord"]
93
-
94
-Here we've exposed ports 22 and 80 on the container and we're running
95
-the `/usr/bin/supervisord` binary when the container
96
-launches.
97
-
98
-## Building our image
99
-
100
-We can now build our new image.
101
-
102
-    $ sudo docker build -t <yourname>/supervisord .
103
-
104
-## Running our Supervisor container
105
-
106
-Once we've got a built image we can launch a container from it.
107
-
108
-    $ sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
109
-    2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
110
-    2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
111
-    2013-11-25 18:53:22,342 INFO supervisord started with pid 1
112
-    2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6
113
-    2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
114
-    . . .
115
-
116
-We've launched a new container interactively using the `docker run` command.
117
-That container has run Supervisor and launched the SSH and Apache daemons with
118
-it. We've specified the `-p` flag to expose ports 22 and 80. From here we can
119
-now identify the exposed ports and connect to one or both of the SSH and Apache
120
-daemons.
... ...
@@ -142,12 +142,11 @@ running in parallel.
142 142
 ### How do I connect Docker containers?
143 143
 
144 144
 Currently the recommended way to link containers is via the link
145
-primitive. You can see details of how to [work with links here](
146
-http://docs.docker.io/use/working_with_links_names/).
145
+primitive. You can see details of how to [work with links
146
+here](/userguide/dockerlinks).
147 147
 
148 148
 Also useful when enabling more flexible service portability is the
149
-[Ambassador linking pattern](
150
-http://docs.docker.io/use/ambassador_pattern_linking/).
149
+[Ambassador linking pattern](/articles/ambassador_pattern_linking/).
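
The link primitive can be sketched from the command line like this (a sketch; the `training/postgres` and `training/webapp` image names are assumptions borrowed for illustration):

    $ sudo docker run -d --name db training/postgres
    $ sudo docker run -d --name web --link db:db training/webapp

The `--link db:db` flag makes the `db` container reachable from inside `web` under the alias `db`, with its connection details exposed as environment variables.
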
151 150
 
152 151
 ### How do I run more than one process in a Docker container?
153 152
 
... ...
@@ -156,8 +155,7 @@ http://supervisord.org/), runit, s6, or daemontools can do the trick.
156 156
 Docker will start up the process management daemon which will then fork
157 157
 to run additional processes. As long as the processor manager daemon continues
158 158
 to run, the container will continue to as well. You can see a more substantial
159
-example [that uses supervisord here](
160
-http://docs.docker.io/examples/using_supervisord/).
159
+example [that uses supervisord here](/articles/using_supervisord/).
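
As a minimal illustration, the Supervisor configuration from that example boils down to the following (assuming `openssh-server` and `apache2` are installed in the image):

    [supervisord]
    ; run in the foreground so Supervisor is the container's main process
    nodaemon=true

    [program:sshd]
    command=/usr/sbin/sshd -D

    [program:apache2]
    command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
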
161 160
 
162 161
 ### What platforms does Docker run on?
163 162
 
... ...
@@ -207,5 +205,5 @@ You can find more answers on:
207 207
 - [Ask questions on Stackoverflow](http://stackoverflow.com/search?q=docker)
208 208
 - [Join the conversation on Twitter](http://twitter.com/docker)
209 209
 
210
-Looking for something else to read? Checkout the [*Hello World*](
211
-../examples/hello_world/#hello-world) example.
210
+Looking for something else to read? Check out the [User
211
+Guide](/userguide/).
... ...
@@ -6,8 +6,6 @@ page_keywords: docker, introduction, documentation, about, technology, understan
6 6
 
7 7
 **Develop, Ship and Run Any Application, Anywhere**
8 8
 
9
-## Introduction
10
-
11 9
 [**Docker**](https://www.docker.io) is a platform for developers and
12 10
 sysadmins to develop, ship, and run applications.  Docker consists of:
13 11
 
... ...
@@ -78,22 +76,17 @@ section](introduction/understanding-docker.md):
78 78
 > [Click here to go to the Understanding
79 79
 > Docker section](introduction/understanding-docker.md).
80 80
 
81
-Next we get [**practical** with the Working with Docker
82
-section](introduction/working-with-docker.md) and you can learn about:
81
+### Installation Guides
83 82
 
84
- - Docker on the command line;
85
- - Get introduced to your first Docker commands;
86
- - Get to know your way around the basics of Docker operation.
83
+Then we'll learn how to install Docker on a variety of platforms in our
84
+[installation](/installation/#installation) section.
87 85
 
88
-> [Click here to go to the Working with
89
-> Docker section](introduction/working-with-docker.md).
86
+> [Click here to go to the Installation
87
+> section](/installation/#installation).
90 88
 
91
-If you want to see how to install Docker you can jump to the
92
-[installation](/installation/#installation) section.
89
+### Docker User Guide
90
+
91
+Once you've got Docker installed, we recommend you step through our [Docker User Guide](/userguide/), which will give you an in-depth introduction to Docker.
92
+
93
+> [Click here to go to the Docker User Guide](/userguide/).
93 94
 
94
-> **Note**:
95
-> We know how valuable your time is, so if you want to get started
96
-> with Docker straight away don't hesitate to jump to [Working with
97
-> Docker](introduction/working-with-docker.md). For a fuller
98
-> understanding of Docker though we do recommend you read [Understanding
99
-> Docker]( introduction/understanding-docker.md).
... ...
@@ -53,8 +53,7 @@ add the *ubuntu* user to it so that you don't have to use
53 53
 `sudo` for every Docker command.
54 54
 
55 55
 Once you've got Docker installed, you're ready to try it out – head on
56
-over to the [*First steps with Docker*](/use/basics/) or
57
-[*Examples*](/examples/) section.
56
+over to the [User Guide](/userguide/).
58 57
 
59 58
 ## Amazon QuickStart (Release Candidate - March 2014)
60 59
 
... ...
@@ -94,4 +93,4 @@ QuickStart*](#amazon-quickstart) to pick an image (or use one of your
94 94
 own) and skip the step with the *User Data*. Then continue with the
95 95
 [*Ubuntu*](../ubuntulinux/#ubuntu-linux) instructions.
96 96
 
97
-Continue with the [*Hello World*](/examples/hello_world/#hello-world) example.
97
+Continue with the [User Guide](/userguide/).
... ...
@@ -56,20 +56,17 @@ Linux kernel (it even builds on OSX!).
56 56
 
57 57
 ## Giving non-root access
58 58
 
59
-The `docker` daemon always runs as the root user,
60
-and since Docker version 0.5.2, the `docker` daemon
61
-binds to a Unix socket instead of a TCP port. By default that Unix
62
-socket is owned by the user *root*, and so, by default, you can access
63
-it with `sudo`.
64
-
65
-Starting in version 0.5.3, if you (or your Docker installer) create a
66
-Unix group called *docker* and add users to it, then the
67
-`docker` daemon will make the ownership of the Unix
68
-socket read/writable by the *docker* group when the daemon starts. The
69
-`docker` daemon must always run as the root user,
70
-but if you run the `docker` client as a user in the
71
-*docker* group then you don't need to add `sudo` to
72
-all the client commands.
59
+The `docker` daemon always runs as the root user, and the `docker`
60
+daemon binds to a Unix socket instead of a TCP port. By default that
61
+Unix socket is owned by the user *root*, and so, by default, you can
62
+access it with `sudo`.
63
+
64
+If you (or your Docker installer) create a Unix group called *docker*
65
+and add users to it, then the `docker` daemon will make the ownership of
66
+the Unix socket read/writable by the *docker* group when the daemon
67
+starts. The `docker` daemon must always run as the root user, but if you
68
+run the `docker` client as a user in the *docker* group then you don't
69
+need to add `sudo` to all the client commands.
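
Setting this up can be sketched as follows (a sketch; `${USER}` stands in for whichever account should get access):

    # Add the docker group if it doesn't already exist.
    $ sudo groupadd docker

    # Add your user to the docker group, then log out and back in.
    $ sudo gpasswd -a ${USER} docker

    # Restart the Docker daemon so the socket picks up the group ownership.
    $ sudo service docker restart
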
73 70
 
74 71
 > **Warning**: 
75 72
 > The *docker* group (or the group specified with `-G`) is root-equivalent;
... ...
@@ -93,4 +90,4 @@ Then follow the regular installation steps.
93 93
     # run a container and open an interactive shell in the container
94 94
     $ sudo ./docker run -i -t ubuntu /bin/bash
95 95
 
96
-Continue with the [*Hello World*](/examples/hello_world/#hello-world) example.
96
+Continue with the [User Guide](/userguide/).
... ...
@@ -2,23 +2,12 @@ page_title: Installation on CentOS
2 2
 page_description: Instructions for installing Docker on CentOS
3 3
 page_keywords: Docker, Docker documentation, requirements, linux, centos, epel, docker.io, docker-io
4 4
 
5
-# CentOS 
5
+# CentOS
6 6
 
7
-> **Note**:
8
-> Docker is still under heavy development! We don't recommend using it in
9
-> production yet, but we're getting closer with each release. Please see
10
-> our blog post, [Getting to Docker 1.0](
11
-> http://blog.docker.io/2013/08/getting-to-docker-1-0/)
12
-
13
-> **Note**:
14
-> This is a community contributed installation path. The only `official`
15
-> installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
16
-> installation path. This version may be out of date because it depends on
17
-> some binaries to be updated and published.
18
-
19
-The Docker package is available via the EPEL repository. These instructions work
20
-for CentOS 6 and later. They will likely work for other binary compatible EL6 
21
-distributions such as Scientific Linux, but they haven't been tested.
7
+The Docker package is available via the EPEL repository. These
8
+instructions work for CentOS 6 and later. They will likely work for
9
+other binary compatible EL6 distributions such as Scientific Linux, but
10
+they haven't been tested.
22 11
 
23 12
 Please note that this package is part of [Extra Packages for Enterprise
24 13
 Linux (EPEL)](https://fedoraproject.org/wiki/EPEL), a community effort
... ...
@@ -27,13 +16,13 @@ to create and maintain additional packages for the RHEL distribution.
27 27
 Also note that due to the current Docker limitations, Docker is able to
28 28
 run only on the **64 bit** architecture.
29 29
 
30
-To run Docker, you will need [CentOS6](http://www.centos.org) or higher, with
31
-a kernel version 2.6.32-431 or higher as this has specific kernel fixes
32
-to allow Docker to run. 
30
+To run Docker, you will need [CentOS6](http://www.centos.org) or higher,
31
+with a kernel version 2.6.32-431 or higher as this has specific kernel
32
+fixes to allow Docker to run.
33 33
 
34 34
 ## Installation
35 35
 
36
-Firstly, you need to ensure you have the EPEL repository enabled. Please 
36
+Firstly, you need to ensure you have the EPEL repository enabled. Please
37 37
 follow the [EPEL installation instructions](
38 38
 https://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F).
39 39
 
... ...
@@ -59,7 +48,7 @@ If we want Docker to start at boot, we should also:
59 59
     $ sudo chkconfig docker on
60 60
 
61 61
 Now let's verify that Docker is working. First we'll need to get the latest
62
-centos image.
62
+`centos` image.
63 63
 
64 64
     $ sudo docker pull centos:latest
65 65
 
... ...
@@ -73,15 +62,15 @@ This should generate some output similar to:
73 73
     REPOSITORY      TAG             IMAGE ID          CREATED             VIRTUAL SIZE
74 74
     centos          latest          0b443ba03958      2 hours ago         297.6 MB
75 75
 
76
-Run a simple bash shell to test the image:     
76
+Run a simple bash shell to test the image:
77 77
 
78 78
     $ sudo docker run -i -t centos /bin/bash
79 79
 
80
-If everything is working properly, you'll get a simple bash prompt. Type exit to continue.
80
+If everything is working properly, you'll get a simple bash prompt. Type
81
+`exit` to continue.
81 82
 
82
-**Done!**
83
-You can either continue with the [*Hello World*](/examples/hello_world/#hello-world) example,
84
-or explore and build on the images yourself.
83
+**Done!** You can either continue with the [Docker User
84
+Guide](/userguide/) or explore and build on the images yourself.
85 85
 
86 86
 ## Issues?
87 87
 
... ...
@@ -38,18 +38,18 @@ Which should download the `ubuntu` image, and then start `bash` in a container.
38 38
 
39 39
 ### Giving non-root access
40 40
 
41
-The `docker` daemon always runs as the `root` user, and since Docker
42
-version 0.5.2, the `docker` daemon binds to a Unix socket instead of a
43
-TCP port. By default that Unix socket is owned by the user `root`, and
44
-so, by default, you can access it with `sudo`.
45
-
46
-Starting in version 0.5.3, if you (or your Docker installer) create a
47
-Unix group called `docker` and add users to it, then the `docker` daemon
48
-will make the ownership of the Unix socket read/writable by the `docker`
49
-group when the daemon starts. The `docker` daemon must always run as the
50
-root user, but if you run the `docker` client as a user in the `docker`
51
-group then you don't need to add `sudo` to all the client commands. From
52
-Docker 0.9.0 you can use the `-G` flag to specify an alternative group.
41
+The `docker` daemon always runs as the `root` user and the `docker`
42
+daemon binds to a Unix socket instead of a TCP port. By default that
43
+Unix socket is owned by the user `root`, and so, by default, you can
44
+access it with `sudo`.
45
+
46
+If you (or your Docker installer) create a Unix group called `docker`
47
+and add users to it, then the `docker` daemon will make the ownership of
48
+the Unix socket read/writable by the `docker` group when the daemon
49
+starts. The `docker` daemon must always run as the root user, but if you
50
+run the `docker` client as a user in the `docker` group then you don't
51
+need to add `sudo` to all the client commands. From Docker 0.9.0 you can
52
+use the `-G` flag to specify an alternative group.
53 53
 
54 54
 > **Warning**: 
55 55
 > The `docker` group (or the group specified with the `-G` flag) is
... ...
@@ -70,3 +70,7 @@ Docker 0.9.0 you can use the `-G` flag to specify an alternative group.
70 70
     # Restart the Docker daemon.
71 71
     $ sudo service docker restart
72 72
 
73
+## What next?
74
+
75
+Continue with the [User Guide](/userguide/).
76
+
... ...
@@ -48,5 +48,7 @@ Now let's verify that Docker is working.
48 48
 
49 49
     $ sudo docker run -i -t fedora /bin/bash
50 50
 
51
-**Done!**, now continue with the [*Hello
52
-World*](/examples/hello_world/#hello-world) example.
51
+## What next?
52
+
53
+Continue with the [User Guide](/userguide/).
54
+
... ...
@@ -40,9 +40,8 @@ virtual machine and run the Docker daemon.
40 40
 (but least secure) is to just hit [Enter]. This passphrase is used by the
41 41
 `boot2docker ssh` command.
42 42
 
43
-
44
-Once you have an initialized virtual machine, you can `boot2docker stop` and 
45
-`boot2docker start` it.
43
+Once you have an initialized virtual machine, you can `boot2docker stop`
44
+and `boot2docker start` it.
46 45
 
47 46
 ## Upgrading
48 47
 
... ...
@@ -60,29 +59,19 @@ To upgrade:
60 60
 	boot2docker start
61 61
 ```
62 62
 
63
-
64 63
 ## Running Docker
65 64
 
66 65
 From your terminal, you can try the “hello world” example. Run:
67 66
 
68 67
     $ docker run ubuntu echo hello world
69 68
 
70
-This will download the ubuntu image and print hello world.
69
+This will download the `ubuntu` image and print `hello world`.
71 70
 
72
-# Further details
73
-
74
-The Boot2Docker management tool provides some commands:
75
-
76
-```
77
-$ ./boot2docker
78
-Usage: ./boot2docker [<options>] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|delete|download|version} [<args>]
79
-```
71
+## Container port redirection
80 72
 
81
-## Container port redirection 
82
-
83
-The latest version of `boot2docker` sets up two network adaptors: one using NAT
73
+The latest version of `boot2docker` sets up two network adapters: one using NAT
84 74
 to allow the VM to download images and files from the Internet, and one host-only
85
-network adaptor to which the container's ports will be exposed on.
75
+network adapter on which the container's ports will be exposed.
86 76
 
87 77
 If you run a container with an exposed port:
88 78
 
... ...
@@ -103,6 +92,17 @@ If you want to share container ports with other computers on your LAN, you will
103 103
 need to set up [NAT adapter-based port forwarding](
104 104
 https://github.com/boot2docker/boot2docker/blob/master/doc/WORKAROUNDS.md)
105 105
 
106
+# Further details
107
+
108
+The Boot2Docker management tool provides some commands:
109
+
110
+```
111
+$ ./boot2docker
112
+Usage: ./boot2docker [<options>]
113
+{help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|delete|download|version}
114
+[<args>]
115
+```
106 116
 
117
+Continue with the [User Guide](/userguide/).
107 118
 
108 119
 For further information or to report issues, please see the [Boot2Docker site](http://boot2docker.io).
... ...
@@ -48,5 +48,6 @@ Docker daemon.
48 48
     $ sudo usermod -G docker <username>
49 49
 
50 50
 **Done!**
51
-Now continue with the [*Hello World*](
52
-/examples/hello_world/#hello-world) example.
51
+
52
+Continue with the [User Guide](/userguide/).
53
+
... ...
@@ -56,7 +56,8 @@ Now let's verify that Docker is working.
56 56
     $ sudo docker run -i -t fedora /bin/bash
57 57
 
58 58
 **Done!**
59
-Now continue with the [*Hello World*](/examples/hello_world/#hello-world) example.
59
+
60
+Continue with the [User Guide](/userguide/).
60 61
 
61 62
 ## Issues?
62 63
 
... ...
@@ -24,5 +24,7 @@ page_keywords: IBM SoftLayer, virtualization, cloud, docker, documentation, inst
24 24
 7. Then continue with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
25 25
    instructions.
26 26
 
27
-Continue with the [*Hello World*](
28
-/examples/hello_world/#hello-world) example.
27
+## What next?
28
+
29
+Continue with the [User Guide](/userguide/).
30
+
... ...
@@ -111,8 +111,7 @@ Now verify that the installation has worked by downloading the
111 111
 
112 112
 Type `exit` to exit
113 113
 
114
-**Done!**, now continue with the [*Hello
115
-World*](/examples/hello_world/#hello-world) example.
114
+**Done!** Continue with the [User Guide](/userguide/).
116 115
 
117 116
 ## Ubuntu Raring 13.04 and Saucy 13.10 (64 bit)
118 117
 
... ...
@@ -159,8 +158,7 @@ Now verify that the installation has worked by downloading the
159 159
 
160 160
 Type `exit` to exit
161 161
 
162
-**Done!**, now continue with the [*Hello
163
-World*](/examples/hello_world/#hello-world) example.
162
+**Done!** Now continue with the [User Guide](/userguide/).
164 163
 
165 164
 ### Giving non-root access
166 165
 
... ...
@@ -7,7 +7,7 @@ page_keywords: docker, introduction, documentation, about, technology, understan
7 7
 **What is Docker?**
8 8
 
9 9
 Docker is a platform for developing, shipping, and running applications.
10
-Docker is designed to deliver your applications faster.  With Docker you
10
+Docker is designed to deliver your applications faster. With Docker you
11 11
 can separate your applications from your infrastructure AND treat your
12 12
 infrastructure like a managed application. We want to help you ship code
13 13
 faster, test faster, deploy faster and shorten the cycle between writing
... ...
@@ -317,15 +317,12 @@ Zones.
317 317
 
318 318
 ## Next steps
319 319
 
320
-### Learning how to use Docker
321
-
322
-Visit [Working with Docker](working-with-docker.md).
323
-
324 320
 ### Installing Docker
325 321
 
326 322
 Visit the [installation](/installation/#installation) section.
327 323
 
328
-### Get the whole story
324
+### The Docker User Guide
325
+
326
+[Learn how to use Docker](/userguide/).
329 327
 
330
-[https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)
331 328
 
332 329
deleted file mode 100644
... ...
@@ -1,292 +0,0 @@
1
-page_title: Introduction to working with Docker
2
-page_description: Introduction to working with Docker and Docker commands.
3
-page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
4
-
5
-# An Introduction to working with Docker
6
-
7
-**Getting started with Docker**
8
-
9
-> **Note:** 
10
-> If you would like to see how a specific command
11
-> works, check out the glossary of all available client
12
-> commands on our [Commands Reference](/reference/commandline/cli).
13
-
14
-## Introduction
15
-
16
-In the [Understanding Docker](understanding-docker.md) section we
17
-covered the components that make up Docker, learned about the underlying
18
-technology and saw *how* everything works.
19
-
20
-Now, let's get an introduction to the basics of interacting with Docker.
21
-
22
-> **Note:** 
23
-> This page assumes you have a host with a running Docker
24
-> daemon and access to a Docker client. To see how to install Docker on
25
-> a variety of platforms see the [installation
26
-> section](/installation/#installation).
27
-
28
-## How to use the client
29
-
30
-The client provides you a command-line interface to Docker. It is
31
-accessed by running the `docker` binary.
32
-
33
-> **Tip:** 
34
-> The below instructions can be considered a summary of our
35
-> [interactive tutorial](https://www.docker.io/gettingstarted). If you
36
-> prefer a more hands-on approach without installing anything, why not
37
-> give that a shot and check out the
38
-> [tutorial](https://www.docker.io/gettingstarted).
39
-
40
-The `docker` client usage is pretty simple. Each action you can take
41
-with Docker is a command and each command can take a series of
42
-flags and arguments.
43
-
44
-    # Usage:  [sudo] docker [flags] [command] [arguments] ..
45
-    # Example:
46
-    $ docker run -i -t ubuntu /bin/bash
47
-
48
-## Using the Docker client
49
-
50
-Let's get started with the Docker client by running our first Docker
51
-command. We're going to use the `docker version` command to return
52
-version information on the currently installed Docker client and daemon.
53
-
54
-    # Usage: [sudo] docker version
55
-    # Example:
56
-    $ docker version
57
-
58
-This command will not only provide you the version of Docker client and
59
-daemon you are using, but also the version of Go (the programming
60
-language powering Docker).
61
-
62
-    Client version: 0.8.0
63
-    Go version (client): go1.2
64
-
65
-    Git commit (client): cc3a8c8
66
-    Server version: 0.8.0
67
-
68
-    Git commit (server): cc3a8c8
69
-    Go version (server): go1.2
70
-
71
-    Last stable version: 0.8.0
72
-
73
-### Seeing what the Docker client can do
74
-
75
-We can see all of the commands available to us with the Docker client by
76
-running the `docker` binary without any options.
77
-
78
-    # Usage: [sudo] docker
79
-    # Example:
80
-    $ docker
81
-
82
-You will see a list of all currently available commands.
83
-
84
-    Commands:
85
-         attach    Attach to a running container
86
-         build     Build an image from a Dockerfile
87
-         commit    Create a new image from a container's changes
88
-    . . .
89
-
90
-### Seeing Docker command usage
91
-
92
-You can also zoom in and review the usage for specific Docker commands.
93
-
94
-Try typing `docker` followed by a `[command]` to see the usage for that
95
-command:
96
-
97
-    # Usage: [sudo] docker [command] [--help]
98
-    # Example:
99
-    $ docker attach
100
-    Help output . . .
101
-
102
-You can also pass the `--help` flag to the `docker` binary.
103
-
104
-    $ docker attach --help
105
-
106
-This will display the help text and all available flags:
107
-
108
-    Usage: docker attach [OPTIONS] CONTAINER
109
-
110
-    Attach to a running container
111
-
112
-      --no-stdin=false: Do not attach stdin
113
-      --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
114
-
115
-## Working with images
116
-
117
-Let's get started with using Docker by working with Docker images, the
118
-building blocks of Docker containers.
119
-
120
-### Docker Images
121
-
122
-As we've discovered a Docker image is a read-only template that we build
123
-containers from. Every Docker container is launched from an image. You can
124
-use both images provided by Docker, such as the base `ubuntu` image,
125
-as well as images built by others. For example we can build an image that
126
-runs Apache and our own web application as a starting point to launch containers.
127
-
128
-### Searching for images
129
-
130
-To search for a Docker image we use the `docker search` command. The
131
-`docker search` command returns a list of all images that match your
132
-search criteria, together with some useful information about that image.
133
-
134
-This information includes social metrics like how many other people like
135
-the image: we call these "likes" *stars*. We also tell you if an image
136
-is *trusted*. A *trusted* image is built from a known source and allows
137
-you to introspect in greater detail how the image is constructed.
138
-
139
-    # Usage: [sudo] docker search [image name]
140
-    # Example:
141
-    $ docker search nginx
142
-
143
-    NAME                               DESCRIPTION                                     STARS  OFFICIAL   TRUSTED
144
-    dockerfile/nginx                   Trusted Nginx (http://nginx.org/) Build         6                 [OK]
145
-    paintedfox/nginx-php5              A docker image for running Nginx with PHP5.     3                 [OK]
146
-    dockerfiles/django-uwsgi-nginx     Dockerfile and configuration files to buil...   2                 [OK]
147
-    . . .
148
-
149
-> **Note:** 
150
-> To learn more about trusted builds, check out
151
-> [this](http://blog.docker.io/2013/11/introducing-trusted-builds) blog
152
-> post.
153
-
154
-### Downloading an image
155
-
156
-Once we find an image we'd like to download we can pull it down from
157
-[Docker.io](https://index.docker.io) using the `docker pull` command.
158
-
159
-    # Usage: [sudo] docker pull [image name]
160
-    # Example:
161
-    $ docker pull dockerfile/nginx
162
-
163
-    Pulling repository dockerfile/nginx
164
-    0ade68db1d05: Pulling dependent layers
165
-    27cf78414709: Download complete
166
-    b750fe79269d: Download complete
167
-    . . .
168
-
169
-As you can see, Docker will download, one by one, all the layers forming
170
-the image.
171
-
172
-### Listing available images
173
-
174
-You may already have some images you've pulled down or built yourself
175
-and you can use the `docker images` command to see the images
176
-available to you locally.
177
-
178
-    # Usage: [sudo] docker images
179
-    # Example:
180
-    $ docker images
181
-
182
-    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
183
-    myUserName/nginx    latest              a0d6c70867d2        41 seconds ago      578.8 MB
184
-    nginx               latest              173c2dd28ab2        3 minutes ago       578.8 MB
185
-    dockerfile/nginx    latest              0ade68db1d05        3 weeks ago         578.8 MB
186
-
187
-### Building our own images
188
-
189
-You can build your own images using a `Dockerfile` and the `docker
190
-build` command. The `Dockerfile` is very flexible and provides a
191
-powerful set of instructions for building applications into Docker
192
-images. To learn more about the `Dockerfile` see the [`Dockerfile`
193
-Reference](/reference/builder/) and [tutorial](https://www.docker.io/learn/dockerfile/).
194
-
195
-## Working with containers
196
-
197
-### Docker Containers
198
-
199
-Docker containers run your applications and are built from Docker
200
-images. In order to create or start a container, you need an image. This
201
-could be the base `ubuntu` image or an image built and shared with you
202
-or an image you've built yourself.
203
-
204
-### Running a new container from an image
205
-
206
-The easiest way to create a new container is to *run* one from an image
207
-using the `docker run` command.
208
-
209
-    # Usage: [sudo] docker run [arguments] ..
210
-    # Example:
211
-    $ docker run -d --name nginx_web nginx /usr/sbin/nginx
212
-    25137497b2749e226dd08f84a17e4b2be114ddf4ada04125f130ebfe0f1a03d3
213
-
214
-This will create a new container from an image called `nginx` which will
215
-launch the command `/usr/sbin/nginx` when the container is run. We've
216
-also given our container a name, `nginx_web`. When the container is run
217
-Docker will return a container ID, a long string that uniquely
218
-identifies our container. We can use the container's name or its ID string
219
-to work with it.
220
-
221
-Containers can be run in two modes:
222
-
223
-* Interactive;
224
-* Daemonized;
225
-
226
-An interactive container runs in the foreground and you can connect to
227
-it and interact with it, for example sign into a shell on that
228
-container. A daemonized container runs in the background.
229
-
230
-A container will run as long as the process you have launched inside it
231
-is running, for example if the `/usr/bin/nginx` process stops running
232
-the container will also stop.
233
-
234
-### Listing containers
235
-
236
-We can see a list of all the containers on our host using the `docker
237
-ps` command. By default the `docker ps` command only shows running
238
-containers. But we can also add the `-a` flag to show *all* containers:
239
-both running and stopped.
240
-
241
-    # Usage: [sudo] docker ps [-a]
242
-    # Example:
243
-    $ docker ps
244
-
245
-    CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                NAMES
246
-    842a50a13032        nginx:latest              nginx               35 minutes ago      Up 30 minutes       0.0.0.0:80->80/tcp   nginx_web
247
-
248
-### Stopping a container
249
-
250
-You can use the `docker stop` command to stop an active container. This
251
-will gracefully end the active process.
252
-
253
-    # Usage: [sudo] docker stop [container ID]
254
-    # Example:
255
-    $ docker stop nginx_web
256
-    nginx_web
257
-
258
-If the `docker stop` command succeeds it will return the name of
259
-the container it has stopped.
260
-
261
-> **Note:** 
262
-> If you want to stop a container more aggressively, you can use the
263
-> `docker kill` command.
264
-
265
-### Starting a container
266
-
267
-Stopped containers can be started again.
268
-
269
-    # Usage: [sudo] docker start [container ID]
270
-    # Example:
271
-    $ docker start nginx_web
272
-    nginx_web
273
-
274
-If the `docker start` command succeeds it will return the name of the
275
-freshly started container.
276
-
277
-## Next steps
278
-
279
-Here we've learned the basics of how to interact with Docker images and
280
-how to run and work with our first container.
281
-
282
-### Understanding Docker
283
-
284
-Visit [Understanding Docker](understanding-docker.md).
285
-
286
-### Installing Docker
287
-
288
-Visit the [installation](/installation/#installation) section.
289
-
290
-### Get the whole story
291
-
292
-[https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)
... ...
@@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
7 7
 ## 1. Brief introduction
8 8
 
9 9
  - The Remote API has replaced rcli
10
- - The daemon listens on `unix:///var/run/docker.sock` but you can
11
-   [*Bind Docker to another host/port or a Unix socket*](
12
-   /use/basics/#bind-docker).
10
+ - The daemon listens on `unix:///var/run/docker.sock` but you can bind
11
+   Docker to another host/port or a Unix socket.
13 12
  - The API tends to be REST, but for some complex commands, like `attach`
14 13
    or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
15 14
    and `stderr`
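As a sketch of talking to that default Unix socket directly, a small helper can wrap `curl` (this assumes `curl` 7.40+ for `--unix-socket` and a local daemon, so it is defined rather than run here):

```shell
# Hypothetical helper: issue a GET against the Remote API over the
# default Unix socket. Requires curl >= 7.40 and a running daemon.
docker_api() {
    curl --silent --unix-socket /var/run/docker.sock "http://localhost$1"
}
# Example (not run here): docker_api /containers/json
```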
... ...
@@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
7 7
 ## 1. Brief introduction
8 8
 
9 9
  - The Remote API has replaced rcli
10
- - The daemon listens on `unix:///var/run/docker.sock` but you can
11
-   [*Bind Docker to another host/port or a Unix socket*](
12
-   /use/basics/#bind-docker).
10
+ - The daemon listens on `unix:///var/run/docker.sock` but you can bind
11
+   Docker to another host/port or a Unix socket.
13 12
  - The API tends to be REST, but for some complex commands, like `attach`
14 13
    or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
15 14
    and `stderr`
... ...
@@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
7 7
 # 1. Brief introduction
8 8
 
9 9
  - The Remote API has replaced rcli
10
- - The daemon listens on `unix:///var/run/docker.sock` but you can
11
-   [*Bind Docker to another host/port or a Unix socket*](
12
-   /use/basics/#bind-docker).
10
+ - The daemon listens on `unix:///var/run/docker.sock` but you can bind
11
+   Docker to another host/port or a Unix socket.
13 12
  - The API tends to be REST, but for some complex commands, like `attach`
14 13
    or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
15 14
    and `stderr`
... ...
@@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
7 7
 # 1. Brief introduction
8 8
 
9 9
  - The Remote API has replaced rcli
10
- - The daemon listens on `unix:///var/run/docker.sock` but you can
11
-   [*Bind Docker to another host/port or a Unix socket*](
12
-   /use/basics/#bind-docker).
10
+ - The daemon listens on `unix:///var/run/docker.sock` but you can bind
11
+   Docker to another host/port or a Unix socket.
13 12
  - The API tends to be REST, but for some complex commands, like `attach`
14 13
    or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
15 14
    and `stderr`
... ...
@@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
7 7
 # 1. Brief introduction
8 8
 
9 9
  - The Remote API has replaced rcli
10
- - The daemon listens on `unix:///var/run/docker.sock` but you can
11
-   [*Bind Docker to another host/port or a Unix socket*](
12
-   /use/basics/#bind-docker).
10
+ - The daemon listens on `unix:///var/run/docker.sock` but you can bind
11
+   Docker to another host/port or a Unix socket.
13 12
  - The API tends to be REST, but for some complex commands, like `attach`
14 13
    or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
15 14
    and `stderr`
... ...
@@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
7 7
 # 1. Brief introduction
8 8
 
9 9
  - The Remote API has replaced rcli
10
- - The daemon listens on `unix:///var/run/docker.sock` but you can
11
-   [*Bind Docker to another host/port or a Unix socket*](
12
-   /use/basics/#bind-docker).
10
+ - The daemon listens on `unix:///var/run/docker.sock` but you can bind
11
+   Docker to another host/port or a Unix socket.
13 12
  - The API tends to be REST, but for some complex commands, like `attach`
14 13
    or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
15 14
    and `stderr`
... ...
@@ -57,7 +57,7 @@ accelerating `docker build` significantly (indicated by `Using cache`):
57 57
 
58 58
 When you're done with your build, you're ready to look into
59 59
 [*Pushing a repository to its registry*](
60
-/use/workingwithrepository/#image-push).
60
+/userguide/dockerrepos/#image-push).
61 61
 
62 62
 ## Format
63 63
 
... ...
@@ -95,7 +95,7 @@ The `FROM` instruction sets the [*Base Image*](/terms/image/#base-image-def)
95 95
 for subsequent instructions. As such, a valid Dockerfile must have `FROM` as
96 96
 its first instruction. The image can be any valid image – it is especially easy
97 97
 to start by **pulling an image** from the [*Public Repositories*](
98
-/use/workingwithrepository/#using-public-repositories).
98
+/userguide/dockerrepos/#using-public-repositories).
99 99
 
100 100
 `FROM` must be the first non-comment instruction in the Dockerfile.
101 101
 
... ...
@@ -200,10 +200,8 @@ default specified in CMD.
200 200
 
201 201
 The `EXPOSE` instructions informs Docker that the container will listen on the
202 202
 specified network ports at runtime. Docker uses this information to interconnect
203
-containers using links (see
204
-[*links*](/use/working_with_links_names/#working-with-links-names)),
205
-and to setup port redirection on the host system (see [*Redirect Ports*](
206
-/use/port_redirection/#port-redirection)).
203
+containers using links (see the [Docker User
204
+Guide](/userguide/dockerlinks)).
207 205
 
208 206
 ## ENV
209 207
 
... ...
@@ -380,7 +378,7 @@ and mark it as holding externally mounted volumes from native host or other
380 380
 containers. The value can be a JSON array, `VOLUME ["/var/log/"]`, or a plain
381 381
 string, `VOLUME /var/log`. For more information/examples and mounting
382 382
 instructions via the Docker client, refer to [*Share Directories via Volumes*](
383
-/use/working_with_volumes/#volume-def) documentation.
383
+/userguide/dockervolumes/#volume-def) documentation.
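A hedged sketch of the client-side counterpart: mounting a host directory over a volume at run time. The host path and names are made up, and the command is wrapped in a function since it needs a running daemon:

```shell
# Illustrative only: mounts the host directory /srv/logs over the
# container's /var/log volume at run time.
run_with_volume() {
    docker run -d -v /srv/logs:/var/log --name logger ubuntu
}
```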
384 384
 
385 385
 ## USER
386 386
 
... ...
@@ -602,15 +602,6 @@ contains complex json object, so to grab it as JSON, you use
602 602
 The main process inside the container will be sent SIGKILL, or any
603 603
 signal specified with option `--signal`.
604 604
 
605
-### Known Issues (kill)
606
-
607
-- [Issue 197](https://github.com/dotcloud/docker/issues/197) indicates
608
-  that `docker kill` may leave directories behind
609
-  and make it difficult to remove the container.
610
-- [Issue 3844](https://github.com/dotcloud/docker/issues/3844) lxc
611
-  1.0.0 beta3 removed `lcx-kill` which is used by
612
-  Docker versions before 0.8.0; see the issue for a workaround.
613
-
614 605
 ## load
615 606
 
616 607
     Usage: docker load
... ...
@@ -864,11 +855,9 @@ of all containers.
864 864
 The `docker run` command can be used in combination with `docker commit` to
865 865
 [*change the command that a container runs*](#commit-an-existing-container).
866 866
 
867
-See [*Redirect Ports*](/use/port_redirection/#port-redirection)
868
-for more detailed information about the `--expose`, `-p`, `-P` and `--link`
869
-parameters, and [*Link Containers*](
870
-/use/working_with_links_names/#working-with-links-names) for specific
871
-examples using `--link`.
867
+See the [Docker User Guide](/userguide/dockerlinks/) for more detailed
868
+information about the `--expose`, `-p`, `-P` and `--link` parameters,
869
+and linking containers.
872 870
 
873 871
 ### Known Issues (run –volumes-from)
874 872
 
... ...
@@ -934,16 +923,16 @@ manipulate the host's docker daemon.
934 934
 
935 935
     $ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash
936 936
 
937
-This binds port `8080` of the container to port `80` on `127.0.0.1` of the host
938
-machine. [*Redirect Ports*](/use/port_redirection/#port-redirection)
937
+This binds port `8080` of the container to port `80` on `127.0.0.1` of
938
+the host machine. The [Docker User Guide](/userguide/dockerlinks/)
939 939
 explains in detail how to manipulate ports in Docker.
940 940
 
941 941
     $ sudo docker run --expose 80 ubuntu bash
942 942
 
943
-This exposes port `80` of the container for use within a link without publishing
944
-the port to the host system's interfaces. [*Redirect Ports*](
945
-/use/port_redirection/#port-redirection) explains in detail how to
946
-manipulate ports in Docker.
943
+This exposes port `80` of the container for use within a link without
944
+publishing the port to the host system's interfaces. The [Docker User
945
+Guide](/userguide/dockerlinks) explains in detail how to manipulate
946
+ports in Docker.
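One way to check such a mapping afterwards is `docker port`. The names below are illustrative, and the commands are wrapped in a function since they need a running daemon:

```shell
# Sketch: publish a container port, then ask Docker for the host-side
# address bound to it.
inspect_mapping() {
    docker run -d -p 127.0.0.1:80:8080 --name web ubuntu
    docker port web 8080   # shows the host address bound to port 8080
}
```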
947 947
 
948 948
     $ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
949 949
 
... ...
@@ -1097,7 +1086,7 @@ Search [Docker.io](https://index.docker.io) for images
1097 1097
       -t, --trusted=false    Only show trusted builds
1098 1098
 
1099 1099
 See [*Find Public Images on Docker.io*](
1100
-/use/workingwithrepository/#find-public-images-on-dockerio) for
1100
+/userguide/dockerrepos/#find-public-images-on-dockerio) for
1101 1101
 more details on finding shared images from the commandline.
1102 1102
 
1103 1103
 ## start
... ...
@@ -1130,7 +1119,7 @@ grace period, SIGKILL
1130 1130
 
1131 1131
 You can group your images together using names and tags, and then upload
1132 1132
 them to [*Share Images via Repositories*](
1133
-/use/workingwithrepository/#working-with-the-repository).
1133
+/userguide/dockerrepos/#working-with-the-repository).
1134 1134
 
1135 1135
 ## top
1136 1136
 
... ...
@@ -11,21 +11,17 @@ The [*Image*](/terms/image/#image-def) which starts the process may
11 11
 define defaults related to the binary to run, the networking to expose,
12 12
 and more, but `docker run` gives final control to
13 13
 the operator who starts the container from the image. That's the main
14
-reason [*run*](/commandline/cli/#cli-run) has more options than any
14
+reason [*run*](/reference/commandline/cli/#cli-run) has more options than any
15 15
 other `docker` command.
16 16
 
17
-Every one of the [*Examples*](/examples/#example-list) shows
18
-running containers, and so here we try to give more in-depth guidance.
19
-
20 17
 ## General Form
21 18
 
22
-As you've seen in the [*Examples*](/examples/#example-list), the
23
-basic run command takes this form:
19
+The basic `docker run` command takes this form:
24 20
 
25 21
     $ docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
26 22
 
27 23
 To learn how to interpret the types of `[OPTIONS]`,
28
-see [*Option types*](/commandline/cli/#cli-options).
24
+see [*Option types*](/reference/commandline/cli/#cli-options).
29 25
 
30 26
 The list of `[OPTIONS]` breaks down into two groups:
31 27
 
... ...
@@ -75,9 +71,9 @@ default foreground mode:
75 75
 
76 76
 In detached mode (`-d=true` or just `-d`), all I/O should be done
77 77
 through network connections or shared volumes because the container is
78
-no longer listening to the commandline where you executed `docker run`.
78
+no longer listening to the command line where you executed `docker run`.
79 79
 You can reattach to a detached container with `docker`
80
-[*attach*](commandline/cli/#attach). If you choose to run a
80
+[*attach*](/reference/commandline/cli/#attach). If you choose to run a
81 81
 container in the detached mode, then you cannot use the `--rm` option.
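A minimal sketch of that workflow, with an illustrative container name and command, wrapped in a function since it requires a running daemon:

```shell
# Sketch: start a detached container, then reattach to its output.
reattach_example() {
    docker run -d --name worker ubuntu /bin/sh -c "while true; do date; sleep 1; done"
    docker attach worker
}
```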
82 82
 
83 83
 ### Foreground
... ...
@@ -85,7 +81,7 @@ container in the detached mode, then you cannot use the `--rm` option.
85 85
 In foreground mode (the default when `-d` is not specified), `docker run`
86 86
 can start the process in the container and attach the console to the process's
87 87
 standard input, output, and standard error. It can even pretend to be a TTY
88
-(this is what most commandline executables expect) and pass along signals. All
88
+(this is what most command line executables expect) and pass along signals. All
89 89
 of that is configurable:
90 90
 
91 91
     -a=[]           : Attach to ``stdin``, ``stdout`` and/or ``stderr``
... ...
@@ -121,11 +117,11 @@ assign a name to the container with `--name` then
121 121
 the daemon will also generate a random string name too. The name can
122 122
 become a handy way to add meaning to a container since you can use this
123 123
 name when defining
124
-[*links*](/use/working_with_links_names/#working-with-links-names)
124
+[*links*](/userguide/dockerlinks/#working-with-links-names)
125 125
 (or any other place you need to identify a container). This works for
126 126
 both background and foreground Docker containers.
127 127
 
128
-### PID Equivalent
128
+### PID Equivalent
129 129
 
130 130
 And finally, to help with automation, you can have Docker write the
131 131
 container ID out to a file of your choosing. This is similar to how some
... ...
@@ -256,7 +252,7 @@ familiar with using LXC directly.
256 256
 
257 257
 ## Overriding Dockerfile Image Defaults
258 258
 
259
-When a developer builds an image from a [*Dockerfile*](builder/#dockerbuilder)
259
+When a developer builds an image from a [*Dockerfile*](/reference/builder/#dockerbuilder)
260 260
 or when she commits it, the developer can set a number of default parameters
261 261
 that take effect when the image starts up as a container.
262 262
 
... ...
@@ -425,7 +421,7 @@ mechanism to communicate with a linked container by its alias:
425 425
     --volumes-from="": Mount all volumes from the given container(s)
426 426
 
427 427
 The volumes commands are complex enough to have their own documentation in
428
-section [*Share Directories via Volumes*](/use/working_with_volumes/#volume-def).
428
+section [*Share Directories via Volumes*](/userguide/dockervolumes/#volume-def).
429 429
A developer can define one or more `VOLUME`s associated with an image, but only the
430 430
 operator can give access from one container to another (or from a container to a
431 431
 volume mounted on the host).
... ...
@@ -8,18 +8,19 @@ page_keywords: containers, lxc, concepts, explanation, image, container
8 8
 
9 9
 ![](/terms/images/docker-filesystems-busyboxrw.png)
10 10
 
11
-Once you start a process in Docker from an [*Image*](image.md), Docker fetches
12
-the image and its [*Parent Image*](image.md), and repeats the process until it
13
-reaches the [*Base Image*](image.md/#base-image-def). Then the
14
-[*Union File System*](layer.md) adds a read-write layer on top. That read-write
15
-layer, plus the information about its [*Parent Image*](image.md) and some
16
-additional information like its unique id, networking configuration, and
17
-resource limits is called a **container**.
11
+Once you start a process in Docker from an [*Image*](/terms/image), Docker
12
+fetches the image and its [*Parent Image*](/terms/image), and repeats the
13
+process until it reaches the [*Base Image*](/terms/image/#base-image-def). Then
14
+the [*Union File System*](/terms/layer) adds a read-write layer on top. That
15
+read-write layer, plus the information about its [*Parent
16
+Image*](/terms/image)
17
+and some additional information like its unique id, networking
18
+configuration, and resource limits is called a **container**.
18 19
 
19 20
 ## Container State
20 21
 
21
-Containers can change, and so they have state. A container may be **running** or
22
-**exited**.
22
+Containers can change, and so they have state. A container may be
23
+**running** or **exited**.
23 24
 
24 25
 When a container is running, the idea of a "container" also includes a
25 26
 tree of processes running on the CPU, isolated from the other processes
... ...
@@ -31,13 +32,13 @@ processes restart from scratch (their memory state is **not** preserved
31 31
 in a container), but the file system is just as it was when the
32 32
 container was stopped.
33 33
 
34
-You can promote a container to an [*Image*](image.md) with `docker commit`.
34
+You can promote a container to an [*Image*](/terms/image) with `docker commit`.
35 35
 Once a container is an image, you can use it as a parent for new containers.
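A sketch of that promotion step (the repository name is a placeholder, and the commands need a running daemon, so they are wrapped in a function):

```shell
# Sketch: commit a container to an image, then run a child container
# from the new image.
promote_container() {
    CONTAINER_ID="$1"
    docker commit "$CONTAINER_ID" myrepo/snapshot
    docker run -i -t myrepo/snapshot /bin/bash
}
```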
36 36
 
37 37
 ## Container IDs
38 38
 
39 39
 All containers are identified by a 64 hexadecimal digit string
40 40
(internally a 256-bit value). To simplify their use, a short ID of the
41
-first 12 characters can be used on the commandline. There is a small
41
+first 12 characters can be used on the command line. There is a small
42 42
 possibility of short id collisions, so the docker server will always
43 43
 return the long ID.
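The relationship between the two forms can be shown locally (the ID below is made up for illustration):

```shell
# Illustration only: a short ID is simply the first 12 characters of
# the full 64-hex-digit container ID.
FULL_ID="842a50a13032f40a8b40fdd9e7268710cbaa71ba2c36a20367984d2a44b6b572"
SHORT_ID="${FULL_ID:0:12}"
echo "$SHORT_ID"    # → 842a50a13032
```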
... ...
@@ -8,10 +8,10 @@ page_keywords: containers, lxc, concepts, explanation, image, container
8 8
 
9 9
 ![](/terms/images/docker-filesystems-debian.png)
10 10
 
11
-In Docker terminology, a read-only [*Layer*](../layer/#layer-def) is
11
+In Docker terminology, a read-only [*Layer*](/terms/layer/#layer-def) is
12 12
 called an **image**. An image never changes.
13 13
 
14
-Since Docker uses a [*Union File System*](../layer/#ufs-def), the
14
+Since Docker uses a [*Union File System*](/terms/layer/#ufs-def), the
15 15
 processes think the whole file system is mounted read-write. But all the
16 16
 changes go to the top-most writeable layer, and underneath, the original
17 17
 file in the read-only image is unchanged. Since images don't change,
... ...
@@ -7,7 +7,7 @@ page_keywords: containers, lxc, concepts, explanation, image, container
7 7
 ## Introduction
8 8
 
9 9
 In a traditional Linux boot, the kernel first mounts the root [*File
10
-System*](../filesystem/#filesystem-def) as read-only, checks its
10
+System*](/terms/filesystem/#filesystem-def) as read-only, checks its
11 11
 integrity, and then switches the whole rootfs volume to read-write mode.
12 12
 
13 13
 ## Layer
... ...
@@ -6,9 +6,9 @@ page_keywords: containers, concepts, explanation, image, repository, container
6 6
 
7 7
 ## Introduction
8 8
 
9
-A Registry is a hosted service containing [*repositories*](
10
-../repository/#repository-def) of [*images*](../image/#image-def) which
11
-responds to the Registry API.
9
+A Registry is a hosted service containing
10
+[*repositories*](/terms/repository/#repository-def) of
11
+[*images*](/terms/image/#image-def) which responds to the Registry API.
12 12
 
13 13
 The default registry can be accessed using a browser at
14 14
 [Docker.io](http://index.docker.io) or using the
... ...
@@ -16,5 +16,5 @@ The default registry can be accessed using a browser at
16 16
 
17 17
 ## Further Reading
18 18
 
19
-For more information see [*Working with Repositories*](
20
-../use/workingwithrepository/#working-with-the-repository)
19
+For more information see [*Working with
20
+Repositories*](/userguide/dockerrepos/#working-with-the-repository)
... ...
@@ -7,7 +7,7 @@ page_keywords: containers, concepts, explanation, image, repository, container
7 7
 ## Introduction
8 8
 
9 9
 A repository is a set of images either on your local Docker server, or
10
-shared, by pushing it to a [*Registry*](../registry/#registry-def)
10
+shared by pushing them to a [*Registry*](/terms/registry/#registry-def)
11 11
 server.
12 12
 
13 13
 Images can be associated with a repository (or multiple) by giving them
... ...
@@ -31,5 +31,5 @@ If you create a new repository which you want to share, you will need to
31 31
 set at least the `user_name`, as the `default` blank `user_name` prefix is
32 32
 reserved for official Docker images.
33 33
 
34
-For more information see [*Working with Repositories*](
35
-../use/workingwithrepository/#working-with-the-repository)
34
+For more information see [*Working with
35
+Repositories*](/userguide/dockerrepos/#working-with-the-repository)
36 36
deleted file mode 100644
... ...
@@ -1,13 +0,0 @@
1
-# Use
2
-
3
-## Contents:
4
-
5
- - [First steps with Docker](basics/)
6
- - [Share Images via Repositories](workingwithrepository/)
7
- - [Redirect Ports](port_redirection/)
8
- - [Configure Networking](networking/)
9
- - [Automatically Start Containers](host_integration/)
10
- - [Share Directories via Volumes](working_with_volumes/)
11
- - [Link Containers](working_with_links_names/)
12
- - [Link via an Ambassador Container](ambassador_pattern_linking/)
13
- - [Using Puppet](puppet/)
14 1
\ No newline at end of file
15 2
deleted file mode 100644
... ...
@@ -1,150 +0,0 @@
1
-page_title: Link via an Ambassador Container
2
-page_description: Using the Ambassador pattern to abstract (network) services
3
-page_keywords: Examples, Usage, links, docker, documentation, examples, names, name, container naming
4
-
5
-# Link via an Ambassador Container
6
-
7
-## Introduction
8
-
9
-Rather than hardcoding network links between a service consumer and
10
-provider, Docker encourages service portability, for example instead of:
11
-
12
-    (consumer) --> (redis)
13
-
14
-Requiring you to restart the `consumer` to attach it to a different
15
-`redis` service, you can add ambassadors:
16
-
17
-    (consumer) --> (redis-ambassador) --> (redis)
18
-
19
-Or
20
-
21
-    (consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)
22
-
23
-When you need to rewire your consumer to talk to a different Redis
24
-server, you can just restart the `redis-ambassador` container that the
25
-consumer is connected to.
26
-
27
-This pattern also allows you to transparently move the Redis server to a
28
-different docker host from the consumer.
29
-
30
-Using the `svendowideit/ambassador` container, the link wiring is
31
-controlled entirely from the `docker run` parameters.
32
-
33
-## Two host Example
34
-
35
-Start actual Redis server on one Docker host
36
-
37
-    big-server $ docker run -d --name redis crosbymichael/redis
38
-
39
-Then add an ambassador linked to the Redis server, mapping a port to the
40
-outside world
41
-
42
-    big-server $ docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador
43
-
44
-On the other host, you can set up another ambassador setting environment
45
-variables for each remote port we want to proxy to the `big-server`
46
-
47
-    client-server $ docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador
48
-
49
-Then on the `client-server` host, you can use a Redis client container
50
-to talk to the remote Redis server, just by linking to the local Redis
51
-ambassador.
52
-
53
-    client-server $ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
54
-    redis 172.17.0.160:6379> ping
55
-    PONG
56
-
57
-## How it works
58
-
59
-The following example shows what the `svendowideit/ambassador` container
60
-does automatically (with a tiny amount of `sed`)
61
-
62
-On the Docker host (192.168.1.52) that Redis will run on:
63
-
64
-    # start actual redis server
65
-    $ docker run -d --name redis crosbymichael/redis
66
-
67
-    # get a redis-cli container for connection testing
68
-    $ docker pull relateiq/redis-cli
69
-
70
-    # test the redis server by talking to it directly
71
-    $ docker run -t -i --rm --link redis:redis relateiq/redis-cli
72
-    redis 172.17.0.136:6379> ping
73
-    PONG
74
-    ^D
75
-
76
-    # add redis ambassador
77
-    $ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh
78
-
79
-In the `redis_ambassador` container, you can see the linked Redis
80
-container's `env`:
81
-
82
-    $ env
83
-    REDIS_PORT=tcp://172.17.0.136:6379
84
-    REDIS_PORT_6379_TCP_ADDR=172.17.0.136
85
-    REDIS_NAME=/redis_ambassador/redis
86
-    HOSTNAME=19d7adf4705e
87
-    REDIS_PORT_6379_TCP_PORT=6379
88
-    HOME=/
89
-    REDIS_PORT_6379_TCP_PROTO=tcp
90
-    container=lxc
91
-    REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
92
-    TERM=xterm
93
-    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
94
-    PWD=/
95
-
96
-This environment is used by the ambassador `socat` script to expose Redis
97
-to the world (via the `-p 6379:6379` port mapping):
98
-
99
-    $ docker rm redis_ambassador
100
-    $ sudo ./contrib/mkimage-unittest.sh
101
-    $ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh
102
-
103
-    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
104
-
105
-Now you can ping the Redis server via the ambassador.
106
-
107
-Now go to a different server:
108
-
109
-    $ sudo ./contrib/mkimage-unittest.sh
110
-    $ docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh
111
-
112
-    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
113
-
114
-And get the `redis-cli` image so we can talk over the ambassador bridge.
115
-
116
-    $ docker pull relateiq/redis-cli
117
-    $ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
118
-    redis 172.17.0.160:6379> ping
119
-    PONG
120
-
121
-## The svendowideit/ambassador Dockerfile
122
-
123
-The `svendowideit/ambassador` image is a small `busybox` image with
124
-`socat` built in. When you start the container, it uses a small `sed`
125
-script to parse out the (possibly multiple) link environment variables
126
-to set up the port forwarding. On the remote host, you need to set the
127
-variable using the `-e` command line option.
128
-
129
-    --expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379
130
-
131
-This will forward the local `1234` port to the remote IP and port, in this
132
-case `192.168.1.52:6379`.
133
-
134
-    #
135
-    #
136
-    # first you need to build the docker-ut image
137
-    # using ./contrib/mkimage-unittest.sh
138
-    # then
139
-    #   docker build -t SvenDowideit/ambassador .
140
-    #   docker tag SvenDowideit/ambassador ambassador
141
-    # then to run it (on the host that has the real backend on it)
142
-    #   docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador
143
-    # on the remote host, you can set up another ambassador
144
-    #   docker run -t -i --name redis_ambassador --expose 6379 sh
145
-
146
-    FROM    docker-ut
147
-    MAINTAINER      SvenDowideit@home.org.au
148
-
149
-
150
-    CMD     env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'  | sh && top
151 1
deleted file mode 100644
... ...
@@ -1,178 +0,0 @@
1
-page_title: First steps with Docker
2
-page_description: Common usage and commands
3
-page_keywords: Examples, Usage, basic commands, docker, documentation, examples
4
-
5
-# First steps with Docker
6
-
7
-## Check your Docker install
8
-
9
-This guide assumes you have a working installation of Docker. To check
10
-your Docker install, run the following command:
11
-
12
-    # Check that you have a working install
13
-    $ docker info
14
-
15
-If you get `docker: command not found` or something like
16
-`/var/lib/docker/repositories: permission denied` you may have an
17
-incomplete Docker installation or insufficient privileges to access
18
-Docker on your machine.
19
-
20
-Please refer to [*Installation*](/installation/#installation-list)
21
-for installation instructions.
22
-
23
-## Download a pre-built image
24
-
25
-    # Download an ubuntu image
26
-    $ sudo docker pull ubuntu
27
-
28
-This will find the `ubuntu` image by name on
29
-[*Docker.io*](../workingwithrepository/#find-public-images-on-dockerio)
30
-and download it from [Docker.io](https://index.docker.io) to a local
31
-image cache.
32
-
33
-> **Note**:
34
-> When the image has successfully downloaded, you will see a 12 character
35
-> hash `539c0211cd76: Download complete` which is the
36
-> short form of the image ID. These short image IDs are the first 12
37
-> characters of the full image ID - which can be found using
38
-> `docker inspect` or `docker images --no-trunc=true`
39
-
40
-**If you're using OS X** then you shouldn't use `sudo`.
41
-
42
-## Running an interactive shell
43
-
44
-    # Run an interactive shell in the ubuntu image,
45
-    # allocate a tty, attach stdin and stdout
46
-    # To detach the tty without exiting the shell,
47
-    # use the escape sequence Ctrl-p + Ctrl-q
48
-    # note: This will continue to exist in a stopped state once exited (see "docker ps -a")
49
-    $ sudo docker run -i -t ubuntu /bin/bash
50
-
51
-## Bind Docker to another host/port or a Unix socket
52
-
53
-> **Warning**:
54
-> Changing the default `docker` daemon binding to a
55
-> TCP port or Unix *docker* user group will increase your security risks
56
-> by allowing non-root users to gain *root* access on the host. Make sure
57
-> you control access to `docker`. If you are binding
58
-> to a TCP port, anyone with access to that port has full Docker access;
59
-> so it is not advisable on an open network.
60
-
61
-With `-H` it is possible to make the Docker daemon listen on a
62
-specific IP and port. By default, it will listen on
63
-`unix:///var/run/docker.sock` to allow only local connections by the
64
-*root* user. You *could* set it to `0.0.0.0:4243` or a specific host IP
65
-to give access to everybody, but that is **not recommended** because
66
-then it is trivial for someone to gain root access to the host where the
67
-daemon is running.
68
-
69
-Similarly, the Docker client can use `-H` to connect to a custom port.
70
-
71
-`-H` accepts host and port assignment in the following format:
72
-
73
-    tcp://[host][:port]` or `unix://path
74
-
75
-For example:
76
-
77
--   `tcp://host:4243` -> TCP connection on
78
-    host:4243
79
--   `unix://path/to/socket` -> Unix socket located
80
-    at `path/to/socket`
81
-
82
-`-H`, when empty, will default to the same value as
83
-when no `-H` was passed in.
84
-
85
-`-H` also accepts short form for TCP bindings:
86
-
87
-    host[:port] or :port
88
-
89
-Run Docker in daemon mode:
90
-
91
-    $ sudo <path to>/docker -H 0.0.0.0:5555 -d &
92
-
93
-Download an `ubuntu` image:
94
-
95
-    $ sudo docker -H :5555 pull ubuntu
96
-
97
-You can use multiple `-H`, for example, if you want to listen on both
98
-TCP and a Unix socket
99
-
100
-    # Run docker in daemon mode
101
-    $ sudo <path to>/docker -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -d &
102
-    # Download an ubuntu image, use default Unix socket
103
-    $ sudo docker pull ubuntu
104
-    # OR use the TCP port
105
-    $ sudo docker -H tcp://127.0.0.1:4243 pull ubuntu
106
-
107
-## Starting a long-running worker process
108
-
109
-    # Start a very useful long-running process
110
-    $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
111
-
112
-    # Collect the output of the job so far
113
-    $ sudo docker logs $JOB
114
-
115
-    # Kill the job
116
-    $ sudo docker kill $JOB
117
-
118
-## Listing containers
119
-
120
-    $ sudo docker ps # Lists only running containers
121
-    $ sudo docker ps -a # Lists all containers
122
-
123
-## Controlling containers
124
-
125
-    # Start a new container
126
-    $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
127
-
128
-    # Stop the container
129
-    $ docker stop $JOB
130
-
131
-    # Start the container
132
-    $ docker start $JOB
133
-
134
-    # Restart the container
135
-    $ docker restart $JOB
136
-
137
-    # SIGKILL a container
138
-    $ docker kill $JOB
139
-
140
-    # Remove a container
141
-    $ docker stop $JOB # Container must be stopped to remove it
142
-    $ docker rm $JOB
143
-
144
-## Bind a service on a TCP port
145
-
146
-    # Bind port 4444 of this container, and tell netcat to listen on it
147
-    $ JOB=$(sudo docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444)
148
-
149
-    # Which public port is NATed to my container?
150
-    $ PORT=$(sudo docker port $JOB 4444 | awk -F: '{ print $2 }')
151
-
152
-    # Connect to the public port
153
-    $ echo hello world | nc 127.0.0.1 $PORT
154
-
155
-    # Verify that the network connection worked
156
-    $ echo "Daemon received: $(sudo docker logs $JOB)"
157
-
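
The `awk` invocation above simply splits the `host:port` text printed by `docker port` on the colon. A quick sketch with a hard-coded sample string (the address shown is hypothetical) demonstrates the same parsing without needing a running daemon:

```shell
# Hypothetical `docker port` output; real output depends on your daemon.
output="0.0.0.0:49153"

# Same parsing as above: split on ':' and keep the port field.
PORT=$(echo "$output" | awk -F: '{ print $2 }')
echo "$PORT"   # prints 49153
```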
158
-## Committing (saving) a container state
159
-
160
-Save your container's state to an image, so the state can be
161
-re-used.
162
-
163
-When you commit your container only the differences between the image
164
-the container was created from and the current state of the container
165
-will be stored (as a diff). See which images you already have using the
166
-`docker images` command.
167
-
168
-    # Commit your container to a new named image
169
-    $ sudo docker commit <container_id> <some_name>
170
-
171
-    # List your images
172
-    $ sudo docker images
173
-
174
-You now have an image state from which you can create new instances.
175
-
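
Once committed, the new image can be used like any other. For example (`<some_name>` is the placeholder used above):

    # Start a new container from your committed image
    $ sudo docker run -i -t <some_name> /bin/bash
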
176
-Read more about [*Share Images via Repositories*](
177
-../workingwithrepository/#working-with-the-repository) or
178
-continue to the complete [*Command Line*](/reference/commandline/cli/#cli)
179 1
deleted file mode 100644
... ...
@@ -1,74 +0,0 @@
1
-page_title: Chef Usage
2
-page_description: Installation and using Docker via Chef
3
-page_keywords: chef, installation, usage, docker, documentation
4
-
5
-# Using Chef
6
-
7
-> **Note**:
8
-> Please note this is a community contributed installation path. The only
9
-> `official` installation is using the
10
-> [*Ubuntu*](/installation/ubuntulinux/#ubuntu-linux) installation
11
-> path. This version may sometimes be out of date.
12
-
13
-## Requirements
14
-
15
-To use this guide you'll need a working installation of
16
-[Chef](http://www.getchef.com/). This cookbook supports a variety of
17
-operating systems.
18
-
19
-## Installation
20
-
21
-The cookbook is available on the [Chef Community
22
-Site](http://community.opscode.com/cookbooks/docker) and can be
23
-installed using your favorite cookbook dependency manager.
24
-
25
-The source can be found on
26
-[GitHub](https://github.com/bflad/chef-docker).
27
-
28
-## Usage
29
-
30
-The cookbook provides recipes for installing Docker, configuring init
31
-for Docker, and resources for managing images and containers. It
32
-supports almost all Docker functionality.
33
-
34
-### Installation
35
-
36
-    include_recipe 'docker'
37
-
38
-### Images
39
-
40
-The next step is to pull a Docker image. For this, we have a resource:
41
-
42
-    docker_image 'samalba/docker-registry'
43
-
44
-This is equivalent to running:
45
-
46
-    $ docker pull samalba/docker-registry
47
-
48
-There are attributes available to control how long the cookbook will
49
-allow for downloading (5 minute default).
50
-
51
-To remove images you no longer need:
52
-
53
-    docker_image 'samalba/docker-registry' do
54
-      action :remove
55
-    end
56
-
57
-### Containers
58
-
59
-Now that you have an image, you can run commands within a container
60
-managed by Docker.
61
-
62
-    docker_container 'samalba/docker-registry' do
63
-      detach true
64
-      port '5000:5000'
65
-      env 'SETTINGS_FLAVOR=local'
66
-      volume '/mnt/docker:/docker-storage'
67
-    end
68
-
69
-This is equivalent to running the following command, but under upstart:
70
-
71
-    $ docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry
72
-
73
-The resources will accept a single string or an array of values for any
74
-Docker flags that allow multiple values.
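
For instance, a flag that Docker accepts multiple times, such as `--env`, can be given an array of values. A hypothetical sketch (attribute names follow the cookbook examples above):

    docker_container 'samalba/docker-registry' do
      detach true
      port '5000:5000'
      env ['SETTINGS_FLAVOR=local', 'SEARCH_BACKEND=sqlalchemy']
    end
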
75 1
deleted file mode 100644
... ...
@@ -1,60 +0,0 @@
1
-page_title: Automatically Start Containers
2
-page_description: How to generate scripts for upstart, systemd, etc.
3
-page_keywords: systemd, upstart, supervisor, docker, documentation, host integration
4
-
5
-# Automatically Start Containers
6
-
7
-You can use your Docker containers with process managers like
8
-`upstart`, `systemd` and `supervisor`.
9
-
10
-## Introduction
11
-
12
-If you want a process manager to manage your containers you will need to
13
-run the Docker daemon with the `-r=false` option so that Docker will not
14
-automatically restart your containers when the host is restarted.
15
-
16
-When you have finished setting up your image and are happy with your
17
-running container, you can then attach a process manager to manage it.
18
-When you run `docker start -a`, Docker will automatically attach to the
19
-running container, or start it if needed and forward all signals so that
20
-the process manager can detect when a container stops and correctly
21
-restart it.
22
-
23
-Here are a few sample scripts for systemd and upstart to integrate with
24
-docker.
25
-
26
-## Sample Upstart Script
27
-
28
-In this example we've already created a container to run Redis with
29
-`--name redis_server`. To create an upstart script for our container, we
30
-create a file named `/etc/init/redis.conf` and place the following into
31
-it:
32
-
33
-    description "Redis container"
34
-    author "Me"
35
-    start on filesystem and started docker
36
-    stop on runlevel [!2345]
37
-    respawn
38
-    script
39
-      /usr/bin/docker start -a redis_server
40
-    end script
41
-
42
-Next, we have to configure Docker so that it runs with the option
43
-`-r=false`. Run the following command:
44
-
45
-    $ sudo sh -c "echo 'DOCKER_OPTS=\"-r=false\"' > /etc/default/docker"
46
-
47
-## Sample systemd Script
48
-
49
-    [Unit]
50
-    Description=Redis container
51
-    Author=Me
52
-    After=docker.service
53
-
54
-    [Service]
55
-    Restart=always
56
-    ExecStart=/usr/bin/docker start -a redis_server
57
-    ExecStop=/usr/bin/docker stop -t 2 redis_server
58
-
59
-    [Install]
60
-    WantedBy=multi-user.target
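
Assuming the unit above is saved as `/etc/systemd/system/redis.service` (the path is an assumption; adjust it for your distribution), you would enable and start it with:

    $ sudo systemctl enable redis.service
    $ sudo systemctl start redis.service
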
61 1
deleted file mode 100644
... ...
@@ -1,699 +0,0 @@
1
-page_title: Network Configuration
2
-page_description: Docker networking
3
-page_keywords: network, networking, bridge, docker, documentation
4
-
5
-# Network Configuration
6
-
7
-## TL;DR
8
-
9
-When Docker starts, it creates a virtual interface named `docker0` on
10
-the host machine.  It randomly chooses an address and subnet from the
11
-private range defined by [RFC 1918](http://tools.ietf.org/html/rfc1918)
12
-that are not in use on the host machine, and assigns it to `docker0`.
13
-Docker made the choice `172.17.42.1/16` when I started it a few minutes
14
-ago, for example — a 16-bit netmask providing 65,534 addresses for the
15
-host machine and its containers.
16
-
17
-But `docker0` is no ordinary interface.  It is a virtual *Ethernet
18
-bridge* that automatically forwards packets between any other network
19
-interfaces that are attached to it.  This lets containers communicate
20
-both with the host machine and with each other.  Every time Docker
21
-creates a container, it creates a pair of “peer” interfaces that are
22
-like opposite ends of a pipe — a packet sent on one will be received on
23
-the other.  It gives one of the peers to the container to become its
24
-`eth0` interface and keeps the other peer, with a unique name like
25
-`vethAQI2QT`, out in the namespace of the host machine.  By binding
26
-every `veth*` interface to the `docker0` bridge, Docker creates a
27
-virtual subnet shared between the host machine and every Docker
28
-container.
29
-
30
-The remaining sections of this document explain all of the ways that you
31
-can use Docker options and — in advanced cases — raw Linux networking
32
-commands to tweak, supplement, or entirely replace Docker’s default
33
-networking configuration.
34
-
35
-## Quick Guide to the Options
36
-
37
-Here is a quick list of the networking-related Docker command-line
38
-options, in case it helps you find the section below that you are
39
-looking for.
40
-
41
-Some networking command-line options can only be supplied to the Docker
42
-server when it starts up, and cannot be changed once it is running:
43
-
44
- *  `-b BRIDGE` or `--bridge=BRIDGE` — see
45
-    [Building your own bridge](#bridge-building)
46
-
47
- *  `--bip=CIDR` — see
48
-    [Customizing docker0](#docker0)
49
-
50
- *  `-H SOCKET...` or `--host=SOCKET...` —
51
-    This might sound like it would affect container networking,
52
-    but it actually faces in the other direction:
53
-    it tells the Docker server over what channels
54
-    it should be willing to receive commands
55
-    like “run container” and “stop container.”
56
-    To learn about the option,
57
-    read [Bind Docker to another host/port or a Unix socket](../basics/#bind-docker-to-another-hostport-or-a-unix-socket)
58
-    over in the Basics document.
59
-
60
- *  `--icc=true|false` — see
61
-    [Communication between containers](#between-containers)
62
-
63
- *  `--ip=IP_ADDRESS` — see
64
-    [Binding container ports](#binding-ports)
65
-
66
- *  `--ip-forward=true|false` — see
67
-    [Communication between containers](#between-containers)
68
-
69
- *  `--iptables=true|false` — see
70
-    [Communication between containers](#between-containers)
71
-
72
- *  `--mtu=BYTES` — see
73
-    [Customizing docker0](#docker0)
74
-
75
-There are two networking options that can be supplied either at startup
76
-or when `docker run` is invoked.  When provided at startup, they set the
77
-default value that `docker run` will later use if the options are not
78
-specified:
79
-
80
- *  `--dns=IP_ADDRESS...` — see
81
-    [Configuring DNS](#dns)
82
-
83
- *  `--dns-search=DOMAIN...` — see
84
-    [Configuring DNS](#dns)
85
-
86
-Finally, several networking options can only be provided when calling
87
-`docker run` because they specify something specific to one container:
88
-
89
- *  `-h HOSTNAME` or `--hostname=HOSTNAME` — see
90
-    [Configuring DNS](#dns) and
91
-    [How Docker networks a container](#container-networking)
92
-
93
- *  `--link=CONTAINER_NAME:ALIAS` — see
94
-    [Configuring DNS](#dns) and
95
-    [Communication between containers](#between-containers)
96
-
97
- *  `--net=bridge|none|container:NAME_or_ID|host` — see
98
-    [How Docker networks a container](#container-networking)
99
-
100
- *  `-p SPEC` or `--publish=SPEC` — see
101
-    [Binding container ports](#binding-ports)
102
-
103
- *  `-P` or `--publish-all=true|false` — see
104
-    [Binding container ports](#binding-ports)
105
-
106
-The following sections tackle all of the above topics in an order that
107
-moves roughly from simplest to most complex.
108
-
109
-## <a name="dns"></a>Configuring DNS
110
-
111
-How can Docker supply each container with a hostname and DNS
112
-configuration, without having to build a custom image with the hostname
113
-written inside?  Its trick is to overlay three crucial `/etc` files
114
-inside the container with virtual files where it can write fresh
115
-information.  You can see this by running `mount` inside a container:
116
-
117
-    $$ mount
118
-    ...
119
-    /dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
120
-    /dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
121
-    tmpfs on /etc/resolv.conf type tmpfs ...
122
-    ...
123
-
124
-This arrangement allows Docker to do clever things like keep
125
-`resolv.conf` up to date across all containers when the host machine
126
-receives new configuration over DHCP later.  The exact details of how
127
-Docker maintains these files inside the container can change from one
128
-Docker version to the next, so you should leave the files themselves
129
-alone and use the following Docker options instead.
130
-
131
-Four different options affect container domain name services.
132
-
133
- *  `-h HOSTNAME` or `--hostname=HOSTNAME` — sets the hostname by which
134
-    the container knows itself.  This is written into `/etc/hostname`,
135
-    into `/etc/hosts` as the name of the container’s host-facing IP
136
-    address, and is the name that `/bin/bash` inside the container will
137
-    display inside its prompt.  But the hostname is not easy to see from
138
-    outside the container.  It will not appear in `docker ps` nor in the
139
-    `/etc/hosts` file of any other container.
140
-
141
- *  `--link=CONTAINER_NAME:ALIAS` — using this option as you `run` a
142
-    container gives the new container’s `/etc/hosts` an extra entry
143
-    named `ALIAS` that points to the IP address of the container named
144
-    `CONTAINER_NAME`.  This lets processes inside the new container
145
-    connect to the hostname `ALIAS` without having to know its IP.  The
146
-    `--link=` option is discussed in more detail below, in the section
147
-    [Communication between containers](#between-containers).
148
-
149
- *  `--dns=IP_ADDRESS...` — sets the IP addresses added as `nameserver`
150
-    lines to the container's `/etc/resolv.conf` file.  Processes in the
151
-    container, when confronted with a hostname not in `/etc/hosts`, will
152
-    connect to these IP addresses on port 53 looking for name resolution
153
-    services.
154
-
155
- *  `--dns-search=DOMAIN...` — sets the domain names that are searched
156
-    when a bare unqualified hostname is used inside of the container, by
157
-    writing `search` lines into the container’s `/etc/resolv.conf`.
158
-    When a container process attempts to access `host` and the search
159
-    domain `example.com` is set, for instance, the DNS logic will not
160
-    only look up `host` but also `host.example.com`.
161
-
162
-Note that Docker, in the absence of either of the last two options
163
-above, will make `/etc/resolv.conf` inside of each container look like
164
-the `/etc/resolv.conf` of the host machine where the `docker` daemon is
165
-running.  The options then modify this default configuration.
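
For example, a container started with explicit DNS options (the addresses below are illustrative) will have a matching `/etc/resolv.conf` generated for it:

    $ sudo docker run -i -t --dns=8.8.8.8 --dns-search=example.com ubuntu /bin/bash

    $$ cat /etc/resolv.conf
    nameserver 8.8.8.8
    search example.com
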
166
-
167
-## <a name="between-containers"></a>Communication between containers
168
-
169
-Whether two containers can communicate is governed, at the operating
170
-system level, by three factors.
171
-
172
-1.  Does the network topology even connect the containers’ network
173
-    interfaces?  By default Docker will attach all containers to a
174
-    single `docker0` bridge, providing a path for packets to travel
175
-    between them.  See the later sections of this document for other
176
-    possible topologies.
177
-
178
-2.  Is the host machine willing to forward IP packets?  This is governed
179
-    by the `ip_forward` system parameter.  Packets can only pass between
180
-    containers if this parameter is `1`.  Usually you will simply leave
181
-    the Docker server at its default setting `--ip-forward=true` and
182
-    Docker will go set `ip_forward` to `1` for you when the server
183
-    starts up.  To check the setting or turn it on manually:
184
-
185
-        # Usually not necessary: turning on forwarding,
186
-        # on the host where your Docker server is running
187
-
188
-        $ cat /proc/sys/net/ipv4/ip_forward
189
-        0
190
-        $ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
191
-        $ cat /proc/sys/net/ipv4/ip_forward
192
-        1
193
-
194
-3.  Do your `iptables` allow this particular connection to be made?
195
-    Docker will never make changes to your system `iptables` rules if
196
-    you set `--iptables=false` when the daemon starts.  Otherwise the
197
-    Docker server will add a default rule to the `FORWARD` chain with a
198
-    blanket `ACCEPT` policy if you retain the default `--icc=true`, or
199
-    else will set the policy to `DROP` if `--icc=false`.
200
-
201
-Nearly everyone using Docker will want `ip_forward` to be on, to at
202
-least make communication *possible* between containers.  But it is a
203
-strategic question whether to leave `--icc=true` or change it to
204
-`--icc=false` (on Ubuntu, by editing the `DOCKER_OPTS` variable in
205
-`/etc/default/docker` and restarting the Docker server) so that
206
-`iptables` will protect other containers — and the main host — from
207
-having arbitrary ports probed or accessed by a container that gets
208
-compromised.
209
-
210
-If you choose the most secure setting of `--icc=false`, then how can
211
-containers communicate in those cases where you *want* them to provide
212
-each other services?
213
-
214
-The answer is the `--link=CONTAINER_NAME:ALIAS` option, which was
215
-mentioned in the previous section because of its effect upon name
216
-services.  If the Docker daemon is running with both `--icc=false` and
217
-`--iptables=true` then, when it sees `docker run` invoked with the
218
-`--link=` option, the Docker server will insert a pair of `iptables`
219
-`ACCEPT` rules so that the new container can connect to the ports
220
-exposed by the other container — the ports that it mentioned in the
221
-`EXPOSE` lines of its `Dockerfile`.  Docker has more documentation on
222
-this subject — see the [Link Containers](working_with_links_names.md)
223
-page for further details.
224
-
225
-> **Note**:
226
-> The value `CONTAINER_NAME` in `--link=` must either be an
227
-> auto-assigned Docker name like `stupefied_pare` or else the name you
228
-> assigned with `--name=` when you ran `docker run`.  It cannot be a
229
-> hostname, which Docker will not recognize in the context of the
230
-> `--link=` option.
231
-
232
-You can run the `iptables` command on your Docker host to see whether
233
-the `FORWARD` chain has a default policy of `ACCEPT` or `DROP`:
234
-
235
-    # When --icc=false, you should see a DROP rule:
236
-
237
-    $ sudo iptables -L -n
238
-    ...
239
-    Chain FORWARD (policy ACCEPT)
240
-    target     prot opt source               destination
241
-    DROP       all  --  0.0.0.0/0            0.0.0.0/0
242
-    ...
243
-
244
-    # When a --link= has been created under --icc=false,
245
-    # you should see port-specific ACCEPT rules overriding
246
-    # the subsequent DROP policy for all other packets:
247
-
248
-    $ sudo iptables -L -n
249
-    ...
250
-    Chain FORWARD (policy ACCEPT)
251
-    target     prot opt source               destination
252
-    ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
253
-    ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80
254
-    DROP       all  --  0.0.0.0/0            0.0.0.0/0
255
-
256
-> **Note**:
257
-> Docker is careful that its host-wide `iptables` rules fully expose
258
-> containers to each other’s raw IP addresses, so connections from one
259
-> container to another should always appear to be originating from the
260
-> first container’s own IP address.
261
-
262
-## <a name="binding-ports"></a>Binding container ports to the host
263
-
264
-By default Docker containers can make connections to the outside world,
265
-but the outside world cannot connect to containers.  Each outgoing
266
-connection will appear to originate from one of the host machine’s own
267
-IP addresses thanks to an `iptables` masquerading rule on the host
268
-machine that the Docker server creates when it starts:
269
-
270
-    # You can see that the Docker server creates a
271
-    # masquerade rule that let containers connect
272
-    # to IP addresses in the outside world:
273
-
274
-    $ sudo iptables -t nat -L -n
275
-    ...
276
-    Chain POSTROUTING (policy ACCEPT)
277
-    target     prot opt source               destination
278
-    MASQUERADE  all  --  172.17.0.0/16       !172.17.0.0/16
279
-    ...
280
-
281
-But if you want containers to accept incoming connections, you will need
282
-to provide special options when invoking `docker run`.  These options
283
-are covered in more detail on the [Redirect Ports](port_redirection.md)
284
-page.  There are two approaches.
285
-
286
-First, you can supply `-P` or `--publish-all=true|false` to `docker run`
287
-which is a blanket operation that identifies every port with an `EXPOSE`
288
-line in the image’s `Dockerfile` and maps it to a host port somewhere in
289
-the range 49000–49900.  This tends to be a bit inconvenient, since you
290
-then have to run other `docker` sub-commands to learn which external
291
-port a given service was mapped to.
292
-
293
-More convenient is the `-p SPEC` or `--publish=SPEC` option which lets
294
-you be explicit about exactly which external port on the Docker server —
295
-which can be any port at all, not just those in the 49000–49900 block —
296
-you want mapped to which port in the container.
297
-
298
-Either way, you should be able to peek at what Docker has accomplished
299
-in your network stack by examining your NAT tables.
300
-
301
-    # What your NAT rules might look like when Docker
302
-    # is finished setting up a -P forward:
303
-
304
-    $ iptables -t nat -L -n
305
-    ...
306
-    Chain DOCKER (2 references)
307
-    target     prot opt source               destination
308
-    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:49153 to:172.17.0.2:80
309
-
310
-    # What your NAT rules might look like when Docker
311
-    # is finished setting up a -p 80:80 forward:
312
-
313
-    Chain DOCKER (2 references)
314
-    target     prot opt source               destination
315
-    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80
316
-
317
-You can see that Docker has exposed these container ports on `0.0.0.0`,
318
-the wildcard IP address that will match any possible incoming port on
319
-the host machine.  If you want to be more restrictive and only allow
320
-container services to be contacted through a specific external interface
321
-on the host machine, you have two choices.  When you invoke `docker run`
322
-you can use either `-p IP:host_port:container_port` or `-p IP::port` to
323
-specify the external interface for one particular binding.
324
-
325
-Or if you always want Docker port forwards to bind to one specific IP
326
-address, you can edit your system-wide Docker server settings (on
327
-Ubuntu, by editing `DOCKER_OPTS` in `/etc/default/docker`) and add the
328
-option `--ip=IP_ADDRESS`.  Remember to restart your Docker server after
329
-editing this setting.
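
For a one-off binding, restricting a published port to the loopback interface looks like this (the image name and port numbers are illustrative):

    # Only reachable via 127.0.0.1 on the Docker host
    $ sudo docker run -d -p 127.0.0.1:8080:80 my/webserver
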
330
-
331
-Again, this topic is covered without all of these low-level networking
332
-details in the [Redirect Ports](port_redirection.md) document if you
333
-would like to use that as your port redirection reference instead.
334
-
335
-## <a name="docker0"></a>Customizing docker0
336
-
337
-By default, the Docker server creates and configures the host system’s
338
-`docker0` interface as an *Ethernet bridge* inside the Linux kernel that
339
-can pass packets back and forth between other physical or virtual
340
-network interfaces so that they behave as a single Ethernet network.
341
-
342
-Docker configures `docker0` with an IP address and netmask so the host
343
-machine can both receive and send packets to containers connected to the
344
-bridge, and gives it an MTU — the *maximum transmission unit* or largest
345
-packet length that the interface will allow — of either 1,500 bytes or
346
-else a more specific value copied from the Docker host’s interface that
347
-supports its default route.  Both are configurable at server startup:
348
-
349
- *  `--bip=CIDR` — supply a specific IP address and netmask for the
350
-    `docker0` bridge, using standard CIDR notation like
351
-    `192.168.1.5/24`.
352
-
353
- *  `--mtu=BYTES` — override the maximum packet length on `docker0`.
354
-
355
-On Ubuntu you would add these to the `DOCKER_OPTS` setting in
356
-`/etc/default/docker` on your Docker host and restarting the Docker
357
-service.
358
-
359
-Once you have one or more containers up and running, you can confirm
360
-that Docker has properly connected them to the `docker0` bridge by
361
-running the `brctl` command on the host machine and looking at the
362
-`interfaces` column of the output.  Here is a host with two different
363
-containers connected:
364
-
365
-    # Display bridge info
366
-
367
-    $ sudo brctl show
368
-    bridge name     bridge id               STP enabled     interfaces
369
-    docker0         8000.3a1d7362b4ee       no              veth65f9
370
-                                                            vethdda6
371
-
372
-If the `brctl` command is not installed on your Docker host, then on
373
-Ubuntu you should be able to run `sudo apt-get install bridge-utils` to
374
-install it.
375
-
376
-Finally, the `docker0` Ethernet bridge settings are used every time you
377
-create a new container.  Docker selects a free IP address from the range
378
-available on the bridge each time you `docker run` a new container, and
379
-configures the container’s `eth0` interface with that IP address and the
380
-bridge’s netmask.  The Docker host’s own IP address on the bridge is
381
-used as the default gateway by which each container reaches the rest of
382
-the Internet.
383
-
384
-    # The network, as seen from a container
385
-
386
-    $ sudo docker run -i -t --rm base /bin/bash
387
-
388
-    $$ ip addr show eth0
389
-    24: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
390
-        link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
391
-        inet 172.17.0.3/16 scope global eth0
392
-           valid_lft forever preferred_lft forever
393
-        inet6 fe80::306f:e0ff:fe35:5791/64 scope link
394
-           valid_lft forever preferred_lft forever
395
-
396
-    $$ ip route
397
-    default via 172.17.42.1 dev eth0
398
-    172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.3
399
-
400
-    $$ exit
401
-
402
-Remember that the Docker host will not be willing to forward container
403
-packets out on to the Internet unless its `ip_forward` system setting is
404
-`1` — see the section above on [Communication between
405
-containers](#between-containers) for details.
406
-
407
-## <a name="bridge-building"></a>Building your own bridge
408
-
409
-If you want to take Docker out of the business of creating its own
410
-Ethernet bridge entirely, you can set up your own bridge before starting
411
-Docker and use `-b BRIDGE` or `--bridge=BRIDGE` to tell Docker to use
412
-your bridge instead.  If you already have Docker up and running with its
413
-old `bridge0` still configured, you will probably want to begin by
414
-stopping the service and removing the interface:
415
-
416
-    # Stopping Docker and removing docker0
417
-
418
-    $ sudo service docker stop
419
-    $ sudo ip link set dev docker0 down
420
-    $ sudo brctl delbr docker0
421
-
422
-Then, before starting the Docker service, create your own bridge and
423
-give it whatever configuration you want.  Here we will create a simple
424
-enough bridge that we really could just have used the options in the
425
-previous section to customize `docker0`, but it will be enough to
426
-illustrate the technique.
427
-
428
-    # Create our own bridge
429
-
430
-    $ sudo brctl addbr bridge0
431
-    $ sudo ip addr add 192.168.5.1/24 dev bridge0
432
-    $ sudo ip link set dev bridge0 up
433
-
434
-    # Confirming that our bridge is up and running
435
-
436
-    $ ip addr show bridge0
437
-    4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
438
-        link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
439
-        inet 192.168.5.1/24 scope global bridge0
440
-           valid_lft forever preferred_lft forever
441
-
442
-    # Tell Docker about it and restart (on Ubuntu)
443
-
444
-    $ echo 'DOCKER_OPTS="-b=bridge0"' | sudo tee -a /etc/default/docker
445
-    $ sudo service docker start
446
-
447
-The result should be that the Docker server starts successfully and is
448
-now prepared to bind containers to the new bridge.  After pausing to
449
-verify the bridge’s configuration, try creating a container — you will
450
-see that its IP address is in your new IP address range, which Docker
451
-will have auto-detected.
452
-
453
-Just as we learned in the previous section, you can use the `brctl show`
454
-command to see Docker add and remove interfaces from the bridge as you
455
-start and stop containers, and can run `ip addr` and `ip route` inside a
456
-container to see that it has been given an address in the bridge’s IP
457
-address range and has been told to use the Docker host’s IP address on
458
-the bridge as its default gateway to the rest of the Internet.
459
-
460
-## <a name="container-networking"></a>How Docker networks a container
461
-
462
-While Docker is under active development and continues to tweak and
463
-improve its network configuration logic, the shell commands in this
464
-section are rough equivalents to the steps that Docker takes when
465
-configuring networking for each new container.
466
-
467
-Let’s review a few basics.
468
-
469
-To communicate using the Internet Protocol (IP), a machine needs access
470
-to at least one network interface at which packets can be sent and
471
-received, and a routing table that defines the range of IP addresses
472
-reachable through that interface.  Network interfaces do not have to be
473
-physical devices.  In fact, the `lo` loopback interface available on
474
-every Linux machine (and inside each Docker container) is entirely
475
-virtual — the Linux kernel simply copies loopback packets directly from
476
-the sender’s memory into the receiver’s memory.
477
-
478
-Docker uses special virtual interfaces to let containers communicate
479
-with the host machine — pairs of virtual interfaces called “peers” that
480
-are linked inside of the host machine’s kernel so that packets can
481
-travel between them.  They are simple to create, as we will see in a
482
-moment.
483
-
484
-The steps with which Docker configures a container are:
485
-
486
-1.  Create a pair of peer virtual interfaces.
487
-
488
-2.  Give one of them a unique name like `veth65f9`, keep it inside of
489
-    the main Docker host, and bind it to `docker0` or whatever bridge
490
-    Docker is supposed to be using.
491
-
492
-3.  Toss the other interface over the wall into the new container (which
493
-    will already have been provided with an `lo` interface) and rename
494
-    it to the much prettier name `eth0` since, inside of the container’s
495
-    separate and unique network interface namespace, there are no
496
-    physical interfaces with which this name could collide.
497
-
498
-4.  Give the container’s `eth0` a new IP address from within the
499
-    bridge’s range of network addresses, and set its default route to
500
-    the IP address that the Docker host owns on the bridge.
501
-
502
-With these steps complete, the container now possesses an `eth0`
503
-(virtual) network card and will find itself able to communicate with
504
-other containers and the rest of the Internet.
505
-
506
-You can opt out of the above process for a particular container by
507
-giving the `--net=` option to `docker run`, which takes four possible
508
-values.
509
-
510
- *  `--net=bridge` — The default, which connects the container to
511
-    the Docker bridge as described above.
512
-
513
- *  `--net=host` — Tells Docker to skip placing the container inside of
514
-    a separate network stack.  In essence, this choice tells Docker to
515
-    **not containerize the container’s networking**!  While container
516
-    processes will still be confined to their own filesystem and process
517
-    list and resource limits, a quick `ip addr` command will show you
518
-    that, network-wise, they live “outside” in the main Docker host and
519
-    have full access to its network interfaces.  Note that this does
520
-    **not** let the container reconfigure the host network stack — that
521
-    would require `--privileged=true` — but it does let container
522
-    processes open low-numbered ports like any other root process.
523
-
524
- *  `--net=container:NAME_or_ID` — Tells Docker to put this container’s
525
-    processes inside of the network stack that has already been created
526
-    inside of another container.  The new container’s processes will be
527
-    confined to their own filesystem and process list and resource
528
-    limits, but will share the same IP address and port numbers as the
529
-    first container, and processes on the two containers will be able to
530
-    connect to each other over the loopback interface.
531
-
532
- *  `--net=none` — Tells Docker to put the container inside of its own
533
-    network stack but not to take any steps to configure its network,
534
-    leaving you free to build any of the custom configurations explored
535
-    in the last few sections of this document.
536
-
537
-To get an idea of the steps that are necessary if you use `--net=none`
538
-as described in that last bullet point, here are the commands that you
539
-would run to reach roughly the same configuration as if you had let
540
-Docker do all of the configuration:
541
-
542
-    # At one shell, start a container and
543
-    # leave its shell idle and running
544
-
545
-    $ sudo docker run -i -t --rm --net=none base /bin/bash
546
-    root@63f36fc01b5f:/#
547
-
548
-    # At another shell, learn the container process ID
549
-    # and create its namespace entry in /var/run/netns/
550
-    # for the "ip netns" command we will be using below
551
-
552
-    $ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
553
-    2778
554
-    $ pid=2778
555
-    $ sudo mkdir -p /var/run/netns
556
-    $ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
557
-
558
-    # Check the bridge’s IP address and netmask
559
-
560
-    $ ip addr show docker0
561
-    21: docker0: ...
562
-    inet 172.17.42.1/16 scope global docker0
563
-    ...
564
-
565
-    # Create a pair of "peer" interfaces A and B,
566
-    # bind the A end to the bridge, and bring it up
567
-
568
-    $ sudo ip link add A type veth peer name B
569
-    $ sudo brctl addif docker0 A
570
-    $ sudo ip link set A up
571
-
572
-    # Place B inside the container's network namespace,
573
-    # rename to eth0, and activate it with a free IP
574
-
575
-    $ sudo ip link set B netns $pid
576
-    $ sudo ip netns exec $pid ip link set dev B name eth0
577
-    $ sudo ip netns exec $pid ip link set eth0 up
578
-    $ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
579
-    $ sudo ip netns exec $pid ip route add default via 172.17.42.1
580
-
581
-At this point your container should be able to perform networking
582
-operations as usual.
583
-
584
-When you finally exit the shell and Docker cleans up the container, the
585
-network namespace is destroyed along with our virtual `eth0` — whose
586
-destruction in turn destroys interface `A` out in the Docker host and
587
-automatically un-registers it from the `docker0` bridge.  So everything
588
-gets cleaned up without our having to run any extra commands!  Well,
589
-almost everything:
590
-
591
-    # Clean up dangling symlinks in /var/run/netns
592
-
593
-    find -L /var/run/netns -type l -delete
594
-
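The `find -L ... -type l -delete` idiom removes only symlinks that no longer resolve. A minimal sketch you can try without root, using a scratch directory as a stand-in for `/var/run/netns`:

```shell
# Demonstrate dangling-symlink cleanup on a scratch directory
# (a stand-in for /var/run/netns; no root needed).
dir=$(mktemp -d)
touch "$dir/target"                  # a real file
ln -s "$dir/target" "$dir/live"      # valid symlink: survives
ln -s "$dir/gone"   "$dir/dangling"  # broken symlink: deleted
find -L "$dir" -type l -delete
remaining=$(ls "$dir" | xargs)
echo "$remaining"                    # prints "live target"
rm -rf "$dir"
```

With `-L`, a working symlink is examined as its target (a regular file), so only the link that cannot be followed matches `-type l` and is deleted.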
595
-Also note that while the script above used the modern `ip` command instead
596
-of older, deprecated wrappers like `ifconfig` and `route`, these older
597
-commands would also have worked inside of our container.  The `ip addr`
598
-command can be typed as `ip a` if you are in a hurry.
599
-
600
-Finally, note the importance of the `ip netns exec` command, which let
601
-us reach inside and configure a network namespace as root.  The same
602
-commands would not have worked if run inside of the container, because
603
-part of safe containerization is that Docker strips container processes
604
-of the right to configure their own networks.  Using `ip netns exec` is
605
-what let us finish up the configuration without having to take the
606
-dangerous step of running the container itself with `--privileged=true`.
607
-
608
-## Tools and Examples
609
-
610
-Before diving into the following sections on custom network topologies,
611
-you might be interested in glancing at a few external tools or examples
612
-of the same kinds of configuration.  Here are two:
613
-
614
- *  Jérôme Petazzoni has created a `pipework` shell script to help you
615
-    connect together containers in arbitrarily complex scenarios:
616
-    <https://github.com/jpetazzo/pipework>
617
-
618
- *  Brandon Rhodes has created a whole network topology of Docker
619
-    containers for the next edition of Foundations of Python Network
620
-    Programming that includes routing, NAT’d firewalls, and servers that
621
-    offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP:
622
-    <https://github.com/brandon-rhodes/fopnp/tree/m/playground>
623
-
624
-Both tools use networking commands very much like the ones you saw in
625
-the previous section, and will see in the following sections.
626
-
627
-## <a name="point-to-point"></a>Building a point-to-point connection
628
-
629
-By default, Docker attaches all containers to the virtual subnet
630
-implemented by `docker0`.  You can create containers that are each
631
-connected to some different virtual subnet by creating your own bridge
632
-as shown in [Building your own bridge](#bridge-building), starting each
633
-container with `docker run --net=none`, and then attaching the
634
-containers to your bridge with the shell commands shown in [How Docker
635
-networks a container](#container-networking).
636
-
637
-But sometimes you want two particular containers to be able to
638
-communicate directly without the added complexity of both being bound to
639
-a host-wide Ethernet bridge.
640
-
641
-The solution is simple: when you create your pair of peer interfaces,
642
-simply throw *both* of them into containers, and configure them as
643
-classic point-to-point links.  The two containers will then be able to
644
-communicate directly (provided you manage to tell each container the
645
-other’s IP address, of course).  You might adjust the instructions of
646
-the previous section to go something like this:
647
-
648
-    # Start up two containers in two terminal windows
649
-
650
-    $ sudo docker run -i -t --rm --net=none base /bin/bash
651
-    root@1f1f4c1f931a:/#
652
-
653
-    $ sudo docker run -i -t --rm --net=none base /bin/bash
654
-    root@12e343489d2f:/#
655
-
656
-    # Learn the container process IDs
657
-    # and create their namespace entries
658
-
659
-    $ sudo docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a
660
-    2989
661
-    $ sudo docker inspect -f '{{.State.Pid}}' 12e343489d2f
662
-    3004
663
-    $ sudo mkdir -p /var/run/netns
664
-    $ sudo ln -s /proc/2989/ns/net /var/run/netns/2989
665
-    $ sudo ln -s /proc/3004/ns/net /var/run/netns/3004
666
-
667
-    # Create the "peer" interfaces and hand them out
668
-
669
-    $ sudo ip link add A type veth peer name B
670
-
671
-    $ sudo ip link set A netns 2989
672
-    $ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A
673
-    $ sudo ip netns exec 2989 ip link set A up
674
-    $ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A
675
-
676
-    $ sudo ip link set B netns 3004
677
-    $ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B
678
-    $ sudo ip netns exec 3004 ip link set B up
679
-    $ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B
680
-
681
-The two containers should now be able to ping each other and make
682
-connections successfully.  Point-to-point links like this do not depend
683
-on a subnet or a netmask, but on the bare assertion made by `ip route`
684
-that some other single IP address is connected to a particular network
685
-interface.
686
-
687
-Note that point-to-point links can be safely combined with other kinds
688
-of network connectivity — there is no need to start the containers with
689
-`--net=none` if you want point-to-point links to be an addition to the
690
-container’s normal networking instead of a replacement.
691
-
692
-A final permutation of this pattern is to create the point-to-point link
693
-between the Docker host and one container, which would allow the host to
694
-communicate with that one container on some single IP address and thus
695
-communicate “out-of-band” of the bridge that connects the other, more
696
-usual containers.  But unless you have very specific networking needs
697
-that drive you to such a solution, it is probably far preferable to use
698
-`--icc=false` to lock down inter-container communication, as we explored
699
-earlier.
700 1
deleted file mode 100644
... ...
@@ -1,127 +0,0 @@
1
-page_title: Redirect Ports
2
-page_description: usage about port redirection
3
-page_keywords: Usage, basic port, docker, documentation, examples
4
-
5
-# Redirect Ports
6
-
7
-## Introduction
8
-
9
-Interacting with a service is commonly done through a connection to a
10
-port. When this service runs inside a container, one can connect to the
11
-port after finding the IP address of the container as follows:
12
-
13
-    # Find IP address of container with ID <container_id>
14
-    $ docker inspect --format='{{.NetworkSettings.IPAddress}}' <container_id>
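Scripts typically splice the inspected address into a URL or client command. A small sketch (the IP and port below are hard-coded assumptions; in practice the IP would come from the `docker inspect` call above):

```shell
# Build a service URL from a container's bridge IP. Hard-coded for
# illustration; normally:
#   ip=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' <container_id>)
ip="172.17.0.5"
port="8080"
url="http://${ip}:${port}/"
echo "$url"   # prints "http://172.17.0.5:8080/"
```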
15
-
16
-However, this IP address is local to the host system and the container
17
-port is not reachable by the outside world. Furthermore, even if the
18
-port is used locally, e.g. by another container, this method is tedious
19
-as the IP address of the container changes every time it starts.
20
-
21
-Docker addresses these two problems and gives a simple and robust way to
22
-access services running inside containers.
23
-
24
-To allow non-local clients to reach the service running inside the
25
-container, Docker provides ways to bind the container port to an
26
-interface of the host system. To simplify communication between
27
-containers, Docker provides the linking mechanism.
28
-
29
-## Auto map all exposed ports on the host
30
-
31
-To bind all the exposed container ports to the host automatically, use
32
-`docker run -P <imageid>`. The mapped host ports will be auto-selected
33
-from a pool of unused ports (49000..49900), and you will need to use
34
-`docker ps`, `docker inspect <container_id>` or `docker port
35
-<container_id> <port>` to determine what they are.
36
-
37
-## Binding a port to a host interface
38
-
39
-To bind a port of the container to a specific interface of the host
40
-system, use the `-p` parameter of the `docker run` command:
41
-
42
-    # General syntax
43
-    $ docker run -p [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp] <image> <cmd>
44
-
45
-When no host interface is provided, the port is bound to all available
46
-interfaces of the host machine (aka INADDR_ANY, or 0.0.0.0). When no
47
-host port is provided, one is dynamically allocated. The possible
48
-combinations of options for TCP port are the following:
49
-
50
-    # Bind TCP port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine.
51
-    $ docker run -p 127.0.0.1:80:8080 <image> <cmd>
52
-
53
-    # Bind TCP port 8080 of the container to a dynamically allocated TCP port on 127.0.0.1 of the host machine.
54
-    $ docker run -p 127.0.0.1::8080 <image> <cmd>
55
-
56
-    # Bind TCP port 8080 of the container to TCP port 80 on all available interfaces of the host machine.
57
-    $ docker run -p 80:8080 <image> <cmd>
58
-
59
-    # Bind TCP port 8080 of the container to a dynamically allocated TCP port on all available interfaces of the host machine.
60
-    $ docker run -p 8080 <image> <cmd>
61
-
62
-UDP ports can also be bound by adding a trailing `/udp`. All the
63
-combinations described for TCP work. Here is only one example:
64
-
65
-    # Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine.
66
-    $ docker run -p 127.0.0.1:53:5353/udp <image> <cmd>
67
-
68
-The command `docker port` lists the interface and port on the host
69
-machine bound to a given container port. It is useful when using
70
-dynamically allocated ports:
71
-
72
-    # Bind to a dynamically allocated port
73
-    $ docker run -p 127.0.0.1::8080 --name dyn-bound <image> <cmd>
74
-
75
-    # Lookup the actual port
76
-    $ docker port dyn-bound 8080
77
-    127.0.0.1:49160
78
-
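When scripting against dynamically allocated ports, the `address:port` pair printed by `docker port` usually needs to be split apart. A sketch using shell parameter expansion (the mapping value is hard-coded here for illustration; normally it would come from `mapping=$(docker port dyn-bound 8080)`):

```shell
# Split a "docker port" result like "127.0.0.1:49160"
# into its address and port parts.
mapping="127.0.0.1:49160"
host_ip="${mapping%:*}"     # everything before the last ':'
host_port="${mapping##*:}"  # everything after the last ':'
echo "$host_ip $host_port"  # prints "127.0.0.1 49160"
```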
79
-## Linking a container
80
-
81
-Communication between two containers can also be established in a
82
-Docker-specific way called linking.
83
-
84
-To briefly present the concept of linking, let us consider two
85
-containers: `server`, containing the service, and `client`, accessing
86
-the service. Once `server` is running, `client` is started and links to
87
-server. Linking sets environment variables in `client` giving it some
88
-information about `server`.  In this sense, linking is a method of
89
-service discovery.
90
-
91
-Let us now get back to our topic of interest: communication between the
92
-two containers. We mentioned that the tricky part about this
93
-communication was that the IP address of `server` was not fixed.
94
-Therefore, some of the environment variables are going to be used to
95
-inform `client` about this IP address. This process called exposure, is
96
-possible because the `client` is started after the `server` has been started.
97
-
98
-Here is a full example. On `server`, the port of interest is exposed.
99
-The exposure is done either through the `--expose` parameter to the
100
-`docker run` command, or the `EXPOSE` build command in a `Dockerfile`:
101
-
102
-    # Expose port 80
103
-    $ docker run --expose 80 --name server <image> <cmd>
104
-
105
-The `client` then links to the `server`:
106
-
107
-    # Link
108
-    $ docker run --name client --link server:linked-server <image> <cmd>
109
-
110
-Here `client` locally refers to `server` as `linked-server`. The following
111
-environment variables, among others, are available on `client`:
112
-
113
-    # The default protocol, ip, and port of the service running in the container
114
-    $ LINKED-SERVER_PORT=tcp://172.17.0.8:80
115
-
116
-    # A specific protocol, ip, and port of various services
117
-    $ LINKED-SERVER_PORT_80_TCP=tcp://172.17.0.8:80
118
-    $ LINKED-SERVER_PORT_80_TCP_PROTO=tcp
119
-    $ LINKED-SERVER_PORT_80_TCP_ADDR=172.17.0.8
120
-    $ LINKED-SERVER_PORT_80_TCP_PORT=80
121
-
122
-This tells `client` that a service is running on port 80 of `server` and
123
-that `server` is accessible at the IP address `172.17.0.8`:
124
-
125
-> **Note:**
126
-> Using the `-p` parameter also exposes the port.
127
-
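To see how a client might consume these variables, here is a sketch that splits a link value of the form `tcp://172.17.0.8:80` into its parts using shell parameter expansion (the value is hard-coded; inside `client` it would be read from the environment):

```shell
# Split a link variable value such as
#   LINKED-SERVER_PORT_80_TCP=tcp://172.17.0.8:80
# into protocol, address and port.
url="tcp://172.17.0.8:80"
proto="${url%%://*}"        # "tcp"
hostport="${url#*://}"      # "172.17.0.8:80"
addr="${hostport%:*}"       # "172.17.0.8"
port="${hostport##*:}"      # "80"
echo "$proto $addr $port"   # prints "tcp 172.17.0.8 80"
```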
128 1
deleted file mode 100644
... ...
@@ -1,93 +0,0 @@
1
-page_title: Puppet Usage
2
-page_description: Installing and using Puppet
3
-page_keywords: puppet, installation, usage, docker, documentation
4
-
5
-# Using Puppet
6
-
7
-> *Note:* Please note this is a community-contributed installation path. The
8
-> only `official` installation is using the
9
-> [*Ubuntu*](/installation/ubuntulinux/#ubuntu-linux) installation
10
-> path. This version may sometimes be out of date.
11
-
12
-## Requirements
13
-
14
-To use this guide you'll need a working installation of Puppet from
15
-[Puppet Labs](https://puppetlabs.com).
16
-
17
-The module also currently uses the official PPA, so it only works with
18
-Ubuntu.
19
-
20
-## Installation
21
-
22
-The module is available on the [Puppet
23
-Forge](https://forge.puppetlabs.com/garethr/docker/) and can be
24
-installed using the built-in module tool.
25
-
26
-    $ puppet module install garethr/docker
27
-
28
-It can also be found on
29
-[GitHub](https://github.com/garethr/garethr-docker) if you would rather
30
-download the source.
31
-
32
-## Usage
33
-
34
-The module provides a puppet class for installing Docker and two defined
35
-types for managing images and containers.
36
-
37
-### Installation
38
-
39
-    include 'docker'
40
-
41
-### Images
42
-
43
-The next step is probably to install a Docker image. For this, we have a
44
-defined type which can be used like so:
45
-
46
-    docker::image { 'ubuntu': }
47
-
48
-This is equivalent to running:
49
-
50
-    $ docker pull ubuntu
51
-
52
-Note that the image will only be downloaded if one of that name does not
53
-already exist. Downloading a large binary means the first run can
54
-take a while. For that reason this define turns off the default 5 minute
55
-timeout for the exec type. Note that you can also remove images you no
56
-longer need with:
57
-
58
-    docker::image { 'ubuntu':
59
-      ensure => 'absent',
60
-    }
61
-
62
-### Containers
63
-
64
-Now that you have an image, you can run commands within a container
65
-managed by Docker.
66
-
67
-    docker::run { 'helloworld':
68
-      image   => 'ubuntu',
69
-      command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
70
-    }
71
-
72
-This is equivalent to running the following command, but under upstart:
73
-
74
-    $ docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
75
-
76
-The `docker::run` type also accepts a number of optional parameters:
77
-
78
-    docker::run { 'helloworld':
79
-      image        => 'ubuntu',
80
-      command      => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
81
-      ports        => ['4444', '4555'],
82
-      volumes      => ['/var/lib/couchdb', '/var/log'],
83
-      volumes_from => '6446ea52fbc9',
84
-      memory_limit => 10485760, # bytes
85
-      username     => 'example',
86
-      hostname     => 'example.com',
87
-      env          => ['FOO=BAR', 'FOO2=BAR2'],
88
-      dns          => ['8.8.8.8', '8.8.4.4'],
89
-    }
90
-
91
-> *Note:*
92
-> The `ports`, `env`, `dns` and `volumes` attributes can be set with either a single
93
-> string or as above with an array of values.
94 1
deleted file mode 100644
... ...
@@ -1,139 +0,0 @@
1
-page_title: Link Containers
2
-page_description: How to create and use both links and names
3
-page_keywords: Examples, Usage, links, linking, docker, documentation, examples, names, name, container naming
4
-
5
-# Link Containers
6
-
7
-## Introduction
8
-
9
-From version 0.6.5 you are now able to `name` a container and `link` it
10
-to another container by referring to its name. This will create a parent
11
--> child relationship where the parent container can see selected
12
-information about its child.
13
-
14
-## Container Naming
15
-
16
-You can now name your container by using the `--name` flag. If no name
17
-is provided, Docker will automatically generate a name. You can see this
18
-name using the `docker ps` command.
19
-
20
-    # format is "sudo docker run --name <container_name> <image_name> <command>"
21
-    $ sudo docker run --name test ubuntu /bin/bash
22
-
23
-    # the "-a" flag shows all containers; only running containers are shown by default.
24
-    $ sudo docker ps -a
25
-    CONTAINER ID        IMAGE                            COMMAND             CREATED             STATUS              PORTS               NAMES
26
-    2522602a0d99        ubuntu:12.04                     /bin/bash           14 seconds ago      Exit 0                                  test
27
-
28
-## Links: service discovery for Docker
29
-
30
-Links allow containers to discover and securely communicate with each
31
-other by using the flag `--link name:alias`. Inter-container
32
-communication can be disabled with the daemon flag `--icc=false`. With
33
-this flag set to `false`, Container A cannot access Container B unless
34
-explicitly allowed via a link. This is a huge win for securing your
35
-containers. When two containers are linked together Docker creates a
36
-parent child relationship between the containers. The parent container
37
-will be able to access information via environment variables of the
38
-child such as name, exposed ports, IP and other selected environment
39
-variables.
40
-
41
-When linking two containers Docker will use the exposed ports of the
42
-container to create a secure tunnel for the parent to access. If a
43
-database container only exposes port 8080 then the linked container will
44
-only be allowed to access port 8080 and nothing else if inter-container
45
-communication is set to false.
46
-
47
-For example, there is an image called `crosbymichael/redis` that exposes
48
-the port 6379 and starts the Redis server. Let's create a container
49
-named `redis` from that image and run it as a daemon.
50
-
51
-    $ sudo docker run -d --name redis crosbymichael/redis
52
-
53
-We can issue all the commands that you would expect using the name
54
-`redis`: start, stop, attach, and so on. The name
55
-also allows us to link other containers into this one.
56
-
57
-Next, we can start a new web application that has a dependency on Redis
58
-and apply a link to connect both containers. Notice that when running
59
-our Redis server we did not use the `-p` flag to publish the Redis port
60
-to the host system. Redis exposed port 6379 and this is all we need to
61
-establish a link.
62
-
63
-    $ sudo docker run -t -i --link redis:db --name webapp ubuntu bash
64
-
65
-When you specify `--link redis:db` you are telling Docker to link the
66
-container named `redis` into this new container with the alias `db`.
67
-Environment variables are prefixed with the alias so that the parent
68
-container can access network and environment information from the
69
-containers that are linked into it.
70
-
71
-If we inspect the environment variables of the second container, we
72
-would see all the information about the child container.
73
-
74
-    $ root@4c01db0b339c:/# env
75
-
76
-    HOSTNAME=4c01db0b339c
77
-    DB_NAME=/webapp/db
78
-    TERM=xterm
79
-    DB_PORT=tcp://172.17.0.8:6379
80
-    DB_PORT_6379_TCP=tcp://172.17.0.8:6379
81
-    DB_PORT_6379_TCP_PROTO=tcp
82
-    DB_PORT_6379_TCP_ADDR=172.17.0.8
83
-    DB_PORT_6379_TCP_PORT=6379
84
-    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
85
-    PWD=/
86
-    SHLVL=1
87
-    HOME=/
88
-    container=lxc
89
-    _=/usr/bin/env
90
-    root@4c01db0b339c:/#
91
-
92
-Accessing the network information along with the environment of the
93
-child container allows us to easily connect to the Redis service on the
94
-specific IP and port in the environment.
95
-
96
-> **Note**:
97
-> These Environment variables are only set for the first process in the
98
-> container. Similarly, some daemons (such as `sshd`)
99
-> will scrub them when spawning shells for connection.
100
-
101
-You can work around this by storing the initial `env` in a file, or
102
-looking at `/proc/1/environ`.
103
-
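As a sketch of how a startup script inside `webapp` might use these variables to locate Redis (the values are hard-coded here as assumptions; Docker normally exports them into the container's environment):

```shell
# Locate the linked Redis service from its link variables.
# Hard-coded for illustration; normally set by Docker.
DB_PORT_6379_TCP_ADDR="172.17.0.8"
DB_PORT_6379_TCP_PORT="6379"
target="${DB_PORT_6379_TCP_ADDR}:${DB_PORT_6379_TCP_PORT}"
echo "connecting to redis at $target"
# e.g.: redis-cli -h "$DB_PORT_6379_TCP_ADDR" -p "$DB_PORT_6379_TCP_PORT" ping
```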
104
-Running `docker ps` shows the two containers, and the `webapp/db` alias
105
-name for the Redis container.
106
-
107
-    $ docker ps
108
-    CONTAINER ID        IMAGE                        COMMAND                CREATED              STATUS              PORTS               NAMES
109
-    4c01db0b339c        ubuntu:12.04                 bash                   17 seconds ago       Up 16 seconds                           webapp
110
-    d7886598dbe2        crosbymichael/redis:latest   /redis-server --dir    33 minutes ago       Up 33 minutes       6379/tcp            redis,webapp/db
111
-
112
-## Resolving Links by Name
113
-
114
-> *Note:* New in version v0.11.
115
-
116
-Linked containers can be accessed by hostname.  Hostnames are mapped by
117
-appending entries to `/etc/hosts` using the linked container's alias.
118
-
119
-For example, linking a container using `--link redis:db` will generate
120
-the following `/etc/hosts` file:
121
-
122
-    root@6541a75d44a0:/# cat /etc/hosts
123
-    172.17.0.3  6541a75d44a0
124
-    172.17.0.2  db
125
-
126
-    127.0.0.1   localhost
127
-    ::1     localhost ip6-localhost ip6-loopback
128
-    fe00::0     ip6-localnet
129
-    ff00::0     ip6-mcastprefix
130
-    ff02::1     ip6-allnodes
131
-    ff02::2     ip6-allrouters
132
-    root@6541a75d44a0:/#
133
-
134
-Using this mechanism, you can communicate with the linked container by
135
-name:
136
-
137
-    root@6541a75d44a0:/# echo PING | redis-cli -h db
138
-    PONG
139
-    root@6541a75d44a0:/#
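The same lookup can be done programmatically; here is a sketch that extracts the IP for the `db` alias from an `/etc/hosts`-style table (the table is inlined for illustration; inside the container you would read `/etc/hosts` itself):

```shell
# Find the address mapped to the alias "db" in a hosts-style table.
hosts="172.17.0.3  6541a75d44a0
172.17.0.2  db
127.0.0.1   localhost"
db_ip=$(printf '%s\n' "$hosts" | awk '$2 == "db" {print $1}')
echo "$db_ip"   # prints "172.17.0.2"
```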
140 1
deleted file mode 100644
... ...
@@ -1,171 +0,0 @@
1
-page_title: Share Directories via Volumes
2
-page_description: How to create and share volumes
3
-page_keywords: Examples, Usage, volume, docker, documentation, examples
4
-
5
-# Share Directories via Volumes
6
-
7
-## Introduction
8
-
9
-A *data volume* is a specially-designated directory within one or more
10
-containers that bypasses the [*Union File
11
-System*](/terms/layer/#ufs-def) to provide several useful features for
12
-persistent or shared data:
13
-
14
- - **Data volumes can be shared and reused between containers:**  
15
-   This is the feature that makes data volumes so powerful. You can
16
-   use it for anything from hot database upgrades to custom backup or
17
-   replication tools. See the example below.
18
- - **Changes to a data volume are made directly:**  
19
-   Without the overhead of a copy-on-write mechanism. This is good for
20
-   very large files.
21
- - **Changes to a data volume will not be included at the next commit:**  
22
-   Because they are not recorded as regular filesystem changes in the
23
-   top layer of the [*Union File System*](/terms/layer/#ufs-def)
24
- - **Volumes persist until no containers use them:**  
25
-   As they are a reference counted resource. The container does not need to be
26
-   running to share its volumes, but running it can help protect it
27
-   against accidental removal via `docker rm`.
28
-
29
-Each container can have zero or more data volumes.
30
-
31
-## Getting Started
32
-
33
-Using data volumes is as simple as adding a `-v` parameter to the
34
-`docker run` command. The `-v` parameter can be used more than once in
35
-order to create more volumes within the new container. To create a new
36
-container with two new volumes:
37
-
38
-    $ docker run -v /var/volume1 -v /var/volume2 busybox true
39
-
40
-This command creates a new container with two new volumes; the container
41
-exits instantly (`true` is pretty much the smallest, simplest program
42
-that you can run). You can then mount its volumes in any other container
43
-using the `run` `--volumes-from` option; irrespective of whether the
44
-volume container is running or not.
45
-
46
-Or, you can use the `VOLUME` instruction in a `Dockerfile` to add one or
47
-more new volumes to any container created from that image:
48
-
49
-    # BUILD-USING:        $ docker build -t data .
50
-    # RUN-USING:          $ docker run --name DATA data
51
-    FROM          busybox
52
-    VOLUME        ["/var/volume1", "/var/volume2"]
53
-    CMD           ["/bin/true"]
54
-
55
-### Creating and mounting a Data Volume Container
56
-
57
-If you have some persistent data that you want to share between
58
-containers, or want to use from non-persistent containers, it's best to
59
-create a named Data Volume Container, and then to mount the data from
60
-it.
61
-
62
-Create a named container with volumes to share (`/var/volume1` and
63
-`/var/volume2`):
64
-
65
-    $ docker run -v /var/volume1 -v /var/volume2 --name DATA busybox true
66
-
67
-Then mount those data volumes into your application containers:
68
-
69
-    $ docker run -t -i --rm --volumes-from DATA --name client1 ubuntu bash
70
-
71
-You can use multiple `--volumes-from` parameters to bring together
72
-multiple data volumes from multiple containers.
73
-
74
-Interestingly, you can mount the volumes that came from the `DATA`
75
-container in yet another container via the `client1` middleman
76
-container:
77
-
78
-    $ docker run -t -i --rm --volumes-from client1 --name client2 ubuntu bash
79
-
80
-This allows you to abstract the actual data source from users of that
81
-data, similar to [*Ambassador Pattern Linking*](
82
-../ambassador_pattern_linking/#ambassador-pattern-linking).
83
-
84
-If you remove containers that mount volumes, including the initial DATA
85
-container, or the middleman, the volumes will not be deleted until there
86
-are no containers still referencing those volumes. This allows you to
87
-upgrade, or effectively migrate data volumes between containers.
88
-
89
-### Mount a Host Directory as a Container Volume:
90
-
91
-    -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
92
-
93
-You must specify an absolute path for `host-dir`. If `host-dir` is
94
-missing from the command, then Docker creates a new volume. If
95
-`host-dir` is present but points to a non-existent directory on the
96
-host, Docker will automatically create this directory and use it as the
97
-source of the bind-mount.
98
-
99
-Note that this is not available from a `Dockerfile`, because images are
100
-meant to be portable and shareable. The `host-dir` volumes are entirely
101
-host-dependent and might not work on any other machine.
102
-
103
-For example:
104
-
105
-    # Usage:
106
-    # sudo docker run [OPTIONS] -v /(dir. on host):/(dir. in container):(Read-Write or Read-Only) [ARG..]
107
-    # Example:
108
-    $ sudo docker run -i -t -v /var/log:/logs_from_host:ro ubuntu bash
109
-
110
-The command above mounts the host directory `/var/log` into the
111
-container with *read only* permissions as `/logs_from_host`.
112
-
113
-New in version v0.5.0.
114
-
115
-### Note for OS/X users and remote daemon users:
116
-
117
-OS/X users run `boot2docker` to create a minimalist virtual machine
118
-running the docker daemon. That virtual machine then launches docker
119
-commands on behalf of the OS/X command line. This means that `host
120
-directories` refer to directories in the `boot2docker` virtual machine,
121
-not the OS/X filesystem.
122
-
123
-Similarly, anytime when the docker daemon is on a remote machine, the
124
-`host directories` always refer to directories on the daemon's machine.
125
-
126
-### Backup, restore, or migrate data volumes
127
-
128
-You cannot back up volumes using `docker export`, `docker save` and
129
-`docker cp` because they are external to images. Instead you can use
130
-`--volumes-from` to start a new container that can access the
131
-data-container's volume. For example:
132
-
133
-    $ sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
134
-
135
- - `--rm`:  
136
-   remove the container when it exits
137
- - `--volumes-from DATA`:  
138
-   attach to the volumes shared by the `DATA` container
139
- - `-v $(pwd):/backup`:  
140
-   bind mount the current directory into the container, to write the tar file to
141
- - `busybox`:  
142
-   a small, simple image - good for quick maintenance
143
- - `tar cvf /backup/backup.tar /data`:  
144
-   creates an uncompressed tar file of all the files in the `/data` directory
145
-
146
-Then to restore to the same container, or another that you've made
147
-elsewhere:
148
-
149
-    # create a new data container
150
-    $ sudo docker run -v /data --name DATA2 busybox true
151
-    # untar the backup files into the new container᾿s data volume
152
-    $ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
153
-    data/
154
-    data/sven.txt
155
-    # compare to the original container
156
-    $ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
157
-    sven.txt
158
-
159
-You can use the basic techniques above to automate backup, migration and
160
-restore testing using your preferred tools.
161
-
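The backup technique above lends itself to simple automation. As an illustration (a sketch of ours, not part of the original docs), a small script can date-stamp each archive; it assumes the `DATA` container from the example and only prints the command it would run, so you can review it before piping to `sh`:

```shell
#!/bin/sh
# Sketch: print a date-stamped backup command for the DATA container's
# volume (dry run; pipe the output to sh to actually execute it).
STAMP=$(date +%Y%m%d)
ARCHIVE="backup-${STAMP}.tar"
echo "sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/${ARCHIVE} /data"
```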
162
-## Known Issues
163
-
164
- - [Issue 2702](https://github.com/dotcloud/docker/issues/2702):
165
-    "lxc-start: Permission denied - failed to mount" could indicate a
166
-    permissions problem with AppArmor. Please see the issue for a
167
-    workaround.
168
- - [Issue 2528](https://github.com/dotcloud/docker/issues/2528): the
169
-    busybox container is used to make the resulting container as small
170
-    and simple as possible - whenever you need to interact with the data
171
-    in the volume you mount it into another container.
172 1
deleted file mode 100644
... ...
@@ -1,251 +0,0 @@
1
-page_title: Share Images via Repositories
2
-page_description: Repositories allow users to share images.
3
-page_keywords: repo, repositories, usage, pull image, push image, image, documentation
4
-
5
-# Share Images via Repositories
6
-
7
-## Introduction
8
-
9
-Docker is not only a tool for creating and managing your own
10
-[*containers*](/terms/container/#container-def) – **Docker is also a
11
-tool for sharing**. A *repository* is a shareable collection of tagged
12
-[*images*](/terms/image/#image-def) that together create the file
13
-systems for containers. The repository's name is a label that indicates
14
-the provenance of the repository, i.e. who created it and where the
15
-original copy is located.
16
-
17
-You can find one or more repositories hosted on a *registry*. There are
18
-two types of *registry*: public and private. There's also a default
19
-*registry* that Docker uses, called
20
-[Docker.io](http://index.docker.io).
21
-[Docker.io](http://index.docker.io) is the home of "top-level"
22
-repositories and public "user" repositories.  The Docker project
23
-provides [Docker.io](http://index.docker.io) to host public and [private
24
-repositories](https://index.docker.io/plans/), namespaced by user. We
25
-provide user authentication and search over all the public repositories.
26
-
27
-Docker acts as a client for these services via the `docker search`,
28
-`pull`, `login` and `push` commands.
29
-
30
-## Repositories
31
-
32
-### Local Repositories
33
-
34
-Docker images which have been created and labeled on your local Docker
36
-server need to be pushed to a public or private registry to be shared
37
-(by default they are pushed to [Docker.io](http://index.docker.io)).
37
-
38
-### Public Repositories
39
-
40
-There are two types of public repositories: *top-level* repositories
41
-which are controlled by the Docker team, and *user* repositories created
42
-by individual contributors. Anyone can read from these repositories –
43
-they really help people get started quickly! You could also use
44
-[*Trusted Builds*](#trusted-builds) if you need to keep control of who
45
-accesses your images.
46
-
47
-- Top-level repositories can easily be recognized by **not** having a
48
-  `/` (slash) in their name. These repositories represent trusted images
49
-  provided by the Docker team.
50
-- User repositories always come in the form of `<username>/<repo_name>`.
51
-  This is what your published images will look like if you push to the
52
-  public [Docker.io](http://index.docker.io) registry.
53
-- Only the authenticated user can push to their *username* namespace on
54
-  a [Docker.io](http://index.docker.io) repository.
55
-- User images are not curated; it is therefore up to you whether or not
56
-  you trust the creator of an image.
57
-
58
-### Private repositories
59
-
60
-You can also create private repositories on
61
-[Docker.io](https://index.docker.io/plans/). These allow you to store
62
-images that you don't want to share publicly. Only authenticated users
63
-can push to private repositories.
64
-
65
-## Find Public Images on Docker.io
66
-
67
-You can search the [Docker.io](https://index.docker.io) registry or
68
-use the command line interface. Searching can find images by name,
69
-user name or description:
70
-
71
-    $ sudo docker help search
72
-    Usage: docker search NAME
73
-
74
-    Search the docker index for images
75
-
76
-      --no-trunc=false: Don't truncate output
77
-    $ sudo docker search centos
78
-    Found 25 results matching your query ("centos")
79
-    NAME                             DESCRIPTION
80
-    centos
81
-    slantview/centos-chef-solo       CentOS 6.4 with chef-solo.
82
-    ...
83
-
84
-There you can see two example results: `centos` and
85
-`slantview/centos-chef-solo`. The second result shows that it comes from
86
-the public repository of a user, `slantview/`, while the first result
87
-(`centos`) doesn't explicitly list a repository so it comes from the
88
-trusted top-level namespace. The `/` character separates a user's
89
-repository and the image name.
90
-
91
-Once you have found the image name, you can download it:
92
-
93
-    # format is "sudo docker pull <image_name>"
94
-    $ sudo docker pull centos
95
-    Pulling repository centos
96
-    539c0211cd76: Download complete
97
-
98
-What can you do with that image? Check out the
99
-[*Examples*](/examples/#example-list) and, when you're ready with your
100
-own image, come back here to learn how to share it.
101
-
102
-## Contributing to Docker.io
103
-
104
-Anyone can pull public images from the
105
-[Docker.io](http://index.docker.io) registry, but if you would like to
106
-share one of your own images, then you must register a unique user name
107
-first. You can create your username and login on
108
-[Docker.io](https://index.docker.io/account/signup/), or by running
109
-
110
-    $ sudo docker login
111
-
112
-This will prompt you for a username, which will become a public
113
-namespace for your public repositories.
114
-
115
-If your username is available then `docker` will also prompt you to
116
-enter a password and your e-mail address. It will then automatically log
117
-you in. Now you're ready to commit and push your own images!
118
-
119
-> **Note:**
120
-> Your authentication credentials will be stored in the [`.dockercfg`
121
-> authentication file](#authentication-file).
122
-
123
-## Committing a Container to a Named Image
124
-
125
-When you make changes to an existing image, those changes get saved to a
126
-container's file system. You can then promote that container to become
127
-an image by making a `commit`. In addition to converting the container
128
-to an image, this is also your opportunity to name the image,
129
-specifically a name that includes your user name from
130
-[Docker.io](http://index.docker.io) (as you did a `login` above) and a
131
-meaningful name for the image.
132
-
133
-    # format is "sudo docker commit <container_id> <username>/<imagename>"
134
-    $ sudo docker commit $CONTAINER_ID myname/kickassapp
135
-
136
-## Pushing a repository to its registry
137
-
138
-In order to push a repository to its registry you need to have named an
139
-image, or committed your container to a named image (see above).
140
-
141
-Now you can push this repository to the registry designated by its name
142
-or tag.
143
-
144
-    # format is "docker push <username>/<repo_name>"
145
-    $ sudo docker push myname/kickassapp
146
-
147
-## Trusted Builds
148
-
149
-Trusted Builds automate the building and updating of images from GitHub
150
-or BitBucket, directly on Docker.io. It works by adding a commit hook to
151
-your selected repository, triggering a build and update when you push a
152
-commit.
153
-
154
-### To set up a trusted build
155
-
156
-1.  Create a [Docker.io account](https://index.docker.io/) and login.
157
-2.  Link your GitHub or BitBucket account through the [`Link Accounts`](https://index.docker.io/account/accounts/) menu.
158
-3.  [Configure a Trusted build](https://index.docker.io/builds/).
159
-4.  Pick a GitHub or BitBucket project that has a `Dockerfile` that you want to build.
160
-5.  Pick the branch you want to build (the default is the `master` branch).
161
-6.  Give the Trusted Build a name.
162
-7.  Assign an optional Docker tag to the Build.
163
-8.  Specify where the `Dockerfile` is located. The default is `/`.
164
-
165
-Once the Trusted Build is configured it will automatically trigger a
166
-build, and in a few minutes, if there are no errors, you will see your
167
-new trusted build on the [Docker.io](https://index.docker.io) Registry.
168
-It will stay in sync with your GitHub and BitBucket repository until you
169
-deactivate the Trusted Build.
170
-
171
-If you want to see the status of your Trusted Builds you can go to your
172
-[Trusted Builds page](https://index.docker.io/builds/) on Docker.io,
173
-and it will show you the status of your builds, and the build history.
174
-
175
-Once you've created a Trusted Build you can deactivate or delete it. You
176
-cannot however push to a Trusted Build with the `docker push` command.
177
-You can only manage it by committing code to your GitHub or BitBucket
178
-repository.
179
-
180
-You can create multiple Trusted Builds per repository and configure them
181
-to point to specific `Dockerfile`s or Git branches.
182
-
183
-## Private Registry
184
-
185
-Private registries are possible by hosting [your own
186
-registry](https://github.com/dotcloud/docker-registry).
187
-
188
-> **Note**:
189
-> You can also use private repositories on
190
-> [Docker.io](https://index.docker.io/plans/).
191
-
192
-To push or pull to a repository on your own registry, you must prefix
193
-the tag with the address of the registry's host (a `.` or `:` is used to
194
-identify a host), like this:
195
-
196
-    # Tag to create a repository with the full registry location.
197
-    # The location (e.g. localhost.localdomain:5000) becomes
198
-    # a permanent part of the repository name
199
-    $ sudo docker tag 0u812deadbeef localhost.localdomain:5000/repo_name
200
-
201
-    # Push the new repository to its home location on localhost
202
-    $ sudo docker push localhost.localdomain:5000/repo_name
203
-
204
-Once a repository has your registry's host name as part of the tag, you
205
-can push and pull it like any other repository, but it will **not** be
206
-searchable (or indexed at all) on [Docker.io](http://index.docker.io),
207
-and there will be no user name checking performed. Your registry will
208
-function completely independently from the
209
-[Docker.io](http://index.docker.io) registry.
210
-
211
-<iframe width="640" height="360" src="//www.youtube.com/embed/CAewZCBT4PI?rel=0" frameborder="0" allowfullscreen></iframe>
212
-
213
-See also
214
-
215
-[Docker Blog: How to use your own registry](
216
-http://blog.docker.io/2013/07/how-to-use-your-own-registry/)
217
-
218
-## Authentication File
219
-
220
-Your authentication credentials are stored in a JSON file, `.dockercfg`, located in
221
-your home directory. It supports multiple registry URLs.
222
-
223
-The `docker login` command will create the:
224
-
225
-    https://index.docker.io/v1/
226
-
227
-key.
228
-
229
-The `docker login https://my-registry.com` command will create the:
230
-
231
-    https://my-registry.com
232
-
233
-key.
234
-
235
-For example:
236
-
237
-    {
238
-         "https://index.docker.io/v1/": {
239
-                 "auth": "xXxXxXxXxXx=",
240
-                 "email": "email@example.com"
241
-         },
242
-         "https://my-registry.com": {
243
-                 "auth": "XxXxXxXxXxX=",
244
-                 "email": "email@my-registry.com"
245
-         }
246
-    }
247
-
248
-The `auth` field represents
249
-
250
-    base64(<username>:<password>)
251
-
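For instance (using hypothetical credentials, not any real account), you can reproduce an `auth` value with the standard `base64` tool:

```shell
# Compute the .dockercfg auth value for hypothetical credentials.
# printf avoids the trailing newline that echo would add, which
# would otherwise change the encoding.
auth=$(printf '%s' 'myuser:secret' | base64)
echo "$auth"   # → bXl1c2VyOnNlY3JldA==
```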
252 1
new file mode 100644
... ...
@@ -0,0 +1,397 @@
0
+page_title: Working with Docker Images
1
+page_description: How to work with Docker images.
2
+page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, Docker images, Docker image, image management, Docker repos, Docker repositories, docker, docker tag, docker tags, Docker.io, collaboration
3
+
4
+# Working with Docker Images
5
+
6
+In the [introduction](/introduction/) we've discovered that Docker
7
+images are the basis of containers. In the
8
+[previous](/userguide/dockerizing/) [sections](/userguide/usingdocker/)
9
+we've used Docker images that already exist, for example the `ubuntu`
10
+image and the `training/webapp` image.
11
+
12
+We've also discovered that Docker stores downloaded images on the Docker
13
+host. If an image isn't already present on the host then it'll be
14
+downloaded from a registry: by default the
15
+[Docker.io](https://index.docker.io) public registry.
16
+
17
+In this section we're going to explore Docker images a bit more
18
+including:
19
+
20
+* Managing and working with images locally on your Docker host;
21
+* Creating basic images;
22
+* Uploading images to [Docker.io](https://index.docker.io).
23
+
24
+## Listing images on the host
25
+
26
+Let's start with listing the images we have locally on our host. You can
27
+do this using the `docker images` command like so:
28
+
29
+    $ sudo docker images
30
+    REPOSITORY       TAG      IMAGE ID      CREATED      VIRTUAL SIZE
31
+    training/webapp  latest   fc77f57ad303  3 weeks ago  280.5 MB
32
+    ubuntu           13.10    5e019ab7bf6d  4 weeks ago  180 MB
33
+    ubuntu           saucy    5e019ab7bf6d  4 weeks ago  180 MB
34
+    ubuntu           12.04    74fe38d11401  4 weeks ago  209.6 MB
35
+    ubuntu           precise  74fe38d11401  4 weeks ago  209.6 MB
36
+    ubuntu           12.10    a7cf8ae4e998  4 weeks ago  171.3 MB
37
+    ubuntu           quantal  a7cf8ae4e998  4 weeks ago  171.3 MB
38
+    ubuntu           14.04    99ec81b80c55  4 weeks ago  266 MB
39
+    ubuntu           latest   99ec81b80c55  4 weeks ago  266 MB
40
+    ubuntu           trusty   99ec81b80c55  4 weeks ago  266 MB
41
+    ubuntu           13.04    316b678ddf48  4 weeks ago  169.4 MB
42
+    ubuntu           raring   316b678ddf48  4 weeks ago  169.4 MB
43
+    ubuntu           10.04    3db9c44f4520  4 weeks ago  183 MB
44
+    ubuntu           lucid    3db9c44f4520  4 weeks ago  183 MB
45
+
46
+We can see the images we've previously used in our [user guide](/userguide/).
47
+Each has been downloaded from [Docker.io](https://index.docker.io) when we
48
+launched a container using that image.
49
+
50
+We can see three crucial pieces of information about our images in the listing.
51
+
52
+* What repository they came from, for example `ubuntu`.
53
+* The tags for each image, for example `14.04`.
54
+* The image ID of each image.
55
+
56
+A repository potentially holds multiple variants of an image. In the case of
57
+our `ubuntu` image we can see multiple variants covering Ubuntu 10.04, 12.04,
58
+12.10, 13.04, 13.10 and 14.04. Each variant is identified by a tag and you can
59
+refer to a tagged image like so:
60
+
61
+    ubuntu:14.04
62
+
63
+So when we run a container we refer to a tagged image like so:
64
+
65
+    $ sudo docker run -t -i ubuntu:14.04 /bin/bash
66
+
67
+If instead we wanted to run a container from the Ubuntu 12.04 image we'd use:
68
+
69
+    $ sudo docker run -t -i ubuntu:12.04 /bin/bash
70
+
71
+If you don't specify a variant, for example you just use `ubuntu`, then Docker
72
+will default to using the `ubuntu:latest` image.
73
+
74
+> **Tip:** 
75
+> We recommend you always use a specific tagged image, for example
76
+> `ubuntu:12.04`. That way you always know exactly what variant of an image is
77
+> being used.
78
+
79
+## Getting a new image
80
+
81
+So how do we get new images? Well Docker will automatically download any image
82
+we use that isn't already present on the Docker host. But this can potentially
83
+add some time to the launch of a container. If we want to pre-load an image we
84
+can download it using the `docker pull` command. Let's say we'd like to
85
+download the `centos` image.
86
+
87
+    $ sudo docker pull centos
88
+    Pulling repository centos
89
+    b7de3133ff98: Pulling dependent layers
90
+    5cc9e91966f7: Pulling fs layer
91
+    511136ea3c5a: Download complete
92
+    ef52fb1fe610: Download complete
93
+    . . .
94
+
95
+We can see that each layer of the image has been pulled down and now we
96
+can run a container from this image and we won't have to wait to
97
+download the image.
98
+
99
+    $ sudo docker run -t -i centos /bin/bash
100
+    bash-4.1#
101
+
102
+## Finding images
103
+
104
+One of the features of Docker is that a lot of people have created Docker
105
+images for a variety of purposes. Many of these have been uploaded to
106
+[Docker.io](https://index.docker.io). We can search these images on the
107
+[Docker.io](https://index.docker.io) website.
108
+
109
+![indexsearch](/userguide/search.png)
110
+
111
+We can also search for images on the command line using the `docker search`
112
+command. Let's say our team wants an image with Ruby and Sinatra installed for
113
+our web application development. We can search for a suitable image
114
+by using the `docker search` command to find all the images that contain the
115
+term `sinatra`.
116
+
117
+    $ sudo docker search sinatra
118
+    NAME                                   DESCRIPTION                                     STARS     OFFICIAL   TRUSTED
119
+    training/sinatra                       Sinatra training image                          0                    [OK]
120
+    marceldegraaf/sinatra                  Sinatra test app                                0
121
+    mattwarren/docker-sinatra-demo                                                         0                    [OK]
122
+    luisbebop/docker-sinatra-hello-world                                                   0                    [OK]
123
+    bmorearty/handson-sinatra              handson-ruby + Sinatra for Hands on with D...   0
124
+    subwiz/sinatra                                                                         0
125
+    bmorearty/sinatra                                                                      0
126
+    . . .
127
+
128
+We can see we've returned a lot of images that use the term `sinatra`. We've
129
+returned a list of image names, descriptions, Stars (which measure the social
130
+popularity of images - if a user likes an image then they can "star" it), and
131
+the Official and Trusted statuses. Official repositories are XXX and Trusted
132
+repositories are [Trusted Build](/userguide/dockerrepos/) that allow you to
133
+validate the source and content of an image.
134
+
135
+We've reviewed the images available to use and we decided to use the
136
+`training/sinatra` image. So far we've seen two types of images. There are
137
+images like `ubuntu`, which are called base or root images. These base images
138
+are provided by Docker Inc and are built, validated and supported. These can be
139
+identified by their single word names.
140
+
141
+We've also seen user images, for example the `training/sinatra` image we've
142
+chosen. A user image belongs to a member of the Docker community and is built
143
+and maintained by them.  You can identify user images as they are always
144
+prefixed with the user name, here `training`, of the user that created them.
145
+
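As a quick illustration (our own sketch, not part of the original docs), this single-word versus `<username>/<repo_name>` convention means you can tell the two image types apart just by looking for a slash:

```shell
# Classify an image name: base/root images have no slash,
# user images are of the form <username>/<repo_name>.
classify() {
  case "$1" in
    */*) echo "user" ;;
    *)   echo "base" ;;
  esac
}

classify ubuntu             # → base
classify training/sinatra   # → user
```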
146
+## Pulling our image
147
+
148
+We've identified a suitable image, `training/sinatra`, and now we can download it using the `docker pull` command.
149
+
150
+    $ sudo docker pull training/sinatra
151
+
152
+The team can now use this image to run their own containers.
153
+
154
+    $ sudo docker run -t -i training/sinatra /bin/bash
155
+    root@a8cb6ce02d85:/#
156
+
157
+## Creating our own images
158
+
159
+The team has found the `training/sinatra` image pretty useful but it's not quite what
160
+they need, so we'll make some changes to it. There are two ways we can
161
+update and create images.
162
+
163
+1. We can update a container created from an image and commit the results to an image.
164
+2. We can use a `Dockerfile` to specify instructions to create an image.
165
+
166
+### Updating and committing an image
167
+
168
+To update an image we first need to create a container from the image
169
+we'd like to update.
170
+
171
+    $ sudo docker run -t -i training/sinatra /bin/bash
172
+    root@0b2616b0e5a8:/#
173
+
174
+> **Note:** 
175
+> Take note of the container ID that has been created, `0b2616b0e5a8`, as we'll
176
+> need it in a moment.
177
+
178
+Inside our running container let's add the `json` gem.
179
+
180
+    root@0b2616b0e5a8:/# gem install json
181
+
182
+Once this has completed let's exit our container using the `exit`
183
+command.
184
+
185
+Now we have a container with the change we want to make. We can then
186
+commit a copy of this container to an image using the `docker commit`
187
+command.
188
+
189
+    $ sudo docker commit -m="Added json gem" -a="Kate Smith" \
190
+    0b2616b0e5a8 ouruser/sinatra:v2
191
+    4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c
192
+
193
+Here we've used the `docker commit` command. We've specified two flags: `-m`
194
+and `-a`. The `-m` flag allows us to specify a commit message, much like you
195
+would with a commit on a version control system. The `-a` flag allows us to
196
+specify an author for our update.
197
+
198
+We've also specified the container we want to create this new image from,
199
+`0b2616b0e5a8` (the ID we recorded earlier) and we've specified a target for
200
+the image:
201
+
202
+    ouruser/sinatra:v2
203
+
204
+Let's break this target down. It consists of a new user, `ouruser`, that we're
205
+writing this image to. We've also specified the name of the image, here we're
206
+keeping the original image name `sinatra`. Finally we're specifying a tag for
207
+the image: `v2`.
208
+
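To make the `<user>/<name>:<tag>` layout concrete, here's a small sketch (ours, not from the original docs) that splits a target with plain shell parameter expansion:

```shell
# Split an image target of the form <user>/<name>:<tag>.
target="ouruser/sinatra:v2"
user="${target%%/*}"                      # text before the first slash
name="${target#*/}"; name="${name%%:*}"   # between the slash and the colon
tag="${target##*:}"                       # text after the last colon
echo "$user $name $tag"   # → ouruser sinatra v2
```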
209
+We can then look at our new `ouruser/sinatra` image using the `docker images`
210
+command.
211
+
212
+    $ sudo docker images
213
+    REPOSITORY          TAG     IMAGE ID       CREATED       VIRTUAL SIZE
214
+    training/sinatra    latest  5bc342fa0b91   10 hours ago  446.7 MB
215
+    ouruser/sinatra     v2      3c59e02ddd1a   10 hours ago  446.7 MB
216
+    ouruser/sinatra     latest  5db5f8471261   10 hours ago  446.7 MB
217
+
218
+To use our new image to create a container we can then:
219
+
220
+    $ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash
221
+    root@78e82f680994:/#
222
+
223
+### Building an image from a `Dockerfile`
224
+
225
+Using the `docker commit` command is a pretty simple way of extending an image
226
+but it's a bit cumbersome and it's not easy to share a development process for
227
+images amongst a team. Instead we can use a new command, `docker build`, to
228
+build new images from scratch.
229
+
230
+To do this we create a `Dockerfile` that contains a set of instructions that
231
+tell Docker how to build our image.
232
+
233
+Let's create a directory and a `Dockerfile` first.
234
+
235
+    $ mkdir sinatra
236
+    $ cd sinatra
237
+    $ touch Dockerfile
238
+
239
+Each instruction creates a new layer of the image. Let's look at a simple
240
+example of building our own Sinatra image for our development team.
241
+
242
+    # This is a comment
243
+    FROM ubuntu:14.04
244
+    MAINTAINER Kate Smith <ksmith@example.com>
245
+    RUN apt-get -qq update
246
+    RUN apt-get -qqy install ruby ruby-dev
247
+    RUN gem install sinatra
248
+
249
+Let's look at what our `Dockerfile` does. Each instruction prefixes a statement and is capitalized.
250
+
251
+    INSTRUCTION statement
252
+
253
+> **Note:**
254
+> We use `#` to indicate a comment
255
+
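Since every line follows the `INSTRUCTION statement` shape, with `#` starting a comment, you can list a `Dockerfile`'s instructions with a one-liner — a small sketch of ours, not from the original docs:

```shell
# Print the instruction (first word) of every line in the example
# Dockerfile that isn't blank or a '#' comment.
instructions=$(awk '!/^#/ && NF {print $1}' <<'EOF'
# This is a comment
FROM ubuntu:14.04
MAINTAINER Kate Smith <ksmith@example.com>
RUN gem install sinatra
EOF
)
echo "$instructions"
```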
256
+The first instruction `FROM` tells Docker what the source of our image is, in
257
+this case we're basing our new image on an Ubuntu 14.04 image.
258
+
259
+Next we use the `MAINTAINER` instruction to specify who maintains our new image.
260
+
261
+Lastly, we've specified three `RUN` instructions. A `RUN` instruction executes
262
+a command inside the image, for example installing a package. Here we're
263
+updating our APT cache, installing Ruby and RubyGems and then installing the
264
+Sinatra gem.
265
+
266
+> **Note:** 
267
+> There are [a lot more instructions available to us in a Dockerfile](/reference/builder).
268
+
269
+Now let's take our `Dockerfile` and use the `docker build` command to build an image.
270
+
271
+    $ sudo docker build -t="ouruser/sinatra:v2" .
272
+    Uploading context  2.56 kB
273
+    Uploading context
274
+    Step 0 : FROM ubuntu:14.04
275
+     ---> 99ec81b80c55
276
+    Step 1 : MAINTAINER Kate Smith <ksmith@example.com>
277
+     ---> Running in 7c5664a8a0c1
278
+     ---> 2fa8ca4e2a13
279
+    Removing intermediate container 7c5664a8a0c1
280
+    Step 2 : RUN apt-get -qq update
281
+     ---> Running in b07cc3fb4256
282
+     ---> 50d21070ec0c
283
+    Removing intermediate container b07cc3fb4256
284
+    Step 3 : RUN apt-get -qqy install ruby ruby-dev
285
+     ---> Running in a5b038dd127e
286
+    Selecting previously unselected package libasan0:amd64.
287
+    (Reading database ... 11518 files and directories currently installed.)
288
+    Preparing to unpack .../libasan0_4.8.2-19ubuntu1_amd64.deb ...
289
+    . . .
290
+    Setting up ruby (1:1.9.3.4) ...
291
+    Setting up ruby1.9.1 (1.9.3.484-2ubuntu1) ...
292
+    Processing triggers for libc-bin (2.19-0ubuntu6) ...
293
+     ---> 2acb20f17878
294
+    Removing intermediate container a5b038dd127e
295
+    Step 4 : RUN gem install sinatra
296
+     ---> Running in 5e9d0065c1f7
297
+    . . .
298
+    Successfully installed rack-protection-1.5.3
299
+    Successfully installed sinatra-1.4.5
300
+    4 gems installed
301
+     ---> 324104cde6ad
302
+    Removing intermediate container 5e9d0065c1f7
303
+    Successfully built 324104cde6ad
304
+
305
+We've specified our `docker build` command and used the `-t` flag to identify
306
+our new image as belonging to the user `ouruser`, the repository name `sinatra`
307
+and given it the tag `v2`.
308
+
309
+We've also specified the location of our `Dockerfile` using the `.` to
310
+indicate a `Dockerfile` in the current directory.
311
+
312
+> **Note:**
313
+> You can also specify a path to a `Dockerfile`.
314
+
315
+Now we can see the build process at work. The first thing Docker does is
316
+upload the build context: basically the contents of the directory you're
317
+building in. This is done because the Docker daemon does the actual
318
+build of the image and it needs the local context to do it.
319
+
320
+Next we can see each instruction in the `Dockerfile` being executed
321
+step-by-step. We can see that each step creates a new container, runs
322
+the instruction inside that container and then commits that change -
323
+just like the `docker commit` work flow we saw earlier. When all the
324
+instructions have executed we're left with the `324104cde6ad` image
325
+(also helpfully tagged as `ouruser/sinatra:v2`) and all intermediate
326
+containers will get removed to clean things up.
327
+
328
+We can then create a container from our new image.
329
+
330
+    $ sudo docker run -t -i ouruser/sinatra /bin/bash
331
+    root@8196968dac35:/#
332
+
333
+> **Note:** 
334
+> This is just the briefest introduction to creating images. We've
335
+> skipped a whole bunch of other instructions that you can use. We'll see more of
336
+> those instructions in later sections of the Guide or you can refer to the
337
+> [`Dockerfile`](/reference/builder/) reference for a
338
+> detailed description and examples of every instruction.
339
+
340
+## Setting tags on an image
341
+
342
+You can also add a tag to an existing image after you commit or build it. We
343
+can do this using the `docker tag` command. Let's add a new tag to our
344
+`ouruser/sinatra` image.
345
+
346
+    $ sudo docker tag 5db5f8471261 ouruser/sinatra:devel
347
+
348
+The `docker tag` command takes the ID of the image, here `5db5f8471261`, and our
349
+user name, the repository name and the new tag.
350
+
351
+Let's see our new tag using the `docker images` command.
352
+
353
+    $ sudo docker images ouruser/sinatra
354
+    REPOSITORY          TAG     IMAGE ID      CREATED        VIRTUAL SIZE
355
+    ouruser/sinatra     latest  5db5f8471261  11 hours ago   446.7 MB
356
+    ouruser/sinatra     devel   5db5f8471261  11 hours ago   446.7 MB
357
+    ouruser/sinatra     v2      5db5f8471261  11 hours ago   446.7 MB
358
+
359
+## Push an image to Docker.io
360
+
361
+Once you've built or created a new image you can push it to [Docker.io](
362
+https://index.docker.io) using the `docker push` command. This allows you to
363
+share it with others publicly, or push it into [a private
364
+repository](https://index.docker.io/plans/).
365
+
366
+    $ sudo docker push ouruser/sinatra
367
+    The push refers to a repository [ouruser/sinatra] (len: 1)
368
+    Sending image list
369
+    Pushing repository ouruser/sinatra (3 tags)
370
+    . . .
371
+
372
+## Remove an image from the host
373
+
374
+You can also remove images on your Docker host in a way [similar to
375
+containers](/userguide/usingdocker) using the `docker rmi` command.
377
+
378
+Let's delete the `training/sinatra` image as we don't need it anymore.
379
+
380
+    $ sudo docker rmi training/sinatra
381
+    Untagged: training/sinatra:latest
382
+    Deleted: 5bc342fa0b91cabf65246837015197eecfa24b2213ed6a51a8974ae250fedd8d
383
+    Deleted: ed0fffdcdae5eb2c3a55549857a8be7fc8bc4241fb19ad714364cbfd7a56b22f
384
+    Deleted: 5c58979d73ae448df5af1d8142436d81116187a7633082650549c52c3a2418f0
385
+
386
+> **Note:** In order to remove an image from the host, please make sure
387
+> that there are no containers actively based on it.
388
+
389
+## Next steps
390
+
391
+Until now we've seen how to build individual applications inside Docker
392
+containers. Now learn how to build whole application stacks with Docker
393
+by linking together multiple Docker containers.
394
+
395
+Go to [Linking Containers Together](/userguide/dockerlinks).
396
+
0 397
new file mode 100644
... ...
@@ -0,0 +1,73 @@
0
+page_title: Getting started with Docker.io
1
+page_description: Introductory guide to getting an account on Docker.io
2
+page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, central service, services, how to, container, containers, automation, collaboration, collaborators, registry, repo, repository, technology, github webhooks, trusted builds
3
+
4
+# Getting Started with Docker.io
5
+
6
+*How do I use Docker.io?*
7
+
8
+In this section we're going to quickly introduce you to
9
+[Docker.io](https://index.docker.io) and create an account.
10
+
11
+[Docker.io](https://www.docker.io) is the central hub for Docker. It
12
+helps you to manage Docker and its components. It provides services such
13
+as:
14
+
15
+* Hosting images.
16
+* User authentication.
17
+* Automated image builds and work flow tools like build triggers and web
18
+  hooks.
19
+* Integration with GitHub and BitBucket.
20
+
21
+Docker.io helps you collaborate with colleagues and get the most out of
22
+Docker.
23
+
24
+In order to use Docker.io you will need to register an account. Don't
25
+panic! It's totally free and really easy.
26
+
27
+## Creating a Docker.io Account
28
+
29
+There are two ways you can create a Docker.io account:
30
+
31
+* Via the web, or
32
+* Via the command line.
33
+
34
+### Sign up via the web!
35
+
36
+Fill in the [sign-up form](https://www.docker.io/account/signup/),
37
+choose your user name and specify some details such as an email address.
38
+
39
+![Register using the sign-up page](/userguide/register-web.png)
40
+
41
+### Sign up via the command line
42
+
43
+You can also create a Docker.io account via the command line using the
44
+`docker login` command.
45
+
46
+    $ sudo docker login
47
+
48
+### Confirm your email
49
+
50
+Once you've filled in the form, check your email for a welcome
51
+message and activate your account.
52
+
53
+![Confirm your registration](/userguide/register-confirm.png)
54
+
55
+### Login!
56
+
57
+Then you can log in using the web console:
58
+
59
+![Login using the web console](/userguide/login-web.png)
60
+
61
+Or via the command line and the `docker login` command:
62
+
63
+    $ sudo docker login
64
+
65
+Now your Docker.io account is active and ready for you to use!
66
+
67
+## Next steps
68
+
69
+Now let's start Dockerizing applications with our "Hello World!" exercise.
70
+
71
+Go to [Dockerizing Applications](/userguide/dockerizing).
72
+
0 73
new file mode 100644
... ...
@@ -0,0 +1,193 @@
0
+page_title: Dockerizing Applications: A "Hello World!"
1
+page_description: A simple "Hello World!" exercise that introduces you to Docker.
2
+page_keywords: docker guide, docker, docker platform, virtualization framework, how to, dockerize, dockerizing apps, dockerizing applications, container, containers
3
+
4
+# Dockerizing Applications: A "Hello World!"
5
+
6
+*So what's this Docker thing all about?*
7
+
8
+Docker allows you to run applications inside containers. Running an
9
+application inside a container takes a single command: `docker run`.
10
+
11
+## Hello World!
12
+
13
+Let's try it now.
14
+
15
+    $ sudo docker run ubuntu:14.04 /bin/echo "Hello World!"
16
+    Hello World!
17
+
18
+And you just launched your first container!
19
+
20
+So what just happened? Let's step through what the `docker run` command
21
+did.
22
+
23
+First we specified the `docker` binary and the command we wanted to
24
+execute, `run`. The `docker run` combination *runs* containers.
25
+
26
+Next we specified an image: `ubuntu:14.04`. This is the source of the container
27
+we ran. Docker calls this an image. In this case we used an Ubuntu 14.04
28
+operating system image.
29
+
30
+When you specify an image, Docker looks first for the image on your
31
+Docker host. If it can't find it then it downloads the image from the public
32
+image registry: [Docker.io](https://index.docker.io).
33
+
34
+Next we told Docker what command to run inside our new container:
35
+
36
+    /bin/echo "Hello World!"
37
+
38
+When our container was launched Docker created a new Ubuntu 14.04
39
+environment and then executed the `/bin/echo` command inside it. We saw
40
+the result on the command line:
41
+
42
+    Hello World!
43
+
44
+So what happened to our container after that? Well Docker containers
45
+only run as long as the command you specify is active. Here, as soon as
46
+`Hello World!` was echoed, the container stopped.
47
+
48
+## An Interactive Container
49
+
50
+Let's try the `docker run` command again, this time specifying a new
51
+command to run in our container.
52
+
53
+    $ sudo docker run -t -i ubuntu:14.04 /bin/bash
54
+    root@af8bae53bdd3:/#
55
+
56
+Here we've again specified the `docker run` command and launched an
57
+`ubuntu:14.04` image. But we've also passed in two flags: `-t` and `-i`.
58
+The `-t` flag assigns a pseudo-tty or terminal inside our new container
59
+and the `-i` flag allows us to make an interactive connection by
60
+grabbing the standard input (`STDIN`) of the container.
61
+
62
+We've also specified a new command for our container to run:
63
+`/bin/bash`. This will launch a Bash shell inside our container.
64
+
65
+So now when our container is launched we can see that we've got a
66
+command prompt inside it:
67
+
68
+    root@af8bae53bdd3:/#
69
+
70
+Let's try running some commands inside our container:
71
+
72
+    root@af8bae53bdd3:/# pwd
73
+    /
74
+    root@af8bae53bdd3:/# ls
75
+    bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
76
+
77
+You can see we've run the `pwd` command to show our current directory and can
78
+see we're in the `/` root directory. We've also done a directory listing
79
+of the root directory which shows us what looks like a typical Linux
80
+file system.
81
+
82
+You can play around inside this container and when you're done you can
83
+use the `exit` command to finish.
84
+
85
+    root@af8bae53bdd3:/# exit
86
+
87
+As with our previous container, once the Bash shell process has
88
+finished, the container is stopped.
89
+
90
+## A Daemonized Hello World!
91
+
92
+Now a container that runs a command and then exits has some uses but
93
+it's not overly helpful. Let's create a container that runs as a daemon,
94
+like most of the applications we're probably going to run with Docker.
95
+
96
+Again we can do this with the `docker run` command:
97
+
98
+    $ sudo docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
99
+    1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
100
+
101
+Wait, what? Where's our "Hello World!" output? Let's look at what we've run here.
102
+It should look pretty familiar. We ran `docker run` but this time we
103
+specified a flag: `-d`. The `-d` flag tells Docker to run the container
104
+and put it in the background, to daemonize it.
105
+
106
+We also specified the same image: `ubuntu:14.04`.
107
+
108
+Finally, we specified a command to run:
109
+
110
+    /bin/sh -c "while true; do echo hello world; sleep 1; done"
111
+
112
+This is the (hello) world's silliest daemon: a shell script that echoes
113
+`hello world` forever.
114
+
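To see what that command does in isolation, here is a sketch of the same loop run directly in a shell, bounded to three iterations instead of running forever (the `sleep` between echoes is omitted to keep the sketch instant):

```shell
# A bounded sketch of the daemon's loop: the same shell construct as the
# docker run command above, but with three iterations instead of forever.
count=0
while [ "$count" -lt 3 ]; do
  echo "hello world"
  count=$((count + 1))
  # sleep 1   # the real daemon sleeps between echoes; omitted here
done
```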
115
+So why aren't we seeing any `hello world`'s? Instead Docker has returned
116
+a really long string:
117
+
118
+    1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
119
+
120
+This really long string is called a *container ID*. It uniquely
121
+identifies a container so we can work with it.
122
+
123
+> **Note:** 
124
+> The container ID is a bit long and unwieldy. A little later
125
+> on we'll see a shorter ID and some ways to name our containers to make
126
+> working with them easier.
127
+
128
+We can use this container ID to see what's happening with our `hello
129
+world` daemon.
130
+
131
+Firstly let's make sure our container is running. We can
132
+do that with the `docker ps` command. The `docker ps` command queries
133
+the Docker daemon for information about all the containers it knows
134
+about.
135
+
136
+    $ docker ps
137
+    CONTAINER ID  IMAGE         COMMAND               CREATED        STATUS       PORTS NAMES
138
+    1e5535038e28  ubuntu:14.04  /bin/sh -c 'while tr  2 minutes ago  Up 1 minute        insane_babbage
139
+
140
+Here we can see our daemonized container. The `docker ps` command has returned some useful
141
+information about it, starting with a shorter variant of its container ID:
142
+`1e5535038e28`.
143
+
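The shorter variant is simply the first 12 characters of the full 64-character container ID, as this small sketch shows (IDs taken from the output above):

```shell
# The short ID shown by docker ps is the first 12 characters of the
# full container ID returned by docker run.
full_id="1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147"
short_id=$(printf '%s' "$full_id" | cut -c1-12)
echo "$short_id"
```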
144
+We can also see the image we used to build it, `ubuntu:14.04`, the command it
145
+is running, its status and an automatically assigned name,
146
+`insane_babbage`. 
147
+
148
+> **Note:** 
149
+> Docker automatically names any containers you start. A
150
+> little later on we'll see how you can specify your own names.
151
+
152
+Okay, so we now know it's running. But is it doing what we asked it to do? To see this
153
+we're going to look inside the container using the `docker logs`
154
+command. Let's use the container name Docker assigned.
155
+
156
+    $ sudo docker logs insane_babbage
157
+    hello world
158
+    hello world
159
+    hello world
160
+    . . .
161
+
162
+The `docker logs` command looks inside the container and returns its standard
163
+output: in this case the output of our command `hello world`.
164
+
165
+Awesome! Our daemon is working and we've just created our first
166
+Dockerized application!
167
+
168
+Now that we've established we can create our own containers, let's tidy up
169
+after ourselves and stop our daemonized container. To do this we use the
170
+`docker stop` command.
171
+
172
+    $ sudo docker stop insane_babbage
173
+    insane_babbage
174
+
175
+The `docker stop` command tells Docker to politely stop the running
176
+container. If it succeeds it will return the name of the container it
177
+has just stopped.
178
+
179
+Let's check it worked with the `docker ps` command.
180
+
181
+    $ docker ps
182
+    CONTAINER ID  IMAGE         COMMAND               CREATED        STATUS       PORTS NAMES
183
+
184
+Excellent. Our container has been stopped.
185
+
186
+## Next steps
187
+
188
+Now that we've seen how simple it is to get started with Docker, let's learn how to
189
+do some more advanced tasks.
190
+
191
+Go to [Working With Containers](/userguide/usingdocker).
192
+
0 193
new file mode 100644
... ...
@@ -0,0 +1,241 @@
0
+page_title: Linking Containers Together
1
+page_description: Learn how to connect Docker containers together.
2
+page_keywords: Examples, Usage, user guide, links, linking, docker, documentation, examples, names, name, container naming, port, map, network port, network
3
+
4
+# Linking Containers Together
5
+
6
+In [the Using Docker section](/userguide/usingdocker) we touched on
7
+connecting to a service running inside a Docker container via a network
8
+port. This is one of the ways that you can interact with services and
9
+applications running inside Docker containers. In this section we're
10
+going to give you a refresher on connecting to a Docker container via a
11
+network port as well as introduce you to the concepts of container
12
+linking.
13
+
14
+## Network port mapping refresher
15
+
16
+In [the Using Docker section](/userguide/usingdocker) we created a
17
+container that ran a Python Flask application.
18
+
19
+    $ sudo docker run -d -P training/webapp python app.py
20
+
21
+> **Note:** 
22
+> Containers have an internal network and an IP address
23
+> (remember we used the `docker inspect` command to show the container's
24
+> IP address in the [Using Docker](/userguide/usingdocker/) section).
25
+> Docker can have a variety of network configurations. You can see more
26
+> information on Docker networking [here](/articles/networking/).
27
+
28
+When we created that container we used the `-P` flag to automatically map any
29
+network ports inside that container to a random high port from the range 49000
30
+to 49900 on our Docker host.  When we subsequently ran `docker ps` we saw that
31
+port 5000 was bound to port 49155.
32
+
33
+    $ sudo docker ps nostalgic_morse
34
+    CONTAINER ID  IMAGE                   COMMAND       CREATED        STATUS        PORTS                    NAMES
35
+    bc533791f3f5  training/webapp:latest  python app.py 5 seconds ago  Up 2 seconds  0.0.0.0:49155->5000/tcp  nostalgic_morse
36
+
37
+We also saw how we can bind a container's ports to a specific port using
38
+the `-p` flag.
39
+
40
+    $ sudo docker run -d -p 5000:5000 training/webapp python app.py
41
+
42
+And we saw why this isn't such a great idea because it constrains us to
43
+only one container on that specific port.
44
+
45
+There are also a few other ways we can configure the `-p` flag. By
46
+default the `-p` flag will bind the specified port to all interfaces on
47
+the host machine. But we can also specify a binding to a specific
48
+interface, for example only to the `localhost`.
49
+
50
+    $ sudo docker run -d -p 127.0.0.1:5000:5000 training/webapp python app.py
51
+
52
+This would bind port 5000 inside the container to port 5000 on the
53
+`localhost` or `127.0.0.1` interface on the host machine.
54
+
55
+Or to bind port 5000 of the container to a dynamic port but only on the
56
+`localhost` we could:
57
+
58
+    $ sudo docker run -d -p 127.0.0.1::5000 training/webapp python app.py
59
+
60
+We can also bind UDP ports by adding a trailing `/udp`, for example:
61
+
62
+    $ sudo docker run -d -p 127.0.0.1:5000:5000/udp training/webapp python app.py
63
+
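Taken together, the `-p` values above follow the pattern `ip:hostPort:containerPort`, optionally with a trailing `/udp`. As a hedged sketch (handling only the fully specified form used in the examples), a spec can be broken apart like this:

```shell
# A sketch parsing one fully specified -p port spec of the form
# ip:hostPort:containerPort[/udp], as used in the examples above.
spec="127.0.0.1:5000:5000/udp"
proto="tcp"
case "$spec" in
  */udp) proto="udp"; spec="${spec%/udp}" ;;
esac
ip="${spec%%:*}"                 # 127.0.0.1
rest="${spec#*:}"                # 5000:5000
host_port="${rest%%:*}"          # 5000
container_port="${rest##*:}"     # 5000
echo "$ip $host_port $container_port $proto"
```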
64
+We also saw the useful `docker port` shortcut which showed us the
65
+current port bindings. This is also useful for showing us specific port
66
+configurations. For example if we've bound the container port to the
67
+`localhost` on the host machine this will be shown in the `docker port`
68
+output.
69
+
70
+    $ docker port nostalgic_morse 5000
71
+    127.0.0.1:49155
72
+
73
+> **Note:** 
74
+> The `-p` flag can be used multiple times to configure multiple ports.
75
+
76
+## Docker Container Linking
77
+
78
+Network port mappings are not the only way Docker containers can connect
79
+to one another. Docker also has a linking system that allows you to link
80
+multiple containers together and share connection information between
81
+them. Docker linking will create a parent-child relationship where the
82
+child container can see selected information about its parent.
83
+
84
+## Container naming
85
+
86
+To perform this linking Docker relies on the names of your containers.
87
+We've already seen that each container we create has an automatically
88
+created name; indeed we've become familiar with our old friend
89
+`nostalgic_morse` during this guide. You can also name containers
90
+yourself. This naming provides two useful functions:
91
+
92
+1. It's useful to name containers that do specific functions in a way
93
+   that makes it easier for you to remember them, for example naming a
94
+   container with a web application in it `web`.
95
+
96
+2. It provides Docker with a reference point that allows it to refer to other
97
+   containers, for example link container `web` to container `db`.
98
+
99
+You can name your container by using the `--name` flag, for example:
100
+
101
+    $ sudo docker run -d -P --name web training/webapp python app.py
102
+
103
+You can see we've launched a new container and used the `--name` flag to
104
+call the container `web`. We can see the container's name using the
105
+`docker ps` command.
106
+
107
+    $ sudo docker ps -l
108
+    CONTAINER ID  IMAGE                  COMMAND        CREATED       STATUS       PORTS                    NAMES
109
+    aed84ee21bde  training/webapp:latest python app.py  12 hours ago  Up 2 seconds 0.0.0.0:49154->5000/tcp  web
110
+
111
+We can also use `docker inspect` to return the container's name.
112
+
113
+    $ sudo docker inspect -f "{{ .Name }}" aed84ee21bde
114
+    /web
115
+
116
+> **Note:** 
117
+> Container names have to be unique. That means you can only call
118
+> one container `web`. If you want to re-use a container name you must delete the
119
+> old container with the `docker rm` command before you can create a new
120
+> container with the same name. As an alternative you can use the `--rm`
121
+> flag with the `docker run` command. This will delete the container
122
+> immediately after it stops.
123
+
124
+## Container Linking
125
+
126
+Links allow containers to discover and securely communicate with each
127
+other. To create a link you use the `--link` flag. Let's create a new
128
+container, this one a database.
129
+
130
+    $ sudo docker run -d --name db training/postgres
131
+
132
+Here we've created a new container called `db` using the `training/postgres`
133
+image, which contains a PostgreSQL database.
134
+
135
+Now let's create a new `web` container and link it with our `db` container.
136
+
137
+    $ sudo docker run -d -P --name web --link db:db training/webapp python app.py
138
+
139
+This will link the new `web` container with the `db` container we created
140
+earlier. The `--link` flag takes the form:
141
+
142
+    --link name:alias
143
+
144
+Where `name` is the name of the container we're linking to and `alias` is an
145
+alias for the link name. We'll see how that alias gets used shortly.
146
+
147
+Let's look at our linked containers using `docker ps`.
148
+
149
+    $ docker ps
150
+    CONTAINER ID  IMAGE                     COMMAND               CREATED             STATUS             PORTS                    NAMES
151
+    349169744e49  training/postgres:latest  su postgres -c '/usr  About a minute ago  Up About a minute  5432/tcp                 db
152
+    aed84ee21bde  training/webapp:latest    python app.py         16 hours ago        Up 2 minutes       0.0.0.0:49154->5000/tcp  db/web,web
153
+
154
+We can see our named containers, `db` and `web`, and we can see that the `web`
155
+container also shows `db/web` in the `NAMES` column. This tells us that the
156
+`web` container is linked to the `db` container in a parent/child relationship.
157
+
158
+So what does linking the containers do? Well we've discovered the link creates
159
+a parent-child relationship between the two containers. The child container,
160
+here `web`, can access information about the parent container `db`. To do this
161
+Docker creates a secure tunnel between the containers without the need to
162
+expose any ports externally on the container. You'll note when we started the
163
+`db` container we did not use either of the `-P` or `-p` flags. As we're
164
+linking the containers we don't need to expose the PostgreSQL database via the
165
+network.
166
+
167
+Docker exposes connectivity information for the parent container inside the
168
+child container in two ways:
169
+
170
+* Environment variables,
171
+* Updating the `/etc/hosts` file.
172
+
173
+Let's look first at the environment variables Docker sets. Inside the `web`
174
+container let's run the `env` command to list the container's environment
175
+variables.
176
+
177
+    root@aed84ee21bde:/opt/webapp# env
178
+    HOSTNAME=aed84ee21bde
179
+    . . .
180
+    DB_NAME=/web/db
181
+    DB_PORT=tcp://172.17.0.5:5432
182
+    DB_PORT_5000_TCP=tcp://172.17.0.5:5432
183
+    DB_PORT_5000_TCP_PROTO=tcp
184
+    DB_PORT_5000_TCP_PORT=5432
185
+    DB_PORT_5000_TCP_ADDR=172.17.0.5
186
+    . . .
187
+
188
+> **Note**:
189
+> These environment variables are only set for the first process in the
190
+> container. Similarly, some daemons (such as `sshd`)
191
+> will scrub them when spawning shells for connection.
192
+
193
+We can see that Docker has created a series of environment variables with
194
+useful information about our `db` container. Each variable is prefixed with
195
+`DB_`, which is populated from the `alias` we specified above. If our `alias`
196
+were `db1` the variables would be prefixed with `DB1_`. You can use these
197
+environment variables to configure your applications to connect to the database
198
+on the `db` container. The connection will be secure, private and only the
199
+linked `web` container will be able to talk to the `db` container.
200
+
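As a hedged sketch of how that might look in practice, an application inside the `web` container could split the `DB_PORT` variable into a host and port using plain shell parameter expansion (the value below mirrors the `env` output above; inside a real linked container Docker sets it for you):

```shell
# Parse a DB_PORT-style variable (tcp://host:port) into its parts.
# The value is a stand-in copied from the example env output above.
DB_PORT="tcp://172.17.0.5:5432"
hostport="${DB_PORT#tcp://}"   # 172.17.0.5:5432
db_host="${hostport%%:*}"      # 172.17.0.5
db_port="${hostport##*:}"      # 5432
echo "database at ${db_host}:${db_port}"
```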
201
+In addition to the environment variables Docker adds a host entry for the
202
+linked parent to the `/etc/hosts` file. Let's look at this file on the `web`
203
+container now.
204
+
205
+    root@aed84ee21bde:/opt/webapp# cat /etc/hosts
206
+    172.17.0.7  aed84ee21bde
207
+    . . .
208
+    172.17.0.5  db
209
+
210
+We can see two relevant host entries. The first is an entry for the `web`
211
+container that uses the Container ID as a host name. The second entry uses the
212
+link alias to reference the IP address of the `db` container. Let's try to ping
213
+that host now via this host name.
214
+
215
+    root@aed84ee21bde:/opt/webapp# apt-get install -yqq inetutils-ping
216
+    root@aed84ee21bde:/opt/webapp# ping db
217
+    PING db (172.17.0.5): 48 data bytes
218
+    56 bytes from 172.17.0.5: icmp_seq=0 ttl=64 time=0.267 ms
219
+    56 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.250 ms
220
+    56 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.256 ms
221
+
222
+> **Note:** 
223
+> We had to install `ping` because our container didn't have it.
224
+
225
+We've used the `ping` command to ping the `db` container using its host entry
226
+which resolves to `172.17.0.5`. We can make use of this host entry to configure
227
+an application to make use of our `db` container.
228
+
229
+> **Note:** 
230
+> You can link multiple child containers to a single parent. For
231
+> example, we could have multiple web containers attached to our `db`
232
+> container.
233
+
234
+## Next steps
235
+
236
+Now that we know how to link Docker containers together, the next step is
237
+learning how to manage data, volumes and mounts inside our containers.
238
+
239
+Go to [Managing Data in Containers](/userguide/dockervolumes).
240
+
0 241
new file mode 100644
... ...
@@ -0,0 +1,176 @@
0
+page_title: Working with Docker.io
1
+page_description: Learning how to use Docker.io to manage images and work flow
2
+page_keywords: repo, Docker.io, Docker Hub, registry, index, repositories, usage, pull image, push image, image, documentation
3
+
4
+# Working with Docker.io
5
+
6
+So far we've seen a lot about how to use Docker on the command line and
7
+your local host. We've seen [how to pull down
8
+images](/userguide/usingdocker/) that you can run your containers from
9
+and we've seen how to [create your own images](/userguide/dockerimages).
10
+
11
+Now we're going to learn a bit more about
12
+[Docker.io](https://index.docker.io) and how you can use it to enhance
13
+your Docker work flows.
14
+
15
+[Docker.io](https://index.docker.io) is the public registry that Docker
16
+Inc. maintains. It contains a huge collection of images, over 15,000,
17
+that you can download and use to build your containers. It also provides
18
+authentication, structure (you can set up teams and organizations), work
19
+flow tools like webhooks and build triggers as well as privacy features
20
+like private repositories for storing images you don't want to publicly
21
+share.
22
+
23
+## Docker commands and Docker.io
24
+
25
+Docker acts as a client for these services via the `docker search`,
26
+`pull`, `login` and `push` commands.
27
+
28
+## Searching for images
29
+
30
+As we've already seen we can search the
31
+[Docker.io](https://index.docker.io) registry via its search interface
32
+or using the command line interface. Searching can find images by name,
33
+user name or description:
34
+
35
+    $ sudo docker search centos
36
+    NAME           DESCRIPTION                                     STARS     OFFICIAL   TRUSTED
37
+    centos         Official CentOS 6 Image as of 12 April 2014     88
38
+    tianon/centos  CentOS 5 and 6, created using rinse instea...   21
39
+    ...
40
+
41
+There you can see two example results: `centos` and
42
+`tianon/centos`. The second result shows that it comes from
43
+the public repository of a user, `tianon/`, while the first result,
44
+`centos`, doesn't explicitly list a repository so it comes from the
45
+trusted top-level namespace. The `/` character separates a user's
46
+repository and the image name.
47
+
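That naming rule can be sketched in a couple of lines of shell (a hypothetical helper, not part of the `docker` CLI):

```shell
# A sketch of the image naming rule: a / separates the user name from the
# repository, while names with no / come from the top-level namespace.
parse_image() {
  case "$1" in
    */*) echo "user=${1%%/*} repo=${1#*/}" ;;
    *)   echo "user=(top-level) repo=$1" ;;
  esac
}
parse_image "tianon/centos"
parse_image "centos"
```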
48
+Once you have found the image you want, you can download it:
49
+
50
+    $ sudo docker pull centos
51
+    Pulling repository centos
52
+    0b443ba03958: Download complete
53
+    539c0211cd76: Download complete
54
+    511136ea3c5a: Download complete
55
+    7064731afe90: Download complete
56
+
57
+The image is now available to run a container from.
58
+
59
+## Contributing to Docker.io
60
+
61
+Anyone can pull public images from the [Docker.io](http://index.docker.io)
62
+registry, but if you would like to share your own images, then you must
63
+register a user first as we saw in the [first section of the Docker User
64
+Guide](/userguide/dockerio/).
65
+
66
+To refresh your memory, you can create your user name and log in at
67
+[Docker.io](https://index.docker.io/account/signup/), or by running:
68
+
69
+    $ sudo docker login
70
+
71
+This will prompt you for a user name, which will become a public
72
+namespace for your public repositories, for example:
73
+
74
+    training/webapp
75
+
76
+Here `training` is the user name and `webapp` is a repository owned by
77
+that user.
78
+
79
+If your user name is available then `docker` will also prompt you to
80
+enter a password and your e-mail address. It will then automatically log
81
+you in. Now you're ready to commit and push your own images!
82
+
83
+> **Note:**
84
+> Your authentication credentials will be stored in the [`.dockercfg`
85
+> authentication file](#authentication-file) in your home directory.
86
+
87
+## Pushing a repository to Docker.io
88
+
89
+In order to push a repository to its registry you need to have named an image,
90
+or committed your container to a named image as we saw
91
+[here](/userguide/dockerimages).
92
+
93
+Now you can push this repository to the registry designated by its name
94
+or tag.
95
+
96
+    $ sudo docker push yourname/newimage
97
+
98
+The image will then be uploaded and available for use.
99
+
100
+## Features of Docker.io
101
+
102
+Now let's look at some of the features of Docker.io. You can find more
103
+information [here](/docker-io/).
104
+
105
+* Private repositories
106
+* Organizations and teams
107
+* Automated Builds
108
+* Webhooks
109
+
110
+## Private Repositories
111
+
112
+Sometimes you have images you don't want to make public and share with
113
+everyone. So Docker.io allows you to have private repositories. You can
114
+sign up for a plan [here](https://index.docker.io/plans/).
115
+
116
+## Organizations and teams
117
+
118
+One of the useful aspects of private repositories is that you can share
119
+them only with members of your organization or team. Docker.io lets you
120
+create organizations where you can collaborate with your colleagues and
121
+manage private repositories. You can create and manage an organization
122
+[here](https://index.docker.io/account/organizations/).
123
+
124
+## Automated Builds
125
+
126
+Automated Builds automate the building and updating of images from [GitHub](https://www.github.com)
127
+or [BitBucket](http://bitbucket.com), directly on Docker.io. It works by adding a commit hook to
128
+your selected GitHub or BitBucket repository, triggering a build and update when you push a
129
+commit.
130
+
131
+### To setup an Automated Build
132
+
133
+1.  Create a [Docker.io account](https://index.docker.io/) and login.
134
+2.  Link your GitHub or BitBucket account through the [`Link Accounts`](https://index.docker.io/account/accounts/) menu.
135
+3.  [Configure an Automated Build](https://index.docker.io/builds/).
136
+4.  Pick a GitHub or BitBucket project that has a `Dockerfile` that you want to build.
137
+5.  Pick the branch you want to build (the default is the `master` branch).
138
+6.  Give the Automated Build a name.
139
+7.  Assign an optional Docker tag to the Build.
140
+8.  Specify where the `Dockerfile` is located. The default is `/`.
141
+
142
+Once the Automated Build is configured it will automatically trigger a
143
+build, and in a few minutes, if there are no errors, you will see your
144
+new Automated Build on the [Docker.io](https://index.docker.io) Registry.
145
+It will stay in sync with your GitHub and BitBucket repository until you
146
+deactivate the Automated Build.
147
+
148
+If you want to see the status of your Automated Builds you can go to your
149
+[Automated Builds page](https://index.docker.io/builds/) on Docker.io,
150
+and it will show you the status of your builds, and the build history.
151
+
152
+Once you've created an Automated Build you can deactivate or delete it. You
153
+cannot however push to an Automated Build with the `docker push` command.
154
+You can only manage it by committing code to your GitHub or BitBucket
155
+repository.
156
+
157
+You can create multiple Automated Builds per repository and configure them
158
+to point to specific `Dockerfile`s or Git branches.
159
+
160
+### Build Triggers
161
+
162
+Automated Builds can also be triggered via a URL on Docker.io. This
163
+allows you to rebuild an Automated Build image on demand.
164
+
165
+## Webhooks
166
+
167
+Webhooks are attached to your repositories and allow you to trigger an
168
+event when an image or updated image is pushed to the repository. With
169
+a webhook you can specify a target URL and a JSON payload will be
170
+delivered when the image is pushed.
171
+
172
+## Next steps
173
+
174
+Go and use Docker!
175
+
0 176
new file mode 100644
... ...
@@ -0,0 +1,142 @@
0
+page_title: Managing Data in Containers
1
+page_description: How to manage data inside your Docker containers.
2
+page_keywords: Examples, Usage, volume, docker, documentation, user guide, data, volumes
3
+
4
+# Managing Data in Containers
5
+
6
+So far we've been introduced to some [basic Docker
7
+concepts](/userguide/usingdocker/), seen how to work with [Docker
8
+images](/userguide/dockerimages/) as well as learned about [networking
9
+and links between containers](/userguide/dockerlinks/). In this section
10
+we're going to discuss how you can manage data inside and between your
11
+Docker containers.
12
+
13
+We're going to look at the two primary ways you can manage data in
14
+Docker.
15
+
16
+* Data volumes, and
17
+* Data volume containers.
18
+
19
+## Data volumes
20
+
21
+A *data volume* is a specially-designated directory within one or more
22
+containers that bypasses the [*Union File
23
+System*](/terms/layer/#ufs-def) to provide several useful features for
24
+persistent or shared data:
25
+
26
+- Data volumes can be shared and reused between containers
27
+- Changes to a data volume are made directly
28
+- Changes to a data volume will not be included when you update an image
29
+- Volumes persist until no containers use them
30
+
31
+### Adding a data volume
32
+
33
+You can add a data volume to a container using the `-v` flag with the
34
+`docker run` command. You can use the `-v` flag multiple times in a single
35
+`docker run` to mount multiple data volumes. Let's mount a single volume
36
+now in our web application container.
37
+
38
+    $ sudo docker run -d -P --name web -v /webapp training/webapp python app.py
39
+
40
+This will create a new volume inside a container at `/webapp`.
41
+
42
+> **Note:** 
43
+> You can also use the `VOLUME` instruction in a `Dockerfile` to add one or
44
+> more new volumes to any container created from that image.
45
+
46
+### Mount a Host Directory as a Data Volume
47
+
48
+In addition to creating a volume using the `-v` flag you can also mount a
49
+directory from your own host into a container.
50
+
51
+    $ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
52
+
53
+This will mount the local directory, `/src/webapp`, into the container as the
54
+`/opt/webapp` directory. This is very useful for testing; for example, we can
55
+mount our source code inside the container and see our application at work as
56
+we change the source code. The directory on the host must be specified as an
57
+absolute path and if the directory doesn't exist Docker will automatically
58
+create it for you.
59
+
60
+> **Note:** 
61
+> This is not available from a `Dockerfile` because of the portability
62
+> and sharing purpose of images. As the host directory is, by its nature,
63
+> host-dependent, it might not work on all hosts.
64
+
65
+Docker defaults to a read-write volume but we can also mount a directory
66
+read-only.
67
+
68
+    $ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py
69
+
70
+Here we've mounted the same `/src/webapp` directory but we've added the `ro`
71
+option to specify that the mount should be read-only.
72
+
73
+## Creating and mounting a Data Volume Container
74
+
75
+If you have some persistent data that you want to share between
76
+containers, or want to use from non-persistent containers, it's best to
77
+create a named Data Volume Container, and then to mount the data from
78
+it.
79
+
80
+Let's create a new named container with a volume to share.
81
+
82
+    $ docker run -d -v /dbdata --name dbdata training/postgres
83
+
84
+You can then use the `--volumes-from` flag to mount the `/dbdata` volume in another container.
85
+
86
+    $ docker run -d --volumes-from dbdata --name db1 training/postgres
87
+
88
+And another:
89
+
90
+    $ docker run -d --volumes-from dbdata --name db2 training/postgres
91
+
92
+You can use multiple `--volumes-from` parameters to bring together multiple data
93
+volumes from multiple containers.
94
+
95
+You can also extend the chain by mounting the volume that came from the
96
+`dbdata` container in yet another container via the `db1` or `db2` containers.
97
+
98
+    $ docker run -d --name db3 --volumes-from db1 training/postgres
99
+
100
+If you remove containers that mount volumes, including the initial `dbdata`
101
+container, or the subsequent containers `db1` and `db2`, the volumes will not
102
+be deleted until there are no containers still referencing those volumes. This
103
+allows you to upgrade, or effectively migrate, data volumes between containers.
104
+
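This lifecycle rule behaves like reference counting: each container using the volume holds a reference, and the data is only removed once no container references it. A toy Python sketch of that idea (not Docker's actual implementation; the `Volume` class here is hypothetical):

```python
class Volume:
    """Toy model of a data volume that survives until unreferenced."""

    def __init__(self, name):
        self.name = name
        self.refs = 0
        self.deleted = False

    def attach(self):
        self.refs += 1

    def detach(self):
        self.refs -= 1
        if self.refs == 0:
            self.deleted = True  # last referencing container removed

vol = Volume("/dbdata")
for _ in range(3):      # dbdata, db1, and db2 all reference the volume
    vol.attach()
vol.detach()            # remove dbdata
vol.detach()            # remove db1
assert not vol.deleted  # db2 still references it
vol.detach()            # remove db2
assert vol.deleted      # now the volume is gone
```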
105
+## Backup, restore, or migrate data volumes
106
+
107
+Another useful function we can perform with volumes is to use them for
108
+backups, restores, or migrations. We do this by using the
109
+`--volumes-from` flag to create a new container that mounts that volume,
110
+like so:
111
+
112
+    $ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
113
+
114
+Here we've launched a new container and mounted the volume from the
115
+`dbdata` container. We've then mounted a local host directory as
116
+`/backup`. Finally, we've passed a command that uses `tar` to backup the
117
+contents of the `dbdata` volume to a `backup.tar` file inside our
118
+`/backup` directory. When the command completes and the container stops,
119
+we'll be left with a backup of our `dbdata` volume.
120
+
121
+You could then restore it to the same container, or to another one that you've
122
+made elsewhere. Create a new container.
123
+
124
+    $ sudo docker run -v /dbdata --name dbdata2 ubuntu
125
+
126
+Then un-tar the backup file in the new container's data volume.
127
+
128
+    $ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
129
+
130
+You can use the techniques above to automate backup, migration and
131
+restore testing using your preferred tools.
132
+
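The same `tar` round-trip can be sketched without Docker at all; here it is on plain local directories (the `/tmp` paths are hypothetical stand-ins for the container volumes):

```shell
# Stand-ins for the dbdata volume, the backup mount, and the new volume.
mkdir -p /tmp/dbdata /tmp/backup /tmp/dbdata2
echo "important data" > /tmp/dbdata/records.txt

# Backup: archive the data directory, as the tar container does.
tar cf /tmp/backup/backup.tar -C /tmp dbdata

# Restore: un-tar into the fresh location, as the dbdata2 container would.
tar xf /tmp/backup/backup.tar -C /tmp/dbdata2

cat /tmp/dbdata2/dbdata/records.txt   # important data
```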
133
+# Next steps
134
+
135
+Now that we've learned a bit more about how to use Docker, we're going to see how to
136
+combine Docker with the services available on
137
+[Docker.io](https://index.docker.io) including Automated Builds and private
138
+repositories.
139
+
140
+Go to [Working with Docker.io](/userguide/dockerrepos).
141
+
0 142
new file mode 100644
... ...
@@ -0,0 +1,98 @@
0
+page_title: The Docker User Guide
1
+page_description: The Docker User Guide home page
2
+page_keywords: docker, introduction, documentation, about, technology, docker.io, user, guide, user's, manual, platform, framework, virtualization, home, intro
3
+
4
+# Welcome to the Docker User Guide
5
+
6
+In the [Introduction](/) you got a taste of what Docker is and how it
7
+works. In this guide we're going to take you through the fundamentals of
8
+using Docker and integrating it into your environment.
9
+
10
+We'll teach you how to use Docker to:
11
+
12
+* Dockerize your applications.
13
+* Run your own containers.
14
+* Build Docker images.
15
+* Share your Docker images with others.
16
+* And a whole lot more!
17
+
18
+We've broken this guide into major sections that take you through
19
+the Docker life cycle:
20
+
21
+## Getting Started with Docker.io
22
+
23
+*How do I use Docker.io?*
24
+
25
+Docker.io is the central hub for Docker. It hosts public Docker images
26
+and provides services to help you build and manage your Docker
27
+environment. To learn more:
28
+
29
+Go to [Using Docker.io](/userguide/dockerio).
30
+
31
+## Dockerizing Applications: A "Hello World!"
32
+
33
+*How do I run applications inside containers?*
34
+
35
+Docker offers a *container-based* virtualization platform to power your
36
+applications. To learn how to Dockerize applications and run them:
37
+
38
+Go to [Dockerizing Applications](/userguide/dockerizing).
39
+
40
+## Working with Containers
41
+
42
+*How do I manage my containers?*
43
+
44
+Once you get a grip on running your applications in Docker containers,
45
+we're going to show you how to manage those containers. To find out
46
+about how to inspect, monitor and manage containers:
47
+
48
+Go to [Working With Containers](/userguide/usingdocker).
49
+
50
+## Working with Docker Images
51
+
52
+*How can I access, share and build my own images?*
53
+
54
+Once you've learned how to use Docker, it's time to take the next step and
55
+learn how to build your own application images with Docker.
56
+
57
+Go to [Working with Docker Images](/userguide/dockerimages).
58
+
59
+## Linking Containers Together
60
+
61
+Until now we've seen how to build individual applications inside Docker
62
+containers. Now learn how to build whole application stacks with Docker
63
+by linking together multiple Docker containers.
64
+
65
+Go to [Linking Containers Together](/userguide/dockerlinks).
66
+
67
+## Managing Data in Containers
68
+
69
+Now that we know how to link Docker containers together, the next step is
70
+learning how to manage data, volumes and mounts inside our containers.
71
+
72
+Go to [Managing Data in Containers](/userguide/dockervolumes).
73
+
74
+## Working with Docker.io
75
+
76
+Now that we've learned a bit more about how to use Docker, we're going to see
77
+how to combine Docker with the services available on Docker.io, including
78
+Automated Builds and private repositories.
79
+
80
+Go to [Working with Docker.io](/userguide/dockerrepos).
81
+
82
+## Getting help
83
+
84
+* [Docker homepage](http://www.docker.io/)
85
+* [Docker.io](http://index.docker.io)
86
+* [Docker blog](http://blog.docker.io/)
87
+* [Docker documentation](http://docs.docker.io/)
88
+* [Docker Getting Started Guide](http://www.docker.io/gettingstarted/)
89
+* [Docker code on GitHub](https://github.com/dotcloud/docker)
90
+* [Docker mailing
91
+  list](https://groups.google.com/forum/#!forum/docker-user)
92
+* Docker on IRC: irc.freenode.net and channel #docker
93
+* [Docker on Twitter](http://twitter.com/docker)
94
+* Get [Docker help](http://stackoverflow.com/search?q=docker) on
95
+  StackOverflow
96
+* [Docker.com](http://www.docker.com/)
97
+
0 98
new file mode 100644
1 99
Binary files /dev/null and b/docs/sources/userguide/login-web.png differ
2 100
new file mode 100644
3 101
Binary files /dev/null and b/docs/sources/userguide/register-confirm.png differ
4 102
new file mode 100644
5 103
Binary files /dev/null and b/docs/sources/userguide/register-web.png differ
6 104
new file mode 100644
7 105
Binary files /dev/null and b/docs/sources/userguide/search.png differ
8 106
new file mode 100644
... ...
@@ -0,0 +1,316 @@
0
+page_title: Working with Containers
1
+page_description: Learn how to manage and operate Docker containers.
2
+page_keywords: docker, the docker guide, documentation, docker.io, monitoring containers, docker top, docker inspect, docker port, ports, docker logs, log, Logs
3
+
4
+# Working with Containers
5
+
6
+In the [last section of the Docker User Guide](/userguide/dockerizing)
7
+we launched our first containers. We launched two containers using the
8
+`docker run` command.
9
+
10
+* One container we ran interactively in the foreground.
11
+* One container we ran daemonized in the background.
12
+
13
+In the process we learned about several Docker commands:
14
+
15
+* `docker ps` - Lists containers.
16
+* `docker logs` - Shows us the standard output of a container.
17
+* `docker stop` - Stops running containers.
18
+
19
+> **Tip:**
20
+> Another way to learn about `docker` commands is our
21
+> [interactive tutorial](https://www.docker.io/gettingstarted).
22
+
23
+The `docker` client is pretty simple. Each action you can take
24
+with Docker is a command and each command can take a series of
25
+flags and arguments.
26
+
27
+    # Usage:  [sudo] docker [flags] [command] [arguments] ..
28
+    # Example:
29
+    $ docker run -i -t ubuntu /bin/bash
30
+
31
+Let's see this in action by using the `docker version` command to return
32
+version information on the currently installed Docker client and daemon.
33
+
34
+    $ sudo docker version
35
+
36
+This command will not only provide you the version of Docker client and
37
+daemon you are using, but also the version of Go (the programming
38
+language powering Docker).
39
+
40
+    Client version: 0.8.0
41
+    Go version (client): go1.2
42
+
43
+    Git commit (client): cc3a8c8
44
+    Server version: 0.8.0
45
+
46
+    Git commit (server): cc3a8c8
47
+    Go version (server): go1.2
48
+
49
+    Last stable version: 0.8.0
50
+
51
+### Seeing what the Docker client can do
52
+
53
+We can see all of the commands available to us with the Docker client by
54
+running the `docker` binary without any options.
55
+
56
+    $ sudo docker
57
+
58
+You will see a list of all currently available commands.
59
+
60
+    Commands:
61
+         attach    Attach to a running container
62
+         build     Build an image from a Dockerfile
63
+         commit    Create a new image from a container's changes
64
+    . . .
65
+
66
+### Seeing Docker command usage
67
+
68
+You can also zoom in and review the usage for specific Docker commands.
69
+
70
+Try typing `docker` followed by a `[command]` to see the usage for that
71
+command:
72
+
73
+    $ sudo docker attach
74
+    Help output . . .
75
+
76
+Or you can pass the `--help` flag to the `docker` binary.
77
+
78
+    $ sudo docker attach --help
79
+
80
+This will display the help text and all available flags:
81
+
82
+    Usage: docker attach [OPTIONS] CONTAINER
83
+
84
+    Attach to a running container
85
+
86
+      --no-stdin=false: Do not attach stdin
87
+      --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
88
+
89
+
90
+None of the containers we've run did anything particularly useful
91
+though. So let's build on that experience by running an example web
92
+application in Docker.
93
+
94
+> **Note:** 
95
+> You can see a full list of Docker's commands
96
+> [here](/reference/commandline/cli/).
97
+
98
+## Running a Web Application in Docker
99
+
100
+Now that we've learned a bit more about the `docker` client, let's move on to
101
+the important stuff: running more containers. As we noted, none of the
102
+containers we've run so far did anything particularly useful, so let's
103
+build on that experience by running an example web application in
104
+Docker.
105
+
106
+For our web application we're going to run a Python Flask application.
107
+Let's start with a `docker run` command.
108
+
109
+    $ sudo docker run -d -P training/webapp python app.py
110
+
111
+Let's review what our command did. We've specified two flags: `-d` and
112
+`-P`. We've already seen the `-d` flag which tells Docker to run the
113
+container in the background. The `-P` flag is new and tells Docker to
114
+map any required network ports inside our container to our host. This
115
+lets us view our web application.
116
+
117
+We've specified an image: `training/webapp`. This image is a
118
+pre-built image we've created that contains a simple Python Flask web
119
+application.
120
+
121
+Lastly, we've specified a command for our container to run: `python
122
+app.py`. This launches our web application.
123
+
124
+> **Note:** 
125
+> You can see more detail on the `docker run` command in the [command
126
+> reference](/reference/commandline/cli/#run) and the [Docker Run
127
+> Reference](/reference/run/).
128
+
129
+## Viewing our Web Application Container
130
+
131
+Now let's see our running container using the `docker ps` command.
132
+
133
+    $ sudo docker ps -l
134
+    CONTAINER ID  IMAGE                   COMMAND       CREATED        STATUS        PORTS                    NAMES
135
+    bc533791f3f5  training/webapp:latest  python app.py 5 seconds ago  Up 2 seconds  0.0.0.0:49155->5000/tcp  nostalgic_morse
136
+
137
+You can see we've specified a new flag, `-l`, for the `docker ps`
138
+command. This tells the `docker ps` command to return the details of the
139
+*last* container started.
140
+
141
+> **Note:** 
142
+> The `docker ps` command only shows running containers. If you want to
143
+> see stopped containers too use the `-a` flag.
144
+
145
+We can see the same details we saw [when we first Dockerized a
146
+container](/userguide/dockerizing) with one important addition in the `PORTS`
147
+column.
148
+
149
+    PORTS
150
+    0.0.0.0:49155->5000/tcp
151
+
152
+When we passed the `-P` flag to the `docker run` command Docker mapped any
153
+ports exposed in our image to our host.
154
+
155
+> **Note:** 
156
+> We'll learn more about how to expose ports in Docker images when
157
+> [we learn how to build images](/userguide/dockerimages).
158
+
159
+In this case Docker has exposed port 5000 (the default Python Flask
160
+port) on port 49155.
161
+
162
+Network port bindings are very configurable in Docker. In our last
163
+example the `-P` flag is a shortcut for `-p 5000` that maps port 5000
164
+inside the container to a high port (from the range 49000 to 49900) on
165
+the local Docker host. We can also bind a container's ports to specific
166
+ports using the `-p` flag, for example:
167
+
168
+    $ sudo docker run -d -p 5000:5000 training/webapp python app.py
169
+
170
+This would map port 5000 inside our container to port 5000 on our local
171
+host. You might be wondering: why wouldn't we just want to always
172
+use 1:1 port mappings in Docker containers rather than mapping to high
173
+ports? Well, 1:1 mappings have the constraint of only being able to map
174
+one of each port on your local host. Let's say you want to test two
175
+Python applications: both bound to port 5000 inside their containers.
176
+Without Docker's port mapping you could only access one at a time.
177
+
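Picking a free high port from a range can be sketched in a few lines. This is just an illustration of the idea (not Docker's allocator), using Python's `socket` module and the same 49000-49900 range:

```python
import socket

def find_free_port(start=49000, end=49900):
    """Return the first TCP port in [start, end] we can bind on localhost."""
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind(("127.0.0.1", port))
            except OSError:
                continue  # port in use, try the next one
            return port
    raise RuntimeError("no free port in range")

port = find_free_port()
print(port)  # a free port in the range, e.g. 49000 on an idle machine
```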
178
+So let's now browse to port 49155 in a web browser to
179
+see the application.
180
+
181
+![Viewing the web application](/userguide/webapp1.png)
182
+
183
+Our Python application is live!
184
+
185
+## A Network Port Shortcut
186
+
187
+Using the `docker ps` command to return the mapped port is a bit clumsy so
188
+Docker has a useful shortcut we can use: `docker port`. To use `docker port` we
189
+specify the ID or name of our container and then the port for which we need the
190
+corresponding public-facing port.
191
+
192
+    $ sudo docker port nostalgic_morse 5000
193
+    0.0.0.0:49155
194
+
195
+In this case we've looked up what port is mapped externally to port 5000 inside
196
+the container.
197
+
198
+## Viewing the Web Application's Logs
199
+
200
+Let's also find out a bit more about what's happening with our application and
201
+use another of the commands we've learnt, `docker logs`.
202
+
203
+    $ sudo docker logs -f nostalgic_morse
204
+    * Running on http://0.0.0.0:5000/
205
+    10.0.2.2 - - [23/May/2014 20:16:31] "GET / HTTP/1.1" 200 -
206
+    10.0.2.2 - - [23/May/2014 20:16:31] "GET /favicon.ico HTTP/1.1" 404 -
207
+
208
+This time though we've added a new flag, `-f`. This causes the `docker
209
+logs` command to act like the `tail -f` command and watch the
210
+container's standard out. We can see here the logs from Flask showing
211
+the application running on port 5000 and the access log entries for it.
212
+
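The follow behavior is the classic `tail -f` loop: read to end-of-file, then poll for new data. A minimal, bounded Python sketch of that loop (illustrative only; the real implementation differs, and the log path is hypothetical):

```python
import time

def follow(path, max_idle_checks=3, poll_interval=0.05):
    """Yield new lines appended to path, giving up after a few idle polls."""
    idle = 0
    with open(path) as f:
        while idle < max_idle_checks:
            line = f.readline()
            if line:
                idle = 0
                yield line.rstrip("\n")
            else:
                idle += 1
                time.sleep(poll_interval)

# Demo: write a "log" line, then follow the file.
with open("/tmp/app.log", "w") as f:
    f.write("* Running on http://0.0.0.0:5000/\n")

lines = list(follow("/tmp/app.log"))
print(lines[0])  # * Running on http://0.0.0.0:5000/
```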
213
+## Looking at our Web Application Container's processes
214
+
215
+In addition to the container's logs we can also examine the processes
216
+running inside it using the `docker top` command.
217
+
218
+    $ sudo docker top nostalgic_morse
219
+    PID                 USER                COMMAND
220
+    854                 root                python app.py
221
+
222
+Here we can see our `python app.py` command is the only process running inside
223
+the container.
224
+
225
+## Inspecting our Web Application Container
226
+
227
+Lastly, we can take a low-level dive into our Docker container using the
228
+`docker inspect` command. It returns a JSON hash of useful configuration
229
+and status information about Docker containers.
230
+
231
+    $ docker inspect nostalgic_morse
232
+
233
+Let's see a sample of that JSON output.
234
+
235
+    [{
236
+        "ID": "bc533791f3f500b280a9626688bc79e342e3ea0d528efe3a86a51ecb28ea20",
237
+        "Created": "2014-05-26T05:52:40.808952951Z",
238
+        "Path": "python",
239
+        "Args": [
240
+           "app.py"
241
+        ],
242
+        "Config": {
243
+           "Hostname": "bc533791f3f5",
244
+           "Domainname": "",
245
+           "User": "",
246
+    . . .
247
+
248
+We can also narrow down the information we want to return by requesting a
249
+specific element. For example, to return the container's IP address we would run:
250
+
251
+    $ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' nostalgic_morse
252
+    172.17.0.5
253
+
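That `-f` flag is applying a Go template to the JSON shown earlier. A minimal sketch of doing the same extraction by hand, using a trimmed-down sample of the output (field values are illustrative):

```python
import json

# Trimmed-down sample of `docker inspect` output (values are illustrative).
inspect_output = """
[{
    "ID": "bc533791f3f500b280a9626688bc79e342e3ea0d528efe3a86a51ecb28ea20",
    "Path": "python",
    "Args": ["app.py"],
    "NetworkSettings": {"IPAddress": "172.17.0.5"}
}]
"""

containers = json.loads(inspect_output)  # inspect returns a JSON array
ip = containers[0]["NetworkSettings"]["IPAddress"]
print(ip)  # 172.17.0.5
```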
254
+## Stopping our Web Application Container
255
+
256
+Okay, we've seen our web application working. Now let's stop it using the
257
+`docker stop` command and the name of our container: `nostalgic_morse`.
258
+
259
+    $ sudo docker stop nostalgic_morse
260
+    nostalgic_morse
261
+
262
+We can now use the `docker ps` command to check if the container has
263
+been stopped.
264
+
265
+    $ sudo docker ps -l
266
+
267
+## Restarting our Web Application Container
268
+
269
+Oops! Just after you stopped the container you get a call to say another
270
+developer needs the container back. From here you have two choices: you
271
+can create a new container or restart the old one. Let's look at
272
+starting our previous container back up.
273
+
274
+    $ sudo docker start nostalgic_morse
275
+    nostalgic_morse
276
+
277
+Now quickly run `docker ps -l` again to see that the running container is
278
+back up or browse to the container's URL to see if the application
279
+responds.
280
+
281
+> **Note:** 
282
+> Also available is the `docker restart` command that runs a stop and
283
+> then start on the container.
284
+
285
+## Removing our Web Application Container
286
+
287
+Your colleague has let you know that they've now finished with the container
288
+and won't need it again. So let's remove it using the `docker rm` command.
289
+
290
+    $ sudo docker rm nostalgic_morse
291
+    Error: Impossible to remove a running container, please stop it first or use -f
292
+    2014/05/24 08:12:56 Error: failed to remove one or more containers
293
+
294
+What's happened? We can't actually remove a running container. This protects
295
+you from accidentally removing a running container you might need. Let's try
296
+this again by stopping the container first.
297
+
298
+    $ sudo docker stop nostalgic_morse
299
+    nostalgic_morse
300
+    $ sudo docker rm nostalgic_morse
301
+    nostalgic_morse
302
+
303
+And now our container is stopped and deleted.
304
+
305
+> **Note:**
306
+> Always remember that deleting a container is final!
307
+
308
+# Next steps
309
+
310
+Until now we've only used images that we've downloaded from
311
+[Docker.io](https://index.docker.io). Now let's get introduced to
312
+building and sharing our own images.
313
+
314
+Go to [Working with Docker Images](/userguide/dockerimages).
315
+
0 316
new file mode 100644
1 317
Binary files /dev/null and b/docs/sources/userguide/webapp1.png differ